Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

Page 452/537 | < Previous Page | 448 449 450 451 452 453 454 455 456 457 458 459  | Next Page >

  • Speedstep and Intel x5570?

    - by sajal
    Hi, my new server has 2 x X5570 CPUs. Here is the output of grep -i hz /proc/cpuinfo: all 16 logical CPUs report model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz and cpu MHz : 1600.231. It always stays the same, no matter how much load MySQL or any other app puts on the box. Even when MySQL keeps 2 or 3 CPUs at 100% each, the output of cpuinfo does not change. In fact, MySQL performance for some heavy inserts is poorer than on my old E5430 server. Any clues? I contacted the server provider; they tried turning off SpeedStep and we still see the same results. Any insights would be helpful, because I am paying heavily for this box and would love to milk all the juice I can.
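
    A quick way to see whether kernel-level frequency scaling (rather than only BIOS SpeedStep) is holding the clocks at 1.6 GHz is to look at the cpufreq entries in sysfs. This is only a sketch, assuming a Linux box with cpufreq support and root access:

      # Current governor and frequency for each logical CPU
      for c in /sys/devices/system/cpu/cpu[0-9]*; do
          echo "$c: $(cat $c/cpufreq/scaling_governor 2>/dev/null) @ $(cat $c/cpufreq/scaling_cur_freq 2>/dev/null) kHz"
      done

      # Temporarily force the 'performance' governor to rule scaling out as the cause
      for g in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
          echo performance > "$g"
      done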

    Read the article

  • Windows 7 on a 64-bit computer

    - by GetFree
    I read on Wikipedia that Windows 7 on a 64-bit PC needs twice as much RAM as on a 32-bit PC. I understand why that is: every number stored in memory takes 8 bytes rather than just 4. That, in simple terms, means that your amount of RAM is effectively halved when you use Windows 7 on a 64-bit computer. Now, I have an Intel Core 2 Duo laptop with Windows Vista right now (2 GB of RAM). My question is: since Core 2 is a 64-bit architecture, if I upgrade to Windows 7 will my laptop behave as if it had just 1 GB of RAM? Or, to put it another way: on a 64-bit PC with Windows 7, do you need twice as much RAM as on a 32-bit PC to get the same performance? If I am right, then I'd say it's a terrible deal to have a 64-bit computer and Windows 7 on it (I hope I am mistaken, though). Follow-up: After some answers, I'm realizing it's not the same thing to have a 32-bit OS on a 64-bit PC as to have a 64-bit OS on a 64-bit PC. Apparently, the issue of Windows 7 requiring twice as much RAM on 64-bit architectures arises when both the OS and the PC are 64-bit. I'd like new answers to address this. Also, is it possible to have more than 4 GB of RAM on a 64-bit PC using a 32-bit version of Windows?

    Read the article

  • Why does ping flooding a domain name freeze but not a direct IP address?

    - by CYREX
    I am wondering why, when ping flooding a domain name, the flood freezes after a couple of seconds, then continues, and this freeze/unfreeze cycle repeats until I stop the flood. When I do the same using the IP address it never freezes. For example, I ran sudo ping -f IP (it does not freeze), then sudo ping -f DomainName (it freezes after a couple of seconds). Why does ping flooding an IP not freeze, while ping flooding the same host by its domain name does? EDIT - What I mean by freezing: a ping flood prints a dot (.) for each request sent and removes a dot for each echo reply received, so the display looks something like this: .......... <-- ten requests currently outstanding. The freeze happens when this display stops changing: the dots stay frozen, as if no packets were being sent or received. By PING FLOOD I do not mean flooding a site maliciously; I mean it as a test of how quickly ping requests are sent and answered. If you ping flood Google's IP for about 10 seconds you will have sent roughly 1000 packets, but if you do it against the domain name (google.com) you get the freeze I am talking about. IMPORTANT - Do not confuse this with ping-of-death attacks on a site.
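
    One way to make the comparison repeatable is to flood with a fixed packet count against both targets. A rough sketch assuming Linux iputils ping; the IP and hostname are placeholders:

      # Flood the raw IP with exactly 1000 packets and note the summary line
      sudo ping -f -c 1000 192.0.2.1

      # Same flood against a domain name; the name is resolved once before the flood starts
      sudo ping -f -c 1000 example.com

      # -n keeps the output numeric, which takes name lookups out of the display path
      sudo ping -f -n -c 1000 example.com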

    Read the article

  • Apachebench on node.js server returning "apr_poll: The timeout specified has expired (70007)" after ~30 requests

    - by Scott
    I just started working with node.js, and some experimental load testing with ab returns an error at around 30 requests or so. I've found other pages showing much better concurrency numbers than I'm getting, such as: http://zgadzaj.com/benchmarking-nodejs-basic-performance-tests-against-apache-php Are there some critical server configuration settings that need to be changed to achieve those numbers? I've watched memory in top and still see a decent amount free while running ab, and watched mongostat as well without seeing anything suspicious. The command I'm running, and the error, is: ab -k -n 100 -c 10 postrockandbeyond.com/ This is ApacheBench, Version 2.0.41-dev <$Revision: 1.121.2.12 $> apache-2.0 Copyright (c) 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Copyright (c) 2006 The Apache Software Foundation, http://www.apache.org/ Benchmarking postrockandbeyond.com (be patient)...apr_poll: The timeout specified has expired (70007) Total of 32 requests completed Does anyone have any suggestions on what I should look into that may be causing this? I'm running it on OS X Lion, but have also run the same command on the server with the same results. EDIT: I eventually solved this issue. I was using TTAPI, which connects to turntable.fm through websockets, and on the homepage I was connecting on every request. After a certain number of connections, everything would fall apart. If you're running into the same issue, check whether you are hitting external services on each request.
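
    To confirm whether every request opens a fresh connection to an outside service (the cause found in the edit above), one option is to watch established sockets while ab runs. A rough sketch assuming netstat and watch are available:

      # In one terminal: count established connections once a second
      watch -n 1 'netstat -an | grep ESTABLISHED | wc -l'

      # In another terminal: run the benchmark against the same URL
      ab -k -n 100 -c 10 http://postrockandbeyond.com/

    If the count climbs roughly in step with the request count, each request is opening its own outbound connection rather than reusing one.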

    Read the article

  • Windows Start Menu Not Staying on Top

    - by Jeff Rapp
    Hey everyone. I've had this problem since Windows Vista. I did a clean install with Windows 7 and hoped it would fix the problem. Also swapped out the video card just to rule out a strange driver issue. Here's what's happening. After running for some period of time (usually a few hours), the Start button/orb will loose it's "Chrome" and turn into a plain button that just says "Start." It will work fine for a while, but then the start menu will just stop showing. Additionally, when I hit Win+D to show the desktop, the entire taskbar completely disappears. I can get it back usually by moving/minimizing windows that may be overlapping where the start menu should show. Otherwise, it requires either a full reboot or I'll end up killing & restarting the explorer.exe process. I realize that this is a strange issue - I took a video of it http://www.youtube.com/watch?v=0B3WwT0uyr4 Thanks! --Edit-- Here's my HijackThis log: Logfile of Trend Micro HijackThis v2.0.3 (BETA) Scan saved at 4:19:00 PM, on 12/16/2009 Platform: Unknown Windows (WinNT 6.01.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16385) Boot mode: Normal Running processes: C:\Program Files (x86)\Pantone\hueyPRO\hueyPROTray.exe C:\Program Files (x86)\Adobe\Acrobat 9.0\Acrobat\acrotray.exe C:\Program Files (x86)\iTunes\iTunesHelper.exe C:\Program Files (x86)\Java\jre6\bin\jusched.exe C:\Program Files (x86)\MagicDisc\MagicDisc.exe C:\Program Files (x86)\Trillian\trillian.exe C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Ssms.exe C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe C:\Program Files (x86)\Common Files\Microsoft Shared\DevServer\9.0\WebDev.WebServer.EXE C:\Program Files (x86)\Notepad++\notepad++.exe C:\Program Files (x86)\Fiddler2\Fiddler.exe C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\mspdbsrv.exe C:\Program Files (x86)\iTunes\iTunes.exe C:\Program Files (x86)\Adobe\Adobe Illustrator CS4\Support Files\Contents\Windows\Illustrator.exe C:\Program Files (x86)\ColorPic 4.1\ColorPic.exe C:\Program Files (x86)\Adobe\Acrobat 9.0\Acrobat\Acrobat.exe C:\Program Files (x86)\Common Files\Microsoft Shared\Help 9\dexplore.exe C:\Program Files (x86)\Common Files\Microsoft Shared\Help 9\dexplore.exe C:\Program Files (x86)\Internet Explorer\IEXPLORE.EXE C:\Program Files (x86)\Internet Explorer\IEXPLORE.EXE C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe C:\Program Files (x86)\eBay\Blackthorne\bin\BT.exe C:\Program Files (x86)\Internet Explorer\IEXPLORE.EXE C:\Program Files (x86)\CamStudio\Recorder.exe C:\Program Files (x86)\CamStudio\Playplus.exe C:\Program Files (x86)\Mozilla Firefox 3.6 Beta 3\firefox.exe C:\Program Files (x86)\CamStudio\Playplus.exe C:\Program Files (x86)\PuTTY\putty.exe C:\Program Files (x86)\CamStudio\Playplus.exe C:\Program Files (x86)\CamStudio\Playplus.exe C:\Program Files (x86)\TrendMicro\HiJackThis\HiJackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet 
Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\Windows\SysWOW64\blank.htm R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = F2 - REG:system.ini: UserInit=userinit.exe O2 - BHO: ContributeBHO Class - {074C1DC5-9320-4A9A-947D-C042949C6216} - C:\Program Files (x86)\Adobe\/Adobe Contribute CS4/contributeieplugin.dll O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: Groove GFS Browser Helper - {72853161-30C5-4D22-B7F9-0BBC1D38A37E} - C:\PROGRA~2\MICROS~1\Office14\GROOVEEX.DLL O2 - BHO: Windows Live Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Adobe PDF Conversion Toolbar Helper - {AE7CD045-E861-484f-8273-0445EE161910} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll O2 - BHO: URLRedirectionBHO - {B4F3A835-0E21-4959-BA22-42B3008E02FF} - C:\PROGRA~2\MICROS~1\Office14\URLREDIR.DLL O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll O2 - BHO: SmartSelect - {F4971EE7-DAA0-4053-9964-665D8EE6A077} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll O3 - Toolbar: Adobe PDF - {47833539-D0C5-4125-9FA8-0819E2EAAC93} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll O3 - Toolbar: Contribute Toolbar - {517BDDE4-E3A7-4570-B21E-2B52B6139FC7} - C:\Program Files (x86)\Adobe\/Adobe Contribute CS4/contributeieplugin.dll O4 - HKLM\..\Run: [AdobeCS4ServiceManager] "C:\Program Files (x86)\Common Files\Adobe\CS4ServiceManager\CS4ServiceManager.exe" -launchedbylogin O4 - HKLM\..\Run: [Adobe Acrobat Speed Launcher] "C:\Program Files (x86)\Adobe\Acrobat 9.0\Acrobat\Acrobat_sl.exe" O4 - HKLM\..\Run: [Acrobat Assistant 8.0] "C:\Program Files (x86)\Adobe\Acrobat 9.0\Acrobat\Acrotray.exe" O4 - HKLM\..\Run: [Adobe_ID0ENQBO] C:\PROGRA~2\COMMON~1\Adobe\ADOBEV~1\Server\bin\VERSIO~2.EXE O4 - HKLM\..\Run: [QuickTime Task] "C:\Program Files (x86)\QuickTime\QTTask.exe" -atboottime O4 - HKLM\..\Run: [iTunesHelper] "C:\Program Files (x86)\iTunes\iTunesHelper.exe" O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files (x86)\Java\jre6\bin\jusched.exe" O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE') O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE') O4 - Startup: ChatNowDesktop.appref-ms O4 - Startup: MagicDisc.lnk = C:\Program Files (x86)\MagicDisc\MagicDisc.exe O4 - Startup: Trillian.lnk = C:\Program Files (x86)\Trillian\trillian.exe O4 - Global Startup: Digsby.lnk = C:\Program Files (x86)\Digsby\digsby.exe O4 - Global Startup: hueyPROTray.lnk = 
C:\Program Files (x86)\Pantone\hueyPRO\hueyPROTray.exe O4 - Global Startup: OfficeSAS.lnk = ? O8 - Extra context menu item: Append Link Target to Existing PDF - res://C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll/AcroIEAppendSelLinks.html O8 - Extra context menu item: Append to Existing PDF - res://C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll/AcroIEAppend.html O8 - Extra context menu item: Convert Link Target to Adobe PDF - res://C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll/AcroIECaptureSelLinks.html O8 - Extra context menu item: Convert to Adobe PDF - res://C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEFavClient.dll/AcroIECapture.html O8 - Extra context menu item: E&xport to Microsoft Excel - res://C:\PROGRA~1\MICROS~1\Office14\EXCEL.EXE/3000 O8 - Extra context menu item: S&end to OneNote - res://C:\PROGRA~1\MICROS~1\Office14\ONBttnIE.dll/105 O9 - Extra button: Send to OneNote - {2670000A-7350-4f3c-8081-5663EE0C6C49} - C:\Program Files (x86)\Microsoft Office\Office14\ONBttnIE.dll O9 - Extra 'Tools' menuitem: Se&nd to OneNote - {2670000A-7350-4f3c-8081-5663EE0C6C49} - C:\Program Files (x86)\Microsoft Office\Office14\ONBttnIE.dll O9 - Extra button: OneNote Lin&ked Notes - {789FE86F-6FC4-46A1-9849-EDE0DB0C95CA} - C:\Program Files (x86)\Microsoft Office\Office14\ONBttnIELinkedNotes.dll O9 - Extra 'Tools' menuitem: OneNote Lin&ked Notes - {789FE86F-6FC4-46A1-9849-EDE0DB0C95CA} - C:\Program Files (x86)\Microsoft Office\Office14\ONBttnIELinkedNotes.dll O9 - Extra button: Fiddler2 - {CF819DA3-9882-4944-ADF5-6EF17ECF3C6E} - "C:\Program Files (x86)\Fiddler2\Fiddler.exe" (file missing) O9 - Extra 'Tools' menuitem: Fiddler2 - {CF819DA3-9882-4944-ADF5-6EF17ECF3C6E} - "C:\Program Files (x86)\Fiddler2\Fiddler.exe" (file missing) O13 - Gopher Prefix: O16 - DPF: {5554DCB0-700B-498D-9B58-4E40E5814405} (RSClientPrint 2008 Class) - http://reportserver/Reports/Reserved.ReportViewerWebControl.axd?ReportSession=oxadkhfvfvt1hzf2eh3y1ay2&ControlID=b89e27f15e734f3faee1308eebdfab2a&Culture=1033&UICulture=9&ReportStack=1&OpType=PrintCab&Arch=X86 O16 - DPF: {82774781-8F4E-11D1-AB1C-0000F8773BF0} (DLC Class) - https://transfers.ds.microsoft.com/FTM/TransferSource/grTransferCtrl.cab O16 - DPF: {D27CDB6E-AE6D-11CF-96B8-444553540000} (Shockwave Flash Object) - http://fpdownload2.macromedia.com/get/shockwave/cabs/flash/swflash.cab O17 - HKLM\System\CCS\Services\Tcpip\Parameters: Domain = LapkoSoft.local O17 - HKLM\System\CCS\Services\Tcpip\..\{5992B87A-643B-4385-A914-249B98BF7129}: NameServer = 192.168.1.10 O17 - HKLM\System\CS1\Services\Tcpip\Parameters: Domain = LapkoSoft.local O17 - HKLM\System\CS2\Services\Tcpip\Parameters: Domain = LapkoSoft.local O18 - Filter hijack: text/xml - {807573E5-5146-11D5-A672-00B0D022E945} - C:\Program Files (x86)\Common Files\Microsoft Shared\OFFICE14\MSOXMLMF.DLL O23 - Service: Adobe Version Cue CS4 - Adobe Systems Incorporated - C:\Program Files (x86)\Common Files\Adobe\Adobe Version Cue CS4\Server\bin\VersionCueCS4.exe O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing) O23 - Service: Apple Mobile Device - Apple Inc. - C:\Program Files (x86)\Common Files\Apple\Mobile Device Support\bin\AppleMobileDeviceService.exe O23 - Service: ASP.NET State Service (aspnet_state) - Unknown owner - C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_state.exe (file missing) O23 - Service: Bonjour Service - Apple Inc. 
- C:\Program Files (x86)\Bonjour\mDNSResponder.exe O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing) O23 - Service: FLEXnet Licensing Service - Acresso Software Inc. - C:\Program Files (x86)\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService.exe O23 - Service: FLEXnet Licensing Service 64 - Acresso Software Inc. - C:\Program Files\Common Files\Macrovision Shared\FLEXnet Publisher\FNPLicensingService64.exe O23 - Service: @%windir%\system32\inetsrv\iisres.dll,-30007 (IISADMIN) - Unknown owner - C:\Windows\system32\inetsrv\inetinfo.exe (file missing) O23 - Service: iPod Service - Apple Inc. - C:\Program Files\iPod\bin\iPodService.exe O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing) O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: NVIDIA Performance Driver Service - Unknown owner - C:\Program Files\NVIDIA Corporation\Performance Drivers\nvPDsvc.exe O23 - Service: NVIDIA Display Driver Service (nvsvc) - Unknown owner - C:\Windows\system32\nvvsvc.exe (file missing) O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing) O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing) O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing) O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing) O23 - Service: TeamViewer 5 (TeamViewer5) - TeamViewer GmbH - C:\Program Files (x86)\TeamViewer\Version5\TeamViewer_Service.exe O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing) O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing) O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing) O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing) O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing) O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing)

    Read the article

  • Dell Perc 6i with FreeBSD 8.1 errors with mfi0: COMMAND xxxxxxxx TIMEOUT AFTER xxx SECONDS

    - by jDempster
    We've recently bought two Dell PowerEdge R710 servers with Perc 6i controllers and 6x 135GB SAS drives. We'd done some pretty extensive testing on a Dell PowerEdge R510 server with a Perc 6i and 4x 135GB SAS drives running FreeBSD 8.1, for its wonderful ZFS support and mfiutil. We hadn't had any problems with the R510 and had got to a point where we were happy with the performance of ZFS. Since running FreeBSD 8.1 on the R710 we've been getting errors from the RAID controller: mfi0: COMMAND 0xffffff80005d1770 TIMEOUT AFTER 6178 SECONDS This usually brings the system to a standstill, but it doesn't always happen, and the system performs very well up until it does. We've been running the disks as 3 mirrored pairs striped in ZFS. So far we've noticed that running the drives as RAID10 on the RAID controller seems to work without errors (still testing). At first I suspected a hardware fault, as we'd been running FreeBSD on the R510 with the same controller without any issues, but both R710s have the same issue. All controllers are running the same firmware.
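
    When the mfi0 timeouts start, it may be worth pulling what the controller itself reports. A sketch using FreeBSD's mfiutil (mentioned above); treat it as a starting point for diagnosis rather than a fix:

      # Adapter and firmware details, to compare across the two R710s and the R510
      mfiutil show adapter
      mfiutil show firmware

      # Controller event log around the time of the COMMAND ... TIMEOUT messages
      mfiutil show events

      # Volume and physical drive state as the controller sees it
      mfiutil show volumes
      mfiutil show drives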

    Read the article

  • DNS slow after losing DNS Server

    - by Tim
    We have set up a small Windows Server 2008 R2 network with a domain controller which is also acting as the DNS server for the network (we opted to install DNS when setting up the domain). This network isn't connected to the Internet in any way, so all machines have been configured to use the IP address of the domain controller as their primary DNS and no secondary DNS server has been configured. If we shut down or unplug the network cable from the domain controller, DNS lookups become quite slow and the performance of the network suffers. For example, running a ping command using a hostname takes around 5-6 seconds to resolve the name. I presume this is because it is looking for the DNS, then falling back to some other method of resolving the names as the DNS server is now gone. All the machines have static IP addresses so we are considering just putting all entries in the HOSTS file of each machine. However, it would be nice to have a centralised DNS in case we one day change the IP of one of the machines. Is there a better way to speed this up?
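
    If you do fall back to HOSTS entries, the format is one address and name per line; a minimal example with made-up addresses and hostnames (on the clients the file lives at C:\Windows\System32\drivers\etc\hosts):

      # Static fallback entries so name resolution keeps working if the DC/DNS server is down
      192.168.1.10    dc01.example.local         dc01
      192.168.1.21    fileserver.example.local   fileserver
      192.168.1.22    workstation1.example.local workstation1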

    Read the article

  • Use RAID for desktop computer?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent with computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering whether I should use RAID, and if so, which RAID level. I plan to purchase 2x 1TB drives. Currently I'm leaning toward RAID 1 for the redundancy -- I've heard newer super-capacity drives fail more often than one would think, and I don't want to have a problem and lose all my data. What do you think? My mobo supports RAID 0/1/5/10. Is it worth using RAID at all, or should I just use a backup service like Mozy? Should I consider RAID 0 instead, for the performance? I'm kind of going back and forth on this one. Thanks a lot for your help. EDIT: I'd like to avoid splitting the OS and data onto different drives, because that gets frustrating when programs like to store a lot of data in their Program Files folder. I've lived with that situation before and it gets annoying after a while.

    Read the article

  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window even at a resolution of 800x600 pixels. The connection speeds are 10 MBit up at home and about 1 MBit down on the laptop, which I think should be sufficient for looking at a GUI in (almost) real time. Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a Qemu SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of. A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without it (to reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo, with the software in question being: workstation: X.Org X Server 1.10.4, OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e, QEMU emulator version 0.15.1 (qemu-kvm-0.15.1); laptop: X.Org X Server 1.12.2, OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
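
    For reference, the compression and cipher settings described above can be combined on a single ssh invocation; a sketch assuming the OpenSSH 5.8 builds listed, with the host name as a placeholder:

      # -X forwards X11, -C enables compression, -c selects the cheap blowfish cipher
      ssh -X -C -c blowfish-cbc \
          -o CompressionLevel=9 \
          -o ServerAliveInterval=30 \
          user@workstation.example.org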

    Read the article

  • Ubuntu virtual memory caches suck up memory

    - by Tom
    Hey all, I've got an Ubuntu 9.10 64-bit server that seems to use up all available memory. According to my munin graphs, almost all of the memory used up is in the swap cache, cache, and slab cache. (I take this to mean virtual memory caches, am I right in assuming this?) Once memory usage approaches 100%, some (although not all) system services such as SSH become sluggish and unresponsive. After rebooting the system, performance and memory usage become normal for a time. Some interesting tidbits: The system runs Apache 2, MySQL, Munin, and sshd. The memory usage spikes happen at the same time every night (at 10 PM sharp.) There appears to be nothing in the crontab for any of the users, and nothing in /etc/cron.d/* out of the ordinary, let alone something that would occur at 10 PM. My question is, how do I figure out what is causing the memory suckage? I've tried the usual utilities (e.g. ps, top, etc) but I can't seem to find anything unusual. Any ideas? Thanks in advance!
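
    A few commands that may help narrow down what is actually holding the memory and what fires at 10 PM; a sketch for a stock Ubuntu box:

      # Break memory down into applications vs. page cache vs. slab
      free -m
      egrep 'Slab|SReclaimable|Cached|SwapCached' /proc/meminfo

      # Largest slab consumers (dentry/inode caches are common offenders)
      slabtop -o | head -n 20

      # Anything scheduled system-wide, not just in per-user crontabs
      grep -r . /etc/crontab /etc/cron.d/ 2>/dev/null
      ls /etc/cron.daily/ /etc/cron.hourly/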

    Read the article

  • Hardware reserved memory issue

    - by Robert Koritnik
    I've seen lots of folks having problem with hardware reserved memory issue in Windows 7/Server 2008 R2. I have it myself but not as huge as others have. Problem description When you install Windows 7 (or its bigger brother Windows Server 2008 R2) your memory may not be fully utilised. If you look at Task Manager > Performance Tab > Resource Monitor > Memory Tab And scroll to the bottom of the list you will see a graphical representation of your memory. Some of it may be hardware reserved. Previous Windows versions didn't have this problem. System was able to utilise all memory available. Question Is there any solution to lower/remove hardware reserved memory? Sidenote I tried installing 32 and 64 bit versions but to no avail. I also tried both Windows: 7 and Server 2008 R2. But always get the same amount reserved by HW. On previous Windows versions I had more memory available because I'm simultaneously running 2 VMs on host (so three machines all together). And my memory peaks much higher now as it did on older versions.

    Read the article

  • SSL Certificate Request on Server 2003 DC CA: DNS Name not Available

    - by Beuy
    I am trying to submit a request for an SSL certificate on a Domain Controller in order to enable LDAP over SSL, and having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 & http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl Steps taken so far: 1. Create Servername.inf with the following contents:

      ;----------------- request.inf -----------------
      [Version]
      Signature="$Windows NT$"

      [NewRequest]
      Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC
      KeySpec = 1
      KeyLength = 1024
      ; Can be 1024, 2048, 4096, 8192, or 16384.
      ; Larger key sizes are more secure, but have
      ; a greater impact on performance.
      Exportable = TRUE
      MachineKeySet = TRUE
      SMIME = False
      PrivateKeyArchive = FALSE
      UserProtected = FALSE
      UseExistingKeySet = FALSE
      ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
      ProviderType = 12
      RequestType = PKCS10
      KeyUsage = 0xa0

      [EnhancedKeyUsageExtension]
      OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
      ;-----------------------------------------------

    2. Create the certificate request by running: certreq -new Servername.inf Servername.req

    3. Attempt to submit the certificate request to the CA by running: certreq -submit -attrib "CertificateTemplate: DomainController" request.req

    At which point I get the following error: The DNS name is unavailable and cannot be added to the Subject Alternate Name. 0x8009480f (-2146875377)

    Troubleshooting steps I have taken so far: 1. Modified the Domain Controller template to supply the Subject Name in the request, restarted the Certificate Service, included the SAN in the request - same error. 2. Re-installed Certificate Services / IIS and restarted the machine countless times. Any help resolving the issue would be greatly appreciated.

    Read the article

  • Subversion gives Error 500 until authenticating with a web browser

    - by Farseeker
    We used to use Collabnet SVN/Apache combo on a Windows server with LDAP authentication, and whilst the performance wasn't brilliant it used to work perfectly. After switching to a fresh Ubuntu 10 install, and setting up an Apache/SVN/LDAP configuration, we have HTTPS access to our repositories, using Active Directory authentication via LDAP. We're now having a very peculiar issue. Whenever a new user accesses a repository, our SVN clients (we have a few depending on the tool, but for arguments sake, let's stick to Tortoise SVN) report "Error 500 - Unknown Response". To get around this, we have to log into the repo using a web browser and navigate 'backwards' until it works E.G: SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/ - Error 500 (bad) Webbrowse to https://svn.example.local/SVN/MyRepo/MyModule/ - Error 500 (bad) Webbrowse to https://svn.example.local/SVN/MyRepo/ - Error 500 (bad) Webbrowse to https://svn.example.local/SVN/ - Forbidden 403 (correct) Webbrowse to https://svn.example.local/SVN/MyRepo/ - OK 200 (correct) SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/ - Error 500 (bad) Webbrowse to https://svn.example.local/SVN/MyRepo/MyModule/ - OK 200 (correct) SVN Checkout https://svn.example.local/SVN/MyRepo/MyModule/ - OK 200 (correct) It seems to require authentication up the tree, starting from the svnparentpath up through to the module required. Has anyone seen anything like this before? Any ideas on where to start before I ditch it back to Collabnet's SVN server?
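
    For comparison, the relevant Apache block usually looks something like the sketch below (mod_dav_svn plus mod_authnz_ldap assumed); the paths, LDAP URL and bind account are hypothetical, not taken from the setup above:

      <Location /SVN>
          DAV svn
          # Parent directory that holds MyRepo and the other repositories
          SVNParentPath /var/lib/svn
          SVNListParentPath On

          AuthType Basic
          AuthName "Subversion repositories"
          AuthBasicProvider ldap
          # Hypothetical Active Directory settings - replace with the real DC and bind account
          AuthLDAPURL "ldap://dc01.example.local/DC=example,DC=local?sAMAccountName?sub?(objectClass=user)"
          AuthLDAPBindDN "CN=svc_svn,OU=Service Accounts,DC=example,DC=local"
          AuthLDAPBindPassword "secret"
          Require valid-user
      </Location>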

    Read the article

  • Outgrew MongoDB … now what?

    - by samsmith
    We dump debug and transaction logs into MongoDB. We really like MongoDB because of its blazing insert performance, its document-oriented model, and the ability to let the engine drop inserts when needed for performance. But there is one big problem with MongoDB: the index must fit in physical RAM. In practice, this limits us to 80-150 GB of raw data (we currently run on a system with 16 GB RAM). So, for us to have 500 GB or a TB of data, we would need 50 GB or 80 GB of RAM. Yes, I know this is possible. We can add servers and use Mongo sharding. We can buy a special server box that can take 100 or 200 GB of RAM, but this is the tail wagging the dog! We could spend beaucoup $$$ on hardware to run FOSS, when SQL Server Express can handle WAY more data on WAY less hardware than Mongo (SQL Server does not meet our architectural desires, or we would use it!). We are not going to spend huge $ on hardware here, because it is necessary only because of the Mongo architecture, not because of the inherent processing/storage needs. (And sharding? Please! Cost aside, who needs the ongoing complexity of three, five, or more servers to manage a relatively small load?) Bottom line: MongoDB is FOSS, but we've got to spend $$$$$$$ on hardware to run it? We would rather buy commercial SW! I am sure we are not the first to hit this issue, so we ask the community: where do we go next? (We already run Mongo v2.) Thanks!!

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables: iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080 This works, but I wonder: If this has any significant performance impact compared to binding directly to port 80 If I could make a similar setup also work for HTTPS (or if that must run on 443) If there's a way to avoid other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently to other users somehow? ...or if I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to: exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... For privbind: exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ... (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves, that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root of course). However, in both cases I get the following exception, when starting the domain (start-domain): [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#] I haven't found a solution for that yet (after searching the web, it seems, that this isn't so easy?) But maybe, the solution with iptables is good enough - what do you think? Thanks, Chris
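
    A sketch of the iptables-only route, extended to HTTPS and to keeping outsiders off the bare listener ports. Glassfish's default HTTPS listener port 8181 and a public interface of eth0 are assumptions here, and the rules are illustrative rather than a hardened ruleset:

      # Redirect the privileged ports to the unprivileged Glassfish listeners
      iptables -t nat -I PREROUTING -p tcp --dport 80  -j REDIRECT --to-port 8080
      iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8181

      # Drop direct outside connections to the bare listener ports.
      # mangle/PREROUTING runs before the nat redirect, so redirected traffic
      # (which still carries dport 80/443 at that point) is not affected.
      iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 8080 -j DROP
      iptables -t mangle -A PREROUTING -i eth0 -p tcp --dport 8181 -j DROP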

    Read the article

  • How does it hurt to use Linux (Ubuntu) as a guest OS for all my tasks?

    - by sauparna
    I have a machine running Windows, where the disk has two partitions C (50 GB) and D (250GB). I do research in Information Retrieval and need to work with a large corpus (more than 50 GB) and in Linux. So if I want to install Linux on the existing system, keeping the Windows installation intact, will it be fine to run it in a virtual box? (say, QEMU, VMWare, etc.) An alternative is using Wubi. In that case the Linux installation has to be on drive C. Then, if I keep a small Linux installation (say 5GB) on C, and my corpus on D (mounted in Linux), how will it affect the performance of my programs which would be accessing the mounted Windows drive D. Is it feasible to use Linux this way? Which of the above is better if at all they are a way out? Note : Since my post in July 2010, I have been using and have tried several ways of maintaining a disk-image that I can mount in Linux. I had a 100GB qcow2 disk and a 100GB raw disk, both formatted to an EXT3 file system. I was mounting and connecting to the qcow2 disk using qemu-nbd. The problem was that every now and then, the connection to the disk would get lost and the running programs would throw disk I/O errors. The raw disk would mount and work fine as a loop mounted device, but when writing data to it, the mount.ntfs program would hog the CPU and the process would take an enormous amount of time. I was in fact running make on a piece of software located on this raw disk, and after a point of time make was waiting while mount.ntfs would show 100% CPU usage.
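
    For reference, this is roughly how the two disk-image approaches described in the note are wired up on the Linux side; image names and mount points are placeholders:

      # qcow2 image exposed as a block device through the nbd driver
      modprobe nbd max_part=8
      qemu-nbd --connect=/dev/nbd0 corpus.qcow2
      mount /dev/nbd0p1 /mnt/corpus
      # ... work on /mnt/corpus ...
      umount /mnt/corpus
      qemu-nbd --disconnect /dev/nbd0

      # Raw image holding a plain filesystem (no partition table), loop-mounted
      mount -o loop corpus.img /mnt/corpus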

    Read the article

  • Help building maya render node spec

    - by Ak
    Hi there, I'm looking to build 4x Maya render slaves/nodes for a friend of mine when his project gets green-lit. The project involves MentalRay and lots of glass. I'm unsure whether the new i7 9xx or 8xx chips with hyper-threading will do any better than a Core 2 Quad of the same (or close enough) speed. Does hyper-threading make a difference to Maya, or is it more about per-core performance? I'm sure he'd prefer I build another render node rather than pay for a bleeding-edge CPU that only adds fractionally more GHz. -- The rest of the spec so far: 4-8 GB RAM; 64-bit OS, probably Windows 7 (I know Linux is free, but I want to build something my friend can support himself as easily as he supports his own workstation); 1 TB HDD to hold textures, Maya files and renders, which will be copied to central storage later; mobo with on-board video, gigabit NIC; 500-650 watt PSU; desktop case, something like a Cooler Master ATCS 840. The machines will be sold afterwards if necessary. -- If anyone has experience with Maya and has done any tests with the new CPUs vs. the older ones, I'd really appreciate your input.

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work: The use case is a server with RAID6 for some data that is backed-up somewhat infrequently. I think I'm well served by any of ZFS or RAID6. Platform is Linux. Performance is secondary. So the two setups I am considering are: A RAID6 array plus regular LVM and ext4 A RAID6 array plus ZFS (without redundancy). Is this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros-cons that I see for each option: ZFS has snapshots without preallocated space. LVM requires preallocation (might be no longer true). ZFS has checksumming (very interested in this) and compression (nice bonus). LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import). LVM has encryption (ZFS-on-Linux has not). This is the only major con of this combo I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not? So, to close with a proper question: 1) Is there anything that inherently discourages or precludes RAID6+ZFS? Anyone has experience with a setup like this? 2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.
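
    The combination in option 2 is mechanically simple to try. A sketch with hypothetical device names, assuming ZFS-on-Linux, building the RAID6 md array first and putting a single-device pool on top of it:

      # Six-disk RAID6 handled by mdadm
      mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

      # Single-vdev pool on the md device: no ZFS-level redundancy,
      # but checksumming, snapshots and compression are still available
      zpool create tank /dev/md0
      zfs set compression=on tank
      zfs set atime=off tank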

    Read the article

  • Still about SSD potentials...write and read speed

    - by Macroideal
    I have been working with SSDs (solid state disks) for several months, and problems and questions keep hitting me unexpectedly, since I am still new to SSDs. Lately I have been testing the read/write speed of an SSD, which is what I care about most, but the results turned out worse than I expected. Three kinds of read/write were implemented in my test: (1) reading and writing directly from and to the SSD, opening it as a whole device (in Windows: _open("\\:g", ***)); this can be tricky and hairy, because you have to write data whose size is a multiple of 512 bytes, at a disk position that is a multiple of 512 bytes, so if you want to write just one byte or 4 bytes you still have to write at least a whole sector at a time; (2) reading and writing data from and to files located on the SSD; (3) reading and writing data from and to files on a mechanical disk. I compared these approaches and found the SSD disappointing: it performs worse than the mechanical disk. So I am wondering how I can get at the potential performance of the SSD, since SSDs are said to be the future substitute for mechanical disks. On the other hand, when I test the SSD with a professional hard-disk benchmarking tool, the SSD is about twice as fast as the mechanical disk. So, why?

    Read the article

  • mysql_tzinfo_to_sql missing on my system

    - by Sk1ppeR
    I ran into a problem with time zones within MySQL. Long story short, my application is worldwide, and each database has its own timezone set within the application (not the server), in the form of "Europe/Berlin", "Europe/Vienna", "America/Sao Paulo". Obviously MySQL cannot use these per connection out of the box, and I read that it handles the data better if you use UTC offsets. Basically my goal is to log a field's alteration in another table using a trigger. For that I use UNIX_TIMESTAMP within the trigger. However, UNIX_TIMESTAMP() follows the server's global timezone, which obviously bothers me a lot :| So I went searching for a "per connection" solution to use inside the trigger, and found that mysql_tzinfo_to_sql can actually import zone info (UTC offsets) from my Linux zoneinfo files. However, to my surprise, when I ran the command I got the following: bash: mysql_tzinfo_to_sql: command not found So I'm looking for a solution to fix that. I don't want to "map" the timezone names to UTC offsets just so I can use them in the trigger. Is there an alternative tool? Or at least the sources for this one in particular? What kind of queries does this tool generate, so I could do it manually if there is no alternative tool? Thanks in advance for any help on the issue! P.S: The OS is Debian GNU/Linux 6.0 and the MySQL server is the one from aptitude, with performance tweaks in my.cnf
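
    For what it's worth, the documented usage of the tool pipes its output straight into the mysql system database; a sketch of that, plus a couple of ways to find out which Debian package should ship the binary (package names vary by MySQL version):

      # Normal usage: generate time zone rows from the OS zoneinfo files and load them
      mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root -p mysql

      # Which installed package (if any) contains it
      dpkg -S mysql_tzinfo_to_sql

      # Search the whole archive (apt-file must be installed and updated first)
      apt-file search mysql_tzinfo_to_sql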

    Read the article

  • Fedora 16 can connect to samba share using smbclient but not in nautilus 3.2.1

    - by Nathan Jones
    I have a machine running Ubuntu 11.10 Server acting as a Samba server to share my home directory. Everything works fine on my Windows 7 machine, but on my Fedora 16 laptop, if I use Nautilus to try to access the share using smb://192.168.0.8/nathan in the location bar, it just has the loading cursor and does nothing. It never shows any errors, nothing. Using smbclient works just fine, but I'd like to get it working in Nautilus. I know that there can be problems with SELinux and Samba, so I created a file called booleans.local that contains samba_enable_home_dirs=1. My smb.conf file looks like this: # For Unix password sync to work on a Debian GNU/Linux system, the following # parameters must be set (thanks to Ian Kahan <<[email protected]> for # sending the correct chat script for the passwd program in Debian Sarge). passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . # This boolean controls whether PAM will be used for password changes # when requested by an SMB client instead of the program listed in # 'passwd program'. The default is 'no'. pam password change = yes # This option controls how unsuccessful authentication attempts are mapped # to anonymous connections map to guest = bad user ########## Domains ########### # Is this machine able to authenticate users. Both PDC and BDC # must have this setting enabled. If you are the BDC you must # change the 'domain master' setting to no # ; domain logons = yes # # The following setting only takes effect if 'domain logons' is set # It specifies the location of the user's profile directory # from the client point of view) # The following required a [profiles] share to be setup on the # samba server (see below) ; logon path = \\%N\profiles\%U # Another common choice is storing the profile in the user's home directory # (this is Samba's default) # logon path = \\%N\%U\profile # The following setting only takes effect if 'domain logons' is set # It specifies the location of a user's home directory (from the client # point of view) ; logon drive = H: # logon home = \\%N\%U # The following setting only takes effect if 'domain logons' is set # It specifies the script to run during logon. The script must be stored # in the [netlogon] share # NOTE: Must be store in 'DOS' file format convention ; logon script = logon.cmd # This allows Unix users to be created on the domain controller via the SAMR # RPC pipe. The example command creates a user account with a disabled Unix # password; please adapt to your needs ; add user script = /usr/sbin/adduser --quiet --disabled-password --gecos "" %u # This allows machine accounts to be created on the domain controller via the # SAMR RPC pipe. # The following assumes a "machines" group exists on the system ; add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u # This allows Unix groups to be created on the domain controller via the SAMR # RPC pipe. ; add group script = /usr/sbin/addgroup --force-badname %g ########## Printing ########## # If you want to automatically load your printer list rather # than setting them up individually then you'll need this # load printers = yes # lpr(ng) printing. You may wish to override the location of the # printcap file ; printing = bsd ; printcap name = /etc/printcap # CUPS printing. See also the cupsaddsmb(8) manpage in the # cupsys-client package. 
; printing = cups ; printcap name = cups ############ Misc ############ # Using the following line enables you to customise your configuration # on a per machine basis. The %m gets replaced with the netbios name # of the machine that is connecting ; include = /home/samba/etc/smb.conf.%m # Most people will find that this option gives better performance. # See smb.conf(5) and /usr/share/doc/samba-doc/htmldocs/Samba3-HOWTO/speed.html # for details # You may want to add the following on a Linux system: # SO_RCVBUF=8192 SO_SNDBUF=8192 # socket options = TCP_NODELAY # The following parameter is useful only if you have the linpopup package # installed. The samba maintainer and the linpopup maintainer are # working to ease installation and configuration of linpopup and samba. ; message command = /bin/sh -c '/usr/bin/linpopup "%f" "%m" %s; rm %s' & # Domain Master specifies Samba to be the Domain Master Browser. If this # machine will be configured as a BDC (a secondary logon server), you # must set this to 'no'; otherwise, the default behavior is recommended. # domain master = auto # Some defaults for winbind (make sure you're not using the ranges # for something else.) ; idmap uid = 10000-20000 ; idmap gid = 10000-20000 ; template shell = /bin/bash # The following was the default behaviour in sarge, # but samba upstream reverted the default because it might induce # performance issues in large organizations. # See Debian bug #368251 for some of the consequences of *not* # having this setting and smb.conf(5) for details. ; winbind enum groups = yes ; winbind enum users = yes # Setup usershare options to enable non-root users to share folders # with the net usershare command. # Maximum number of usershare. 0 (default) means that usershare is disabled. ; usershare max shares = 100 # Allow users who've been granted usershare privileges to create # public shares, not just authenticated ones usershare allow guests = yes #======================= Share Definitions ======================= # Un-comment the following (and tweak the other settings below to suit) # to enable the default home directory shares. This will share each # user's home director as \\server\username [homes] comment = Home Directories browseable = yes # By default, the home directories are exported read-only. Change the # next parameter to 'no' if you want to be able to write to them. read only = no # File creation mask is set to 0700 for security reasons. If you want to # create files with group=rw permissions, set next parameter to 0775. ; create mask = 0775 # Directory creation mask is set to 0700 for security reasons. If you want to # create dirs. with group=rw permissions, set next parameter to 0775. ; directory mask = 0775 # By default, \\server\username shares can be connected to by anyone # with access to the samba server. Un-comment the following parameter # to make sure that only "username" can connect to \\server\username # The following parameter makes sure that only "username" can connect # # This might need tweaking when using external authentication schemes valid users = %S # Un-comment the following and create the netlogon directory for Domain Logons # (you need to configure Samba to act as a domain controller too.) ;[netlogon] ; comment = Network Logon Service ; path = /home/samba/netlogon ; guest ok = yes ; read only = yes # Un-comment the following and create the profiles directory to store # users profiles (see the "logon path" option above) # (you need to configure Samba to act as a domain controller too.) 
# The path below should be writable by all users so that their # profile directory may be created the first time they log on ;[profiles] ; comment = Users profiles ; path = /home/samba/profiles ; guest ok = no ; browseable = no ; create mask = 0600 ; directory mask = 0700 [printers] comment = All Printers browseable = no path = /var/spool/samba printable = yes guest ok = no read only = no create mask = 0700 # Windows clients look for this share name as a source of downloadable # printer drivers [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = no # Uncomment to allow remote administration of Windows print drivers. # You may need to replace 'lpadmin' with the name of the group your # admin users are members of. # Please note that you also need to set appropriate Unix permissions # to the drivers directory for these users to have write rights in it ; write list = root, @lpadmin # A sample share for sharing your CD-ROM with others. ;[cdrom] ; comment = Samba server's CD-ROM ; read only = yes ; locking = no ; path = /cdrom ; guest ok = yes # The next two parameters show how to auto-mount a CD-ROM when the # cdrom share is accesed. For this to work /etc/fstab must contain # an entry like this: # # /dev/scd0 /cdrom iso9660 defaults,noauto,ro,user 0 0 # # The CD-ROM gets unmounted automatically after the connection to the # # If you don't want to use auto-mounting/unmounting make sure the CD # is mounted on /cdrom # ; preexec = /bin/mount /cdrom ; postexec = /bin/umount /cdrom smbusers: <nathan> = <"nathan"> Any help would be very much appreciated! Thanks!
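
    A few client-side checks that may help separate a gvfs problem from an SELinux or Samba one; a sketch assuming Fedora 16's gvfs command-line tools and the share named in the question:

      # Does plain smbclient still see and open the share? (already known to work)
      smbclient -L //192.168.0.8 -U nathan
      smbclient //192.168.0.8/nathan -U nathan -c 'ls'

      # Try the same share through gvfs from the command line, outside Nautilus
      gvfs-mount smb://192.168.0.8/nathan
      gvfs-ls smb://192.168.0.8/nathan

      # Confirm the SELinux boolean mentioned above actually took effect
      getsebool samba_enable_home_dirs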

    Read the article

  • Router slowing my connection?

    - by Roberto
    I have a Linksys WRT54G and I pay for a 12Mbps connection. I've been testing my connection using speedtest.net for many days and always get 8Mbps. I called the support and they told me to bypass the router and test. I did it and got 16Mbps (much more than I pay for), so I thought "this guy just changed my speed so can he blame my router", and he blamed it. But to my surprise, everytime I bypass the router I get 16Mbps and when I use the router I get 8Mbps. Is this guy trolling me somehow (configuring the VOIP-modem-stuff to different profiles depending o the MAC address connecting to it) or is my router a POS? How can I find out? I don't know what's the thing the router connects to, it's a kind of VOIP adapter; the link is this one, but unfortunately I don't think you'll understand because it's in Portuguese. I know they can remotely connect to it, that's the origin of my conspiracy theory :) I just tested wired to the router and got 10Mbps (and still 8Mbps on wifi and 16Mbps without router) O_o I'm 5cm away from my router, so no obstacles to interfere, right? ------ UPDATE ------- It's a WRT54G V8, I'm using firmware v8.00.7 (will install 8.00.8 tomorrow, but I saw that it's only a minor fix to UPnP denial of service security vulnerability). Results: IPerf LAN-LAN: 80Mbps IPerf LAN-WLAN: 19Mbps (therefore we can ignore wireless issues/settings) I wasn't able to make the (W)LAN-WAN NAT-enabled test with IPerf, I get a connection refused error. I'm not sure if did it right: ran in server mode, configured router to forward that port to my IP and tried to connect to my internet IP that got from this site. I don't think there is a way to disable NAT using this firmware. Question: Let's suppose it's an underpowered hardware issue. Is it right to assume that custom firmwares could resolve the issue because they are possibly better implemented and would make better use of the router resources? I couldn't find any references pointing to wired performance improvements with the use of custom firmware.
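
    For anyone reproducing the tests above, an iperf pair along these lines gives the wired/wireless numbers; the addresses are placeholders, and the (W)LAN-WAN test additionally needs the chosen port forwarded on the router:

      # On the receiving machine
      iperf -s -p 5001

      # From the sending machine: 30-second TCP test across the LAN
      iperf -c 192.168.1.100 -p 5001 -t 30

      # For the (W)LAN-WAN test, forward TCP 5001 on the router to 192.168.1.100,
      # then point the client at the public IP instead
      iperf -c 203.0.113.10 -p 5001 -t 30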

    Read the article

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast (Corvallis, OR) to the east coast (NY, NY). However, I am seeing some disturbing latency numbers from my location (Berkeley, CA) to the NYC host. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes: Berkeley to NYC server: 215 ms latency, 46ms transfer time, 261ms total Berkeley to Corvallis server: 114ms latency, 41ms transfer time, 155ms total some URLs if you want to try yourself: http://careers.stackoverflow.com/content/cso/img/logo.png (NY, NY) http://serverfault.com/cache/logo.png (Corvallis, OR) It makes sense that Corvallis, OR is geographically closer to Berkeley, CA so I expect the connection to be a bit faster.. but I'm seeing an increase in latency of +100ms when I perform the same test to the NYC server. That seems .. excessive to me. Particularly since the time spent transferring the actual data only went up 10%, yet the latency went up ten times as much! That feels... wrong... to me. I found a few links here that were helpful (through Google no less!) ... http://serverfault.com/questions/63531/does-routing-distance-affect-performance-significantly http://serverfault.com/questions/61719/how-does-geography-affect-network-latency http://serverfault.com/questions/6210/latency-in-internet-connections-from-europe-to-usa ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?
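
    A convenient way to split DNS, connection setup and transfer time for the two logo URLs without the browser dev tools is curl's write-out timers; a small sketch using standard curl format variables:

      for url in http://careers.stackoverflow.com/content/cso/img/logo.png \
                 http://serverfault.com/cache/logo.png; do
          curl -o /dev/null -s \
               -w "$url  dns=%{time_namelookup}s  connect=%{time_connect}s  total=%{time_total}s\n" \
               "$url"
      done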

    Read the article

  • IIS serving static content gives 503 at random

    - by Steffen
    We're having a few issues with our image server. It's a Win 2008 running IIS 7.5 and it only serves static content: images. It has run without issues for quite a while, until recently when we disabled Output Caching, as we noticed having it enabled meant it sent no-cache host-headers to the clients (forcing them to fetch the images from the server every time) We've read quite a bit about it, and it seems IIS just works that way - either you use Output Caching or you get to use cache host-headers. Anyway having disabled the Output Cache, we now experience random 5 minutes intervals, where all requests just get a 503 Service Unavailable. During this period the "Files cached" performance counter staggers (neither increased nor decreased) and after the period all caches are flushed. You might find it weird I talk about caching, since we disabled Output Caching. The thing is we changed the ObjectTTL parameter in registry, so we cache files for 3 minutes (which has worked very well, our Disk I/O dropped significantly) So even with Output Caching disabled, we're still caching plenty of files - if we could just get rid of the random 503 it'd be perfect :-D We don't get any messages in the Windows event log during these 503 intervals, so we're pretty stumped as to what to do. Any ideas are very welcome :-)

    Read the article

  • Amazon AWS EC2 + Puppet, get Puppet to know AWS instance tags

    - by Piotr Jasiulewicz
    I am having a problem with my AWS deployment; I'm fairly new to AWS and Puppet. So, coming to my question: can you distinguish Puppet nodes by AWS machine tags or CNAME domains? A little background about the plan: have multiple clusters of machines (one PHP cluster, one legacy PHP cluster, one Java cluster, one Perl cluster); control configuration with Puppet - I'm still pretty new to Puppet, but as a developer I like the idea of being able to version-control server configuration; have autoscaling enabled on those clusters - obviously the main benefit of the cloud, and what makes the much higher cost for any reasonable performance worth it (those Amazon machines are slower than my phone...); and have deployment controlled by Capistrano, which makes things a lot easier. So in AWS you get those super nasty public/private machine DNS names... there's no way you can identify machines by those. To ease the problem, it seems AWS wants you to tag everything - so I did. I found a script that makes a CNAME record for each machine from its "ShortName" tag, thanks to the Route53 API. Every machine has a ShortName tag that becomes its CNAME; unfortunately Puppet still resolves the private DNS name. I'd like to have node 'perl-cluster' {} in Puppet - does anyone have any clue how to achieve this? Thanks
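
    One hedged sketch of how the ShortName tag could be surfaced to Puppet: pull the instance id from the metadata service, look the tag up through the EC2 API, and drop it into Facter's external-facts directory so manifests can branch on it. The aws CLI, its credentials handling and facts.d support are assumptions to verify against your versions:

      #!/bin/bash
      # The instance id is available from the metadata service without credentials
      INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)

      # Tags are not in the metadata service; ask the EC2 API for this instance's ShortName tag
      SHORTNAME=$(aws ec2 describe-tags \
          --filters "Name=resource-id,Values=${INSTANCE_ID}" "Name=key,Values=ShortName" \
          --query 'Tags[0].Value' --output text)

      # Expose it as an external fact, e.g. $::shortname in manifests
      mkdir -p /etc/facter/facts.d
      echo "shortname=${SHORTNAME}" > /etc/facter/facts.d/shortname.txt

    With that fact in place, node definitions or a case statement on $::shortname can stand in for matching the private DNS hostname.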

    Read the article
