Search Results


  • Recently "exposed" to Clicksor

    - by I take Drukqs
    Previous information concerning my issue can be found here: Virus...? Tons of people talking. Not sure where else to ask. I heard the loud sounds and whatnot from one of the ads but nothing was harmed and other than having the life scared out of me nothing was immediately affected. Literally nothing has changed. I really don't know what to do. My PC seems fine and from what Malwarebytes and Spybot tells me none of my files have been infected with anything. If you need more information I will be glad to supply it. Thanks in advance. Malwarebytes quick and full scan: Clean. Spybot S&D scan: Clean. HijackThis log: Logfile of Trend Micro HijackThis v2.0.4 Scan saved at 5:23:33 PM, on 2/2/2011 Platform: Windows 7 (WinNT 6.00.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16700) Boot mode: Normal Running processes: C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCU.exe C:\Program Files (x86)\NEC Electronics\USB 3.0 Host Controller Driver\Application\nusb3mon.exe C:\Program Files (x86)\Common Files\InstallShield\UpdateService\issch.exe C:\Program Files (x86)\Common Files\Java\Java Update\jusched.exe C:\Program Files (x86)\Pidgin\pidgin.exe C:\Program Files (x86)\Steam\Steam.exe C:\Program Files (x86)\Mozilla Firefox\firefox.exe C:\Program Files (x86)\uTorrent\uTorrent.exe C:\Fraps\fraps.exe C:\Program Files (x86)\Mozilla Firefox\plugin-container.exe C:\Program Files (x86)\Trend Micro\HiJackThis\HiJackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\Windows\SysWOW64\blank.htm R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: SearchHook Class - {BC86E1AB-EDA5-4059-938F-CE307B0C6F0A} - C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\AddressBarSearch.dll F2 - REG:system.ini: UserInit=userinit.exe O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [BCU] "C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCU.exe" O4 - HKLM\..\Run: [JMB36X IDE Setup] C:\Windows\RaidTool\xInsIDE.exe O4 - HKLM\..\Run: [NUSB3MON] "C:\Program Files (x86)\NEC Electronics\USB 3.0 Host Controller Driver\Application\nusb3mon.exe" O4 - HKLM\..\Run: [ISUSScheduler] "C:\Program Files (x86)\Common Files\InstallShield\UpdateService\issch.exe" -start O4 - HKLM\..\Run: [ISUSPM Startup] 
C:\PROGRA~2\COMMON~1\INSTAL~1\UPDATE~1\ISUSPM.exe -startup O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files (x86)\Adobe\Reader 10.0\Reader\Reader_sl.exe" O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files (x86)\Common Files\Adobe\ARM\1.0\AdobeARM.exe" O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files (x86)\Common Files\Java\Java Update\jusched.exe" O4 - HKLM\..\RunOnce: [Malwarebytes' Anti-Malware] C:\Program Files (x86)\Malwarebytes' Anti-Malware\mbamgui.exe /install /silent O4 - HKCU\..\Run: [ISUSPM Startup] C:\PROGRA~2\COMMON~1\INSTAL~1\UPDATE~1\ISUSPM.exe -startup O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE') O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE') O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing) O23 - Service: AppleChargerSrv - Unknown owner - C:\Windows\system32\AppleChargerSrv.exe (file missing) O23 - Service: Browser Configuration Utility Service (BCUService) - DeviceVM, Inc. - C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCUService.exe O23 - Service: DES2 Service for Energy Saving. (DES2 Service) - Unknown owner - C:\Program Files (x86)\GIGABYTE\EnergySaver2\des2svr.exe O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing) O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files (x86)\Common Files\InstallShield\Driver\11\Intel 32\IDriverT.exe O23 - Service: JMB36X - Unknown owner - C:\Windows\SysWOW64\XSrvSetup.exe O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing) O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: NVIDIA Driver Helper Service (NVSvc) - Unknown owner - C:\Windows\system32\nvvsvc.exe (file missing) O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing) O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing) O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing) O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing) O23 - Service: Steam Client Service - Valve Corporation - C:\Program 
Files (x86)\Common Files\Steam\SteamService.exe O23 - Service: NVIDIA Stereoscopic 3D Driver Service (Stereo Service) - NVIDIA Corporation - C:\Program Files (x86)\NVIDIA Corporation\3D Vision\nvSCPAPISvr.exe O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing) O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing) O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing) O23 - Service: @%SystemRoot%\system32\Wat\WatUX.exe,-601 (WatAdminSvc) - Unknown owner - C:\Windows\system32\Wat\WatAdminSvc.exe (file missing) O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing) O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing) O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing) -- End of file - 7889 bytes

    Read the article

  • Python Mechanize unable to avoid redirect when Post

    - by Enric Geijo
    I am trying to crawl a site using mechanize. The site provides search results in different pages. When posting to get the next set of results, something is wrong and the server redirects me to the first page, asking mechanize to update the SearchSession Cookie. I have been debugging the requests using Firefox and they look quite the same, and I am unable to find the problem. Any suggestion? Below the requests: ----------- FIRST THE RIGHT SEQUENCE, USING TAMPER IN FIREFOX ------------------------- POST XXX/JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London Load Flags[LOAD_DOCUMENT_URI LOAD_INITIAL_DOCUMENT_URI ] Content Size[-1] Mime Type[text/html] Request Headers: Host[www.cwjobs.co.uk] User-Agent[Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100401 Ubuntu/9.10 (karmic) Firefox/3.5.9] Accept[text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8] Accept-Language[en-us,en;q=0.5] Accept-Encoding[gzip,deflate] Accept-Charset[ISO-8859-1,utf-8;q=0.7,*;q=0.7] Keep-Alive[300] Connection[keep-alive] Referer[XXX/JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London] Cookie[ecos=774803468-0; AnonymousUser=MemberId=acc079dd-66b6-4081-9b07-60d6955ee8bf&IsAnonymous=True; PJBIPPOPUP=; WT_FPC=id=86.181.183.106-2262469600.30073025:lv=1272812851736:ss=1272812789362; SearchSession=SessionGuid=71de63de-3bd0-4787-895d-b6b9e7c93801&LogSource=NAT] Post Data: __EVENTTARGET[srpPager%24btnForward] __EVENTARGUMENT[] hdnSearchResults[BV%2CA%2CC0P5x%2COou-%2CB4S-%2CBuC-%2CDzx-%2CHwn-%2CKPP-%2CIVA-%2CC9D-%2CH6X-%2CH7x-%2CJ0x-%2CCvX-%2CCra-%2COHa-%2CHhP-%2CCoj-%2CBlM-%2CE9W-%2CIm8-%2CBqG-%2CPFy-%2CN%2Fm-%2Ceaa%2CCvj-%2CCtJ-%2CCr7-%2CBpu-%2Cmh%2CMb6-%2CJ%2Fk-%2CHY8-%2COJ7-%2CNtF-%2CEya-%2CErT-%2CEo4-%2CEKU-%2CDnL-%2CC5M-%2CCyB-%2CBsD-%2CBrc-%2CBpU-%2Col%2C30%2CC1%2Cd4N%2COo8-%2COi0-%2CLz%2F-%2CLxP-%2CFyp-%2CFVR-%2CEHL-%2CPrP-%2CLmE-%2CK3H-%2CKXJ-%2CFyn%2CIcq-%2CIco-%2CIK4-%2CIIg-%2CH2k-%2CH0N-%2CHwp-%2CHvF-%2CFij-%2CFhl-%2CCwj-%2CCb5-%2CCQj-%2CCQh-%2CB%2B2-%2CBc6-%2ChFo%2CNLq-%2CNI%2F-%2CFzM-%2Cdu-%2CHg2-%2CBug-%2CBse-%2CB9Q-] 
__VIEWSTATE[%2FwEPDwUKLTkyMzI2ODA4Ng9kFgYCBA8WBB4EaHJlZgWJAWh0dHA6Ly93d3cuY3dqb2JzLmNvLnVrL0pvYlNlYXJjaC9SU1MuYXNweD9LZXl3b3Jkcz1QeXRob24mTFR4dD1Mb25kb24lMmMrU291dGgrRWFzdCZSYWRpdXM9MCZMSWRzMj1aViZjbGlkPTE2MjEmY2x0eXBlaWQ9MiZjbE5hbWU9TG9uZG9uHgV0aXRsZQUkTGF0ZXN0IFB5dGhvbiBqb2JzIGZyb20gQ1dKb2JzLmNvLnVrZAIGDxYCHgRUZXh0BV48bGluayByZWw9ImNhbm9uaWNhbCIgaHJlZj0iaHR0cDovL3d3dy5jd2pvYnMuY28udWsvSm9iU2Vla2luZy9QeXRob25fTG9uZG9uX2wxNjIxX3QyLmh0bWwiIC8%2BZAIIEGRkFg4CBw8WAh8CBV9Zb3VyIHNlYXJjaCBvbiA8Yj5LZXl3b3JkczogUHl0aG9uOyBMb2NhdGlvbjogTG9uZG9uLCBTb3V0aCBFYXN0OyA8L2I%2BIHJldHVybmVkIDxiPjg1PC9iPiBqb2JzLmQCCQ8WAh4HVmlzaWJsZWhkAgsPFgIfAgUoVGhlIG1vc3QgcmVsZXZhbnQgam9icyBhcmUgbGlzdGVkIGZpcnN0LmQCEw8PFgIeC05hdmlnYXRlVXJsBQF%2BZGQCFQ9kFgYCBQ8PFgYfAgUGUHl0aG9uHgtEZWZhdWx0VGV4dAUMZS5nLiBhbmFseXN0HhNEZWZhdWx0VGV4dENzc0NsYXNzZWRkAgsPDxYGHwIFEkxvbmRvbiwgU291dGggRWFzdB8FBQllLmcuIEJhdGgfBmVkZAIRDxAPFgYeDURhdGFUZXh0RmllbGQFClJhZGl1c05hbWUeDkRhdGFWYWx1ZUZpZWxkBQZSYWRpdXMeC18hRGF0YUJvdW5kZ2QQFREHMCBtaWxlcwcyIG1pbGVzBzUgbWlsZXMIMTAgbWlsZXMIMTUgbWlsZXMIMjAgbWlsZXMIMjUgbWlsZXMIMzAgbWlsZXMIMzUgbWlsZXMINDAgbWlsZXMINDUgbWlsZXMINTAgbWlsZXMINjAgbWlsZXMINzAgbWlsZXMIODAgbWlsZXMIOTAgbWlsZXMJMTAwIG1pbGVzFREBMAEyATUCMTACMTUCMjACMjUCMzACMzUCNDACNDUCNTACNjACNzACODACOTADMTAwFCsDEWdnZ2dnZ2dnZ2dnZ2dnZ2dnZGQCFw9kFgQCAQ9kFgQCBA8QZA8WA2YCAQICFgMQBQhBbGwgam9icwUBMGcQBRlEaXJlY3QgZW1wbG95ZXIgam9icyBvbmx5BQEyZxAFEEFnZW5jeSBqb2JzIG9ubHkFATFnZGQCBg8QZA8WA2YCAQICFgMQBQlSZWxldmFuY2UFATFnEAUERGF0ZQUBMmcQBQZTYWxhcnkFATNnZGQCBQ8PFgYeClBhZ2VOdW1iZXICAh4PTnVtYmVyT2ZSZXN1bHRzAlUeDlJlc3VsdHNQZXJQYWdlAhRkZAIZDxYCHwNoZGQ%3D] Refinesearch%24txtKeywords[Python] Refinesearch%24txtLocation[London%2C+South+East] Refinesearch%24ddlRadius[0] ddlCompanyType[0] ddlSort[1] Response Headers: Cache-Control[private] Date[Sun, 02 May 2010 16:09:27 GMT] Content-Type[text/html; charset=utf-8] Expires[Sat, 02 May 2009 16:09:27 GMT] Server[Microsoft-IIS/6.0] X-SiteConHost[P310] X-Powered-By[ASP.NET] X-AspNet-Version[2.0.50727] Set-Cookie[SearchSession=SessionGuid=71de63de-3bd0-4787-895d-b6b9e7c93801&LogSource=NAT; path=/] Content-Encoding[gzip] Vary[Accept-Encoding] Transfer-Encoding[chunked] -------- NOW WHAT I'AM SENDING USING MECHANIZE, SOME HEADERS ADDED, ETC ----------- POST /JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London HTTP/1.1\r\nContent-Length: 2424\r\n Accept-Language: en-us,en;q=0.5\r\n Accept-Encoding: gzip\r\n Host: www.cwjobs.co.uk\r\n Accept: text/html,application/xhtml+xml,application/xml;q=0.9,/;q=0.8\r\n Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n Connection: keep-alive\r\n Cookie: AnonymousUser=MemberId=8fa5ddd7-17ed-425e-b189-82693bfbaa0c&IsAnonymous=True; SearchSession=SessionGuid=33e4e439-c2d6-423f-900f-574099310d5a&LogSource=NAT\r\n Referer: XXX/JobSearch/Results.aspx?Keywords=Python&LTxt=London%2c+South+East&Radius=0&LIds2=ZV&clid=1621&cltypeid=2&clName=London\r\n Content-Type: application/x-www-form-urlencoded\r\n\r\n' '__EVENTTARGET=srpPager%24btnForward& __EVENTARGUMENT=& 
hdnSearchResults=BV%2CA%2CC0eif%2CMwc%2CM6s%2COou%2CK09%2CG4H%2CEZf%2CGTu%2CLrr%2CGuX%2CGs9%2CEz9%2CL5X%2CL9U%2ChU%2CHHf%2CMAL%2CNDi%2CJrY%2CGBy%2CM%2Bo%2CdE-%2CpI%2CtDI%2CL5L%2CL7l%2CL8z%2CM%2FA%2CPPP%2CCM0%2CEpK%2CHPy%2Cez%2C7p%2CJ2U%2CJ9b%2CJ%2F2%2CKea%2CLBj%2CLvi%2CL2t%2CM8r%2CM9S%2CM%2Fa%2CPRT%2CPgi%2Csg7%2CF6%2CI2F%2CJTd%2CO-%2CC0v%2CC3f%2CDCq%2CDxn%2CERl%2CUbV%2CGME%2CGMG%2CGd2%2CGgO%2CGyK%2CG0h%2CG4F%2CG5p%2CJGL%2CJHJ%2CKhj%2CL4L%2CMM1%2CMYL%2CMYN%2CMp4%2CNL0%2COrj%2CvuW%2CBdE%2CBfv%2CI1i%2CBCh-%2COLA%2CHH4%2CM6O%2CM8Q%2CMre& __VIEWSTATE=%2FwEPDwUKLTkyMzI2ODA4Ng9kFgYCBA8WBB4EaHJlZgWJAWh0dHA6Ly93d3cuY3dqb2JzLmNvLnVrL0pvYlNlYXJjaC9SU1MuYXNweD9LZXl3b3Jkcz1QeXRob24mTFR4dD1Mb25kb24lMmMrU291dGgrRWFzdCZSYWRpdXM9MCZMSWRzMj1aViZjbGlkPTE2MjEmY2x0eXBlaWQ9MiZjbE5hbWU9TG9uZG9uHgV0aXRsZQUkTGF0ZXN0IFB5dGhvbiBqb2JzIGZyb20gQ1dKb2JzLmNvLnVrZAIGDxYCHgRUZXh0BV48bGluayByZWw9ImNhbm9uaWNhbCIgaHJlZj0iaHR0cDovL3d3dy5jd2pvYnMuY28udWsvSm9iU2Vla2luZy9QeXRob25fTG9uZG9uX2wxNjIxX3QyLmh0bWwiIC8%2BZAIIEGRkFg4CBw8WAh8CBV9Zb3VyIHNlYXJjaCBvbiA8Yj5LZXl3b3JkczogUHl0aG9uOyBMb2NhdGlvbjogTG9uZG9uLCBTb3V0aCBFYXN0OyA8L2I%2BIHJldHVybmVkIDxiPjg1PC9iPiBqb2JzLmQCCQ8WAh4HVmlzaWJsZWhkAgsPFgIfAgUoVGhlIG1vc3QgcmVsZXZhbnQgam9icyBhcmUgbGlzdGVkIGZpcnN0LmQCEw8PFgIeC05hdmlnYXRlVXJsBQF%2BZGQCFQ9kFgYCBQ8PFgYfAgUGUHl0aG9uHgtEZWZhdWx0VGV4dAUMZS5nLiBhbmFseXN0HhNEZWZhdWx0VGV4dENzc0NsYXNzZWRkAgsPDxYGHwIFEkxvbmRvbiwgU291dGggRWFzdB8FBQllLmcuIEJhdGgfBmVkZAIRDxAPFgYeDURhdGFUZXh0RmllbGQFClJhZGl1c05hbWUeDkRhdGFWYWx1ZUZpZWxkBQZSYWRpdXMeC18hRGF0YUJvdW5kZ2QQFREHMCBtaWxlcwcyIG1pbGVzBzUgbWlsZXMIMTAgbWlsZXMIMTUgbWlsZXMIMjAgbWlsZXMIMjUgbWlsZXMIMzAgbWlsZXMIMzUgbWlsZXMINDAgbWlsZXMINDUgbWlsZXMINTAgbWlsZXMINjAgbWlsZXMINzAgbWlsZXMIODAgbWlsZXMIOTAgbWlsZXMJMTAwIG1pbGVzFREBMAEyATUCMTACMTUCMjACMjUCMzACMzUCNDACNDUCNTACNjACNzACODACOTADMTAwFCsDEWdnZ2dnZ2dnZ2dnZ2dnZ2dnZGQCFw9kFgQCAQ9kFgQCBA8QZA8WA2YCAQICFgMQBQhBbGwgam9icwUBMGcQBRlEaXJlY3QgZW1wbG95ZXIgam9icyBvbmx5BQEyZxAFEEFnZW5jeSBqb2JzIG9ubHkFATFnZGQCBg8QZA8WA2YCAQICFgMQBQlSZWxldmFuY2UFATFnEAUERGF0ZQUBMmcQBQZTYWxhcnkFATNnZGQCBQ8PFgYeClBhZ2VOdW1iZXICAR4PTnVtYmVyT2ZSZXN1bHRzAlUeDlJlc3VsdHNQZXJQYWdlAhRkZAIZDxYCHwNoZGQ%3D& Refinesearch%24txtKeywords=Python& Refinesearch%24txtLocation=London%2CSouth+East& Refinesearch%24ddlRadius=0& Refinesearch%24btnSearch=Search& ddlCompanyType=0& ddlSort=1'
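
    One difference stands out in the dumps above: the failing mechanize POST includes Refinesearch%24btnSearch=Search, a submit control that the working browser request never sends, so the server may be interpreting the postback as a fresh search (which would explain the refreshed SearchSession cookie and the redirect back to page one). Below is a minimal, untested mechanize sketch that fires the pager postback while keeping that button out of the form data; the control names are taken from the dumps, but results_url is a placeholder:

        import mechanize

        br = mechanize.Browser()
        br.set_handle_robots(False)
        # Mirror the browser headers captured in the working request above.
        br.addheaders = [
            ('User-Agent', 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) '
                           'Gecko/20100401 Ubuntu/9.10 (karmic) Firefox/3.5.9'),
            ('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'),
        ]
        br.open(results_url)  # placeholder: URL of the first results page
        br.select_form(nr=0)  # assumes the search form is the first form on the page
        # __VIEWSTATE and hdnSearchResults are picked up from the page's form
        # automatically; hidden fields must be writable to set the postback target.
        br.form.set_all_readonly(False)
        br.form['__EVENTTARGET'] = 'srpPager$btnForward'
        br.form['__EVENTARGUMENT'] = ''
        # Disable the search button so it is not submitted, since its presence
        # is the main difference from the working browser request.
        br.form.find_control('Refinesearch$btnSearch').disabled = True
        response = br.submit()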

    Read the article

  • Add options to select box without Internet Explorer closing the box?

    - by Paul Colby
    Hi, I'm trying to build a web page with a number of drop-down select boxes that load their options asynchronously when the box is first opened. This works very well under Firefox, but not under Internet Explorer. Below is a small example of what I'm trying to achieve. Basically, there is a select box (with the id "selectBox") which contains just one option ("Any"), and an onmousedown handler that loads the other options when the box is clicked.

        <html>
        <head>
        <script type="text/javascript">
        function appendOption(select,option) {
            try {
                select.add(option,null); // Standards compliant.
            } catch (e) {
                select.add(option); // IE only version.
            }
        }
        function loadOptions() {
            // Simulate an AJAX request that will call the
            // loadOptionsCallback function after 500ms.
            setTimeout(loadOptionsCallback,500);
        }
        function loadOptionsCallback() {
            var selectBox = document.getElementById('selectBox');
            var option = document.createElement('option');
            option.text = 'new option';
            appendOption(selectBox,option);
        }
        </script>
        </head>
        <body>
        <select id="selectBox" onmousedown="loadOptions();">
        <option>Any</option>
        </select>
        </body>
        </html>

    The desired behavior (which Firefox exhibits) is:
    1. The user sees a closed select box containing "Any".
    2. The user clicks on the select box.
    3. The select box opens to reveal the one and only option ("Any").
    4. 500ms later (or when the AJAX call has returned) the dropped-down list expands to include the new options (hard-coded to 'new option' in this example).

    So that's exactly what Firefox does, which is great. However, in Internet Explorer, as soon as the new option is added in step 4 the browser closes the select box. The select box does contain the correct options, but the box is closed, requiring the user to click to re-open it. So, does anyone have any suggestions for how I can load the select control's options asynchronously without IE closing the drop-down box? I know that I can load the list before the box is even clicked, but the real form I'm developing contains many such select boxes, which are all interrelated, so it will be much better for both the client and the server if I can load each set of options only when needed. Also, if the results are loaded synchronously, before the select box's onmousedown handler completes, then IE will show the full list as expected - however, synchronous loading is a bad idea here, since it will completely "lock" the browser while the network requests are taking place. Finally, I've also tried using IE's click() method to open the select box once the new options have been added, but that does not re-open it. Any ideas or suggestions would be really appreciated!! :) Thanks! Paul.

    Read the article

  • urllib2 misbehaving with dynamically loaded content

    - by Sheena
    Some code:

        import urllib.request
        from urllib import error

        headers = {}
        headers['user-agent'] = 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0'
        headers['Accept'] = 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'
        headers['Accept-Language'] = 'en-gb,en;q=0.5'
        #headers['Accept-Encoding'] = 'gzip, deflate'
        request = urllib.request.Request(sURL, headers = headers)
        try:
            response = urllib.request.urlopen(request)
        except error.HTTPError as e:
            print('The server couldn\'t fulfill the request.')
            print('Error code: {0}'.format(e.code))
        except error.URLError as e:
            print('We failed to reach a server.')
            print('Reason: {0}'.format(e.reason))
        else:
            f = open('output/{0}.html'.format(sFileName),'w')
            f.write(response.read().decode('utf-8'))

    A url: http://groupon.cl/descuentos/santiago-centro

    The situation. Here's what I did:
    1. enable javascript in the browser
    2. open the url above and keep an eye on the console
    3. disable javascript
    4. repeat step 2
    5. use urllib2 to grab the webpage and save it to a file
    6. enable javascript
    7. open the file with the browser and observe the console
    8. repeat step 7 with javascript off

    Results:
    - In step 2 I saw that a whole lot of the page content was loaded dynamically using ajax. So the HTML that arrived was a sort of skeleton and ajax was used to fill in the gaps. This is fine and not at all surprising. Since the page should be seo friendly, it should work fine without js.
    - In step 4 nothing happens in the console and the skeleton page loads pre-populated, rendering the ajax unnecessary. This is also completely not confusing.
    - In step 7 the ajax calls are made but fail. This is also ok since the urls they are using are not local; the calls are thus broken. The page looks like the skeleton. This is also great and expected.
    - In step 8 no ajax calls are made and the skeleton is just a skeleton. I would have thought that this should behave very much like step 4.

    Question: what I want to do is use urllib2 to grab the html from step 4, but I can't figure out how. What am I missing and how could I pull this off?

    To paraphrase: if I were writing a spider I would want to be able to grab plain ol' HTML (as in that which resulted in step 4). I don't want to execute ajax stuff or any javascript at all. I don't want to populate anything dynamically. I just want HTML. The seo friendly site wants me to get what I want because that's what seo is all about. How would one go about getting plain HTML content given the situation I outlined? To do it manually I would turn off js, navigate to the page and copy the html. I want to automate this.

    Stuff I've tried: I used wireshark to look at packet headers, and the GETs sent off from my pc in steps 2 and 4 have the same headers. Reading about SEO makes me think that this is pretty normal, otherwise techniques such as hijax wouldn't be used.

    Here are the headers my browser sends:

        Host: groupon.cl
        User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-gb,en;q=0.5
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

    Here are the headers my script sends:

        Accept-Encoding: identity
        Host: groupon.cl
        Accept-Language: en-gb,en;q=0.5
        Connection: close
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        User-Agent: User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0

    The differences are:
    - My script has Connection = close instead of keep-alive. I can't see how this would cause a problem.
    - My script has Accept-Encoding = identity. This might be the cause of the problem. I can't really see why the host would use this field to determine the user-agent though. If I change the encoding to match the browser request headers then I have trouble decoding it. I'm working on this now...

    Watch this space, I'll update the question as new info comes up.
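
    Two things in the header dump above are worth chasing: the script sends "User-Agent: User-Agent: Mozilla/..." (the header name is repeated inside the value, because the dict value already starts with 'User-Agent: '), and it advertises Accept-Encoding: identity. A small sketch, assuming Python 3's urllib and the poster's sURL variable, that cleans up the value and handles a gzip-compressed response explicitly:

        import gzip
        import io
        import urllib.request

        headers = {
            # The value must not repeat the header name; the dict key supplies it.
            'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) '
                          'Gecko/20100101 Firefox/16.0',
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-gb,en;q=0.5',
            'Accept-Encoding': 'gzip',  # advertise only what we decode below
        }
        request = urllib.request.Request(sURL, headers=headers)
        with urllib.request.urlopen(request) as response:
            body = response.read()
            # Decompress only when the server actually compressed the response.
            if response.headers.get('Content-Encoding') == 'gzip':
                body = gzip.GzipFile(fileobj=io.BytesIO(body)).read()
        html = body.decode('utf-8')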

    Read the article

  • Problem with the output of the jQuery function .offset in IE

    - by vandalk
    Hello! I'm new to jquery and javascript, and to web site development overall, and I'm having a problem with the .offset function. I have the following code working fine on Chrome and FF but not working on IE:

        $(document).keydown(function(k){
            var keycode=k.which;
            var posk=$('html').offset();
            var centeryk=screen.availHeight*0.4;
            var centerxk=screen.availWidth*0.4;
            $("span").text(k.which+","+posk.top+","+posk.left);
            if (keycode==37){
                k.preventDefault();
                $("html,body").stop().animate({scrollLeft:-1*posk.left-centerxk})
            };
            if (keycode==38){
                k.preventDefault();
                $("html,body").stop().animate({scrollTop:-1*posk.top-centeryk})
            };
            if (keycode==39){
                k.preventDefault();
                $("html,body").stop().animate({scrollLeft:-1*posk.left+centerxk})
            };
            if (keycode==40){
                k.preventDefault();
                $("html,body").stop().animate({scrollTop:-1*posk.top+centeryk})
            };
        });

    What I want it to do is scroll the window a set percentage using the arrow keys, so my thought was to find the current coordinates of the top left corner of the document, add a percentage relative to the user's screen to it, and animate the scroll so that the content doesn't jump and the user doesn't lose focus from where he was. The $("span").text calls are just so I know what's happening and will be turned into comments when the code is complete. So here is what happens: on Chrome and Firefox the output of the $("span").text for the position variables is correct, starting at 0,0 and always showing how much of the content was scrolled in coordinates, but on IE it starts at -2,-2 and never gets out of it. Even if I manually scroll the window to the end of it and try using the right arrow key, it will still return the initial value of -2,-2 and scroll back to the beginning. I tried substituting the offset for document.body.scrollLeft and scrollTop, but the result is the same, only this time the coordinates are 0,0. Am I doing something wrong? Or is this some IE bug? Is there a way around it, or some other function I can use to achieve the same results?

    On another note, I built two other navigation options for the user in this section of the site; one is to click and drag anywhere on the screen to move it:

        $("html").mousedown(function(e) {
            var initx=e.pageX
            var inity=e.pageY
            $(document).mousemove(function(n) {
                var x_inc= initx-n.pageX;
                var y_inc= inity-n.pageY;
                window.scrollBy(x_inc*0.7,y_inc*0.7);
                initx=n.pageX;
                inity=n.pageY
                //$("span").text(initx+ "," +inity+ "," +x_inc+ "," +y_inc+ "," +e.pageX+ "," +e.pageY+ "," +n.pageX+ "," +n.pageY);
                // cancel out any text selections
                document.body.focus();
                // prevent text selection in IE
                document.onselectstart = function () { return false; };
                // prevent IE from trying to drag an image
                document.ondragstart = function() { return false; };
                // prevent text selection (except IE)
                return false;
            });
        });
        $("html").mouseup(function() {
            $(document).unbind('mousemove');
        });

    The only part of this code I didn't write was the text-selection-prevention lines, which I found in a tutorial about clicking and dragging objects. Anyway, this code works fine on Chrome, Firefox and IE, though on Firefox and IE movement glitches happen more often while you drag - sometimes the "scrolling" seems a little jagged. It's only a visual thing and not that significant, but if there's a way to prevent it I would like to know.

    Read the article

  • Smart Array P400 - Accelerator Replacement Battery Failure

    - by inflammable
    TL;DR - Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator for a Smart Array P400 controller a common occurrence? Or are we likely to have a storage controller with an impending and critical fault?

    We have a slightly confusing situation with a Smart Array P400 storage controller with the 512MB battery-backed accelerator addon on an HP DL380 server. The storage controller is (afaik) running the latest firmware and driver:

        Model: Smart Array P400
        Controller Status: OK
        Firmware Version: 7.24
        Serial Number: *snip*
        Rebuild Priority: Medium
        Expand Priority: Medium
        Number Of Ports: 2

    The storage diagnostic (both on the boot-up screen for the controller and within the 'Management Homepage' and the 'HP Array Diagnostic Utility') recently started showing a fault for the accelerator's battery:

        Accelerator Status: Temporarily Disabled
        Error Code: Cache Disabled Low Batteries
        Serial Number: *snip*
        Total Memory: 524288 KB
        Read Cache: 25%
        Write Cache: 75%
        Battery Status: Failed
        Read Errors: 0
        Write Errors: 0

    We replaced the battery with a new unit (a visual inspection of the P400 card showing nothing unusual) and saw the same fault - but expected this to disappear over the course of a few hours/days as it charged. This didn't happen, and the fault status remains the same as above. Given the battery is a genuine part from HP, I wouldn't have expected a replacement battery to fail straight away, or to be dead-on-arrival (is that naivety on my part?). Is the immediate failure of a replacement battery, for a failed battery, on a battery-backed accelerator a common occurrence? Or are we likely to have a storage controller with an impending and critical fault? Is there any diagnostic that could tell me more about the failed battery, without cracking the server open again? Many thanks!

    Read the article

  • IPCop Packet Mangling

    - by Zenham
    I've found myself in a pickle replacing an old firewall for a client this afternoon. I'm configuring their new IPCop firewall (1.4.21); the Zerina OpenVPN addon is installed. What I need to do: there are three network interfaces, currently set up as red (WAN), green (LAN, 192.168.20.0/24) and orange (remote network, 10.1.20.0/24). The orange interface is a direct fiber link to another organization. Simple description: traffic and networks appear to be properly configured at this point, but I have many (150+) specific IPs on the LAN which, when accessing the resources on the 10.1.20.x network, need to be mangled to appear to be coming from the 10.1.20.0/24 network (and have their return traffic properly delivered). The routing on the far side was configured earlier and should be fine, but I need to redirect any packets coming across destined for those IPs to end up at their proper destination. The addressing is fixed and predictable (i.e. 192.168.20.125 maps to 10.1.20.125). I need to insert whatever rules I have into the IPCop ruleset through /etc/rc.local; I know that much, I'm just not sure how I should structure this. There are CUSTOMOUTPUT and CUSTOMINPUT targets, both of which currently consist of the single rule redirecting packets to the OVPNOUTPUT/OVPNINPUT targets, so I'm guessing I should insert a rule matching outbound packets destined for the 10.1.20.x network and redirecting to a new target (maybe called TO-ORANGE), and a rule at the top of CUSTOMINPUT which redirects to a FROM-ORANGE target. Under those targets, I would have rules which do the IP matching and mangling. Am I approaching this right? If so, I'm not very familiar with mangle, and would appreciate seeing examples of how to write that source-IP rewrite. If not, how would you suggest doing this? TIA! A sketch of the rewrite rules follows below.

    edit: I notice additionally that the nat table has CUSTOMPREROUTING and CUSTOMPOSTROUTING targets; I guess I could alternatively put the rules in there.
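
    For the source-IP rewrite itself, the usual iptables form is one stateful SNAT rule per host; conntrack then handles the return traffic, so no inverse rule is needed on the way back. Here is a sketch that generates rc.local lines for the predictable 1:1 mapping described above. The CUSTOMPOSTROUTING chain comes from the nat table mentioned in the edit, but whether IPCop consults it where you need, and the exact host list, are assumptions to verify:

        # Prints one SNAT line per mapped host, for pasting into /etc/rc.local.
        # Hypothetical host range; adjust to the ~150 real LAN addresses.
        for n in range(1, 151):
            lan = '192.168.20.{0}'.format(n)
            orange = '10.1.20.{0}'.format(n)
            print('/sbin/iptables -t nat -A CUSTOMPOSTROUTING '
                  '-s {0} -d 10.1.20.0/24 -j SNAT --to-source {1}'.format(lan, orange))

    If the kernel build includes the NETMAP target, the whole block collapses to a single 1:1 subnet-mapping rule instead.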

    Read the article

  • You need to format the disk in drive

    - by Sab
    I was copying some files over to my external hard disk when suddenly a message popped up saying that some file was open and the copy could not continue. It asked whether I wanted to cancel the operation, and I chose cancel (the other option was continue). After that, whenever I plug the USB drive into the same computer it does not work; it keeps saying that I need to format the disk to continue. Luckily for me it opened on another computer, and I am backing up all my data. But now I am wondering why exactly this happened. Is it that the hard disk is weak? Also, as an aside, I currently use the http://www.dtidata.com/windowssurfacescanner/ program to check for bad sectors. Is this the recommended way, or is there some other, more reliable way to check if my hard disk is failing? Also, could the above problem be because the hard disk is failing? If not, why does it happen? Even if the hard disk is interrupted in the middle of a read/write cycle, doesn't there exist error-handling code to enable graceful recovery?

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through issues related to Chrome, running on Windows 8, consuming cross-origin resources using Web API. We thought it was resolved on day 2, but it resurfaced the next day. We definitely resolved it today though. I believe I do not fully understand the situation, but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue.

    References

    We referenced many sources during our trial-and-error troubleshooting. These are the links we reference, in order of applicability to the solution:
    - Zoiner Tejada: JavaScript and other material from -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3
    - WebDAV: where I learned about "Accept" -> http://www-jo.se/f.pfleger/cors-and-iis?
    - IT Hit: tells about NOT using '*' -> http://www.webdavsystem.com/ajax/programming/cross_origin_requests
    - Carlos Figueira: sample back-end code (newer) -> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) -> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a

    Background

    As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee "knows" about the requester and allows requests from it. So, for example, if you write an ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an "Access-Control-Allow-Origin" error indicating the requester is not allowed to make requests. Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that, running locally at least, IE allows this for development purposes. Chrome and Firefox do not. In fact, Chrome is quite restrictive. Notice the images below. IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox). Further, the Chrome developer console shows an XmlHttpRequest (XHR) error.

    Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error.

    Why does this happen?

    The Web browser submits these requests and processes the responses, and each browser is different. Okay, so, IE is probably the only one that's truly different. However, Chrome has a specific process of performing a "pre-flight" check to make sure the service can respond to an "Access-Control-Allow-Origin" or Cross-Origin Resource Sharing (CORS) request. So basically, the sequence is, if I understand correctly: 1) page loads -> 2) JavaScript request processed by browser -> 3) browser prepares to submit request -> 4) [Chrome] browser submits pre-flight request -> 5) server responds with HTTP 200 -> 6) browser submits request -> 7) server responds with data -> 8) page shows data. This situation occurs for both GET and POST methods. Typically, GET methods are called with query string parameters so there is no data posted. Instead, the requesting domain needs to be permitted to request data, but generally nothing more is required. POSTs on the other hand send form data, so more configuration is required (you'll see the configuration below). AJAX requests are not friendly with this (POSTs) either, because they don't post in a form.

    How to fix it.
    The team went through many iterations of self-hair removal and we think we finally have a working solution. The trial-and-error approach eventually worked, and we referenced many sources for the information; I indicate those references above. There are basically three (3) tasks needed to make this work. Assumptions: you are using Visual Studio, Web API and JavaScript, need Cross-Origin Resource Sharing, and target several browsers.

    1. Configure the client

    Joel Cochran centralized our "cors-oriented" JavaScript (from here). There are two calls, one for POST and one for GET:

        function(url, data, callback) {
            console.log(data);
            $.support.cors = true;
            var jqxhr = $.post(url, data, callback, "json")
                .error(function(jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("post", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send(data);
                    } else {
                        console.log(">" + jqXhHR.status);
                        alert("corsAjax.post error: " + status + ", " + errorThrown);
                    }
                });
        };

    The POST CORS JavaScript function (credit to Zoiner Tejada)

        function(url, callback) {
            $.support.cors = true;
            var jqxhr = $.get(url, null, callback, "json")
                .error(function(jqXhHR, status, errorThrown) {
                    if ($.browser.msie && window.XDomainRequest) {
                        var xdr = new XDomainRequest();
                        xdr.open("get", url);
                        xdr.onload = function () {
                            if (callback) {
                                callback(JSON.parse(this.responseText), 'success');
                            }
                        };
                        xdr.send();
                    } else {
                        alert("CORS is not supported in this browser or from this origin.");
                    }
                });
        };

    The GET CORS JavaScript function (credit to Zoiner Tejada)

    Now you need to call these functions to get and post your data (instead of, say, using $.Ajax). Here is a GET example:

        corsAjax.get(url, function(data) {
            if (data !== null && data.length !== undefined) {
                // do something with data
            }
        });

    And here is a POST example:

        corsAjax.post(url, item);

    Simple…except…you're not done yet.

    2. Change Web API Controllers to Allow CORS

    There are actually two steps here. Do you remember above when we mentioned the "pre-flight" check? Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access. So you need to let the server know it's okay. This is a two-part activity: a) add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST and OPTIONS. OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions. Here is an example of a Web API controller thus decorated. NOTE: You'll see a lot of references to using '*' in the header value. For security reasons, Chrome does NOT recognize this as valid.
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article

  • Updating a backup image (.wim and/or Acronis .tib)

    - by Backdraft
    Anyway, I've got a Windows 7 installation that I want to make a generalized backup image of, so I can use it for future installs not only on the desktop the image is derived from, but also on other systems with dissimilar hardware. I've therefore arrived at two options: using either sysprep/ImageX from the WAIK (guide here), or the simpler Acronis True Image with their Universal Restore addon. Of course, they create distinct image file types, .wim and .tib respectively. What I'd like to do is periodically update this image, say with Windows Updates, by booting it to either a physical partition or using virtualization (VirtualBox/VMware), performing the updates, and saving the updated .wim or .tib image file again. What's the simplest way I could do this? Another question: I created this generalized backup image on a 500GB Seagate 7200RPM HDD. Say I get an SSD as an OS drive in the future; can I just deploy this backup image to the SSD normally, or are there any potential problems to be aware of or avoid (i.e. is it best to completely reinstall the OS on the SSD from scratch, or can I use the image created on the normal HDD with no issue)? Thanks and Happy Holidays.

    Read the article

  • Taking web sites offline for demonstration

    While working in software development in general, and in web development for a couple of customers, it is quite common that it is necessary to provide a test bed where the client is able to get an image of, or better said a feeling for, the visions and ideas you are talking about. Usually here at IOS Indian Ocean Software Ltd. we set up a demo web site on one of our staging servers, and provide credentials to the customer to access and review our progress and work ad hoc. This gives us the highest flexibility on both sides, as the test bed is simply online and available 24/7. We can update the structure, the UI and the data at any time, and the client is able to view it whenever it suits her/him best.

    Limited or lack of online connectivity

    But what is going to happen when your client is not able to be online - no matter for what reasons? Here are some of the more obvious ones:
    - No internet connection (permanently or temporarily)
    - Expensive connection, ie. mobile data package, stay at a hotel, etc.
    - Presentation devices at an exhibition, ie. using tablets or iPads
    - Being abroad for a certain time, and only occasionally online
    - No network coverage, especially on mobile
    - Bad infrastructure, like ie. in Third World countries
    - Providing a catalogue on CD or USB pen drive

    Anyway, it doesn't matter really. We should be able to provide a solution for the circumstances of our customers.

    Presentation during an exhibition

    Recently, we had the following request from a customer: "Is it possible to let us have a desktop version of ResortWork.co.uk that we can use for demo purposes at the forthcoming Ski Shows? It would allow us to let stand visitors browse the sites on an iPad to view jobs and training directory course listings." Yes, sure, we can do that. Possibly you might think: why don't they simply use 3G-enabled iPads for that purpose? As stated above, there might be several reasons for that - low coverage, expensive data packages, etc. Anyway, it is not a question of how to circumvent the request, but of delivering a solution for it.

    Possible solutions... or not?

    We already did offline websites earlier, and even established complete mirrors of one or two web sites on our systems. There are actually several possibilities to handle this kind of request, and it mainly depends on the system or device the offline site should be available on. Here, it is clearly expressed that we have to address this on an Apple iPad - well actually, I think that they'd like to use multiple devices during their exhibitions. Following is an overview of possible solutions depending on the technology or device in use, and how it can be done:

    Replication of source files and database
    The above-mentioned web site is running on ASP.NET, IIS and SQL Server. In case a laptop or slate runs a Windows OS, the easiest way would be to take a snapshot of the source files and database, and transfer them as a local installation to those Windows machines. This approach would be fully operational on the local machine.

    Saving pages for offline usage
    This is actually a quite tedious job, but still practicable for small web sites.

    Tool-based approach to 'harvest' the web site
    There are quite some tools in the wild that could handle this job, namely wget, httrack, web copier, etc.

    Screenshots bundled as PDF document
    Not really... ;-)

    Creating a screencast or video
    Simply navigate through your website and record your desktop session. Actually, we are using this kind of approach to track down difficult problems, in order to see and understand exactly what the user was doing to cause an error.

    Of course, this list isn't complete and I'd love to get more of your ideas in the comments section below the article.

    Preparations for offline browsing

    The original website is dynamic and data-driven, built on ASP.NET. As we have to put the result onto iPads, we are going to choose the tool-based approach to 'download' the whole web site for offline usage. Again, depending on the complexity of your web site, you might have to check which of the applications produces the best results for you. My usual choice is to use wget, but in this case we ran into problems related to the rewriting of hyperlinks. As a consequence of that, we opted for HTTrack. HTTrack comes in different flavours: as a console application, but also as either a GUI (WinHTTrack on Windows) or a Web client (WebHTTrack on Linux/Unix/BSD). Here's a brief description taken from the original website about HTTrack:

        HTTrack is a free (GPL, libre/free software) and easy-to-use offline browser utility. It allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online.

    And there is extensive documentation for all options and switches online. The general recommendation is to go through the HTTrack Users Guide by Fred Cohen. It covers all the initial steps you need to get up and running. Be aware that it will take quite some time to get all the necessary resources down to your machine. Actually, for our customer we ran the tool directly on their web server to avoid unnecessary traffic and bandwidth. After a couple of runs and some additional fine-tuning - explicit inclusion or exclusion of various externally linked web sites - we finally had a more or less complete offline version available. A very handsome feature of HTTrack is the error/warning log after completing the download. It contains detailed information about errors that appeared on the pages and in the links within the pages that have been processed:

        Error: "Bad Request" (400) at link www.resortwork.co.uk/job-details_Ski_hire:tech_or_mgr_or_driver_37854.aspx (from www.resortwork.co.uk/Jobs_A_to_Z.aspx)
        Error: "Not Found" (404) at link www.247recruit.net/images/applynow.png (from www.247recruit.net/css/global.css)
        Error: "Not Found" (404) at link www.247recruit.net/activate.html (from www.247recruit.net/247recruit_tefl_jobs_network.html)

    In our situation, we took the records of HTTP 400/404 errors and passed them to the web development department. Improvements are to be expected soon. ;-)

    Quality assurance on the full-featured desktop

    Unfortunately, the generated output of HTTrack was still incomplete, but luckily only images were missing. Being directly on the web server, we simply copied the missing images from the original source folder into our offline version. After that, we created an archive and transferred the file securely to our local workspace for further review and checks. From that point on, it wasn't necessary to get any more files from the original web server, and we could focus completely on browsing and navigating through the offline version to isolate visual differences and functional problems. As said, the original web site runs on ASP.NET Web Forms and uses postback calls for interactions like search, pagination and partly for navigation. This is the main field for improving the offline experience. Of course, same as for standard web development, it is advisable to test with various browsers, and strangely we discovered that the offline version looked pretty good on Firefox, Chrome and Safari, but not in Internet Explorer. A quick look at the HTML source shed some light on this: there are conditional CSS inclusions based on the user agent. HTTrack does not act as Internet Explorer, and so we didn't have the necessary overrides for this browser. Not problematic after all in our case, but you might have to pay attention to this and get the IE-specific files explicitly. And while having a view at the source code, we also found out that HTTrack actually modifies the generated HTML output. On several occasions we discovered that <div> elements were converted into <table> constructs for no obvious reason - even nested structures.

    Search 'e'nd destroy - sed (or Notepad++) to the rescue

    During our intensive root cause analysis of a couple of HTML/CSS problems that needed some extra attention, it was very helpful to be familiar with an editor that allows search and replace over multiple files, like ie. sed - the stream editor for filtering and transforming text on Linux - or my personal favourite, Notepad++ on Windows. This allowed us to quickly fix a lot of anchors with onclick attributes and JavaScript code that were addressed to ASP.NET files instead of their generated HTML counterparts, like so:

        grep -lr -e '.aspx' * | xargs sed -i -e 's/\.aspx/.html?/g'

    The additional question mark after the HTML extension helps to separate the query string from the actual target, and solved all our missing hyperlinks very fast. The same can be done in Notepad++ on Windows, too. Just use the 'Replace in files' feature and you are settled - especially in combination with Regular Expressions (regex).

    Landscape of browsers

    Okay, after several runs of HTML/CSS code analysis, searching and replacing some strings in a pool of more than 4,000 files, we finally had a very good match of an offline browsing experience in Firefox and Chrome on Linux. Next, we transferred that modified set of files to a Windows 8 machine for review on Firefox, Chrome and Internet Explorer 7 to 10, and to a Mac mini running Mac OS X 10.7 to check the output on Safari and again on Chrome. Besides IE, for reasons already mentioned above, the results were identical. And last but not least, it was time to check the web site on tablets. Please continue reading in the following articles: Taking web sites offline for demonstration on Galaxy Tablet; Taking web sites offline for demonstration on iPad
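
    The same multi-file rewrite as the grep/sed pipeline in the "Search 'e'nd destroy" section can be scripted in a few lines of Python, which also covers Windows machines without Unix tools or Notepad++. A sketch; the mirror directory name is hypothetical:

        import pathlib

        root = pathlib.Path('offline-mirror')  # wherever HTTrack wrote the site
        for page in root.rglob('*.html'):
            text = page.read_text(encoding='utf-8', errors='ignore')
            if '.aspx' in text:
                # Same substitution as the sed call: retarget links at the
                # generated .html files and split off any query string.
                page.write_text(text.replace('.aspx', '.html?'), encoding='utf-8')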

    Read the article

  • Problem IIS 7.0 Locking files during upload

    - by viscious
    I am running a Server 2008 box with IIS 7.0 and the FTP addon for IIS 7.0. I have the FTP site configured and mostly working, except that about 70% of the time when transferring a file the upload will hang forever. If I disconnect the FTP client, reconnect and try to upload the same file, I get an error on the client saying the file is locked, and I have to restart the FTP service to clear the lock. I fired up Process Explorer and did a search on the file in question, and sure enough the FTP service has a lock on the file; it takes around 20 minutes (and sometimes longer) to release the lock on its own. This lock stays around even after I disconnect the client. Like I said, this only happens about 70% of the time; the other 30% of the time the upload goes through just fine. Things I have verified:
    - Not a firewall issue. The server is using passive port range 8000-9000, which is allowed on the firewall.
    - Not a NAT issue; the server has a globally routable IP address.
    - All recommended/required updates are installed.
    I have 5 other servers in a very similar configuration, and this is the only one I have problems with.

    Read the article

  • Windows 2008 64bit: applications and explorer always hang

    - by Phil Farthing
    I set up a couple of Windows 2008 64-bit systems about 5 months ago. Initially all seemed well. Now, however, for no apparent reason, things are dog slow: apps hang, Explorer hangs, and just clicking on something can cause a CPU spike of 100%, and often it's Explorer that is eating it up. As I have two on identical hardware, and they experience the same problem, it doesn't seem related to add-on software. The only thing these have in common is Kaspersky, and I've tried disabling/uninstalling it to no avail. There are no useful error messages in the event logs; actually, the system never even reports app hangs. Sometimes it's similar to what I've seen on Windows 7 systems, where the screen goes milky and asks if I want to troubleshoot, but that's only when I get impatient and click-happy. The really odd thing is that it will NOT do this for a few minutes at a time, and then starts up again. Like I will click on the Start menu and browse for the Admin Tools; the Start menu will hang at some point and I'll have to wait about a minute, then it's OK. The next time I do this, a few seconds later, it's fine. Every click seems to hang the first time around, then be OK the second time if I do the exact same thing. If anyone has any suggestions, please PLEASE let me know! thanks =)

    Read the article

  • big cpu load on vmware server / linux

    - by dezfafara
    Hi, I am currently running VMware Server 2.x hosting 4 virtual machines on a Linux system. Today, on my physical server, I saw an enormous load average. This is the "top" of the server, illustrating my 4 virtual guests:

        top - 11:02:02 up 194 days, 23:09, 5 users, load average: 18.78, 12.05, 13.55
        Tasks: 113 total, 4 running, 109 sleeping, 0 stopped, 0 zombie
        Cpu0 : 71.6%us, 19.0%sy, 0.0%ni, 8.8%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
        Cpu1 : 74.3%us, 10.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 72.5%us, 17.6%sy, 0.0%ni, 9.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 79.5%us, 4.6%sy, 0.0%ni, 16.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 8178884k total, 8129980k used, 48904k free, 134904k buffers
        Swap: 10490436k total, 148k used, 10490288k free, 6129728k cached

          PID USER PR  NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
         7312 root  6 -10 1149m 921m 559m R   97 11.5 107947:09 vmware-vmx
         6995 root  6 -10  779m 687m 317m R   92  8.6 107374:31 vmware-vmx
         6693 root  6 -10  880m 659m 409m S   85  8.3  76947:33 vmware-vmx
        12937 root  6 -10  960m 719m 523m S   75  9.0  67219:49 vmware-vmx

    The four vmware-vmx processes are the CPU usage of my 4 virtual guests. These guests are running a Linux system, and the appropriate processes are usually at 5-15% CPU. I don't understand why, for the past few days, I have had this big problem. This is the "top" on a virtual guest which is at 95% CPU load:

        top - 11:23:15 up 194 days, 23:13, 4 users, load average: 0.25, 0.47, 0.59
        Tasks: 92 total, 2 running, 90 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.4%us, 7.7%sy, 0.0%ni, 90.5%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 382296k total, 369732k used, 12564k free, 145156k buffers
        Swap: 979924k total, 13956k used, 965968k free, 86988k cached

         PID USER PR NI VIRT  RES  SHR S %CPU %MEM TIME+    COMMAND
        3691 root 20  0 23948 1148 960 S 13.0  0.3 15339:23 vmware-guestd
        3840 root 20  0 19880  584 512 S  7.7  0.2  1729:17 hald-addon-stor

    This virtual guest's state is ok... If anyone has any ideas... Thanks

    Read the article

  • Windows 7 search does not return results from indexed folders

    - by Dilbert
    I am experiencing this issue over and over again and I just cannot seem to find the answer. It doesn't make sense, but search simply does not return results from folders that certainly have the files inside. It's weird that this technology has existed for more than 5 years (it could be added to Windows XP as an add-on) and they still haven't got it right. My folder contains 10 image files with .png extensions. Two scenarios:
    Scenario 1: I exclude the folder using Indexing Options. Search works.
    Scenario 2: I turn on indexing for this folder. Search does not work.
    Of course, Agent Ransack returns results every time. When I check the Advanced options of the Indexing Options control panel, .png files are checked in the File Types tab, using the "File Properties filter". What's the deal with this?
    [Edit] To clarify, this doesn't happen with all folders, but it does happen with more than one. For the "problematic" folders, even *.* doesn't return a single result. I found some advice to clear the archive and read-only attributes for all files (doesn't make sense, but hey), but it didn't work. Indexing status in the Control Panel is: "Indexing complete. 100,000 items indexed." The folder is included in the list, and the file types list contains the .png extension (although search doesn't work with any filter, not even *.*).
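    When folders index but never return hits, a corrupt index is a common culprit. A hedged recovery sketch (assumptions: the default index location and the standard Windows 7 service name WSearch): delete the index database and let the service rebuild it from scratch:

        :: stop the indexer, remove the index file, restart; the rebuild starts automatically
        net stop WSearch
        del "%ProgramData%\Microsoft\Search\Data\Applications\Windows\Windows.edb"
        net start WSearch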

    Read the article

  • Looking for ballpark pricing on an affordable Cisco VOIP solution for our office

    - by guytech
    We have about 8 incoming PSTN lines that are currently on an old and antiquated Nortel Meridian ICS system, which has been giving us some grief, so we're looking for a new VOIP solution. I've been looking at a Cisco solution; it does seem pricey, but I'm sure it's effective. Unfortunately, we probably can't afford a Cisco Unified Communications 520, which seems to be the ideal solution. We have about 15 people who need an extension and voicemail. We really don't have any need for a fancy system, just an auto attendant of some sort when people call us. It looks like the best-value option is an older router plus an add-on voice card, but I don't know a lot about Cisco voice products, so I'm a bit lost as to what to get. The only thing I am sure about is the pricing on VOIP phones, which we expect to be about $100-200 each; I'm not sure what pieces of VOIP infrastructure to get beyond that. Any advice? I am familiar with Asterisk, but right now I'm looking for pricing on a Cisco solution.

    Read the article

  • Dual boot Windows 8 and Ubuntu 12.10 across a reboot

    - by AK4749
    My Setup: I have two separate SSDs, and each contains an independently bootable OS - W8 and U12.10. From my extremely limited knowledge, this means each has a functioning EFI partition(?). My default boot order (GA-Z68XP-UD3P mobo with UEFI firmware update) boots the UEFI entry containing Windows first, but if I enter the BIOS I can select the "ubuntu" entry to successfully boot Ubuntu. Both drives are GPT, and both are EFI boots.
    What I want to do: If I reboot Windows 8, re-enter W8 (this is happening now due to the default boot order). What I want to change, however, is to boot into Ubuntu if I reboot from Ubuntu. Essentially, I would like to stay within one OS unless I consciously choose otherwise. Normally, I would not even ask for something I thought was impossible, but...
    Why I think this is possible: When trying EasyBCD to add Ubuntu to the W8 UEFI bootloader, I noticed an "iReboot" add-on or something that allows you to select which OS to boot into from within the OS. Note that I ended up not using the NeoGrub entry to chain Ubuntu off the W8 bootloader because I couldn't get much help with it.
    Is this possible? Have I had too much coffee and gone insane? Thank you all for your time, AK
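    On the Ubuntu side, a sketch of one way to get the "reboot back into Ubuntu" half (assuming the firmware exposes its boot entries to the OS, which a normal UEFI boot does): efibootmgr can set a one-shot BootNext without touching the permanent boot order:

        # list the firmware boot entries; note the number of the "ubuntu" entry
        sudo efibootmgr
        # suppose it shows "Boot0003* ubuntu" (0003 is an example number);
        # make only the next boot use that entry, then reboot
        sudo efibootmgr --bootnext 0003
        sudo reboot

    Since BootNext is consumed after one boot, wrapping those two commands in a small "reboot-to-ubuntu" script approximates the behavior described above; iReboot-style tools play the equivalent role on the Windows side.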

    Read the article

  • linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question I can't find an answer to on Google. I have many Linux boxes, mostly SLES or openSUSE, on different versions and kernels. On some of them I am facing a problem with slow Oracle transactions. It happens from time to time, and when I log into the box while it is happening, I see that Oracle is blocked in the kernel function sync_page:

        # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done
        D     3483 hald-addon-storage: polling  ide_do_drive_cmd
        Ds    4635 ora_dbw0_orcl                sync_page
        Ds    4637 ora_lgwr_orcl                sync_page
        Ds    4639 ora_ckpt_orcl                sync_page
        D    11210 oracleorcl (LOCAL=NO)        sync_page
        D    12457 [smtpd]                      sync_page
        R+   12458 ps axo stat,pid,cmd,wchan    -
        --
        Ds    4635 ora_dbw0_orcl                sync_page
        Ds    4637 ora_lgwr_orcl                sync_page
        Ds    4639 ora_ckpt_orcl                sync_page
        D    11210 oracleorcl (LOCAL=NO)        sync_page
        R+   12501 ps axo stat,pid,cmd,wchan    -
        --
        Ds    4635 ora_dbw0_orcl                sync_page
        Ds    4637 ora_lgwr_orcl                sync_page
        Ds    4639 ora_ckpt_orcl                sync_page
        D    11210 oracleorcl (LOCAL=NO)        sync_page
        R+   12535 ps axo stat,pid,cmd,wchan    -
        --
        Ds    4635 ora_dbw0_orcl                sync_page
        Ds    4637 ora_lgwr_orcl                sync_page
        Ds    4639 ora_ckpt_orcl                sync_page
        D    11210 oracleorcl (LOCAL=NO)        sync_page
        R+   12570 ps axo stat,pid,cmd,wchan    -
        --

    So I think the box has run out of memory for disk buffers, but memory looks fine:

        # free
                     total      used     free  shared  buffers   cached
        Mem:       4149084   3994552   154532       0        0  2424328
        -/+ buffers/cache:   1570224  2578860
        Swap:      3148700    750696  2398004

    I think this is the problem: the buffers figure is zero, so writes must be going straight to disk. But why is it zero? I have tried to Google it and found nothing. Can anyone help?
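    For whoever picks this up, a minimal way to watch the counters that "free" summarizes, sampled during one of the stalls (no assumptions beyond a standard /proc):

        # sample the kernel's buffer/cache/writeback counters every 5 seconds
        watch -n 5 "grep -E '^(Buffers|Cached|Dirty|Writeback):' /proc/meminfo"

    A Buffers value pinned at zero alongside large Dirty/Writeback numbers during the sync_page stalls would point at writeback pressure rather than a simple shortage of cache memory.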

    Read the article

  • UPK 3.6.1 (is Coming)

    - by marc.santosusso
    In anticipation of the release of UPK 3.6.1, I'd like to briefly describe some of the features that will be available in this new version.
    Topic Editor in Tabs: Topic Editors now open in tabs instead of separate Developer windows. This offers several improvements. First, the bubble editor can be docked and resized in the same way as other editor panes. That's right, you can resize the bubble editor! The second enhancement this change brings is improved undo and redo, which allows each action to be undone and redone in the Topic Editor.
    New Sound Editor: The topic and web page editors include a new sound editor with all the bells and whistles necessary to record, edit, import, and export sound. Sound can be captured during topic recording--which is great for a Subject Matter Expert (SME) to narrate what they're recording--or after the topic has been recorded. Sound can also be added to web pages and played on the concept panes of modules, sections, and topics.
    Turn off bubbles in Topics: Authors may opt to hide bubbles either per frame or for an entire topic, for when you want to draw a user's attention to the content on the screen instead of the bubble. This feature works extremely well in conjunction with the new sound capabilities; for instance, consider recording conceptual information with narration and no bubbles.
    Presentation Output: UPK content can be published as a presentation in Microsoft PowerPoint format. Publishing for Presentation will create a presentation for each topic published. The presentation template can be customized using the same methods offered for the UPK document outputs, allowing your UPK-generated presentations to match your corporate branding.
    Autosave and Recovery: The Developer will automatically save your work as often as you would like, which lets authors recover these automatically saved documents if their system or UPK were to close unexpectedly. The Developer defaults to saving open documents every ten minutes.
    Package Editor Enhancement: Files in packages will now open in the associated application when double-clicked. Authors can also choose "Open with..." from the context menu (AKA right-click menu).
    See It! Window: See It! mode may now be launched in a non-fullscreen window. This is available from the kp.html file in any Player package. This version of See It! mode offers on-screen navigation controls, including previous frame, next frame, pause, etc.
    Firefox Enhancements: The UPK Player will now offer both Do It! mode and sound playback when viewed using the Firefox web browser.
    Player Support for Safari: The UPK Player is now fully supported on the Safari web browser for both Mac OS and Windows platforms.
    Keep document checked out: Authors may choose to keep a document checked out when performing a check-in. This allows an author to have a new version created on the server and continue editing.
    Close button on individual tabs: A close button has been added to the tabs, making it easier to close a specific tab.
    Outline Editor Enhancements: Authors will have the option to prevent concepts from immediately displaying in the Developer when an outline item is selected, which makes it faster to move around in the outline editor.
    Tell us which feature you're most excited to use in the comments.

    Read the article

  • jQuery Context Menu Plugin and Capturing Right-Click

    - by Ben Griswold
    I was thrilled to find Cory LaViska's jQuery Context Menu Plugin a few months ago. In very little time, I was able to integrate the context menu with the jQuery Treeview, and I quickly had a really pretty user interface which took full advantage of limited real estate. And guess what: as promised, the plugin worked in Chrome, Safari 3, IE 6/7/8, Firefox 2/3 and Opera 9.5. Everything was perfect, and I shipped to the Integration Environment.
    One thing kept bugging me though: right-clicks aren't the standard in a web environment. Sure, when one hovers over the treeview node, the mouse changes from an arrow to a pointer, but without help text most users will certainly left-click rather than right-click. As I was already doubting the design decision, we did some Mac testing. The context menu worked in Firefox but not Safari. Damn. That's when I started digging into the madness of JavaScript mouse events. Don't tell, but it's complicated. About as close as one can get to capturing the right-click mouse event in all major browsers on Windows and Mac is this:

        if (event.which == null)
            /* IE case */
            button = (event.button < 2) ? "LEFT" :
                     ((event.button == 4) ? "MIDDLE" : "RIGHT");
        else
            /* All others */
            button = (event.which < 2) ? "LEFT" :
                     ((event.which == 2) ? "MIDDLE" : "RIGHT");

    Yikes. The context menu code was simply checking if event.button == 2. No problem. Cory offers a jQuery Right Click Plugin which I'm sure works for Windows, but probably not the Mac either. (Please note I haven't verified this.)
    Anyway, I decided to address my UI design concern and the Safari Mac issue in one swoop: I made the context menu respond to any mouse click event. This didn't take much, especially after seeing how Bill Beckelman updated the library to recognize the left click. First, I added an AnyClick option to the library defaults:

        // Any click may trigger the dropdown and that's okay
        // See Javascript Madness: Mouse Events - http://unixpapa.com/js/mouse.html
        if (o.anyClick == undefined) o.anyClick = false;

    And then I trigger the context menu dropdown based on the following conditional:

        if (evt.button == 2 || o.anyClick) {

    Nothing tricky about that, right? Finally, I updated my menu setup to include the AnyClick value, if true:

        $('.member').contextMenu({ menu: 'memberContextMenu', anyClick: true },
            function (action, el, pos) {
                …

    Now the context menu works in "all" environments whether you left-, right-, or even middle-click. Download jQuery Context Menu Plugin for Any Click.
    *Opera 9.5 has an option to allow scripts to detect right-clicks, but it is disabled by default. Furthermore, Opera still doesn't allow JavaScript to disable the browser's default context menu, which causes a usability conflict.

    Read the article

  • How to Browse Without a Trace with an Ubuntu Live CD

    - by Trevor Bekolay
    No matter how diligently you clear your cache and erase your history, web browsing leaves traces on your computer. If you need to keep your browsing private, then an Ubuntu Live CD is the answer.
    The key to this trick is that the Live CD environment runs completely in RAM, so things like your cache, cookies, and history don't get saved to a persistent storage location. On a hard drive, even deleted files can be recovered, but once a computer is turned off, the data stored in RAM is unrecoverable. In addition, since the Ubuntu Live CD environment is the same no matter what computer you use it on, there's very little identifying information that a website can use to track you!
    The first step is to either burn an Ubuntu Live CD or prepare a non-persistent Ubuntu USB flash drive. Ubuntu treats non-persistent flash drives like CDs, so files will not be written to it, but if you're paranoid, using a physical CD ensures that nothing gets written to a storage device. Boot up from the CD or flash drive, and choose to run Ubuntu from the CD or flash drive if prompted (for more detailed instructions on booting from a CD or USB drive, see this article, or our guide on booting from a flash drive even if your BIOS won't let you). Once the graphical Ubuntu environment comes up, you can click on the Firefox icon at the top of the screen to start browsing.
    If your browsing requires Flash, you can install it by clicking on System at the top-left of the screen, then Administration > Synaptic Package Manager. Click on Settings at the top of the Synaptic window, and then select Repositories. Add a check in the checkbox with the label ending in "multiverse", then click Close. Click the Reload button in the main Synaptic window; the list of available packages will reload. When they've reloaded, type "restricted" in the Quick search box. Right-click on ubuntu-restricted-extras and select Mark for Installation. It will note a number of other packages that will be installed; this list includes audio and video codecs, so after installing these, you should be able to play downloaded movies and songs. Click Mark to accept the installation of these other packages. Once you return to the main Synaptic window, click the Apply button and go through the dialogs to finish the installation of Flash and the other useful packages. If you open up Firefox now, you'll have no problems using websites that use Flash.
    When you're done browsing and shut down or restart your computer, all traces of your web browsing will be gone. It's a bit of work compared to just using a privacy-centric browser, but if it's very important that your browsing leave no traces on your hard drive, an Ubuntu Live CD is your best bet.
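    If you prefer a terminal to Synaptic, the same Flash/codec setup can be scripted (a sketch: the sed pattern assumes the stock sources.list layout of the release on the CD, and the live session needs network access):

        # uncomment the commented-out multiverse lines, refresh, install codecs + Flash
        sudo sed -i 's/^# *\(deb .*multiverse\)/\1/' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install ubuntu-restricted-extras

    Because the live session runs in RAM, the installed packages vanish at shutdown along with everything else, which is exactly the point.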

    Read the article

  • OBIEE 11.1.1 - User Interface (UI) Performance Is Slow With Internet Explorer 8

    - by Ahmed A
    The OBIEE 11g UI performance is slow in IE 8 and faster in Firefox. For VPN or WAN users, it takes a long time to display links on Dashboards via IE 8. The cause is that IE 8 generates many HTTP 304 return calls, and this makes the 11g UI slower when compared to the Mozilla Firefox browser. To resolve this issue, you can implement HTTP compression and caching. This is a best practice.
    Why use web server compression/caching for OBIEE?
    Bandwidth savings: Enabling HTTP compression can dramatically improve the latency of responses. By compressing static files and dynamic application responses, it significantly reduces the remote (high-latency) user response time.
    Improved request/response latency: Caching makes it possible to suppress the payload of the HTTP reply using the 304 status code. Minimizing round trips over the web to re-validate cached items can make a huge difference in browser page load times.
    (A screenshot in the original post depicts the flow and where the compression and decompression occur.)
    Solution (a): How to enable HTTP caching/compression in Oracle HTTP Server (OHS) 11.1.1.x
    1. To implement HTTP compression/caching, install and configure Oracle HTTP Server (OHS) 11.1.1.x for the bi_serverN Managed Servers (refer to the "OBIEE Enterprise Deployment Guide for Oracle Business Intelligence" document for details).
    2. On the OHS machine, open the HTTP Server configuration file (httpd.conf) for editing. This file is located in the OHS installation directory, for example: ORACLE_HOME/Oracle_WT1/instances/instance1/config/OHS/ohs1
    3. In the httpd.conf file, verify that the following directives are included and not commented out:

        LoadModule expires_module "${ORACLE_HOME}/ohs/modules/mod_expires.so"
        LoadModule deflate_module "${ORACLE_HOME}/ohs/modules/mod_deflate.so"

    4. Add the following lines in the httpd.conf file below the LoadModule section and restart the OHS. (Note: for the Windows platform, you will need to enclose any paths in double quotes ("), for example: Alias "/analytics" "ORACLE_HOME/bifoundation/web/app".)

        Alias /analytics ORACLE_HOME/bifoundation/web/app
        # Please replace ORACLE_HOME with your actual BI ORACLE_HOME path
        <Directory ORACLE_HOME/bifoundation/web/app>
            # We don't generate proper cross-server ETags, so disable them
            FileETag none
            SetOutputFilter DEFLATE
            # Don't compress images
            SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
            <FilesMatch "\.(gif|jpeg|png|js|x-javascript|javascript|css)$">
                # Enable future expiry of static files
                ExpiresActive on
                # 1 week; this stops the HTTP 304 calls generated by IE 8
                ExpiresDefault "access plus 1 week"
                Header set Cache-Control "max-age=604800"
            </FilesMatch>
            DirectoryIndex default.jsp
        </Directory>
        # Restrict access to WEB-INF
        <Location /analytics/WEB-INF>
            Order Allow,Deny
            Deny from all
        </Location>

    Note: Make sure you replace the ORACLE_HOME placeholder above with the correct path for your BI ORACLE_HOME. For example, my BI Oracle Home path is /Oracle/BIEE11g/Oracle_BI1/bifoundation/web/app
    Important notes: The caching rules above are restricted to the static files found inside the /analytics directory (/web/app). This approach is safer than setting static file caching globally. In some customer environments you may not get 100% performance gains in the IE 8.0 browser; in that case, you need to extend the caching rules to other directories with static file content.
    If OHS is installed on a separate dedicated machine, make sure the static files in your BI ORACLE_HOME (../Oracle_BI1/bifoundation/web/app) are accessible to the OHS instance. (A screenshot in the original post summarizes the before-and-after results and the improvements after enabling compression and caching.)
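    To verify the configuration without a browser, a quick check is to request a static file under /analytics with gzip support advertised and inspect the response headers (a sketch: host, port, and file path are examples; use any real static file from your deployment):

        # expect "Content-Encoding: gzip" and "Cache-Control: max-age=604800" in the output
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" \
            http://ohs-host:7777/analytics/res/common.js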

    Read the article

  • Integrating Windows Form Click Once Application into SharePoint 2007 &ndash; Part 1 of 2

    - by Kelly Jones
    Last year, I had the opportunity to build a solution that involved integrating a Windows Forms application into SharePoint 2007 (WSS version 3.0). In this post, I'll lay out our architectural thinking, and in part two, I'll describe the technical details.
    Business Case: Our challenge was this: we needed an easy way for a small group of our users to upload documents in batches. They also needed to quickly set the metadata values, as well as set security on individual files. Using the out-of-the-box uploads just didn't fit. The single-file upload allows setting the metadata, but our users would be uploading dozens of files. The multiple upload allows our users to upload batches of files, but it doesn't allow them to set the metadata during upload. Also, neither upload method allows the users to set the permissions on the file.
    Our Solution: We looked into building a web control of some kind, but ruled that out due to security complexities (if I remember correctly). Another option would have been a technology like Silverlight (or Flash?), but our team didn't have the skills necessary to build with these. So, after looking at what was technically possible and what skills our team had, we settled on a Windows Forms application. We also decided to deliver it to the clients via ClickOnce, so we would have the ability to easily update the application in the future.
    Lessons Learned: After deploying our solution, we've learned a few lessons. First, you'll need to have the .NET Framework installed on the client computers. We knew this, but we still ran into issues making sure our users had the proper framework version installed. Second, we had issues with authentication. Our issues were due to our testing domain being a separate Active Directory domain from the domain that our end users and their workstations were members of. (See my earlier post about Clearing Saved Passwords for the fix to our problem.) Our third issue was how we dealt with uploading files that were named the same. Our application would replace the existing file with the new file, which is the way we expected it to work. However, our users wanted to upload weekly reports named the same as the previous week's. We solved this by using folders within the document library to keep each set of reports separate from previous weeks.
    One last thing to consider before implementing a solution like this is which browsers and platforms your users will be working from. We only needed to support IE and Windows, which works fine. However, if you need to support Firefox, there are add-ons that allow ClickOnce to work with Firefox. This is still a Windows-only solution, though. In order to support Macs, you'd have to focus on either browser techniques (AJAX?) or Silverlight/Flash.
    Summary: Our users are happy with the ClickOnce app. It allowed them to move all of their content to our SharePoint site in under a couple of hours, which they were thrilled with. We're happy because we can easily deploy updates, our development time was small, and we met all of our business requirements.

    Read the article

  • How to convert from amateur web app developer to professional web apper?

    - by Nilesh
    This is more of a practical question on the web app development and deployment process. Here is some background information. I use PHP for server-side scripting and JavaScript on the client side. I use NetBeans and Notepad++, and I use Firefox with Firebug for debugging and testing. The process I use is very amateurish: I code something in NetBeans, something in Notepad++, and since there is nothing to compile, I just refresh the Firefox browser and test it. This is convenient and fast compared to a Java development environment, where you would at least have to compile and deploy the jar files before you could run them. I have been thinking of putting a formal process in place for my development and find it hard to put together. There are so many things to do before you can deploy your final web app. I keep hearing about JSLint, compression, unit testing (Selenium), Ant, the YUI Compressor, etc., but I am now looking for some concrete steps that would make me more organized.
    For example, I use NetBeans but don't use any projects within it; I directly update the files. I don't use any source control, but my Iomega backup saves each save as a different version, and at the end of the day I back up the dev directory to my Amazon S3 account. For me, the development environment is just a DEV directory, TEST is my intermediate stage, and PROD is the final directory that gets pushed out to the server, but all these directories are in the same Apache home. I have a few PHP scripts that just copy the needed files into the production directory. That's about it for my development approach.
    I know I am missing the following:
    - regression testing (manual or automated?)
    - automated testing (Selenium?)
    - automated deployment (Ant?)
    - source control (SVN?)
    - quality control (JSLint?)
    Can someone explain what the missing steps are and how to go about filling them in, so as to have a more professional approach? I am looking for tools with example tutorials for streamlining the whole development-to-deployment stage. For me, just getting the hang of database, server-side, and client-side development all in synchronization was itself a huge accomplishment, and now I feel there is a lot missing before you can produce a quality web application. For example, I see a lot of mention of automated testing, but how do I put it to use with respect to JavaScript and PHP? How do I use Ant for deployment, etc.? Is this all too much for a one- or two-person development team? Is there a way to automate all of the above so that I just keep coding in NetBeans and then run a batch file, configured once, that produces the code in the production directory? A lot of this information is scattered on the web and here; if someone can guide me, I would be happy to consolidate it here. Thank you for your patience :)
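    As a sketch of just the "automated deployment" piece (assumptions: the DEV/PROD directory names come from the question, rsync is available, and some jslint command-line wrapper is installed): even a small shell script that refuses to deploy unlinted code removes most of the manual copying and adds a quality gate:

        #!/bin/sh
        # deploy.sh - lint the JavaScript, then mirror DEV into PROD
        set -e                       # abort on the first failure
        for f in DEV/js/*.js; do
            jslint "$f"              # hypothetical CLI wrapper around JSLint
        done
        rsync -av --delete --exclude '.svn' DEV/ PROD/

    The same script later becomes a single Ant target (or a Subversion post-commit hook) once those tools are adopted, so nothing here is throwaway work.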

    Read the article

  • Ubuntu 12.04 / 12.10 Randomly Freezing - nVidia?

    - by Alix Axel
    My Ubuntu install frequently freezes. Sometimes it shows a black screen (not very common anymore in my latest installs); other times the mouse and keyboard just fail to move and respond (not even Ctrl + Alt + F1 works); and other times I'm able to move the mouse with a huge delay (2-5 seconds) but I'm not able to do/click anything. I have a pretty strong feeling that this problem is related to my graphics card drivers because:
    - after a hard reset, I usually get error reports about X.org / jockey
    - it's common for artifacts to appear during loading / shutdown / whenever, for instance: a pattern filled with £ during log off, an ugly-colored squared pattern during boot, windows that are only partially moved (i.e. only the top half), and Firefox renderings that leave the bottom ~30% of the page black
    These artifacts appear right before the system freezes. I installed Ubuntu 12.04 LTS and, after several failed attempts to get my dual-monitor setup to work properly, I tried installing the new 12.10 version, hoping it would have this problem solved. Unfortunately, that was not the case, so I reverted to Ubuntu 12.04. I've tried all the drivers in the Additional Drivers application (even the experimental ones); I've also tried the nvidia-current package from the PPA repository ubuntu-x-swat/x-updates, as well as the nouveau OSS driver. Nothing (except no driver at all, at 640*480 resolution) seems stable. Here is the info on my graphics card:

        alix@alix-E500:~$ lspci | grep VGA
        01:00.0 VGA compatible controller: NVIDIA Corporation G86 [GeForce 8400M G] (rev a1)
        alix@alix-E500:~$ sudo lshw -C video
          *-display
               description: VGA compatible controller
               product: G86 [GeForce 8400M G]
               vendor: NVIDIA Corporation
               physical id: 0
               bus info: pci@0000:01:00.0
               version: a1
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
               configuration: driver=nouveau latency=0
               resources: irq:16 memory:fd000000-fdffffff memory:d0000000-dfffffff memory:fa000000-fbffffff ioport:cc00(size=128) memory:fe0e0000-fe0fffff

    Right now, I don't even have my 22" monitor connected, as I can't even get my laptop display to work properly and without freezes. I've searched, read, and tried all that I could (over several fresh reinstalls) to fix the problem, but so far no solution has proven definitive. I'm sorry I can't map each symptom to a specific driver; I've been trying to solve this on my own without logging what I'm doing. Perhaps someone here will be able to point me to a certain-fix solution; if not, I'll keep updating this question as I go along. Please let me know if any more info is needed to pinpoint the exact problem.
    Trying out the NVIDIA accelerated graphics driver (version 173): scrolling and minimizing/maximizing windows take between 2 and 5 seconds to finish. Context menus also pop up very slowly, and typing seems delayed by ~1 second. No critical issues so far. Firefox's rendering of the Save Edits button is consistently messed up (random black lines at the top).
    Trying out the NVIDIA accelerated graphics driver (version current) [Recommended]: all the delays mentioned above and the buggy rendering of the Save Edits button are gone, but the whole screen flashes black for a couple of microseconds, and while I was writing this test for the first time, the bottom 30% of the screen went black and I couldn't do anything (not even Ctrl + Alt + F1 would work). I had to force a hard reset.
    Also, the system hung for a couple of seconds during the fade-out of the "Restart" menu.
    Trying out the NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-304): same symptoms as before. It crashed once while I was trying to install Chromium, and again after a hard reset when I was trying to remove the driver. The bottom of the screen did not go black, and I could move my mouse both times, but Ctrl + Alt + F1 didn't work. The ugly-colored pattern also showed up during the second boot.
    Trying out the NVIDIA accelerated graphics driver (*experimental*beta) (version experimental-307): the system crashed as soon as I clicked something. I had to do a fresh re-install.
    Trying out Nouveau (the accelerated open-source driver for nVidia cards): artifacts still show up during boot, but other than that this one seems stable. As soon as I connected my second monitor, though, responsiveness dropped a lot, and animations and video are somewhat slow. I'm gonna try this solution http://askubuntu.com/a/98871/9018 later on.
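    For anyone comparing driver attempts like this, a low-effort diagnostic sketch (standard X.Org and kernel log locations assumed) is to capture the X server's errors/warnings and the kernel's GPU messages right after each freeze-and-reset cycle, so every driver trial leaves a record:

        # (EE) = errors, (WW) = warnings from the current X session's log
        grep -E '\(EE\)|\(WW\)' /var/log/Xorg.0.log
        # kernel-side messages from the nouveau/nvidia driver since boot
        dmesg | grep -iE 'nouveau|nvidia|gpu'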

    Read the article

< Previous Page | 151 152 153 154 155 156 157 158 159 160 161 162  | Next Page >