Search Results

Search found 6387 results on 256 pages for 'cpu allocation'.


  • RHEL - blocked FC remote port time out: saving binding

    - by Dev G
    My server went into a faulty state because the database could no longer write to its partition; the partition had been remounted read-only. To recover, I had to do a hard reboot.

    Kernel: Linux 2.6.18-164.el5PAE #1 SMP Tue Aug 18 15:59:11 EDT 2009 i686 i686 i386 GNU/Linux

    /var/log/messages:

        Oct 31 00:56:45 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
        Oct 31 00:57:05 ota3g1 Had[17275]: VCS CRITICAL V-16-1-50086 CPU usage on ota3g1.mtsallstream.com is 100%
        Oct 31 01:01:47 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
        Oct 31 01:06:50 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
        Oct 31 01:11:52 ota3g1 Had[17275]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sg_network
        Oct 31 01:12:10 ota3g1 kernel: lpfc 0000:29:00.1: 1:1305 Link Down Event x2 received Data: x2 x20 x80000 x0 x0
        Oct 31 01:12:10 ota3g1 kernel: lpfc 0000:29:00.1: 1:1303 Link Up Event x3 received Data: x3 x1 x10 x1 x0 x0 0
        Oct 31 01:12:12 ota3g1 kernel: lpfc 0000:29:00.1: 1:1305 Link Down Event x4 received Data: x4 x20 x80000 x0 x0
        Oct 31 01:12:40 ota3g1 kernel: rport-8:0-0: blocked FC remote port time out: saving binding
        Oct 31 01:12:40 ota3g1 kernel: lpfc 0000:29:00.1: 1:(0):0203 Devloss timeout on WWPN 20:25:00:a0:b8:74:f5:65 NPort x0000e4 Data: x0 x7 x0
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 38617577
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283532153
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 90825
        Oct 31 01:12:40 ota3g1 kernel: Aborting journal on device dm-16.
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 868841
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: Aborting journal on device dm-10.
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37759889
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283349449
        Oct 31 01:12:40 ota3g1 kernel: printk: 6 messages suppressed.
        Oct 31 01:12:40 ota3g1 kernel: Aborting journal on device dm-12.
        Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-12) in ext3_reserve_inode_write: Journal has aborted
        Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-16, logical block 1545
        Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-16
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 12745
        Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-10, logical block 1545
        Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-16) in ext3_reserve_inode_write: Journal has aborted
        Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-10
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37749121
        Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-12, logical block 0
        Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-12
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-12) in ext3_dirty_inode: Journal has aborted
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37757897
        Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-12, logical block 1097
        Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-12
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283337089
        Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-16, logical block 0
        Oct 31 01:12:40 ota3g1 kernel: lost page write due to I/O error on dm-16
        Oct 31 01:12:40 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:40 ota3g1 kernel: EXT3-fs error (device dm-16) in ext3_dirty_inode: Journal has aborted
        Oct 31 01:12:40 ota3g1 kernel: end_request: I/O error, dev sdi, sector 37749121
        Oct 31 01:12:40 ota3g1 kernel: Buffer I/O error on device dm-12, logical block 0
        Oct 31 01:12:41 ota3g1 kernel: lost page write due to I/O error on dm-12
        Oct 31 01:12:41 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000
        Oct 31 01:12:41 ota3g1 kernel: end_request: I/O error, dev sdi, sector 283337089
        Oct 31 01:12:41 ota3g1 kernel: Buffer I/O error on device dm-16, logical block 0
        Oct 31 01:12:41 ota3g1 kernel: lost page write due to I/O error on dm-16
        Oct 31 01:12:41 ota3g1 kernel: sd 8:0:0:4: SCSI error: return code = 0x00010000

    df -h:

        Filesystem                          Size  Used Avail Use% Mounted on
        /dev/mapper/cciss-root              4.9G  730M  3.9G  16% /
        /dev/mapper/cciss-home              9.7G  1.2G  8.1G  13% /home
        /dev/mapper/cciss-var               9.7G  494M  8.8G   6% /var
        /dev/mapper/cciss-usr                15G  2.6G   12G  19% /usr
        /dev/mapper/cciss-tmp               3.9G  153M  3.6G   5% /tmp
        /dev/sda1                           996M   43M  902M   5% /boot
        tmpfs                               5.9G     0  5.9G   0% /dev/shm
        /dev/mapper/cciss-product            25G   16G  7.4G  68% /product
        /dev/mapper/cciss-opt                20G  4.5G   14G  25% /opt
        /dev/mapper/dg_db1-vol_db1_system    18G  2.2G   15G  14% /database/OTADB/sys
        /dev/mapper/dg_db1-vol_db1_undo      18G  5.8G   12G  35% /database/OTADB/undo
        /dev/mapper/dg_db1-vol_db1_redo     8.9G  4.3G  4.2G  51% /database/OTADB/redo
        /dev/mapper/dg_db1-vol_db1_sgbd     8.9G  654M  7.8G   8% /database/OTADB/admin
        /dev/mapper/dg_db1-vol_db1_arch      98G   24G   69G  26% /database/OTADB/arch
        /dev/mapper/dg_db1-vol_db1_indexes  240G   14G  214G   6% /database/OTADB/index
        /dev/mapper/dg_db1-vol_db1_data     275G   47G  215G  18% /database/OTADB/data
        /dev/mapper/dg_dbrman-vol_db_rman   8.9G  351M  8.1G   5% /database/RMAN
        /dev/mapper/dg_app1-vol_app1        151G  113G   31G  79% /files/ota

    /etc/fstab:

        /dev/cciss/root             /                     ext3    defaults        1 1
        /dev/cciss/home             /home                 ext3    defaults        1 2
        /dev/cciss/var              /var                  ext3    defaults        1 2
        /dev/cciss/usr              /usr                  ext3    defaults        1 2
        /dev/cciss/tmp              /tmp                  ext3    defaults        1 2
        LABEL=/boot                 /boot                 ext3    defaults        1 2
        tmpfs                       /dev/shm              tmpfs   defaults        0 0
        devpts                      /dev/pts              devpts  gid=5,mode=620  0 0
        sysfs                       /sys                  sysfs   defaults        0 0
        proc                        /proc                 proc    defaults        0 0
        /dev/cciss/swap             swap                  swap    defaults        0 0
        /dev/cciss/product          /product              ext3    defaults        1 2
        /dev/cciss/opt              /opt                  ext3    defaults        1 2
        /dev/dg_db1/vol_db1_system  /database/OTADB/sys   ext3    defaults        1 2
        /dev/dg_db1/vol_db1_undo    /database/OTADB/undo  ext3    defaults        1 2
        /dev/dg_db1/vol_db1_redo    /database/OTADB/redo  ext3    defaults        1 2
        /dev/dg_db1/vol_db1_sgbd    /database/OTADB/admin ext3    defaults        1 2
        /dev/dg_db1/vol_db1_arch    /database/OTADB/arch  ext3    defaults        1 2
        /dev/dg_db1/vol_db1_indexes /database/OTADB/index ext3    defaults        1 2
        /dev/dg_db1/vol_db1_data    /database/OTADB/data  ext3    defaults        1 2
        /dev/dg_dbrman/vol_db_rman  /database/RMAN        ext3    defaults        1 2
        /dev/dg_app1/vol_app1       /files/ota            ext3    defaults        1 2

    Thanks for all the help.
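    A note on the sequence in the log: the fabric link bounced (the 1305/1303 Link Down/Up events), the remote port stayed blocked until the devloss timeout fired, and from then on every outstanding I/O to sdi failed, which aborted the ext3 journals and left those filesystems read-only. Not part of the original post, but a hedged sketch of how the timeout can be inspected and raised on a RHEL 5-era box so a short fabric flap does not fail all I/O (the rport name comes from the log above; the value 60 is illustrative):

        # current devloss timeout (seconds) for the remote port named in the log
        cat /sys/class/fc_remote_ports/rport-8:0-0/dev_loss_tmo

        # raise it for this boot
        echo 60 > /sys/class/fc_remote_ports/rport-8:0-0/dev_loss_tmo

        # make it persistent for the Emulex driver (assumes the lpfc module option)
        echo "options lpfc lpfc_devloss_tmo=60" >> /etc/modprobe.conf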


  • Could not load file or assembly 'System.Data.SQLite'

    - by J. Pablo Fernández
    I've installed ELMAH 1.1 for .NET 3.5 x64 in my ASP.NET project, and now I get this error whenever I try to view any page:

        Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.
        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.

    More error details are at the bottom. My Active Solution Platform is "Any CPU" and I'm running 64-bit Windows 7 on (of course) an x64 processor. The reason we are using this version of ELMAH is that 1.0 for .NET 3.5 (x86, the only platform it is compiled for) gave us this same error on our x64 Windows server. I've tried compiling for x86 and for x64 and I get the same error. I've tried removing all the compiler output (bin and obj). Finally, I added a reference to the SQLite DLL directly, something that was not needed for the project to work on the server, and got this compiler error:

        Error 1 Warning as Error: Assembly generation -- Referenced assembly 'System.Data.SQLite.dll' targets a different processor MyProject

    Any ideas what the problem might be? More error details:

    Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Stack Trace:

        [BadImageFormatException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
        System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0
        System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43
        System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127
        System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142
        System.Reflection.Assembly.Load(String assemblyString) +28
        System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46

        [ConfigurationErrorsException: Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
        System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613
        System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203
        System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105
        System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178
        System.Web.Compilation.BuildProvidersCompiler..ctor(VirtualPath configPath, Boolean supportLocalization, String outputAssemblyName) +54
        System.Web.Compilation.ApplicationBuildProvider.GetGlobalAsaxBuildResult(Boolean isPrecompiledApp) +232
        System.Web.Compilation.BuildManager.CompileGlobalAsax() +52
        System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +337

        [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
        System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58
        System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512
        System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729

        [HttpException (0x80004005): Could not load file or assembly 'System.Data.SQLite, Version=1.0.61.0, Culture=neutral, PublicKeyToken=db937bc2d44ff139' or one of its dependencies. An attempt was made to load a program with an incorrect format.]
        System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8896783
        System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85
        System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259
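    A note on the failure mode: System.Data.SQLite is a mixed-mode assembly (a managed wrapper around native SQLite code), so its bitness must match the worker process exactly; BadImageFormatException is how the loader reports the mismatch, and "Any CPU" on the referencing project does not change what the native half was compiled for. A hedged sketch of the two usual ways out (the pool name and DLL path are illustrative, not from the post):

        REM Option 1: run the site in a 32-bit worker process so the x86 SQLite loads (IIS7+):
        %windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

        REM Option 2: reference the x64 build of System.Data.SQLite instead, e.g. in the .csproj:
        REM   <Reference Include="System.Data.SQLite">
        REM     <HintPath>..\lib\x64\System.Data.SQLite.dll</HintPath>
        REM   </Reference>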


  • Delphi - WndProc() in thread never called

    - by Robert Oschler
    I had code that worked fine when running in the context of the main VCL thread. This code allocated its own WndProc() in order to handle SendMessage() calls. I am now trying to move it to a background thread because I am concerned that the SendMessage() traffic is affecting the main VCL thread adversely. So I created a worker thread whose sole purpose is to allocate the WndProc() in its Execute() method, ensuring that the WndProc() exists in the thread's execution context. The WndProc() handles the SendMessage() calls as they come in. The problem is that the worker thread's WndProc() method is never triggered. Note: doExecute() is part of a template method called by my TThreadExtended class, a descendant of Delphi's TThread. TThreadExtended implements the thread's Execute() method and calls doExecute() in a loop; I triple-checked and doExecute() is being called repeatedly. Also note that I call PeekMessage() right after I create the WndProc(), to make sure Windows creates a message queue for the thread. However, something I am doing is wrong, since the WndProc() method is never triggered. Here's the code:

        // ========= BEGIN: CLASS - TWorkerThread ========================
        constructor TWorkerThread.Create;
        begin
          FWndProcHandle := 0;
          inherited Create(false);
        end;

        // This call is the thread's Execute() method.
        procedure TWorkerThread.doExecute;
        var
          Msg: TMsg;
        begin
          // Create the WndProc() in our thread's context.
          if FWndProcHandle = 0 then
          begin
            FWndProcHandle := AllocateHWND(WndProc);
            // Call PeekMessage() to make sure we have a window queue.
            PeekMessage(Msg, FWndProcHandle, 0, 0, PM_NOREMOVE);
          end;

          if Self.Terminated then
          begin
            // Get rid of the WndProc().
            myDeallocateHWnd(FWndProcHandle);
          end;

          // Sleep a bit to avoid hogging the CPU.
          Sleep(5);
        end;

        procedure TWorkerThread.WndProc(var Msg: TMessage);
        begin
          // THIS CODE IS NEVER CALLED.
          try
            if Msg.Msg = WM_COPYDATA then
            begin
              // Is LParam assigned?
              if (Msg.LParam > 0) then
              begin
                // Yes. Treat it as a copy data structure.
                with PCopyDataStruct(Msg.LParam)^ do
                begin
                  ... // Here is where I do my work.
                end;
              end; // if Assigned(Msg.LParam) then
            end; // if Msg.Msg = WM_COPYDATA then
          finally
            Msg.Result := 1;
          end; // try()
        end;

        procedure TWorkerThread.myDeallocateHWnd(Wnd: HWND);
        var
          Instance: Pointer;
        begin
          Instance := Pointer(GetWindowLong(Wnd, GWL_WNDPROC));
          if Instance <> @DefWindowProc then
          begin
            // Restore the default windows procedure before freeing memory.
            SetWindowLong(Wnd, GWL_WNDPROC, Longint(@DefWindowProc));
            FreeObjectInstance(Instance);
          end;
          DestroyWindow(Wnd);
        end;
        // ========= END : CLASS - TWorkerThread ========================

    Thanks, Robert
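    One hedged reading of why WndProc never fires: a window procedure only runs while its owning thread is retrieving messages, and cross-thread SendMessage calls are also dispatched only inside GetMessage/PeekMessage calls. Since the PM_NOREMOVE peek above runs exactly once (inside the "if FWndProcHandle = 0" block), the thread spends its life in Sleep() and never services the queue. A sketch of doExecute with a pump on every pass, assuming the same TThreadExtended scaffolding as the post:

        procedure TWorkerThread.doExecute;
        var
          Msg: TMsg;
        begin
          if FWndProcHandle = 0 then
            FWndProcHandle := AllocateHWND(WndProc);

          // Retrieve and dispatch queued messages each pass. Any retrieval
          // call (GetMessage/PeekMessage) also lets pending cross-thread
          // SendMessage calls into WndProc.
          while PeekMessage(Msg, 0, 0, 0, PM_REMOVE) do
          begin
            TranslateMessage(Msg);
            DispatchMessage(Msg);
          end;

          if Self.Terminated then
            myDeallocateHWnd(FWndProcHandle);

          Sleep(5);
        end;

    A related caution worth verifying independently: Classes.AllocateHWnd is documented as not thread-safe, so some implementations guard it with a lock or register and create the window manually in the worker thread.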


  • facebook iframe stream.Publish cannot close dialog or skip

    - by fooyee
    I'm pulling my hair out over this :( I spent 10 hours but nothing came of it. I read this thread http://forum.developers.facebook.com/viewtopic.php?pid=198128 but it didn't help much. I'm running an iframe app on a local dev App Engine server (localhost:8080), and I have a couple of problems.

    1) On Safari 4.0.4, the publish-story dialog comes up nicely with all images/data/action_links. Upon posting a story (or skipping), the dialog goes blank and won't close.

    2) I tested the same code on Firefox 3.5.8; the dialog comes up with all images/data/action_links, but then the whole thing freezes. Clicking anywhere on the dialog doesn't help at all. If I'm patient and click "publish", I have to wait about 10 seconds before the dialog says the story is published; then it freezes (clicking "skip" makes no difference). By the way, there is no button-clicking effect, i.e. the buttons don't look like they "sink down" when clicked. I checked Firefox's memory using the "top" command in the terminal and it all seems okay, with no spike in CPU usage (I could open other Firefox tabs and work in them).

    My futile attempts at solving the problem:

    1) I thought, hmm, could this be a local dev (localhost) problem? I uploaded the code to the production server; the same thing happens.
    2) I tried an older Firefox (3.1) and the same problem persisted (the freezing).
    3) I noticed that I used two different FB features (Connect and XFBML): the Connect feature in the PostStory function, and the XFBML feature just before the closing body tag. So I tried replacing the FB_RequireFeatures["Connect"] feature with FB_RequireFeatures["XFBML"]; nothing changed. I still can't close the story dialog.
    4) Is there a possibility that I didn't connect to xd_receiver.htm properly? My xd_receiver.htm is stored in the folder /media/fbconnect, and in my app.yaml the handler is:

        - url: /fbconnect
          static_dir: media/fbconnect

    So I thought a connection has to be established with xd_receiver.htm; is there any way I can test that? Here is all the code:

        <script type="text/javascript">
        //post story function
        function PostStory() {
          //init facebook
          FB_RequireFeatures(["Connect"], function() {
            FB.Facebook.init('my_app_key', "/fbconnect/xd_receiver.htm");
            FB.ensureInit(function() {
              var message = 'the message';
              var attachment = {
                'name': 'a simple app to send gifts',
                'href': 'http://apps.facebook.com/my_app_name',
                'caption': '{*actor*} sent u something',
                'description': 'some description',
                "media": [{ "type": "image", "src": "http://bit.ly/105QYr", "href": "http://bit.ly/105QYr" }]
              };
              //action links can only be seen AFTER the feed is published
              var action_links = [{ 'text': 'Send him/her a gift back!', 'href': 'http://somelink.com' }];
              FB.Connect.streamPublish(message, attachment, action_links, null, "Share the gift with your friends", callback, false, null);
            });
          });
          function callback(post_id, exception) {
            //alert('Wall Post Complete');
          }
        }
        </script>

    Just before the end of the /body tag, I have this:

        <script type="text/javascript">
        function callFBInit() {
          FB_RequireFeatures(["XFBML"], function() {
            FB.Facebook.init("my_app_key", "/fbconnect/xd_receiver.htm");
          });
        }
        callFBInit();
        </script>

    By the way, my xd_receiver.htm contains:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>cross-domain receiver page</title>
        </head>
        <body>
          <script src="http://static.ak.facebook.com/js/api_lib/v0.4/xdcommreceiver.debug.js" type="text/javascript"></script>
        </body>
        </html>

    Hope you guys can help out. Thanks!
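    On the question of verifying the xd_receiver.htm hookup: the file just has to be fetchable at the exact path passed to FB.Facebook.init, so one quick check (hedged; the URL follows from the app.yaml above) is to request it directly:

        curl -I http://localhost:8080/fbconnect/xd_receiver.htm
        # expect "HTTP/1.1 200 OK"; a 404 means the static_dir mapping isn't
        # serving the file, and the cross-domain handshake can never complete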


  • Server overloaded with log messages: tty_release_dev: pts0: read/write wait queue active!

    - by Raph
    In the logs, I have this (extract from the full kernel messages logged at 06:01:14):

        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500)

    And then the server logs were flooded with this message:

        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!

    In the end, two hours later, I had to reboot because the server had become inaccessible: the load had grown to 160%. The last command does not show anyone logged on pts0 at that time, and I don't know where this telnet process could have come from. This is an AWS instance running Ubuntu 10.04 LTS. Here are the complete logs:

        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.863038] BUG: unable to handle kernel NULL pointer dereference at 0000000000000015
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861007] IP: [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861019] PGD ee13d067 PUD f8698067 PMD 0
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861025] Oops: 0000 [#1] SMP
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861028] last sysfs file: /sys/devices/xen/vbd-2208/block/sdk/removable
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861032] CPU 0
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861034] Modules linked in: ipv6
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861040] Pid: 20247, comm: telnet Not tainted 2.6.32-312-ec2 #24-Ubuntu
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861042] RIP: e030:[<ffffffff81363dde>] [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861047] RSP: e02b:ffff8800f8599d88 EFLAGS: 00010246
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861049] RAX: 0000000000000015 RBX: ffff8800f8598000 RCX: 0000000001aed069
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861052] RDX: 0000000000000000 RSI: ffff8800f8599e67 RDI: ffff8801dd833d1c
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861054] RBP: ffff8800f8599e98 R08: ffffffff8135eb10 R09: 7fffffffffffffff
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861057] R10: 0000000000000000 R11: 0000000000000246 R12: ffff8801dd833800
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861059] R13: 0000000000000000 R14: ffff8801dd833a68 R15: ffff8801dd833d1c
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861065] FS: 00007f90121f6720(0000) GS:ffff880002c40000(0000) knlGS:0000000000000000
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861068] CS: e033 DS: 0000 ES: 0000 CR0: 000000008005003b
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861070] CR2: 0000000000000015 CR3: 0000000032a59000 CR4: 0000000000002660
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861073] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861076] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861081] Process telnet (pid: 20247, threadinfo ffff8800f8598000, task ffff8800024d4500)
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861083] Stack:
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861085] 0000000000000000 0000000001aed069 ffff8801dd8339c8 ffff8800024d4500
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861089] <0> ffff8801dd8339c0 ffff8801dd833c90 0000000001aed027 ffff8800024d4500
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861094] <0> ffff8801dd8338d8 0000000000000000 ffff8800024d4500 0000000000000000
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861099] Call Trace:
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861107] [<ffffffff81034bc0>] ? default_wake_function+0x0/0x10
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861113] [<ffffffff8135ebb6>] tty_read+0xa6/0xf0
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861118] [<ffffffff810ee7e5>] vfs_read+0xb5/0x1a0
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861122] [<ffffffff810ee91c>] sys_read+0x4c/0x80
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861127] [<ffffffff81009ba8>] system_call_fastpath+0x16/0x1b
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861131] [<ffffffff81009b40>] ? system_call+0x0/0x52
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861133] Code: 85 d2 0f 84 92 00 00 00 45 8b ac 24 5c 02 00 00 f0 45 0f b3 2e 45 19 ed 49 63 84 24 5c 02 00 00 49 8b 94 24 50 02 00 00 4c 89 ff <0f> be 1c 02 e8 a9 d3 14 00 41 8b 94 24 5c 02 00 00 41 83 ac 24
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88>
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861171] RIP [<ffffffff81363dde>] n_tty_read+0x2ce/0x970
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861175] RSP <ffff8800f8599d88>
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861177] CR2: 0000000000000015
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861205] ---[ end trace f10eee2057ff4f6b ]---
        Apr 21 06:01:14 ip-10-49-109-107 kernel: [233185.861547] tty_release_dev: pts0: read/write wait queue active!
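    Not part of the original report, but for completeness: the standard way to turn an oops like this into a source line is to resolve the faulting RIP (n_tty_read+0x2ce) against a kernel with debug symbols. A hedged sketch for Ubuntu 10.04; it assumes the matching -dbgsym package is available from the ddebs archive, and the package name is illustrative:

        sudo apt-get install linux-image-2.6.32-312-ec2-dbgsym
        gdb /usr/lib/debug/boot/vmlinux-2.6.32-312-ec2
        (gdb) list *(n_tty_read+0x2ce)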


  • ADO.NET (WCF) Data Services Query Interceptor Hangs IIS

    - by PreMagination
    I have an ADO.NET Data Service that's supposed to provide read-only access to a somewhat complex database. Logically I have table-per-type (TPT) inheritance in my data model, but the EDM doesn't implement inheritance (a limitation of EF with navigation properties on derived types; STILL not fixed in EF4!).

    I can query my EDM directly (using a separate project) with a copy of the query I'm trying to run against the web service, and results are returned within 10 seconds. Disabling the query interceptors, I can make the same query against the web service, and results are returned similarly quickly. I can enable some of the query interceptors, and the results are returned slowly, up to a minute or so later. Alternatively, I can enable all the query interceptors, expand fewer of the properties on the main object I'm querying, and results are returned in a similar period of time. (I've increased some of the timeout periods.)

    Up to this point SQL Profiler indicates the slowdown is the database (that's a post for a different day). But when I enable all my query interceptors and expand all the properties I'd like to have, the IIS worker process pegs the CPU for 20 minutes and a query is never even made against the database. This implies to me that yes, my implementation probably sucks, but regardless, the Data Services "tier" is having an issue it shouldn't. WCF tracing didn't reveal anything interesting to my untrained eye.

    Details:

    Data model: Agent-Person-Student. Student has a collection of referrals. Students and referrals are private; queries against the web service should only return "your" students and referrals. This means Person and Agent need to be filtered too. Other entities (Agent-Organization-School) can be accessed by anyone who has authenticated. The existing security model is poorly suited to perform this type of filtering for this type of data access; the query interceptors are complicated and cause EF to generate some entertaining SQL queries.

    Sample interceptor:

        [QueryInterceptor("Agents")]
        public Expression<Func<Agent, Boolean>> OnQueryAgents()
        {
            //Agent is a Person(1), Educator(2), Student(3), or Other Person(13); allow if scope permissions exist
            return ag => (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 ||
                          ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13)
                && ag.Person.OrganizationPersons.Count<OrganizationPerson>(op =>
                    op.Organization.ScopePermissions.Any<ScopePermission>(p =>
                        p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name
                        && p.ApplicationRoleAccount.Application.ApplicationId == 124)
                    || op.Organization.HierarchyDescendents.Any<OrganizationsHierarchy>(oh =>
                        oh.AncestorOrganization.ScopePermissions.Any<ScopePermission>(p =>
                            p.ApplicationRoleAccount.Account.UserName == HttpContext.Current.User.Identity.Name
                            && p.ApplicationRoleAccount.Application.ApplicationId == 124))) > 0;
        }

    The query interceptors for Person, Student, and Referral are all very similar, i.e. they traverse multiple same/similar tables to look for ScopePermissions as above.

    Sample query:

        var referrals = (from r in service.Referrals
            .Expand("Organization/ParentOrganization")
            .Expand("Educator/Person/Agent")
            .Expand("Student/Person/Agent")
            .Expand("Student")
            .Expand("Grade")
            .Expand("ProblemBehavior")
            .Expand("Location")
            .Expand("Motivation")
            .Expand("AdminDecision")
            .Expand("OthersInvolved")
            where r.DateCreated >= coupledays && r.DateDeleted == null
            select r);

    Any suggestions or tips would be greatly appreciated, whether for fixing my current implementation or for developing a new one, with the caveats that the database can't be changed and that ultimately I need to expose a large portion of the database via a web service that limits data access to what each caller is authorized for, for the purpose of data integration with multiple outside parties. THANK YOU!!!
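    One detail worth noting in the interceptor as written: HttpContext.Current.User.Identity.Name is referenced inside the expression tree, so it is evaluated as part of the expression EF has to translate. A hedged variation (same entities and filtering logic as the post; not a confirmed fix) hoists the username into a local first and uses Any instead of Count > 0, which tends to translate to a simpler EXISTS-style query:

        [QueryInterceptor("Agents")]
        public Expression<Func<Agent, bool>> OnQueryAgents()
        {
            // Capture once, outside the expression tree.
            string user = HttpContext.Current.User.Identity.Name;
            const int appId = 124;

            return ag =>
                (ag.AgentType.AgentTypeId == 1 || ag.AgentType.AgentTypeId == 2 ||
                 ag.AgentType.AgentTypeId == 3 || ag.AgentType.AgentTypeId == 13)
                && ag.Person.OrganizationPersons.Any(op =>
                    op.Organization.ScopePermissions.Any(p =>
                        p.ApplicationRoleAccount.Account.UserName == user
                        && p.ApplicationRoleAccount.Application.ApplicationId == appId)
                    || op.Organization.HierarchyDescendents.Any(oh =>
                        oh.AncestorOrganization.ScopePermissions.Any(p =>
                            p.ApplicationRoleAccount.Account.UserName == user
                            && p.ApplicationRoleAccount.Application.ApplicationId == appId)));
        }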


  • Change TextView without completely re-drawing layout?

    - by twk
    I've found that updating a TextView every second in my app burns a lot of CPU. The TextView is in a horizontal LinearLayout, which is in turn inside a vertical LinearLayout. Switching to a RelativeLayout (as recommended to increase performance) is not an option right now (I tried to get that working originally, but it was too complicated). The horizontal LinearLayout has three elements: the outer ones are TextViews with a layout_weight of 0, and the middle one is a ProgressBar with a layout_weight of 1 to make it expand to take up most of the space. I'm changing the contents of the leftmost TextView every second.

    So, is there a way to change the contents of the TextView without re-drawing everything? Or can I force the TextViews to use a fixed amount of space to simplify the layout? Other tips for speeding up a LinearLayout are greatly appreciated as well. For reference, here is my entire layout; the field I'm updating is the timeIn one.

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="wrap_content" android:layout_height="wrap_content">
          <TextView android:text="Artist Name" android:id="@+id/curArtist" android:textSize="8pt"
              android:layout_width="fill_parent" android:layout_height="wrap_content"
              android:gravity="center_horizontal" android:paddingTop="5dp" />
          <TextView android:text="Song Name" android:id="@+id/curSong" android:textSize="10pt"
              android:textStyle="bold" android:layout_below="@id/curArtist"
              android:layout_width="fill_parent" android:layout_height="wrap_content"
              android:gravity="center_horizontal" />
          <TextView android:text="Album Name" android:id="@+id/curAlbum" android:textSize="8pt"
              android:layout_below="@id/curSong" android:layout_width="fill_parent"
              android:layout_height="wrap_content" android:gravity="center_horizontal" />
          <LinearLayout android:layout_width="fill_parent" android:layout_height="fill_parent"
              android:layout_below="@id/curAlbum" android:orientation="vertical">
            <LinearLayout android:id="@+id/seekWrapper" android:layout_width="fill_parent"
                android:layout_height="wrap_content" android:minHeight="10dp"
                android:maxHeight="10dp" android:orientation="horizontal">
              <TextView android:text="0:00" android:id="@+id/timeIn" android:textSize="4pt"
                  android:paddingLeft="10dp" android:gravity="center_vertical"
                  android:layout_weight="0" android:layout_gravity="left|center_vertical"
                  android:layout_width="wrap_content" android:layout_height="fill_parent" />
              <ProgressBar android:layout_below="@id/curAlbum" android:id="@+id/progressBar"
                  android:paddingLeft="7dp" android:paddingRight="7dp"
                  android:layout_width="fill_parent" android:layout_height="fill_parent"
                  android:maxHeight="10dp" android:minHeight="10dp"
                  android:indeterminate="false" android:layout_weight="1"
                  android:layout_gravity="center_horizontal|center_vertical"
                  style="?android:attr/progressBarStyleHorizontal" />
              <TextView android:text="0:00" android:id="@+id/timeLeft"
                  android:paddingRight="10dp" android:textSize="4pt"
                  android:layout_gravity="right|center_vertical" android:layout_weight="0"
                  android:layout_width="wrap_content" android:layout_height="fill_parent" />
            </LinearLayout>
            <ImageView android:id="@+id/albumArt" android:layout_weight="1" android:padding="5dp"
                android:layout_width="fill_parent" android:layout_height="fill_parent"
                android:src="@drawable/blank_album_art" />
            <LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content"
                android:orientation="horizontal">
              <ImageButton android:id="@+id/prev" android:layout_width="wrap_content"
                  android:layout_height="wrap_content" android:layout_gravity="left"
                  android:src="@drawable/button_prev" android:paddingLeft="10dp"
                  android:background="@null" />
              <ImageButton android:id="@+id/playPause" android:layout_width="wrap_content"
                  android:layout_height="wrap_content" android:layout_gravity="center_horizontal"
                  android:src="@drawable/button_play" android:layout_weight="1"
                  android:background="@null" />
              <ImageButton android:id="@+id/next" android:layout_width="wrap_content"
                  android:layout_height="wrap_content" android:src="@drawable/button_next"
                  android:layout_gravity="right" android:paddingRight="10dp"
                  android:background="@null" />
            </LinearLayout>
          </LinearLayout>
        </RelativeLayout>


  • QtOpenCl make errors. Please help.

    - by Skkard
    So I downloaded the ATI Stream SDK. I don't have a GPU right now, so I use '-device cpu', and I got the programs/examples in the OpenCL directory working by adding the directory to LD_LIBRARY_PATH etc. Now the problem is installing QtOpenCL. The configure script gives me:

        skkard@skkard-desktop:~/Applications/qt-labs-opencl$ ./configure
        This is the QtOpenCL configuration utility.
        Qt version ............. 4.6.2
        qmake .................. /usr/bin/qmake
        OpenCL ................. yes
        OpenCL/OpenGL interop .. yes
        Extra QMAKE_CXXFLAGS ...
        Extra INCLUDEPATH ......
        Extra LIBS ............. -lOpenCL
        QtOpenCL has been configured. Run '/usr/bin/make' to build.

    make gives me:

        skkard@skkard-desktop:~/Applications/qt-labs-opencl$ make
        cd src/ && make -f Makefile
        make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src'
        cd opencl/ && make -f Makefile
        make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/opencl'
        make[2]: Nothing to be done for `first'.
        make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/opencl'
        cd openclgl/ && make -f Makefile
        make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl'
        make[2]: Nothing to be done for `first'.
        make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src/openclgl'
        make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/src'
        cd examples/ && make -f Makefile
        make[1]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples'
        cd opencl/ && make -f Makefile
        make[2]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl'
        cd vectoradd/ && make -f Makefile
        make[3]: Entering directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd'
        g++ -o vectoradd vectoradd.o qrc_vectoradd.o -L/usr/lib -L../../../lib -L../../../bin -lQtOpenCL -lQtGui -lQtCore -lpthread
        ../../../lib/libQtOpenCL.so: undefined reference to `clBuildProgram'
        ../../../lib/libQtOpenCL.so: undefined reference to `clSetCommandQueueProperty'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueNDRangeKernel'
        ../../../lib/libQtOpenCL.so: undefined reference to `clSetKernelArg'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBufferToImage'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseMemObject'
        ../../../lib/libQtOpenCL.so: undefined reference to `clFinish'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueUnmapMemObject'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetMemObjectInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadImage'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMarker'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainCommandQueue'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetCommandQueueInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImage'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseContext'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainMemObject'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseEvent'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteBuffer'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyBuffer'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapImage'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueReadBuffer'
        ../../../lib/libQtOpenCL.so: undefined reference to `clUnloadCompiler'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueBarrier'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramBuildInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWaitForEvents'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainProgram'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateContext'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage3D'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueMapBuffer'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceIDs'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetContextInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetDeviceInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseCommandQueue'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetSamplerInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformIDs'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetSupportedImageFormats'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetPlatformInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clWaitForEvents'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetEventProfilingInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetImageInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithBinary'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseSampler'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateCommandQueue'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelWorkGroupInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainEvent'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainContext'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateSampler'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseProgram'
        ../../../lib/libQtOpenCL.so: undefined reference to `clFlush'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetProgramInfo'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernel'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainKernel'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueWriteImage'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateBuffer'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateKernelsInProgram'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateProgramWithSource'
        ../../../lib/libQtOpenCL.so: undefined reference to `clReleaseKernel'
        ../../../lib/libQtOpenCL.so: undefined reference to `clRetainSampler'
        ../../../lib/libQtOpenCL.so: undefined reference to `clCreateImage2D'
        ../../../lib/libQtOpenCL.so: undefined reference to `clEnqueueCopyImageToBuffer'
        ../../../lib/libQtOpenCL.so: undefined reference to `clGetKernelInfo'
        collect2: ld returned 1 exit status
        make[3]: *** [vectoradd] Error 1
        make[3]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl/vectoradd'
        make[2]: *** [sub-vectoradd-make_default] Error 2
        make[2]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples/opencl'
        make[1]: *** [sub-opencl-make_default] Error 2
        make[1]: Leaving directory `/home/skkard/Applications/qt-labs-opencl/examples'
        make: *** [sub-examples-make_default-ordered] Error 2

    I tried configuring with '-no-openclgl', but then none of the examples are compiled at all. I'm using Ubuntu 10.04 with the Qt installed from Synaptic.
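    Every missing symbol above is a core OpenCL 1.0 entry point, which suggests the final link is not actually resolving against the Stream SDK's libOpenCL.so (a missing or wrong-architecture library on the default linker path is one plausible cause). A hedged sketch of how one might check and point both the compile-time linker and the runtime loader at the SDK copy; the SDK path is illustrative:

        # see whether the built library resolved an OpenCL runtime at all
        ldd lib/libQtOpenCL.so | grep -i opencl

        # point gcc's linker and the loader at the Stream SDK's libOpenCL.so
        export ATISTREAMSDKROOT=$HOME/ati-stream-sdk
        export LIBRARY_PATH=$ATISTREAMSDKROOT/lib/x86_64:$LIBRARY_PATH
        export LD_LIBRARY_PATH=$ATISTREAMSDKROOT/lib/x86_64:$LD_LIBRARY_PATH
        make clean && ./configure && make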


  • Understanding Node.js and concept of non-blocking I/O

    - by Saif Bechan
    Recently I became interested in using Node.js to tackle some parts of my web application. I love that it's all JavaScript and very lightweight, so instead of a JavaScript-to-PHP call I can make a lighter JavaScript-to-JavaScript call. However, I do not understand all the concepts explained.

    Basic concepts

    In the presentation on Node.js, Ryan Dahl talks about non-blocking I/O and why this is the way we need to create our programs. I can understand the theoretical concept: you just don't wait for a response; you go ahead and do other things. You register a callback for the response, and when the response arrives millions of clock cycles later, you can fire it. If you have not already, I recommend watching this presentation. It is very easy to follow and pretty detailed, there are some nice concepts explained about how to write your code in a good manner, and there are some examples given. I am going to work with the basic example.

    Examples

    The way we do things now:

        puts("Enter your name: ");
        var name = gets();
        puts("Name: " + name);

    The problem with this is that the code halts at line 1: it blocks. The way we need to do things according to Node:

        puts("Enter your name: ");
        gets(function (name) {
          puts("Name: " + name);
        });

    With this, your program does not halt, because the response is handled by a callback rather than awaited inline. So the program continues to work without halting.

    Questions

    Now, the basic question I have is: how does this work in real-life situations? I am talking about use in web applications. The application I am writing does I/O, but it still does it in a blocking manner. I think that most of the time, if not all, you need to block, because you have to wait for the response you need to work with. When you need to get some information from the database, most of the time this data needs to be verified before you can go further with the code.

    Example 1: Take a login, for example. You have to wait for the database response to return, because you cannot do anything else. I can't see a way around this without blocking. (See the sketch at the end of this post.)

    Example 2: Going back to the basic example. The user just requests something from a database which does not need any verification. You still have to block, because you don't have anything more to do.

    I cannot come up with a single example where you want to do other things while you wait for the response to return.

    Possible answers

    I have read that this frees up resources: when you program like this, it uses less CPU or memory. So this non-blocking I/O is ONLY meant to free up resources and does not have any other practical use. Not that this is not a huge plus; freeing up resources is always good. Yet I fail to see it as a good solution, because in both of the above examples the program has to wait for the response. Whether this happens inside a function or just inline, in my opinion there is a program that waits for input.

    Resources I looked at

    I have looked at some resources before posting this question. They talk a lot about the theoretical concept, which is quite clear, yet I fail to see real-life examples where this makes a huge difference.

    Stack Overflow:
    What is in simple words blocking IO and non-blocking IO?
    Blocking IO vs non-blocking IO; looking for good articles
    tidy code for asynchronous IO

    Other resources:
    Wikipedia: Asynchronous I/O
    Introduction to non-blocking I/O
    The C10K problem
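    On Example 1 specifically: with a callback, the login request still waits for the database, but the process doesn't. "Non-blocking" buys concurrency across requests, not a faster answer for any single request. A hedged, runnable sketch in which setTimeout stands in for the database round-trip (no real driver or schema is assumed; the credentials are toy values):

        var http = require('http');
        var url = require('url');

        http.createServer(function (req, res) {
          var q = url.parse(req.url, true).query; // e.g. /login?user=a&pass=b
          // Stand-in for an async DB lookup: the callback fires 200 ms later,
          // exactly like a query callback would.
          setTimeout(function () {
            var ok = (q.user === 'admin' && q.pass === 'secret'); // toy check
            res.writeHead(ok ? 200 : 403, { 'Content-Type': 'text/plain' });
            res.end(ok ? 'welcome\n' : 'login failed\n');
          }, 200);
          // Execution reaches this point immediately; the event loop is free
          // to accept and serve other connections during those 200 ms.
        }).listen(8000);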


  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64-bit, 8GB RAM, Intel i5-2400 CPU, over 300GB free on the hard drive. AVG Internet Security 2012 (tried enabled and disabled, with the firewall enabled and disabled; no effect either way). This computer is less than a year old.

    Network: This problem occurs on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (a combined DSL modem/wireless router). The computer is plugged directly into the modem; other computers use the wireless successfully. The router has been reset. The only unusual thing about the connection between the router and this computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router, providing a second network that some computers in the area use without issues.

    Details: All downloads initiated on this specific computer eventually fail. This includes streaming from YouTube, specialized downloads (iTunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in Chrome this is the process it takes:

    1) The download begins normally.
    2) At some point between (observed) 7MB and 229MB, the download stops progressing (at this point, watching Chrome's task manager, you can see the network activity for the downloading tab drop to 0).
    3) For some time the download sits there still attempting to complete, but it will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to).

    The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random, and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (Chrome) and FTP (Windows Explorer), both downloads failed at the same instant, though Explorer displayed "Connection timed out" several minutes before Chrome finally showed the download as interrupted.
Other things I have tried based on advice given to people with similar/identical problems: Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html) Disabling write caching to the hard drive storing the download on an external device successfully transmitted +1GB file from one computer on the same network to this computer disabling indexing in the folder the download was being stored in disabling all security software checked to make sure all drivers were up to date read about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given Running Processes: Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 104,836 K smss.exe 332 Services 0 1,276 K csrss.exe 764 Services 0 5,060 K wininit.exe 820 Services 0 4,748 K csrss.exe 844 Console 1 23,764 K services.exe 876 Services 0 11,856 K lsass.exe 892 Services 0 14,420 K lsm.exe 900 Services 0 7,820 K winlogon.exe 944 Console 1 7,716 K svchost.exe 428 Services 0 12,744 K svchost.exe 796 Services 0 12,240 K svchost.exe 1036 Services 0 22,372 K svchost.exe 1084 Services 0 174,132 K svchost.exe 1112 Services 0 56,144 K svchost.exe 1288 Services 0 18,640 K svchost.exe 1404 Services 0 29,616 K spoolsv.exe 1576 Services 0 25,924 K svchost.exe 1616 Services 0 12,788 K AppleMobileDeviceService. 1728 Services 0 9,796 K avgwdsvc.exe 1820 Services 0 8,268 K mDNSResponder.exe 1844 Services 0 5,832 K w3dbsmgr.exe 1108 Services 0 43,760 K QBCFMonitorService.exe 1336 Services 0 16,408 K svchost.exe 2404 Services 0 28,240 K taskhost.exe 3020 Console 1 12,372 K dwm.exe 2280 Console 1 5,968 K explorer.exe 2964 Console 1 152,476 K WUDFHost.exe 3316 Services 0 6,740 K svchost.exe 3408 Services 0 5,556 K RAVCpl64.exe 3684 Console 1 13,864 K igfxtray.exe 3700 Console 1 7,804 K hkcmd.exe 3772 Console 1 7,868 K igfxpers.exe 3788 Console 1 10,940 K sidebar.exe 3836 Console 1 84,400 K chrome.exe 3964 Console 1 19,640 K pptd40nt.exe 4068 Console 1 5,156 K acrotray.exe 3908 Console 1 14,676 K avgtray.exe 3872 Console 1 9,508 K jusched.exe 4076 Console 1 4,412 K iTunesHelper.exe 1532 Console 1 87,308 K SearchIndexer.exe 3492 Services 0 36,948 K iPodService.exe 4136 Services 0 7,944 K BrccMCtl.exe 4276 Console 1 18,132 K splwow64.exe 4380 Console 1 32,600 K qbupdate.exe 4836 Console 1 24,236 K svchost.exe 4288 Services 0 20,700 K wmpnetwk.exe 3112 Services 0 9,516 K FNPLicensingService.exe 5248 Services 0 5,852 K QBW32.EXE 5508 Console 1 127,068 K QBDBMgrN.exe 5600 Services 0 42,252 K EXCEL.EXE 2512 Console 1 99,100 K LMS.exe 3188 Services 0 5,616 K UNS.exe 1600 Services 0 7,308 K axlbridge.exe 5260 Console 1 5,132 K chrome.exe 5888 Console 1 200,336 K chrome.exe 3536 Console 1 26,076 K chrome.exe 1952 Console 1 20,168 K chrome.exe 4596 Console 1 24,696 K chrome.exe 4292 Console 1 48,096 K chrome.exe 2796 Console 1 23,520 K Acrobat.exe 1240 Console 1 87,252 K 123w.exe 4892 Console 1 22,728 K calc.exe 1700 Console 1 12,636 K chrome.exe 1328 Console 1 28,888 K chrome.exe 3696 Console 1 47,012 K rundll32.exe 6320 Console 1 7,104 K chrome.exe 4928 Console 1 44,248 K AVGIDSAgent.exe 260 Services 0 12,940 K avgfws.exe 6052 Services 0 26,912 K avgnsa.exe 5064 Services 0 2,496 K avgrsa.exe 3088 Services 0 2,200 K avgcsrva.exe 2596 Services 0 380 K avgcsrva.exe 6948 Services 0 408 K StikyNot.exe 452 Console 1 14,772 K chrome.exe 4580 Console 1 28,200 K chrome.exe 4016 
Console 1 57,756 K svchost.exe 7140 Services 0 4,500 K chrome.exe 6264 Console 1 56,824 K chrome.exe 7008 Console 1 56,896 K chrome.exe 2224 Console 1 38,032 K taskhost.exe 612 Console 1 7,228 K chrome.exe 6000 Console 1 10,928 K chrome.exe 2568 Console 1 43,052 K chrome.exe 272 Console 1 75,988 K chrome.exe 7328 Console 1 53,240 K PaprPort.exe 7976 Console 1 137,152 K pplinks.exe 7500 Console 1 14,052 K ppscanmg.exe 5744 Console 1 18,996 K taskeng.exe 7388 Console 1 6,308 K SearchProtocolHost.exe 8024 Services 0 8,804 K SearchFilterHost.exe 7232 Services 0 7,848 K chrome.exe 8016 Console 1 37,440 K cmd.exe 7692 Console 1 3,096 K conhost.exe 7516 Console 1 5,872 K tasklist.exe 8160 Console 1 5,772 K WmiPrvSE.exe 7684 Services 0 6,400 K Any help with this would be greatly appreciated, I've been beating my head against a wall over this all day. This computer serves dual purpose as the main company document server and the Owner's work computer, it's fairly important it be fully functional and I cannot figure this out.
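    Not from the original post, but one more low-risk experiment for a single Windows 7 machine whose TCP transfers stall mid-stream is to rule out receive-window auto-tuning and offload interactions. A hedged sketch (run from an elevated prompt; the settings revert as shown):

        netsh interface tcp set global autotuninglevel=disabled
        netsh interface tcp set global chimney=disabled
        netsh interface tcp set global rss=disabled
        :: retest the large .zip download, then restore the defaults:
        netsh interface tcp set global autotuninglevel=normal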


  • Constant Memory Leak in SpeechSynthesizer

    - by DudeFX
    I have developed a project which I would like to release that uses C#, WPF and the System.Speech.Synthesizer object. The issue preventing the release of this project is that whenever SpeakAsync is called, it leaves a memory leak that grows to the point of eventual failure. I believe I have cleaned up properly after using this object, but cannot find a cure. I have run the program through ANTS Memory Profiler and it reports that WAVEHDR and WaveHeader are growing with each call. I have created a sample project to try to pinpoint the cause, but am still at a loss. Any help would be appreciated.

    The project uses VS2008 and is a C# WPF project that targets .NET 3.5 and Any CPU. You need to manually add a reference to System.Speech. Here is the code:

        <Window x:Class="SpeechTest.Window1"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="Window1" Height="300" Width="300">
          <Grid>
            <StackPanel Orientation="Vertical">
              <Button Content="Start Speaking" Click="Start_Click" Margin="10" />
              <Button Content="Stop Speaking" Click="Stop_Click" Margin="10" />
              <Button Content="Exit" Click="Exit_Click" Margin="10" />
            </StackPanel>
          </Grid>
        </Window>

    And the code-behind:

        using System;
        using System.Windows;
        using System.Speech.Synthesis;

        namespace SpeechTest
        {
            public partial class Window1 : Window
            {
                // speak settings
                private bool speakingOn = false;
                private int curLine = 0;
                private string[] speakLines = {
                    "I am wondering",
                    "Why whenever Speech is called",
                    "A memory leak occurs",
                    "If you run this long enough",
                    "It will eventually crash",
                    "Any help would be appreciated" };

                public Window1()
                {
                    InitializeComponent();
                }

                private void Start_Click(object sender, RoutedEventArgs e)
                {
                    speakingOn = true;
                    SpeakLine();
                }

                private void Stop_Click(object sender, RoutedEventArgs e)
                {
                    speakingOn = false;
                }

                private void Exit_Click(object sender, RoutedEventArgs e)
                {
                    App.Current.Shutdown();
                }

                private void SpeakLine()
                {
                    if (speakingOn)
                    {
                        // Create our speak object
                        SpeechSynthesizer spk = new SpeechSynthesizer();
                        spk.SpeakCompleted += new EventHandler<SpeakCompletedEventArgs>(spk_Completed);
                        // Speak the line
                        spk.SpeakAsync(speakLines[curLine]);
                    }
                }

                public void spk_Completed(object sender, SpeakCompletedEventArgs e)
                {
                    if (sender is SpeechSynthesizer)
                    {
                        // get access to our Speech object
                        SpeechSynthesizer spk = (SpeechSynthesizer)sender;
                        // Clean up after speaking (thinking the event handler is causing the memory leak)
                        spk.SpeakCompleted -= new EventHandler<SpeakCompletedEventArgs>(spk_Completed);
                        // Dispose the speech object
                        spk.Dispose();
                        // bump it
                        curLine++;
                        // check validity
                        if (curLine >= speakLines.Length)
                        {
                            // back to the beginning
                            curLine = 0;
                        }
                        // Speak line
                        SpeakLine();
                    }
                }
            }
        }

    I run this program on Windows 7 64-bit, and it eventually halts when attempting to create a new SpeechSynthesizer object. When run on Windows Vista 64-bit, the memory grows from a starting point of 34K to, so far, about 400K and counting. Can anyone see anything in the code that might be causing this, or is this an issue with the Speech object itself? Any help would be appreciated.
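    For what it's worth, the WAVEHDR growth matches a pattern often reported when a SpeechSynthesizer is constructed and disposed per utterance. A hedged workaround sketch (not a confirmed fix for the underlying leak, only a way to stop re-allocating the native audio resources): keep one synthesizer for the window's lifetime and reuse it:

        // One synthesizer for the whole window instead of one per spoken line.
        private readonly SpeechSynthesizer spk = new SpeechSynthesizer();

        public Window1()
        {
            InitializeComponent();
            spk.SpeakCompleted += spk_Completed;
        }

        private void SpeakLine()
        {
            if (speakingOn)
                spk.SpeakAsync(speakLines[curLine]);
        }

        public void spk_Completed(object sender, SpeakCompletedEventArgs e)
        {
            // advance and wrap, then queue the next line on the same instance
            curLine = (curLine + 1) % speakLines.Length;
            SpeakLine();
        }

        // Dispose exactly once, when the window closes.
        protected override void OnClosed(EventArgs e)
        {
            spk.Dispose();
            base.OnClosed(e);
        }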


  • Nginx + php-fpm "504 Gateway Time-out" error with almost zero load (on a test-server)

    - by rahul286
    After debugging for 6-hours - I am giving this up :| We have a nginx+php-fpm+mysql in LAN with almost 100 wordpress (created and used by different designers/developers all working on test wordpres setup) We are using nginx without any issues from long. Today, all of a sudden - nginx started returning "504 Gateway Time-out" out of the blue... I checked nginx error log for a virtual host... 2010/09/06 21:24:24 [error] 12909#0: *349 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *349 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:11 [error] 12909#0: *443 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 21:25:12 [error] 12909#0: *443 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:08:32 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:33 [error] 12909#0: *1025 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /info.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:09:40 [error] 12909#0: *1064 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:44 [error] 12909#0: *1313 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" 2010/09/06 22:24:53 [error] 12909#0: *1313 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.0.1, server: rahul286.rtcamp.info, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "rahul286.rtcamp.info" As I run php-fpm on port 9000 via TCP mode, I ran "netstat | grep 9000" and noticed something unusual... 
    (Pasting partial output here for ease of reading)

        tcp    9 0 localhost:9000  localhost:36094 CLOSE_WAIT  14269/php5-fpm
        tcp    0 0 localhost:46664 localhost:9000  FIN_WAIT2   -
        tcp 1257 0 localhost:9000  localhost:36135 CLOSE_WAIT  -
        tcp 1257 0 localhost:9000  localhost:36125 CLOSE_WAIT  -
        tcp    9 0 localhost:9000  localhost:36102 CLOSE_WAIT  14268/php5-fpm
        tcp    0 0 localhost:46662 localhost:9000  FIN_WAIT2   -
        tcp  745 0 localhost:9000  localhost:46644 CLOSE_WAIT  -
        tcp    0 0 localhost:46658 localhost:9000  FIN_WAIT2   -
        tcp 1265 0 localhost:9000  localhost:46607 CLOSE_WAIT  -
        tcp    0 0 localhost:46672 localhost:9000  ESTABLISHED 12909/nginx: worker
        tcp 1257 0 localhost:9000  localhost:36119 CLOSE_WAIT  -
        tcp 1265 0 localhost:9000  localhost:46613 CLOSE_WAIT  -
        tcp    0 0 localhost:46646 localhost:9000  FIN_WAIT2   -
        tcp 1257 0 localhost:9000  localhost:36137 CLOSE_WAIT  -
        tcp    0 0 localhost:46670 localhost:9000  ESTABLISHED 12909/nginx: worker
        tcp 1265 0 localhost:9000  localhost:46619 CLOSE_WAIT  -
        tcp 1336 0 localhost:9000  localhost:46668 ESTABLISHED -
        tcp    0 0 localhost:46648 localhost:9000  FIN_WAIT2   -
        tcp 1336 0 localhost:9000  localhost:46670 ESTABLISHED -
        tcp    9 0 localhost:9000  localhost:36108 CLOSE_WAIT  14274/php5-fpm
        tcp 1336 0 localhost:9000  localhost:46684 ESTABLISHED -
        tcp    0 0 localhost:46674 localhost:9000  ESTABLISHED 12909/nginx: worker
        tcp 1336 0 localhost:9000  localhost:46666 ESTABLISHED -
        tcp 1257 0 localhost:9000  localhost:46648 CLOSE_WAIT  -
        tcp 1336 0 localhost:9000  localhost:46678 ESTABLISHED -
        tcp    0 0 localhost:46668 localhost:9000  ESTABLISHED 12909/nginx: wo

    There are plenty of "CLOSE_WAIT" & "FIN_WAIT2" pairs, as highlighted below (in the above output):

        tcp 1337 0 localhost:9000  localhost:46680 CLOSE_WAIT -
        tcp    0 0 localhost:46680 localhost:9000  FIN_WAIT2  -

    Please note port 46680 above. I enabled the mysql slow-query log, but it didn't help. As of now, restarting php5-fpm every minute via a cronjob (see command below) is keeping everything running "smoothly", but I hate patchwork and want to solve this...

        1 * * * * service php5-fpm restart > /dev/null

    I searched extensively on Google - got no help. As mentioned, this is a test-server in a LAN; CPU load has never crossed 0.10 and memory usage is also below 25% (the system has 2GB RAM and ubuntu-server installed). So if you find it time-consuming to help me out, please at least drop a hint. Thanks in advance for the help. -Rahul (note - this is a repost of http://forum.nginx.org/read.php?11,127694) Update: I found the answer, which is posted below.
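
    (A hedged starting point rather than a confirmed fix: piles of CLOSE_WAIT sockets on the php-fpm side often mean the pool has run out of usable children, so giving the pool more headroom, recycling children periodically, and lengthening nginx's FastCGI read timeout are common first experiments. Paths and values below are assumptions to adapt, not known-good settings for this box.)

        ; /etc/php5/fpm/pool.d/www.conf (location assumed)
        pm = dynamic
        pm.max_children = 25      ; more headroom than a small default
        pm.start_servers = 5
        pm.min_spare_servers = 5
        pm.max_spare_servers = 10
        pm.max_requests = 500     ; recycle children instead of cron-restarting the service

        # nginx vhost, PHP location block
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_read_timeout 300;   # default is 60s; raise while debugging
        }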

    Read the article

  • Load-balancing between a Procurve switch and a server

    - by vlad
    Hello. I've been searching around the web for this problem I've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work? My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology; there are four switches connected to it via a single gigabit link each. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows:

        sw1   sw2   sw3   sw4
         |     |     |     |
        ---------------------
        |        sw0        |
        ---------------------
          ||     |     |
         srv1   srv2  srv3

    The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the lacp protocol, and on the switch I set up a trunk for the two ports using lacp as well. The switch showed that the server was a lacp partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput: there was only one link used between srv1 and sw0. All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios:

        1. iperf -s on srv1 and iperf -c on the other servers
        2. iperf -s on the other servers and iperf -c on srv1, connected to all the other servers

    Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link: in one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput. srv1 is a Dell PowerEdge SC1425 with onboard Intel 82541GI nics (em driver on FreeBSD). After troubleshooting a previous problem with vlan tagging on top of a lagg interface, it turned out that the em driver could not support this. So I figured that maybe something else was wrong with the em drivers and/or the lagg stack, so I started up BackTrack 4r2 on this same server. So srv1 now uses Linux kernel 2.6.35.8. I set up a bonding interface bond0. The kernel module was loaded with option mode=4 in order to get lacp. The switch was happy with the link, I could ping to and from the server. I could even put vlans on top of the bonding interface. However, only half the problem was solved: if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two nics; if I ran the iperf server on srv1 and tried to connect with the other servers, there was no load balancing. I thought that maybe I was out of luck and the hashes for the two mac addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and reenabling one of the links, and all that happened was the traffic switched from one link to the other and back to the first again.
I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb) but I never saw any traffic distribution. One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4 channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used. The cisco port-channel uses src-mac to balance the load. The HP should use both the source and destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?
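
    (For the Linux side of such a setup, a hedged sketch of the bonding options that usually matter here; the path and values are assumptions. Note that xmit_hash_policy only affects frames the server transmits - how incoming traffic is spread over the two ports is decided entirely by the switch's own hash, so a source-only hash on the switch will keep funneling all receive traffic down one link no matter what the bond does.)

        # /etc/modprobe.d/bonding.conf (path assumed; Debian-style)
        # mode=4 is 802.3ad/LACP; layer3+4 hashes on IP addresses and ports,
        # so flows from the same pair of MACs can still spread across the slaves.
        options bonding mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4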

    Read the article

  • Code to convert from avi to asf

    - by George2
    Hello everyone, No matter what library/SDK is used, I want to convert from avi to asf very quickly (I could even sacrifice some quality of video and audio). I am working on the Windows platform (Vista and 2008 Server); .NET SDK/code is preferred, but C++ code is also fine. :-) I learned from the link below that there could be a very quick way to convert from avi to asf to support streaming better, as mentioned: one "could convert the video from AVI to ASF format using a simple copy (i.e. the content is the same, but container changes).". My question is, after some hours of study and trying various SDKs/tools, as a newbie I do not know how to begin, so I am asking for reference sample code to do this task. :-) (as this is a different issue, we decided to start a new topic. :-) ) http://stackoverflow.com/questions/743220/streaming-avi-file-issue thanks in advance, George

    EDIT 1: I have tried to get the binary of ffmpeg from http://ffmpeg.arrozcru.org/autobuilds/ffmpeg-latest-mingw32-static.tar.bz2 and then ran the following command:

        C:\software\ffmpeg-latest-mingw32-static\bin>ffmpeg.exe -i test.avi -acodec copy -vcodec copy test.asf
        FFmpeg version SVN-r18506, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --enable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enable-libfaac --enable-pthreads --enable-libvorbis --enable-libmp3lame --enable-libopenjpeg --enable-libtheora --enable-libspeex --enable-libxvid --enable-libfaad --enable-libschroedinger --enable-libx264
          libavutil   50. 3. 0 / 50. 3. 0
          libavcodec  52.25. 0 / 52.25. 0
          libavformat 52.32. 0 / 52.32. 0
          libavdevice 52. 2. 0 / 52. 2. 0
          libswscale   0. 7. 1 /  0. 7. 1
          built on Apr 14 2009 04:04:47, gcc: 4.2.4
        Input #0, avi, from 'test.avi':
          Duration: 00:00:44.86, start: 0.000000, bitrate: 5291 kb/s
          Stream #0.0: Video: msvideo1, rgb555le, 1280x1024, 5 tbr, 5 tbn, 5 tbc
          Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
        Output #0, asf, to 'test.asf':
          Stream #0.0: Video: CRAM / 0x4D415243, rgb555le, 1280x1024, q=2-31, 1k tbn, 5 tbc
          Stream #0.1: Audio: pcm_s16le, 22050 Hz, mono, s16, 352 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 224 fps=222 q=-1.0 Lsize= 29426kB time=44.80 bitrate=5380.7kbits/s
        video:26910kB audio:1932kB global headers:0kB muxing overhead 2.023317%

    I then get the following error when using Windows Media Player to play the result; does anyone have any ideas?
    http://www.microsoft.com/windows/windowsmedia/player/webhelp/default.aspx?&mpver=11.0.6001.7000&id=C00D11B1&contextid=230&originalid=C00D36E6
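
    (A hedged guess at why the copy fails: the source video is msvideo1 (Microsoft Video 1/CRAM) and the audio is plain PCM, and stream-copying those into an ASF container does not turn them into Windows Media streams, which is likely what the player error points at. Re-encoding to the WMV family is a sketch worth trying; the bitrates are arbitrary placeholders, and this gives up the "simple copy" speed.)

        ffmpeg.exe -i test.avi -vcodec wmv2 -b 1024k -acodec wmav2 -ab 64k test.asf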

    Read the article

  • Curl Certificate Error when Using RVM to install Ruby 1.9.2

    - by Will Dennis
    RVM is running into a certificate error when trying to download ruby 1.9.2. It looks like curl is having a certificate issue, but I am not sure how to bypass it. Any help would be great. Thanks so much; I have included the exact error info below.

        $ rvm install 1.9.2
        Installing Ruby from source to: /Users/willdennis/.rvm/rubies/ruby-1.9.2-p180, this may take a while depending on your cpu(s)...
        ruby-1.9.2-p180 - #fetching
        ERROR: Error running 'bunzip2 '/Users/willdennis/.rvm/archives/ruby-1.9.2-p180.tar.bz2'', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/extract.log
        ruby-1.9.2-p180 - #extracting ruby-1.9.2-p180 to /Users/willdennis/.rvm/src/ruby-1.9.2-p180
        ruby-1.9.2-p180 - #extracted to /Users/willdennis/.rvm/src/ruby-1.9.2-p180
        Fetching yaml-0.1.3.tar.gz to /Users/willdennis/.rvm/archives
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html
        curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
        ERROR: There was an error, please check /Users/willdennis/.rvm/log/ruby-1.9.2-p180/*.log. Next we'll try to fetch via http.
        Trying http:// URL instead.
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html
        curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). The default bundle is named curl-ca-bundle.crt; you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
        ERROR: There was an error, please check /Users/willdennis/.rvm/log/ruby-1.9.2-p180/*.log
        Extracting yaml-0.1.3.tar.gz to /Users/willdennis/.rvm/src
        ERROR: Error running 'tar zxf /Users/willdennis/.rvm/archives/yaml-0.1.3.tar.gz -C /Users/willdennis/.rvm/src --no-same-owner', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/extract.log
        /Users/willdennis/.rvm/scripts/functions/packages: line 55: cd: /Users/willdennis/.rvm/src/yaml-0.1.3: No such file or directory
        Configuring yaml in /Users/willdennis/.rvm/src/yaml-0.1.3.
        ERROR: Error running ' ./configure --prefix="/Users/willdennis/.rvm/usr" ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/configure.log
        Compiling yaml in /Users/willdennis/.rvm/src/yaml-0.1.3.
        ERROR: Error running '/usr/bin/make ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/make.log
        Installing yaml to /Users/willdennis/.rvm/usr
        ERROR: Error running '/usr/bin/make install', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/yaml/make.install.log
        ruby-1.9.2-p180 - #configuring
        ERROR: Error running ' ./configure --prefix=/Users/willdennis/.rvm/rubies/ruby-1.9.2-p180 --enable-shared --disable-install-doc --with-libyaml-dir=/Users/willdennis/.rvm/usr ', please read /Users/willdennis/.rvm/log/ruby-1.9.2-p180/configure.log
        ERROR: There has been an error while running configure. Halting the installation.

    Will
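
    (A hedged sketch of the usual workarounds: point curl at a current CA bundle via ~/.curlrc, or, less safely, disable verification just long enough for the fetch. The bundle path is an assumption - use wherever a current cacert.pem actually lives on your machine.)

        # tell curl which CA bundle to trust (path assumed)
        echo 'cacert = /usr/share/curl/curl-ca-bundle.crt' >> ~/.curlrc

        # or, as a last resort, skip verification for this install only:
        echo 'insecure' >> ~/.curlrc
        rvm install 1.9.2
        # then remove the 'insecure' line from ~/.curlrc again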

    Read the article

  • Linking LLVM JIT Code to Static LLVM Libraries?

    - by inflector
    I'm in the process of implementing a cross-platform (Mac OS X, Windows, and Linux) application which will do lots of CPU-intensive analysis of financial data. The bulk of the analysis engine will be written in C++ for speed reasons, with a user-accessible scripting engine interfacing with the C++ testing engine. I want to write several scripting front-ends over time to emulate other popular software with existing large user bases. The first front-end will be a VisualBasic-like scripting language. I'm thinking that LLVM would be perfect for my needs. Performance is very important because of the sheer amount of data; it can take hours or days to run a single run of tests to get an answer. I believe that using LLVM will also allow me to use a single back-end solution while I implement different front-ends for different flavors of the scripting language over time. The testing engine itself will be separated from the interface, and testing will even take place in a separate process with progress and results being reported to the testing management interface. Tests will consist of scripting code integrated with the testing engine code. In a previous implementation of a similar commercial testing system I wrote, I built a fast interpreter which easily interfaced with the testing library because it was written in C++ and linked directly to the testing engine library. Callbacks from scripting code to testing library objects involved translating between the formats with significant overhead. I'm imagining that with LLVM, I could implement the callbacks into C++ directly so that I could make the scripting code work almost as if it had been written in C++. Likewise, if all the code were compiled to LLVM bytecode format, it seems like the LLVM optimizers could optimize across the boundaries between the scripting language and the testing engine code that was written in C++. I don't want to have to compile the testing engine every time. Ideally, I'd like to JIT compile only the scripting code. For small tests, I'd skip some optimization passes, while for large tests, I'd perform full optimizations during the link. So is this possible? Can I precompile the testing engine to a .o object file or .a library file and then link in the scripting code using the JIT? Finally, ideally, I'd like to have the scripting code implement specific methods as subclasses of a specific C++ class. So the C++ testing engine would only see C++ objects, while the JIT setup code compiled scripting code that implemented some of the methods for the objects. It seems that if I used the right name mangling algorithm it would be relatively easy to set up the LLVM generation for the scripting language to look like a C++ method call, which could then be linked into the testing engine. Thus the linking stage would go in two directions: calls from the scripting language into the testing engine objects to retrieve pricing information and test state information, and calls from the testing engine to methods of some particular C++ objects where the code was supplied not from C++ but from the scripting language. In summary:
    1) Can I link in precompiled (either .bc, .o, or .a) files as part of the JIT compilation, code-generation process?
    2) Can I link in code using the process in 1) above in such a way that I am able to create code that acts as if it was all written in C++?
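
    (A rough sketch of direction 1), written against the classic pre-MCJIT ExecutionEngine API of that era; header locations and exact signatures should be checked against the LLVM version in use, and "GetPrice"/"script.bc"/"script_main" are made-up names. The idea: the engine is compiled natively and linked into the host process, the script is JIT-compiled from bitcode, and addGlobalMapping resolves the script's external references to the precompiled functions.)

        // sketch only - error handling elided
        #include "llvm/LLVMContext.h"
        #include "llvm/Module.h"
        #include "llvm/Support/SourceMgr.h"
        #include "llvm/Support/IRReader.h"
        #include "llvm/Target/TargetSelect.h"
        #include "llvm/ExecutionEngine/ExecutionEngine.h"
        #include "llvm/ExecutionEngine/JIT.h"

        extern "C" double GetPrice(int instrumentId);   // precompiled testing-engine entry point

        int main() {
            llvm::InitializeNativeTarget();
            llvm::LLVMContext ctx;
            llvm::SMDiagnostic err;
            // the user's script, compiled by the front-end to bitcode
            llvm::Module *mod = llvm::ParseIRFile("script.bc", err, ctx);
            llvm::ExecutionEngine *ee = llvm::EngineBuilder(mod).create();
            // map the script's extern "GetPrice" onto the statically linked C++ function
            if (llvm::Function *f = mod->getFunction("GetPrice"))
                ee->addGlobalMapping(f, (void *)&GetPrice);
            // fetch and run the script's entry point
            typedef void (*ScriptMain)();
            ScriptMain run = (ScriptMain)ee->getPointerToFunction(mod->getFunction("script_main"));
            run();
        }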

    Read the article

  • FFmpeg audio doesn't work in converted videos

    - by Juddy Swaft
    NOTICE: when I convert videos via the terminal and download them from FTP onto a PC, the audio works fine. I use:

        if($ext == "avi" && $convert_avi == true) {
            $convert_source = _VIDEOS_DIR_PATH.$new_name;
            $conv_name = substr(md5($file['name'].rand(1,888)), 2, 10).".mp4";
            $converted_file = _VIDEOS_DIR_PATH.$conv_name;
            $ffmpeg_command = 'ffmpeg -i '.$convert_source.' -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 '.$converted_file;
            echo exec($ffmpeg_command);
            $sql = "UPDATE pm_temp SET url = '".$conv_name."' WHERE url = '".$new_name."' LIMIT 1";
            $result = @mysql_query($sql);
            unlink($convert_source);
        }

    This is the code I use to convert AVI to MP4. The ffmpeg console output:

        root@1tb:~# ffmpeg -i sample.avi -acodec libmp3lame -vcodec libx264 -s 1280x720 -ar 44100 -async 44100 -r 29.970 -ac 2 -qscale 5 goodsample.mp4
        ffmpeg version 0.7.15, Copyright (c) 2000-2013 the FFmpeg developers
          built on Feb 22 2013 07:18:58 with gcc 4.4.5
          configuration: --enable-libdc1394 --prefix=/usr --extra-cflags='-Wall -g ' --cc='ccache cc' --enable-shared --enable-libmp3lame --enable-gpl --enable-libvorbis --enable-pthreads --enable-libfaac --enable-libxvid --enable-postproc --enable-x11grab --enable-libgsm --enable-libtheora --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-libspeex --enable-nonfree --disable-stripping --enable-avfilter --enable-libdirac --disable-decoder=libdirac --enable-libfreetype --enable-libschroedinger --disable-encoder=libschroedinger
          libavutil    50. 43. 0 / 50. 43. 0
          libavcodec   52.123. 0 / 52.123. 0
          libavformat  52.111. 0 / 52.111. 0
          libavdevice  52.  5. 0 / 52.  5. 0
          libavfilter   1. 80. 0 /  1. 80. 0
          libswscale    0. 14. 1 /  0. 14. 1
          libpostproc  51.  2. 0 / 51.  2. 0
        [mp3 @ 0x191d4100] Header missing
        [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected
        Input #0, avi, from 'sample.avi':
          Metadata:
            encoder : VirtualDubMod 1.5.10.2 (build 2540/release)
          Duration: 00:01:01.81, start: 0.000000, bitrate: 1194 kb/s
          Stream #0.0: Video: mpeg4, yuv420p, 640x352 [PAR 1:1 DAR 20:11], 23.98 tbr,
          Stream #0.1: Audio: mp3, 48000 Hz, stereo, s16, 128 kb/s
        [buffer @ 0x191d1c80] w:640 h:352 pixfmt:yuv420p tb:1/1000000 sar:1/1 sws_param:
        [scale @ 0x191d6880] w:640 h:352 fmt:yuv420p -> w:1280 h:720 fmt:yuv420p flags:0
        [libx264 @ 0x191ce5a0] Default settings detected, using medium profile
        [libx264 @ 0x191ce5a0] using SAR=45/44
        [libx264 @ 0x191ce5a0] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle S
        [libx264 @ 0x191ce5a0] profile High, level 3.1
        [libx264 @ 0x191ce5a0] 264 - core 118 - H.264/MPEG-4 AVC codec - Copyleft 2003-2
        6 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_off
        1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_l
        Output #0, mp4, to 'goodsample.mp4':
          Metadata:
            encoder : Lavf52.111.0
          Stream #0.0: Video: libx264, yuv420p, 1280x720 [PAR 45:44 DAR 20:11], q=2-31
          Stream #0.1: Audio: libmp3lame, 44100 Hz, stereo, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop, [?] for help
        [mp3 @ 0x191d4100] Header missing
        Error while decoding stream #0.1
        [mpeg4 @ 0x191d1dc0] Invalid and inefficient vfw-avi packed B frames detected
        [mp3 @ 0x191d4100] incomplete frame    9467kB time=00:01:00.32 bitrate=1285.5kbits/
        Error while decoding stream #0.1
        frame= 1852 fps= 20 q=29.0 Lsize= 9652kB time=00:01:01.72 bitrate=1280.9kbits
        video:9121kB audio:483kB global headers:0kB muxing overhead 0.499688%
        [libx264 @ 0x191ce5a0] frame I:11   Avg QP:16.78 size: 51456
        [libx264 @ 0x191ce5a0] frame P:784  Avg QP:20.81 size:  8954
        [libx264 @ 0x191ce5a0] frame B:1057 Avg QP:26.06 size:  1659
        [libx264 @ 0x191ce5a0] consecutive B-frames: 22.0% 3.1% 7.5% 67.4%
        [libx264 @ 0x191ce5a0] mb I  I16..4: 31.1% 59.8% 9.1%
        [libx264 @ 0x191ce5a0] mb P  I16..4: 1.8% 2.6% 0.2%  P16..4: 24.3% 7.0% 4.0
        [libx264 @ 0x191ce5a0] mb B  I16..4: 0.1% 0.1% 0.0%  B16..8: 22.7% 0.8% 0.2
        [libx264 @ 0x191ce5a0] 8x8 transform intra:57.0% inter:72.6%
        [libx264 @ 0x191ce5a0] coded y,uvDC,uvAC intra: 44.4% 33.3% 10.3% inter: 7.6% 5.
        [libx264 @ 0x191ce5a0] i16 v,h,dc,p: 68% 14%  8% 10%
        [libx264 @ 0x191ce5a0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 14% 27%  5%  7%  7%  6
        [libx264 @ 0x191ce5a0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 14% 14%  6% 10%  9%  7
        [libx264 @ 0x191ce5a0] i8c dc,h,v,p: 67% 13% 17%  3%
        [libx264 @ 0x191ce5a0] Weighted P-Frames: Y:1.9% UV:0.4%
        [libx264 @ 0x191ce5a0] ref P L0: 62.2% 12.8% 10.3% 14.5%  0.2%
        [libx264 @ 0x191ce5a0] ref B L0: 88.1%  5.5%  6.4%
        [libx264 @ 0x191ce5a0] ref B L1: 95.7%  4.3%
        [libx264 @ 0x191ce5a0] kb/s:1209.03

    I know there are a couple of errors in there, but I don't know how to fix them. Also, I would be very thankful if someone could help me reduce the video size; that is not the main problem, but the converted video still weighs about as much as the original AVI.
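
    (A hedged suggestion: the "[mp3] Header missing / Error while decoding stream #0.1" lines say the source's MP3 track itself is damaged, so the audio that does get written may be broken regardless of the encoder. Since this build has libfaac enabled, re-encoding the audio to AAC with a plain sync setting is a sketch worth trying; the bitrate is a placeholder.)

        ffmpeg -i sample.avi -vcodec libx264 -qscale 5 -s 1280x720 \
               -acodec libfaac -ab 128k -ar 44100 -ac 2 -async 1 goodsample.mp4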

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now in use only to access music files (Sibelius, audio, MIDI, Live, Logic etc.) without transferring the data into a new boot system - partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

        - make a complete filesystem clone with Antonio Diaz's ddrescue
        - run Disk Warrior on the copy, repair whatever errors occurred
        - wipe out all ACLs on the entire drive
        - set all permissions to the same value - wide open 777
        - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
        - transfer data to a newly formatted drive folder by folder, resetting the Spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
        - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes - and quit it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated indexing time increases, usually to 4 hours remaining. This is where it gets stuck; it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4-hours-remaining situation, if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and on 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option, because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas?

    ---update 2-6-11
    Since I have not received any responses except the one below, which appears to misunderstand my point, I am updating this post hoping to get more responses. I have used the terminal command sudo opensnoop -p PID, where PID is the mdworker process ID, to try and determine what Spotlight is doing and hopefully find the files it's having trouble with. Here's what happens: after indexing for a few hours, mdworker is gone. It no longer shows up in Activity Monitor under "All Processes", and the Terminal window with the opensnoop result stops moving. I then proceeded to execute the same command on mds to see what it was doing, and here's what I get, repeatedly:

        501 57 mds 21 /
        501 57 mds 21 /Volumes/Sno Leppard
        501 57 mds 21 /Volumes/Tiger
        501 57 mds 21 /Volumes/Leppard
        501 57 mds 21 /Volumes/Disk Warrior
        501 57 mds 21 /Volumes/ONM Data

    These represent all the volumes currently mounted in the system. All except ONM Data, which is the one I am trying to index, are excluded from Spotlight indexing at the moment. The sequence above repeats over and over, with slight variation, sometimes skipping one of the volumes. Questions: what happened to mdworker? What is mds doing? I will let this run until tomorrow morning and throughout the day and monitor for any changes. Any input would be very much appreciated. Even if you're not sure what the ultimate answer is, please alert me to anything you think I may be missing. Hopefully at some point we will figure this out... Thanks, M

    __final edit__
    I finally resolved the issue, and here is how I did it. I used the terminal command "sudo opensnoop -p PID", where the PID is the process id of the processes I was monitoring - all instances of mds and mdworker running in the system. After the third time through indexing the same data set (see info above), I contacted Apple and got to their highest level of support - they were flabbergasted as well. They advised me to install yet another default 10.6.6 system and try again. The same pattern repeated - mds and mdworker(s) would start indexing, and eventually the Spotlight icon would say 6 hours remaining, with all mdworkers gone and mds at 90% or so of CPU. But I did finally figure out that the first time mdworker stopped like that, the last file it touched was always in the same folder. I excluded that folder from Spotlight search, and the rest of the data set indexed within about 2 hours with no strange behavior or failures. I copied that folder to another machine and Spotlight barfed immediately. Exclude that folder and all is well again. I have no clue what is causing this behavior, still, but I did find a functional solution to the problem. Anyone with a similar problem: run opensnoop on all instances of mds and mdworker and wait patiently for mdworker to exit. Look at the last file it touched and exclude the enclosing folder from being indexed. I was able to repeat the issue and solution on 2 different installs and 2 different copies of the data set. Hope this helps. If we find an actual cause of the folder being such a problem (it is called MICHAEL BRECKER RECORD SOLOS and contains almost 1 GB of audio related files - performer, live, SD2 - things like that), I will edit again to let you all know. Thanks for any attempts to help, M
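
    (For anyone reproducing the workflow above, a hedged sketch of the commands involved; the volume name matches this case, and the PID placeholder must come from Activity Monitor.)

        # watch which files the indexer touches (one per running mds/mdworker PID)
        sudo opensnoop -p <PID>

        # after excluding the offending folder (System Preferences > Spotlight > Privacy),
        # force a full erase-and-rebuild of the volume's index and check its status:
        sudo mdutil -E "/Volumes/ONM Data"
        sudo mdutil -s "/Volumes/ONM Data"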

    Read the article

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far: Decided to go with Xapian as the search backend because it has all the search-engine features I was looking for, knows about Unicode and stemming, has few dependencies and requires no bloated app-server installation on top of it. Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite a few blogs as "working". Did not work. Neither the django-haystack nor the xapian-haystack project provides a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit-tests actually test very simple cases, and no edge-cases at all. Tried Djapian. The source-code is riddled with spelling errors (mind you, in variable names, not comments); documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place. Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches; the machine is RAM- and CPU-constrained), or Lucene (slightly fewer headaches, but still). Before I proceed spending more time on a solution that might or might not work as advertised, I'd like to know: did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source-code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials. So here are the requirements:

        - must be able to search for 10-100 terms in one query
        - must handle + (term must be present) and - (term must not be present), AND/OR
        - must handle arbitrary grouping (i.e. parentheses around AND/OR)
        - must allow for Django-ORM filtering before or after fulltext-search (i.e. pre-/post-processing of results with the full set of filters that Django knows about)
        - alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet
        - should be light on the machine, so preferably no humongous JVM and Java-based app-server installation

    Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully-functional setup working in the real world, under real conditions, with real queries. EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts and mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised.
    Also - and a user below brought that point up as well - considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain-knowledge that's documented nowhere. So, please, if you claim you have a working installation that actually satisfies the minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem:

        - the exact Linux distribution and release version,
        - the exact release version of Haystack (or equivalent) and the release version of the search backend,
        - the exact release version of the search engine,
        - publicly (!) available documentation on how to set up all components exactly in the way that your installation was set up, such that the minimal requirements above are met.

    Thank you.
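
    (One requirement above is backend-independent and easy to sketch: bulk-fetching search hits back into a QuerySet. A hedged Python sketch; `Document` is a hypothetical model and `backend_search()` stands in for whichever engine ends up working.)

        # fetch primary keys from the search engine, then return to the ORM
        # so the full set of Django filters applies before/after the search
        pks = [hit.pk for hit in backend_search('(django AND search) OR xapian')]
        qs = Document.objects.filter(pk__in=pks).filter(published=True)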

    Read the article

  • Why do UInt16 arrays seem to add faster than int arrays?

    - by scraimer
    It seems that C# is faster at adding two arrays of UInt16[] than it is at adding two arrays of int[]. This makes no sense to me, since I would have assumed the arrays would be word-aligned, and thus int[] would require less work from the CPU, no? I ran the test-code below, and got the following results:

        Int for 1000 took 9896625613 tick (4227 msec)
        UInt16 for 1000 took 6297688551 tick (2689 msec)

    The test code does the following:

        1. Creates two arrays named a and b, once.
        2. Fills them with random data, once.
        3. Starts a stopwatch.
        4. Adds a and b, item-by-item. This is done 1000 times.
        5. Stops the stopwatch.
        6. Reports how long it took.

    This is done for int[] a, b and for UInt16[] a, b. And every time I run the code, the tests for the UInt16 arrays take 30%-50% less time than the int arrays. Can you explain this to me? Here's the code, if you want to try it for yourself:

        using System;
        using System.Diagnostics;

        public static class Program
        {
            public static UInt16[] GenerateRandomDataUInt16(int length)
            {
                UInt16[] noise = new UInt16[length];
                Random random = new Random((int)DateTime.Now.Ticks);
                for (int i = 0; i < length; ++i)
                {
                    noise[i] = (UInt16)random.Next();
                }
                return noise;
            }

            public static int[] GenerateRandomDataInt(int length)
            {
                int[] noise = new int[length];
                Random random = new Random((int)DateTime.Now.Ticks);
                for (int i = 0; i < length; ++i)
                {
                    noise[i] = (int)random.Next();
                }
                return noise;
            }

            public static int[] AddInt(int[] a, int[] b)
            {
                int len = a.Length;
                int[] result = new int[len];
                for (int i = 0; i < len; ++i)
                {
                    result[i] = (int)(a[i] + b[i]);
                }
                return result;
            }

            public static UInt16[] AddUInt16(UInt16[] a, UInt16[] b)
            {
                int len = a.Length;
                UInt16[] result = new UInt16[len];
                for (int i = 0; i < len; ++i)
                {
                    result[i] = (ushort)(a[i] + b[i]);
                }
                return result;
            }

            public static void Main()
            {
                int count = 1000;
                int len = 128 * 6000;
                int[] aInt = GenerateRandomDataInt(len);
                int[] bInt = GenerateRandomDataInt(len);
                Stopwatch s = new Stopwatch();
                s.Start();
                for (int i = 0; i < count; ++i)
                {
                    int[] resultInt = AddInt(aInt, bInt);
                }
                s.Stop();
                Console.WriteLine("Int for " + count + " took " + s.ElapsedTicks + " tick (" + s.ElapsedMilliseconds + " msec)");
                UInt16[] aUInt16 = GenerateRandomDataUInt16(len);
                UInt16[] bUInt16 = GenerateRandomDataUInt16(len);
                s = new Stopwatch();
                s.Start();
                for (int i = 0; i < count; ++i)
                {
                    UInt16[] resultUInt16 = AddUInt16(aUInt16, bUInt16);
                }
                s.Stop();
                Console.WriteLine("UInt16 for " + count + " took " + s.ElapsedTicks + " tick (" + s.ElapsedMilliseconds + " msec)");
            }
        }
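
    (A hedged observation plus a sketch: the UInt16 arrays occupy half the bytes of the int arrays, so each pass reads and writes half as much memory, and each iteration also allocates a fresh result array, which drags allocation and GC work into the measurement. Reusing one preallocated buffer and forcing a collection between the two timed sections isolates the add loop itself; AddIntInPlace is a made-up name.)

        static void AddIntInPlace(int[] a, int[] b, int[] result)
        {
            // same loop as AddInt, but no per-iteration allocation
            for (int i = 0; i < a.Length; ++i)
            {
                result[i] = a[i] + b[i];
            }
        }

        // before each timed section, level the playing field:
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();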

    Read the article

  • Optimal storage of data structure for fast lookup and persistence

    - by Mikael Svenson
    Scenario: I have the following methods:

        public void AddItemSecurity(int itemId, int[] userIds)
        public int[] GetValidItemIds(int userId)

    Initially I'm thinking of storage in the form:

        itemId -> userId, userId, userId
        userId -> itemId, itemId, itemId

    AddItemSecurity is based on how I get data from a third-party API; GetValidItemIds is how I want to use it at runtime. There are potentially 2000 users and 10 million items. Item id's are of the form 2007123456, 2010001234 (10 digits, where the first four represent the year). AddItemSecurity does not have to perform super fast, but GetValidIds needs to be subsecond. Also, if there is an update on an existing itemId I need to remove that itemId for users no longer in the list. I'm trying to think about how I should store this in an optimal fashion. Preferably on disk (with caching), but I want the code maintainable and clean. If the item id's had started at 0, I thought about creating a byte array the length of MaxItemId / 8 for each user, and setting a true/false bit if the item was present or not. That would limit the array length to little over 1MB per user and give fast lookups as well as an easy way to update the list per user. By persisting this as Memory Mapped Files with the .NET 4 framework I think I would get decent caching as well (if the machine has enough RAM) without implementing caching logic myself. Parsing the id, stripping out the year, and storing an array per year could be a solution. The ItemId -> UserId[] list can be serialized directly to disk and read/written with a normal FileStream in order to persist the list and diff it when there are changes. Each time a new user is added, all the lists have to be updated as well, but this can be done nightly. Question: should I continue to try out this approach, or are there other paths which should be explored as well? I'm thinking SQL Server will not perform fast enough, and it would add overhead (at least if it's hosted on a different server), but my assumptions might be wrong. Any thoughts or insights on the matter are appreciated. And I want to try to solve it without adding too much hardware :)

    [Update 2010-03-31] I have now tested with SQL Server 2008 under the following conditions:

        - table with two columns (userid, itemid), both Int
        - clustered index on the two columns
        - added ~800,000 items for 180 users - a total of 144 million rows
        - allocated 4GB RAM for SQL Server
        - dual-core 2.66GHz laptop
        - SSD disk
        - use a SqlDataReader to read all itemid's into a List
        - loop over all users

    If I run one thread it averages 0.2 seconds. When I add a second thread it goes up to 0.4 seconds, which is still OK. From there on the results degrade: adding a third thread brings a lot of the queries up to 2 seconds, a fourth thread up to 4 seconds, and a fifth spikes some of the queries up to 50 seconds. The CPU is roofing while this is going on, even on one thread. My test app takes some of the time due to the speedy loop, and SQL the rest. Which leads me to the conclusion that it won't scale very well, at least not on my tested hardware. Are there ways to optimize the database, say storing an array of int's per user instead of one record per item? But this makes it harder to remove items.
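
    (The bit-per-item idea from the question, sketched in C# to make the sizing concrete: 10 million items / 8 is about 1.2 MB per user, and add/remove/lookup are O(1) bit operations. Ids are assumed rebased to start at 0, per the caveat above; the class name is made up.)

        public class UserItemBitmap
        {
            private readonly byte[] bits;

            public UserItemBitmap(int maxItemId)
            {
                bits = new byte[maxItemId / 8 + 1];   // one bit per possible item
            }

            public void Add(int itemId)
            {
                bits[itemId >> 3] |= (byte)(1 << (itemId & 7));
            }

            public void Remove(int itemId)
            {
                bits[itemId >> 3] &= (byte)~(1 << (itemId & 7));
            }

            public bool Contains(int itemId)
            {
                return (bits[itemId >> 3] & (1 << (itemId & 7))) != 0;
            }
        }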

    Read the article

  • Changing html <-> ajax <-> php/mysql to threaded approach

    - by Saif Bechan
    I have an application that needs to be updated in real time. There are various counters and other information that have to come from the database, and the system needs to be up to date for the user. My approach now is just a normal ajax request every second to get the new values from the database: a piece of JavaScript loops every second, getting the values through ajax. This works fine, but I think it's very inefficient.

    The problem
    There is an ajax script that loops every second requesting data from PHP, so on the server it has to load the PHP interpreter every time. The PHP file has to make a connection with the mysql database, get the data (reads, never writes), format the data so it can be sent, send it back to the browser, and then close the database connection and shut down the PHP interpreter. Last, the browser has to read these values and update the various html parts. So with this approach it has to load the interpreter and make a db connection every second. I was thinking of a way to make this more efficient, and maybe use a threaded approach.

    Threaded approach
    Do a post to the PHP when you enter the page and keep the connection alive. In PHP, load the interpreter only once and make one connection to the DB. Every second, send an ajax response to the JavaScript listener. The JavaScript listener then just changes values as the responses from PHP arrive. I think this approach would be a great optimization of the server load and overall performance, but I can spot some weak points in the system and I need some help with these.

    Problems with the approach
    PHP execution time limit: I don't think PHP is designed for such a setup. I know there is a time limit on PHP script execution, and I don't know if an everlasting loop in PHP will cause any serious cpu/memory problems.
    Sending an ajax request without breaking: I don't know if it is possible to have just one ajax post action that stays open and keeps accepting data.
    User exits the page: what will happen when the user exits the page while the PHP script is still going? Will it go on forever?
    Security issues: so far I can't think of any security issues. Almost every setup you use has some security issues; maybe there are some with this solution I do not know of.

    Open to other solutions
    I really want to change the setup as it is now and move to a threaded approach or better. If someone has a better approach to tackle this, I definitely want to hear it. Maybe some other scripting language is better suited for having an ongoing runtime. I only know PHP and Java, so any suggestions are welcome and I am willing to dig through them. I know there are things like Perl, Python etcetera that are used for this type of threaded work, but I don't know which one is best suited.

    When using another script
    If the best way is to go with another type of script like Perl, Python etcetera, I do have some criteria. The script has to be accessible via ajax post. If it accepts some kind of JSON encode/decode, that would be nice. The script has to be able to access the session file - this is essential because I need to know if the user is logged in. And the script has to be able to easily talk to MySQL. All comments are welcome, and I hope this question is helpful to others also. Cheers!
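
    (The "keep the connection alive" idea is essentially long polling, which vanilla PHP can do; a minimal hedged sketch, where get_counters() is a hypothetical helper that reads the values from MySQL. One interpreter start-up and one DB connection then serve many updates, and the loop ends on its own when the user leaves the page.)

        <?php
        set_time_limit(0);              // lift PHP's execution time limit for this script
        session_start();                // the session file is still accessible...
        session_write_close();          // ...but release its lock so other requests proceed
        $db = mysql_connect('localhost', 'user', 'pass');   // era-appropriate mysql_* API
        while (!connection_aborted()) { // stops once the browser goes away
            echo json_encode(get_counters($db)), "\n";
            flush();                    // push this chunk out to the JavaScript listener
            sleep(1);
        }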

    Read the article

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary-precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements:

        1. It MUST handle arbitrarily big integers (my primary interest is in integers). In case you don't know what arbitrarily big means, imagine something like 100000! (the factorial of 100000).
        2. The precision MUST NOT NEED to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system.
        3. It SHOULD utilize the full power of the platform, and should handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this in the same way as it does with 2^66 + 2^65 on the same platform.
        4. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. The ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. The ability to handle symbolic computations is even better.

    Here is what I found so far:

        - Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt.
        - The built-in integer type, or the core libraries, of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library they are using, or which kind of implementation they are using.

    What I already know:

        - Using a char as a decimal digit, and a char* as a decimal string, and doing calculations on the digits using a for-loop.
        - Using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, and doing calculations on the elements using a for-loop.
        - Booth's multiplication algorithm.

    What I don't know:

        - Printing the binary array mentioned above in decimal without using naive methods. Example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; (2) use a char* string mentioned above to store the intermediate decimal results.

    What I would appreciate:

        - Good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion).
        - Good suggestions on books / articles that I should read. For example, an illustration with figures of how a non-naive arbitrarily-long binary-to-decimal conversion algorithm works would be good.
        - Any help.

    Please DO NOT answer this question if:

        - you think using a double (or a long double, or a long long double) can solve this problem easily. If you do think so, it means that you don't understand the issue under discussion.
        - you have no experience with arbitrary-precision mathematics.

    Thank you in advance! Asuka Kenji
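
    (GMP covers most of the MUSTs above - an mpz_t grows as needed with no precision declared up front, and small values live in native limbs, so 2^33 + 2^32 stays a machine-word operation on a 64-bit CPU. A minimal hedged C sketch computing 100000!; link with -lgmp.)

        #include <gmp.h>
        #include <stdio.h>

        int main(void)
        {
            mpz_t f;
            mpz_init(f);                  /* no precision to declare up front */
            mpz_fac_ui(f, 100000);        /* 100000! */
            gmp_printf("%Zd\n", f);       /* prints the full decimal expansion */
            mpz_clear(f);
            return 0;
        }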

    Read the article

  • php crashes with no core file and this message: apc_mmap failed

    - by greg0ire
    Description of the problem
    Regularly, cron php processes crash on our production server, which results in mails with the following body:

        PHP Fatal error: PHP Startup: apc_mmap: mmap failed: in Unknown on line 0
        Segmentation fault (core dumped)

    I think the Segmentation fault (core dumped) should result in core files being handled by apport and then written in /var/crashes, but the files I can see there have been there since yesterday, although the last crash occurred today:

        -rw-r----- 1 root        whoopsie  1138528 mai 22 04:09 _usr_bin_php5.0.crash
        -rw-r----- 1 frontoffice whoopsie  1166373 mai 20 18:00 _usr_bin_php5.1005.crash
        -rw-r----- 1 frontoffice whoopsie 81622658 mai 22 00:05 _usr_sbin_php5-fpm.1005.crash

    I tried to download the last one anyway, and ran gdb /usr/sbin/php5-fpm /tmp/_usr_sbin_php5-fpm.1005.crash, only to be told that the file is not a core file (its format was not recognized). Here is the server's apc configuration:

        cat /etc/php5/cli/conf.d/20-apc.ini
        extension=apc.so
        apc.shm_size=512M
        apc.ttl=3600
        apc.user_ttl=3600
        apc.enable_cli=1

    I'm mostly worried about the apc.shm_size... isn't it too high or too low? I understand it has to do with the size of memory segments.

    Question(s)
    What could be the problem? How can I troubleshoot it (how can I get a valid core file)?

    System information

        free
                     total       used       free     shared    buffers     cached
        Mem:       5081296    4354684     726612          0     374744     959968
        -/+ buffers/cache:    3019972    2061324
        Swap:       522236     516888       5348

        cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=12.04
        DISTRIB_CODENAME=precise
        DISTRIB_DESCRIPTION="Ubuntu 12.04.2 LTS"

        php -v
        PHP 5.4.17-1~precise+1 (cli) (built: Jul 17 2013 18:14:06)
        Copyright (c) 1997-2013 The PHP Group
        Zend Engine v2.4.0, Copyright (c) 1998-2013 Zend Technologies

    php -i excerpt:

        Configuration
        apc
        APC Support => enabled
        Version => 3.1.13
        APC Debugging => Disabled
        MMAP Support => Enabled
        MMAP File Mask =>
        Locking type => pthread mutex Locks
        Serialization Support => php
        Revision => $Revision: 327136 $
        Build Date => Nov 20 2012 18:41:36
        Directive => Local Value => Master Value
        apc.cache_by_default => On => On
        apc.canonicalize => On => On
        apc.coredump_unmap => Off => Off
        apc.enable_cli => On => On
        apc.enabled => On => On
        apc.file_md5 => Off => Off
        apc.file_update_protection => 2 => 2
        apc.filters => no value => no value
        apc.gc_ttl => 3600 => 3600
        apc.include_once_override => Off => Off
        apc.lazy_classes => Off => Off
        apc.lazy_functions => Off => Off
        apc.max_file_size => 1M => 1M
        apc.mmap_file_mask => no value => no value
        apc.num_files_hint => 1000 => 1000
        apc.preload_path => no value => no value
        apc.report_autofilter => Off => Off
        apc.rfc1867 => Off => Off
        apc.rfc1867_freq => 0 => 0
        apc.rfc1867_name => APC_UPLOAD_PROGRESS => APC_UPLOAD_PROGRESS
        apc.rfc1867_prefix => upload_ => upload_
        apc.rfc1867_ttl => 3600 => 3600
        apc.serializer => default => default
        apc.shm_segments => 1 => 1
        apc.shm_size => 512M => 512M
        apc.shm_strings_buffer => 4M => 4M
        apc.slam_defense => On => On
        apc.stat => On => On
        apc.stat_ctime => Off => Off
        apc.ttl => 3600 => 3600
        apc.use_request_time => On => On
        apc.user_entries_hint => 4096 => 4096
        apc.user_ttl => 3600 => 3600
        apc.write_lock => On => On

        php -m
        [PHP Modules]
        apc bcmath bz2 calendar Core ctype curl date dba dom ereg exif fileinfo filter ftp gd gettext hash iconv imagick intl json ldap libxml mbstring memcache memcached mhash mysql mysqli openssl pcntl pcre PDO pdo_mysql pdo_pgsql pdo_sqlite pgsql Phar posix Reflection session shmop SimpleXML soap sockets SPL sqlite3 standard sysvmsg sysvsem sysvshm tidy tokenizer wddx xml xmlreader xmlwriter zip zlib
        [Zend Modules]

        ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 39531
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 39531
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
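
    (Two hedged leads from the data above: first, core dumps are disabled - "core file size (blocks, -c) 0" - so nothing gdb-readable will ever appear; second, with ~700MB free and swap almost exhausted, a 512MB anonymous mmap for every CLI cron process is a plausible thing to fail. The values below are trial settings to experiment with, not known-good ones.)

        # let the crashing process actually write a core file next time
        ulimit -c unlimited
        echo '/var/crash/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern

        ; /etc/php5/cli/conf.d/20-apc.ini - trial values for the CLI only
        apc.shm_size=64M     ; cron jobs rarely benefit from a 512M cache
        ; or bypass the cache for CLI runs entirely:
        ; apc.enable_cli=0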

    Read the article

< Previous Page | 245 246 247 248 249 250 251 252 253 254 255 256  | Next Page >