Search Results

Search found 586 results on 24 pages for 'hanging'.

Page 19/24

  • Urgent: how to deny read access to an ExecCGI directory

    - by Malvolio
    First, I can't believe that that isn't the default behavior. Second, yikes! I don't know how long my code's been hanging out there, with all sorts of cool secret stuff, just waiting for some hacker who knows Apache better than I do. EDIT (and apology) Well, this is sort of embarrassing. Here's what happened: We had some Python scripts available to the web, at /aux/file.py, which were, not surprisingly, at /var/www/http/aux. Separately, we were running an app server and Apache proxies through at /servlets/. A contractor had constructed the WAR file by bundling up all the generated files including the Python files (which are in a directory also called aux, not surprisingly), so if you typed in /servlets/aux/file.py, the web-server would ask the app-server for it and the app-server would just supply the file. It was the latter URL that this morning I happened to type in by accident and lo, the source appeared. Until I realized the sheer unlikelihood of what I had done, the situation was rating about 8.3 on the sphincter scale. After a tense half-hour or so I realized that it had nothing to do with the CGI (and that serving files that were also executable would be not only foolish but also impossible), and was able to address the real problems. So -- sorry, everybody. Let the scorn-fest commence.

    Read the article

  • How to find Stolen MacBook with iCloud

    - by user1518089
    My MacBook Air was stolen about 6 weeks ago. Through iCloud and "Find Phone", I have some pictures and a location down to about 2 blocks. The pictures are from the current user taking photos which automatically appear on my local devices. (Yes, they probably saw my pictures until I stopped taking them. Yes, they are stupid.) I was thinking about going there and hanging out until I recognized the current users, but it is in a very bad neighborhood and I would be noticed. The police have not done anything. Yes, the MacBook can be locked or a message sent. I am hoping to get it back. Does anyone have ideas on how to track them down? While Find Phone shows their location, it does not report an IP address. Is there a way to get an IP address? Does Facebook face recognition work on strangers? Come on tech geniuses, help me play detective. It does not have Dropbox installed.

    Read the article

  • How to diagnose computer freezing problem

    - by reinierpost
    I have a laptop (a Medion from Aldi) that tends to hang quite often - so often, in fact, that several attempts to install Windows XP or Ubuntu on it have all failed. However, I am able to boot and run Ubuntu as found on the standard Ubuntu 10.10 installation image. I have done this twice thus far. The first time everything was running smoothly, until at some point the GUI (i.e. X) became unresponsive. The cursor kept moving with the mouse, but menus would no longer show and clicking things no longer produced any response. So I switched to the consoles (Ctrl-F1, Ctrl-F2, etc.), which in this setup automatically run shells. The shells were still responsive, and the cd command would still work, but any command that invoked an executable (e.g. /bin/ls or cd /bin; ./find) caused the shell to hang up uninterruptibly. My hypothesis was that all attempts at disk access were hanging up, but I didn't actually try a command like echo /proc/$$ or while read line; do echo $line; done < /var/log/syslog to verify this. Another possibility is that an essential system library is cached in memory and somehow failing to function properly. The second time I left the system running overnight and it didn't hang itself spontaneously. I'm not sure I have the patience to just twiddle with the running system until the condition reappears, and I'm not sure what to do once it does. Clearly we can rule out a software cause. It seems disk-access related, but clearly it's not permanent hard disk failure because the system will reboot just fine. What kind of hardware problem might produce these symptoms? Can it be a memory problem?

    Read the article

  • Slow NFS and GFS2 performance

    - by Tiago
    Recently I've designed and configured a 4-node cluster for a webapp that does lots of file handling. The cluster has been broken down into 2 main roles, webserver and storage. Each role is replicated to a second server using drbd in active/passive mode. The webserver does an NFS mount of the data directory of the storage server, and the latter also has a webserver running to serve files to browser clients. On the storage servers I've created a GFS2 FS to hold the data, which is wired to drbd. I chose GFS2 mainly because of its announced performance and also because the volume size has to be pretty high. Since we entered production I've been facing two problems that I think are deeply connected. First of all, the NFS mount on the webservers keeps hanging for a minute or so and then resumes normal operations. By analyzing the logs I've found out that NFS stops answering for a while and outputs the following log lines: Oct 15 18:15:42 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:44 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:46 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:47 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:48 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:51 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:52 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:55 <server hostname> kernel: nfs: server active.storage.vlan not responding, still trying Oct 15 18:15:58 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK Oct 15 18:15:59 <server hostname> kernel: nfs: server active.storage.vlan OK In this case, the hang lasted for 16 seconds but sometimes it takes 1 or 2 minutes to resume normal operations. My first guess was that this was happening due to heavy load on the NFS mount and that by increasing RPCNFSDCOUNT to a higher value this would become stable.
I've increased it several times and apparently, after a while, the log messages started appearing less often. The value is now at 32. After further investigating the issue, I came across a different hang, although the NFS messages still appear in the logs. Sometimes, the GFS2 FS simply hangs, which causes both the NFS export and the storage webserver to stop serving files. Both stay hung for a while and then they resume normal operations. These hangs leave no trace on the client side (no NFS ... not responding messages either) and, on the storage side, the log system appears to be empty, even though rsyslogd is running. The nodes are connected through a 10Gbps non-dedicated connection, but I don't think this is an issue because the GFS2 hang is confirmed by connecting directly to the active storage server. I've been trying to solve this for a while now and I've tried different NFS configuration options, before I found out that the GFS2 FS is also hanging. The NFS mount is exported as such: /srv/data/ <ip_address>(rw,async,no_root_squash,no_all_squash,fsid=25) And the NFS client mounts with: mount -o "async,hard,intr,wsize=8192,rsize=8192" active.storage.vlan:/srv/data /srv/data After some tests, these were the configurations that yielded the best performance for the cluster. I am desperate to find a solution for this as the cluster is already in production mode and I need to fix this so that these hangs won't happen in the future, and I don't really know for sure what and how I should be benchmarking. What I can tell is that this is happening due to heavy loads, as I have tested the cluster earlier and these problems weren't happening at all. Please tell me if you need me to provide configuration details of the cluster, and which ones you want me to post. As a last resort I can migrate the files to a different FS, but I need some solid pointers on whether this will solve these problems, as the volume size is extremely large at this point. The servers are being hosted by a third-party enterprise and I don't have physical access to them. Best regards. EDIT 1: The servers are physical servers and their specs are: Webservers: Intel Bi Xeon E5606 2x4 2.13GHz 24GB DDR3 Intel SSD 320 2 x 120GB Raid 1 Storage: Intel i5 3550 3.3GHz 16GB DDR3 12 x 2TB SATA Initially there was a VRack setup between the servers but we've upgraded one of the storage servers to have more RAM and it wasn't inside the VRack. They connect through a shared 10Gbps connection between them. Please note that it is the same connection that is used for public access. They use a single IP (using IP Failover) to connect between them and to allow for a graceful failover. NFS is therefore over a public connection and not under any private network (it was before the upgrade, when the problem still existed). The firewall was configured and tested thoroughly but I disabled it for a while to see if the problem still occurred, and it did. From what I know, the hosting provider isn't blocking or limiting the connection between the servers or to the public domain (at least under a given bandwidth consumption threshold that hasn't been reached yet). Hope this helps in figuring out the problem.
EDIT 2: Relevant software versions: CentOS 2.6.32-279.9.1.el6.x86_64 nfs-utils-1.2.3-26.el6.x86_64 nfs-utils-lib-1.1.5-4.el6.x86_64 gfs2-utils-3.0.12.1-32.el6_3.1.x86_64 kmod-drbd84-8.4.2-1.el6_3.elrepo.x86_64 drbd84-utils-8.4.2-1.el6.elrepo.x86_64 DRBD configuration on storage servers: #/etc/drbd.d/storage.res resource storage { protocol C; on <server1 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server1 ip>:7788; meta-disk internal; } on <server2 fqdn> { device /dev/drbd0; disk /dev/vg_storage/LV_replicated; address <server2 ip>:7788; meta-disk internal; } } NFS configuration on the storage servers: #/etc/sysconfig/nfs RPCNFSDCOUNT=32 STATD_PORT=10002 STATD_OUTGOING_PORT=10003 MOUNTD_PORT=10004 RQUOTAD_PORT=10005 LOCKD_UDPPORT=30001 LOCKD_TCPPORT=30001 (can there be any conflict in using the same port for both LOCKD_UDPPORT and LOCKD_TCPPORT?) GFS2 configuration: # gfs2_tool gettune <mountpoint> incore_log_blocks = 1024 log_flush_secs = 60 quota_warn_period = 10 quota_quantum = 60 max_readahead = 262144 complain_secs = 10 statfs_slow = 0 quota_simul_sync = 64 statfs_quantum = 30 quota_scale = 1.0000 (1, 1) new_files_jdata = 0 Storage network environment: eth0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip address> Bcast:<bcast address> Mask:<ip mask> inet6 addr: <ip address> Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:957025127 errors:0 dropped:0 overruns:0 frame:0 TX packets:1473338731 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2630984979622 (2.3 TiB) TX bytes:1648430431523 (1.4 TiB) eth0:0 Link encap:Ethernet HWaddr <mac address> inet addr:<ip failover address> Bcast:<bcast address> Mask:<ip mask> UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 The IP addresses are statically assigned with the given network configurations: DEVICE="eth0" BOOTPROTO="static" HWADDR=<mac address> ONBOOT="yes" TYPE="Ethernet" IPADDR=<ip address> NETMASK=<net mask> and DEVICE="eth0:0" BOOTPROTO="static" HWADDR=<mac address> IPADDR=<ip failover> NETMASK=<net mask> ONBOOT="yes" BROADCAST=<bcast address> Hosts file to allow for a graceful NFS failover in conjunction with NFS option fsid=25 set on both storage servers: #/etc/hosts <storage ip failover address> active.storage.vlan <webserver ip failover address> active.service.vlan As you can see, packet errors are down to 0. I've also run ping for a long time without any packet loss. The MTU size is the normal 1500. As there is no VLAN for now, this is the MTU used to communicate between servers. The webservers' network environment is similar. One thing I forgot to mention is that the storage servers handle ~200GB of new files each day through the NFS connection, which is a key reason for me to think this is some kind of heavy-load problem with either NFS or GFS2. If you need further configuration details please tell me. EDIT 3: Earlier today we had a major filesystem crash on the storage server. I couldn't get the details of the crash right away because the server stopped responding. After the reboot, I noticed the filesystem was extremely slow, and I was not able to serve a single file through either NFS or httpd, perhaps due to cache warming or so. Nevertheless, I've been monitoring the server closely and the following error came up in dmesg. The source of the problem is clearly GFS, which is waiting for a lock and ends up starving after a while. INFO: task nfsd:3029 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. nfsd D 0000000000000000 0 3029 2 0x00000080 ffff8803814f79e0 0000000000000046 0000000000000000 ffffffff8109213f ffff880434c5e148 ffff880624508d88 ffff8803814f7960 ffffffffa037253f ffff8803815c1098 ffff8803814f7fd8 000000000000fb88 ffff8803815c1098 Call Trace: [<ffffffff8109213f>] ? wake_up_bit+0x2f/0x40 [<ffffffffa037253f>] ? gfs2_holder_wake+0x1f/0x30 [gfs2] [<ffffffff814ff42e>] __mutex_lock_slowpath+0x13e/0x180 [<ffffffff814ff2cb>] mutex_lock+0x2b/0x50 [<ffffffffa0379f21>] gfs2_log_reserve+0x51/0x190 [gfs2] [<ffffffffa0390da2>] gfs2_trans_begin+0x112/0x1d0 [gfs2] [<ffffffffa0369b05>] ? gfs2_dir_check+0x35/0xe0 [gfs2] [<ffffffffa0377943>] gfs2_createi+0x1a3/0xaa0 [gfs2] [<ffffffff8121aab1>] ? avc_has_perm+0x71/0x90 [<ffffffffa0383d1e>] gfs2_create+0x7e/0x1a0 [gfs2] [<ffffffffa037783f>] ? gfs2_createi+0x9f/0xaa0 [gfs2] [<ffffffff81188cf4>] vfs_create+0xb4/0xe0 [<ffffffffa04217d6>] nfsd_create_v3+0x366/0x4c0 [nfsd] [<ffffffffa0429703>] nfsd3_proc_create+0x123/0x1b0 [nfsd] [<ffffffffa041a43e>] nfsd_dispatch+0xfe/0x240 [nfsd] [<ffffffffa025a5d4>] svc_process_common+0x344/0x640 [sunrpc] [<ffffffff810602a0>] ? default_wake_function+0x0/0x20 [<ffffffffa025ac10>] svc_process+0x110/0x160 [sunrpc] [<ffffffffa041ab62>] nfsd+0xc2/0x160 [nfsd] [<ffffffffa041aaa0>] ? nfsd+0x0/0x160 [nfsd] [<ffffffff81091de6>] kthread+0x96/0xa0 [<ffffffff8100c14a>] child_rip+0xa/0x20 [<ffffffff81091d50>] ? kthread+0x0/0xa0 [<ffffffff8100c140>] ? child_rip+0x0/0x20

    Read the article

  • Why doesn't my NamedPipeServerStream wait??

    - by Frank Fella
    I'm working with a NamedPipeServerStream to communicate between two processes. Here is the code where I initialize and connect the pipe: void Foo(IHasData objectProvider) { Stream stream = objectProvider.GetData(); if (stream.Length > 0) { using (NamedPipeServerStream pipeServer = new NamedPipeServerStream("VisualizerPipe", PipeDirection.Out, 1, PipeTransmissionMode.Byte, PipeOptions.Asynchronous)) { string currentDirectory = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location); string uiFileName = Path.Combine(currentDirectory, "VisualizerUIApplication.exe"); Process.Start(uiFileName); if(pipeServer.BeginWaitForConnection(PipeConnected, this).AsyncWaitHandle.WaitOne(5000)) { while (stream.CanRead) { pipeServer.WriteByte((byte)stream.ReadByte()); } } else { throw new TimeoutException("Pipe connection to UI process timed out."); } } } } private void PipeConnected(IAsyncResult e) { } But it never seems to wait. I constantly get the following exception: System.InvalidOperationException: Pipe hasn't been connected yet. at System.IO.Pipes.PipeStream.CheckWriteOperations() at System.IO.Pipes.PipeStream.WriteByte(Byte value) at PeachesObjectVisualizer.Visualizer.Show(IDialogVisualizerService windowService, IVisualizerObjectProvider objectProvider) I would think that after the wait returns everything should be ready to go. If I use pipeServer.WaitForConnection() everything works fine, but hanging the application if the pipe doesn't connect is not an option.
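
    A hedged sketch of one reading of the behaviour (an assumption, not a confirmed fix): the stream only enters the connected state once EndWaitForConnection is called on the IAsyncResult, so a signalled wait handle is not, by itself, enough before WriteByte.

        // Sketch only: keep the 5-second timeout, but complete the asynchronous
        // connect with EndWaitForConnection before any writes are attempted.
        IAsyncResult ar = pipeServer.BeginWaitForConnection(PipeConnected, this);
        if (ar.AsyncWaitHandle.WaitOne(5000))
        {
            pipeServer.EndWaitForConnection(ar); // transitions the pipe to the connected state
            int next;
            while ((next = stream.ReadByte()) != -1) // copy until the end of the source stream
            {
                pipeServer.WriteByte((byte)next);
            }
        }
        else
        {
            throw new TimeoutException("Pipe connection to UI process timed out.");
        }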

    Read the article

  • Home automation using Arduino / XMPP client for Arduino

    - by Ashish
    I am trying to set up a system for automating certain tasks in my home. I am thinking of a solution wherein a server-side application would be able to send/receive commands/data to the Arduino (attached with an Arduino Ethernet Shield) via the web. Here the Arduino may act either as a sensor interface to the server application or as a command executor interface for the server app. E.g. (user story): The overhead water tank in my house has a water level sensor attached to an Arduino (attached with an Arduino Ethernet Shield). Another Arduino (attached with an Arduino Ethernet Shield) is attached to a relay/latch. This relay/latch is then connected to a water pump. Now the server-side application on the web is able to get/receive water level information from the Arduino on the water tank. Depending on the water level information received, the web application should send suitable signals/commands to the Arduino on the water pump to switch 'ON' or switch 'OFF' the water pump. Now for such a system to work across the web, I am thinking of using one of the following types of solutions, in order of my priority: Using XMPP for communication between the server application and the Arduino. Using HTTP polling. Using HTTP hanging GET. For solution number 1, I need to implement an XMPP client that would reside on the Arduino. Is it possible to write an XMPP client small enough to reside on an Arduino? If yes, what is the minimum possible XMPP client functionality that I need to write for the Arduino, so that it would be able to contact XMPP server solutions like GTalk, etc.? For solutions number 2 and 3 I need guidance in implementation. Also, which solution would be cost-effective and easily extendable?

    Read the article

  • Memory leak when using Workflow 4.0 SqlWorkflowInstanceStore and PersistableIdleAction.Unload

    - by Rohland
    Hi, This particular problem is driving me nuts. I wonder if anyone has experienced a similar problem. If I load up a workflow, then unload it and perform a memory snapshot, the result is predictable - my workflow is no longer in memory. However, if I load up a workflow, set the PersistableIdle action to PersistableIdleAction.Unload and let the workflow idle, the workflow remains in memory even though the Unload action fires. I used ANTS Memory Profiler to debug this issue. This is the object retention graph output, showing that an internal object is hanging onto my workflow instance. Can anyone else verify this problem? My code amounts to the following: Create SqlWorkflowInstanceStore and set up the lock owner handle -- At this point I take a memory snapshot Create an instance of Workflow1 Set the PersistableIdle action Apply the instance store to Workflow1 Set up action event handlers for Idle, Unload, UnhandledException etc. Persist the workflow instance Run the workflow instance Wait for the instance to idle (caused by the Delay activity) Ensure the Unload action is fired -- At this point I take a second memory snapshot From the above image, it is clear that the only object referencing Workflow1 is some internal event handler's result which I have no ability to dispose of. Any clues?
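
    For anyone trying to reproduce this, a condensed sketch of the setup steps listed above (the connection string and handler bodies are placeholders and the lock-owner handle step is omitted; this is not the poster's actual code):

        var store = new SqlWorkflowInstanceStore("<connection string>");
        var app = new WorkflowApplication(new Workflow1());

        app.InstanceStore = store;
        app.PersistableIdle = e => PersistableIdleAction.Unload;  // unload when the Delay activity idles
        app.Idle = e => Console.WriteLine("Idle");
        app.Unloaded = e => Console.WriteLine("Unloaded");        // fires, yet the instance stays reachable
        app.OnUnhandledException = e => UnhandledExceptionAction.Terminate;

        app.Persist();
        app.Run();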

    Read the article

  • Lambdas within Extension methods: Possible memory leak?

    - by Oliver
    I just gave an answer to a quite simple question by using an extension method. But after writing it down I remembered that you can't unsubscribe a lambda from an event handler. So far no big problem. But how does all this behave within an extension method?? Below is my code snippet again. So can anyone enlighten me whether this will lead to myriads of timers hanging around in memory if you call this extension method multiple times? I would say no, because the scope of the timer is limited within this function. So after leaving it no one else has a reference to this object. I'm just a little unsure, because we're here within a static function in a static class. public static class LabelExtensions { public static Label BlinkText(this Label label, int duration) { Timer timer = new Timer(); timer.Interval = duration; timer.Tick += (sender, e) => { timer.Stop(); label.Font = new Font(label.Font, label.Font.Style ^ FontStyle.Bold); }; label.Font = new Font(label.Font, label.Font.Style | FontStyle.Bold); timer.Start(); return label; } }
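
    A small sketch of one way to make the cleanup explicit (assuming the System.Windows.Forms.Timer used in the snippet above): disposing the timer inside its own Tick handler ends its lifetime deterministically, so neither the framework's timer bookkeeping nor the captured closure keeps a live timer around.

        timer.Tick += (sender, e) =>
        {
            timer.Stop();
            timer.Dispose(); // releases the underlying timer; only a dead object remains captured
            label.Font = new Font(label.Font, label.Font.Style ^ FontStyle.Bold);
        };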

    Read the article

  • How to forward a 'saved' request stream to another Action within the same controller?

    - by Moe Howard
    We have a need to chunk up large HTTP requests sent by our mobile devices. These smaller chunk streams are merged into a file on the server. Once all chunks are received we need a way to submit the saved merged request to another method (Action) within the same controller that will process this large HTTP request. How can this be done? The code we tried below results in the service hanging. Is there a way to do this without a round-trip? //Open merged chunked file FileStream fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read); //Read stream support variables int bytesRead = 0; byte[] buffer = new byte[1024]; //Build New Web Request. The target Action is called "Upload", this method we are in is called "UploadChunk" HttpWebRequest webRequest; webRequest = (HttpWebRequest)WebRequest.Create(Request.Url.ToString().Replace("Chunk", string.Empty)); webRequest.Method = "POST"; webRequest.ContentType = "text/xml"; webRequest.KeepAlive = true; webRequest.Timeout = 600000; webRequest.ReadWriteTimeout = 600000; webRequest.Credentials = System.Net.CredentialCache.DefaultCredentials; Stream webStream = webRequest.GetRequestStream(); //Hangs here, no errors, just hangs I have looked into using RedirectToAction and RedirectToRoute but these methods don't fit well with what we are looking to do, as we cannot edit the Request.InputStream (as it is read-only) to carry the large request stream. Thanks, Moe
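
    For context, a hedged sketch of how the forward could be completed once GetRequestStream returns (it does not explain the hang itself; the variables are the ones declared in the snippet above): copy the merged file into the request body, close the stream, then read the response from the target Action.

        webRequest.ContentLength = fileStream.Length;
        using (Stream requestStream = webRequest.GetRequestStream())
        {
            while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
            {
                requestStream.Write(buffer, 0, bytesRead); // forward the merged chunks as the POST body
            }
        }
        using (HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse())
        {
            // inspect response.StatusCode or read the body returned by the Upload action here
        }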

    Read the article

  • Silverlight performance with a large number of objects in an org chart

    - by KC
    Hello, I'm working on an org chart project (SL 3) and I'm seeing the UI thread hang when the chart is building around 2,000 nodes; when it renders, it takes about a minute and then the FPS drops to a crawl. Here is the code flow. Page.xaml.cs calls a WCF service that returns a list of AD users. Then we use Linq to build a collection of nodes to bind to OrgChart.cs. OrgChart.cs is a canvas and displays a collection of nodes and connecting lines. Node.cs is a canvas, has user data and can contain child nodes. NodeContent.xaml is a user control that has borders so I can set the background, textblocks to display the user's data, events that handle the selected and expanded nodes, and storyboards that resize the nodes when they are selected or expanded. During hours of debugging I noticed that the InitializeComponent() section, where it loads the XAML, seems to be where the performance hit is happening. System.Windows.Application.LoadComponent(this, new System.Uri("/Silverlight.Custom;component/NodeContent.xaml", System.UriKind.Relative)); So I guess I have two questions. Can threading help in any way with the UI thread hanging while drawing the nodes? How can I avoid the hit when calling this user control? Any advice or direction anyone can lend would be greatly appreciated. Thanks, KC

    Read the article

  • Performance problem with System.Net.Mail

    - by Saif Khan
    I have this unusual problem with mailing from my app. At first it wasn't working (getting "unable to relay" error crap); anyway, I added the proper authentication and it works. My problem now is, if I try to send around 300 emails (each with a 500k attachment) the app starts hanging around 95% through the process. Here is some of my code which is called for each mail to be sent Using mail As New MailMessage() With mail .From = New MailAddress(My.Resources.EmailFrom) For Each contact As Contact In Contacts .To.Add(contact.Email) Next .Subject = "Accounting" .Body = My.Resources.EmailBody 'Back the stream up to the beginning or else the attachment 'will be sent as a zero (0) byte file. attachment.Seek(0, SeekOrigin.Begin) .Attachments.Add(New Attachment(attachment, String.Concat(Item.Year, Item.AttachmentType.Extension))) End With Dim smtp As New SmtpClient("192.168.1.2") With smtp .DeliveryMethod = SmtpDeliveryMethod.Network .UseDefaultCredentials = False .Credentials = New NetworkCredential("username", "password") .Send(mail) End With End Using With item .SentStatus = True .DateSent = DateTime.Now.Date .Save() End With Return I was thinking, can I just prepare all the mails and add them to a collection, then open one SMTP connection and just iterate the collection, calling the send like this Using mail As New MailMessage() ... MailCollection.Add(mail) End Using ... Dim smtp As New SmtpClient("192.168.1.2") With smtp .DeliveryMethod = SmtpDeliveryMethod.Network .UseDefaultCredentials = False .Credentials = New NetworkCredential("username", "password") For Each mail in MailCollection .Send(mail) Next End With

    Read the article

  • Silverlight HttpWebRequest.Create hangs inside async block

    - by jack2010
    I am trying to prototype an RPC call to a JBOSS webserver from Silverlight (4). I have written the code and it is working in a console application - so I know that JBoss is responding to the web request. Porting it to Silverlight 4 is causing issues: let uri = new Uri(queryUrl) // this is the line that hangs let request : HttpWebRequest = downcast WebRequest.Create(uri) request.Method <- httpMethod; request.ContentType <- contentType It may be a sandbox issue, as my Silverlight app is being served off of my file system and the Uri is a reference to the localhost - though I am not even getting an exception. Thoughts? Thx UPDATE 1 I created a new project and ported my code over and now it is working; something must be unstable w/ regard to the F# Silverlight integration still. Still would appreciate thoughts on debugging the "hanging" web create in the old model... UPDATE 2 let uri = Uri("http://localhost./portal/main?isSecure=IbongAdarnaNiFranciscoBalagtas") // this WebRequest.Create works fine let req : HttpWebRequest = downcast WebRequest.Create(uri) let Login = async { let uri = new Uri("http://localhost/portal/main?isSecure=IbongAdarnaNiFranciscoBalagtas") // code hangs on this WebRequest.Create let request : HttpWebRequest = downcast WebRequest.Create(uri) return request } Login |> Async.RunSynchronously I must be missing something; the Async block works fine in the console app - is it not allowed in the Silverlight App?

    Read the article

  • Is Android's motion event handling accurate??

    - by Peterdk
    Bug I have a weird bug in my piano app. Sometimes keys (and thus notes) hang. I did a lot of debugging and narrowed it down to what looks like Android's inaccuracy in motion event handling: DEBUG/(2091): ACTION_DOWN A4 DEBUG/(2091): KeyDown: A4 DEBUG/(2091): ACTION_MOVE A4 => A4 DEBUG/(2091): ACTION_MOVE ignoring DEBUG/(2091): ACTION_MOVE A4 => A4 DEBUG/(2091): ACTION_MOVE ignoring DEBUG/(2091): ACTION_MOVE A4 => A4 DEBUG/(2091): ACTION_MOVE ignoring DEBUG/(2091): ACTION_UP B4 //HOW CAN THIS BE???? DEBUG/(2091): KeyUp: B4 DEBUG/(2091): Stream is null, can't stop DEBUG/(2091): Hanging Note: A4 X=240-287 EventX=292 Y=117-200 EventY=164 DEBUG/(2091): KeyUp Note: B4 X=288-335 EventX=292 Y=117-200 EventY=164 Clearly it can be seen here that out of nowhere I suddenly have an ACTION_UP for another note. Shouldn't I definitely get an ACTION_MOVE first? As shown at the end of the log, it's definitely not an error in region detection, since the ACTION_UP event is clearly in the B4 region. Logging Implementation details Every onTouchEvent() call is logged, so the log is accurate. The relevant pseudo-code for the ACTION_MOVE logging is: Key oldKey = Key.get(event.getHistoricalX(), event.getHistoricalY()); Key newKey = Key.get(event.getX(), event.getY()); Question Is this normal behaviour for Android (the jumping in coordinates)? Am I missing something?

    Read the article

  • How do I get rid of these double quotes?

    - by Danger Angell
    I'm using ym4r to render a Google Map. Relevant portion of Controller code: @event.checkpoints.each do |checkpoint| unless checkpoint.lat.blank? current_checkpoint = GMarker.new([checkpoint.lat, checkpoint.long], :title => checkpoint.name, :info_window => checkpoint.name, :icon => checkpoint.discipline.icon, :draggable => false ) @map.overlay_init(current_checkpoint) end It's this line that is hanging me up: :icon => checkpoint.discipline.icon, Using this to render the map in the view: <%= @map.to_html %> <%= @map.div(:width => 735, :height => 450, :position => 'relative') %> The javascript that is puking looks like this: icon : "mtn_biking" and I need it looking like this: icon : mtn_biking This is the HTML generated: <script type="text/javascript"> var mtn_bike = addOptionsToIcon(new GIcon(),{image : "/images/map/mtn_bike.png",iconSize : new GSize(32,32),iconAnchor : new GPoint(16,32),infoWindowAnchor : new GPoint(16,0)});var map; window.onload = addCodeToFunction(window.onload,function() { if (GBrowserIsCompatible()) { map = new GMap2(document.getElementById("map")); map.setCenter(new GLatLng(37.7,-97.3),4);map.addOverlay(addInfoWindowToMarker(new GMarker(new GLatLng(34.9,-82.22),{icon : "mtn_bike",draggable : false,title : "CP1"}),"CP1",{})); map.addOverlay(addInfoWindowToMarker(new GMarker(new GLatLng(35.9,-83.22),{icon : "flat_water",draggable : false,title : "CP2"}),"CP2",{})); map.addOverlay(addInfoWindowToMarker(new GMarker(new GLatLng(36.9,-84.22),{icon : "white_water",draggable : false,title : "CP3"}),"CP3",{}));map.addControl(new GLargeMapControl()); map.addControl(new GMapTypeControl()); } }); </script> the issue is the double quotes in: icon : "mtn_bike" icon : "flat_water" icon : "white_water" I need a way to get rid of those double quotes in the generated HTML

    Read the article

  • refactor LINQ TO SQL custom properties that instantiate datacontext

    - by Thiago Silva
    I am working on an existing ASP.NET MVC app that started small and has grown with time to require a good re-architecture and refactoring. One thing that I am struggling with is that we've got partial classes of the L2S entities so we could add some extra properties, but these props create a new data context and query the DB for a subset of data. This would be the equivalent of doing the following in SQL, which is not a very good way to write this query as opposed to joins: SELECT tbl1.stuff, (SELECT nestedValue FROM tbl2 WHERE tbl2.Foo = tbl1.Bar), tbl1.moreStuff FROM tbl1 so in short here's what we've got in some of our partial entity classes: public partial class Ticket { public StatusUpdate LastStatusUpdate { get { //this static method call returns a new DataContext but needs to be refactored var ctx = OurDataContext.GetContext(); var su = Compiled_Query_GetLastUpdate(ctx, this.TicketId); return su; } } } We've got some functions that create a compiled query, but the issue is that we also have some DataLoadOptions defined in the DataContext, and because we instantiate a new data context for getting these nested properties, we get an exception "Compiled Queries across DataContexts with different LoadOptions not supported". The first DataContext is coming from a DataContextFactory that we implemented with the refactorings, but this second one is just hanging off the entity property getter. We're implementing the Repository pattern in the refactoring process, so we must stop doing stuff like the above. Does anyone know of a good way to address this issue?
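
    A hedged sketch of one refactoring direction (the repository, table and column names below are hypothetical, not the app's actual schema): resolve the nested value through the same DataContext that loaded the Ticket, supplied by the DataContextFactory, instead of newing one up inside the property getter.

        public class TicketRepository
        {
            private readonly OurDataContext _ctx;

            public TicketRepository(OurDataContext ctx)
            {
                _ctx = ctx; // one context per unit of work, created by the factory
            }

            public StatusUpdate GetLastStatusUpdate(int ticketId)
            {
                // hypothetical table/column names; a single query on the shared context,
                // so no second DataContext and no conflicting LoadOptions
                return _ctx.StatusUpdates
                           .Where(su => su.TicketId == ticketId)
                           .OrderByDescending(su => su.CreatedOn)
                           .FirstOrDefault();
            }
        }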

    Read the article

  • Why is it a bad practice to call System.gc?

    - by zneak
    After answering a question about how to force-free objects in Java (the guy was clearing a 1.5GB HashMap) with System.gc(), I've been told it's a bad practice to call System.gc() manually, but the comments seemed mixed about it. So much so that no one dared to upvote it, nor downvote it. I've been told there that it's a bad practice, but then I've also been told garbage collector runs don't systematically stop the world anymore, and that it could also be only seen as a hint, so I'm kind of at a loss. I do understand that usually the JVM knows better than you when it needs to reclaim memory. I also understand that worrying about a few kilobytes of data is silly. And I also understand that even megabytes of data isn't what it was a few years back. But still, 1.5 gigabytes? And you know there's like 1.5 GB of data hanging around in memory; it's not like it's a shot in the dark. Is System.gc() systematically bad, or is there some point at which it becomes okay? So the question is actually double: Why is it, or is it not, a bad practice to call System.gc()? Is it really a hint under certain implementations, or is it always a full collection cycle? Are there really garbage collector implementations that can do their work without stopping the world? Please shed some light on the various assertions people have made. Where's the threshold? Is it never a good idea to call System.gc(), or are there times when it's acceptable? If any, what are those times?

    Read the article

  • PHP sessions causing Apache to hang indefinitely

    - by Kmaid
    The problem is that every so often a page that writes to a Session will cause Apache to hang forever for a particular session. Once this error occurs for one user, any further modifications to any session of any user will cause the website to hang for this user. This problem has been my sole focus for days. I have a development VPS running Windows 2003 and the latest default version of XAMPP using the standard PHP session handler. The code in question actually runs on two other machines perfectly normally, so my common sense says it's a web server configuration issue, but at this point I am willing to try anything. On further investigation there are no errors in the Apache, PHP or System event log. Resources are abundant and there is no "AJAX shit storm" or more than a couple of writes to a session per page. I have also implemented session_write_close() wherever possible to try and help alleviate the problem. I have checked the sessions directory, which is set to "C:\windows\Temp", and found that once a user enters this hanging phase the corresponding session file is exclusively locked, and the only way to resolve this is to stop Apache and wait a few moments for the files to become unlocked and delete them. I am now wondering if deletion is required. The sessions themselves only contain 4 pieces of information (ShoppingCartID, UserID, UserLevel and Referring URL) and are alphanumerical with an occasional slash. My PHP.INI's session section is configured like this: session.save_handler = files session.save_path = "C:\WINDOWS\Temp" session.use_cookies = 1 session.name = PHPSESSID session.auto_start = 0 session.cookie_lifetime = 0 session.cookie_path = / session.cookie_domain = session.cookie_httponly = session.serialize_handler = php session.gc_probability = 1 session.gc_divisor = 100 session.gc_maxlifetime = 1440 session.bug_compat_42 = 1 session.bug_compat_warn = 1 session.referer_check = session.entropy_length = 0 session.entropy_file = session.cache_limiter = nocache session.cache_expire = 180 session.use_trans_sid = 0 session.hash_function = 0 session.hash_bits_per_character = 4 I have tried everything I can think of and the whole problem is now a blur to me. Any ideas would be appreciated, and thanks for your time reading this :)

    Read the article

  • Encrypting an id in a URL in ASP.NET MVC

    - by Chuck Conway
    I'm attempting to encode the encrypted id in the URL, like this: http://www.calemadr.com/Membership/Welcome/9xCnCLIwzxzBuPEjqJFxC6XJdAZqQsIDqNrRUJoW6229IIeeL4eXl5n1cnYapg+N However, it either doesn't encode correctly and I get slashes '/' in the encryption, or I receive an error from IIS: The request filtering module is configured to deny a request that contains a double escape sequence. I've tried different encodings, each fails: HttpUtility.HtmlEncode HttpUtility.UrlEncode HttpUtility.UrlPathEncode HttpUtility.UrlEncodeUnicode Update The problem was that when I encrypted a Guid and converted it to a base64 string it would contain unsafe URL characters. Of course, when I tried to navigate to a URL containing unsafe characters, IIS (7.5 / Windows 7) would blow up. URL encoding the base64 encrypted string would raise an error in IIS (The request filtering module is configured to deny a request that contains a double escape sequence.). I'm not sure how it detects double encoded strings, but it did. After trying the above methods to encode the base64 encrypted string, I decided to remove the base64 encoding. However, this leaves the encrypted text as a byte[]. I tried UrlEncoding the byte[]; it's one of the overloads hanging off the HttpUtility.UrlEncode method. Again, while it was URL encoded, IIS did not like it and served up a "page not found." After digging around the net I came across a HexEncoding/Decoding class. Applying the hex encoding to the encrypted bytes did the trick. The output is URL-safe. On the other side, I haven't had any problems with decoding and decrypting the hex strings.
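
    For illustration, a minimal sketch of the kind of hex round-trip described above (these helpers are illustrative, not the actual HexEncoding/Decoding class that was found): the output contains only 0-9 and A-F, so it needs no further URL encoding.

        static string ToHex(byte[] bytes)
        {
            // "9A3F..." style output, safe to place directly in a URL segment
            return BitConverter.ToString(bytes).Replace("-", string.Empty);
        }

        static byte[] FromHex(string hex)
        {
            byte[] bytes = new byte[hex.Length / 2];
            for (int i = 0; i < bytes.Length; i++)
            {
                bytes[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
            }
            return bytes;
        }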

    Read the article

  • SQL Server lock/hang issue

    - by mattwoberts
    Hi, I'm using SQL Server 2008 on Windows Server 2008 R2, all sp'd up. I'm getting occasional issues with SQL Server hanging with the CPU usage on 100% on our live server. It seems all the wait time on SQL Server when this happens is given to SOS_SCHEDULER_YIELD. Here is the Stored Proc that causes the hang. I've added the "WITH (NOLOCK)" in an attempt to fix what seems to be a locking issue. ALTER PROCEDURE [dbo].[MostPopularRead] AS BEGIN SET NOCOUNT ON; SELECT c.ForeignId , ct.ContentSource as ContentSource , sum(ch.HitCount * hw.Weight) as Popularity , (sum(ch.HitCount * hw.Weight) * 100) / @Total as Percent , @Total as TotalHits from ContentHit ch WITH (NOLOCK) join [Content] c WITH (NOLOCK) on ch.ContentId = c.ContentId join HitWeight hw WITH (NOLOCK) on ch.HitWeightId = hw.HitWeightId join ContentType ct WITH (NOLOCK) on c.ContentTypeId = ct.ContentTypeId where ch.CreatedDate between @Then and @Now group by c.ForeignId , ct.ContentSource order by sum(ch.HitCount * hw.HitWeightMultiplier) desc END The stored proc reads from the table "ContentHit", which is a table that tracks when content on the site is clicked (it gets hit quite frequently - anything from 4 to 20 hits a minute). So it's pretty clear that this table is the source of the problem. There is a stored proc that is called to add hit tracks to the ContentHit table; it's pretty trivial, it just builds up a string from the params passed in, which involves a few selects from some lookup tables, followed by the main insert: BEGIN TRAN insert into [ContentHit] (ContentId, HitCount, HitWeightId, ContentHitComment) values (@ContentId, isnull(@HitCount,1), isnull(@HitWeightId,1), @ContentHitComment) COMMIT TRAN The ContentHit table has a clustered index on its ID column, and I've added another index on CreatedDate since that is used in the select. When I profile the issue, I see the Stored proc executes for exactly 30 seconds, then the SQL timeout exception occurs. If it makes a difference the web application using it is ASP.NET, and I'm using Subsonic (3) to execute these stored procs. Can someone please advise how best I can solve this problem? I don't care about reading dirty data... Thanks

    Read the article

  • Deploying ASP.NET MVC to IIS6: pages are just blank

    - by BryanGrimes
    I have an MVC app that is actually on a couple other servers but I didn't do the deploy. For this deploy I have added the wildcard to aspnet_isapi.dll which has gotten rid of the 404 error. But the pages are not pulling up, rather everything is just blank. I can't seem to find any IIS configuration differences. The Global.asax.cs file does have routing defined, but as I've seen on a working server, that file isn't just hanging out in the root or anything so obvious. What could I be missing here? All of the servers are running IIS6 and I have compared the setups and they look the same to me at this point. Thanks... Bryan EDIT for the comments thus far: I've looked in the event logs with no luck, and scoured various IIS logs per David Wang: blogs.msdn.com. Below is the Global.asax.cs file... public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("error.axd"); // for Elmah // For deployment to IIS6 routes.Add(new Route ( "{controller}.mvc/{action}/{id}", new RouteValueDictionary(new { action = "Index", id = (string)null }), new MvcRouteHandler() )); routes.MapRoute( "WeeklyTimeSave", "Time/Save", new { controller = "Time", action = "Save" } ); routes.MapRoute( "WeeklyTimeAdd", "Time/Add", new { controller = "Time", action = "Add" } ); routes.MapRoute( "WeeklyTimeEdit", "Time/Edit/{id}", new { controller = "Time", action = "Edit", id = "" } ); routes.MapRoute( "FromSalesforce", "Home/{id}", new { controller = "Home", action = "Index", id = "" }); routes.MapRoute( "Default2", "{controller}/{id}", new { controller = "Home", action = "Index", id = "" } ); routes.MapRoute( "Default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); } protected void Application_Start() { RegisterRoutes(RouteTable.Routes); } } Maybe this is as stupid as the asax file not being somewhere it needs to be, but heck if I know at this point.

    Read the article

  • Debugging in VS 2008 locks a stored procedure

    - by larryq
    Hi everyone, I've got a strange one here. I have a .Net executable that, under the hood, calls a few stored procedures. For whatever reason, one of the stored procs hangs when I'm debugging. If I run the executable outside of Visual Studio things go fine, including this stored proc. It's when I'm debugging that this hangs, and it really hangs. If I stop the debugging session the IDE freezes and I have to kill it via Task Manager. I know which stored procedure has the trouble, as well as the actual statement within it that's the problem. It's calling an update statement that doesn't stand out as particularly special. I can run the identical statement (and the stored procedure itself) from SQL Management Studio with no problem. And, as I mentioned, the exe runs just fine outside the debugger. If I use the SQL activity monitor to see why things are hanging, the wait type says PREEMPTIVE_DEBUG. I'm not sure if that's helpful but if you need more info I'll try to get it to you. I've rebooted my machine (the SQL Server in question is on this box as well) and that didn't do anything, nor did rebuilding the executable. I'm scratching my head on this one and if you have any ideas what to check on next, I'll be happy to listen. Thanks!

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example—it's an approximation of the second derivative of lgamma: double trigamma(double x) { double p; int i; x=x+6; p=1/(x*x); p=(((((0.075757575757576*p-0.033333333333333)*p+0.0238095238095238) *p-0.033333333333333)*p+0.166666666666667)*p+1)/x+0.5*p; for (i=0; i<6 ;i++) { x=x-1; p=1/(x*x)+p; } return(p); } I've translated this into more or less idiomatic Haskell as follows: trigamma :: Double -> Double trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p') where x' = x + 6 p = 1 / x' ^ 2 p' = p / 2 + c / x' c = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66] next (x, p) = (x - 1, 1 / x ^ 2 + p) The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious about whether there's low-hanging fruit that I'm missing—some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.

    Read the article

  • rails application on production not working

    - by Steven
    I have a Rails application in production which is running using mongrel. I can successfully start the mongrel for the application, but when I try to access the application at the URL it is not responding... it is just hanging. This is the mongrel log... but when I hit xxx.xxx.xxx.xx:3001 it is not showing the website, though on development it is working fine. ** Starting Mongrel listening at 0.0.0.0:3001 ** Initiating groups for "name.co.za":"name.co.za". ** Changing group to "name.co.za". ** Changing user to "name.co.za". ** Starting Rails with production environment... ** Rails loaded. ** Loading any Rails specific GemPlugins ** Signals ready. TERM = stop. USR2 = restart. INT = stop (no restart). ** Rails signals registered. HUP = reload (without restart). It might not work well. ** Mongrel 1.1.5 available at 0.0.0.0:3001 ** Writing PID file to /home/name.co.za/shared/log/mongrel.pid

    Read the article

  • BackgroundWorker might be causing my application to hang

    - by alexD
    I have a Form that uses a BackgroundWorker to execute a series of tests. I use the ProgressChanged event to send messages to the main thread, which then does all of the updates on the UI. I've combed through my code to make sure I'm not doing anything to the UI in the background worker. There are no while loops in my code and the BackgroundWorker has a finite execution time (measured in seconds or minutes). However, for some reason when I lock my computer, oftentimes the application will be hung when I log back in. The thing is, the BackgroundWorker isn't even running when this happens. The reason I believe it is related to the BackgroundWorker though is because the form only hangs when the BackgroundWorker has been executed since the application was loaded (it only runs when given a certain user input). I pass this thread a List of TreeNodes from a TreeView in my UI through the RunWorkerAsync method, but I only read those nodes in the worker thread... any modifications I make to them are done in the UI thread through the ProgressChanged event. I do use Thread.Sleep in my worker thread to execute tests at timed intervals (which involves sending messages over a TCP socket, which was not created in the worker thread). I am completely perplexed as to why my application might be hanging. I'm sure I'm doing something 'illegal' somewhere, I just don't know what.
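
    For reference, a sketch of the pattern described above (RunTest, progressBar and selectedNodes are hypothetical stand-ins), with every UI touch confined to ProgressChanged, which BackgroundWorker raises on the UI thread:

        var worker = new BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (s, e) =>
        {
            var nodes = (List<TreeNode>)e.Argument;           // read-only use in the worker
            for (int i = 0; i < nodes.Count; i++)
            {
                RunTest(nodes[i]);                            // hypothetical timed test
                worker.ReportProgress((i + 1) * 100 / nodes.Count, nodes[i]);
                Thread.Sleep(1000);                           // interval between tests
            }
        };

        worker.ProgressChanged += (s, e) =>
        {
            var node = (TreeNode)e.UserState;                 // runs on the UI thread
            node.Text += " - done";                           // all TreeView updates happen here
            progressBar.Value = e.ProgressPercentage;
        };

        worker.RunWorkerAsync(selectedNodes);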

    Read the article

  • Why does the compiler go into suspend mode when it wants to open the database?

    - by rima
    Dear friends, I am trying to connect to the database with fewer lines for my connection string... I found something on the Oracle website, but I don't know why, when the compiler arrives at the line that opens the database, it does nothing?! It goes back to the GUI, but it seems to hang... please help me solve it. P.S. It's funny that the program didn't give me any exception either! These services are active on my computer: > Oracle ORCL VSS Writer Service Start > OracleDBConsolrorcl > OracleJobSchedulerORCL Start > OracleOraDB11g+home1TNSListener Start > oracleServiceORCL Start try { /** * ORCL = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = rima-PC)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl) ) )*/ string oradb = "Data Source=(DESCRIPTION=" + "(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=rima-PC)(PORT=1521)))" + "(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=orcl)));" + "User Id=bird_artus;Password=123456;"; //string oradb = "Data Source=OraDb;User Id=scott;Password=tiger;"; string oradb1 = "Data Source=ORCL;User Id=scott;Password=tiger;"; // C# OracleConnection con = new OracleConnection(); con.ConnectionString = oradb1; String command = "select dname from dept where deptno = 10"; MessageBox.Show(command); OracleDataAdapter oda = new OracleDataAdapter(); oda.SelectCommand = new OracleCommand(); oda.SelectCommand.Connection = con; oda.SelectCommand.CommandText = command; con.Open(); oda.SelectCommand.ExecuteNonQuery(); DataSet ds = new DataSet(); oda.Fill(ds); Console.WriteLine(ds.GetXml()); dataGridView1.DataSource = ds; con.Close(); } catch (Exception ex) { MessageBox.Show(ex.Message.ToString()+Environment.NewLine+ ex.StackTrace.ToString()); }
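
    One hedged way to narrow this down (a debugging sketch, not a fix): open the connection on its own first, so a hang in Open() can be separated from the adapter and query code that follows it.

        // isolate the connect step; if this line blocks, the query code is not the problem
        using (var con = new OracleConnection("Data Source=ORCL;User Id=scott;Password=tiger;"))
        {
            con.Open();
            Console.WriteLine("Connected to: " + con.ServerVersion);
        }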

    Read the article
