Search Results

Search found 96 results on 4 pages for 'stall'.

Page 3 of 4

  • Writing an Iron Python debugger

    - by Kragen
    As a learning exercise I'm writing myself a simple extension / plugin / macro framework using IronPython - I've gotten the basics working, but I'd like to add some basic debugging support to make my script editor easier to work with. I've been hunting around on the internet a bit and I've found a couple of good resources on writing managed debuggers (including Mike Stall's excellent .NET Debugging blog and the MSDN documentation on the CLR Debugging API). I understand that IronPython essentially compiles to IL, but apart from that I'm a tad lost on how to get started. In particular:
    - Are there any significant differences between debugging a dynamic language (such as IronPython) and a static one (such as C#)?
    - Do I need to execute my script in a special way to get IronPython to output suitable debugging information?
    - Is debugging a script running inside the current process going to cause deadlocks, or does IronPython execute my script in a child process?
    - Am I better off looking into how to produce a simple C# debugger first to get the general idea?
    (I'm not interested in the GUI aspect of making a debugger for now - I've already got a pretty good idea of how that might work.)

    Read the article

  • USB enumeration failure in ColdFire MCF52259

    - by Quicksilver
    Anybody with programming experience on the ColdFire line, please help! I'm using the CMX USB-Lite stack with the ColdFire MCF52259. For some reason, USB enumeration fails at the very first step, as soon as I enable the DP pull-up resistor. This is what I'm doing:
    - INT_ENB, OTG_INT_EN and ERR_ENB are set to 0x0.
    - INT_STAT, OTG_INT_STAT and ERR_STAT are set to 0xff (this should clear all interrupts).
    - In the Interrupt Status Register, bits 0 (Mask All) and 53 are made 0; all others are 1.
    - TOK_DNE, USB_RST and STALL interrupts are enabled in INT_ENB.
    - The BDT base address is set.
    - MCF_USB_CTL holds 0x1.
    PROBLEM: After the host issues the first reset (at which point I enable control endpoint 0), instead of the expected Get Descriptor request I'm getting an endless series of resets. At least, that's what it looks like, because the TOK_DNE interrupt never triggers. Is there anything I'm doing wrong?

    Read the article

  • How to slow down the Uploadify plugin script for jQuery

    - by Peter
    Now if that isn't a weird question, I don't know what is. But here is the problem: I have a function in my "onSelect" option that has to collect some data (through AJAX), and I have a function in my "onComplete" option that processes the just-uploaded files based on the data that onSelect collected. However, with very small files, onSelect hasn't finished before the upload completes, and onComplete fails because it lacks the necessary data. Bigger files work just fine. So I'm looking for a way to stall the upload of the file - to let it start only after the function in onSelect has finished fetching the necessary data. Any ideas? Here is an example of the function that onSelect triggers:

        var foo = new Array();
        function checkDB(event, queueID, fileObj) {
            $.post(ajax_folder + 'fetchData',
                { ref: "someValue" },
                function(data) { foo[queueID] = data.result; },
                'json');
            return true;
        }

    Read the article

  • How do I force a file to be deleted? Windows Server 2008

    - by acidzombie24
    On my site a user may upload a file (pic, zip, audio, video, whatever). He may then decide to replace it with a newer revision. This user may upload a file, make a post, then decide to put up a new revision replacing the old (let's say it's a large zip or tar.gz file). There's a good chance people may be downloading it if he sent out an email, or even an IM for the home user. Problem: I need to replace the file while people may still be downloading it, and it may be some minutes before it can be deleted. I don't want my code to stall until it can delete, or to check every second to see if the file is unused (especially bad if another user can start downloading in the meantime, creating a cycle). How do I delete the file while users are downloading it? I don't care if their downloads stop; I just care that the file can be replaced and new downloads get the new revision.

    Read the article

  • MySQL Insert Query Randomly Takes a Long Time

    - by ShimmerTroll
    I am using MySQL to manage session data for my PHP application. When testing the app, it is usually very quick and responsive. However, seemingly at random, the response will stall before finally completing after a few seconds. I have narrowed the problem down to the session write query, which looks something like this:

        INSERT INTO Session VALUES('lvg0p9peb1vd55tue9nvh460a7', '1275704013', '')
        ON DUPLICATE KEY UPDATE sessAccess='1275704013', sessData='';

    The slow query log has this information: Query_time: 0.524446 Lock_time: 0.000046 Rows_sent: 0 Rows_examined: 0. This happens about 1 out of every 10 times; the query usually takes only ~0.0044 sec. The table is InnoDB with about 60 rows. sessId is the primary key with a BTREE index. Since this is accessed on every page view, this is clearly not an acceptable execution time. Why is this happening? Update: the table schema is sessId:varchar(32), sessAccess:int(10), sessData:text.
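
    One hedged possibility, not something the poster has confirmed: with InnoDB's default setting innodb_flush_log_at_trx_commit = 1, every commit forces a log flush and sync to disk, so an occasionally slow fsync surfaces as exactly this kind of random stall on an otherwise trivial INSERT. A minimal sketch for checking and, for disposable session data only, relaxing the setting:

        -- Check the current flush-at-commit behaviour (1 = flush and sync on every commit).
        SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

        -- Syncing the log once per second instead of on every commit trades up to
        -- ~1 second of durability for smoother commit latency. Illustrative only --
        -- verify the durability trade-off is acceptable before using it.
        SET GLOBAL innodb_flush_log_at_trx_commit = 2;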

    Read the article

  • Why does InnoDB keep growing for every UPDATE, without new inserts?

    - by Akash Kava
    I have a table which consists of heavy blobs, and I wanted to conduct some tests on it. I know deleted space is not reclaimed by InnoDB, so I decided to reuse existing records by updating their values instead of creating new records. But I noticed that whether I delete and insert a new entry, or UPDATE an existing row, InnoDB keeps on growing. Assuming I have 100 rows, each storing 500KB of information, my InnoDB size is 10MB; now when I run UPDATE on all rows (no insert, no delete), InnoDB grows by ~8MB for every run I do. All I am doing is storing exactly 500KB of data in each row, with a little modification, and the size of the blob is fixed. What can I do to prevent this? I know about OPTIMIZE TABLE, but I can't use it because in regular usage the table is going to be 60-100GB, and running OPTIMIZE would just stall the entire server.
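
    A small sketch for watching this growth from inside MySQL rather than from the file size (assuming information_schema is available; the schema and table names are placeholders):

        -- Allocated vs. free space inside the table. DATA_FREE climbing together
        -- with the tablespace file suggests old row versions and fragmented pages
        -- rather than net new data.
        SELECT TABLE_NAME,
               DATA_LENGTH,   -- bytes used by rows
               INDEX_LENGTH,  -- bytes used by indexes
               DATA_FREE      -- allocated but currently unused bytes
        FROM information_schema.TABLES
        WHERE TABLE_SCHEMA = 'test_db'      -- hypothetical schema name
          AND TABLE_NAME   = 'blob_table';  -- hypothetical table name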

    Read the article

  • SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

    - by pinaldave
    WRITELOG is one of the most interesting wait types. So far we have seen a lot of different wait types, but this one is associated with the log file, which makes it interesting to deal with. From Book On-Line: WRITELOG occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits. WRITELOG explanation: this wait type is usually seen in heavily transactional databases. When data is modified, it is written both to the log cache and the buffer cache. This wait type occurs while data in the log cache is being flushed to disk; during this time, the session has to wait on WRITELOG. I recently saw this wait type persist at a client's site, where one of the long-running transactions was stopped by the user, causing it to roll back. In the future, I will see if I can re-create this situation on my machine to validate the relationship. Reducing WRITELOG waits - there are several suggestions to reduce this wait stat:
    - Move the transaction log to a separate disk from the MDF and other files.
    - Avoid cursor-like coding methodology and frequent committing of statements.
    - Find the most active file based on IO stall time, using the script written over here (a short sketch follows this excerpt). You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here.
    - Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.
    There are two excellent resources by Paul Randal; I suggest you learn the subject from those videos. The links to the videos are here and here. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)
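
    As a hedged illustration of the IO-stall check mentioned in the list above (the post's own scripts sit behind the "over here" links and are not reproduced), a minimal query against sys.dm_io_virtual_file_stats, available from SQL Server 2005 onwards:

        -- Per-file write-stall totals since the instance started. A log file
        -- (type_desc = 'LOG') with high io_stall_write_ms relative to its
        -- num_of_writes points at the log device as the WRITELOG bottleneck.
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.name                  AS logical_file_name,
               mf.type_desc,
               vfs.num_of_writes,
               vfs.io_stall_write_ms
        FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
        JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id
         AND mf.file_id     = vfs.file_id
        ORDER BY vfs.io_stall_write_ms DESC;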

    Read the article

  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
    At first, I was not planning to write about this wait type. The reason was simple: I have faced it only once in my lifetime so far, perhaps because it is not one of the top 5 wait types. I am not sure whether it is a common wait type or not, but in the samples I had it looks genuinely rare to me. From Book On-Line: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server. LOGBUFFER explanation: the Book On-Line definition of LOGBUFFER seems to be very accurate. On the system where I faced this wait type, the log file (LDF) was placed on a local disk while the data files (MDF, NDF) were on SAN drives; my client was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared. Reducing LOGBUFFER waits - there are several suggestions to reduce this wait stat:
    - Move the transaction log to a separate disk from the MDF and other files (make sure the drive holding your LDF has no IO bottleneck issues).
    - Avoid cursor-like coding methodology and frequent commit statements.
    - Find the most active file based on IO stall time, as shown in the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here.
    - Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.
    If you have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type; from the definitions of the two, you will find their difference. However, both are related to the log, and both can severely degrade performance. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)
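
    A minimal sketch for watching these two log-related waits side by side, using only standard sys.dm_os_wait_stats columns (any thresholds you apply to the output are your own benchmark):

        -- Cumulative totals since instance start (or since the stats were cleared).
        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('WRITELOG', 'LOGBUFFER')
        ORDER BY wait_time_ms DESC;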

    Read the article

  • Tiling Problem Solutions for Various Size "Dominoes"

    - by user67081
    I've got an interesting tiling problem. I have a large image (size 128k, so 131072 squares) with dimensions 256x512. I want to fill this image with certain grain types (a 1x1 tile, a 1x2 strip, a 2x1 strip, and a 2x2 square) with no overlap, no holes, and no extension past the image boundary. Given some probability for each of these grain types, a list of the number required to be placed is generated for each. Obviously an iterative/brute-force method doesn't work well here if we just randomly place the pieces; instead a certain algorithm is required:
    1) All 2x2 square grains are randomly placed until exhaustion.
    2) 1x2 and 2x1 grains are randomly placed, alternating, until exhaustion.
    3) The remaining 1x1 tiles are placed to fill in all holes.
    It turns out this algorithm works pretty well for some cases and has no problem filling the entire image. However, as you might guess, increasing the probability (and thus number) of 1x2 and 2x1 grains eventually causes the placement to stall (since there are too many holes created by the strips and not all of them can be filled). My approach to this has been as follows:
    1) Create a mini-image of size 8x8 or 16x16.
    2) Fill this image randomly, following the algorithm specified above, so that the desired probability of the entire image is realized in the mini-image.
    3) Create N of these mini-images and then randomly, successively place them in the large image.
    Unfortunately there are some drawbacks to this simplification:
    1) Given the small size of the mini-images, nailing an exact probability for the entire image is not possible. For example, if I want p(2x1) = p(1x2) = 0.4, the mini-image may only give 0.41 as the closest probability.
    2) The mini-images create a pseudo-boundary where no overlaps occur, which isn't really descriptive of the model this is being used for.
    3) There is only a fixed number of mini-images, so I'm not sure how random this really is.
    I'm really just looking to brainstorm possible solutions to this. My main concern is to nail down closer probabilities. One might suggest I simply increase the mini-image size; well, I have, and it turns out that in certain cases (p(1x2) = p(2x1) = 0.5) the 16x16 mini-image isn't even iteratively solvable. So it's pretty obvious how difficult it is to solve this randomly for anything greater than 8x8. I'd love to hear some ideas. Thanks

    Read the article

  • Apache mod-pagespeed installation affects mod-spdy?

    - by tim peterson
    Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load a page several times (3-4) before it will load normally. It will then load normally as long as I keep loading pages regularly (every couple of seconds), but it will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only things I've changed, to my knowledge, are that I recently installed mod_spdy and then, a few weeks later, mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned mod_pagespeed off by setting mod_pagespeed off in its pagespeed.conf. Unfortunately, that didn't solve the problem. The lines below are how the last 10 lines of my error.log look:

        # tail -f /var/log/apache2/error.log
        ...
        [32728:32729:ERROR:mod_spdy.cc(162)] request->chunked == 1 in request GET / HTTP/1.1
        [Sat Jun 02 04:50:08 2012] [warn] [client 50.136.93.153] [stream 5]
        [32728:32729:WARNING:http_to_spdy_filter.cc(113)] HttpToSpdyFilter is not the last filter in the chain: chunk

    Any thoughts? Thank you, tim

    Read the article

  • Page reload needed several times before loading normally

    - by tim peterson
    Sorry my question is so vague; I just have no idea where to start in solving it, and I'm quite a novice with servers. Recently my site (an https connection, running on an Amazon EC2 Ubuntu Apache 2.2) has this issue where I need to load a page several times (3-4) before it will load normally. It will then load normally as long as I keep loading pages regularly (every couple of seconds), but it will stall again if I don't load pages for a few minutes. It has nothing to do with my application, because I don't have this problem with the exact same app codebase on the Apache installation on my laptop. The only thing I changed, to my knowledge, is that I installed mod_pagespeed (https://developers.google.com/speed/pagespeed/mod). However, I have since turned it off by setting mod_pagespeed off in my pagespeed.conf. Unfortunately, that didn't solve the problem. I'm wondering, as general advice, how to troubleshoot this. For instance, are there Linux commands to check page-loading performance? Also, it looks like I have lots of new error logs in my /var/log/apache2 directory, which I believe weren't there a few months ago. Lots of this:

        error.log            RewriteLog.log.24.gz  ssl_access.log.40.gz
        error.log.1          RewriteLog.log.25.gz  ssl_access.log.41.gz
        error.log.10.gz      RewriteLog.log.26.gz  ssl_access.log.42.gz
        error.log.11.gz      RewriteLog.log.27.gz

    Any thoughts? Thank you, tim

    Read the article

  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now, and I have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network, and my router doesn't get disconnected from the world; however, for some reason my computer stops receiving packets. If I'm playing an MMO (World of Warcraft in this case, but it has happened with EVE Online as well), all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to reveal that there are no incoming packets. Here's the interesting part: disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude that it was a problem with my router or wireless card. However, I have tweaked all the settings on my router that I could think of, including things like QoS, AP isolation, etc., with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled drivers a few times without any change. Windows Firewall on/off makes no difference. Anyone have any suggestions for debugging this? It's becoming an annoyance.

    Read the article

  • SSH traffic over openvpn connection freezes when I cat a file

    - by user42055
    I have an OpenVPN (version 2.1_rc15 at both ends) connection set up between two Gentoo boxes using shared keys. It works fine for the most part: I use MySQL, HTTP, FTP and scp over the VPN with no problems. But when I ssh from the client to the server over the VPN, weird things happen. I can log in and execute some commands, but if I try to run an ncurses application like top, or try to cat a file, the connection will stall and I'll have to sever the ssh session. I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd", the session will freeze the moment I press enter. I compiled OpenVPN 2.1.1 on my Mac and copied over my config directory from my Gentoo client; the Mac connected, and ssh sessions worked fine without freezing. I then compiled it on my older Gentoo box (2.6.26 kernel), which I am retiring due to a dying hard drive, and ssh over it also works perfectly. Why does it fail on my brand new Gentoo box? I've tried compiling three different kernels in case it was that, but other than that there should be no difference between my older and newer Gentoo boxes that I can think of. Any suggestions on what's wrong?

    Read the article

  • PowerShell 3.0 x64 bit broken after installing KB2506143

    - by Dave Parker
    I have searched using all kinds of variations on relevant terms and I cannot find a single other instance of someone having this exact same problem, so I am hoping someone here may have a clue. Problem: I installed Windows Management Framework 3.0 (KB2506143) by downloading and running Windows6.1-KB2506143-x64.msu from Microsoft.com. Once it completed, I rebooted my machine as requested. After rebooting and logging in, I try to run the 64-bit PowerShell command shell and it comes up for a second, then goes away. The 32-bit shell seems to work fine; it is just the 64-bit one that fails. Looking in the Fusion logs, I found:

        *** Assembly Binder Log Entry (10/4/2012 @ 1:51:48 PM) ***
        The operation failed. Bind result: hr = 0x80070002. The system cannot find the file specified.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorwks.dll
        Running under executable C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe
        --- A detailed error log follows.
        === Pre-bind state information ===
        LOG: User = ********\*****
        LOG: DisplayName = Microsoft.PowerShell.ConsoleHost, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
        <remainder omitted>

    GacUtil reveals that there is a Microsoft.PowerShell.ConsoleHost, Version=1.0.0.0, but not 3.0.0.0. I tried uninstalling KB2506143 (which removed MSVCRT90.dll and caused Windows Live Messenger to fail on load after rebooting again, so I ran a repair install of Windows Live Essentials and that fixed the Messenger problem) and then re-installing it, but nothing changed. If it helps, here are what I think may be the relevant parts of my hardware/software environment. Environment: Dell Latitude E6510, 8GB RAM; Windows 7 Professional 64-bit with SP1; Visual Studio 2010 Professional installed (includes .NET 4.0); Visual Studio 2012 Professional installed; Microsoft Forefront Client Security. Any clues out there? Thanks, Dave

    Read the article

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where largeish (35GB) log backups "stall", both before and after the installation of SQL 2008 SP1. The server log-ships to a standby, so regular log backups are taken at 15-minute intervals. However, after an index reorg causes the log to grow to about 35GB (on a DB with about 17GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes. This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume; however, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was. Can anyone suggest a cause or a diagnostic process for this situation?
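
    A hedged starting point for the diagnosis, using standard DMVs (SQL Server 2005 and later): snapshot the stalled backup's request while it runs, since percent_complete, wait_type and blocking_session_id together usually distinguish a pure IO stall from interference by another session.

        -- Progress and current wait of any running backup.
        SELECT r.session_id,
               r.command,              -- e.g. 'BACKUP LOG'
               r.percent_complete,
               r.wait_type,            -- BACKUPIO = waiting on backup device IO
               r.wait_time,            -- ms spent in the current wait
               r.blocking_session_id   -- non-zero would implicate another session
        FROM sys.dm_exec_requests AS r
        WHERE r.command LIKE 'BACKUP%';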

    Read the article

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu, on Xen. The problem is this: about once a day, at a random time and for about 20-50 minutes, the server becomes completely unresponsive to the outside world. After this period it becomes responsive again as if nothing had happened - it doesn't lose uptime and it doesn't restart; it just starts responding again as if it had been in suspended animation. These outages occur under conditions of unexceptional memory and CPU use, for example 70% memory, 5% CPU. I have stopped all non-essential services, so the usage is very even. The outages don't particularly correlate with times of increased memory/CPU use (during daily tasks); they sometimes occur at times of very low CPU use (<2%), though in the past they also occurred during swapping. These blackouts have occurred under both Ubuntu 12.04 LTS and Ubuntu 14.04 LTS, with no change at all (I upgraded Ubuntu specifically to see if it helped). It is possible to log into our web host's site and use their administration console to see error messages from during this time. Presumably these messages come from the Xen virtualization; the main one goes like this:

        BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3]   (repeats many times)
        SysRq : Emergency Sync

    (Sometimes this is the only message in the console.) Others seen previously under different load situations include:

        BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0]   (repeated many times)

    or:

        INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies)   (repeated many times, with t getting bigger)

    From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All tech support has said is that other Ubuntu installations are not suffering the same problem. Anyone got any ideas or experience with this problem?

    Read the article

  • hg clone has stopped working on my Vista box

    - by vkraemer
    I have a Windows Vista machine that has been connecting to http://hg.netbeans.org productively for a while... until recently. Lately, when I attempt to pull or clone, the update appears to stall. I see the following messages on the screen when I attempt to clone:

        destination directory: web-main
        requesting all changes
        adding changesets

    And then... nothing happens. I have opened the Task Manager and there doesn't appear to be any significant network activity for HOURS. I can contact the server with Firefox and see the proper output, and I can clone from the repo with Solaris and/or Mac OS X, so the issue doesn't appear to be at the 'other end'. I had been running a fairly old version of Mercurial before this started happening; after it started, I upgraded to Mercurial 1.5.2, which did not help resolve the issue at all. What are the likely causes and work-arounds for this?

    Read the article

  • SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28

    - by pinaldave
    The key Dynamic Management View (DMV) that helps us understand wait stats is sys.dm_os_wait_stats; this DMV gives us all the information we need regarding wait stats. However, the interpretation is left to us, which is a challenge, as understanding wait stats can often be quite tricky. We will cover a few individual wait stats in future articles; today we will go over a basic understanding of the DMV. The official Book On-Line reference for this DMV is here: sys.dm_os_wait_stats. I suggest you refer to it for full accuracy. Following is a statement from the online book: "Specific types of wait times during query execution can indicate bottlenecks or stall points within the query. Similarly, high wait times, or wait counts server wide can indicate bottlenecks or hot spots in interaction query interactions within the server instance." This is the statement that inspired me to write this series. Let us first run the following statement against the DMV:

        SELECT *
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC
        GO

    The above statement shows several columns. Here is a quick explanation of each:
    - wait_type - the name of the wait type. There are three different kinds of wait types: resource, queue and external.
    - waiting_tasks_count - this incremental counter is a good indication of how frequently the wait is happening. If this number is very high, it is a good signal for us to investigate that particular wait type. It is quite possible that the wait time is considerably low but the frequency of the wait is very high.
    - wait_time_ms - the total wait time accumulated for the wait type. This total includes signal_wait_time_ms.
    - max_wait_time_ms - the maximum wait time ever observed for that particular wait type. Using this, one can estimate the intensity of the wait type in the past. Again, this maximum will not necessarily occur every time, so do not over-invest yourself here.
    - signal_wait_time_ms - the time between when a thread is marked as runnable and when it actually gets to the running state. If the runnable queue is very long, you will find that this wait time becomes high.
    Additionally, please note that this DMV does not show the current wait type or wait stats; it is a cumulative view of all the wait stats since the server (instance) restarted or the wait stats were cleared. In future blog posts we will also cover two more DMVs which can be helpful in identifying wait-related issues: sys.dm_os_waiting_tasks and sys.dm_exec_requests. Reference: Pinal Dave (http://blog.SQLAuthority.com)
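
    As a small follow-up to the column descriptions above (this query is illustrative and not from the original post), the same DMV can split each wait into its signal portion and its resource portion:

        -- Resource vs. signal time per wait type. A high signal share across the
        -- board suggests CPU pressure (a long runnable queue) rather than waits
        -- on the resources themselves.
        SELECT wait_type,
               wait_time_ms,
               signal_wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_time_ms
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;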

    Read the article

  • Don't Kill the Password

    - by Anthony Trudeau
    A week ago Mr. Honan from Wired.com penned an article on security he titled "Kill the Password: Why a String of Characters Can't Protect Us Anymore." He asserts that the password is not effective and that a new solution is needed. Unfortunately, Mr. Honan was a victim of hacking, and as a result he has a victim's vendetta. His conclusion is ill-conceived, even though there are smatterings of truth and good advice in it. The password is a security barrier, much like a lock on your door. In and of itself it doesn't guarantee protection. You can have a good password, akin to a steel-reinforced door with the best lock money can buy, or you can have a poor password like "password", which is akin to the sliding latch on a bathroom stall. But, just like in the real world, a lock isn't always enough. You can have a lock, a security system, video cameras, guard dogs, and even armed security guards, and none of that guarantees your protection. Even top-secret government agencies can be breached by someone who is just that good (as dramatized in movies like Mission Impossible). And that's the crux of it: there are real hackers out there who are that good. Killer coding ninja monkeys do exist! We still have locks on our doors, because they still serve their role. Passwords are no different. Security doesn't end with the password. Most people would agree that stuffing your mattress with your life savings isn't a good idea even if you have the best locks and security system; most people agree it's safest to have the money in a bank. Essentially this is compartmentalization, and compartmentalization extends to the online world as well. You're at risk if your online banking accounts are linked to the same account as your social networks. This is especially true if you're lackadaisical about linking those social networks to outside sources, including apps. The object here is to minimize the damage that can be done: an attacker should not be able to get into your bank account because they breached your Twitter account. It's time to prioritize once you've compartmentalized. This simply means deciding how much security you want for the different compartments, which I'll call security zones. Social networking applications like Facebook provide a lot of security features. However, security features are almost always a compromise with privacy and convenience. It's similar to an engineering adage, but in this case it's security, convenience, and privacy - pick two. For example, you might use a safe instead of a bank to store your money, because the convenience of having your money closer, or the privacy of not having the bank records, is more important than the added security. The following are lists of security do's and don'ts (these aren't meant to be exhaustive, and each could be an article in itself).
    Security do's:
    - Use strong passwords based on a phrase
    - Use encryption whenever you can (e.g. HTTPS in Facebook)
    - Use a firewall (and learn to use it properly)
    - Configure security on your router (including port blocking)
    - Keep your operating system patched
    - Make routine backups of important files
    - Realize that if you're not paying for it, you're the product
    Security don'ts:
    - Link accounts if at all possible
    - Reuse passwords across your security zones
    - Use real answers for security questions (e.g. mother's maiden name)
    - Trust anything you download
    - Ignore message boxes shown by your system or browser
    - Forget to test your backups
    - Share your primary email indiscriminately
    Only you can decide your comfort level between convenience, privacy, and security. Attackers are going to find exploits in software; software is complex and depends on other software. The exploits are the responsibility of the software company, but your security is always your responsibility. Complete security is an illusion. Still, there is plenty you can do to minimize your risk online, just as you do in the physical world. Be safe, and enjoy what the Internet has to offer. I expect passwords to be necessary for just as long as locks are.

    Read the article

  • SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28

    - by pinaldave
    For any good system, three things are vital: CPU, memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and memory frequently; the disk, however, is often upgraded to improve space, speed or throughput. Today we will look at another IO-related wait type. From Book On-Line: occurs when a task is waiting for I/Os to finish. ASYNC_IO_COMPLETION explanation: a task is waiting for I/O to finish. If the application connected to SQL Server is processing data very slowly, this wait type can occur. Several long-running database operations like BACKUP, CREATE DATABASE, ALTER DATABASE and others can also create this wait type. Reducing ASYNC_IO_COMPLETION waits - when the issue is related to IO, one should check the following things in the IO subsystem:
    - Look at the application code and see whether anything processes the data slowly (like an inefficient loop); it should be rewritten to avoid this wait type.
    - Proper placement of files is very important. Check the file system for proper placement: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables in a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see whether there are high IO read and IO write stalls; see SQL SERVER - Get File Statistics Using fn_virtualfilestats.
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA queue depth. In one of my recent projects the SAN was performing really badly, and at first the SAN administrator would not accept it. After some investigation, he agreed to change the HBA queue depth on the development (test) environment, and as soon as we changed it to a considerably higher value there was a sudden, big improvement in performance.
    - It is very likely that there are no proper indexes on the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, memory and IO use (considering the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the following two articles I wrote about optimizing indexes: Create Missing Indexes and Drop Unused Indexes.
    Checking memory-related Perfmon counters (a T-SQL sketch follows this excerpt):
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently high; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower than 300 seconds is a concern)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking disk-related Perfmon counters:
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)
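
    A hedged sketch of the memory Perfmon checks listed above, done from inside SQL Server instead of the Perfmon UI (sys.dm_os_performance_counters exposes the same SQLServer:* counters; note that on a named instance the object_name prefix differs):

        -- Read two of the memory counters mentioned above directly from T-SQL.
        SELECT object_name, counter_name, cntr_value
        FROM sys.dm_os_performance_counters
        WHERE (object_name LIKE '%Buffer Manager%' AND counter_name = 'Page life expectancy')
           OR (object_name LIKE '%Memory Manager%' AND counter_name = 'Memory Grants Pending');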

    Read the article

  • SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28

    - by pinaldave
    For any good system, three things are vital: CPU, memory and IO (disk). Among these three, IO is the most crucial factor for SQL Server. Looking at real-world cases, I do not see IT people upgrading CPU and memory frequently; the disk, however, is often upgraded to improve space, speed or throughput. Today we will look at an IO-related wait type. From Book On-Line: occurs while waiting for I/O operations to complete. This wait type generally represents non-data page I/Os. Data page I/O completion waits appear as PAGEIOLATCH_* waits. IO_COMPLETION explanation: a task is waiting for I/O to finish. This is a good indication that the IO subsystem needs to be looked at. Reducing IO_COMPLETION waits - when the issue concerns IO, one should look at the following things in the IO subsystem:
    - Proper placement of files is very important. Check the file system for proper placement: LDF and MDF on separate drives, TempDB on another separate drive, hot-spot tables in a separate filegroup (and on a separate disk), etc.
    - Check the file statistics and see whether there are high IO read and IO write stalls; see SQL SERVER - Get File Statistics Using fn_virtualfilestats (a short sketch follows this excerpt).
    - Check the event log and error log for any errors or warnings related to IO.
    - If you are using a SAN (Storage Area Network), check the throughput of the SAN system as well as the configuration of the HBA queue depth. In one of my recent projects the SAN was performing really badly, and the SAN administrator would not accept it. After some investigation, he agreed to change the HBA queue depth on the development (test) environment, and as soon as we changed it to a considerably higher value there was a sudden, big improvement in performance.
    - It is very possible that there are no proper indexes in the system and there are lots of table scans and heap scans. Creating proper indexes can reduce the IO bandwidth considerably. If SQL Server can use an appropriate covering index instead of the clustered index, it can effectively reduce a lot of CPU, memory and IO use (considering the covering index has fewer columns than the clustered table; it depends upon the situation). You can refer to the two articles I wrote about optimizing indexes: Create Missing Indexes and Drop Unused Indexes.
    Checking memory-related Perfmon counters:
    - SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern)
    - SQLServer: Memory Manager\Memory Grants Outstanding (consistently high; benchmark)
    - SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better; greater than 90% for a usually smooth-running system)
    - SQLServer: Buffer Manager\Page Life Expectancy (consistently lower than 300 seconds is a concern)
    - Memory: Available MBytes (information only)
    - Memory: Page Faults/sec (benchmark only)
    - Memory: Pages/sec (benchmark only)
    Checking disk-related Perfmon counters:
    - Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good)
    - Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good)
    - Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good)
    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com)
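
    A minimal sketch of the fn_virtualfilestats check referenced above (the function and these columns date back to SQL Server 2000; the post's own script behind the link is not reproduced here):

        -- Per-file IO stall totals for the current database. High IoStallMS
        -- against modest read/write counts points at slow storage underneath.
        SELECT DbId,
               FileId,
               NumberReads,
               NumberWrites,
               IoStallMS
        FROM ::fn_virtualfilestats(DB_ID(), NULL)
        ORDER BY IoStallMS DESC;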

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • How to display a NotifyIcon during AutoLogon while the SSDP Service is running

    - by Paul Farry
    I've got an application (targeting .NET Framework 2.0) that runs on system startup, and I'm trying to get a NotifyIcon to display. When my program starts up because a user runs it normally, or because it is started as a child process after the system has already logged on, everything is fine. But if my application starts up while the system is performing an AutoLogon using POSReady 2009 (basically XP with single-user set), then the NotifyIcon never becomes active. If you subsequently check the .Visible of the icon (in a timer) at any later point, it always reports Visible = true. If you disable the SSDPSrv service and restart the computer, the icon displays correctly. I have a sneaking suspicion this is related to .NET 3.5 SP1 being installed over the top of a .NET 2 system. Is there some process I should be following to ensure that my NotifyIcon is always available to the user? I have set up RegisterWindowMessage("TaskbarCreated"), but I don't get this message called, except when you forcibly kill Explorer.exe and restart it. Even so, a NotifyIcon internally registers for these notifications anyway, so that shouldn't be required. I'm happy to stall the startup of my program, but once the program starts up, I expect the icon to show correctly.

    Read the article
