Search Results

Search found 229 results on 10 pages for 'correlation'.

Page 8/10 | < Previous Page | 4 5 6 7 8 9 10  | Next Page >

  • Runtime of optimized Primehunter

    - by Setton
    Ok so I need some serious runtime help here! This method should take in an int value, check its primality, and return true if the number is indeed a prime. I understand why the loop only needs to go up to i squared, I understand that the worst case scenario is the case in which either the number is prime (or a multiple of a prime). But I don't understand how to quantify the actual runtime. I have done the loop myself by hand to try to understand the pattern or correlation of the number (n) and how many loops occur, but I literally feel like I keep falling into the same trap every time. I need a new way of thinking about this! I have a hint: "Think about the SIZE of the integer" which makes me want to quantify the literal number of integers in a number in relation to how many iterations it does in the for loop (floor log(n)) +1). BUT IT'S NOT WORKIIIING?! I KNOW it isn't square root n, obviously. I'm asking for Big O notation. public class PrimeHunter { public static boolean isPrime(int n) { boolean answer = (n > 1) ? true : false; //runtime = linear runtime for (int i = 2; i * i <= n; i++) //runtime = ????? { if (n % i == 0) //doesn't occur if it is a prime { answer = false; break; } } return answer; //runtime = linear runtime } }

    Read the article

  • Group Policy error 1006 with and error code 52

    - by Bernesto
    I have a hyper-v cluster operating on win2k8 R2 in a 2003 forest. These servers are at our NOC with a DC that connects to our PDC at HQ via a persistent VPN. The cluster boxes are reporting a error event ID 1006 shown below. The DC is also reporting an error 5805 also shown below. I have found numorus posts regarding 1006 errors, but none for error ID 52's. It's weird, I can ping and I can browse network shares on the DC from each. I thought maybe a DNS or net work issue, but nslook up works too. Event 1006 <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-GroupPolicy" Guid="{AEA1B4FA-97D1-45F2-A64C-4D69FFFD92C9}" /> <EventID>1006</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>1</Opcode> <Keywords>0x8000000000000000</Keywords> <TimeCreated SystemTime="2013-12-17T00:08:19.582292600Z" /> <EventRecordID>41808</EventRecordID> <Correlation ActivityID="{26B10592-6228-4A3E-845B-E04B49702A54}" /> <Execution ProcessID="964" ThreadID="1384" /> <Channel>System</Channel> <Computer>NEOREEFVH1.neoreef.com</Computer> <Security UserID="S-1-5-18" /> </System> <EventData> <Data Name="SupportInfo1">1</Data> <Data Name="SupportInfo2">5012</Data> <Data Name="ProcessingMode">0</Data> <Data Name="ProcessingTimeInMilliseconds">1138</Data> <Data Name="ErrorCode">52</Data> <Data Name="ErrorDescription">Unavailable</Data> <Data Name="DCName" /> </EventData> </Event> Event 5805 Event Type: Error Event Source: NETLOGON Event Category: None Event ID: 5805 Date: 12/16/2013 Time: 2:32:01 PM User: N/A Computer: NEOREEFSRV15 Description: The session setup from the computer NEOREEFVH3 failed to authenticate. The following error occurred: Access is denied. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. Data: 0000: 22 00 00 c0 "..À Here are the networks on the hosts: Any with a "Enabled" Are virtual switches.

    Read the article

  • "The system time has changed" events after waking from sleep

    - by Damir Arh
    Sometimes when my computer running Windows 7 wakes up from sleep, it has to adjust the time. When this happens the following system event is logged: <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'> <System> <Provider Name='Microsoft-Windows-Kernel-General' Guid='{A68CA8B7-004F-D7B6-A698-07E2DE0F1F5D}'/> <EventID>1</EventID> <Version>0</Version> <Level>4</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000010</Keywords> <TimeCreated SystemTime='2010-03-06T19:09:57.500000000Z'/> <EventRecordID>10672</EventRecordID> <Correlation/> <Execution ProcessID='4' ThreadID='56'/> <Channel>System</Channel> <Computer>GAME</Computer> <Security/> </System> <EventData> <Data Name='NewTime'>2010-03-06T19:09:57.500000000Z</Data> <Data Name='OldTime'>2010-03-06T17:34:32.870117200Z</Data> </EventData> <RenderingInfo Culture='sl-SI'> <Message>The system time has changed to ?2010?-?03?-?06T19:09:57.500000000Z from ?2010?-?03?-?06T17:34:32.870117200Z.</Message> <Level>Information</Level> <Task></Task> <Opcode>Info</Opcode> <Channel>System</Channel> <Provider>Microsoft-Windows-Kernel-General</Provider> <Keywords> <Keyword>Time</Keyword> </Keywords> </RenderingInfo> </Event> When this happens (I noticed it twice until now) the old time always corresponds to the time when computer entered sleep. The problem is that if Windows Media Center is scheduled for recording during this time, it just skips it as if the computer was turned off. I never had this problem running Windows Vista on the same machine. Any ideas what could be causing this problem and how to solve it are welcome.

    Read the article

  • Task Scheduler permissions error for some jobs

    - by MaseBase
    I have recently moved to a 64-bit Windows Server 2008 R2. I setup my Scheduled Tasks to run under one user (TaskUser) specifically created for the scheduler and most run just fine. However some of them do not run under TaskUser but will for my own credentials. Here is the Event Log entry I found, which from my research points me to believe that it doesn't have permissions, but it does. It also has the option "Run with highest privileges" checked on. I have seen this particular checkbox work wonders on some tasks, but I have a number of them that it's not helping for. The error is ERROR_ELEVATION_REQUIRED but the user is a member of the administrators group and has folder/file permission and is set to "Run with highest privileges" Log Name: Microsoft-Windows-UAC/Operational Source: Microsoft-Windows-UAC Date: 4/27/2010 2:21:44 PM Event ID: 1 Task Category: (1) Level: Error Keywords: User: LIVE\TaskUser Computer: www2 Description: The process failed to handle ERROR_ELEVATION_REQUIRED during the creation of a child process. Event Xml: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-UAC" Guid="{E7558269-3FA5-46ED-9F4D-3C6E282DDE55}" /> <EventID>1</EventID> <Version>0</Version> <Level>2</Level> <Task>1</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000000</Keywords> <TimeCreated SystemTime="2010-04-27T21:21:44.407053800Z" /> <EventRecordID>19</EventRecordID> <Correlation /> <Execution ProcessID="2460" ThreadID="5960" /> <Channel>Microsoft-Windows-UAC/Operational</Channel> <Computer>www2</Computer> <Security UserID="S-1-5-21-4017510424-2083581016-1307463562-1640" /> </System> <EventData></EventData> </Event> The errors shown in the Task Scheduler History tab display these results and states This operation requires an interactive window station. (0x800705B3) EventID 103 Task Scheduler failed to launch action "F:\App\Path\ConsoleApp.exe" in instance "{1a6d3450-b85a-40c0-b3db-72b98c1aa395}" of task "\taskFolder\taskName". Additional Data: Error Value: 2147943859. EventID 203 Task Scheduler failed to start instance "{1a6d3450-b85a-40c0-b3db-72b98c1aa395}" of "\taskFolder\taskName" task for user "LIVE\TaskUser" . Additional Data: Error Value: 2147943859.

    Read the article

  • IIS 7.5 Error 1007

    - by darkdog
    I have a Windows 2008 R2 Virtual Server. I removed the "Default Web Site" using IIS Manager and now all my other sites now report an error saying they can't access the site (or something similar). I thought I could just remove the role, re-install IIS again and start with a clean slate. After I re-installed IIS it's now reporting the following error in the event log: The World Wide Web Publishing Service (WWW service) failed to register the URL prefix of "http://www.mash-guild.com:80:192.168.245.132/" for the website of "4". The required network connectivity may already be used. The site has been disabled. The data field contains the error number. This is the full version with the german error note (I'm from germany): Der WWW-Publishingdienst (WWW-Dienst) konnte das URL-Präfix "http://www.mash-guild.com:80:192.168.245.132/" für die Website "4" nicht registrieren. Die erforderliche Netzwerkverbindung wird möglicherweise bereits verwendet. Die Website wurde deaktiviert. Das Datenfeld enthält die Fehlernummer. Ereignis-XML: <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> <System> <Provider Name="Microsoft-Windows-IIS-W3SVC" Guid="{05448E22-93DE-4A7A-BBA5-92E27486A8BE}" EventSourceName="W3SVC" /> <EventID Qualifiers="49152">1007</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x80000000000000</Keywords> <TimeCreated SystemTime="2011-03-26T16:14:56.000000000Z" /> <EventRecordID>1435</EventRecordID> <Correlation /> <Execution ProcessID="0" ThreadID="0" /> <Channel>System</Channel> <Computer>WIN-DCJ8SN0QI5J</Computer> <Security /> </System> <EventData> <Data Name="UrlPrefix">http://www.mash-guild.com:80:192.168.245.132/</Data> <Data Name="SiteID">4</Data> <Binary>B7000780</Binary> </EventData> </Event> I would like to know if there is a way to fix this or just get back to the IIS + Server settings I had just after I installed windows 2008 ?

    Read the article

  • "The system time has changed" avents after waking from sleep

    - by Damir Arh
    Sometimes when my computer running Windows 7 wakes up from sleep, it has to adjust the time. When this happens the following system event is logged: <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'> <System> <Provider Name='Microsoft-Windows-Kernel-General' Guid='{A68CA8B7-004F-D7B6-A698-07E2DE0F1F5D}'/> <EventID>1</EventID> <Version>0</Version> <Level>4</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000010</Keywords> <TimeCreated SystemTime='2010-03-06T19:09:57.500000000Z'/> <EventRecordID>10672</EventRecordID> <Correlation/> <Execution ProcessID='4' ThreadID='56'/> <Channel>System</Channel> <Computer>GAME</Computer> <Security/> </System> <EventData> <Data Name='NewTime'>2010-03-06T19:09:57.500000000Z</Data> <Data Name='OldTime'>2010-03-06T17:34:32.870117200Z</Data> </EventData> <RenderingInfo Culture='sl-SI'> <Message>The system time has changed to ?2010?-?03?-?06T19:09:57.500000000Z from ?2010?-?03?-?06T17:34:32.870117200Z.</Message> <Level>Information</Level> <Task></Task> <Opcode>Info</Opcode> <Channel>System</Channel> <Provider>Microsoft-Windows-Kernel-General</Provider> <Keywords> <Keyword>Time</Keyword> </Keywords> </RenderingInfo> </Event> When this happens (I noticed it twice until now) the old time always corresponds to the time when computer entered sleep. The problem is that if Windows Media Center is scheduled for recording during this time, it just skips it as if the computer was turned off. I never had this problem running Windows Vista on the same machine. Any ideas what could be causing this problem and how to solve it are welcome.

    Read the article

  • Windows Server 2008 R2 time keeps changing unexpectedly

    - by lmhost
    There is a serious problem with one of my servers time. At some random hours (5:59 localtime in the following example) it goes 1 hour back and keeps doing this again and again next times it reaches 5:59. Like this: 5:59-4:59 … 5:59- 4:59 and so on. It stays in the loop until I manually update its time. The server OS is Windows 2008 R2 Standard x64, acting as a standalone web server on the internet. time zone (UTC-05:00). system log: Information 4/13/2012 5:59:34 PM Kernel-General 1 None Information 4/13/2012 5:59:34 PM Kernel-General 1 None Information 4/13/2012 5:59:34 PM Kernel-General 1 None Information 4/13/2012 5:59:34 PM Kernel-General 1 None Information 4/13/2012 5:59:34 PM Kernel-General 1 None Information 4/13/2012 5:59:34 PM Kernel-General 1 None all of the above entries have same content: The system time has changed to ?2012?-?04?-?13T21:59:34.500000000Z from ?2012?-?04?-?13T22:59:34.500000000Z. Details XML View: - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> - <System> <Provider Name="Microsoft-Windows-Kernel-General" Guid="{A68CA8B7-004F-D7B6-A698-07E2DE0F1F5D}" /> <EventID>1</EventID> <Version>0</Version> <Level>4</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8000000000000010</Keywords> <TimeCreated SystemTime="2012-04-13T21:59:34.500000000Z" /> <EventRecordID>4060</EventRecordID> <Correlation /> <Execution ProcessID="4" ThreadID="80" /> <Channel>System</Channel> <Computer>********</Computer> <Security UserID="S-1-5-18" /> </System> - <EventData> <Data Name="NewTime">2012-04-13T21:59:34.500000000Z</Data> <Data Name="OldTime">2012-04-13T22:59:34.500000000Z</Data> </EventData> </Event> Any idea about what’s going on? Thanks

    Read the article

  • Crashes and freezes after fixing "BOOTMGR is missing" error

    - by Greg-J
    I came back from a 3-day weekend to a computer that was off. I leave my PC on 24/7, so this was odd. Turn it on to get the dreaded "BOOTMGR is missing" screen. Two attempts at Windows Recovery and it booted into Windows fine. After an hour or so, I get a frozen Chrome and my start bar disappears. Ctrl+Alt+Del brings up an error box telling me that Ctrl+Alt+Del failed to work properly. Clicking on any open application triggers an error (I can't recall the error now, but it essentially just said that the application couldn't be found running or something along those lines). I restart, and again, the same thing happens after a while of use. I turn it on, install the 47 updates I have or so, and then restart it. After a while of use (under an hour), it just freezes completely. My thoughts are: SSDs, RAM or PS. My system specs below: (RAID0) 2 x Crucial M4 CT128M4SSD2 2.5" 128GB SATA III MLC Internal Solid State Drive (SSD) CORSAIR Vengeance 16GB (4 x 4GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Desktop Memory Model CML16GX3M4A1600C9 CORSAIR HX Series HX750 750W ATX12V 2.3 / EPS12V 2.91 SLI Ready CrossFire Ready 80 PLUS GOLD Certified Modular Active 1 x ASUS Maximus IV Gene-Z/GEN3 LGA 1155 Intel Z68 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard 1 x Hitachi GST Deskstar 7K1000.C 0F10383 1TB 7200 RPM SATA 3.0Gb/s 3.5" Internal Hard Drive -Bare Drive 1 x Intel Core i7-2600K Sandy Bridge 3.4GHz (3.8GHz Turbo Boost) LGA 1155 95W Quad-Core Desktop Processor Intel HD Graphics 1 x SAPPHIRE 21197-00-40G Radeon HD 7970 3GB 384-bit GDDR5 PCI Express 3.0 x16 HDCP Ready CrossFireX Support Video Card 1 x Noctua NH-D14 120mm & 140mm SSO CPU Cooler This is all crammed in a pretty small case (NZXT Vulcan) and has been running perfectly problem-free since January. The only thing out of the ordinary is that there is a fan in the case that is now making noise whereas the case has previously been completely silent. I have no reason to believe this is anything more then correlation, but felt it is worth mentioning. I believe it MAY be the SSDs simply because of the BOOTMGR error, but not sure how to test that theory. My belief that it may be the RAM is simply from experience with frozen machines. I haven't had the time to memtest it, but will. The PS being the culprit is something I've picked up by reading similar threads on various forums, and it seems plausible. I am unsure how to test this though. ANY insight whatsover would be greatly appreciated!

    Read the article

  • http://localhost/ always gives 502 unknown host

    - by Nitesh Panchal
    My service for World Wide Web Publishing Service started successfully but whenever I browse to http://localhost/ I always get 502 Unknown host. Also, wampapache is installed side by side but when I stop IIS service and start wampapache from services.msc I get error and when I view it in System event log I get this error: - <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event"> - <System> <Provider Name="Service Control Manager" Guid="{555908d1-a6d7-4695-8e1e-26931d2012f4}" EventSourceName="Service Control Manager" /> <EventID Qualifiers="49152">7024</EventID> <Version>0</Version> <Level>2</Level> <Task>0</Task> <Opcode>0</Opcode> <Keywords>0x8080000000000000</Keywords> <TimeCreated SystemTime="2011-06-12T17:43:28.223498400Z" /> <EventRecordID>346799</EventRecordID> <Correlation /> <Execution ProcessID="456" ThreadID="3936" /> <Channel>System</Channel> <Computer>MACHINENAME</Computer> <Security /> </System> - <EventData> <Data Name="param1">wampapache</Data> <Data Name="param2">%%1</Data> </EventData> </Event> I am fed up of this error and it is driving me nuts. I feel like banging my head against the laptop. I am really serious. Without concentrating on my real application I am trying to solve this issue since 3 hours. I google various threads and few of them said that there could be issue of Reporting Services or Skype. But I have uninstalled Skype and Reporting Services are disabled. What more should I do? I have hosts file present in etc directory and it does have mapping for localhost to 127.0.0.1. What more could I do?

    Read the article

  • Analysing and measuring the performance of a .NET application (survey results)

    - by Laila
    Back in December last year, I asked myself: could it be that .NET developers think that you need three days and a PhD to do performance profiling on their code? What if developers are shunning profilers because they perceive them as too complex to use? If so, then what method do they use to measure and analyse the performance of their .NET applications? Do they even care about performance? So, a few weeks ago, I decided to get a 1-minute survey up and running in the hopes that some good, hard data would clear the matter up once and for all. I posted the survey on Simple Talk and got help from a few people to promote it. The survey consisted of 3 simple questions: Amazingly, 533 developers took the time to respond - which means I had enough data to get representative results! So before I go any further, I would like to thank all of you who contributed, because I now have some pretty good answers to the troubling questions I was asking myself. To thank you properly, I thought I would share some of the results with you. First of all, application performance is indeed important to most of you. In fact, performance is an intrinsic part of the development cycle for a good 40% of you, which is much higher than I had anticipated, I have to admit. (I know, "Have a little faith Laila!") When asked what tool you use to measure and analyse application performance, I found that nearly half of the respondents use logging statements, a third use performance counters, and 70% of respondents use a profiler of some sort (a 3rd party performance profilers, the CLR profiler or the Visual Studio profiler). The importance attributed to logging statements did surprise me a little. I am still not sure why somebody would go to the trouble of manually instrumenting code in order to measure its performance, instead of just using a profiler. I personally find the process of annotating code, calculating times from log files, and relating it all back to your source terrifyingly laborious. Not to mention that you then need to remember to turn it all off later! Even when you have logging in place throughout all your code anyway, you still have a fair amount of potentially error-prone calculation to sift through the results; in addition, you'll only get method-level rather than line-level timings, and you won't get timings from any framework or library methods you don't have source for. To top it all, we all know that bottlenecks are rarely where you would expect them to be, so you could be wasting time looking for a performance problem in the wrong place. On the other hand, profilers do all the work for you: they automatically collect the CPU and wall-clock timings, and present the results from method timing all the way down to individual lines of code. Maybe I'm missing a trick. I would love to know about the types of scenarios where you actively prefer to use logging statements. Finally, while a third of the respondents didn't have a strong opinion about code performance profilers, those who had an opinion thought that they were mainly complex to use and time consuming. Three respondents in particular summarised this perfectly: "sometimes, they are rather complex to use, adding an additional time-sink to the process of trying to resolve the existing problem". "they are simple to use, but the results are hard to understand" "Complex to find the more advanced things, easy to find some low hanging fruit". 
These results confirmed my suspicions: Profilers are seen to be designed for more advanced users who can use them effectively and make sense of the results. I found yet more interesting information when I started comparing samples of "developers for whom performance is an important part of the dev cycle", with those "to whom performance is only looked at in times of crisis", and "developers to whom performance is not important, as long as the app works". See the three graphs below. Sample of developers to whom performance is an important part of the dev cycle: Sample of developers to whom performance is important only in times of crisis: Sample of developers to whom performance is not important, as long as the app works: As you can see, there is a strong correlation between the usage of a profiler and the importance attributed to performance: indeed, the more important performance is to a development team, the more likely they are to use a profiler. In addition, developers to whom performance is an important part of the dev cycle have a higher tendency to use a much wider range of methods for performance measurement and analysis. And, unsurprisingly, the less important performance is, the less varied the methods of measurement are. So all in all, to come back to my random questions: .NET developers do care about performance. Those who care the most use a wider range of performance measurement methods than those who care less. But overall, logging statements, performance counters and third party performance profilers are the performance measurement methods of choice for most developers. Finally, although most of you find code profilers complex to use, those of you who care the most about performance tend to use profilers more than those of you to whom performance is not so important.

    Read the article

  • SQL SERVER – SSMS: Top Object and Batch Execution Statistics Reports

    - by Pinal Dave
    The month of June till mid of July has been the fever of sports. First, it was Wimbledon Tennis and then the Soccer fever was all over. There is a huge number of fan followers and it is great to see the level at which people sometimes worship these sports. Being an Indian, I cannot forget to mention the India tour of England later part of July. Following these sports and as the events unfold to the finals, there are a number of ways the statisticians can slice and dice the numbers. Cue from soccer I can surely say there is a team performance against another team and then there is individual member fairs against a particular opponent. Such statistics give us a fair idea to how a team in the past or in the recent past has fared against each other, head-to-head stats during World cup and during other neutral venue games. All these statistics are just pointers. In reality, they don’t reflect the calibre of the current team because the individuals who performed in each of these games are totally different (Typical example being the Brazil Vs Germany semi-final match in FIFA 2014). So at times these numbers are misleading. It is worth investigating and get the next level information. Similar to these statistics, SQL Server Management studio is also equipped with a number of reports like a) Object Execution Statistics report and b) Batch Execution Statistics reports. As discussed in the example, the team scorecard is like the Batch Execution statistics and individual stats is like Object Level statistics. The analogy can be taken only this far, trust me there is no correlation between SQL Server functioning and playing sports – It is like I think about diet all the time except while I am eating. Performance – Batch Execution Statistics Let us view the first report which can be invoked from Server Node -> Reports -> Standard Reports -> Performance – Batch Execution Statistics. Most of the values that are displayed in this report come from the DMVs sys.dm_exec_query_stats and sys.dm_exec_sql_text(sql_handle). This report contains 3 distinctive sections as outline below.   Section 1: This is a graphical bar graph representation of Average CPU Time, Average Logical reads and Average Logical Writes for individual batches. The Batch numbers are indicative and the details of individual batch is available in section 3 (detailed below). Section 2: This represents a Pie chart of all the batches by Total CPU Time (%) and Total Logical IO (%) by batches. This graphical representation tells us which batch consumed the highest CPU and IO since the server started, provided plan is available in the cache. Section 3: This is the section where we can find the SQL statements associated with each of the batch Numbers. This also gives us the details of Average CPU / Average Logical Reads and Average Logical Writes in the system for the given batch with object details. Expanding the rows, I will also get the # Executions and # Plans Generated for each of the queries. Performance – Object Execution Statistics The second report worth a look is Object Execution statistics. This is a similar report as the previous but turned on its head by SQL Server Objects. The report has 3 areas to look as above. Section 1 gives the Average CPU, Average IO bar charts for specific objects. The section 2 is a graphical representation of Total CPU by objects and Total Logical IO by objects. The final section details the various objects in detail with the Avg. CPU, IO and other details which are self-explanatory. 
At a high-level both the reports are based on queries on two DMVs (sys.dm_exec_query_stats and sys.dm_exec_sql_text) and it builds values based on calculations using columns in them: SELECT * FROM    sys.dm_exec_query_stats s1 CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2 WHERE   s2.objectid IS NOT NULL AND DB_NAME(s2.dbid) IS NOT NULL ORDER BY  s1.sql_handle; This is one of the simplest form of reports and in future blogs we will look at more complex reports. I truly hope that these reports can give DBAs and developers a hint about what is the possible performance tuning area. As a closing point I must emphasize that all above reports pick up data from the plan cache. If a particular query has consumed a lot of resources earlier, but plan is not available in the cache, none of the above reports would show that bad query. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Breaking up the Workday– Overcoming the Workaholic Syndrome

    - by dwahlin
    Hi, my name’s Dan Wahlin and I’m a workaholic – I admit it. It’s good from the standpoint that I get a lot done but it also has a lot of cons associated with it as well that I’m not proud of. I literally can’t watch TV without feeling like I should be doing something more productive (although I have no problem going to see movies at a theater or watching sporting events – that’s my escape I guess). On vacation it’s sometimes difficult the first few days to just “let go” of work and enjoy the time with my family. I always feel like I should be checking email and following up with different business projects. Fortunately, my wife knows me really well after 17 years of marriage and “gently” restricts my usage of laptops and other gadgets while we’re out. She also reminds me that constantly burying my face in gadgets just isn’t cool and shows a distinct lack of self control. On a given day I typically put in between 12 (at a minimum) up to 16-18 hours working on projects. My company does .NET consulting (ASP.NET/jQuery, SharePoint and Silverlight) but we also do a lot in the training space so there’s always a client project, some new courseware or some other deliverable that has to be worked on. My normal process for handling that is to just work my butt off and see how much I can get done. That process has worked well for a long time but when you start realizing that your happiness comes from how much work you accomplished that day then you have a problem. That’s especially true if you have kids (which I do….two awesome boys). It’s almost as if working more hours feels like I’m more successful or something which is of course ridiculous. It may actually mean that I’m too distracted or disorganized. Lately I’ve realized that while I’m still productive and always meet my deadlines, I’m really burnt out by the afternoon and have lost some of the excitement I used to have. Part of that’s normal I think given that I’ve been doing this for close to 15 years now, but in thinking through it more I realized that I just need to get away from the desk and take a break. By far, the happiest time of my life was my childhood. Part of that was due to having awesome parents, having far less responsibility (a big factor I suspect), being able to hang-out with friends at school, playing sports, games, etc. but I think a big part of the overall happiness came from being outside a lot. I lived on my bike as a little kid and as I grew up I shared time between riding an ATV all over the place, shooting hoops on the basketball court, playing golf and working on a golf course (all outside work of course).  Being a software developer and trainer I generally spend 95% or more of my day indoors and only see the sun when driving from place to place or by looking out the window (that’s sad because I live in a suburb of Phoenix, AZ where it’s nearly always sunny). I haven’t looked into any scientific studies on the matter, but I’d be willing to bet there’s a direct correlation between overall productivity/happiness and being outside some throughout the day (sunny or not). But, I wasn’t sure what to do about it since I do have a lot of deadlines I need to meet after all. While talking with my wife last night I mentioned how I feel like I’m in a rut and want to get the “fun” back that I used to have. She immediately said that I need to start making time for breaks (a real quick fact – she’s a lot smarter than me and nearly always right). Of course my first thought was that I’d be less productive taking breaks. 
If I spend 2 hours just relaxing then I’m losing 2 hours of work. But, I thought about it more and realized that I’m probably less productive when I work 10+ hours and only take less than 30 minutes for a lunch break to relax a little. I bet my brain is screaming, “Please let me relax a little so I can figure out these problems you’re trying to resolve!”. So, starting today I’m going to try to break the workaholic habit and spend time outside of the office. That could mean sitting around outside, working out, golfing, or whatever. I’ve decided that no gadgets are allowed during that time and that I shouldn’t work for more than 4 hours straight without taking a break. I have no idea how my little “break the workaholic syndrome” experiment will go or how long it will last, but I’d be very interested in hearing from others on how they keep fresh and focused without working yourself to death. If you have any specific ideas, techniques or practices you follow please share them. There’s a lot more to life than work and some of us (and I’m thinking of myself specifically) need to take a long, hard look at what kind of balance we currently have. I’d hate to look back at my life when I’m 80 years old and say, “The only thing I did was work – I missed out on life!”.

    Read the article

  • The Virtues and Challenges of Implementing Basel III: What Every CFO and CRO Needs To Know

    - by Jenna Danko
    The Basel Committee on Banking Supervision (BCBS) is a group tasked with providing thought-leadership to the global banking industry.  Over the years, the BCBS has released volumes of guidance in an effort to promote stability within the financial sector.  By effectively communicating best-practices, the Basel Committee has influenced financial regulations worldwide.  Basel regulations are intended to help banks: More easily absorb shocks due to various forms of financial-economic stress Improve risk management and governance Enhance regulatory reporting and transparency In June 2011, the BCBS released Basel III: A global regulatory framework for more resilient banks and banking systems.  This new set of regulations included many enhancements to previous rules and will have both short and long term impacts on the banking industry.  Some of the key features of Basel III include: A stronger capital base More stringent capital standards and higher capital requirements Introduction of capital buffers  Additional risk coverage Enhanced quantification of counterparty credit risk Credit valuation adjustments  Wrong  way risk  Asset Value Correlation Multiplier for large financial institutions Liquidity management and monitoring Introduction of leverage ratio Even more rigorous data requirements To implement these features banks need to embark on a journey replete with challenges. These can be categorized into three key areas: Data, Models and Compliance. Data Challenges Data quality - All standard dimensions of Data Quality (DQ) have to be demonstrated.  Manual approaches are now considered too cumbersome and automation has become the norm. Data lineage - Data lineage has to be documented and demonstrated.  The PPT / Excel approach to documentation is being replaced by metadata tools.  Data lineage has become dynamic due to a variety of factors, making static documentation out-dated quickly.  Data dictionaries - A strong and clean business glossary is needed with proper identification of business owners for the data.  Data integrity - A strong, scalable architecture with work flow tools helps demonstrate data integrity.  Manual touch points have to be minimized.   Data relevance/coverage - Data must be relevant to all portfolios and storage devices must allow for sufficient data retention.  Coverage of both on and off balance sheet exposures is critical.   Model Challenges Model development - Requires highly trained resources with both quantitative and subject matter expertise. Model validation - All Basel models need to be validated. This requires additional resources with skills that may not be readily available in the marketplace.  Model documentation - All models need to be adequately documented.  Creation of document templates and model development processes/procedures is key. Risk and finance integration - This integration is necessary for Basel as the Allowance for Loan and Lease Losses (ALLL) is calculated by Finance, yet Expected Loss (EL) is calculated by Risk Management – and they need to somehow be equal.  This is tricky at best from an implementation perspective.  Compliance Challenges Rules interpretation - Some Basel III requirements leave room for interpretation.  A misinterpretation of regulations can lead to delays in Basel compliance and undesired reprimands from supervisory authorities. Gap identification and remediation - Internal identification and remediation of gaps ensures smoother Basel compliance and audit processes.  
However business lines are challenged by the competing priorities which arise from regulatory compliance and business as usual work.  Qualification readiness - Providing internal and external auditors with robust evidence of a thorough examination of the readiness to proceed to parallel run and Basel qualification  In light of new regulations like Basel III and local variations such as the Dodd Frank Act (DFA) and Comprehensive Capital Analysis and Review (CCAR) in the US, banks are now forced to ask themselves many difficult questions.  For example, executives must consider: How will Basel III play into their Risk Appetite? How will they create project plans for Basel III when they haven’t yet finished implementing Basel II? How will new regulations impact capital structure including profitability and capital distributions to shareholders? After all, new regulations often lead to diminished profitability as well as an assortment of implementation problems as we discussed earlier in this note.  However, by requiring banks to focus on premium growth, regulators increase the potential for long-term profitability and sustainability.  And a more stable banking system: Increases consumer confidence which in turn supports banking activity  Ensures that adequate funding is available for individuals and companies Puts regulators at ease, allowing bankers to focus on banking Stability is intended to bring long-term profitability to banks.  Therefore, it is important that every banking institution takes the steps necessary to properly manage, monitor and disclose its risks.  This can be done with the assistance and oversight of an independent regulatory authority.  A spectrum of banks exist today wherein some continue to debate and negotiate with regulators over the implementation of new requirements, while others are simply choosing to embrace them for the benefits I highlighted above. Do share with me how your institution is coping with and embracing these new regulations within your bank. Dr. Varun Agarwal is a Principal in the Banking Practice for Capgemini Financial Services.  He has over 19 years experience in areas that span from enterprise risk management, credit, market, and to country risk management; financial modeling and valuation; and international financial markets research and analyses.

    Read the article

  • Using BizTalk to bridge SQL Job and Human Intervention (Requesting Permission)

    - by Kevin Shyr
    I start off the process with either a BizTalk Scheduler (http://biztalkscheduledtask.codeplex.com/releases/view/50363) or a manual file drop of the XML message.  The manual file drop is to allow the SQL  Job to call a "File Copy" SSIS step to copy the trigger file for the next process and allows SQL  Job to be linked back into BizTalk processing. The Process Trigger XML looks like the following.  It is basically the configuration hub of the business process <ns0:MsgSchedulerTriggerSQLJobReceive xmlns:ns0="urn:com:something something">   <ns0:IsProcessAsync>YES</ns0:IsProcessAsync>   <ns0:IsPermissionRequired>YES</ns0:IsPermissionRequired>   <ns0:BusinessProcessName>Data Push</ns0:BusinessProcessName>   <ns0:EmailFrom>[email protected]</ns0:EmailFrom>   <ns0:EmailRecipientToList>[email protected]</ns0:EmailRecipientToList>   <ns0:EmailRecipientCCList>[email protected]</ns0:EmailRecipientCCList>   <ns0:EmailMessageBodyForPermissionRequest>This message was sent to request permission to start the Data Push process.  The SQL Job to be run is WeeklyProcessing_DataPush</ns0:EmailMessageBodyForPermissionRequest>   <ns0:SQLJobName>WeeklyProcessing_DataPush</ns0:SQLJobName>   <ns0:SQLJobStepName>Push_To_Production</ns0:SQLJobStepName>   <ns0:SQLJobMinToWait>1</ns0:SQLJobMinToWait>   <ns0:PermissionRequestTriggerPath>\\server\ETL-BizTalk\Automation\TriggerCreatedByBizTalk\</ns0:PermissionRequestTriggerPath>   <ns0:PermissionRequestApprovedPath>\\server\ETL-BizTalk\Automation\Approved\</ns0:PermissionRequestApprovedPath>   <ns0:PermissionRequestNotApprovedPath>\\server\ETL-BizTalk\Automation\NotApproved\</ns0:PermissionRequestNotApprovedPath> </ns0:MsgSchedulerTriggerSQLJobReceive>   Every node of this schema was promoted to a distinguished field so that the values can be used for decision making in the orchestration.  The first decision made is on the "IsPermissionRequired" field.     If permission is required (IsPermissionRequired=="YES"), BizTalk will use the configuration info in the XML trigger to format the email message.  Here is the snippet of how the email message is constructed. SQLJobEmailMessage.EmailBody     = new Eai.OrchestrationHelpers.XlangCustomFormatters.RawString(         MsgSchedulerTriggerSQLJobReceive.EmailMessageBodyForPermissionRequest +         "<br><br>" +         "By moving the file, you are either giving permission to the process, or disapprove of the process." +         "<br>" +         "This is the file to move: \"" + PermissionTriggerToBeGenereatedHere +         "\"<br>" +         "(You may find it easier to open the destination folder first, then navigate to the sibling folder to get to this file)" +         "<br><br>" +         "To approve, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestApprovedPath +         "<br><br>" +         "To disapprove, move(NOT copy) the file here: " + MsgSchedulerTriggerSQLJobReceive.PermissionRequestNotApprovedPath +         "<br><br>" +         "The file will be IMMEDIATELY picked up by the automated process.  This is normal.  You should receive a message soon that the file is processed." +         "<br>" +         "Thank you!"     
); SQLJobSendNotification(Microsoft.XLANGs.BaseTypes.Address) = "mailto:" + MsgSchedulerTriggerSQLJobReceive.EmailRecipientToList; SQLJobEmailMessage.EmailBody(Microsoft.XLANGs.BaseTypes.ContentType) = "text/html"; SQLJobEmailMessage(SMTP.Subject) = "Requesting Permission to Start the " + MsgSchedulerTriggerSQLJobReceive.BusinessProcessName; SQLJobEmailMessage(SMTP.From) = MsgSchedulerTriggerSQLJobReceive.EmailFrom; SQLJobEmailMessage(SMTP.CC) = MsgSchedulerTriggerSQLJobReceive.EmailRecipientCCList; SQLJobEmailMessage(SMTP.EmailBodyFileCharset) = "UTF-8"; SQLJobEmailMessage(SMTP.SMTPHost) = "localhost"; SQLJobEmailMessage(SMTP.MessagePartsAttachments) = 2;   After the Permission request email is sent, the next step is to generate the actual Permission Trigger file.  A correlation set is used here on SQLJobName and a newly generated GUID field. <?xml version="1.0" encoding="utf-8"?><ns0:SQLJobAuthorizationTrigger xmlns:ns0="somethingsomething"><SQLJobName>Data Push</SQLJobName><CorrelationGuid>9f7c6b46-0e62-46a7-b3a0-b5327ab03753</CorrelationGuid></ns0:SQLJobAuthorizationTrigger> The end user (the human intervention piece) will either grant permission for this process, or deny it, by moving the Permission Trigger file to either the "Approved" folder or the "NotApproved" folder.  A parallel Listen shape is waiting for either response.   The next set of steps decide how the SQL Job is to be called, or whether it is called at all.  If permission denied, it simply sends out a notification.  If permission is granted, then the flag (IsProcessAsync) in the original Process Trigger is used.  The synchonous part is not really synchronous, but a loop timer to check the status within the calling stored procedure (for more information, check out my previous post:  http://geekswithblogs.net/LifeLongTechie/archive/2010/11/01/execute-sql-job-synchronously-for-biztalk-via-a-stored-procedure.aspx)  If it's async, then the sp starts the job and BizTalk sends out an email.   And of course, some error notification:   Footnote: The next version of this orchestration will have an additional parallel line near the Listen shape with a Delay built in and a Loop to send out a daily reminder if no response has been received from the end user.  The synchronous part is used to gather results and execute a data clean up process so that the SQL Job can be re-tried.  There are manu possibilities here.

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale. However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members). The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance. As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). 
The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors but in many cases this may be challenging. Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level. In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • Simple BizTalk Orchestration & Port Tutorial

    - by bosuch
    (This is a reference for a lunch & learn I'm giving at my company) This demo will create a BizTalk process that monitors a directory for an XML file, loads it into an orchestration, and drops it into a different directory. There’s no real processing going on (other than moving the file from one location to another), but this will introduce you to Messages, Orchestrations and Ports. To begin, create a new BizTalk Project names OrchestrationPortDemo: When the solution has been created, right-click the OrchestrationPortDemo solution name and select Add -> New Item. Add a BizTalk Orchestration named DemoOrchestration: Click Add and the orchestration will be created and displayed in the BizTalk Orchestration Designer. The designer allows you to visually create your business processes: Next, you will add a message (the basic unit of communication) to the orchestration. In the Orchestration View, right-click Messages and select New Message. In the message properties window, enter DemoMessage as the Identifier (the name), and select .NET Classes -> System.Xml.XmlDocument for Message Type. This indicates that we’ll be passing a standard Xml document in and out of the orchestration. Next, you will add Send and Receive shapes to the orchestration. From the toolbox, drag a Receive shape onto the orchestration (where it says “Drop a shape from the toolbox here”). Next, drag a Send shape directly below the Receive shape. For the properties of both shapes, select DemoMessage for Message – this indicates we’ll be passing around the message we created earlier. The Operation box will have a red exclamation mark next to it because no port has been specified. We will do this in a minute. On the Receive shape properties, you must be sure to select True for Activate. This indicates that the orchestration will be started upon receipt of a message, rather than being called by another orchestration. If you leave it set to false, when you try to build the application you’ll receive the error “You must specify at least one already-initialized correlation set for a non-activation receive that is on a non self-correlating port.” Now you’ll add ports to the orchestration. Ports specify how your orchestration will send and receive messages. Drag a port from the toolbox to the left-hand Port Surface, and the Port Configuration Wizard launches. For the first port (the receive port), enter the following information: Name: ReceivePort Select the port type to be used for this port: Create a new Port Type Port Type Name: ReceivePortType Port direction of communication: I’ll always be receiving <…> Port binding: Specify later By choosing “Specify later” you are choosing to bind the port (choose where and how it will send or receive its messages) at deployment time via the BizTalk Server Administration console. This allows you to change locations later without building and re-deploying the application. Next, drag a port to the right-hand Port Surface; this will be your send port. Configure it as follows: Name: SendPort Select the port type to be used for this port: Create a new Port Type Port Type Name: SendPortType Port direction of communication: I’ll always be sending <…> Port binding: Specify later Finally, drag the green arrow on the ReceivePort to the Receive_1 shape, and the green arrow on the SendPort to the Send_1 shape. Your orchestration should look like this: Now you have a couple final steps before building and deploying the application. 
In the Solution Explorer, right-click on OrchestrationPortDemo and select Properties. On the Signing tab, click “Sign the assembly”, and choose <New…> from the drop-down. Enter DemoKey as the Key file name, and deselect “Protect my key file with a password”. This will create the file DemoKey.snk in your solution. Signing the assembly gives it a strong name so that it can be deployed into the global assembly cache (GAC). Next, click the Deployment tab, and enter OrchestrationPortDemo as the Application Name. Save your solution. Click “Build OrchestrationPortDemo”. Your solution should (hopefully!) build with no errors. Click “Deploy OrchestrationPortDemo”. (Note – If you’re running Server 2008, Vista or Win7, you may get an error message. If so, close Visual Studio and run it as an administrator) That’s it! Your application is ready to be configured and fired up in the BizTalk Server Administration console, so stay tuned!

    Read the article

  • Please give the solution of the following programs in R Programming

    - by NEETHU
    Table below gives data concerning the performance of 28 national football league teams in 1976.It is suspected that the no. of yards gained rushing by opponents(x8) has an effect on the no. of games won by a team(y) (a)Fit a simple linear regression model relating games won by y to yards gained rushing by opponents x8. (b)Construct the analysis of variance table and test for significance of regression. (c)Find a 95% CI on the slope. (d)What percent of the total variability in y is explained by this model. (e)Find a 95% CI on the mean number of games won in opponents yards rushing is limited to 2000 yards. Team y x8 1 10 2205 2 11 2096 3 11 1847 4 13 1803 5 10 1457 6 11 1848 7 10 1564 8 11 1821 9 4 2577 10 2 2476 11 7 1984 12 10 1917 13 9 1761 14 9 1709 15 6 1901 16 5 2288 17 5 2072 18 5 2861 19 6 2411 20 4 2289 21 3 2203 22 3 2592 23 4 2053 24 10 1979 25 6 2048 26 8 1786 27 2 2876 28 0 2560 Suppose we would like to use the model developed in problem 1 to predict the no. of games a team will win if it can limit opponents yards rushing to 1800 yards. Find a point estimate of the no. of games won when x8=1800.Find a 905 prediction interval on the no. of games won. The purity of Oxygen produced by a fractionation process is thought to be percentage of Hydrocarbon in the main condenser of the processing unit .20 samples are shown below. Purity(%) Hydrocarbon(%) 86.91 1.02 89.85 1.11 90.28 1.43 86.34 1.11 92.58 1.01 87.33 0.95 86.29 1.11 91.86 0.87 95.61 1.43 89.86 1.02 96.73 1.46 99.42 1.55 98.66 1.55 96.07 1.55 93.65 1.4 87.31 1.15 95 1.01 96.85 0.99 85.2 0.95 90.56 0.98 (a)Fit a simple linear regression model to the data. (b)Test the hypothesis H0:ß=0 (c)Calculate R2 . (d)Find a 95% CI on the slope. (e)Find a 95% CI on the mean purity and the Hydrocarbon % is 1. Consider the Oxygen plant data in Problem3 and assume that purity and Hydrocarbon percentage are jointly normally distributed r.vs (a)What is the correlation between Oxygen purity and Hydrocarbon% (b)Test the hypothesis that ?=0. (c)Construct a 95% CI for ?.

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of datavery similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities. What I would like to do is show each recommendation and I guess some how rate (1-5) as to whether it was good predictor<5 (ie corellation coeffient = 1) of the future stock price (or eps or whatever) or a horrible predictor (ie corellation coeffient = -1) or somewhere inbetween. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish / bearish) based off of something like sp500 price. The components I think that would make sense in the model would be: user direction (long/short) market direction sector of stock The thought is that some users are better in bull markets than bear (and vice versa), and some are better at shorts than longs- and then a cobination the above. I can automatically tag the market direction and sector (based off the market at the time and the equity being recommended). The thought is that I could present a series of screens and allow me to rank each individual recommendation by displaying available data absolute, market and sector out performance for a specfic time period out. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumtion is that a single user is right no more than 57% of the time - but who knows. I could load the system and say "Lets rank the recommendation as a predictor of stock value 90 days forward"; and that would represent a very explicit set of rankings. NOW here is the crux - I want to create some sort of machine learning algorithm that can identify patterns over a series of time so that as recommendations stream into the application we maintain a ranking of that stock (ie. similar to correlation coeeficient) as to the likelihood of that recommendation (in addition to the past series of recommendations ) will affect the price. Now here is the super crux. I have never taken an AI class / read an AI book / never mind specific to machine learning. So I cam looking for guidance - sample or description of a similar system I could adapt. Place to look for info or any general help. Or even push me in the right direction to get started... My hope is to implment this with F# and be able to impress my friends with a new skillset in F# with an implementation of machine learnign and potentially something (application / source) I can include in a tech portfolio or blog space; Thank you for any advice in advance.

    Read the article

  • SignalR recording when a Web Page has closed

    - by Benjamin Rogers
    I am using MassTransit request and response with SignalR. The web site makes a request to a Windows service that creates a file. When the file has been created, the Windows service sends a response message back to the web site. The web site then opens the file and makes it available for the users to see.

    I want to handle the scenario where the user closes the web page before the file is created. In that case I want the created file to be emailed to them. Regardless of whether the user has closed the web page or not, the message handler for the response message will be run. What I want to be able to do is have some way of knowing, within the response message handler, that the web page has been closed.

    This is what I have done already. It doesn't work, but it does illustrate my thinking. On the web page I have:

        $(window).unload(function () {
            if (event.clientY < 0) {
                // $.connection.hub.stop();
                $.connection.exportcreate.setIsDisconnected();
            }
        });

    exportcreate is my Hub name. In setIsDisconnected, would I set a property on Caller? Let's say I successfully set a property to indicate that the web page has been closed. How do I find out that value in the response message handler? This is what it does now:

        protected void BasicResponseHandler(BasicResponse message)
        {
            string groupName = CorrelationIdGroupName(message.CorrelationId);
            GetClients()[groupName].display(message.ExportGuid);
        }

        private static dynamic GetClients()
        {
            return AspNetHost.DependencyResolver.Resolve<IConnectionManager>().GetClients<ExportCreateHub>();
        }

    I am using the message correlation id as a group. Now, for me the ExportGuid on the message is very important; that is what identifies the file. So if I am going to email the created file, I have to do it within the response handler, because I need the ExportGuid value. If I did store a value on Caller in my hub for the web page close, how would I access it in the response handler?

    Just in case you need to know, display is defined on the web page as:

        exportCreate.display = function (guid) {
            setTimeout(function () {
                top.location.href = 'GetExport.ashx?guid=' + guid;
            }, 500);
        };

    GetExport.ashx opens the file and returns it as a response. Thank you, Regards Ben

    Read the article

  • Window Media Player issues two requests for the audio on web page

    - by Ron Harlev
    I'm using Windows Media Player in a web page. I have version 11 installed, so that is the version I'm testing with right now. The player is embedded on the page with this HTML:

        <OBJECT id='MS_mediaPlayer' width="400" height="45"
            classid='CLSID:6BF52A52-394A-11D3-B153-00C04F79FAA6'
            codebase='http://activex.microsoft.com/activex/controls/mplayer/en/nsmp2inf.cab#Version=5,1,52,701'
            standby='Loading Microsoft Windows Media Player components...'
            type='application/x-oleobject'>
            <param name='autoStart' value="false">
            <param name='uiMode' value="invisible">
            <param name='loop' value="false">
        </OBJECT>

    I'm calling in JavaScript:

        MS_mediaPlayer.URL = "SomeAudioFile.mp3";
        MS_mediaPlayer.controls.play();

    When I look at Fiddler I can see that the player actually downloads "SomeAudioFile.mp3" twice. Is there some setting I have wrong? I tried setting "autoPlay" to true and avoiding the call to "play()", and got the same result: two downloads.

    UPDATE: The first request's user-agent is "Windows-Media-Player/11.0.5721.5268". The second has "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; GTB6; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)". It looks like the browser is running the same request a second time. No idea why. Any ideas?

    UPDATE (4/1/10): Still no solution. I debugged the JS thoroughly and there is only one call to MediaPlayer.URL = '.....' to set the audio file. Nothing else triggers the media player to load the file, and there is no other place referencing the audio file on the page. One other interesting fact: the double loading of the audio doesn't happen when I run the browser locally on my development web server, but remote requests to the same web server do generate the double audio loading. I believe I have eliminated any correlation with a specific IE version or media player version; this happens with IE 6-8 and WMP 9-12.

    Read the article

  • Problem with final branch in a parallel activity

    - by Dan Revell
    This might seem like a silly thing to say, "the final branch in a parallel activity", so I'll clarify. It's a parallel activity with three branches, each containing a simple create task, on task changed, and complete task. The branch containing the task that is last to complete seems to break. So every task works in its own right, but the last one encounters a problem.

    Say the user clicks the final task's link to open the attached InfoPath form and submits that. Execution gets to the event handler for that onTaskChanged, where a taskCompleted variable gets set to true, which should exit the while loop. I've successfully hit a breakpoint on this line, so I know that happens. However, the final activity in that branch, the completeTask, doesn't get hit. When submit is clicked in the final form, the operation-in-progress screen stays up for quite a while before returning to the workflow status page. The task that was opened and submitted says "Not Started".

    I can disable any of the branches to leave only two, but the same problem happens with the last one to be completed. Earlier on in the workflow I do essentially the same thing: I have another three-branch parallel activity, with each branch containing a task. That one works correctly, which leads me to believe it might be a problem with having two parallel activities in the same sequential workflow.

    I've considered the possibility that it might be a correlation token problem. The token that every task branch uses is unique to that branch, and its owner activity name is set to that of the branch. It stands to reason that if the taskCompleted variable is indeed getting set to true but the while loop isn't being exited, then there's a wire crossing with the variable somewhere. However, I'd still have thought that the task status back on the workflow status page would at least say that the task is in progress.

    This is a frustrating show-stopper of a bug for me. Any thoughts or suggestions would be much appreciated so I can investigate them.

    Read the article

  • IIS 7.5 + Windows Server 2008 R2 + ASP.NET 4.0 HTTP 500 Error?

    - by Dave
    Hi, I'm having an issue I cannot track down, and I have looked through the forums and not found anything that sheds any light. I have a fresh install of Windows Server 2008 R2 Web, onto which I am trying to load an application I created and tested on a Windows 7 machine running IIS 7.5 and ASP.NET 4.0. Everything works fine on the dev machine, but after using the Web Deployment tool to move it to the server, I now get an HTTP 500 error without a lot of information:

        Module          AspNetInitClrHostFailureModule
        Notification    BeginRequest
        Handler         StaticFile
        Error Code      0x80070002
        Requested URL   http://192.168.1.83:80/
        Physical Path   C:\JustStreamIt
        Logon Method    Not yet determined
        Logon User      Not yet determined
        Failed Request Tracing Log Directory   C:\inetpub\logs\FailedReqLogFiles

    And in my trace file I get:

        Warning - SET_RESPONSE_ERROR_DESCRIPTION
        ErrorDescription   An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur.

        Warning - MODULE_SET_RESPONSE_ERROR_STATUS
        ModuleName          AspNetInitClrHostFailureModule
        Notification        1
        HttpStatus          500
        HttpReason          Internal Server Error
        HttpSubStatus       0
        ErrorCode           2147942402
        ConfigExceptionInfo
        Notification        BEGIN_REQUEST
        ErrorCode           The system cannot find the file specified. (0x80070002)

    And I get the following in the Application Log:

        Log Name:      Application
        Source:        Microsoft-Windows-IIS-W3SVC-WP
        Date:          5/28/2010 2:08:10 PM
        Event ID:      2299
        Task Category: None
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Computer:      win-ltfkdo1dnfp
        Description:   An application has reported as being unhealthy. The worker process will now request a recycle. Reason given: An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. The data is the error.

        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-IIS-W3SVC-WP" Guid="{670080D9-742A-4187-8D16-41143D1290BD}" EventSourceName="W3SVC-WP" />
            <EventID Qualifiers="49152">2299</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>0</Task>
            <Opcode>0</Opcode>
            <Keywords>0x80000000000000</Keywords>
            <TimeCreated SystemTime="2010-05-28T21:08:10.000000000Z" />
            <EventRecordID>1663</EventRecordID>
            <Correlation />
            <Execution ProcessID="0" ThreadID="0" />
            <Channel>Application</Channel>
            <Computer>win-ltfkdo1dnfp</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="Reason">An error message detailing the cause of this specific request failure can be found in the application event log of the web server. Please review this log entry to discover what caused this error to occur. </Data>
            <Binary>02000780</Binary>
          </EventData>
        </Event>

    Anyone have a suggestion about where I should start looking?

    Read the article

  • Making sense of S.M.A.R.T

    - by James
    First of all, I think everyone knows that hard drives fail a lot more than the manufacturers would like to admit. Google did a study indicating that certain raw data attributes reported in the S.M.A.R.T status of hard drives can have a strong correlation with the future failure of the drive:

    "We find, for example, that after their first scan error, drives are 39 times more likely to fail within 60 days than drives with no such errors. First errors in reallocations, offline reallocations, and probational counts are also strongly correlated to higher failure probabilities. Despite those strong correlations, we find that failure prediction models based on SMART parameters alone are likely to be severely limited in their prediction accuracy, given that a large fraction of our failed drives have shown no SMART error signals whatsoever."

    Seagate seems to be trying to obscure this information about their drives by claiming that only their software can accurately determine the status of the drive, and by the way, their software will not tell you the raw data values for the S.M.A.R.T attributes. Western Digital has made no such claim to my knowledge, but their status reporting tool does not appear to report raw data values either.

    I've been using HDTune and smartctl from smartmontools in order to gather the raw data values for each attribute. I've found that I am indeed comparing apples to oranges when it comes to certain attributes. For example, most Seagate drives report many millions of read errors, while Western Digital drives show 0 read errors 99% of the time. I've also found that Seagate will report many millions of seek errors, while Western Digital always seems to report 0.

    Now for my question: how do I normalize this data? Is Seagate producing millions of errors while Western Digital is producing none? Wikipedia's article on S.M.A.R.T status says that manufacturers have different ways of reporting this data. Here is my hypothesis: I think I found a way to normalize (is that the right term?) the data. Seagate drives have an additional attribute that Western Digital drives do not have (Hardware ECC Recovered). When you subtract the read error count from the ECC recovered count, you'll probably end up with 0. This seems to be equivalent to Western Digital's reported "Read Error" count. In other words, Western Digital only reports read errors that it cannot correct, while Seagate counts up all read errors and tells you how many of those it was able to fix.

    I had a Seagate drive where the ECC Recovered count was less than the read error count, and I noticed that many of my files were becoming corrupt. This is how I came up with my hypothesis. The millions of seek errors that Seagate produces are still a mystery to me. Please confirm or correct my hypothesis if you have additional information.
    Here is the SMART status of my Western Digital drive, just so you can see what I'm talking about:

        james@ubuntu:~$ sudo smartctl -a /dev/sda
        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Device Model:     WDC WD1001FALS-00E3A0
        Serial Number:    WD-WCATR0258512
        Firmware Version: 05.01D05
        User Capacity:    1,000,204,886,016 bytes
        Device is:        Not in smartctl database [for details use: -P showall]
        ATA Version is:   8
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Thu Jun 10 19:52:28 2010 PDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
          3 Spin_Up_Time            0x0027 179   175   021    Pre-fail Always  -           4033
          4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           270
          5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
          7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
          9 Power_On_Hours          0x0032 098   098   000    Old_age  Always  -           1468
         10 Spin_Retry_Count        0x0032 100   100   000    Old_age  Always  -           0
         11 Calibration_Retry_Count 0x0032 100   100   000    Old_age  Always  -           0
         12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           262
        192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           46
        193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           223
        194 Temperature_Celsius     0x0022 105   102   000    Old_age  Always  -           42
        196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
        197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
        198 Offline_Uncorrectable   0x0030 200   200   000    Old_age  Offline -           0
        199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
        200 Multi_Zone_Error_Rate   0x0008 200   200   000    Old_age  Offline -           0
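    A small sketch of the normalization hypothesis above, assuming Python and the plain-text output of smartctl -A. The attribute names are as printed by smartctl; the idea of subtracting Hardware_ECC_Recovered from Raw_Read_Error_Rate is the poster's hypothesis, not anything documented by the vendors, and the regex parsing is deliberately simplistic:

        import re
        import subprocess

        def raw_attributes(device):
            """Return {attribute_name: raw_value} parsed from smartctl -A output."""
            out = subprocess.check_output(["smartctl", "-A", device], text=True)
            attrs = {}
            for line in out.splitlines():
                # ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
                m = re.match(r"\s*\d+\s+(\S+)\s+0x[0-9a-fA-F]+\s+\d+\s+\d+\s+\d+"
                             r"\s+\S+\s+\S+\s+\S+\s+(\d+)", line)
                if m:
                    attrs[m.group(1)] = int(m.group(2))
            return attrs

        attrs = raw_attributes("/dev/sda")
        read_errors = attrs.get("Raw_Read_Error_Rate", 0)
        ecc_recovered = attrs.get("Hardware_ECC_Recovered", 0)  # absent on the WD drive above
        # Hypothesis: this difference is what Western Digital reports directly
        print("Uncorrected read errors (hypothesis):", read_errors - ecc_recovered)

    On the Western Digital output above this simply prints the raw read error count (0), since the ECC attribute is missing; on a Seagate drive it would show whatever read errors were not recovered.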

    Read the article

  • Remote Desktop failed logon event 4625 not logging correctly on 2008 Terminal Services server

    - by Zone12
    When I use the new Remote Desktop with SSL and try to log on with bad credentials, it logs a 4625 event as expected. The problem is that it doesn't log the IP address, so I can't block malicious logons in our firewall. The event looks like this:

        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{00000000-0000-0000-0000-000000000000}" />
            <EventID>4625</EventID>
            <Version>0</Version>
            <Level>0</Level>
            <Task>12544</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8010000000000000</Keywords>
            <TimeCreated SystemTime="2012-04-13T06:52:36.499113600Z" />
            <EventRecordID>467553</EventRecordID>
            <Correlation />
            <Execution ProcessID="544" ThreadID="596" />
            <Channel>Security</Channel>
            <Computer>ontheinternet</Computer>
            <Security />
          </System>
          <EventData>
            <Data Name="SubjectUserSid">S-1-0-0</Data>
            <Data Name="SubjectUserName">-</Data>
            <Data Name="SubjectDomainName">-</Data>
            <Data Name="SubjectLogonId">0x0</Data>
            <Data Name="TargetUserSid">S-1-0-0</Data>
            <Data Name="TargetUserName">notauser</Data>
            <Data Name="TargetDomainName">MYSERVER-PC</Data>
            <Data Name="Status">0xc000006d</Data>
            <Data Name="FailureReason">%%2313</Data>
            <Data Name="SubStatus">0xc0000064</Data>
            <Data Name="LogonType">3</Data>
            <Data Name="LogonProcessName">NtLmSsp</Data>
            <Data Name="AuthenticationPackageName">NTLM</Data>
            <Data Name="WorkstationName">MYSERVER-PC</Data>
            <Data Name="TransmittedServices">-</Data>
            <Data Name="LmPackageName">-</Data>
            <Data Name="KeyLength">0</Data>
            <Data Name="ProcessId">0x0</Data>
            <Data Name="ProcessName">-</Data>
            <Data Name="IpAddress">-</Data>
            <Data Name="IpPort">-</Data>
          </EventData>
        </Event>

    It seems that because the logon type is 3, and not 10 like the old RDP sessions, the IP address and other information is not stored. The machine I am trying to connect from is on the internet and not on the same network as the server. Does anyone know where this information is stored (and what other events are generated with a failed logon)? Any help will be much appreciated.

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URL's where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here; I haven't found any example of someone doing the same, and what I tried doesn't seem to be working. I'm trying to send a 403 to bots that don't respect the robots.txt file (even after downloading it several times), specifically Googlebot. The site serves the following robots.txt definition:

        User-agent: *
        Disallow: /*/*/page/

    The intent is to allow Google to browse whatever they can find on the site but to return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally, adding paging block after block:

        my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//page/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    It's a WordPress site, by the way. I don't want those pages to show up. Even after the robots.txt info got through, they only stopped for a while and then began crawling again; it just never stops. I do want real people to see these pages. As you can see, Google gets a 403, but when I try the same URL myself in a browser I get a 404 back. I want browsers to pass.

        root@my_domain:# nginx -V
        nginx version: nginx/1.2.0

    I tried different approaches, using a map and plain old "nono" ifs, and they both act the same. Under the http section:

        map $http_user_agent $is_bot {
            default 0;
            ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1;
        }

    Under the server section:

        location ~ /(\d+)/(\d+)/page/ {
            if ($is_bot) {
                return 403; # Please respect the robots.txt file !
            }
        }

    I recently had to polish up my Apache skills for a client where I did about the same thing, like this:

        # Block real engines not respecting robots.txt but allow correct calls to pass
        # Google
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR]
        # Bing
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR]
        # msnbot
        RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR]
        # Slurp
        RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC]
        # block all page searches, the rest may pass
        RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR]
        # or with the wpmp_switcher=mobile parameter set
        RewriteCond %{QUERY_STRING} wpmp_switcher=mobile
        # ISSUE 403 / SERVE ERRORDOCUMENT
        RewriteRule .* - [F,L]
        # End if match

    This does a bit more than I asked nginx to do, but it's about the same principle, and I'm having a hard time figuring it out for nginx. So my question is: why would nginx serve my browser a 404? Why isn't the request passing through? The regex isn't matching for my UA:

        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5"

    There are tons of examples for blocking based on UA alone, and that's easy. It also looks like the matching location is final, i.e. it's not "falling through" for a regular user; I'm pretty certain this has some correlation with the 404 I get in the browser. As a cherry on top of everything, I also want Google to disregard the parameter wpmp_switcher=mobile; wpmp_switcher=desktop is fine, but I just don't want the same content being crawled multiple times.
    I even ended up adding wpmp_switcher=mobile via the Google Webmaster Tools pages (which required me to sign up); that also stopped the crawling for a while, but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone spare a few minutes of their lives and push me in the right direction, please? I really appreciate ANY response that makes me think harder ;-)
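    A quick way to reproduce the 403-versus-404 difference from the command line, assuming Python with the requests library and reusing the URL and user-agent strings from the log excerpt above (the exact URL is illustrative, not a real endpoint):

        import requests

        # Hypothetical test URL taken from the access-log excerpt above
        url = "http://my_domain.com/2011/06/page/3/"
        user_agents = {
            "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
            "chrome": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) "
                      "Chrome/19.0.1084.30 Safari/536.5",
        }

        for name, ua in user_agents.items():
            # allow_redirects=False so we see exactly what nginx answers for this location
            r = requests.get(url, headers={"User-Agent": ua}, allow_redirects=False, timeout=10)
            print(f"{name}: HTTP {r.status_code}")

    Seeing 403 for the bot UA and 404 for the browser UA from the same URL would confirm that the location block is matching for everyone and that the 404 comes from whatever handles the request after the bot check.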

    Read the article

< Previous Page | 4 5 6 7 8 9 10  | Next Page >