Search Results

Search found 1631 results on 66 pages for 'statistics'.


  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This month has been a very interesting one for SQLAuthority.com: we had multiple and varied puzzles in which everybody participated, and lots of interesting conversation which we have shared. Let us start with the latest puzzles and continue going down. A few answers were also posted on Facebook.

    SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator This puzzle involves NULL and throws an error. The challenge is to resolve the error, and there are multiple ways to do so. Readers have contributed various methods, and a few of them have even explained why this error shows up. NULL is a very important part of the database, and if one of the columns has NULL the result can be totally different from the one expected.

    SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers I modified a script provided by a friend to find the greatest of two numbers. My script has a small bug in it. However, lots of readers have suggested better scripts. Madhivanan has written a blog post on the subject over here.

    SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. Now one can take part here and win exciting prizes.

    SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL Madhivanan asked a very interesting question on his blog about how to generate zero without using any numbers in T-SQL. He has demonstrated various methods by which one can generate zero. I asked the same question on my blog and got many interesting answers, which I have shared.

    SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once I have to accept that this was the most difficult puzzle. In this puzzle I asked why, even though the settings are correct, the statistics of the tables are not getting updated. The puzzle tests various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving this puzzle. It was interesting: many took interest, but only a few got it right.

    SQL SERVER – Question to You – When to use Function and When to use Stored Procedure This is a rather straightforward question and not the typical puzzle. The answers from readers are great; however, there is still room for more detailed answers.

    SQL SERVER – Selecting Domain from Email Address I wrote on selecting domains from email addresses. Madhivanan made a puzzle out of a simple question and wrote a follow-up post over here. In his post he shows various ways one can extract the domain from a list of email addresses.

    Well, this is not a puzzle but an amazing guest post by Feodor Georgiev, who has written on the subject of Job Interviewing the Right Way (and for the Right Reasons). An article which everyone should read.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
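    As a small, hedged illustration of the kind of script the "Maximum between Two Numbers" puzzle deals with (the actual scripts are on the linked posts; the variables below are placeholders, not taken from the original puzzle), one straightforward T-SQL approach is a CASE expression:

        DECLARE @a INT = 42, @b INT = 17;  -- sample values for illustration only
        -- Return the greater of the two values
        SELECT CASE WHEN @a >= @b THEN @a ELSE @b END AS GreatestValue;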

    Read the article

  • ArchBeat Top 10 for December 2-8, 2012

    - by Bob Rhubart
    The Top 10 most-clicked items shared on the OTN ArchBeat Facebook page for the week of December 2-8, 2012:

    Configure Oracle SOA JMSAdapter to Work with WLS JMS Topics Another of the four posts published on Dec 4 by the Fusion Middleware A-Team blogger identified as "fip" illustrates "how to configure the JMS Topic, the JmsAdapter connection factory, as well as the composite so that the JMS Topic messages will be evenly distributed to same composite running off different SOA cluster nodes without causing duplication."

    Web Service Example - Part 3: Asynchronous Part 3 in this series from the Oracle ADF Mobile blog looks at "firing the web service asynchronously and then filling in the UI when it completes." Denis says, "This can be useful when you have data on the device in a local store and want to show that to the user while the application uses lazy loading from a web service to load more data."

    Advanced Oracle SOA Suite Oracle Open World 2012 SOA Presentations Oracle SOA & BPM Partner Community blogger Juergen Kress shares a list of 13 SOA presentations delivered or moderated by Oracle SOA Product Management at OOW12 in San Francisco.

    Oracle WebLogic Server WLS Domain Browser My colleague Jeff Davies, a frequent speaker at OTN Architect Day events and a genuinely nice guy, emailed me last night with this message: "I just came across this app on Google Play. It allows WebLogic administrators to browse WLS 12c domain information. I installed it on my phone and tried it out. Works very fast." I'm an iPhone guy, but I'm perfectly comfortable taking Jeff at his word. The app is called WLS Domain Browser. Follow the link for more info from the Google Play site.

    Retrieve Performance Data from SOA Infrastructure Database Another of the four blog posts published on Dec 4 by very busy Oracle Fusion Middleware A-Team member "fip," this one offers "examples of some basic SQL queries you can run against the infrastructure database of Oracle SOA Suite 11G to acquire the performance statistics for a given period of time."

    How to Achieve OC4J RMI Load Balancing "Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field [about] how OC4J RMI load balancing works," says the Oracle Fusion Middleware A-Team member known only as "fip." "Hence I decide to dust off an old tech note that I wrote a few years back and share it with the general public."

    From XaaS to Java EE – Which damn cloud is right for me in 2012? Oracle ACE Director Markus Eisele wrestles with a timely technical issue and shares his observations on several of the alternatives.

    Exalogic 2.0.1 Tea Break Snippets - Creating a ModifyJeOS VirtualBox "One of the main advantages of this is that Templates can be created away from the Exalogic Environment," explains The Old Toxophilist. (BTW: I had to look it up: a toxophilist is one who collects bows and arrows.)

    ADF Mobile - Implementing Reusable Mobile Architecture "Reusability was always a strong part of ADF," says Oracle ACE Director Andrejus Baranovskis. "The same high reusability level is supported now in ADF Mobile." The objective of this post is "to prove technically that [the] reusable architecture concept works for ADF Mobile."

    Using BPEL Performance Statistics to Diagnose Performance Bottlenecks Someone had a busy day… This post, one of four published on Dec 4 by a member of the Oracle Fusion Middleware A-Team identified only as "fip," offers details on how to "enable, retrieve and interpret the performance statistics, before the future versions provides a more pleasant user experience."

    Thought for the Day "If you're afraid to change something it is clearly poorly designed." — Martin Fowler Source: SoftwareQuotes.com

    Read the article

  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see what is going on under the hood of your browser.

    How Much Memory are the Extensions Using?
    Here is our test browser with a new tab and the Extensions Page open, five enabled extensions, and one disabled at the moment. You can access Chrome's Task Manager using the Page Menu, going to Developer, and selecting Task manager… Or by right-clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the "keyboard ninjas". Sitting idle, as shown above, here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking then you can easily modify the information that is available by right-clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory.

    Using the about:memory Page to View Memory Usage
    Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds link in the lower left corner of the Task Manager Window. Focusing on the four distinct areas, you can see the exact version of Chrome that is currently installed on your system… View the Memory & Virtual Memory statistics for Chrome… Note: If you have other browsers running at the same time you can view statistics for them here too. See a list of the Processes currently running… And the Memory & Virtual Memory statistics for those processes.

    The Difference with the Extensions Disabled
    Just for fun we decided to disable all of the extensions in our test browser… The Task Manager Window is looking rather empty now, but the memory consumption has definitely seen an improvement.

    Comparing Memory Usage for Two Extensions with Similar Functions
    For our next step we decided to compare the memory usage for two extensions with similar functionality. This can be helpful if you want to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial (see our review here). The stats for Speed Dial…quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly, both were nearly identical in the amount of memory being used.

    Purging Memory
    Perhaps you like the idea of being able to "purge" some of that excess memory consumption. With a simple command switch modification to Chrome's shortcut(s) you can add a Purge Memory Button to the Task Manager Window as shown below. Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption.

    Conclusion
    We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources, every little bit definitely helps out.
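    For reference, the "command switch modification" mentioned above means appending a flag to the target of the Chrome shortcut. The exact switch name is an assumption here (it was reportedly --purge-memory-button in Chrome builds of that era and has since been removed), and the path below is a placeholder, so treat this purely as a sketch of the idea:

        "C:\Path\To\chrome.exe" --purge-memory-button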

    Read the article

  • 256 Worker Role 3D Rendering Demo is now a Lab on my Azure Course

    - by Alan Smith
    Ever since I came up with the crazy idea of creating an Azure application that would spin up 256 worker roles (please vote if you like it) to render a 3D animation created using the Kinect depth camera, I have been trying to think of something useful to do with it. I have also been busy working on developing training materials for a Windows Azure course that I will be delivering through a training partner in Stockholm, and for customers wanting to learn Windows Azure. I hit on the idea of combining the render demo and a course lab, creating a lab where the students would create and deploy their own mini render farms, which would participate in a single render job consisting of 2,000 frames. The architecture of the solution is shown below.

    As students would be creating and deploying their own applications, I thought it would be fun to introduce some competitiveness into the lab. In the 256 worker role demo I capture the rendering statistics for each role, so it was fairly simple to include the student's name in these statistics. This allowed the process monitor application to capture the number of frames each student had rendered and display a high-score table. When I demoed the application I deployed one instance that started rendering a frame every few minutes, and the challenge for the students was to deploy and scale their applications, and then overtake my single role instance by the end of the lab time. I had the process monitor running on the projector during the lab so the class could see the progress of their deployments, and how they were performing against my implementation and their classmates.

    When I tested the lab for the first time in Oslo last week it was a great success; the students were keen to be the first to build and deploy their solution and then watch the frames appear. As the students mostly had MSDN subscriptions they were able to scale to the full 20 worker role instances, and before long we had over 100 worker roles working on the animation. There were, however, a couple of issues caused by the competitive nature of the lab. The first student to scale the application to 20 instances would render the most frames and win; there was no way for others to catch up. Also, as they were competing against each other, there was no incentive to help others on the course get their application up and running.

    I have now re-written the lab to divide the students into teams that will compete to render the most frames. This means that if one developer on a team can deploy and scale quickly, the other teams still have a chance to catch up. It also means that if a student finishes quickly and puts their team in the lead they will have an incentive to help the other developers on their team get up and running. As I was using "Sharks with Lasers" for a lot of my demos, and reserved the sharkswithfreakinlasers namespaces for some of the Azure services (well, somebody had to do it), the students came up with some creative alternatives, like "Camels with Cannons" and "Honey Badgers with Homing Missiles". That gave me the idea of having the teams choose a creative name involving animals and weapons. The team rendering architecture diagram is shown below.

    Render Challenge Rules
    In order to ensure fair play, a number of rules are imposed on the lab:
    · The class will be divided into teams; each team chooses a name.
    · The team name must consist of a ferocious animal combined with a hazardous weapon.
    · Teams can allocate as many worker roles as they can muster to the render job.
    · Frame processing statistics and rendered frames will be vigilantly monitored; any cheating, tampering, and other foul play will result in penalties.

    The screenshot below shows an example of the team render farm in action: Badgers with Bombs have taken a lead over Camels with Cannons, and both are leaving the Sharks with Lasers standing. If you are interested in attending a scheduled delivery of my Windows Azure or Windows Azure Service Bus courses, or would like on-site training, more details are here.

    Read the article

  • Query Level 2 Caching throwing ClassCastException

    - by Sameer Malhotra
    Hi, I am using JPA and Hibernate for the database. I have configured a second-level cache (EHCache) and query-level cache, but just to make sure that caching is working I was trying to get the statistics, which is throwing a ClassCastException. Any help will be highly appreciated. My main goal is to see all the objects which have been cached to make sure that the caching is working properly. Here is the code:

        public List<CodeValue> findByCodetype(String propertyName) {
            try {
                final String queryString = "select model from CodeValue model where model.codetype" + "= :propertyValue" + " order by model.code";
                Query query = em.createQuery(queryString);
                query.setHint("org.hibernate.cacheable", true);
                query.setHint("org.hibernate.cacheRegion", "query.findByCodetype");
                query.setParameter("propertyValue", propertyName);
                List resultList = query.getResultList();
                org.hibernate.Session session = (Session) em.getDelegate();
                SessionFactory sessionFactory = session.getSessionFactory();
                Map cacheEntries = sessionFactory.getStatistics()
                    .getSecondLevelCacheStatistics("query.findByCodetype")
                    .getEntries();
                logger.info("The statistics are: " + cacheEntries);
                return resultList;
            } catch (RuntimeException re) {
                logger.error("findByCodetype failed in trauma patient", re);
                throw re;
            }
        }

    The error occurs right when I am trying to print the statistics. Below is the exception:

        [6/7/10 19:23:17:059 GMT] 00000034 SystemOut O java.lang.ClassCastException: org.hibernate.cache.QueryKey incompatible with org.hibernate.cache.CacheKey
            at org.hibernate.stat.SecondLevelCacheStatistics.getEntries(SecondLevelCacheStatistics.java:51)
            at com.idph.trauma.registry.service.TraumaPatientDAO.findByCodetype(TraumaPatientDAO.java:439)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:615)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy209.findByCodetype(Unknown Source)

    Do you know what's going on?
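    Not a direct fix, but a rough sketch of how the same "is the query cache working?" check can be done through Hibernate's aggregate counters instead of getEntries(), which is the call that trips over the QueryKey/CacheKey cast for query-cache regions. This assumes Hibernate 3.x-era APIs and that statistics are enabled; the region name is simply the one used in the code above:

        org.hibernate.stat.Statistics stats = sessionFactory.getStatistics();
        stats.setStatisticsEnabled(true); // or set hibernate.generate_statistics=true in the configuration

        // Aggregate query-cache counters, no per-entry iteration required
        logger.info("Query cache puts:   " + stats.getQueryCachePutCount());
        logger.info("Query cache hits:   " + stats.getQueryCacheHitCount());
        logger.info("Query cache misses: " + stats.getQueryCacheMissCount());

        // Per-region element count for the named query region (still no getEntries() call)
        org.hibernate.stat.SecondLevelCacheStatistics regionStats =
                stats.getSecondLevelCacheStatistics("query.findByCodetype");
        logger.info("Elements in memory for query.findByCodetype: " + regionStats.getElementCountInMemory());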

    Read the article

  • pthread_join from a signal handler

    - by liv2hak
    I have a capture program which, in addition to capturing data and writing it into a file, also prints some statistics. The function that prints the statistics,

        void *report(void *param)
        {
            /* Print statistics */
        }

    is called roughly every second using an ALARM that expires every second. So the program is like:

        void capture_program()
        {
            pthread_t report_thread;
            while (!exit_now) {
                if (pthread_create(&report_thread, NULL, report, NULL)) {
                    fprintf(stderr, "Error creating reporting thread! \n");
                }

                /* Capturing code
                   --------------
                   -------------- */

                if (doreport)
                    usleep(5);
            }
        }

        void *report(void *param)
        {
            while (true) {
                if (doreport) {
                    doreport = 0;
                    /* access some register from hardware */
                    usleep(5);
                }
            }
        }

    The expiry of the timer sets the doreport flag. If this flag is set, report() is called, which clears the flag. I am using usleep to alternate between the two threads in the program. This seems to work fine.

    I also have a signal handler to handle SIGINT (i.e. CTRL+C):

        static void anysig(int sig)
        {
            if (sig != SIGINT)
                dagutil_set_signal_handler(SIG_DFL);

            /* Tell the main loop to exit */
            exit_now = 1;
            return;
        }

    My questions:
    1) Is it safe to call pthread_join from inside the signal handler?
    2) Should I use the exit_now flag for the report thread as well?
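    For context, the ALARM mechanism described above is not shown in the snippets; a minimal sketch of what it usually looks like (this is an assumption about the poster's setup, using plain POSIX signal()/alarm() and a volatile sig_atomic_t flag) is:

        #include <signal.h>
        #include <unistd.h>

        volatile sig_atomic_t doreport = 0;   /* flag polled by the report thread */

        /* SIGALRM handler: only set a flag, then re-arm the one-second timer */
        static void on_alarm(int sig)
        {
            (void)sig;
            doreport = 1;
            alarm(1);
        }

        /* called once during startup */
        static void setup_timer(void)
        {
            signal(SIGALRM, on_alarm);
            alarm(1);   /* first expiry roughly one second from now */
        }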

    Read the article

  • tightvnc authentication failure

    - by broiyan
    When I run a tightvnc client to establish a VNC session I sometimes receive an error message that suggests there are repeated failed VNC login attempts or a brute force attack. The message dialog title is "unsupported security type" and the text content is "too many authentication failures, try another connection? yes/no". This problem goes away if I reboot the Ubuntu server and reload the VNC server program and try again. From that point, it will work for multiple VNC sessions. My VNC sessions are typically about 20 minutes. At some time in the future the problem may recur, so it seems correlated to the time the server has been up or the time tightvnc has been loaded. Typically it takes only a day or so before the problem comes back.

    I am using tightvnc 1.3 on a server running Ubuntu 12.04. The version of vncserver is rather dated because that seems to be all that is available from tightvnc for linux servers. On the client side I use the newest Java-based VNC client (version 2.5) for both Windows access and Ubuntu access. All my VNC sessions are via SSH. I am the only user and I will typically use only the same client computer. How can I stop this problem from recurring?

    Edit
    I found the log file. This is a small excerpt of what I am seeing. Essentially, various IPs, not my own, are attempting to connect. What is the practical solution for this?

        05/06/12 20:07:32 Got connection from client 69.194.204.90
        05/06/12 20:07:32 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:07:32 Too many authentication failures - client rejected
        05/06/12 20:07:32 Client 69.194.204.90 gone
        05/06/12 20:07:32 Statistics:
        05/06/12 20:07:32   framebuffer updates 0, rectangles 0, bytes 0
        05/06/12 20:24:56 Got connection from client 79.161.16.40
        05/06/12 20:24:56 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:24:56 Too many authentication failures - client rejected
        05/06/12 20:24:56 Client 79.161.16.40 gone
        05/06/12 20:24:56 Statistics:
        05/06/12 20:24:56   framebuffer updates 0, rectangles 0, bytes 0
        05/06/12 20:29:27 Got connection from client 109.230.246.54
        05/06/12 20:29:27 Non-standard protocol version 3.4, using 3.3 instead
        05/06/12 20:29:28 rfbVncAuthProcessResponse: authentication failed from 109.230.246.54
        05/06/12 20:29:28 Client 109.230.246.54 gone
        05/06/12 20:29:28 Statistics:
        05/06/12 20:29:28   framebuffer updates 0, rectangles 0, bytes 0
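    Not an answer to the logging question as such, but since all legitimate sessions already go over SSH, one common hardening sketch is to bind VNC to the loopback interface only, so random internet clients never reach the authentication prompt at all. This assumes the TightVNC 1.3 vncserver wrapper passes the option through to Xvnc (which it normally does); the display number and account below are placeholders:

        vncserver -localhost :1
        # on the client, tunnel the display over SSH:
        ssh -L 5901:localhost:5901 user@server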

    Read the article

  • How to remove Analog and Webalizer from my WHM?

    - by GaVrA
    Hello! Really simple question: we had some issues with Analog, and decided to turn it off since no one is using it. Now, on the "Statistics Software Configuration" page in WHM we have disabled all statistics software, but we want to completely remove every trace of Analog and Webalizer. We have about 2 years of logs in them, but, like I said, we never used them (we use Google Analytics), so why leave something that holds a lot of data if we are not gonna use it... So how do we remove Analog and Webalizer from WHM, and how do we prune all the data they generated? Thanks!

    Read the article

  • Ping computername - result format

    - by kamleshrao
    Hi, I am trying the PING command on my Windows 7 PC after many months. While doing this, I noticed the following results:

    Ping using computer name:

        D:\>ping amdwin764

        Pinging AMDWIN764 [fe80::ac53:546f:a730:8bd6%11] with 32 bytes of data:
        Reply from fe80::ac53:546f:a730:8bd6%11: time=1ms
        Reply from fe80::ac53:546f:a730:8bd6%11: time=1ms
        Reply from fe80::ac53:546f:a730:8bd6%11: time=1ms
        Reply from fe80::ac53:546f:a730:8bd6%11: time=1ms

        Ping statistics for fe80::ac53:546f:a730:8bd6%11:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 1ms, Average = 1ms

    Ping using IP address:

        D:\>ping 192.168.1.2

        Pinging 192.168.1.2 with 32 bytes of data:
        Reply from 192.168.1.2: bytes=32 time=75ms TTL=128
        Reply from 192.168.1.2: bytes=32 time=1ms TTL=128
        Reply from 192.168.1.2: bytes=32 time=1ms TTL=128
        Reply from 192.168.1.2: bytes=32 time=1ms TTL=128

        Ping statistics for 192.168.1.2:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 1ms, Maximum = 75ms, Average = 19ms

    Why am I not getting the ping results with the numeric IP address in my first example? Thanks, Kamlesh
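    One hedged observation, assuming default Windows 7 behaviour: the name lookup is returning the machine's IPv6 link-local address first (visible in the brackets in the first example), which is why no IPv4 address appears there. The stock ping utility accepts a -4 switch to force IPv4 for comparison:

        D:\>ping -4 amdwin764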

    Read the article

  • Client A can ping server S, but client B cannot

    - by Soundar Rajan
    I moved the question to here from stackoverflow.com http://stackoverflow.com/questions/2917569/unable-to-ping-server-from-client-b-but-able-to-ping-from-client-a-please-help I am trying to configure a IIS 6.0/Windows Server 2003 web server with a ASP.net application. When I try to ping the server from client computer A I get the following: PING 74.208.192.xxx ==> Ping fails PING 74.208.192.xxx:80 ==> Ping succeeds! From client computer B, BOTH the pings fail. PING 74.208.192.xxx ==> Ping fails PING 74.208.192.xxx:80 ==> Ping fails with a message "Ping request could not find host 74.208.192.xxx:80" Both clients A and B are on the same subnet. The server is outside (a virtual server hosted by an ISP) I have an ASP.NET application in a virtual directory on the server. In IE or firefox, if I enter http://74.208.192.xxx/subdir/subdir/../Default.aspx, it works from both the clients! The server has default firewall settings but web server enabled (Port 80 is open). From client A (note the different 'reply to' address when the ping succeeds. C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx Pinging 74.208.192.xx with 32 bytes of data: Request timed out. ... Request timed out. Ping statistics for 74.208.192.xx: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), C:\Program Files\Microsoft Visual Studio 9.0\VC>ping 74.208.192.xx:80 Pinging 74.208.192.xx:80 [208.67.216.xxx] with 32 bytes of data: Reply from 208.67.216.xxx: bytes=32 time=35ms TTL=54 ... Reply from 208.67.216.xxx: bytes=32 time=33ms TTL=54 Ping statistics for 208.67.216.xxx: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 32ms, Maximum = 54ms, Average = 38ms From client B C:\Documents and Settings\user>ping 74.208.192.81 Pinging 74.208.192.81 with 32 bytes of data: Request timed out. ... Request timed out. Ping statistics for 74.208.192.81: Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), C:\Documents and Settings\user>ping 74.208.192.81:80 Ping request could not find host 74.208.192.81:80. Please check the name and try again. My main problem is I have a web service (asmx) file and the web service client program is not able to access it from client B, but able to access it from client A. I am trying to find out why and thought this ping issue may shed some light. I can ping yahoo.com both the computers.

    Read the article

  • Gateway on a virtual network interface used by LXC guests

    - by linkdd
    I'm currently having some problems with configuring a gateway for a virtual network interface. Here is what I've done : I created a virtual network interface : # brctl addbr lxc0 # brctl setfd lxc0 0 # ifconfig lxc0 192.168.0.1 promisc up # route add -net default gw 192.168.0.1 lxc0 The output of ifconfig gave me what I wanted : lxc0 Link encap:Ethernet HWaddr 22:4f:e4:40:89:bb inet adr:192.168.0.1 Bcast:192.168.0.255 Masque:255.255.255.0 adr inet6: fe80::88cf:d4ff:fe47:3b6b/64 Scope:Lien UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1 RX packets:623 errors:0 dropped:0 overruns:0 frame:0 TX packets:7412 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 RX bytes:50329 (49.1 KiB) TX bytes:335738 (327.8 KiB) I configured dnsmasq to provide a DNS server (using the default : 192.168.1.1) and a DHCP server. Then, my LXC guest is configured like this : lxc.network.type=veth lxc.network.link=lxc0 lxc.network.flags=up Every thing is working perfectly, my containers have an IP (192.168.0.57 and 192.168.0.98). I can ping the host and the containers from the containers and from the host : (host)# ping -c 3 192.168.0.114 PING 192.168.0.114 (192.168.0.114) 56(84) bytes of data. 64 bytes from 192.168.0.114: icmp_req=1 ttl=64 time=0.044 ms 64 bytes from 192.168.0.114: icmp_req=2 ttl=64 time=0.038 ms 64 bytes from 192.168.0.114: icmp_req=3 ttl=64 time=0.043 ms --- 192.168.0.114 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1998ms rtt min/avg/max/mdev = 0.038/0.041/0.044/0.007 ms (guest)# ping -c 3 192.168.0.1 PING 192.168.0.1 (192.168.0.1) 56(84) bytes of data. 64 bytes from 192.168.0.1: icmp_req=1 ttl=64 time=0.048 ms 64 bytes from 192.168.0.1: icmp_req=2 ttl=64 time=0.042 ms 64 bytes from 192.168.0.1: icmp_req=3 ttl=64 time=0.042 ms --- 192.168.0.1 ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 1999ms rtt min/avg/max/mdev = 0.042/0.044/0.048/0.003 ms Now, it's time to configure the host as a gateway for the network 192.168.0.0/24 : #!/bin/sh # Clear rules iptables -F iptables -t nat -F iptables -t mangle -F iptables -X iptables -A FORWARD -i lxc0 -o eth0 -j ACCEPT iptables -A FORWARD -i eth0 -o lxc0 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE echo 1 > /proc/sys/net/ipv4/ip_forward The final test failed completely, ping the outside : (guest)# ping -c 3 google.fr PING google.fr (173.194.67.94) 56(84) bytes of data. From 192.168.0.1: icmp_seq=3 Redirect Host(New nexthop: wi-in-f94.1e100.net (173.194.67.94)) From 192.168.0.1 icmp_seq=1 Destination Host Unreachable From 192.168.0.1 icmp_seq=2 Destination Host Unreachable From 192.168.0.1 icmp_seq=3 Destination Host Unreachable --- google.fr ping statistics --- 3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 2017ms Did I missed something ?

    Read the article

  • Cisco ASA: How to route PPPoE-assigned subnet?

    - by Martijn Heemels
    We've just received a fiber uplink, and I'm trying to configure our Cisco ASA 5505 to properly use it. The provider requires us to connect via PPPoE, and I managed to configure the ASA as a PPPoE client and establish a connection. The ASA is assigned an IP address by PPPoE, and I can ping out from the ASA to the internet, but I should have access to an entire /28 subnet. I can't figure out how to get that subnet configured on the ASA, so that I can route or NAT the available public addresses to various internal hosts. My assigned range is: 188.xx.xx.176/28 The address I get via PPPoE is 188.xx.xx.177/32, which according to our provider is our Default Gateway address. They claim the subnet is correctly routed to us on their side. How does the ASA know which range it is responsible for on the Fiber interface? How do I use the addresses from my range? To clarify my config; The ASA is currently configured to default-route to our ADSL uplink on port Ethernet0/0 (interface vlan2, nicknamed Outside). The fiber is connected to port Ethernet0/2 (interface vlan50, nicknamed Fiber) so I can configure and test it before making it the default route. Once I'm clear on how to set it all up, I'll fully replace the Outside interface with Fiber. My config (rather long): : Saved : ASA Version 8.3(2)4 ! hostname gw domain-name example.com enable password ****** encrypted passwd ****** encrypted names name 10.10.1.0 Inside-dhcp-network description Desktops and clients that receive their IP via DHCP name 10.10.0.208 svn.example.com description Subversion server name 10.10.0.205 marvin.example.com description LAMP development server name 10.10.0.206 dns.example.com description DNS, DHCP, NTP ! interface Vlan2 description Old ADSL WAN connection nameif outside security-level 0 ip address 192.168.1.2 255.255.255.252 ! interface Vlan10 description LAN vlan 10 Regular LAN traffic nameif inside security-level 100 ip address 10.10.0.254 255.255.0.0 ! interface Vlan11 description LAN vlan 11 Lab/test traffic nameif lab security-level 90 ip address 10.11.0.254 255.255.0.0 ! interface Vlan20 description LAN vlan 20 ISCSI traffic nameif iscsi security-level 100 ip address 10.20.0.254 255.255.0.0 ! interface Vlan30 description LAN vlan 30 DMZ traffic nameif dmz security-level 50 ip address 10.30.0.254 255.255.0.0 ! interface Vlan40 description LAN vlan 40 Guests access to the internet nameif guests security-level 50 ip address 10.40.0.254 255.255.0.0 ! interface Vlan50 description New WAN Corporate Internet over fiber nameif fiber security-level 0 pppoe client vpdn group KPN ip address pppoe ! interface Ethernet0/0 switchport access vlan 2 speed 100 duplex full ! interface Ethernet0/1 switchport trunk allowed vlan 10,11,30,40 switchport trunk native vlan 10 switchport mode trunk ! interface Ethernet0/2 switchport access vlan 50 speed 100 duplex full ! interface Ethernet0/3 shutdown ! interface Ethernet0/4 shutdown ! interface Ethernet0/5 switchport access vlan 20 ! interface Ethernet0/6 shutdown ! interface Ethernet0/7 shutdown ! 
boot system disk0:/asa832-4-k8.bin ftp mode passive clock timezone CEST 1 clock summer-time CEDT recurring last Sun Mar 2:00 last Sun Oct 3:00 dns domain-lookup inside dns server-group DefaultDNS name-server dns.example.com domain-name example.com same-security-traffic permit inter-interface same-security-traffic permit intra-interface object network inside-net subnet 10.10.0.0 255.255.0.0 object network svn.example.com host 10.10.0.208 object network marvin.example.com host 10.10.0.205 object network lab-net subnet 10.11.0.0 255.255.0.0 object network dmz-net subnet 10.30.0.0 255.255.0.0 object network guests-net subnet 10.40.0.0 255.255.0.0 object network dhcp-subnet subnet 10.10.1.0 255.255.255.0 description DHCP assigned addresses on Vlan 10 object network Inside-vpnpool description Pool of assignable addresses for VPN clients object network vpn-subnet subnet 10.10.3.0 255.255.255.0 description Address pool assignable to VPN clients object network dns.example.com host 10.10.0.206 description DNS, DHCP, NTP object-group service iscsi tcp description iscsi storage traffic port-object eq 3260 access-list outside_access_in remark Allow access from outside to HTTP on svn. access-list outside_access_in extended permit tcp any object svn.example.com eq www access-list Insiders!_splitTunnelAcl standard permit 10.10.0.0 255.255.0.0 access-list iscsi_access_in remark Prevent disruption of iscsi traffic from outside the iscsi vlan. access-list iscsi_access_in extended deny tcp any interface iscsi object-group iscsi log warnings ! snmp-map DenyV1 deny version 1 ! pager lines 24 logging enable logging timestamp logging asdm-buffer-size 512 logging monitor warnings logging buffered warnings logging history critical logging asdm errors logging flash-bufferwrap logging flash-minimum-free 4000 logging flash-maximum-allocation 2000 mtu outside 1500 mtu inside 1500 mtu lab 1500 mtu iscsi 9000 mtu dmz 1500 mtu guests 1500 mtu fiber 1492 ip local pool DHCP_VPN 10.10.3.1-10.10.3.20 mask 255.255.0.0 ip verify reverse-path interface outside no failover icmp unreachable rate-limit 10 burst-size 5 asdm image disk0:/asdm-635.bin asdm history enable arp timeout 14400 nat (inside,outside) source static any any destination static vpn-subnet vpn-subnet ! 
object network inside-net nat (inside,outside) dynamic interface object network svn.example.com nat (inside,outside) static interface service tcp www www object network lab-net nat (lab,outside) dynamic interface object network dmz-net nat (dmz,outside) dynamic interface object network guests-net nat (guests,outside) dynamic interface access-group outside_access_in in interface outside access-group iscsi_access_in in interface iscsi route outside 0.0.0.0 0.0.0.0 192.168.1.1 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa-server SBS2003 protocol radius aaa-server SBS2003 (inside) host 10.10.0.204 timeout 5 key ***** aaa authentication enable console SBS2003 LOCAL aaa authentication ssh console SBS2003 LOCAL aaa authentication telnet console SBS2003 LOCAL http server enable http 10.10.0.0 255.255.0.0 inside snmp-server host inside 10.10.0.207 community ***** version 2c snmp-server location Server room snmp-server contact [email protected] snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart snmp-server enable traps syslog crypto ipsec transform-set TRANS_ESP_AES-256_SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set TRANS_ESP_AES-256_SHA mode transport crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map outside_dyn_map 20 set pfs group5 crypto dynamic-map outside_dyn_map 20 set transform-set TRANS_ESP_AES-256_SHA crypto dynamic-map SYSTEM_DEFAULT_CRYPTO_MAP 65535 set transform-set ESP-AES-128-SHA ESP-AES-128-MD5 ESP-AES-192-SHA ESP-AES-192-MD5 ESP-AES-256-SHA ESP-AES-256-MD5 ESP-3DES-SHA ESP-3DES-MD5 ESP-DES-SHA ESP-DES-MD5 crypto map outside_map 65535 ipsec-isakmp dynamic SYSTEM_DEFAULT_CRYPTO_MAP crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 telnet 10.10.0.0 255.255.0.0 inside telnet timeout 5 ssh scopy enable ssh 10.10.0.0 255.255.0.0 inside ssh timeout 5 ssh version 2 console timeout 30 management-access inside vpdn group KPN request dialout pppoe vpdn group KPN localname INSIDERS vpdn group KPN ppp authentication pap vpdn username INSIDERS password ***** store-local dhcpd address 10.40.1.0-10.40.1.100 guests dhcpd dns 8.8.8.8 8.8.4.4 interface guests dhcpd update dns interface guests dhcpd enable guests ! 
threat-detection basic-threat threat-detection scanning-threat threat-detection statistics host number-of-rate 2 threat-detection statistics port number-of-rate 3 threat-detection statistics protocol number-of-rate 3 threat-detection statistics access-list threat-detection statistics tcp-intercept rate-interval 30 burst-rate 400 average-rate 200 ntp server dns.example.com source inside prefer webvpn group-policy DfltGrpPolicy attributes vpn-tunnel-protocol IPSec l2tp-ipsec group-policy Insiders! internal group-policy Insiders! attributes wins-server value 10.10.0.205 dns-server value 10.10.0.206 vpn-tunnel-protocol IPSec l2tp-ipsec split-tunnel-policy tunnelspecified split-tunnel-network-list value Insiders!_splitTunnelAcl default-domain value example.com username martijn password ****** encrypted privilege 15 username marcel password ****** encrypted privilege 15 tunnel-group DefaultRAGroup ipsec-attributes pre-shared-key ***** tunnel-group Insiders! type remote-access tunnel-group Insiders! general-attributes address-pool DHCP_VPN authentication-server-group SBS2003 LOCAL default-group-policy Insiders! tunnel-group Insiders! ipsec-attributes pre-shared-key ***** ! class-map global-class match default-inspection-traffic class-map type inspect http match-all asdm_medium_security_methods match not request method head match not request method post match not request method get ! ! policy-map type inspect dns preset_dns_map parameters message-length maximum 512 policy-map type inspect http http_inspection_policy parameters protocol-violation action drop-connection policy-map global-policy class global-class inspect dns inspect esmtp inspect ftp inspect h323 h225 inspect h323 ras inspect http inspect icmp inspect icmp error inspect mgcp inspect netbios inspect pptp inspect rtsp inspect snmp DenyV1 ! service-policy global-policy global smtp-server 123.123.123.123 prompt hostname context call-home profile CiscoTAC-1 no active destination address http https://tools.cisco.com/its/service/oddce/services/DDCEService destination address email [email protected] destination transport-method http subscribe-to-alert-group diagnostic subscribe-to-alert-group environment subscribe-to-alert-group inventory periodic monthly subscribe-to-alert-group configuration periodic monthly subscribe-to-alert-group telemetry periodic daily hpm topN enable Cryptochecksum:a76bbcf8b19019771c6d3eeecb95c1ca : end asdm image disk0:/asdm-635.bin asdm location svn.example.com 255.255.255.255 inside asdm location marvin.example.com 255.255.255.255 inside asdm location dns.example.com 255.255.255.255 inside asdm history enable

    Read the article

  • Troubleshooting a high SQL Server Compilation/Batch-Ratio

    - by Sleepless
    I have a SQL Server (quad-core x86, 4GB RAM) that constantly shows almost the same values for "SQLServer:SQL Statistics: SQL compilations/sec" and "SQLServer:SQL Statistics: SQL batches/sec". This could be interpreted as a server running 100% ad hoc queries, each one of which has to be recompiled, but that is not the case here. The sys.dm_exec_query_stats DMV lists hundreds of query plans with an execution_count much larger than 1. Does anybody have any idea how to interpret / troubleshoot this phenomenon? BTW, the server's general performance counters (CPU, I/O, RAM) all show very modest utilization.
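    As a rough companion to the sys.dm_exec_query_stats check mentioned above (standard DMVs only, nothing specific to this server), plan-cache reuse can also be summarised from sys.dm_exec_cached_plans, which makes a genuinely ad-hoc-dominated workload easy to spot:

        -- How often are cached plans being reused, broken down by object type?
        SELECT cp.objtype,
               COUNT(*)          AS cached_plans,
               SUM(cp.usecounts) AS total_uses,
               SUM(CASE WHEN cp.usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans
        FROM sys.dm_exec_cached_plans AS cp
        GROUP BY cp.objtype
        ORDER BY cached_plans DESC;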

    Read the article

  • The ping response time doesn't reflect the real network response time

    - by yangchenyun
    I encountered a weird problem that the response time returned by ping is almost fixed at 98ms. Either I ping the gateway, or I ping a local host or a internet host. The response time is always around 98ms although the actual delay is obvious. However, the reverse ping (from a local machine to this host) works properly. The following is my route table and the result: route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.1.1 0.0.0.0 UG 100 0 0 eth1 60.194.136.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 eth1 192.168.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1 # ping the gateway ping 192.168.1.1 PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data. 64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=98.7 ms 64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=97.0 ms 64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=96.0 ms 64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=94.9 ms 64 bytes from 192.168.1.1: icmp_req=5 ttl=64 time=94.0 ms ^C --- 192.168.1.1 ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 4004ms rtt min/avg/max/mdev = 94.030/96.149/98.744/1.673 ms #ping a local machine ping 192.168.1.88 PING 192.168.1.88 (192.168.1.88) 56(84) bytes of data. 64 bytes from 192.168.1.88: icmp_req=1 ttl=64 time=98.7 ms 64 bytes from 192.168.1.88: icmp_req=2 ttl=64 time=96.9 ms 64 bytes from 192.168.1.88: icmp_req=3 ttl=64 time=96.0 ms 64 bytes from 192.168.1.88: icmp_req=4 ttl=64 time=95.0 ms ^C --- 192.168.1.88 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 95.003/96.696/98.786/1.428 ms #ping a internet host ping google.com PING google.com (74.125.128.139) 56(84) bytes of data. 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=1 ttl=42 time=99.8 ms 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=2 ttl=42 time=99.9 ms 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=3 ttl=42 time=99.9 ms 64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=4 ttl=42 time=99.9 ms ^C64 bytes from hg-in-f139.1e100.net (74.125.128.139): icmp_req=5 ttl=42 time=99.9 ms --- google.com ping statistics --- 5 packets transmitted, 5 received, 0% packet loss, time 32799ms rtt min/avg/max/mdev = 99.862/99.925/99.944/0.284 ms I am running iperf to test the bandwidth, the rate is quite low for a LAN connection. iperf -c 192.168.1.87 -t 50 -i 10 -f M ------------------------------------------------------------ Client connecting to 192.168.1.87, TCP port 5001 TCP window size: 0.06 MByte (default) ------------------------------------------------------------ [ 4] local 192.168.1.139 port 54697 connected with 192.168.1.87 port 5001 [ ID] Interval Transfer Bandwidth [ 4] 0.0-10.0 sec 6.12 MBytes 0.61 MBytes/sec [ 4] 10.0-20.0 sec 6.38 MBytes 0.64 MBytes/sec [ 4] 20.0-30.0 sec 6.38 MBytes 0.64 MBytes/sec [ 4] 30.0-40.0 sec 6.25 MBytes 0.62 MBytes/sec [ 4] 40.0-50.0 sec 6.38 MBytes 0.64 MBytes/sec [ 4] 0.0-50.1 sec 31.6 MBytes 0.63 MBytes/sec

    Read the article

  • Plesk 9.2.1 reporting much more SMTP traffic than the logs indicate

    - by Eric3
    Plesk is reporting nearly 7GB of SMTP traffic so far this month on one domain, most of it outgoing. However, after running qmail's mail logs (which only go back to May 8) through Sawmill, only about 900MB of traffic on that domain is accounted for.

    What I know so far:
    - Email sent via PHP's mail() function is sent through sendmail, which has been logging its output via syslog to the same logs that qmail uses, at /usr/local/psa/var/log/
    - Messages sent by logging in directly via Telnet are logged as well
    - I verified that Plesk is reporting totals correctly by creating a new domain, sending some large emails through it, running Plesk's statistics calculation script, and comparing its reported totals to the actual size of the emails sent
    - The problem domain did have three mail accounts with blank or insecure passwords, which I corrected

    Does anyone know how Plesk calculates SMTP traffic statistics? Are there some log files elsewhere that I'm missing? What kind of SMTP traffic would Plesk know about that isn't being logged?

    Read the article

  • Cisco 2911 - problem with DHCP

    - by bluszcz
    Hi, I am configuring the DHCP daemon on a Cisco 2911. At some point I assigned the address 192.168.50.50 to one box (using a MAC address 'relation'). When I wanted to save the config, I got a warning that address 192.168.50.50 is already in the pool, which sounds a bit weird - it was the first host I started configuring. I tried the following commands:

        clear ip dhcp server statistics
        clear ip dhcp conflict

    but the first one doesn't output anything (and show statistics shows that there are 2 addresses in the pool), and the second one throws a "there is no conflicted ips" message. How can I force/purge/clear/allow binding 192.168.50.50 to this box?
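    A hedged sketch of the usual next step on IOS (these are standard commands, though the exact output varies by image): list the pool's current leases and, if 192.168.50.50 shows up as a dynamic binding, clear that specific binding so the static MAC-based entry can claim it:

        show ip dhcp binding
        show ip dhcp conflict
        clear ip dhcp binding 192.168.50.50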

    Read the article

  • Amazon EC2: how to find out detailed CPU usage?

    - by j0nes
    I am running several EC2 instances, and I want to know exactly what work my CPU is doing. On "normal" machines I do this with munin and its CPU plugin, which looks at the statistics provided by /proc/stat. On my EC2 machines, however, I get incorrect graphs. The machine has two cores, so the max CPU usage should be 200% - however it gets as high as 400%. I know that I should use Amazon CloudWatch to see the total CPU usage (and this is the official way recommended by Amazon), but I am specifically looking at how the CPU usage is spent (e.g. system, user, iowait). Is there a way to get detailed CPU usage statistics on EC2 instances?
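    One hedged, generic option (not EC2-specific, and assuming the sysstat package is installed on the instance) is to sample the per-core breakdown directly with mpstat, which reports user, system, iowait and - on Xen guests such as EC2 - steal time:

        # one line per core plus an "all" summary, sampled every 5 seconds
        mpstat -P ALL 5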

    Read the article

  • Find command exclude files whose path match a certain pattern

    - by user40570
    I have a find command that looks for files that were modified recently and outputs the date:

        find /path/on/server -mtime -1 -name '*.js' -exec ls -l {} \;

    I would like it to exclude any deeply nested folder that matches a certain pattern, e.g. there are a number of folders that have a "statistics" directory and ".svn" directories. So I'd like to be able to say: if the file that was modified yesterday is in a folder named statistics, ignore it. Or perhaps not search for files in those folders at all.
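    A minimal sketch of the second option (not descending into those folders at all), using standard find predicates - adjust the directory names to taste:

        find /path/on/server -type d \( -name statistics -o -name .svn \) -prune -o \
             -mtime -1 -name '*.js' -exec ls -l {} \;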

    Read the article

  • Why "scope link" ipv6 address can be pinged via interfaces which they are not active on

    - by olagu
    [root@2_01 ~]# /sbin/ip -6 addr show pubeth0 inet6 2001:1::6/64 scope global inet6 2001:1::1/64 scope global inet6 fe80::20c:29ff:fe69:f9e8/64 scope link [root@v2_01 ~]# /sbin/ip -6 addr show pubeth1 inet6 fe80::20c:29ff:fe69:f906/64 scope link [root@2_01 ~]# ping6 fe80::20c:29ff:fe69:f9e8%pubeth1 PING fe80::20c:29ff:fe69:f9e8%pubeth1(fe80::20c:29ff:fe69:f9e8) 56 data bytes 64 bytes from fe80::20c:29ff:fe69:f9e8: icmp_seq=1 ttl=64 time=0.259 ms --- fe80::20c:29ff:fe69:f9e8%pubeth1 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 286ms rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms [root@2_01 ~]# ping6 fe80::20c:29ff:fe69:f9e8%pubeth0 PING fe80::20c:29ff:fe69:f9e8%pubeth0(fe80::20c:29ff:fe69:f9e8) 56 data bytes 64 bytes from fe80::20c:29ff:fe69:f9e8: icmp_seq=1 ttl=64 time=0.057 ms --- fe80::20c:29ff:fe69:f9e8%pubeth0 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 390ms rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms Why can I ping6 "fe80::20c:29ff:fe69:f9e8" via pubeth1?

    Read the article

  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
Every join in the plan shown above estimates that it will produce just a single row as output.  Brad covers the factors that lead to the low estimates in his post. In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows.  It should not surprise you that this has rather dire consequences for the remainder of the query plan.  In particular, it makes a nonsense of the optimizer’s decision to use Nested Loops to join to the two remaining tables.  Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each.  The query takes somewhere in the region of twenty minutes to run to completion on my development machine. A Solution At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere. We can force the exclusive use of hash joins in several ways, the two most common being join and query hints.  A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query.  The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion. Adding the OPTION (HASH JOIN) hint results in this estimated plan: That produces the correct output in around seven seconds, which is quite an improvement!  As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there.  (We can improve the hashing solution a bit – I’ll come back to that later on). Faster Nested Loops It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set. The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B).  Armed with this tremendous new insight, we can rewrite the join predicates like so: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #OrdDetail D JOIN #OrdHeader H ON D.SalesOrderID >= H.SalesOrderID AND D.SalesOrderID <= H.SalesOrderID JOIN #Custs C ON H.CustomerID >= C.CustomerID AND H.CustomerID <= C.CustomerID JOIN #Prods P ON P.ProductMainID >= D.ProductMainID AND P.ProductMainID <= D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER); I’ve also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear.  The new estimated execution plan is: This new query runs in under 2 seconds. Why Is It Faster? The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools.  If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly. An eager index spool consumes all rows from the table it sits above, and builds a index suitable for the join to seek on.  Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID.  The term ‘eager’ means that the spool consumes all of its input rows when it starts up.  
The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index.  From that point on, every execution of the inner side of the join is answered by a seek on the temporary index – not the base table. A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table.  The optimizer has a good estimate for the number of rows it needs to sort at that stage – it is just the cardinality of the table itself.  The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation.  Nested loops join preserves the order of rows on its outer input, so sorting early is safe.  (Hash joins do not preserve order in this way, of course). The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the ‘outer reference’) hasn’t changed from the last row received on the outer input.  It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other. The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation.  It’s presence in a plan is often an indication that a useful index is missing. I want to stress that I rewrote the query in this way primarily as an educational exercise – I can’t imagine having to do something so horrible to a production system. Improving the Hash Join I promised I would return to the solution that uses hash joins.  You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins.  The answer, again, is down to the poor information available to the optimizer.  Let’s look at the hash join plan again: Two of the hash joins have single-row estimates on their build inputs.  SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory. This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk.  The join process then continues, and may again run out of memory.  This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow.  The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion.  You can see this for yourself by tracing the Hash Warning event using the Profiler tool. The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this).  Notice also that because hash joins don’t preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan. Ok, so now we understand the problems, what can we do to fix it?  
We can address the hash spilling by forcing a different order for the joins: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #Custs C JOIN #OrdHeader H ON H.CustomerID = C.CustomerID JOIN #OrdDetail D ON D.SalesOrderID = H.SalesOrderID ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID ORDER BY D.ProductMainID OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER); With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs.  The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk.  Even so, the query runs to completion in three or four seconds.  That’s around half the time of the previous hashing solution, but still not as fast as the nested loops trickery. Final Thoughts SQL Server’s optimizer makes cost-based decisions, so it is vital to provide it with accurate information.  We can’t really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics. I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world.  It’s there primarily for its educational and entertainment value.  I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index. Be sure to read Brad’s original post for more details.  My grateful thanks to him for granting permission to reuse some of his material. Paul White Email: [email protected] Twitter: @PaulWhiteNZ
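As a companion to the excerpt above, here is a minimal, self-contained T-SQL sketch of the equality-as-range-pair trick on a pair of invented heap tables; #BigHeap and #SmallHeap are hypothetical and are not part of the original scripts, so treat this as a rough illustration to try on a test instance rather than a reproduction of the original workload.

-- Minimal sketch of rewriting (A = B) as (A >= B AND A <= B); test instance only.
-- #BigHeap and #SmallHeap are hypothetical, unindexed heap tables.
CREATE TABLE #BigHeap   (id int NOT NULL, payload char(200) NOT NULL DEFAULT 'x');
CREATE TABLE #SmallHeap (id int NOT NULL);

INSERT #BigHeap (id)
SELECT TOP (100000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) % 1000
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

INSERT #SmallHeap (id)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns;

-- Plain equality join: with no indexes available, a hash join is the likely choice.
SELECT COUNT_BIG(*)
FROM #BigHeap AS B
JOIN #SmallHeap AS S ON B.id = S.id;

-- The same logical join written as a range pair, with hints forcing nested loops;
-- an eager Index Spool may appear on the inner side of the join in the actual plan.
SELECT COUNT_BIG(*)
FROM #BigHeap AS B
JOIN #SmallHeap AS S
  ON B.id >= S.id
 AND B.id <= S.id
OPTION (LOOP JOIN, FORCE ORDER, MAXDOP 1, RECOMPILE);

DROP TABLE #BigHeap, #SmallHeap;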

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak). 1. My Switch Is Bored Allow me to present the rare and interesting execution plan operator, Switch: Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view: CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24)); CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49)); CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74)); CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99)); GO CREATE VIEW V1 AS SELECT c1 FROM dbo.T1 UNION ALL SELECT c1 FROM dbo.T2 UNION ALL SELECT c1 FROM dbo.T3 UNION ALL SELECT c1 FROM dbo.T4; Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement: ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);   ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);   ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);   ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1); We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape): INSERT dbo.V1 (c1) VALUES (1); And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view: INSERT dbo.V1 (c1) VALUES (1), (2); Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage! 2. 
Invisible Plan Operators The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it. The problem The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it: USE tempdb; GO CREATE TABLE dbo.Calendar ( dt date NOT NULL, isWeekday bit NOT NULL, theYear smallint NOT NULL,   CONSTRAINT PK__dbo_Calendar_dt PRIMARY KEY CLUSTERED (dt) ); GO -- Monday is the first day of the week for me SET DATEFIRST 1;   -- Add 100 years of data INSERT dbo.Calendar WITH (TABLOCKX) (dt, isWeekday, theYear) SELECT CA.dt, isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END, theYear = YEAR(CA.dt) FROM Sandpit.dbo.Numbers AS N CROSS APPLY ( VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113))) ) AS CA (dt) WHERE N.n BETWEEN 1 AND 36525; The following query counts the number of weekend days in 2013: SELECT Days = COUNT_BIG(*) FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear) WHERE isWeekday = 0; The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index! Explanation You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). 
All of these are correct, but that was before cost-based optimization got involved :) Cost-based optimization When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities as trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that. New Cardinality Estimates The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER; The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!): DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM; This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows: DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM; Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) that it is deriving a cardinality estimate for, in group 22. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. 
This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it: SELECT Days = COUNT_BIG(*) OVER () FROM dbo.Calendar AS C WHERE theYear = 2013 AND isWeekday = 0; The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan) bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key: CREATE NONCLUSTERED INDEX Weekends ON dbo.Calendar(theYear, isWeekday) WHERE isWeekday = 0 WITH (DROP_EXISTING = ON); There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;)  With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project Common Subexpression Spools Why Plan Operators Run Backwards Row Goals and the Top Operator Hash Match Flow Distinct Top N Sort Index Spools and Page Splits Singleton and Range Seeks Bitmaps Hash Join Performance Compute Scalar © 2013 Paul White – All Rights Reserved Twitter: @SQL_Kiwi
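For readers who want to see the hidden Filter for themselves, here is a small sketch (suitable for a test system only) that applies the undocumented trace flag 9130 at query scope with QUERYTRACEON instead of enabling it globally, and then compares the two sets of statistics the post discusses; it assumes the dbo.Calendar table and Weekends index built by the scripts above.

-- Sketch for a test instance only: trace flag 9130 is undocumented, and QUERYTRACEON
-- requires elevated permissions. The table and index come from the scripts in the post.
SELECT Days = COUNT_BIG(*)
FROM dbo.Calendar AS C
WHERE theYear = 2013
  AND isWeekday = 0
OPTION (RECOMPILE, QUERYTRACEON 9130);   -- surfaces the Filter as a separate plan operator

-- Per-column statistics on theYear: the source of the 365-row estimate.
DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

-- Filtered index statistics: the source of the (correct) 104-row estimate.
DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;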

    Read the article

  • SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28

    - by pinaldave
    I have received many comments, emails, suggestions and encouragement for my current series of wait types and wait statistics. One of the questions which I keep on receiving almost every other day is whether all of the discussions I have presented so far are also applicable to SQL Server 2000. Additionally, I receive another question asking me whether wait statistics matter in SQL Server 2000 and, if they do, how to measure wait types there. In SQL Server, you can run the following command to get wait statistics for all the wait types: DBCC SQLPERF(waitstats) The command above will also work in SQL Server 2005/2008/R2 because of backward compatibility. As you might have noticed, I have been discussing everything keeping SQL Server 2005+ in mind, but I have given little consideration to SQL Server 2000. However, I am pretty sure that most of the suggestions I have provided are applicable to SQL Server 2000. The wait types I have been discussing mostly exist in SQL Server 2000 as well. The difference is that the 2000 version lags behind the more recent releases, but measuring waits there is still worth the effort. Wait types are essential for identifying performance bottlenecks, which is why I am a big fan of using them. Please read all the posts in the Wait Types and Queue series. Note: The information presented here is from my experience, and I do not claim it to be completely accurate. I suggest reading Books Online for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
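    A rough sketch of how the raw DBCC output might be captured for a before/after comparison follows; the column names and types are assumptions based on the four-column output shape of SQL Server 2000 ([Wait Type], Requests, [Wait Time], [Signal Wait Time]), so verify them on your own build before relying on this.

    -- Hedged sketch: snapshot DBCC SQLPERF(waitstats) before and after a workload, then diff.
    -- Column names/types are assumed from the SQL Server 2000 output; adjust as required.
    CREATE TABLE #WaitsBefore ([Wait Type] nvarchar(80), Requests float, [Wait Time] float, [Signal Wait Time] float);
    CREATE TABLE #WaitsAfter  ([Wait Type] nvarchar(80), Requests float, [Wait Time] float, [Signal Wait Time] float);

    INSERT #WaitsBefore EXEC ('DBCC SQLPERF(waitstats)');

    -- ... run or wait for the workload of interest here ...

    INSERT #WaitsAfter EXEC ('DBCC SQLPERF(waitstats)');

    SELECT A.[Wait Type],
           WaitTimeDelta   = A.[Wait Time] - B.[Wait Time],
           SignalWaitDelta = A.[Signal Wait Time] - B.[Signal Wait Time]
    FROM #WaitsAfter AS A
    JOIN #WaitsBefore AS B ON B.[Wait Type] = A.[Wait Type]
    ORDER BY WaitTimeDelta DESC;

    -- DBCC SQLPERF(waitstats, clear) resets the counters if you prefer a clean baseline.
    DROP TABLE #WaitsBefore, #WaitsAfter;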

    Read the article

  • Ignoring Robots - Or Better Yet, Counting Them Separately

    - by [email protected]
    It is quite common to have web sessions that are undesirable from the point of view of analytics. For example, when there are either internal or external robots that check the site's health, index it, or just extract information from it. These robotic sessions do not behave like humans, and if their volume is high enough they can sway the statistics and models. One easy way to deal with these sessions is to define a partitioning variable for all the models that is a flag indicating whether the session is "Normal" or "Robot". Then all the reports and the predictions can use the "Normal" partition, while the counts and statistics for Robots are still available. In order for this to work, though, it is necessary to have two conditions: 1. It is possible to identify the robotic sessions. 2. No learning happens before the session is identified as a robot. The first point is obvious, but the second may require some explanation. While the default in RTD is to learn at the end of the session, it is possible to learn at any entry point. This is a setting for each model. There are various reasons to learn at a specific entry point, for example if there is a desire to capture exactly and precisely the data in the session at the time the event happened, as opposed to including changes up to the end of the session. In any case, if RTD has already learned on the session before it was identified as a robot, there is no way to retract this learning. Identifying the robotic sessions can be done through the use of rules and heuristics. For example, we may use some of the following: maintain a list of known robotic IPs or domains; detect very long sessions, lasting more than a few hours or visiting more than 500 pages; detect "robotic" behaviors like a methodical click on all the links of every page; detect a session with 10 pages clicked at exactly 20-second intervals; detect extensive non-linear navigation. Now, an interesting experiment would be to use the flag above as an output of a model to see if there are more subtle characteristics of robots, such that a model can be used to detect robots even if they fall through the cracks of the rules and heuristics. In any case, the basic technique of partitioning the models by the type of session is simple to implement and provides a lot of advantages.
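    The heuristics listed above are easy to prototype outside RTD as well; the following SQL sketch is purely illustrative, with dbo.SessionClicks and dbo.KnownRobotIPs as hypothetical tables standing in for whatever clickstream storage is actually available.

    -- Illustrative sketch only: dbo.SessionClicks (SessionID, IPAddress, ClickTime) and
    -- dbo.KnownRobotIPs (IPAddress, assumed unique) are hypothetical tables.
    SELECT S.SessionID,
           SessionType = CASE
               WHEN MAX(CASE WHEN R.IPAddress IS NOT NULL THEN 1 ELSE 0 END) = 1 THEN 'Robot'  -- known robot source
               WHEN DATEDIFF(HOUR, MIN(S.ClickTime), MAX(S.ClickTime)) > 3      THEN 'Robot'  -- very long session
               WHEN COUNT_BIG(*) > 500                                          THEN 'Robot'  -- too many pages
               ELSE 'Normal'
           END
    FROM dbo.SessionClicks AS S
    LEFT JOIN dbo.KnownRobotIPs AS R ON R.IPAddress = S.IPAddress
    GROUP BY S.SessionID;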

    Read the article

  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the number of tools that hosting companies provide to track & quantify traffic data and statistics. I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every web site known to man. The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of ease of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database. My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and either ban the IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web & DB server) and proactively deal with these source(s) of mayhem?
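    If the raw error-log entries can be loaded into a database table, a query along the following lines is one low-effort way to surface the worst offenders for review or banning; dbo.WebErrorLog and its columns are hypothetical stand-ins for whatever the hosting tools actually export.

    -- Hedged sketch: dbo.WebErrorLog (ClientIP, RequestUrl, LoggedAt) is a hypothetical
    -- table of imported web/database error-log entries. The query lists IPs that produced
    -- a suspicious number of injection-looking requests in the last day.
    SELECT ClientIP,
           ErrorCount = COUNT_BIG(*),
           FirstSeen  = MIN(LoggedAt),
           LastSeen   = MAX(LoggedAt)
    FROM dbo.WebErrorLog
    WHERE LoggedAt >= DATEADD(DAY, -1, GETDATE())
      AND (RequestUrl LIKE '%select%' OR RequestUrl LIKE '%union%' OR RequestUrl LIKE '%xp[_]%')
    GROUP BY ClientIP
    HAVING COUNT_BIG(*) >= 50
    ORDER BY ErrorCount DESC;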

    Read the article

  • Extreme Optimization Numerical Libraries for .NET – Part 1 of n

    - by JoshReuben
    While many of my colleagues are fascinated by constructing the ultimate ViewModel or ServiceBus, I feel that this kind of plumbing code is re-invented far too many times – at some point in the near future, it will be out-of-the-box standard infrastructure. How many times have you been to a customer site and built a different variation of the same kind of code frameworks? How many times can you abstract Prism or reliable and discoverable WCF communication? As the bar is raised for what's bundled with the framework and more tasks become declarative, automated and configurable, Information Systems will expose a higher level of abstraction, forcing software engineers to focus on more advanced computer science and algorithmic tasks. I've spent the better part of the past decade building skills in .NET and expanding my mathematical horizons by working through the Schaum's guides. In this series I am going to examine how these skillsets come together in the implementation provided by Extreme Optimization. Download the trial version here: http://www.extremeoptimization.com/downloads.aspx Overview The library implements a set of algorithms for: linear algebra, complex numbers, numerical integration and differentiation, solving equations, optimization, random numbers, regression, ANOVA, statistical distributions, hypothesis tests. EONumLib combines three libraries in one - organized in a consistent namespace hierarchy. Mathematics Library - Extreme.Mathematics namespace Vector and Matrix Library - Extreme.Mathematics.LinearAlgebra namespace Statistics Library - Extreme.Statistics namespace System Requirements - .NET Framework 4.0  Mathematics Library The classes are organized into the following namespace hierarchy: Extreme.Mathematics – common data types, exception types, and delegates. Extreme.Mathematics.Calculus - numerical integration and differentiation of functions. Extreme.Mathematics.Curves - points, lines and curves, including polynomials and Chebyshev approximations, curve fitting and interpolation. Extreme.Mathematics.Generic - generic arithmetic & linear algebra. Extreme.Mathematics.EquationSolvers - root finding algorithms. Extreme.Mathematics.LinearAlgebra - vectors, matrices, matrix decompositions, solvers for simultaneous linear equations and least squares. Extreme.Mathematics.Optimization – multi-d function optimization + linear programming. Extreme.Mathematics.SignalProcessing - one and two-dimensional discrete Fourier transforms. Extreme.Mathematics.SpecialFunctions

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >