Search Results

Search found 1692 results on 68 pages for 'trending and statistics'.

Page 41/68 | < Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >

  • What command line tools for monitoring host network activity on linux do you use?

    - by user27388
    What command line tools are good for reliably monitoring network activity? I have used ifconfig, but an office colleague said that its statistics are not always reliable. Is that true? I have recently used ethtool, but is it reliable? What about just looking at the /proc/net 'files'? Is that any better? EDIT: I'm interested in packets Tx/Rx and bytes Tx/Rx, but most importantly in drops or errors and why the drop/error might have occurred.
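
    The per-interface counters the poster asks about are exposed directly in /proc/net/dev, which is also where most tools read them from. A minimal Python sketch that reports Tx/Rx bytes, packets, errors and drops, assuming the standard two-header-line layout of that file:

        #!/usr/bin/env python
        # Print per-interface Rx/Tx counters from /proc/net/dev.
        # Assumes the standard layout: two header lines, then per interface
        # "iface: rx_bytes rx_packets rx_errs rx_drop ... tx_bytes tx_packets tx_errs tx_drop ..."

        def read_counters(path="/proc/net/dev"):
            stats = {}
            with open(path) as f:
                for line in f.readlines()[2:]:      # skip the two header lines
                    iface, data = line.split(":", 1)
                    fields = [int(x) for x in data.split()]
                    stats[iface.strip()] = {
                        "rx_bytes": fields[0], "rx_packets": fields[1],
                        "rx_errs": fields[2], "rx_drop": fields[3],
                        "tx_bytes": fields[8], "tx_packets": fields[9],
                        "tx_errs": fields[10], "tx_drop": fields[11],
                    }
            return stats

        if __name__ == "__main__":
            for iface, c in sorted(read_counters().items()):
                print("%-8s rx: %12d bytes %10d pkts %5d errs %5d drop" %
                      (iface, c["rx_bytes"], c["rx_packets"], c["rx_errs"], c["rx_drop"]))
                print("%-8s tx: %12d bytes %10d pkts %5d errs %5d drop" %
                      ("", c["tx_bytes"], c["tx_packets"], c["tx_errs"], c["tx_drop"]))

    Sampling this at intervals and diffing the counters gives rates; the errs/drop columns are the ones to watch for the poster's error/drop question.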

    Read the article

  • Could 11.5 Million 401's be causing bottlenecks?

    - by roviuser
    I'm going to preface this with a warning: my knowledge about servers and networking is VERY limited, and if you provide me with technical answers, I probably won't understand much until I research your answer further. I'm trying to expand my knowledge and learn about it, though. If the information that I am able to provide in this question is insufficient to answer the question, I understand, and it can be closed. We have a SharePoint 2007 system that is extremely slow, mostly from huge amounts of use. We've been told that the main speed bottleneck is the access to the SQL databases. However, they do provide a statistics dashboard, so I did some poking around and noticed that we have 11.5 million or more 401 Access Denied errors every month. Could this be causing major speed/performance decreases? Authentication for SharePoint uses Active Directory.

    Read the article

  • Force Firefox to open pages in a specific tab using the command line

    - by user36306
    Hey guys, here's the challenge: I developed a softphone screen-pop PHP app that takes caller ID info and searches for a match in our DB; it also allows us to collect call statistics. Great for our management, but it's driving our reps nuts. We use Firefox here, and when our softphone pops to the external page, it opens in a new tab every time; the girls quickly get 5-10 open and it becomes confusing. Our softphone will also run command lines. I'm wondering if there is a way to have a URL open in a certain tab. Otherwise, does anyone have any other ideas? Thanks!
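
    One avenue worth testing: Firefox builds of this era ship a legacy remote-control interface (-remote, removed in later releases) that can reuse the active tab of an already-running instance instead of opening a new one. A sketch of how the softphone's command line could invoke it; the lookup URL is hypothetical:

        import subprocess

        def pop_url(url):
            """Load url in the active tab of an already-running Firefox.

            Relies on the legacy -remote interface; omitting ",new-tab" from
            openURL() replaces the current tab instead of creating a new one.
            """
            subprocess.call(["firefox", "-remote", "openURL(%s)" % url])

        pop_url("http://crm.example.com/lookup?cid=5551234567")  # hypothetical screen-pop URL

    If -remote is unavailable, the browser.link.open_newwindow family of preferences governs whether externally opened URLs reuse windows/tabs, which may be worth exploring as an alternative.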

    Read the article

  • Can't ping my computer - "Transmit failed. General failure."

    - by Vaccano
    I am having an issue with my computer. My IIS services are not working. I have narrowed it down to the fact that my computer cannot find itself via its name. I try pinging my computer by its name and I get this:

        C:\Users\18773> ping MyComputerNameHere

        Pinging MyComputerNameHere [::1] with 32 bytes of data:
        PING: transmit failed. General failure.
        PING: transmit failed. General failure.
        PING: transmit failed. General failure.
        PING: transmit failed. General failure.

        Ping statistics for ::1:
            Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

    I tried having someone else ping my machine and it works fine for them. Any ideas?

    Read the article

  • Ubuntu Server Edition: useful for the home user?

    - by D Connors
    My question is simple. What (if anything) does Ubuntu Server Edition have to offer to home users? This question is mostly out of curiosity, really, but I like asking. I've got a home network set up here with some 6 to 7 machines (mostly Windows, one Linux), and I was wondering how useful it would be to have a dedicated computer in my house running Ubuntu Server full time. We've had an awful experience with file sharing so far; would it simplify file sharing/transferring? Would I be able to limit the internet bandwidth granted to each PC? Would I be able to monitor in/out internet traffic (both real time and monthly statistics)? Last, and most important, if I'm completely off as to what Ubuntu Server actually is, please say so. I am completely new to it.

    Read the article

  • Monitoring disk block access in Linux

    - by VoidPointer
    Is there a way to gather statistics about blocks being accessed on a disk? I have a scenario where a task is both memory and I/O intensive and I need to find a good balance as to how much of the available RAM I can assign to the process and how much I should leave for the system for building its I/O cache for the block device being used. I suspect that most of the I/O that is currently happening is accessing a rather small subset of the device and that performance could be optimized by increasing the RAM that is available for I/O buffering. Ideally, I would be able to create something like a "heat-map" that shows me which parts of the disk are accessed most of the time.
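
    One practical way to build such a heat map is to capture a block trace (blktrace on the device, decoded by blkparse) and bucket the accessed sectors. A rough Python sketch, assuming blkparse's default line format in which an I/O event lists the starting sector before a '+' and the block count after it:

        import sys
        from collections import Counter

        # Usage (assumed pipeline):
        #   blktrace -d /dev/sdb -o - | blkparse -i - | python heatmap.py
        # Each blkparse event line ends in "... <action> <rwbs> <sector> + <nblocks> [<process>]".

        BUCKET = 4 * 1024 * 1024   # sectors per bucket (~2 GB at 512-byte sectors); tune to taste

        hist = Counter()
        for line in sys.stdin:
            tok = line.split()
            if "+" in tok:
                i = tok.index("+")
                try:
                    hist[int(tok[i - 1]) // BUCKET] += 1
                except ValueError:
                    continue

        for bucket in sorted(hist):
            count = hist[bucket]
            print("%14d %8d %s" % (bucket * BUCKET, count, "#" * min(60, count)))

    A strongly skewed histogram would support the suspicion that a small subset of the device absorbs most of the I/O, and hence that giving the page cache more RAM could pay off.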

    Read the article

  • Psychology researcher wants to learn new language

    - by user273347
    I'm currently considering R, matlab, or python, but I'm open to other options. Could you help me pick the best language for my needs? Here are the criteria I have in mind (not in order): Simple to learn. I don't really have a lot of free time, so I'm looking for something that isn't extremely complicated and/or difficult to pick up. I know some C, FWIW. Good for statistics/psychometrics. I do a ton of statistics and psychometrics analysis. A lot of it is basic stuff that I can do with SPSS, but I'd like to play around with the more advanced stuff too (bootstrapping, genetic programming, data mining, neural nets, modeling, etc). I'm looking for a language/environment that can help me run my simpler analyses faster and give me more options than a canned stat package like SPSS. If it can even make tables for me, then it'll be perfect. I also do a fair bit of experimental psychology. I use a canned experiment "programming" software (SuperLab) to make most of my experiments, but I want to be able to program executable programs that I can run on any computer and that can compile the data from the experiments in a spreadsheet. I know python has psychopy and pyepl and matlab has psychtoolbox, but I don't know which one is best. If R had something like this, I'd probably be sold on R already. I'm looking for something regularly used in academe and industry. Everybody else here (including myself, so far) uses canned stat and experiment programming software. One of the reasons I'm trying to learn a programming language is so that I can keep up when I move to another lab. Looking forward to your comments and suggestions. Thank you all for your kind and informative replies. I appreciate it. It's still a tough choice because of so many strong arguments for each language. Python - Thinking about it, I've forgotten so much about C already (I don't even remember what to do with an array) that it might be better for me to start from scratch with a simple program that does what it's supposed to do. It looks like it can do most of the things I'll need it to do, though not as cleanly as R and MATLAB. R - I'm really liking what I'm reading about R. The packages are perfect for my statistical work now. Given the purpose of R, I don't think it's suited to building psychological experiments though. To clarify, what I mean is making a program that presents visual and auditory stimuli to my specifications (hundreds of them in a preset and/or randomized sequence) and records the response data gathered from participants. MATLAB - It's awesome that cognitive and neuro folk are recommending MATLAB, because I'm preparing for the big leap from social and personality psychology to cognitive neuro. The problem is the Uni where I work doesn't have MATLAB licenses (and 3750 GBP for a compiler license is not an option for me haha). Octave looks like a good alternative. PsychToolbox is compatible with Octave, thankfully. SQL - Thanks for the tip. I'll explore that option, too. Python will be the least backbreaking and most useful in the short term. R is well suited to my current work. MATLAB is well suited to my prospective work. It's a tough call, but I think I am now equipped to make a more well-informed decision about where to go next. Thanks again!

    Read the article

  • Bubble Breaker Game Solver better than greedy?

    - by Gregory
    For a mental exercise I decided to try to solve the Bubble Breaker game found on many cell phones, as well as an example here: Bubble Break Game. The random (N,M,C) board consists of N rows x M columns with C colors. The goal is to pick the sequence of bubble groups that ultimately leads to the highest score. A bubble group is 2 or more bubbles of the same color that are adjacent to each other in either the x or y direction; diagonals do not count. When a group is picked, the bubbles disappear; any holes are filled with bubbles from above first (i.e. shift down), then any remaining holes are filled by shifting right. A bubble group's score = n * (n - 1), where n is the number of bubbles in the bubble group. The first algorithm is a simple exhaustive recursive algorithm which explores going through the board row by row and column by column, picking bubble groups. Once a bubble group is picked, we create a new board and try to solve that board, recursively descending. Some of the ideas I am using include normalized memoization: once a board is solved, we store the board and the best score in a memoization table. I created a prototype in Python which shows that a (2,15,5) board takes 8859 boards to solve in about 3 seconds. A (3,15,5) board takes 12,384,726 boards in 50 minutes on a server. The solver rate is ~3k-4k boards/sec and gradually decreases as the memoization search takes longer. The memoization table grows to 5,692,482 boards and is hit 6,713,566 times. What other approaches could yield high scores besides the exhaustive search? I don't see any obvious way to divide and conquer, but trending towards larger and larger bubble groups seems to be one approach. Thanks to David Locke for posting the paper link, which talks about a window solver that uses a constant-depth lookahead heuristic.
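
    For reference, the group-detection and scoring core of such a solver is a small flood fill. A minimal Python sketch, with the board encoding assumed to be a list of columns holding integer colors (None for an empty cell):

        def find_groups(board):
            """Return all bubble groups (lists of (col, row) cells) of size >= 2.

            Adjacency is horizontal/vertical only, per the rules above.
            """
            seen, groups = set(), []
            for c in range(len(board)):
                for r in range(len(board[c])):
                    if board[c][r] is None or (c, r) in seen:
                        continue
                    color, stack, group = board[c][r], [(c, r)], []
                    seen.add((c, r))
                    while stack:
                        x, y = stack.pop()
                        group.append((x, y))
                        for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                            if (0 <= nx < len(board) and 0 <= ny < len(board[nx])
                                    and (nx, ny) not in seen and board[nx][ny] == color):
                                seen.add((nx, ny))
                                stack.append((nx, ny))
                    if len(group) >= 2:
                        groups.append(group)
            return groups

        def score(group):
            n = len(group)
            return n * (n - 1)          # scoring rule from the question

        # Tiny demo: a 3x3 board given as columns.
        for g in find_groups([[0, 0, 1], [1, 0, 2], [2, 2, 2]]):
            print(len(g), score(g))     # prints "3 6" and "4 12"

    The exhaustive solver then repeats: pick a group, collapse the board, recurse, and memoize the collapsed board.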

    Read the article

  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article, I've come up with a table schema that would separate highly volatile data from other tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure if this is a good idea since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea: I've four types of users modeled in a Class Table Inheritance structure; in the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote tip #16 from the Nettuts+ article mentioned above: Example 2: You have a “last_login” field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum. Now it gets even trickier. I need to keep track of some user statistics, like how many unique times a user profile was seen, how many unique times an ad from a specific type of user was clicked, how many unique times a post from a specific type of user was seen, and so on... In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

        |------|----------|-----------|--------------|-----------|
        | ID   | TABLE    | EVENT     | DATE         | IP        |
        |------|----------|-----------|--------------|-----------|
        | 1    | user     | login     | 201004190030 | 127.0.0.1 |
        | 1    | user     | login     | 201004190230 | 127.0.0.1 |
        | 2    | user     | created   | 201004190031 | 127.0.0.2 |
        | 2    | user     | activated | 201004190234 | 127.0.0.2 |
        | 2    | user     | approved  | 201004190930 | 217.0.0.1 |
        | 2    | user     | login     | 201004191200 | 127.0.0.2 |
        | 15   | user_ads | created   | 201004191230 | 127.0.0.1 |
        | 15   | user_ads | impressed | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 15   | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        | 2    | user     | blocked   | 201004200319 | 217.0.0.1 |
        | 2    | user     | deleted   | 201004200320 | 217.0.0.1 |
        |------|----------|-----------|--------------|-----------|

    Basically, the ID refers to the primary key (id) field in the TABLE table; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data. Due to the nature of the events table I also thought of making some optimizations, such as:

        #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENT columns could be set up as ENUMs instead of VARCHARs to save some space.
        #14: Store IPs as UNSIGNED INT with INET_ATON() instead of VARCHARs.
        Store DATEs as TIMESTAMPs instead of DATETIMEs.
        Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic I guess. Pros: ability to store more detailed data (such as logins); no need to design (and code for) almost a dozen additional tables (dates and statistics); reduces a few columns per table and keeps volatile data separated. Cons: non-relational (still not as bad as EAV): SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC; and 6 bytes of overhead per event (ID, TABLE and EVENT). I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant... Am I missing something? What are your thoughts on this? Thanks!
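
    On tip #14: storing the IP in a 4-byte UNSIGNED INT is exactly what MySQL's INET_ATON()/INET_NTOA() pair does. If the conversion ever needs to happen on the application side instead of in the query, it is a two-liner; a Python illustration:

        import socket, struct

        def ip_to_int(ip):
            """Equivalent of MySQL INET_ATON(): dotted quad -> unsigned 32-bit int."""
            return struct.unpack("!I", socket.inet_aton(ip))[0]

        def int_to_ip(n):
            """Equivalent of MySQL INET_NTOA(): unsigned 32-bit int -> dotted quad."""
            return socket.inet_ntoa(struct.pack("!I", n))

        assert ip_to_int("127.0.0.1") == 2130706433
        assert int_to_ip(2130706433) == "127.0.0.1"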

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1TB data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and the sessions abort. The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features: WAIT TYPE = ASYNC_NETWORK_IO. The SQL being run is "(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid". This is relatively innocuous SQL that should only return one or two records, not a large dataset. NO OTHER SQL statements have been implicated in the blocking, only this SQL statement. This is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans. This SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked. HOST ID varies; different web servers are responsible for the blocking sessions (e.g., sometimes we trace back to web server 1, sometimes web server 2). When we trace back to the web server implicated in the blocking, we see the following: there is always some sort of application-related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL session. The error messages vary, usually some sort of SystemOutOfMemory. (These error messages seem to be similar to error messages that we have seen in the past without such dramatic consequences. We think this was happening before but didn't lead to blocking. Why now?) There are no known problems with the network adapters on either the web servers or the SQL Server. (In any event, the record set returned by the offending query would be small.) Things ruled out: indexes are regularly defragmented; statistics regularly updated; increased sample size of statistics on claim.primaryclaimid; forced recompilation of the cached execution plan; created a compound index with primaryclaimid, claimid; no networking problems; no known issues on the web server; no changes to application software on the web servers. We hypothesize that the chain of events goes something like this: the web server process submits the SQL above; SQL Server executes the SQL, during which it acquires a lock on the claim table; the web server process gets an error and dies; the SQL Server session is left hanging, waiting for the web server process to read the data set; SQL Server sessions that need to get X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout. Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?
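
    While waiting on the vendor, one low-impact option is to poll the DMVs and log who is blocking whom whenever an ASYNC_NETWORK_IO session appears, so each episode is captured with its host and process IDs. A sketch in Python with pyodbc; the connection string is a placeholder for your environment:

        import time
        import pyodbc

        # Placeholder connection string -- adjust driver/server/auth as needed.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=mysqlserver;Trusted_Connection=yes")

        QUERY = """
        SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time,
               s.host_name, s.host_process_id
        FROM sys.dm_exec_requests r
        JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
        WHERE r.wait_type = 'ASYNC_NETWORK_IO' OR r.blocking_session_id <> 0
        """

        while True:
            for row in conn.cursor().execute(QUERY):
                print(row.session_id, row.blocking_session_id, row.wait_type,
                      row.wait_time, row.host_name, row.host_process_id)
            time.sleep(10)   # the episodes last minutes, so a 10-second poll is enough

    Correlating host_name/host_process_id from this log against the web servers' event logs should confirm (or refute) the died-mid-read hypothesis.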

    Read the article

  • Nested hyperlinks in XHTML 1.1 document

    - by Nazgulled
    Hi, I'm doing a simple widget for WordPress that fetches the most recent tweets from the RSS feed provided by Twitter. This widget parses any link posted in a tweet; it also parses mentions (i.e. @username) and trending topics (i.e. #nowplaying). For these 3 situations, it creates links pointing to some Twitter feature. For instance: "Hi @UserA, check out the song Foo from FooBar that I'm listening, it's awesome. #nowplaying" And it will parse into this: Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a> Now I need to add a global link to the whole message, like this: <a href="http://twitter.com/UserA/statuses/1234567890"> Hi <a href="http://twitter.com/UserA">@UserA</a>, check out the song Foo from FooBar that I'm listening, it's awesome. <a href="http://twitter.com/#search?q=nowplaying">#nowplaying</a> </a> But this code does not validate and it doesn't work anyway (the browsers don't really seem to know what to do with it). Any suggestions on how I could fix this?

    Read the article

  • Combination of JFreeChart with JXLayer with JHotDraw

    - by Yan Cheng CHEOK
    Recently, I used JXLayer to overlay two moving yellow message boxes on top of JFreeChart: http://yccheok.blogspot.com/2010/02/investment-flow-chart.html I was wondering whether anyone has experience using JXLayer + JHotDraw to overlay all sorts of figures (resizable text boxes, straight lines, circles...) on top of JFreeChart. I would just like to add drawing capability without changing the JFreeChart source code, so that JStock's users may draw trend lines and annotation text on their favorite stock charts. The code skeleton is as follows:

        // this.chartPanel is a JFreeChart ChartPanel
        final org.jdesktop.jxlayer.JXLayer<ChartPanel> layer =
            new org.jdesktop.jxlayer.JXLayer<ChartPanel>(this.chartPanel);
        this.myUI = new MyUI<ChartPanel>(this);
        layer.setUI(this.myUI);

        public class MyUI<V extends javax.swing.JComponent> extends AbstractLayerUI<V> {
            @Override
            protected void paintLayer(Graphics2D g2, JXLayer<? extends V> layer) {
                // Previously, I used my own hand-written code to draw the yellow box.
                // Now, how can I make use of JHotDraw here, to draw various types of figures?
            }

            @Override
            protected void processMouseEvent(MouseEvent e, JXLayer<? extends V> layer) {
                // How can I make use of JHotDraw here?
            }

            @Override
            protected void processMouseMotionEvent(MouseEvent e, JXLayer<? extends V> layer) {
                // How can I make use of JHotDraw here?
            }
        }

    As you can see, I already get a Graphics2D g2 from the paintLayer method. How can I pass the Graphics2D object to JHotDraw and let JHotDraw handle all the drawing? My experience with JHotDraw is with org.jhotdraw.draw.DefaultDrawingView and org.jhotdraw.draw.DefaultDrawingEditor. I am able to use them to draw various figures by clicking on the toolbar and then clicking on the drawing area. How can I use DefaultDrawingView and DefaultDrawingEditor within MyUI's paintLayer? Also, should I let MyUI handle the mouse events, or JHotDraw? Sorry, I'm starting to get confused.

    Read the article

  • Spring hibernate ehcache setup

    - by Johan Sjöberg
    I have some problems getting the Hibernate second-level cache to work for caching domain objects. According to the ehcache documentation it shouldn't be too complicated to add caching to my existing, working application. I have the following setup (only relevant snippets are outlined):

        @Entity
        @Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
        public class Entity {
            // ...
        }

    ehcache-entity.xml:

        <cache name="com.company.Entity"
               eternal="false"
               maxElementsInMemory="10000"
               overflowToDisk="true"
               diskPersistent="false"
               timeToIdleSeconds="0"
               timeToLiveSeconds="300"
               memoryStoreEvictionPolicy="LRU" />

    ApplicationContext.xml:

        <bean class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
            <property name="dataSource" ref="ds" />
            <property name="annotatedClasses">
                <list>
                    <value>com.company.Entity</value>
                </list>
            </property>
            <property name="hibernateProperties">
                <props>
                    <prop key="hibernate.generate_statistics">true</prop>
                    <prop key="hibernate.cache.use_second_level_cache">true</prop>
                    <prop key="net.sf.ehcache.configurationResourceName">/ehcache-entity.xml</prop>
                    <prop key="hibernate.cache.region.factory_class">net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory</prop>
                    ...
                </props>
            </property>
        </bean>

    Maven dependencies:

        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-annotations</artifactId>
            <version>3.4.0.GA</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-hibernate3</artifactId>
            <version>2.0.8</version>
            <exclusions>
                <exclusion>
                    <artifactId>hibernate</artifactId>
                    <groupId>org.hibernate</groupId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache-core</artifactId>
            <version>2.3.2</version>
        </dependency>

    A test class is used which enables cache statistics:

        Cache cache = cacheManager.getCache("com.company.Entity");
        cache.setStatisticsAccuracy(Statistics.STATISTICS_ACCURACY_GUARANTEED);
        cache.setStatisticsEnabled(true);

        // store, read etc ...

        cache.getStatistics().getMemoryStoreObjectCount(); // returns 0

    No operation seems to trigger any cache changes. What am I missing? Currently I'm using HibernateTemplate in the DAO; perhaps that has some impact. [EDIT] The only ehcache log output when set to DEBUG is: SettingsFactory: Cache region factory : net.sf.ehcache.hibernate.SingletonEhCacheRegionFactory

    Read the article

  • Twitter API similar to Google Alert

    - by Felix Perdana
    I am trying to create a web application which has functionality similar to Google Alerts (by similar I mean the user can provide their email address for the alert to be sent to, daily or hourly). The only limitation is that it only alerts users based on a certain keyword or hashtag. I think that I have found the fundamental API needed for this web application: https://dev.twitter.com/docs/api/1/get/search The problem is I still don't know all the web technologies needed for this application to work properly. For example: Do I have to store all of the searched keywords in a database? Do I have to keep polling with Ajax requests all the time in order to keep my database updated? What if a keyword the user provided is so popular right now that it gets thousands of tweets in just an hour (not to mention there might be several emails that request several trending topics)? By the way, I am trying to build this application using PHP. So please let me know what kind of techniques I need to learn for such a web app (and some references, maybe)? Any kind of help will be appreciated. Thanks in advance :) Regards, Felix Perdana
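
    Structurally this is a keyword table plus a scheduled poller (cron, not browser Ajax) that remembers the newest tweet ID it has seen per subscription, so each run fetches only what is new, then emails a digest. A minimal sketch of that loop in Python (the poster plans PHP, but the flow is identical); fetch_tweets and send_digest are assumed placeholders, not real APIs:

        import sqlite3

        def fetch_tweets(keyword, since_id):
            """Placeholder: call the Twitter search API here, returning a
            list of {"id": ..., "text": ...} dicts newer than since_id."""
            raise NotImplementedError

        def send_digest(email, keyword, tweets):
            """Placeholder mail helper."""
            print("would mail %d tweets for %r to %s" % (len(tweets), keyword, email))

        def run_alerts(db_path="alerts.db"):
            db = sqlite3.connect(db_path)
            db.execute("""CREATE TABLE IF NOT EXISTS alerts
                          (email TEXT, keyword TEXT, since_id INTEGER DEFAULT 0)""")
            rows = db.execute("SELECT email, keyword, since_id FROM alerts").fetchall()
            for email, keyword, since_id in rows:
                tweets = fetch_tweets(keyword, since_id)
                if tweets:
                    send_digest(email, keyword, tweets)
                    newest = max(t["id"] for t in tweets)
                    db.execute("UPDATE alerts SET since_id = ? WHERE email = ? AND keyword = ?",
                               (newest, email, keyword))
            db.commit()

        if __name__ == "__main__":
            run_alerts()

    The since_id bookkeeping also addresses the popular-keyword worry: a run never re-fetches what it already processed, and batching matches into one digest email keeps the mail volume at one message per keyword per run regardless of tweet count.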

    Read the article

  • Desktop Fun: Battlestar Galactica Wallpapers

    - by Asian Angel
    Are you feeling nostalgic and/or sad now that the Battlestar Galactica series has finished up? Now you can add a bit of that Galactica goodness to your desktop with our Battlestar Galactica Wallpaper collection. If the image links fail for some reason you can download the entire set as a zipped file here. Note: Click on the picture to see the full-size image—these wallpapers vary in size so you may need to crop, stretch, or place them on a colored background in order to best match them to your screen’s resolution. For more fun wallpapers be certain to visit our new Desktop Fun section. If you are looking for some great icons to go with your new Battlestar Galactica wallpaper make certain to check out our Sci-Fi Icon Packs collection here.

    Read the article

  • SQL SERVER – Standard Reports from SQL Server Management Studio – SQL in Sixty Seconds #016 – Video

    - by pinaldave
    SQL Server Management Studio 2012 is a wonderful tool with many different features. Many times, an average user does not use them because he or she is not aware of these features. Today, we will learn one such feature. SSMS comes with many inbuilt performance and activity reports, but we do not use them to their full potential. Connect to a SQL Server node >> right-click on it >> go to Reports >> click on Standard Reports >> pick any report. Please note that some of the reports can be IO intensive and are not suggested to run during business hours! More on Standard Reports: SQL SERVER – Out of the Box – Activity and Performance Reports from SSSMS SQL SERVER – Generate Report for Index Physical Statistics – SSMS SQL SERVER – Configure Management Data Collection in Quick Steps I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can. If we like your idea, we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • SQLAuthority News – Three Posts on Reporting – T-SQL Tuesday #005

    - by pinaldave
    If you are following my blog, you already know that I am more of a "T-SQL and Performance Tuning" type of person. I do have a good understanding of the Business Intelligence suite, and I also run certain training sessions on the subject. When I was writing the blog post for T-SQL Tuesday #005 – Reporting, I realized that I have written posts that clearly explain how to generate reports using SQL Server Management Studio. Here is a quick recap of how one can use SSMS and its out-of-the-box reports, which can help many developers. Please note that they can be resource-intensive as well, so please use SSMS carefully. SQL SERVER – Generate Report for Index Physical Statistics – SSMS SQL SERVER – Out of the Box – Activity and Performance Reports from SSSMS SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005 Junior developers and DBAs can use these reports right away, and can also start learning about and exploring most database performance issues with the help of senior DBAs. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Book Review: “Inside Microsoft SQL Server 2008: T-SQL Querying” by Itzik Ben-Gan et al

    - by Sam Abraham
    In the past few weeks, I have been reading “Inside Microsoft SQL Server 2008: T-SQL Querying” by Itzik Ben-Gan et al. In the next few lines, I will provide a quick review, having finished reading this valuable resource on SQL Server 2008. In this book, the authors target most of the common as well as advanced T-SQL querying scenarios that one would encounter when developing against a SQL Server database. The book's content covers sufficient theory and practice to empower its readers to systematically write better performance-tuned queries. Chapter one offers a quick refresher on the basics of query processing. Chapters 2 and 3 follow with thorough coverage of applicable relational algebra concepts, which sets a good stage for chapter 4 to dive deep into query tuning. Chapter 4 has been my favorite chapter of the book, as it provides nice illustrations of the internals of indexes, waits, statistics and query plans. I particularly appreciated the thorough explanation of execution plans, which helped clarify some areas I may not have paid particular attention to in the past. The book continues with a focus on SQL operators, tackling a few in each chapter and covering their internal workings and the best practices to follow when they are used. Figures and illustrations are particularly helpful in grasping the advanced concepts covered therein. In conclusion, Inside Microsoft SQL Server 2008: T-SQL Querying provided me with 750+ pages of focused, advanced and practical knowledge that has added a few tips and tricks to my arsenal of query tuning strategies. Many thanks to the O’Reilly User Group Program and its support of our West Palm Beach Developers’ Group. --Sam Abraham

    Read the article

  • SQL SERVER – Identify Most Resource Intensive Queries – SQL in Sixty Seconds #028 – Video

    - by pinaldave
    During performance tuning conversations, the very first question people often ask is: what are the queries offending the server? In other words, let us identify the queries which are the most resource intensive. The resources are often described as Memory, CPU or IO. A query which is doing lots of reads or writes is for sure resource intensive, as is a query which is taking maximum CPU time. Performance tuning is a very deep subject, and we all have our own preference regarding what the first step of tuning should be and what should be taken with a grain of salt. Though there is no denying that a query which uses more resources than it should for sure requires tuning. There are many ways to identify queries using intense resources (e.g. Extended Events etc.), but in this one we will go with a simple DMV. There is a small gotcha we all have to remember about the usage of DMVs: they only bring back results from the existing cache. So if you have a query which is very resource intensive but is not cached, or if you have explicitly removed the query from the cache, it will not be part of the result returned by this DMV. It is quite possible that a query is aged out and removed from the cache if your cache is not huge. If your cache is large, you may want to be careful about running this query during business hours, as this query itself can be resource intensive. Get the script to identify resource intensive queries from Here. Related Tips in SQL in Sixty Seconds: SQL SERVER – Find Most Expensive Queries Using DMV Simple Example to Configure Resource Governor – Introduction to Resource Governor SQL SERVER – DMV – sys.dm_exec_query_optimizer_info – Statistics of Optimizer SQL SERVER – Wait Stats – Wait Types – Wait Queues – Day 0 of 28 Reference: Pinal Dave (http://blog.sqlauthority.com)
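
    The DMV-based approach typically joins sys.dm_exec_query_stats to the cached statement text and ranks by the resource of interest. A sketch of such a script (Python + pyodbc here; the connection string is a placeholder, and only plans still in cache are visible, as noted above):

        import pyodbc

        # Placeholder connection string -- point it at your instance.
        conn = pyodbc.connect("DRIVER={SQL Server};SERVER=mysqlserver;Trusted_Connection=yes")

        # Rank cached statements by logical reads; swap the ORDER BY column
        # (total_worker_time for CPU, total_logical_writes for writes) as needed.
        SQL = """
        SELECT TOP 10
               qs.total_logical_reads, qs.total_worker_time, qs.execution_count,
               SUBSTRING(st.text, 1, 200) AS query_text
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        ORDER BY qs.total_logical_reads DESC
        """

        for row in conn.cursor().execute(SQL):
            print(row.total_logical_reads, row.execution_count, row.query_text)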

    Read the article

  • SQLAuthority News – Happy Deepavali and Happy New Year

    - by pinaldave
    Diwali, or Deepavali, is popularly known as the festival of lights. It literally means “array of light” or “row of lamps“. Today we make small clay lamps, fill them with oil and light them up. The significance of lighting the lamp is the triumph of good over evil. I work every single day of the year, but today I am spending my time with family and my little one. I make sure that my daughter is aware of our culture and learns to celebrate the festival with the same passion and values that I have. Every year on this day, I do not write a long blog post, but rather a small post with various SQL tips and tricks. After reading them you should quickly get back to your friends and family – it is the most important festival day. Here are a few tips and tricks: Take regular full backups of your database. Avoid cursors if they can be replaced by a set-based process. Keep your index maintenance scripts handy and execute them at intervals. Consider Solid State Drives (SSD) for crucial database and tempdb placement. Update statistics for OLTP transactions at intervals. I guess that's it for today. If you still have more time to learn, here are a few things you should consider: Get FREE books by signing up for tomorrow's webcast by Rick Morelan. Watch the SQL in Sixty Seconds series – FREE SQL learning. Read my earlier 2300+ articles. Well, I am sure that will keep you busy for the rest of the day! Happy Diwali to All of You! Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How to Find Your IP Address in Ubuntu Linux

    - by Trevor Bekolay
    In Windows, we use the command-line program ipconfig to find out our IP address. How do you find it in Ubuntu? We will show you two locations easily accessible through the GUI and, of course, a terminal command that will get your IP address in no time. The first location, and the easiest in most cases, is found by right clicking the network icon in the notification area and clicking Connection Information. This brings up a window which has a bunch of information, including your IP address. The second location, which shows you more detail than this first method, is at System > Administration > Network Tools. Select the right network device, and you’ve got a ton of information at your fingertips. Finally, if you can’t tear yourself away from a terminal window, the command to type in is: ifconfig Yes, it’s only one character different than ipconfig. Who would have guessed? As it turns out, you’re always a few clicks or keystrokes away from finding your IP address in Ubuntu. Isn’t choice great?
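
    If you want the address from a script rather than the GUI, a common trick is to connect a UDP socket toward an outside host and read back the local endpoint; connecting a UDP socket merely selects a route and binds a local address, without sending any traffic. A small Python sketch:

        import socket

        def local_ip():
            # 8.8.8.8 is just a well-known outside address; no packet is sent.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            try:
                s.connect(("8.8.8.8", 80))
                return s.getsockname()[0]
            finally:
                s.close()

        print(local_ip())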

    Read the article

  • SQL SERVER – Question to You – When to use Function and When to use Stored Procedure

    - by pinaldave
    This week has been a very interesting week. I have asked a few questions of users and have received remarkable participation on the subject. Q1) SQL SERVER – Puzzle – SELECT * vs SELECT COUNT(*) Q2) SQL SERVER – Puzzle – Statistics are not Updated but are Created Once Keeping the same spirit up, I am asking the third question over here. Q3) When do you use a User Defined Function and when do you use a Stored Procedure in your development? Personally, I believe that they are different things - they cannot be compared. I could say it would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably many times, even in real life (i.e., production environments). I have personally seen both of these being used interchangeably many times. This is the precise reason for asking this question. When do you use a Function and when do you use a Stored Procedure? What are the pros and cons of each when used instead of the other? If you are going to answer that 'to avoid repeating code, you use a Function' - please think harder! A stored procedure can do the same. In SQL Server Denali, even a stored procedure can return a result just like a Function in a SELECT statement; so if you are going to answer with 'a Function can be used in SELECT, whereas a Stored Procedure cannot' - again, think harder! (link). Now, what do you say? I will post the answers to all three questions, with due credit, next week. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Allow Incoming Responses from Curl On Ubuntu 11.10 - Curl

    - by Daniel Adarve
    I'm trying to get a curl response from an outside server; however, I noticed I can neither ping the server in question nor connect to it. I tried disabling the iptables firewall but had no success. My server is running behind a Cisco Linksys WRTN310N router with the DD-WRT firmware installed, in which I already disabled the firewall. Here are my network settings:

    ifconfig:

        eth0  Link encap:Ethernet  HWaddr 00:26:b9:76:73:6b
              inet addr:192.168.1.120  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::226:b9ff:fe76:736b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:49713 errors:0 dropped:0 overruns:0 frame:0
              TX packets:30987 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:52829022 (52.8 MB)  TX bytes:5438223 (5.4 MB)
              Interrupt:16

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:341 errors:0 dropped:0 overruns:0 frame:0
              TX packets:341 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:27604 (27.6 KB)  TX bytes:27604 (27.6 KB)

    /etc/resolv.conf:

        nameserver 192.168.1.1

    /etc/nsswitch.conf:

        passwd:    compat
        group:     compat
        shadow:    compat
        hosts:     files dns
        networks:  files
        protocols: db files
        services:  db files
        ethers:    db files
        rpc:       db files
        netgroup:  nis

    /etc/host.conf:

        order hosts,bind
        multi on

    /etc/hosts:

        127.0.0.1 localhost
        127.0.0.1 callcenter
        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    /etc/network/interfaces:

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 192.168.1.120
            netmask 255.255.255.0
            network 192.168.1.1
            broadcast 192.168.1.255
            gateway 192.168.1.1

    The URL to which I'm trying to connect is https://www.veripayment.com/integration/index.php. When I ping it in a terminal, here's what I get:

        daniel@callcenter:~$ ping www.veripayment.com
        PING www.veripayment.com (69.172.200.5) 56(84) bytes of data.

        --- www.veripayment.com ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

    Thanks in advance

    Read the article

  • The Retail Week Conference 2012 - Interview with Paul Dickson

    - by user801960
    Recently we attended the Retail Week Conference at the Hilton London Metropole Hotel in London. The conference proves to be an inspirational meeting of retail minds, and the insight gained from both the speakers and the other delegates is invaluable. In particular we enjoyed hearing from Charlie Mayfield, Chairman at John Lewis Partnership, about understanding how the consumer is viewing the ever changing world of retail; a session on how to encourage brand-loyal multichannel activities from Robin Terrell of House of Fraser with Alan White of the N Brown Group, Vince Russell from The Cloud and Lucy Neville-Rolfe from Tesco; and a fascinating session from Tim Steiner, Chief Executive of Ocado, about how the business makes it as easy as possible for consumers to shop on their various platforms, which included some surprising usage statistics. Oracle's own Vice President of Retail, Paul Dickson, also held a session with Richard Pennycook, Group Finance Director at Morrisons, about the role of technology in accelerating and supporting the business strategy. Morrisons' 'Evolve' programme takes a little-and-often approach to updating its technology infrastructure to spread cost and keep the adoption process gentle for staff, and the session explored how the process works and how Oracle's technology underpins the programme to optimise their operations using actionable insight. We had a quick chat with Paul Dickson at the session to get his thoughts on the programme - the video is below. We also filmed the whole presentation, so keep checking back on this blog if you're interested in seeing it.

    Read the article

  • SQL SERVER – Winners – Contest Win Joes 2 Pros Combo (USD 198)

    - by pinaldave
    Earlier this week we ran a contest on the blog where we gave away USD 198 worth of Joes 2 Pros books. We had over 500 responses during the five days of the contest. After removing duplicate and incorrect responses we had a total of 416 valid responses across the 5 days combined. We got the maximum number of correct answers on day 2 and the minimum on day 5. Well, enough of the statistics. Let us go over the winners' names. The winners were selected randomly by one of the book editors of Joes 2 Pros.

    SQL Server Joes 2 Pros Learning Kit (5 Books):

        Day 1 - USA: Philip Dacosta; India: Sandeep Mittal
        Day 2 - USA: Michael Evans; India: Satyanarayana Raju Pakalapati
        Day 3 - USA: Ratna Pulapaka; India: Sandip Pani
        Day 4 - USA: Ramlal Raghavan; India: Dattatrey Sindol
        Day 5 - USA: David Hall; India: Mohit Garg

    I congratulate all the winners on their participation. All of you will receive emails from us; you will have to reply to the email with your physical address. Once you receive the email, please reply within 3 days so we can ship the 5-book kits to you immediately. Bonus winners: additionally, I had announced that every day I would select a winner from the readers who left comments with their favorite blog post. Here are the winners with their favorite blog posts:

        Day 1: Prasanna kumar.D [Favorite Post]
        Day 2: Ganesh narim [Favorite Post]
        Day 3: Sreelekha [Favorite Post]
        Day 4: P.Anish Shenoy [Favorite Post]
        Day 5: Rikhil [Favorite Post]

    All the bonus winners will receive my print book SQL Wait Stats if their shipping address is in India, or a Pluralsight subscription if they are outside India. If you are not a winner of the contest but still want to learn SQL Server, you can get the books from here: Amazon | 1 | 2 | 3 | 4 | 5 | Flipkart | Indiaplaza Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >