Search Results

Search found 18096 results on 724 pages for 'let'.


  • Thank You MySQL Community! MySQL 5.6.9 Release Candidate Available Now!

    - by Rob Young
    The MySQL Community continues its good work in testing and refining MySQL 5.6, and as such the next iteration of the 5.6 Release Candidate is now available for download.  You can get MySQL 5.6.9 here (look under the "Development Releases" tab).  This version is the result of feedback we have gotten since MySQL 5.6.7 was announced at MySQL Connect in late September. As iron sharpens iron, Community feedback sharpens the quality and performance of MySQL so please download 5.6.9 and let us know how we can improve it as we move toward the production-ready product release in early 2013. MySQL 5.6 is designed to meet the agility demands of the next generation of web apps and services and includes across the board improvements to the Optimizer, InnoDB performance/scale and online DDL operations, self-healing Replication, Performance Schema Instrumentation, Security and developer enabling NoSQL functionality.  You can learn all the details and follow MySQL Engineering blogs on all of the key features in this MySQL DevZone article. On a related note, plan to join this week's live webinars to learn more about MySQL 5.6 Self-Healing Replication Clusters and Building the Next Generation of Web, Cloud, SaaS, Embedded Application and Services with MySQL 5.6.  Hurry!  Seating is limited!  As always, thanks for your continued support of MySQL!

    Read the article

  • iOS 5: Enable Android Style Auto Correction Feature With A Simple Trick

    - by Gopinath
    Apple generally doesn’t let its users play with their devices, but these days a few things seem to be slipping through the net. Smart users are able to find hacks and enable new features on iOS devices! A few days ago we heard about the hidden panorama feature built into iOS 5 that could be enabled on a jailbroken device. Here comes another hidden feature unearthed by a smart geek in iOS 5: Android-style auto-correction on the on-screen keyboard. Luckily you don’t need to jailbreak to enable this feature; all you need to do is take a backup of your device, edit a file and restore it back. Boom! That’s it. To enable the auto-correction feature on the iOS 5 on-screen keyboard, follow these steps: Download iBackupBot and install it on your machine. It works on both Windows and Mac OS X. Back up your iPhone, iPod, or iPad with iTunes – plug in your iOS device and sync it. Open iBackupBot, locate your most recent backup and click on it. Scroll down to Library/Preferences/com.apple.keyboard.plist and double-click on it. Replace everything between the two <dict> tags with the following: <key>KeyboardAutocorrectionLists</key> <string>YES</string> Save the plist file, then hit the "Restore From Backup" button in iBackupBot. Reboot your device to see the auto-correction feature in action on your device’s on-screen keyboard. via lifehacker This article, titled iOS 5: Enable Android Style Auto Correction Feature With A Simple Trick, was originally published at Tech Dreams.
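    A minimal sketch of the same plist edit in Python, assuming the backup copy of com.apple.keyboard.plist has been exported to disk (the path below is hypothetical). Note that it only adds the KeyboardAutocorrectionLists key rather than replacing the whole <dict> body, and that plistlib reads both XML and binary plists:

        import plistlib
        from pathlib import Path

        # Hypothetical location of the keyboard plist exported from the iTunes backup
        plist_path = Path("backup/Library/Preferences/com.apple.keyboard.plist")

        # Load the existing preferences dictionary
        with plist_path.open("rb") as f:
            prefs = plistlib.load(f)

        # Key and value taken from the steps above
        prefs["KeyboardAutocorrectionLists"] = "YES"

        # Write the modified plist back in place
        with plist_path.open("wb") as f:
            plistlib.dump(prefs, f)

    The edited file would then be re-imported into the backup and restored with iBackupBot exactly as described above.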

    Read the article

  • Ask the Readers: Which Google Services Do You Use?

    - by Asian Angel
    Nearly everyone uses at least one of Google’s services while browsing each day. What we want to know this week is which Google services do you use? Image by adria.richards Google offers a multitude of services such as e-mail, calendar, and docs to help you manage your online life. Some of you may only use a few of the available services while others are power users. A fair number of businesses and schools have also switched over to Google apps and services for their organization. Whether it is at home, work, or both Google has become a part of our daily lives. Being able to access everything in one place can be extremely useful but equally frustrating if Google’s services experience any downtime. Another concern for some people is the issue of privacy over having so much information stored by a single company. Ultimately the final decision lies with you. Which Google services do you use at home or at work? Let us know in the comments!

    Read the article

  • How can I find the right UV coordinates for interpolating a bezier curve?

    - by ssb
    I'll let this picture do the talking. I'm trying to create a mesh from a bezier curve and then add a texture to it. The problem here is that the interpolation points along the curve do not increase linearly, so points farther from the control point (near the endpoints) stretch and those in the bend contract, causing the texture to be uneven across the curve, which can be problematic when using a pattern like stripes on a road. How can I determine how far along the curve the vertices actually are so I can give a proper UV coordinate? EDIT: Allow me to clarify that I'm not talking about the trapezoidal distortion of the roads. That I know is normal and I'm not concerned about. I've updated the image to show more clearly where my concerns are. Interpolating over the curve I get 10 segments, but each of these 10 segments is not spaced at an equal point along the curve, so I have to account for this in assigning UV data to vertices or else the road texture will stretch/shrink depending on how far apart vertices are at that particular part of the curve.
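    A minimal sketch of one common fix, arc-length reparameterization: sample the curve densely, accumulate chord lengths, and assign each vertex a U coordinate proportional to how far along the curve it really is rather than to its t value. The function names and the use of NumPy are assumptions, not from the original post:

        import numpy as np

        def bezier_point(p0, p1, p2, p3, t):
            # Evaluate a cubic Bezier curve at parameter t (control points as arrays)
            u = 1.0 - t
            return (u**3)*p0 + 3*(u**2)*t*p1 + 3*u*(t**2)*p2 + (t**3)*p3

        def arc_length_u_coords(p0, p1, p2, p3, segments=10, samples=200):
            # Densely sample the curve and accumulate chord lengths as an arc-length estimate
            ts = np.linspace(0.0, 1.0, samples)
            pts = np.array([bezier_point(p0, p1, p2, p3, t) for t in ts])
            cum = np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
            total = cum[-1]

            # Mesh vertices are placed at equal t steps; their U coordinate is the fraction
            # of total arc length covered at that t, so the texture no longer stretches in
            # the straight parts and bunches up in the bend
            vertex_ts = np.linspace(0.0, 1.0, segments + 1)
            return [cum[int(round(t * (samples - 1)))] / total for t in vertex_ts]

    The same cumulative-length table can also be inverted to place the vertices themselves at equal arc-length intervals, which evens out segment spacing as well as the texture.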

    Read the article

  • Code Generation and IDE vs writing per Hand

    - by sytycs
    I have been programming for about a year now. Pretty soon I realized that I needed a great tool for writing code and learned Vim. I was happy with C and Ruby and never liked the idea of an IDE, an attitude encouraged by a lot of reading about programming.[1] However, I started (my first) Java project. In a CS course we were using Visual Paradigm and were encouraged to let the program generate our code from a class diagram. I did not like that idea because: Our class diagram was buggy. Students more experienced in Java said they would write the code by hand. I had never written any Java before and would not understand a lot of the generated code. So I took a different approach and wrote all methods by hand (getters and setters included). My team members wrote their parts (partly generated by VP) in an IDE and I was "forced" to use it too. I realized they had generated equal amounts of code in a shorter amount of time and did not spend a lot of time setting their CLASSPATH and writing scripts for compiling that son of a b***. Additionally we had to implement a GUI and I don't see how we could have done that in a sane manner in Vim. So here is my problem: I fell in love with Vim and the Unix way. But it looks like for getting this job done (on time) the IDE/code generation approach is superior. Do you have similar experiences? Is Java by the nature of the language just more suitable for an IDE/code-generated approach? Or am I lacking the knowledge to produce equal amounts of code by hand? [1] http://heather.cs.ucdavis.edu/~matloff/eclipse.html

    Read the article

  • Oracle OpenWorld Series: Fusion Middleware for Enterprise Applications

    - by Michelle Kimihira
    Continuing from Tuesday’s Oracle OpenWorld Series: Best of Oracle Fusion Middleware blog, let’s focus on what Middleware sessions to attend if you are an Enterprise Applications customer. This Focus On document provides you a roadmap of must-attend sessions and demos. We also have a great line up of customers participating in Fusion Middleware sessions targeted to Enterprise Applications customers. Here are a few:
    Monday, 10/1, 1:45 PM – 2:45 PM: Intuit and Qualcomm join CON9470 – Best Practices for Realizing Greater Returns from Oracle Fusion Middleware Projects (Moscone West, 3003)
    Wednesday, 10/3, 10:15 AM – 11:15 AM: Cognizant joins CON9506 – Oracle Exalogic: The Best Choice for Running Your Applications (Moscone West, 3003)
    Wednesday, 10/3, 5:00 PM – 6:00 PM: Ingersoll Rand joins CON9047 – Efficiently Scaling Oracle E-Business Suite on Oracle Exadata and Oracle Exalogic (Moscone West, 2016)
    Thursday, 10/4, 12:45 PM – 1:45 PM: Allegis Group joins CON9048 – Harness the Power of Oracle Exadata and Oracle Exalogic with PeopleSoft Applications (Moscone West, 3011)
    Additional Information
    Relevant Blogs: Oracle OpenWorld Countdown Begins, Best of Oracle Fusion Middleware, Oracle OpenWorld Blog
    Focus On Docs: Best of Oracle Fusion Middleware, Fusion Middleware for Enterprise Applications
    Product Information on Oracle.com: Oracle Fusion Middleware
    Subscribe to our regular Fusion Middleware Newsletter
    Follow us on Twitter and Facebook

    Read the article

  • Subversion problem, repo has moved

    - by Rudiger
    Hi, I've set up Subversion on a fresh CentOS install. The web view works fine, gives no errors and requests a password, but when I try to access the repository through an svn client (Xcode) it gives error 175011 (Repository has been moved). I've tried some of the solutions out there but with no success. My subversion.conf:
    <Location /repos>
       DAV svn
       SVNParentPath /var/www/html/repos
       # Limit write permission to list of valid users.
       # Require SSL connection for password protection.
       SSLRequireSSL
       AuthType Basic
       AuthName "Authorization Realm"
       AuthUserFile /etc/svn-auth-conf
       Require valid-user
    </Location>
    My Apache DocumentRoot: /var/www/html
    I've only set up one svn repository so far so there shouldn't be any conflicts there. If you need any more info let me know. Thanks

    Read the article

  • Double GPRS/EDGE speed with two mobile phones at once?

    - by Patrick
    Hi, I'm using my mobile phone to connect to the internet in an area where only GPRS/EDGE is available. To increase the connection speed I would like to use a technique called connection teaming, e.g. I would use two mobile phones / USB sticks to go online with different providers at the same time and let the software distribute requests over both connections. My questions are: Is there software available to do connection teaming? It sounds like Midpoint was able to do it, but it's over 7 years old and is unlikely to run on Windows 7. Has anybody tried this? Thanks a lot, Patrick

    Read the article

  • Is Joel Test really a good gauging tool?

    - by henry
    I just learned about the Joel Test. I have been a computer programmer for 22 years, but somehow never heard about it before. I consider my best job so far to be at a small investment managing company with 30 employees and only 3 people in the IT department. I am no longer with them but I had been working there for 5 years – my longest streak with any given company. To my surprise they scored extremely poorly on the Joel Test. The only two questions I would answer “yes” to are #4: Do you have a bug database? and #9: Do you use the best tools money can buy? Everything else is either “sometimes” or a straight “no”. Here is what I liked about the company however: a) Good pay; they bragged about it to my face and I bragged about it to their face, so it was almost like a family environment. b) I always knew the big picture. When writing code to solve a particular problem there was no ambiguity about the business nature of that problem. Even though we did not always have written specs we could ask business users a question anytime, often yelling it across the floor. I could even talk to executives any time I felt like doing it: no appointment necessary. c) Immediate feedback. Once we implemented a solution and made business users happy they immediately let us know; we (programmers) became the heroes of the moment. d) No red tape. I could always buy any tools I deemed necessary, and design solutions the way my professional judgment dictated. e) Flexibility. If I had a mid-day dental appointment near my house rather than near the office, I would send an email to the company: "FYI: I work from home today". As long as one of the 3 IT guys was on the floor (to help traders in case their monitors went dark) they did not care where the 2 others were. So the question thus becomes: how valuable is the Joel Test? Why bother with it?

    Read the article

  • Enabling XML-documentation for code contracts

    - by DigiMortal
    One nice feature that code contracts offer is updating of code documentation. If you are using the source code documenting features of Visual Studio then code contracts may automate some tasks you would otherwise have to do manually. In this posting I will show you some XML documentation files with documented contracts. I will also explain how this feature works. Enabling XML documentation in project settings: as a first thing, let’s enable generation of code documentation under project settings. Open the project properties, move to the Build page and tick the checkbox called “XML documentation file”. Save the project settings and rebuild the project. When the project is built, go to the bin/Debug folder and open the XML file. Here is my XML. <?xml version="1.0"?> <doc> <assembly> <name>Eneta.Examples.CodeContracts.Testable</name> </assembly> <members> <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer"> <summary> Class for generating random integers in user specified range. </summary> </member> <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)"> <summary> Constructor of Randomizer. Initializes Randomizer class. </summary> <param name="generator">Instance of random number generator.</param> </member> <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)"> <summary> Returns random integer in given range. </summary> <param name="min">Minimum value of random integer.</param> <param name="max">Maximum value of random integer.</param> </member> </members> </doc> You can see nothing about code contracts here. Enabling code contracts documentation: code contracts have their own settings and conditions for documentation. Open the project properties and move to the Code Contracts tab. From the “Contract Reference Assembly” dropdown select Build and tick the checkbox “Emit contracts into XML doc file”. And again – save the project settings, build the project and move to the bin/Debug folder. Now you can see that there are two files for XML documentation: <assembly name>.XML and <assembly name>.old.XML. The first file is the documentation with contracts, the second file is the original documentation without contracts. Let’s see now what is inside our new XML documentation file. <?xml version="1.0"?> <doc> <assembly> <name>Eneta.Examples.CodeContracts.Testable</name> </assembly> <members> <member name="T:Eneta.Examples.CodeContracts.Testable.Randomizer"> <summary> Class for generating random integers in user specified range. </summary> </member> <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.#ctor(Eneta.Examples.CodeContracts.Testable.IRandomGenerator)"> <summary> Constructor of Randomizer. Initializes Randomizer class. </summary> <param name="generator">Instance of random number generator.</param> </member> <member name="M:Eneta.Examples.CodeContracts.Testable.Randomizer.GetRandomFromRangeContracted(System.Int32,System.Int32)"> <summary> Returns random integer in given range. </summary> <param name="min">Minimum value of random integer.</param> <param name="max">Maximum value of random integer.</param> <requires description="Min must be less than max" exception="T:System.ArgumentOutOfRangeException"> min &lt; max</requires> <exception cref="T:System.ArgumentOutOfRangeException"> min &gt;= max</exception> <ensures description="Return value is out of range"> Contract.Result&lt;int&gt;() &gt;= min &amp;&amp; Contract.Result&lt;int&gt;() &lt;= max</ensures> </member> </members> </doc> As you can see, the code contracts are pretty well documented. The messages that I provided with code contracts are also available in the documentation. If I wrote very good and informative messages then these messages are very useful in the contracts documentation as well. Code contracts and Sandcastle: Sandcastle knows nothing about code contracts by default. There is a separate package of files for Sandcastle that is provided by the code contracts installation. You can read from the code contracts manual: “Sandcastle (http://www.codeplex.com/Sandcastle) is a freely available tool that generates help files and web sites describing your APIs, based on the XML doc comments in your source code. The CodeContracts install contains a set of files that can be copied over a Sandcastle installation to take advantage of the additional contract information. The produced documentation adds a contract section to methods with declared requires and/or ensures. In order for Sandcastle to produce Contract sections, you need to patch a number of files in its installation. Please refer to the Sandcastle Readme.txt found under Start Menu/CodeContracts/Sandcastle for instructions. A future release of Sandcastle will hopefully support contract sections without the need for this patching step.” Integrating code contracts documentation into Sandcastle will be one of my next postings about code contracts. Conclusion: if you are using code documentation then documentation about code contracts can be added to it very easily. All you have to do is enable XML documentation for contracts and build your project. Later you can use the Sandcastle files provided by the code contracts installer to integrate contracts documentation into your output documentation package.

    Read the article

  • Watch @marcorus and @ferrarialberto sessions online #teched #msteched #tee2012

    - by Marco Russo (SQLBI)
    In June I participated in two TechEd editions (North America and Europe). Alberto and I delivered a pre-conference and two sessions about Tabular. Both conferences provide recorded sessions freely available on Channel 9, so you can compare which one has been delivered in the best way! If you have to choose between the two versions, consider that in North America we receive more questions during and after the session (still recording), increasing the interaction, whereas in Europe questions usually come after the session has finished (so no recording available). If you’re curious, watch both and let me know which version you prefer, especially for Multidimensional vs Tabular! BISM: Multidimensional vs. Tabular (TechEd North America 2012) BISM: Multidimensional vs. Tabular (TechEd Europe 2012) Many-to-Many Relationships in BISM Tabular (TechEd North America 2012) Many-to-Many Relationships in BISM Tabular (TechEd Europe 2012) If you are interested in learning SSAS Tabular, don’t miss the next SSAS Tabular Workshop online on September 3-4, 2012. We are also planning dates for another roadshow in Europe this fall and I’m happy to announce we’ll have two dates in Germany, too. More updates in the coming weeks.

    Read the article

  • Truecrypt or default Disk Utility on Mac?

    - by Kaushik Gopal
    Windows by default doesn't come with a password-protected folder option (other than Win7 Ultimate), so I used to swear by TrueCrypt, which was great. But I've read in a couple of places that Mac OS X by default has a way of protecting folders using the built-in Disk Utility. So my question is which is better, using TrueCrypt on the Mac or just sticking with the default Disk Utility app? Can somebody let me know the advantages of one over the other? A summary from the very helpful answers below: if you're looking for cross-platform usage, TrueCrypt is the obvious tool of choice; if you're looking for convenience and intend to stick only to the Mac platform, use the default Disk Utility app.

    Read the article

  • Starting and stopping X11 and LXDE from command line

    - by Radian
    I have a Raspberry Pi with Debian Wheezy (Raspbian) and so far I've managed to learn quite a lot about Linux just playing around, but I have a few questions for all you seasoned Linux pros out there. 1) From command line, if I execute startx, X11 will launch followed by LXDE. If I had a monitor connected, I'm imagining I would see a transition from command line to the desktop environment. Can I launch X11 first with x, then start LXDE on top of X11 afterwards with /etc/init.d/lxdm start (is this correct?) and get to the same result as startx? 2) Instead, let's say I executed /etc/init.d/lxdm start alone, would X11 start automatically (since LXDE relies on X11)? 3) From desktop, if I CTRL+ALT+F1 to get back to command line, then I should be able to shutdown LXDE using /etc/init.d/lxdm stop. Does X11 automatically close with the termination of LXDE? 4) What is the proper/safe way to shutdown X11? Thanks

    Read the article

  • How can I tell if ZFS (zfs-fuse) dedup/compression is applied to a particular file?

    - by asari
    I have a zfs-formatted partition using zfs-fuse for Linux (Ubuntu). I had used it for a while, and then enabled dedup and compression on it (zfs set compression=on/dedup=on). Now I think I have some files that are dedup'ed and compressed, and files that are not yet. It was OK, but sometimes I was confused. Let's see, the following command would consume almost 4GB of my zfs storage: cp oldfile.4GB newfile.4GB .. and this would consume almost zero: cp newfile.4GB newfile.4GB.2 This is because the old file is not yet compressed, so dedup has not happened, I think. My idea is -- if I can find old files that are not yet dedup'ed/compressed, I can perform a batch copy/rename/remove on them to eliminate duplicity and redundancy. But how can I check that? I know that re-copying the whole contents of my storage should work (even better with checking the time stamp of each file), but I'd be happier if I had a zfsstat-like tool that shows some file properties.
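    One rough way to spot candidates without a dedicated tool is to compare each file's apparent size with its on-disk block usage; a minimal sketch in Python (the mount point is hypothetical). Note this mostly reveals compression savings rather than dedup, since deduplicated blocks are still charged to each file:

        import os

        def likely_uncompressed(path, threshold=0.95):
            # If on-disk usage is close to the apparent size, the file was
            # probably written before compression was enabled
            st = os.stat(path)
            apparent = st.st_size
            on_disk = st.st_blocks * 512   # st_blocks is counted in 512-byte units
            return apparent > 0 and on_disk / apparent >= threshold

        # Walk the dataset and print files that look like rewrite candidates
        for root, _dirs, files in os.walk("/tank/data"):
            for name in files:
                full = os.path.join(root, name)
                try:
                    if likely_uncompressed(full):
                        print(full)
                except OSError:
                    pass   # file may vanish or be unreadable while walking

    Files the heuristic flags could then be rewritten (copy, remove, rename) so their blocks pass through the now-enabled compression and dedup pipeline.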

    Read the article

  • How to install a wiki server

    - by gvalero87
    I would like to install a wiki on my server so I can let people I know edit documents. So I don't want it to be as open as Wikipedia. I would like something like Google Docs (the thing is, Google Docs doesn't handle images and tables in documents very well), though it doesn't have to be real-time the way Google does it. The first thing I noticed looking around the internet is that there are different solutions. Maybe what I'm looking for is simply the easiest one to install: if I want to use wiki technology, I don't have much time to invest in troubleshooting errors. Thanks for your opinions

    Read the article

  • Windows 8 Start Screen Shortcuts for Desktop Apps

    - by Anna
    I am facing an issue regarding Windows 8 shortcuts. My installed desktop app is not getting pinned to the Start screen for all users. If an admin installs the desktop app, then the app's shortcut is pinned only for the admin and not for other users, which we never faced with previous versions of Windows. Can anyone provide the reason for this issue or any reference for it? Kindly let me know why Microsoft has designed the Windows 8 Start Screen with this drawback. Regards.

    Read the article

  • Running make for Nginx throws a “multiple target patterns” error

    - by Justin Meltzer
    When I run make inside my installed nginx directory I get the output: make -f objs/Makefile make[1]: Entering directory `/home/ec2-user/nginx/nginx-1.2.4' objs/Makefile:110: *** multiple target patterns. Stop. make[1]: Leaving directory `/home/ec2-user/nginx/nginx-1.2.4' make: *** [build] Error 2 I am on an Amazon Linux AMI. The steps I took from the beginning were: wget /path/to/nginx/tarball, tar xvf nginx-1.2.4.tar.gz, cd nginx-1.2.4, ./configure --prefix=/nginx --a-bunch-of-other-options. Then I ran make. Also, I installed make by running sudo yum install make. Please let me know if there's any other information I should be providing.

    Read the article

  • ASP.NET hosting: better, faster, cheaper

    - by Fabrice Marguerie
    After seven years with webhost4life, it was time to move on, especially because of all the troubles with webhost4life due to their internal migration to a new hosting environment (the company has been bought out). I've just moved all my websites elsewhere. I'm now using Arvixe and OrcsWeb. I use OrcsWeb for metaSapiens.com. OrcsWeb kindly offers me free ASP.NET hosting because I'm a Microsoft MVP. I'd like to publicly thank OrcsWeb for this, and I invite you to have a look at what they have to offer. I use Arvixe for all my other websites, the major ones being SharpToolbox.com, JavaToolbox.com, AxToolbox.com, Proagora.com, LinqInAction.net, ClairDeBulle.com, and madgeek.com. Moving all my websites wasn't a walk in the park, but it was well worth it. Let's consider what I get with Arvixe: unlimited diskspace, unlimited data transfer, unlimited domains, dedicated application pools, unlimited POP3 and IMAP mailboxes, unlimited SQL Server 2008 databases, unlimited MySQL 5 databases, .NET 1.1, 2, 3.5 and 4, full trust, IIS 7, daily backups, a powerful and easy to use control panel, and more! All of this for $8 per month. If you don't need all of the above features, you can even get an offer as cheap as $5 per month. You can even get better rates if you use coupon codes, such as TOPHOST (30% discount) or MVCHOSTING (20% discount). All in all, I paid only $134 for two years for a great hosting service! Maybe it's time for you to move too? Disclaimer: the links to OrcsWeb and Arvixe are affiliate links that may bring me some money home if you sign up.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant, this is simply the physical stats of all indexes; disabled indexes are ignored however. In its simplest form this dmv can be executed as:   The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLS, or more accurately, in order to choose when to have the NULLS you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step. DECLARE @Start DATETIME DECLARE @First DATETIME DECLARE @Second DATETIME DECLARE @Third DATETIME DECLARE @Finish DATETIME SET @Start = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips SET @First = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips SET @Second = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips SET @Third = GETDATE() SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips SET @Finish = GETDATE() SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT] , DATEDIFF(ms, @First, @Second) AS [SAMPLED] , DATEDIFF(ms, @Second, @Third) AS [LIMITED] , DATEDIFF(ms, @Third, @Finish) AS [DETAILED] Running this code will give you 4 result sets; DEFAULT will have 12 columns full of data and then NULLS in the remainder. SAMPLED will have 21 columns full of data. LIMITED will have 12 columns of data and then NULLS in the remainder. DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. The final result set has some details that are worth noting: Running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read: SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName] … FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips gives us SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] , OBJECT_NAME([ddips].[object_id]) AS [TableName], [i].[name] AS [IndexName] , ….. FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id] AND [ddips].[object_id] = [i].[object_id] These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g. SELECT * FROM [sys].[dm_db_index_physical_stats] (DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address') , 1, NULL, NULL) AS ddips Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and once they have been validated, use them in the function: DECLARE @db_id SMALLINT; DECLARE @object_id INT; SET @db_id = DB_ID(N'AdventureWorks_2008'); SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address'); IF @db_id IS NULL BEGIN PRINT N'Invalid database'; END ELSE IF @object_id IS NULL BEGIN PRINT N'Invalid object'; END ELSE BEGIN SELECT * FROM sys.dm_db_index_physical_stats (@db_id, @object_id, NULL, NULL , 'LIMITED'); END; GO In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area) it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are: We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level as this is a quick look at the dmv and they are pretty self explanatory. The final columns revealed by querying this view in the DEFAULT mode are avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode. fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode. avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode. page_count. Total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases and that means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, that should be applied. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review as this indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index.
    These are purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently the DBA should potentially consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, which should be analysed to avoid new records needing to be inserted in the middle of the index rather than always being added to the end. * – it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • deploying a Python application from a PHP developer

    - by user1218776
    I'm a little confused about the deployment process for Python. Let's say you create a brand new project with virtualenv: source bin/activate, pip install a few libraries, write a simple hello world app, pip freeze the dependencies. When I deploy this code onto a machine, do I first need to make sure the machine has the virtualenv activated (sourced) before installing dependencies? I don't mean to sound like a total noob, but in the PHP world I don't have to worry about this because it's already part of the project. All the dependencies are registered with the autoloader in place. The steps would be: rsync the files (or any other method), source bin/activate, pip install the dependencies from the pip freeze output file. It feels awkward, or just wrong and very error prone. What are the correct steps to take? I've searched around but it seems many tutorials/articles make an assumption that anyone reading the article has past Python experience (imo).
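    A minimal deploy sketch under the same assumptions (requirements.txt is the pip freeze output checked into the project; the host and paths are hypothetical, and python -m venv is used here although the standalone virtualenv tool works the same way). The key point is that you can call the virtualenv's own pip and python binaries directly, so nothing needs to be "sourced" on the server:

        import subprocess

        REMOTE_HOST = "user@server"            # hypothetical server
        APP_DIR = "/srv/myapp"                 # hypothetical install path

        def run(cmd):
            # Echo and run a shell command, stopping on the first failure
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        # 1. Copy the project (excluding any local virtualenv directories)
        run(f"rsync -av --exclude 'bin' --exclude 'lib' --exclude 'venv' ./ {REMOTE_HOST}:{APP_DIR}/")

        # 2. Create the virtualenv on the server if it is missing, then install the
        #    pinned dependencies with the venv's own pip (no activation needed)
        run(f"ssh {REMOTE_HOST} 'cd {APP_DIR} && "
            f"test -d venv || python -m venv venv; "
            f"venv/bin/pip install -r requirements.txt'")

    Running the app the same way (venv/bin/python app.py) keeps the environment self-contained, much like a project-local vendor directory in the PHP world.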

    Read the article

  • Microsoft Sync Framework

    - by kaleidoscope
    Introduction: It is a platform that enables collaboration and offline access for applications, services and devices. Sync Framework features technologies and tools that enable roaming, data sharing and taking data offline. Moreover, developers can build synchronization ecosystems that integrate any application with data from any store, by using any protocol over any network.
    Highlights:
    * Add sync support to new and existing applications, services and devices
    * Enable collaboration and offline capabilities for any application
    * Roam and share information from any data store, over any protocol and over any network configuration
    * Leverage sync capabilities exposed in Microsoft technologies to create sync ecosystems
    * Extend the architecture to support custom data types, including files
    Benefits of using Sync Framework:
    * An extensible model that lets you integrate multiple data sources into a synchronization ecosystem.
    * A managed API for all components and a native API for select components.
    * Conflict handling for automatic and custom resolution schemes.
    * Filters that let you synchronize a subset of data, such as only those files that contain images.
    * A compact and efficient metadata model that enables synchronization for virtually any participant, without significant changes to the data store:
    - Any data store: add synchronization to a wide range of applications, services and devices.
    - Any data type: introduce new data types to synchronize.
    - Any protocol: use existing architectures and protocols to synchronize data. The transport-agnostic architecture allows integration of synchronization into a variety of protocols, including over-the-air and embedded devices.
    - Any network configuration: enable synchronization for your applications, devices and services in true peer-to-peer or hub-and-spoke configurations. Easily recover from network interruptions. Reduce network traffic by efficiently selecting changes to synchronize.
    Technorati Tags: Anish Sharma, Microsoft Sync Framework

    Read the article

  • Estimating cost of labor for a controlled experiment

    - by Lorin Hochstein
    Let's say you are a software engineering researcher and you are designing a controlled experiment to compare two software technologies or techniques (e.g., TDD vs. non-TDD, Python vs. Go) with respect to some qualities of interest (e.g., quality of resulting code, programmer productivity). According to your study design, participants will work alone to implement a non-trivial software system. You estimate it should take about six months for a single programmer to complete the task. You also estimate via power analysis that you will need around sixty study participants to obtain statistically significant results, assuming the technologies actually do yield different outcomes. To maximize external validity, you want to use professional programmers as study participants. Unfortunately, it isn't possible to find professional programmers who can volunteer for several months to work full-time on implementing a software system. You decide to go the simplest route and contract with a large IT consulting firm to obtain access to programmers to participate in the study. What is a reasonable estimate of the cost range, per person-month, for the programming labor? Assume you are constrained to work with a U.S.-based firm, but it doesn't matter where in the U.S. the firm itself or the programmers or located. Note: I'm looking for a reasonable order-of-magnitude range suitable for back-of-the-envelope calculations so that when people say "Why doesn't somebody just do a study to measure X", I can say, "Because running that study properly would cost $Y", and have a reasonable argument for the value of $Y.
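    As a back-of-the-envelope sketch of the arithmetic (the per-person-month rate is purely an assumed range, not a quoted figure; fully loaded US consulting rates vary widely once overhead and margin are included):

        # Rough cost estimate for the hypothetical study described above
        participants = 60
        months_each = 6
        person_months = participants * months_each           # 360 person-months

        # Assumed fully loaded consulting rates, $ per person-month (not sourced)
        rate_low, rate_high = 10_000, 25_000

        low = person_months * rate_low                        # $3.6M
        high = person_months * rate_high                      # $9.0M
        print(f"{person_months} person-months: ${low:,} to ${high:,} in labor alone")

    Whatever rate you plug in, the labor bill lands in the millions of dollars before analysis and overhead, which is the kind of $Y figure the question is after.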

    Read the article

  • High level vs. low level programming. Do I really have to choose?

    - by EpsilonVector
    Every once in a while I'm asked in interviews which I like the best- low level or high level. It seems to me that the implicit message is that they are both a specialty and they want to know which direction I'm heading. The trouble is, I seem to like both. Low level is extremely challenging and often requires a great deal of esoteric knowledge. High level is where all the sexy things happen: applications that people use directly, results that can be easily demonstrated (showed off) in a way that is accessible to everybody, and you get to work with really advanced tools and interact with new technologies. I would really love to do both, even if it means alternating between them (I doubt there are jobs that will let me do both simultaneously), but I'm guessing that the industry rewards specialists more than generalists. Will it really be problematic career wise if I never choose one over the other? Is it practical to alternate between the two in the sense that if I were to leave a job doing one of them, I should experience no "friction" trying to get a job doing the other (assuming I'm reasonably in the loop)? Are there career opportunities where you get to do both? Do I really have to choose one over the other?

    Read the article

  • Ubuntu 9.0.4 Presario S4000NX Fan Speed

    - by Chris C
    I recently installed Ubuntu 9.04 on a Presario S4000NX and the CPU fan is kept at max speed. With Windows XP installed the fan speed would increase/decrease as required. I've installed lm-sensors and run sensors-detect. It recommended that I load the following modules, which I did: smsc47m192 i2c-i801 When running sensors-detect it gave me this strange message: Trying family SMSC Found SMSC LPC47M15x/192/997 Super IO Fan Sensors (but not activated) Running the sensors command gives me a list of voltages and the CPU temperature but doesn't list any fans. After doing some Internet research I then tried to load the smsc47m1 module but I get the following error: FATAL: Error inserting smsc47m1 (/lib/modules/2.6.28-15-generic/kernel/drivers/hwmon/smsc47m1.ko): no such device The file smsc47m1.ko does exist in the listed folder. Any suggestions for getting the fan speed (and the noise) down in Ubuntu? Thanx. - Chris P.S. - I would have put better tags but Server Fault wouldn't let me.

    Read the article

  • In Joomla In htaccess REQUEST_URI is always returning index.php instead of SEF URL

    - by Saumil
    I have installed JoomSEF version 3.9.9 with Joomla 1.5.25. Now I want to force https for some sections of my site (e.g. URIs starting with /events/) while keeping the rest of the URLs on http. I am setting rules in the .htaccess file but not getting the output I expect. I am checking REQUEST_URI for the SEF URLs but always getting index.php as the URI. Here is my .htaccess code:
    ########## Begin - Custom redirects
    #
    # If you need to redirect some pages, or set a canonical non-www to
    # www redirect (or vice versa), place that code here. Ensure those
    # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
    #
    ########## End - Custom redirects
    # Uncomment following line if your webserver's URL
    # is not directly related to physical file paths.
    # Update Your Joomla! Directory (just / for root)
    # RewriteBase /
    ########## Begin - Joomla! core SEF Section
    #
    RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
    #
    # If the requested path and file is not /index.php and the request
    # has not already been internally rewritten to the index.php script
    RewriteCond %{REQUEST_URI} !^/index\.php
    # and the request is for root, or for an extensionless URL, or the
    # requested URL ends with one of the listed extensions
    RewriteCond %{REQUEST_URI} (/[^.]*|\.(php|html?|feed|pdf|raw))$ [NC]
    # and the requested path and file doesn't directly match a physical file
    RewriteCond %{REQUEST_FILENAME} !-f
    # and the requested path and file doesn't directly match a physical folder
    RewriteCond %{REQUEST_FILENAME} !-d
    # internally rewrite the request to the index.php script
    RewriteRule .* index.php [L]
    #
    ########## End - Joomla! core SEF Section
    Here is my rule (the example site URL is http://mydomain.com/events):
    RewriteCond %{REQUEST_URI} ^/(events)$
    RewriteCond %{HTTPS} !ON
    RewriteRule (.*) https://%{REQUEST_HOST}%{REQUEST_URI}/$1 [L,R=301]
    I am not getting why REQUEST_URI is referring to index.php even though the URL in the address bar is like http://mydomain.com/events. I am using JoomSEF (a Joomla extension for SEF URLs). If I remove the other rules from the .htaccess file then Joomla stops working. I am not getting a way to handle this as I am not an expert. Please let me know if someone has passed through the same situation and has a solution, or suggest some workaround. Thanks

    Read the article
