Search Results

Search found 2465 results on 99 pages for 'dave hillier'.

Page 36/99 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • What are the disadvantages of self-encapsulation?

    - by Dave Jarvis
    Background: Tony Hoare's billion dollar mistake was the invention of null. Subsequently, a lot of code has become riddled with null pointer exceptions (segfaults) when software developers try to use (dereference) uninitialized variables. In 1989, Wirfs-Brock and Wilkerson wrote: Direct references to variables severely limit the ability of programmers to refine existing classes. The programming conventions described here structure the use of variables to promote reusable designs. We encourage users of all object-oriented languages to follow these conventions. Additionally, we strongly urge designers of object-oriented languages to consider the effects of unrestricted variable references on reusability.
    Problem: A lot of software, especially in Java, but likely in C# and C++, often uses the following pattern: public class SomeClass { private String someAttribute; public SomeClass() { this.someAttribute = "Some Value"; } public void someMethod() { if( this.someAttribute.equals( "Some Value" ) ) { // do something... } } public void setAttribute( String s ) { this.someAttribute = s; } public String getAttribute() { return this.someAttribute; } } Sometimes a band-aid solution is used by checking for null throughout the code base: public void someMethod() { assert this.someAttribute != null; if( this.someAttribute.equals( "Some Value" ) ) { // do something... } } public void anotherMethod() { assert this.someAttribute != null; if( this.someAttribute.equals( "Some Default Value" ) ) { // do something... } } The band-aid does not always avoid the null pointer problem: a race condition exists. The race condition is mitigated using: public void anotherMethod() { String someAttribute = this.someAttribute; assert someAttribute != null; if( someAttribute.equals( "Some Default Value" ) ) { // do something... } } Yet that requires two statements (assignment to a local copy and a check for null) every time a class-scoped variable is used to ensure it is valid.
    Self-Encapsulation: Ken Auer's Reusability Through Self-Encapsulation (Pattern Languages of Program Design, Addison Wesley, New York, pp. 505-516, 1994) advocated self-encapsulation combined with lazy initialization. The result, in Java, would resemble: public class SomeClass { private String someAttribute; public SomeClass() { setAttribute( "Some Value" ); } public void someMethod() { if( getAttribute().equals( "Some Value" ) ) { // do something... } } public void setAttribute( String s ) { this.someAttribute = s; } public String getAttribute() { String someAttribute = this.someAttribute; if( someAttribute == null ) { setAttribute( createDefaultValue() ); } return someAttribute; } protected String createDefaultValue() { return "Some Default Value"; } } All duplicate checks for null become superfluous: getAttribute() ensures the value is never null at a single location within the containing class. Efficiency arguments should be fairly moot -- modern compilers and virtual machines can inline the code when possible. As long as variables are never referenced directly, this also allows for proper application of the Open-Closed Principle.
    Question: What are the disadvantages of self-encapsulation, if any? (Ideally, I would like to see references to studies that contrast the robustness of similarly complex systems that use and don't use self-encapsulation, as this strikes me as a fairly straightforward testable hypothesis.)

    Read the article

  • How long can you be out of the MS market before it affects your career [closed]

    - by dave
    I've been working with .Net since it first came out and have done my best to use the latest and greatest things from Redmond. That being said, I've been working for the past year in the Python/Unix/Web world. In order to keep myself relevant in the MS world, I've been working part-time on a WPF project but I do not know how much longer that work will continue. So my question is: If I were to move totally to the Unix/Python/Web world, how long could I stay there before it starts getting hard to get another MS job? I am trying not to burn bridges in my career as I've found MS jobs pay better and tend to be more plentiful. PS: I like my Python job since it is something new and I get to work from home. It has provided a different view on coding that I've found useful. EDIT: I was out of the MS market for 12 months before attempting to get another MS job. No-one said "Gee you've been gone a while" but I did get a conspicuous lack of responses to job applications. My feeling is that the head-hunters do not bother to look beyond your last job. In the end, I got employment via my own network rather than the pimps. So, to answer my question: "not long, especially if you trust your career to head hunters."

    Read the article

  • OUYA and Unity set up problems

    - by Atkobeau
    I'm having trouble with the Unity / OUYA plugin. I'm using Unity 4 with the latest update on a Windows 7 machine. When I open the starter kit and try to compile the plugin I get the following error: Picked up _JAVA_OPTIONS: -Xmx512M And if I try to Build and Run I get this error: Error building Player: ArgumentException: Illegal characters in path. I'm stumped; I've gone through lots of forum posts here and on Stack Overflow and I can't seem to resolve it. My environment variables look like this: PATH - C:\Users\dave\Documents\adt-bundle-windows-x86_64-20130219\sdk\tools; C:\Users\dave\Documents\adt-bundle-windows-x86_64-20130219\sdk\platform-tools\ JAVA_HOME - C:\Program Files (x86)\Java\jdk1.6.0_45\ Everything in the OUYA Panel is white. Any ideas?

    Read the article

  • SilverlightShow for 20-26 Dec 2010

    - by Dave Campbell
    Check out the Top Five most popular news at SilverlightShow for last week (20 - 26 Dec 2010). The most popular news for last week is Ryan Alford's solution on handling an error in Silverlight 4 when using Entity Framework 4, followed by Jeremy Likness' video on building an RSS Feed Reader in Silverlight. Here is SilverlightShow's weekly top 5: Silverlight 4 - Productivity Power Tools and EF4 A Silverlight MVVM Feed Reader from Scratch in 30 Minutes Resizable Grid Using Thumb Controls A Simplified Grid Markup for Silverlight and WPF Announcing the Winner of Telerik Silverlight controls in SilverlightShow Post-webinar Survey Visit and bookmark SilverlightShow. Stay in the 'Light

    Read the article

  • Content appearing under multiple categories; anything I can do to prevent duplicate penalty?

    - by dave
    I'm working with a CMS that allows me to post content into multiple categories. So, I have this link: www.site.com/category/green-cars Here are the GREEN cars TITLE: A Big green car INTRO: this is a great big green car. But then I have this link: www.site.com/category/big-cars Here are the BIG cars TITLE: A Big green car INTRO: this is a great big green car. So essentially, for every item of content, the header and the intro sentence are the same regardless of the category the item appears in. Will a search engine penalise the site for having the same content in this way? I've looked at canonical links, but I don't think this is relevant here. All my content points to the same page - but the content may appear in multiple categories first. Or am I worrying about nothing? Thanks.

    Read the article

  • Parsing T-SQL – The easy way

    - by Dave Ballantyne
    Every once in a while, I hit an issue that would require me to interrogate/parse some T-SQL code. Normally, I would shy away from this and attempt to solve the problem in some other way. I have written parsers before in the past using LEX and YACC, and as much fun and awesomeness as that path is, I couldn't justify the time it would take. However, this week I have been faced with just such an issue, and at the back of my mind I can remember reading through the SQL Server 2012 feature pack and seeing something called "Microsoft SQL Server 2012 Transact-SQL Language Service". This is described there as: "The SQL Server Transact-SQL Language Service is a component based on the .NET Framework which provides parsing validation and IntelliSense services for Transact-SQL for SQL Server 2012, SQL Server 2008 R2, and SQL Server 2008." Sounds like just what I was after. Documentation is very scant on this, so don't take what follows as best practice or best use, just a practice and a use. Knowing roughly what I was looking for, I found the relevant assembly in the GAC, which is the simply named 'Microsoft.SqlServer.Management.SqlParser'. Even knowing that, you won't find much in terms of documentation if you do a web search, but you will find the MSDN documentation that lists the members and methods etc. The "Scanner" class sounded the most appropriate for my needs, as it is described as "Scans Transact-SQL searching for individual units of code or tokens." After a bit of poking around, the code I ended up with was something like: [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.SqlParser") | Out-Null $ParseOptions = New-Object Microsoft.SqlServer.Management.SqlParser.Parser.ParseOptions $ParseOptions.BatchSeparator = 'GO' $Parser = new-object Microsoft.SqlServer.Management.SqlParser.Parser.Scanner($ParseOptions) $Sql = "Create Procedure MyProc as Select top(10) * from dbo.Table" $Parser.SetSource($Sql,0) $Token=[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::TOKEN_SET $Start =0 $End = 0 $State =0 $IsEndOfBatch = $false $IsMatched = $false $IsExecAutoParamHelp = $false while(($Token = $Parser.GetNext([ref]$State ,[ref]$Start, [ref]$End, [ref]$IsMatched, [ref]$IsExecAutoParamHelp ))-ne [Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]::EOF) { try{ ($TokenPrs =[Microsoft.SqlServer.Management.SqlParser.Parser.Tokens]$Token) | Out-Null $TokenPrs $Sql.Substring($Start,($end-$Start)+1) }catch{ $TokenPrs = $null } } As you can see, the $Sql variable holds the SQL to be parsed; that is pushed into the $Parser object using SetSource, and then we use GetNext until the EOF token is returned. GetNext will also return the Start and End character positions within the source string of the parsed text. This script's output is: TOKEN_CREATE Create TOKEN_PROCEDURE Procedure TOKEN_ID MyProc TOKEN_AS as TOKEN_SELECT Select TOKEN_TOP top TOKEN_INTEGER 10 TOKEN_FROM from TOKEN_ID dbo TOKEN_TABLE Table Note that the '(', ')' and '*' characters have returned a token type that is not present in the Microsoft.SqlServer.Management.SqlParser.Parser.Tokens enum, which has caused an error that has been caught in the catch block. Fun, fun, fun: simple T-SQL parsing. Hope this helps someone in the same position; let me know how you get on.

    Read the article

  • Is using ELSE bad programming?

    - by dave.b
    I've often come across bugs that have been caused by using the ELSE construct. A prime example is something along the lines of: If (passwordCheck() == false){ displayMessage(); }else{ letThemIn(); } To me this screams security problem. I know that passwordCheck is likely to be a boolean, but I wouldn't place my application's security on it. What would happen if it's a string, an int, etc.? I usually try to avoid using ELSE, and instead opt for two completely separate IF statements to test for what I expect. Anything else then either gets ignored OR is specifically handled. Surely this is a better way to prevent bugs / security issues entering your app. How do you guys do it?

    Read the article

  • Does anybody know what happens to money spent in the Ubuntu One Music Store?

    - by Dave
    I remember reading on the Canonical website around the launch of the Ubuntu One Music Store that after revenue was divided between Canonical, 7digital and the artists involved, all profit or a significant percentage would be donated to a charity. This information has either disappeared from their website or my detective skills have failed me once more. Does anybody happen to understand the breakdown of revenue generated by the Ubuntu One Music Store, or even know where to find this information? Also it would be useful to know which charity benefits from this. (Not that that would impact upon my purchases or send me running to iTunes. ;) Promise)

    Read the article

  • Linear search vs Octree (Frustum cull)

    - by Dave
    I am wondering whether I should look into implementing an octree of some kind. I have a very simple game which consists of a 3d plane for the floor. There are multiple objects scattered around on the ground, each one has an aabb in world space. Currently I just do a loop through the list of all these objects and check if its bounding box intersects with the frustum; it works great, but I am wondering if an octree would be a good investment. I only have max 512 of these objects on the map and they all contain bounding boxes. I am not sure if an octree would make it faster since I have so few objects in the scene.

    Read the article

  • C-states and P-states : confounding factors for benchmarking

    - by Dave
    I was recently looking into a performance issue in the java.util.concurrent (JUC) fork-join pool framework related to particularly long latencies when trying to wake (unpark) threads in the pool. Eventually I tracked the issue down to the power & scaling governor and idle-state policies on x86. Briefly, P-states refer to the set of clock rates (speeds) at which a processor can run. C-states reflect the possible idle states. The deeper the C-state (higher numerical values) the less power the processor will draw, but the longer it takes the processor to respond and exit that sleep state on the next idle to non-idle transition. In some cases the latency can be worse than 100 microseconds. C0 is the normal execution state, and P0 is "full speed", with higher Pn values reflecting reduced clock rates. C-states and P-states are orthogonal, although P-states only have meaning at C0. You could also think of the states as occupying a spectrum as follows: P0, P1, P2, Pn, C1, C2, ... Cn, where all the P-states are at C0. Our fork-join framework was calling unpark() to wake a thread from the pool, and that thread was being dispatched onto a processor at a deep C-state, so we were observing rather impressive latencies between the time of the unpark and the time the thread actually resumed and was able to accept work. (I originally thought we were seeing situations where the wakee was preempting the waker, but that wasn't the case. I'll save that topic for a future blog entry). It's also worth pointing out that higher P-state values draw less power and there's usually some latency in ramping up the clock (P-states) in response to offered load. The issue of C-states and P-states isn't new and has been described at length elsewhere, but it may be new to Java programmers, adding a new confounding factor to benchmarking methodologies and procedures. To get stable results I'd recommend running at C0 and P0, particularly for server-side applications. As appropriate, disabling "turbo" mode may also be prudent. But it also makes sense to run with the system defaults to understand if your application exhibits any performance sensitivity to power management policies. The operating system power management sub-system typically controls the P-states and C-states based on current and recent load. The scaling governor manages P-states. Operating systems often use adaptive policies that try to avoid deep C-states for some period if recent deep idle episodes proved to be very short and futile. This helps make the system more responsive under bursty or otherwise irregular load. But it also means the system is stateful and exhibits a memory effect, which can further complicate benchmarking. Forcing C0 + P0 should avoid this issue.

    Read the article

  • Non use of persisted data

    - by Dave Ballantyne
    Working at a client site, which in itself is good to say, I ran into a set of circumstances that made me ponder, and appreciate, the optimizer engine a bit more. Working on optimizing a stored procedure, I found a piece of code similar to: select BillToAddressID, Rowguid, dbo.udfCleanGuid(rowguid) from sales.salesorderheader where BillToAddressID = 985 A lovely scalar UDF was being used; in actuality it was used as part of the WHERE clause, but it is simplified here. Normally I would use an inline table valued function here, but in this case it wasn't a good option. So this seemed like a pretty good case to use a persisted column to improve performance. The supporting index was already defined as create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid) and the function code is Create Function udfCleanGuid(@GUID uniqueidentifier) returns varchar(255) with schemabinding as begin Declare @RetStr varchar(255) Select @RetStr=CAST(@Guid as varchar(255)) Select @RetStr=REPLACE(@Retstr,'-','') return @RetStr end Executing the Select statement produced a plan with nothing surprising: a seek to find the data and a compute scalar to execute the UDF. Let's get optimizing and remove the UDF with a persisted column: Alter table sales.salesorderheader add CleanedGuid as dbo.udfCleanGuid(rowguid) PERSISTED A subtle change to the SELECT statement… select BillToAddressID,CleanedGuid from sales.salesorderheader where BillToAddressID = 985 and our new optimized plan looks not a lot different from before! We are using persisted data on our table, so where is the lookup to fetch it? It didn't happen; it was recalculated. Looking at the properties of the relevant Compute Scalar would confirm this, but a more graphic example is shown in the profiler SP:StatementCompleted event. Why did the recalculation happen? Remember the index definition: it has included the original guid to avoid the lookup. The optimizer knows this column will be passed into the UDF and run through its logic, and it decided that to recalculate is cheaper than the lookup. That may or may not be the case in actuality; the optimizer has no idea of the real cost of a scalar UDF. IMO the default cost of a scalar UDF should be seen as a lot higher than it is, since the real cost invariably is. Knowing this, how do we avoid the function call? Dropping the guid from the index is not an option, as there may be other code reliant on it. We are left with only one real option: add the persisted column into the index. drop index Sales.SalesOrderHeader.idxBill go create index idxBill on sales.salesorderheader(BillToAddressID) include (rowguid,cleanedguid) Now if we repeat the statement select BillToAddressID,CleanedGuid from sales.salesorderheader where BillToAddressID = 985 we still have a compute scalar operator, but this time it wasn't used to recalculate the persisted data. This can be confirmed with profiler again. The takeaway here is: just because you have persisted data, don't automatically assume that it is being used.

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have the base data as an included column on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below: select BillToAddressID,SOD.SalesOrderDetailID,SOH.CleanedGuid from sales.salesorderheader SOH join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID But, due to a distribution error in statistics, I found it necessary to use a join hint. In this case, I wanted to force a loop join: select BillToAddressID,SOD.SalesOrderDetailID,SOH.CleanedGuid from sales.salesorderheader SOH inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID But, being the diligent TSQL developer that I am, looking at the execution plan I noticed that the 'compute scalar' operator was again calling the function. Again, profiler is a more graphic way to view this… All very odd: just because I've forced a join, which has NOTHING to do with my persisted data, something is causing the data to be re-evaluated. Not sure if there is any easy fix you can do to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • SilverlightShow for November 07 - 13, 2011

    - by Dave Campbell
    Check out the Top Five most popular news at SilverlightShow for November 07 - 13, 2011. Here are the top 5 news on SilverlightShow for last week: Will there be a Silverlight 6 (and does it matter)? Silverlight 5 - the end of the line Microsoft confirms move to align Windows & Windows Phone JavaScript, MVVM and Silverlight - Papa's Perspective How To: Virtualizing WrapPanel .NET Visit and bookmark SilverlightShow. Stay in the 'Light

    Read the article

  • Particle System in XNA - cannot draw particle

    - by Dave Voyles
    I'm trying to implement a simple particle system in my XNA project. I'm going by RB Whitaker's tutorial, and it seems simple enough. I'm trying to draw particles within my menu screen. Below I've included the code which I think is applicable. I'm coming up with one error in my build, and it is stating that I need to create a new instance of the EmitterLocation from the particleEngine. When I hover over particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); it states that particleEngine is returning a null value. What could be causing this? /// <summary> /// Base class for screens that contain a menu of options. The user can /// move up and down to select an entry, or cancel to back out of the screen. /// </summary> abstract class MenuScreen : GameScreen ParticleEngine particleEngine; public void LoadContent(ContentManager content) { if (content == null) { content = new ContentManager(ScreenManager.Game.Services, "Content"); } base.LoadContent(); List<Texture2D> textures = new List<Texture2D>(); textures.Add(content.Load<Texture2D>(@"gfx/circle")); textures.Add(content.Load<Texture2D>(@"gfx/star")); textures.Add(content.Load<Texture2D>(@"gfx/diamond")); particleEngine = new ParticleEngine(textures, new Vector2(400, 240)); } public override void Update(GameTime gameTime, bool otherScreenHasFocus, bool coveredByOtherScreen) { base.Update(gameTime, otherScreenHasFocus, coveredByOtherScreen); // Update each nested MenuEntry object. for (int i = 0; i < menuEntries.Count; i++) { bool isSelected = IsActive && (i == selectedEntry); menuEntries[i].Update(this, isSelected, gameTime); } particleEngine.EmitterLocation = new Vector2(Mouse.GetState().X, Mouse.GetState().Y); particleEngine.Update(); } public override void Draw(GameTime gameTime) { // make sure our entries are in the right place before we draw them UpdateMenuEntryLocations(); GraphicsDevice graphics = ScreenManager.GraphicsDevice; SpriteBatch spriteBatch = ScreenManager.SpriteBatch; SpriteFont font = ScreenManager.Font; spriteBatch.Begin(); // Draw stuff logic spriteBatch.End(); particleEngine.Draw(spriteBatch); }

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I've been looking at parsing TSQL scripts in a variety of ways for a variety of tasks. In my last entry, 'How to prevent Select * : The elegant way', I looked at parsing SQL to report upon uses of SELECT *. The obvious question leading on from this is, "Great, what about other code smells?" Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens, but no real context. I wasn't even being told when an end of statement had been reached. One of the other parsing options available from Microsoft is exposed in the assembly 'Microsoft.SqlServer.TransactSql.ScriptDom', which is, I believe, installed with the client development tools for SQL Server. It is much more feature rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis. So, what sort of smells can I now find using it? Well, for an opening gambit, quite a nice little list: use of NOLOCK; setting of READ UNCOMMITTED; use of SELECT *; insert without column references; explicit datatype conversion on sargs; cross-server selects; non-use of two-part naming convention; table and query hint usage; changes in SET options; use of single-line comments; use of ordinal column positions in the ORDER BY clause. Now, let's not argue the point that "it depends" on whether some of these are smells, but as an academic exercise it is quite interesting. The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples. The zip file contains a PowerShell script and my test cases. The assembly used requires .NET 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK). The code searches for all .sql files in the folder hierarchy for the working path (you can override this if you want by simply changing the $Folder variable) and processes each in turn for the smells. Feedback is not great at the moment; all it does is output to an XML file (Smells.xml) the offset position and a description of the smell found. Right now, I am interested in your feedback. What do you think? Is this (or should it be) more than an academic exercise? Can tooling such as this be used as some form of code quality measure? Does it work? Do you have a case listed above which is not being reported? Do you have a case that you would love to be reported? Let me know, please: [email protected]. Thanks
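    As a contrived illustration (a fragment invented for this listing, not taken from the post or the zip file), a single statement such as the following would trip several of the smells listed above - NOLOCK, SELECT *, a one-part table name, a single-line comment and an ordinal ORDER BY:

        -- single-line comment (itself one of the listed smells)
        SELECT *
        FROM   SalesOrderHeader WITH (NOLOCK)  -- table hint, one-part table name
        ORDER  BY 1;                           -- ordinal column position in ORDER BY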

    Read the article

  • When row estimation goes wrong

    - by Dave Ballantyne
    Whilst working at a client site, I hit upon one of those issues where you are not sure if this is something entirely new, a bug, or a gap in your knowledge. The client had a large query that needed optimizing. The query itself looked pretty good: no UDFs, UNION ALL was used rather than UNION, and most of the predicates were sargable other than one or two minor ones. There were a few extra joins that could be eradicated, and having fixed up the query I then started to dive into the plan. I could see all manner of spills in the hash joins and the sort operations; these are caused when SQL Server has not reserved enough memory and has to write to tempdb. A VERY expensive operation that is generally avoidable. These, however, are a symptom of a bad row estimation somewhere else, and when that bad estimation is combined with other estimation errors, chaos can ensue. Working my way back down the plan, I found the cause, and the more I thought about it the more I became convinced that the optimizer could be making a much more intelligent choice. The first step is to reproduce, and I was able to simplify the query down to a single join between two tables, Product and ProductStatus. From a business point of view this is quite fundamental: find the status of particular products to show if 'active', 'inactive' or whatever. The query itself couldn't be any simpler. The estimated plan looked like this: ignore the "!" warning, which is a missing index, but notice that Products has 27,984 rows and the join outputs 14,000. The actual plan shows how bad that estimation of 14,000 is: every row in Products has a corresponding row in ProductStatus. This is unsurprising; in fact it is guaranteed, as there is a trusted FK relationship between the two columns. There is no way that the actual output of the join can be different from the input. The optimizer is already partly aware of the foreign key metadata, and that can be seen in the simplification stage. If we drop the Description column from the query, the join to ProductStatus is optimized out. It serves no purpose to the query: there is no data required from the table, and the optimizer knows that the FK will guarantee that a matching row will exist, so it has been removed. Surely the same should be applied to the row estimations in the initial example, right? If you think so, please upvote this Connect item. So what are our options in fixing this error? Simply changing the join to a left join will cause the optimizer to think that we could allow the rows not to exist; a subselect would also work. However, this is a client site; I'm not able to change each and every query where this join takes place, but there is a more global switch that will fix this error: trace flag 2301. This is described as, perhaps loosely, "Enable advanced decision support optimizations". We can test this on the original query in isolation by using the QUERYTRACEON option, and lo and behold our estimated plan now has the 'correct' estimation. Many thanks go to Paul White (b|t) for his help and for keeping me sane through this.
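    For reference, a minimal sketch of the trace-flag test described above; the column names and join keys are assumptions, since the post only names the Product and ProductStatus tables and the Description column, but OPTION (QUERYTRACEON 2301) is the documented syntax for enabling a trace flag on a single statement:

        -- Hypothetical repro: the real schema is not shown in the post.
        -- QUERYTRACEON applies trace flag 2301 to this statement only.
        SELECT p.ProductID, ps.Description
        FROM   dbo.Product AS p
        JOIN   dbo.ProductStatus AS ps
               ON ps.ProductStatusID = p.ProductStatusID
        OPTION (QUERYTRACEON 2301);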

    Read the article

  • update-apt-xapian-index uses 100% CPU, even when Update Manager is set to not check for updates

    - by Dave M G
    I have a slightly older laptop running Ubuntu 11.10. It runs fine, but frequently, when I start it up, the CPU monitor in my Gnome Panel shows 100% usage for what can be up to five minutes or so. It seems that the offending process is update-apt-xapian-index, which, if I understand correctly, is the update manager checking for updates. I have gone into the update manager settings, and selected to never check for updates. I'll do that manually when I feel like I have the time to leave the laptop running for that. However, despite my selection, this still happens. Roughly 50% of the time or more, when I start my laptop, it runs update-apt-xapian-index. How can I get the update manager to respect my settings, or at least get this process to stop eating my CPU cycles?

    Read the article

  • SQL Auto Close Options

    - by Dave Noderer
    Found an interesting thing that others have run across, but it is the first time I've seen it. A customer emailed to say that the SQL 2008 db that I had helped him with seemed to be going into recovery mode on a regular basis while he was watching the SQL Management Studio screen. Needless to say he was a bit nervous and about to take some drastic steps. Eventually he found that the Auto Close option was set to true. When this is set to true, the database automatically closes all connections and unlocks the mdf file after 300 milliseconds. When a new connection is made it spins back up… Great for xcopy deployment on a client machine, but not for a multi-user server-based application. So the warning… if you have started a database with SQL Express and then move it to a production SQL Server, make sure you check that the Auto Close option is set to false. The setting is on the Options page of the database properties dialog in Management Studio.
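    As a sketch, the same check and fix can also be done in T-SQL rather than through the options screen; the database name below is just a placeholder:

        -- Check the current setting (1 = Auto Close on, 0 = off).
        SELECT name, is_auto_close_on FROM sys.databases WHERE name = 'MyDatabase';

        -- Turn Auto Close off for a server-hosted database.
        ALTER DATABASE MyDatabase SET AUTO_CLOSE OFF;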

    Read the article

  • SilverlightShow for June 13 - 19, 2011

    - by Dave Campbell
    Check out the Top Five most popular news at SilverlightShow for June 13 - 19, 2011. Here are the top 5 news on SilverlightShow for last week: Panorama "Windows 8" template for Silverlight Premature cries of Silverlight / WPF skill loss. Windows 8 supports all programming models HTML 5 & Silverlight 5 10 Silverlight 5 Demos Recording of recent SilverlightShow webinar 'Blend for Silverlight Developers' now available online Visit and bookmark SilverlightShow. Stay in the 'Light

    Read the article

  • Window manager not loading when gnome-classic starts

    - by Dave M G
    When I boot up or log into gnome-classic (no effects), windows don't have the frame around them that has the minimize and close buttons and all that. After that, there are all sorts of issues - for example, when I open a new window, it's in the very top left corner, obscuring the menu on the gnome panel... just a whole bunch of minor annoyances. I can get things working by loading the Compiz Fusion icon and then selecting to reload the window manager. Of course this is less than ideal. How do I get the window manager to load automatically? Update: It seems that I get no window manager whenever I load Compiz, even after logging in. So it looks like the problem is more generally with Compiz's window manager.

    Read the article

  • Proper policy for user setup

    - by Dave Long
    I am still fairly new to linux hosting and am currently working on some policies for our production ubuntu servers. The servers are public facing webservers with ssh access from the public network and database servers with ssh access from the internal private network. We are a small hosting company so in the past with windows servers we used one user account and one password that each of us used internally. Anyone outside of the company who needed to access the server for FTP or anything else had their own user account. Is that okay to do in the linux world, or would most people recommend using individual accounts for each person who needs to access the server?

    Read the article

  • Why can't I ping the server? VMware set to 'Bridged' loses IP address on 10.04.

    - by Dave
    I have installed a fresh 10.04 server onto a laptop on a home network as a VMware machine and set the network connection to 'Bridged: connect directly to the physical network' from within VMware and rebooted the server. It then loses its IP address. 'dhclient eth0' says "No working leases in persistent database - sleeping". DHCP is working fine on the wi-fi router. The laptop is wired to a wireless router and from there wirelessly to a desktop. Desktop and laptop can ping each other from Windows. I can ping the VM from Windows on the same laptop, but not from the desktop. Strangely, ping has started to resolve hostnames to IPv6 addresses and not IPv4. I don't know whether that's connected. A kick in the right direction would be greatly appreciated. I've been an Ubuntu desktop user for a few years, but I'm new to Ubuntu servers.

    Read the article

  • "Walking" along a rotating surface in LimeJS

    - by Dave Lancea
    I'm trying to have a character walk along a plank (a long, thin rectangle) that works like a seesaw, being rotated around a central point by box2d physics (falling objects). I want the left and right arrow keys to move the player up and down the plank, regardless of its slope, and I don't want to use real physics for the player movement. My idea for achieving this was to compute the coordinate based on the rotation of the plank and the current location "up" or "down" the board. My math is derived from here: http://math.stackexchange.com/questions/143932/calculate-point-given-x-y-angle-and-distance Here's the code I have so far: movement = 0; if(keys[37]){ // Left movement = -3; } if(keys[39]){ // Right movement = 3; } // this.plank is a LimeJS sprite. // getRotation() Should return an angle in degrees var rotation = this.plank.getRotation(); // this.current_plank_location is initialized as 0 this.current_plank_location += movement; var x_difference = this.current_plank_location * Math.cos(rotation); var y_difference = this.current_plank_location * Math.sin(rotation); this.setPosition(seesaw.PLANK_CENTER_X + x_difference, seesaw.PLANK_CENTER_Y + y_difference); This code causes the player to swing around in a circle when they are out of the center of the plank given a slight change in rotation of the plank. Any ideas on how I can get the player position to follow the board position?

    Read the article

  • Best scripting language for project [on hold]

    - by Dave
    This is a subjective question, but I don't know where else to ask it. I'd appreciate it if someone could direct me to an appropriate scripting language for my project. I'm a little new at this so I'd appreciate any help. The project is a website that will display a list of photo subject groups (such as "nature" "people" "sports" etc) on the home page. The photos will all be in subdirectories of the main photo directory (photos) and each subject group will represent a subdirectory in photos. For example in directory photos there might be 3 subdirectories, "nature" "people" "sports" and in each of those subdirectories there will be the actual photos. The idea is that when the website owner wants to update/add/delete a subject group all he has to do is add, delete or update a subdirectory of the photos directory. This means, I think, that I need a scripting language that can read the directories and files in the website and then send a web page with the information in it. What is the simplest and easiest scripting language to do this in? Any ideas? Thanks

    Read the article

  • Performance triage

    - by Dave
    Folks often ask me how to approach a suspected performance issue. My personal strategy is informed by the fact that I work on concurrency issues. (When you have a hammer everything looks like a nail, but I'll try to keep this general). A good starting point is to ask yourself if the observed performance matches your expectations. Expectations might be derived from known system performance limits, prototypes, and other software or environments that are comparable to your particular system-under-test. Some simple comparisons and microbenchmarks can be useful at this stage. It's also useful to write some very simple programs to validate some of the reported or expected system limits. Can that disk controller really tolerate and sustain 500 reads per second? To reduce the number of confounding factors it's better to try to answer that question with a very simple targeted program. And finally, nothing beats having familiarity with the technologies underlying your particular layer. On the topic of confounding factors, as our technology stacks become deeper and less transparent, we often find our own technology working against us in some unexpected way to choke performance rather than simply running into some fundamental system limit. A good example is the warm-up time needed by just-in-time compilers in Java Virtual Machines. I won't delve too far into that particular hole except to say that it's rare to find good benchmarks and methodology for Java code. Another example is power management on x86. Power management is great, but it can take a while for the CPUs to throttle up from low(er) frequencies to full throttle. And while I love "turbo" mode, it makes benchmarking applications with multiple threads a chore, as you have to remember to turn it off and then back on, otherwise short single-threaded runs may look abnormally fast compared to runs with higher thread counts. In general for performance characterization I disable turbo mode and fix the power governor at the "performance" state. Another source of complexity is the scheduler, which I've discussed in prior blog entries. Let's say I have a running application and I want to better understand its behavior and performance. We'll presume it's warmed up, is under load, and is in an execution mode representative of what we think the norm would be. It should be in steady-state, if a steady-state mode even exists. On Solaris the very first thing I'll do is take a set of "pstack" samples. Pstack briefly stops the process and walks each of the stacks, reporting symbolic information (if available) for each frame. For Java, pstack has been augmented to understand Java frames, and even report inlining. A few pstack samples can provide powerful insight into what's actually going on inside the program. You'll be able to see calling patterns, which threads are blocked on what system calls or synchronization constructs, memory allocation, etc. If your code is CPU-bound then you'll get a good sense where the cycles are being spent. (I should caution that normal C/C++ inlining can diffuse an otherwise "hot" method into other methods. This is a rare instance where pstack sampling might not immediately point to the key problem). At this point you'll need to reconcile what you're seeing with pstack and your mental model of what you think the program should be doing. They're often rather different. And generally if there's a key performance issue, you'll spot it with a moderate number of samples.
    I'll also use OS-level observability tools to look for the existence of bottlenecks where threads contend for locks; other situations where threads are blocked; and the distribution of threads over the system. On Solaris some good tools are mpstat and, to a lesser degree, vmstat. Try running "mpstat -a 5" in one window while the application program runs concurrently. One key measure is the voluntary context switch rate "vctx" or "csw", which reflects threads descheduling themselves. It's also good to look at the user, system, and idle CPU percentages. This can give a broad but useful understanding of whether your threads are mostly parked or mostly running. For instance, if your program makes heavy use of malloc/free, then it might be the case that you're contending on the central malloc lock in the default allocator. In that case you'd see malloc calling lock in the stack traces, observe a high csw/vctx rate as threads block for the malloc lock, and your "usr" time would be less than expected. Solaris dtrace is a wonderful and invaluable performance tool as well, but in a sense you have to frame and articulate a meaningful and specific question to get a useful answer, so I tend not to use it for first-order screening of problems. It's also most effective for OS and software-level performance issues as opposed to HW-level issues. For that reason I recommend mpstat & pstack as my 1st step in performance triage. If some other OS-level issue is evident then it's good to switch to dtrace to drill more deeply into the problem. Only after I've ruled out OS-level issues do I switch to using hardware performance counters to look for architectural impediments.

    Read the article
