Search Results

Search found 974 results on 39 pages for 'outer'.

Page 22 of 39

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best – Part 2

    - by pinaldave
    This blog post is part 2 of the earlier article SQL SERVER – Subquery or Join – Various Options – SQL Server Engine Knows the Best by Paulo R. Pereira. Paulo left an excellent comment on the earlier article, once again proving the point that the SQL Server engine is smart enough to figure out the best plan itself and use it for the query. Let us go over his comment as he posted it. “I think IN or EXISTS is the best choice, because there is a little difference between the ‘Merge Join’ of the query with JOIN (Inner Join) and the other options (Left Semi Join), and JOIN can give more results than IN or EXISTS if the relationship is 1:0..N and not 1:0..1. And if I try NOT IN and NOT EXISTS, the query plan is different from LEFT JOIN too (Left Anti Semi Join vs. Left Outer Join + Filter). So, I found a case where EXISTS has a different query plan than IN or ANY/SOME:” USE AdventureWorks GO -- use of SOME SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID = SOME ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA UNION ALL SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA ) -- use of IN SELECT * FROM HumanResources.Employee E WHERE E.EmployeeID IN ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA UNION ALL SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA ) -- use of EXISTS SELECT * FROM HumanResources.Employee E WHERE EXISTS ( SELECT EA.EmployeeID FROM HumanResources.EmployeeAddress EA UNION ALL SELECT EA.EmployeeID FROM HumanResources.EmployeeDepartmentHistory EA ) When we look at the execution plans of the queries listed above, we indeed get different plans, and the SQL Server engine creates the best (least-cost) plan for each query. Click on the image to see larger images. Thanks Paulo for your wonderful contribution. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Contribution, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Dual monitors won't accept new settings

    - by mschulze
    I'm trying to use dual monitors, but every time I try to change the display settings (external monitor on the right of the laptop instead of the left, etc.) my screens go black. My laptop screen then comes back "normally", but my external monitor just flashes different shades of red or black. I cannot interact with anything on my laptop screen even though I can move my mouse around. Even when the monitors attempt to revert, the external monitor continues to flicker instead of reverting. The only way to stop it is to hard-boot, but the new settings aren't saved. Also, when the dual monitors are both working, my laptop screen has two vertical black bars along its sides. It's like Natty just decided I couldn't use the outer inch and a half of my laptop screen. My external monitor doesn't have this problem, and my laptop uses its full screen size when the external monitor is not hooked up. Does anyone know what I'm doing wrong? I have an HP Pavillion dv7 and an HP w2338h external monitor. Thanks!

    Read the article

  • Push or Pull Mobile Coupons?

    - by David Dorf
    Mobile phones allow consumers to receive coupons in context, which increases their relevance and therefore redemption rates. Using your current location, you can get coupons that can be redeemed nearby for the things you want now. Receiving a coupon for something you wanted last week, or something you might buy next month, just isn't as valuable. I previously talked about Placecast and their concept of pushing offers to mobile phones that cross "geo-fences" around points of interest, like store locations. This push model is an automatic reminder that there are good deals just up ahead. This model works well in dense cities where people walk, but I question how effective it will be in the suburbs where people are driving. McDonald's recently ran a campaign in Finland where they pushed offers to GPS devices when cars neared their restaurants. Amazingly, they achieved a 7% click-through rate. But 8coupons.com sees things differently. They prefer the pull model that requires customers to initiate a search for nearby coupons, and they've done some studies to better understand what "nearby" means. It turns out that there are concentric search circles that emanate from your home and work. From inner to outer, people search for food, drink, shopping, and entertainment. Intuitively, that feels about right. So the question is, do consumers prefer the push or pull model for offers? No doubt the market is big enough for both. These days it's not good enough to just know who your customers are -- you also need to know where they are so you can catch them at the right moment. According to Borrell Associates, redemption rates of mobile coupons are 10x those of traditional mail and newspaper coupons. One thing is for sure: assuming 85% of consumers regularly spend money within 5 miles of home and work, location-based coupons make tons of sense.

    Read the article

  • C# XNA: Efficient mesh building algorithm for voxel-based terrain ("top" outside layer only, non-destructible)

    - by Tim Hatch
    To put this bluntly, for non-destructible/non-constructible voxel style terrain, are generated meshes handled much better than instancing? Is there another method to achieve millions of visible quad faces per scene with ease? If generated meshes per chunk is the way to go, what kind of algorithm might I want to use based on only EVER needing the outer layer rendered? I'm using 3D Perlin Noise for terrain generation (for overhangs/caves/etc). The layout is fantastic, but even for around 20k visible faces, it's quite slow using instancing (whether it's one big draw call or multiple smaller chunks). I've simplified it to the point of removing non-visible cubes and only having the top faces of my cube-like terrain be rendered, but with 20k quad instances, it's still pretty sluggish (30fps on my machine). My goal is for the world to be made using quite small cubes. Where multiple games (IE: Minecraft) have the player 1x1 cube in width/length and 2 high, I'm shooting for 6x6 width/length and 9 high. With a lot of advantages as far as gameplay goes, it also means I could quite easily have a single scene with millions of truly visible quads. So, I have been trying to look into changing my method from instancing to mesh generation on a chunk by chunk basis. Do video cards handle this type of processing better than separate quads/cubes through instancing? What kind of existing algorithms should I be looking into? I've seen references to marching cubes a few times now, but I haven't spent much time investigating it since I don't know if it's the better route for my situation or not. I'm also starting to doubt my need of using 3D Perlin noise for terrain generation since I won't want the kind of depth it would seem best at. I just like the idea of overhangs and occasional cave-like structures, but could find no better 'surface only' algorithms to cover that. If anyone has any better suggestions there, feel free to throw them at me too. Thanks, Mythics
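    For reference, a minimal sketch of the per-chunk mesh-generation approach being asked about, assuming XNA 4.0 types and a hypothetical bool[,,] occupancy array for a chunk (the array, the cube size and the class name are illustrative assumptions, not from the question). It emits quads only for top faces whose upward neighbour is empty, so one vertex list (and one draw call) covers a whole chunk instead of one instance per cube:

        // Sketch only: exposed-top-face mesher for a single chunk.
        using System.Collections.Generic;
        using Microsoft.Xna.Framework;
        using Microsoft.Xna.Framework.Graphics;

        static class ChunkMesher
        {
            // solid[x, y, z] is true when that cell contains terrain; cubeSize is the world size of one voxel.
            public static List<VertexPositionNormalTexture> BuildTopFaces(bool[,,] solid, float cubeSize)
            {
                var verts = new List<VertexPositionNormalTexture>();
                int sx = solid.GetLength(0), sy = solid.GetLength(1), sz = solid.GetLength(2);

                for (int x = 0; x < sx; x++)
                    for (int y = 0; y < sy; y++)
                        for (int z = 0; z < sz; z++)
                        {
                            if (!solid[x, y, z]) continue;

                            // Only cells whose upward neighbour is empty contribute geometry.
                            bool exposed = (y == sy - 1) || !solid[x, y + 1, z];
                            if (!exposed) continue;

                            Vector3 o = new Vector3(x, y + 1, z) * cubeSize; // corner of the top face
                            Vector3 up = Vector3.Up;
                            var p0 = new VertexPositionNormalTexture(o, up, new Vector2(0, 0));
                            var p1 = new VertexPositionNormalTexture(o + new Vector3(cubeSize, 0, 0), up, new Vector2(1, 0));
                            var p2 = new VertexPositionNormalTexture(o + new Vector3(cubeSize, 0, cubeSize), up, new Vector2(1, 1));
                            var p3 = new VertexPositionNormalTexture(o + new Vector3(0, 0, cubeSize), up, new Vector2(0, 1));

                            // Two triangles per exposed quad, appended to the chunk's single vertex list.
                            verts.Add(p0); verts.Add(p1); verts.Add(p2);
                            verts.Add(p0); verts.Add(p2); verts.Add(p3);
                        }
                return verts;
            }
        }

    The same neighbour test can be repeated for the other five face directions if overhangs and caves need side and bottom faces; greedy merging of adjacent exposed quads would reduce the vertex count further.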

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #037

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which one of the following is your favorite article from memory lane. 2007 Convert Text to Numbers (Integer) – CAST and CONVERT If a table column is VARCHAR and contains only numeric values, it can be retrieved as an integer using the CAST or CONVERT function. List All Stored Procedure Modified in Last N Days If SQL Server suddenly starts behaving unexpectedly and stored procedures were changed recently, the following script can be used to check recently modified stored procedures. If a stored procedure was created but never modified afterwards, its modified date and created date are the same. Count Duplicate Records – Rows Validate Field For DATE datatype using function ISDATE() We always checked the DATETIME field for incorrect data types. One user entered the date as 30/2/2007. The date was successfully inserted into the temp table, but inserting from the temp table into the final table crashed with an error. We now had the task of validating the incorrect date value before inserting it into the final table. A junior developer asked me how he could do that. We check for incorrect data types (varchar, int, NULL), but this is an incorrect date value. Regular expressions work fine with those because of the mm/dd/yyyy format. 2008 Find Space Used For Any Particular Table It is very simple to find out the space used by any table in the database. Two Convenient Features Inline Assignment – Inline Operations Here is the script which does both – Inline Assignment and Inline Operation: DECLARE @idx INT = 0 SET @idx+=1 SELECT @idx Introduction to SPARSE Columns SPARSE columns are better at managing NULL and ZERO values in SQL Server. They do not take any space in the database at all. If a column is created with the SPARSE clause and it contains ZERO or NULL, it will take less space than a regular column (without the SPARSE clause). SP_CONFIGURE – Displays or Changes Global Configuration Settings If advanced settings are not enabled at the configuration level, SQL Server will not let a user change the advanced features on the server. An authorized user can turn advanced settings on or off. 2009 Standby Servers and Types of Standby Servers A standby server is a type of server that can be brought online when the primary server goes offline and the application needs continuous (high) availability of the server. There is always a need to set up a mechanism by which data and objects from the primary server are moved to the secondary (standby) server. BLOB – Pointer to Image, Image in Database, FILESTREAM Storage When it comes to storing images in a database, there are two common methods. I had previously blogged about the same subject on my visit to Toronto. With SQL Server 2008, we have a new method of FILESTREAM storage. However, the answer on when to use FILESTREAM and when to use other methods is still vague in the community. 2010 Upper Case Shortcut SQL Server Management Studio I select the word and hit CTRL+SHIFT+U, and SSMS immediately changes the case of the selected word. Similarly, if one wants to convert to lower case, another shortcut, CTRL+SHIFT+L, is also available. The Self Join – Inner Join and Outer Join Self Join has always been a noteworthy case. It is interesting to ask questions about self join in a room full of developers.
I often ask – if there are three kinds of joins, i.e. Inner Join, Outer Join and Cross Join, what type of join is Self Join? The usual answer is that it is an Inner Join. However, the reality is very different. Parallelism – Row per Processor – Row per Thread – Thread 0 If you look carefully in the Properties window or the XML plan, there is a “Thread 0”. What does this “Thread 0” indicate? Well, find out from the blog post. How do I Learn and How do I Teach The blog post raised three very interesting questions: How do you learn? How do you teach? What are you learning or teaching? Let me try to answer the same. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 7 of 31 What are Different Types of Locks? What are Pessimistic Lock and Optimistic Lock? When is the use of the UPDATE_STATISTICS command? What is the Difference between a HAVING clause and a WHERE clause? What is Connection Pooling and why is it Used? What are the Properties and Different Types of Sub-Queries? What are the Authentication Modes in SQL Server? How can it be Changed? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 8 of 31 Which Command using Query Analyzer will give you the Version of SQL Server and Operating System? What is an SQL Server Agent? Can a Stored Procedure call itself or a Recursive Stored Procedure? How many levels of SP nesting are possible? What is Log Shipping? Name 3 ways to get an Accurate Count of the Number of Records in a Table? What does it mean to have QUOTED_IDENTIFIER ON? What are the Implications of having it OFF? What is the Difference between a Local and a Global Temporary Table? What is the STUFF Function and How Does it Differ from the REPLACE Function? What is PRIMARY KEY? What is UNIQUE KEY Constraint? What is FOREIGN KEY? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 9 of 31 What is CHECK Constraint? What is NOT NULL Constraint? What is the difference between UNION and UNION ALL? What is B-Tree? How to get @@ERROR and @@ROWCOUNT at the Same Time? What is a Scheduled Job or What is a Scheduled Task? What are the Advantages of Using Stored Procedures? What is a Table Called, if it has neither Cluster nor Non-cluster Index? What is it Used for? Can SQL Server be Linked to other Servers like Oracle? What is BCP? When is it Used? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 10 of 31 What Command do we Use to Rename a db, a Table and a Column? What are sp_configure Commands and SET Commands? How to Implement One-to-One, One-to-Many and Many-to-Many Relationships while Designing Tables? What is the Difference between Commit and Rollback when Used in Transactions? What is an Execution Plan? When would you Use it? How would you View the Execution Plan? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 11 of 31 What is the Difference between Table Aliases and Column Aliases? Do they Affect Performance? What is the difference between CHAR and VARCHAR Datatypes? What is the Difference between VARCHAR and VARCHAR(MAX) Datatypes? What is the Difference between VARCHAR and NVARCHAR datatypes? Which are the Important Points to Note when Multilanguage Data is Stored in a Table? How to Optimize Stored Procedure Optimization? What is SQL Injection? How to Protect Against SQL Injection Attacks? How to Find Out the List of Schema Names and Table Names for the Database? What is the CHECKPOINT Process in SQL Server?
SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Day 12 of 31 How does Using a Separate Hard Drive for Several Database Objects Improve Performance Right Away? How to Find the List of Fixed Hard Drives and Free Space on the Server? Why can there be only one Clustered Index and not more than one? What is the Difference between a Line Feed (\n) and a Carriage Return (\r)? Is It Possible to have a Clustered Index on a Separate Drive From the Original Table Location? What is a Hint? How to Delete Duplicate Rows? Why Does the Trigger Fire Multiple Times in a Single Login? 2012 CTRL+SHIFT+] Shortcut to Select Code Between Two Parentheses The shortcut key is CTRL+SHIFT+]. This key can be very useful when dealing with multiple subqueries, CTEs or a query with multiple parentheses. When exercised, this shortcut key selects the T-SQL code between two parentheses. Monday Morning Puzzle – Query Returns Results Sometimes but Not Always I am a beginner with SQL Server. I have one query; sometimes it returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send you so you can help me solve it? I have no clue; please guide me. Remove Debug Button in SSMS – SQL in Sixty Seconds #020 – Video Effect of Case Sensitive Collation on Resultset Collation is a very interesting concept, but I quite often see it heavily neglected. I have seen developers and DBAs looking for a workaround to fix a collation error rather than understanding the side effects of the workaround. Switch Between Two Parentheses using Shortcut CTRL+] Earlier this week I wrote a blog post about the CTRL+SHIFT+] shortcut to select code between two parentheses, and I received quite a lot of positive feedback from readers. If you are a regular reader of the blog, you must be aware that I appreciate the learning shared by readers. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Cross Apply Ambiguity

    - by Dave Ballantyne
    Cross apply (and outer apply) are a very welcome addition to the T-SQL language. However, today, after a few hours of head scratching, I have found a simple issue which could cause big, big problems. What would you expect from this statement? select * from sys.objects b join sys.objects a on a.object_id = object_id No prizes for guessing: SQL Server errors with “Ambiguous column name 'object_id'”. What would you expect from this statement? Select * from sys.objects a cross apply( Select * from sys.objects b where b.object_id = object_id) as c Surprisingly, perhaps, the result is a cross join of sys.objects. Well, what happened there? If you look at the apply statement, within the where clause only one of the conditions is qualified with a table name. This means it is interpreted as “b.object_id = b.object_id”, leaving the cross apply with no join to the parent sys.objects table and causing the cross join. The fix is, obviously, simple: Select * from sys.objects a cross apply( Select * from sys.objects b where b.object_id = a.object_id) as c So why no “Ambiguous column name” error? I’ve raised a Connect item on this issue here.

    Read the article

  • Deferred rendering with both Clockwise and CounterClockwise culling

    - by user1423893
    I have a deferred rendering system that works well with objects that appear solid and are drawn using CounterClockwise culling. I have a problem with Clockwise-culled objects that are supposed to represent hollow objects and display their inside faces only. The image below shows a CounterClockwise-culled object (left) and a Clockwise-culled object (right). The Clockwise-culled object's faces display what would be displayed on the CounterClockwise faces. How can I get the lighting to light the inner faces for Clockwise-culled objects and continue lighting the outer CounterClockwise faces as normal? My lighting method is below: private void DeferredLighting(GameTime gameTime) { // Set the render target for the lights game.GraphicsDevice.SetRenderTarget(lightMap); // Clear the render target to (0, 0, 0, 0) game.GraphicsDevice.Clear(Color.Transparent); // Set the render states game.GraphicsDevice.BlendState = BlendState.Additive; game.GraphicsDevice.DepthStencilState = DepthStencilState.None; game.GraphicsDevice.RasterizerState = RasterizerState.CullCounterClockwise; // Set sampler state to Point as the Surface type requires it in XNA 4.0 game.GraphicsDevice.SamplerStates[0] = SamplerState.PointClamp; // Set the camera properties for all lights BaseLight.SetCameraProperties(game.ActiveCamera); // Draw the lights int numLights = lights.Count; for (int i = 0; i < numLights; ++i) { if (lights[i].Diffuse.W > 0f) { lights[i].Render(gameTime, ref normalMap, ref depthMap, ref sgrMap); } } // Resolve the render target game.GraphicsDevice.SetRenderTarget(null); } I have tried adjusting the render states, but no combination works for both objects.

    Read the article

  • JavaScript Class Patterns Revisited: Endgame

    - by Liam McLennan
    I recently described some of the patterns used to simulate classes (types) in JavaScript. But I missed the best pattern of them all. I described a pattern I called constructor function with a prototype that looks like this: function Person(name, age) { this.name = name; this.age = age; } Person.prototype = { toString: function() { return this.name + " is " + this.age + " years old."; } }; var john = new Person("John Galt", 50); console.log(john.toString()); and I mentioned that the problem with this pattern is that it does not provide any encapsulation, that is, it does not allow private variables. Jan Van Ryswyck recently posted the solution, obvious in hindsight, of wrapping the constructor function in another function, thereby allowing private variables through closure. The above example becomes: var Person = (function() { // private variables go here var name,age; function constructor(n, a) { name = n; age = a; } constructor.prototype = { toString: function() { return name + " is " + age + " years old."; } }; return constructor; })(); var john = new Person("John Galt", 50); console.log(john.toString()); Now we have prototypal inheritance and encapsulation. The important thing to understand is that the constructor, and the toString function both have access to the name and age private variables because they are in an outer scope and they become part of the closure.

    Read the article

  • How can I prevent seams from showing up on objects using lower mipmap levels?

    - by Shivan Dragon
    Disclaimer: kindly right click on the images and open them separately so that they're at full size, as there are fine details which don't show up otherwise. Thank you. I made a simple Blender model, it's a cylinder with the top cap removed. I've exported the UVs, then imported them into Photoshop, and painted the inner area in yellow and the outer area in red. I made sure I cover the UV lines well. I then save the image and load it as a texture on the model in Blender. Actually, I just reload it as the image where the UVs are exported, and change the viewport view mode to textured. When I look at the mesh up close, there's yellow everywhere, everything seems fine. However, if I start zooming out, I start seeing red (literally and metaphorically) where the texture edges are. And the more I zoom, the more I see it. The same thing happens in Unity, though the effect seems less pronounced. Up close it is fine and yellow; zoom out and you see red at the seams. Now, obviously, for this simple example a workaround is to spread the yellow well outside the UV margins, and it's fine from all distances. However this is an issue when you try making a complex texture that should tile seamlessly at the edges. In this situation I either make a few lines of pixels overlap (in which case it looks bad from up close and ok from far away), or I leave them seamless and then I have those seams when seeing it from far away. So my question is, is there something I'm missing, or some extra thing I must do to have my texture look seamless from all distances?

    Read the article

  • Limitations of the SharePoint join using CAML

    - by ybbest
    Limitation One In SharePoint 2010, you can join the primary list to a foreign list and include more than one field from the foreign list. However, the limitation is that the included fields from the foreign list have to be one of the following types: Calculated (treated as plain text), ContentTypeId, Counter, Currency, DateTime, Guid, Integer, Note (one-line only), Number, Text. The above limitation also explains why you cannot include some types of fields from the remote list when creating a lookup. Limitation Two When using a CAML query to join SharePoint lists, there can be joins to multiple lists, multiple joins to the same list, and chains of joins. However, the limitations are that only inner and left outer joins are permitted, and the field in the primary list must be a Lookup type field that looks up to the field in the foreign list. Limitation Three The support for writing the JOIN query in CAML is very limited. I have to hand-code the CAML query to join the lists, which is not fun at all. Some blog posts mention using LINQ to SharePoint to get the CAML code from there, but I never got it to work. You can check this blog post for that, and let me know if it works for you. (A sketch of such a hand-coded join follows below.) References: http://msdn.microsoft.com/en-us/library/ee535502.aspx http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.joins.aspx
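    As a rough illustration of such a hand-coded join through the server object model (SPQuery.Joins and SPQuery.ProjectedFields), here is a minimal sketch. The site URL, the "Orders" and "Customers" lists, and the "Customer"/"City" fields are hypothetical names invented for the example, and the CAML shape is only a sketch of the approach described in the MSDN references above:

        // Sketch only – list, field and URL names are hypothetical.
        using System;
        using Microsoft.SharePoint;

        class JoinDemo
        {
            static void Main()
            {
                using (SPSite site = new SPSite("http://server/sites/demo"))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList orders = web.Lists["Orders"];
                    SPQuery query = new SPQuery();

                    // Only INNER and LEFT joins are permitted, and the join must go through
                    // a Lookup field ("Customer") in the primary list (Limitation Two).
                    query.Joins =
                        "<Join Type='LEFT' ListAlias='Customers'>" +
                        "<Eq>" +
                        "<FieldRef Name='Customer' RefType='Id' />" +
                        "<FieldRef List='Customers' Name='ID' />" +
                        "</Eq>" +
                        "</Join>";

                    // Foreign-list columns must be projected explicitly and are restricted
                    // to the field types listed under Limitation One.
                    query.ProjectedFields =
                        "<Field Name='CustomerCity' Type='Lookup' List='Customers' ShowField='City' />";
                    query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='CustomerCity' />";

                    foreach (SPListItem item in orders.GetItems(query))
                    {
                        Console.WriteLine("{0} - {1}", item.Title, item["CustomerCity"]);
                    }
                }
            }
        }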

    Read the article

  • Database Backup History From MSDB in a pivot table

    - by steveh99999
    I knocked up a nice little query to display backup history for each database in a pivot table format. I wanted to display the most recent full, differential, and transaction log backup for each database. Here's the SQL: WITH backupCTE AS (SELECT name, recovery_model_desc, d AS 'Last Full Backup', i AS 'Last Differential Backup', l AS 'Last Tlog Backup' FROM ( SELECT db.name, db.recovery_model_desc,type, backup_finish_date FROM master.sys.databases db LEFT OUTER JOIN msdb.dbo.backupset a ON a.database_name = db.name WHERE db.state_desc = 'ONLINE' ) AS Sourcetable PIVOT (MAX (backup_finish_date) FOR type IN (D,I,L) ) AS MostRecentBackup ) SELECT * FROM backupCTE This gives output such as the screenshot in the original article. With this query, I can then build up some straightforward queries to ensure backups are scheduled and running as expected. For example, the following logic can be used: WHERE [Last Full Backup] IS NULL – i.e. the database has never been backed up; WHERE [Last Tlog Backup] < DATEDIFF(mm,GETDATE(),-60) AND recovery_model_desc <> 'SIMPLE' – the transaction log has not been backed up in the last 60 minutes; WHERE [Last Full Backup] < DATEDIFF(dd,GETDATE(),-1) AND [Last Differential Backup] < [Last Full Backup] – no backup in the last day; WHERE [Last Differential Backup] < DATEDIFF(dd,GETDATE(),-1) AND [Last Full Backup] < DATEDIFF(dd,GETDATE(),-8) – no differential backup in the last day when the last full backup is over 8 days old.

    Read the article

  • Which style of member access is preferable?

    - by itwasntpete
    The purpose of OOP using classes is to encapsulate members from the outside. I always read that accessing members should be done through methods. For example: template<typename T> class foo_1 { T state_; public: // following below }; The most common approach taught by my professor was to have a get and a set method. // variant 1 T const& getState() { return state_; } void setState(T const& v) { state_ = v; } or like this: // variant 2 // in my opinion it is easier to read T const& state() { return state_; } void state(T const& v) { state_ = v; } Assume state_ is a variable which is checked periodically, and there is no need to ensure that the value (state) is consistent. Is there any disadvantage to accessing the state by reference? For example: // variant 3 // do it by reference T& state() { return state_; } or even directly, if I declare the variable as public: template<typename T> class foo { public: // variant 4 T state; }; In variant 4 I could even ensure consistency by using C++11 atomics. So my question is: which one should I prefer? Is there any coding standard which would reject one of these patterns? For some code see here

    Read the article

  • BusEnum2 and a Minor Bug Fix

    - by Kate Moss' Open Space
    The default root bus driver, BusEnum, enumerates and activates drivers one by one in a synchronized manner. Not only does this slow the boot time, but if any driver's init function (XXX_init) hangs, the whole system won't boot at all. There is a sample of an enhanced root bus driver, BusEnum2, at http://msdn.microsoft.com/en-us/library/dd187254.aspx The page provides the sample code and a detailed explanation of the design concept. With the multi-threaded BusEnum2 on a CE7 system with SMP enabled, the scalability benefit is even more significant, since you have more than one processor and it can load drivers in parallel! Everything looks good so far, except that there is a small bug in the sample code. Fortunately, it is easy to fix, but hard to trace if you ever encounter it! The BUSENUM2 flag is only defined in BUSENUM2\BUSDEF\sources but not in BUSENUM2\BUSENUM\sources. The DeviceFolder is implemented in BUSENUM2\BUSDEF but the instance is created in BUSENUM2\BUSENUM\busenum.cpp, so the result is that it allocates less memory than actually needed. Add CDEFINES=$(CDEFINES) -DBUSENUM2 into BUSENUM2\BUSENUM\sources and the problem is fixed!

    Read the article

  • Control convention for circular movement?

    - by Christian
    I'm currently doing a kind of training project in Unity (still a beginner). It's supposed to be somewhat like Breakout, but instead of just going left and right I want the paddle to circle around the center point. This is all fine and dandy, but the problem I have is: how do you control this with a keyboard or gamepad? For touch and mouse control I could work around the problem by letting the paddle follow the cursor/finger, but with the other control methods I'm a bit stumped. With a keyboard for example, I could either make it so that the Left arrow always moves the paddle clockwise (it starts at the bottom of the circle), or I could link it to the actual direction - meaning that if the paddle is at the bottom, it goes left and up along the circle or, if it's in the upper hemisphere, it moves left and down, both times toward the outer left point of the circle. Both feel kind of weird. With the first one, it can be counter intuitive to press Left to move the paddle right when it's in the upper area, while in the second method you'd need to constantly switch buttons to keep moving. So, long story short: is there any kind of existing standard, convention or accepted example for this type of movement and the corresponding controls? I didn't really know what to google for (control conventions for circular movement was one of the searches I tried, but it didn't give me much), and I also didn't really find anything about this on here. If there is a Question that I simply didn't see, please excuse the duplicate.
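    For reference, here is a minimal sketch of the first scheme described above (Left/Right always move the paddle counter-clockwise/clockwise around the centre, regardless of where it currently sits), assuming Unity's standard Input and Transform APIs; the field names and tuning values are made up for illustration:

        // Sketch only: keyboard/gamepad-driven paddle orbiting a fixed center point.
        using UnityEngine;

        public class OrbitingPaddle : MonoBehaviour
        {
            public Transform center;            // pivot the paddle orbits
            public float radius = 4f;           // distance from the center
            public float degreesPerSecond = 180f;
            private float angle = -90f;         // start at the bottom of the circle

            void Update()
            {
                // Scheme 1: the Horizontal axis always maps to the same rotation direction.
                float input = Input.GetAxis("Horizontal");
                angle -= input * degreesPerSecond * Time.deltaTime;

                float rad = angle * Mathf.Deg2Rad;
                Vector3 offset = new Vector3(Mathf.Cos(rad), Mathf.Sin(rad), 0f) * radius;
                transform.position = center.position + offset;

                // Keep the paddle's local "up" pointing at the center so it stays tangent to the circle.
                transform.up = (center.position - transform.position).normalized;
            }
        }

    The second, direction-relative scheme could instead project the raw input direction onto the paddle's current tangent (for example via a dot product) and use the sign of that projection to decide which way around the circle to move.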

    Read the article

  • "Programming error" exceptions - Is my approach sound?

    - by Medo42
    I am currently trying to improve my use of exceptions, and found the important distinction between exceptions that signify programming errors (e.g. someone passed null as an argument, or called a method on an object after it was disposed) and those that signify a failure in the operation that is not the caller's fault (e.g. an I/O exception). As far as I understand, it makes little sense for an immediate caller to actually handle programming error exceptions; the caller should instead ensure that the preconditions are met. Only "outer" exception handlers at task boundaries should catch them, so they can keep the system running if a task fails. In order to ensure that client code can cleanly catch "failure" exceptions without catching error exceptions by mistake, I create my own exception classes for all failure exceptions now, and document them in the methods that throw them. I would make them checked exceptions in Java. Now I have a few questions: Before, I tried to document all exceptions that a method could throw, but that sometimes creates an unwieldy list that needs to be documented in every method up the call chain until you can show that the error won't happen. Instead, I document the preconditions in the summary / parameter descriptions and don't even mention what happens if they are not met. The idea is that people should not try to catch these exceptions explicitly anyway, so there is no need to document their types. Would you agree that this is enough? Going further, do you think all preconditions even need to be documented for every method? For example, calling methods on IDisposable objects after calling Dispose is an error, but since IDisposable is such a widely used interface, can I just assume a programmer will know this? A similar case is with reference type parameters where passing null makes no conceivable sense: should I document "non-null" anyway? IMO, documentation should only cover things that are not obvious, but I am not sure where "obvious" ends.
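    To make the distinction concrete, here is a small illustrative sketch (the type, method and file names are invented, not from the question): the null check guards a precondition and throws an "error" exception the caller is expected to fix rather than catch, while the I/O failure is the kind of exception an outer handler at a task boundary deals with:

        // Sketch only: programming-error exception vs. operational-failure exception.
        using System;
        using System.IO;

        public static class ReportLoader
        {
            public static string Load(string path)
            {
                if (path == null)
                    throw new ArgumentNullException(nameof(path)); // programming error: fix the caller

                return File.ReadAllText(path); // may throw IOException: a genuine failure
            }
        }

        public static class TaskBoundary
        {
            public static void Run()
            {
                try
                {
                    Console.WriteLine(ReportLoader.Load("report.txt"));
                }
                catch (IOException ex) // handle only the failure case; let error exceptions surface
                {
                    Console.WriteLine("Could not read the report: " + ex.Message);
                }
            }
        }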

    Read the article

  • Are there legitimate reasons for returning exception objects instead of throwing them?

    - by stakx
    This question is intended to apply to any OO programming language that supports exception handling; I am using C# for illustrative purposes only. Exceptions are usually intended to be raised when a problem arises that the code cannot immediately handle, and then to be caught in a catch clause in a different location (usually an outer stack frame). Q: Are there any legitimate situations where exceptions are not thrown and caught, but simply returned from a method and then passed around as error objects? This question came up for me because .NET 4's System.IObserver<T>.OnError method suggests just that: exceptions being passed around as error objects. Let's look at another scenario, validation. Let's say I am following conventional wisdom, and that I am therefore distinguishing between an error object type IValidationError and a separate exception type ValidationException that is used to report unexpected errors: partial interface IValidationError { } abstract partial class ValidationException : System.Exception { public abstract IValidationError[] ValidationErrors { get; } } (The System.ComponentModel.DataAnnotations namespace does something quite similar.) These types could be employed as follows: partial interface IFoo { } // an immutable type partial interface IFooBuilder // mutable counterpart to prepare instances of above type { bool IsValid(out IValidationError[] validationErrors); // true if no validation error occurs IFoo Build(); // throws ValidationException if !IsValid(…) } Now I am wondering, could I not simplify the above to this: partial class ValidationError : System.Exception { } // = IValidationError + ValidationException partial interface IFoo { } // (unchanged) partial interface IFooBuilder { bool IsValid(out ValidationError[] validationErrors); IFoo Build(); // may throw ValidationError or sth. like AggregateException<ValidationError> } Q: What are the advantages and disadvantages of these two differing approaches?
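    For the IObserver<T> case mentioned above, this is roughly what "exceptions as error objects" looks like in practice; the observer and producer classes below are invented for illustration, the point being that the exception is constructed and handed to OnError as data rather than thrown:

        // Sketch only: an exception passed as an argument instead of being thrown.
        using System;

        class LoggingObserver : IObserver<int>
        {
            public void OnNext(int value) { Console.WriteLine("Got " + value); }

            // The exception arrives as an argument – an error object – not via a throw/catch pair.
            public void OnError(Exception error) { Console.WriteLine("Sequence failed: " + error.Message); }

            public void OnCompleted() { Console.WriteLine("Done"); }
        }

        class Producer
        {
            public static void Run(IObserver<int> observer)
            {
                observer.OnNext(1);
                // Signal failure by passing the exception instead of throwing it.
                observer.OnError(new InvalidOperationException("source went away"));
            }
        }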

    Read the article

  • Warning and error information in stored procedures revisited

    - by user13334359
    Originally, the way to handle warnings and errors in a MySQL stored routine was designed as follows: if a warning was generated during execution of a stored routine which has a handler for such a warning/error, MySQL remembered the handler, ignored the warning and continued execution; after the routine was executed, MySQL checked if there was a remembered handler and activated it, if any. This logic was not ideal and caused several problems, in particular: it was not possible to choose the right handler for an instruction which generated several warnings or errors, because only the first one was chosen; handling conditions in the current scope interfered with conditions in a different scope; and there were no generated warnings/errors in the Diagnostics Area, which is against the SQL Standard. The first try to fix this was done in version 5.5. The patch left the Diagnostics Area intact after stored routine execution, but cleared it at the beginning of each statement which can generate warnings or works with tables. The Diagnostics Area was checked after stored routine execution. This patch solved the issue with the order of condition handlers, but led to new issues. The most popular was that an outer stored routine could see warnings which should already have been handled by a handler inside an inner stored routine, even though the latter has a handler. I even had to write a blog post about it. And now I am happy to announce this behaviour has changed a third time. Since version 5.6, the Diagnostics Area is cleared after an instruction leaves its handler. This means only one handler sees the condition it is supposed to process, and in the proper order. All past problems are solved. I am happy that my old blog post describing the weird behaviour in version 5.5 is not true any more.

    Read the article

  • AngularJs ng-cloak Problems on large Pages

    - by Rick Strahl
    I’ve been working on a rather complex and large Angular page. Unlike a typical AngularJs SPA style ‘application’ this particular page is just that: a single page with a large amount of data on it that has to be visible all at once. The problem is that when this large page loads it flickers and displays template markup briefly before kicking into its actual content rendering. This is is what the Angular ng-cloak is supposed to address, but in this case I had no luck getting it to work properly. This application is a shop floor app where workers need to see all related information in one big screen view, so some of the benefits of Angular’s routing and view swapping features couldn’t be applied. Instead, we decided to have one very big view but lots of ng-controllers and directives to break out the logic for code separation. For code separation this works great – there are a number of small controllers that deal with their own individual and isolated application concerns. For HTML separation we used partial ASP.NET MVC Razor Views which made breaking out the HTML into manageable pieces super easy and made migration of this page from a previous server side Razor page much easier. We were also able to leverage most of our server side localization without a lot of  changes as a bonus. But as a result of this choice the initial HTML document that loads is rather large – even without any data loaded into it, resulting in a fairly large DOM tree that Angular must manage. Large Page and Angular Startup The problem on this particular page is that there’s quite a bit of markup – 35k’s worth of markup without any data loaded, in fact. It’s a large HTML page with a complex DOM tree. There are quite a lot of Angular {{ }} markup expressions in the document. Angular provides the ng-cloak directive to try and hide the element it cloaks so that you don’t see the flash of these markup expressions when the page initially loads before Angular has a chance to render the data into the markup expressions.<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" ng-cloak> Note the ng-cloak attribute on this element, which here is an outer wrapper element of the most of this large page’s content. ng-cloak is supposed to prevent displaying the content below it, until Angular has taken control and is ready to render the data into the templates. Alas, with this large page the end result unfortunately is a brief flicker of un-rendered markup which looks like this: It’s brief, but plenty ugly – right?  And depending on the speed of the machine this flash gets more noticeable with slow machines that take longer to process the initial HTML DOM. ng-cloak Styles ng-cloak works by temporarily hiding the marked up element and it does this by essentially applying a style that does this:[ng\:cloak], [ng-cloak], [data-ng-cloak], [x-ng-cloak], .ng-cloak, .x-ng-cloak { display: none !important; } This style is inlined as part of AngularJs itself. 
If you look at the angular.js source file you’ll find this at the very end of the file:!angular.$$csp() && angular.element(document) .find('head') .prepend('<style type="text/css">@charset "UTF-8";[ng\\:cloak],[ng-cloak],' + '[data-ng-cloak],[x-ng-cloak],.ng-cloak,.x-ng-cloak,' + '.ng-hide{display:none !important;}ng\\:form{display:block;}' '.ng-animate-block-transitions{transition:0s all!important;-webkit-transition:0s all!important;}' + '</style>'); This is meant to initially hide any elements that contain the ng-cloak attribute or one of the other Angular directive markup permutations. Unfortunately on this particular web page ng-cloak had no effect – I still see the flicker. Why doesn’t ng-cloak work? The problem is of course – timing. The problem is that Angular actually needs to get control of the page before it ever starts doing anything like processing even the ng-cloak attribute (or style etc). Because this page is rather large (about 35k of non-data HTML) it takes a while for the DOM to actually plow through the HTML. With the Angular <script> tag defined at the bottom of the page after the HTML DOM content there’s a slight delay which causes the flicker. For smaller pages the initial DOM load/parse cycle is so fast that the markup never shows, but with larger content pages it may show and become an annoying problem. Workarounds There are a number of simple ways around this issue and some of them are hinted at in the Angular documentation. Load Angular Sooner One obvious thing that would help with this is to load Angular at the top of the page BEFORE the DOM loads and that would give it much earlier control. The old ng-cloak documentation actually recommended putting the Angular.js script into the header of the page (apparently this was recently removed), but generally it’s not a good practice to load scripts in the header for page load performance. This is especially true if you load other libraries like jQuery which should be loaded prior to loading Angular so it can use jQuery rather than its own jqLite subset. This is not something I normally would like to do and also something that I’d likely forget in the future and end up right back here :-). Use ng-include for Child Content Angular supports nesting of child templates via the ng-include directive which essentially delay-loads HTML content. This helps by removing a lot of the template content from the main page and so getting control to Angular a lot sooner in order to hide the markup template content. In the application in question, I realize that in hindsight it might have been smarter to break this page out with client side ng-include directives instead of the MVC Razor partial views we used to break up the page sections. Razor partial views give that nice separation as well, but in the end Razor puts humpty dumpty (ie. the HTML) back together into a whole single and rather large HTML document. Razor provides the logical separation, but still results in a large physical result document. But Razor also ended up being helpful for having a few security-related blocks handled via server side template logic that simply excludes certain parts of the UI the user is not allowed to see – something that you can’t really do with client side exclusion like ng-hide/ng-show – client side content is always there whereas on the server side you can simply not send it to the client. Another reason I’m not a huge fan of ng-include is that it adds another HTTP hit to a request as templates are loaded from the server dynamically as needed.
Given that this page was already heavy with resources adding another 10 separate ng-include directives wouldn’t be beneficial :-) ng-include is a valid option if you start from scratch and partition your logic. Of course if you don’t have complex pages, having completely separate views that are swapped in as they are accessed are even better, but we didn’t have this option due to the information having to be on screen all at once. Avoid using {{ }}  Expressions The biggest issue that ng-cloak attempts to address isn’t so much displaying the original content – it’s displaying empty {{ }} markup expression tags that get embedded into content. It gives you the dreaded “now you see it, now you don’t” effect where you sometimes see three separate rendering states: Markup junk, empty views, then views filled with data. If we can remove {{ }} expressions from the page you remove most of the perceived double draw effect as you would effectively start with a blank form and go straight to a filled form. To do this you can forego {{ }}  expressions and replace them with ng-bind directives on DOM elements. For example you can turn:<div class="list-item-name listViewOrderNo"> <a href='#'>{{lineItem.MpsOrderNo}}</a> </div>into:<div class="list-item-name listViewOrderNo"> <a href="#" ng-bind="lineItem.MpsOrderNo"></a> </div> to get identical results but because the {{ }}  expression has been removed there’s no double draw effect for this element. Again, not a great solution. The {{ }} syntax sure reads cleaner and is more fluent to type IMHO. In some cases you may also not have an outer element to attach ng-bind to which then requires you to artificially inject DOM elements into the page. This is especially painful if you have several consecutive values like {{Firstname}} {{Lastname}} for example. It’s an option though especially if you think of this issue up front and you don’t have a ton of expressions to deal with. Add the ng-cloak Styles manually You can also explicitly define the .css styles that Angular injects via code manually in your application’s style sheet. By doing so the styles become immediately available and so are applied right when the page loads – no flicker. I use the minimal:[ng-cloak] { display: none !important; } which works for:<div id="mainContainer" class="mainContainer dialog boxshadow" ng-app="app" ng-cloak> If you use one of the other combinations add the other CSS selectors as well or use the full style shown earlier. Angular will still load its version of the ng-cloak styling but it overrides those settings later, but this will do the trick of hiding the content before that CSS is injected into the page. Adding the CSS in your own style sheet works well, and is IMHO by far the best option. The nuclear option: Hiding the Content manually Using the explicit CSS is the best choice, so the following shouldn’t ever be necessary. But I’ll mention it here as it gives some insight how you can hide/show content manually on load for other frameworks or in your own markup based templates. Before I figured out that I could explicitly embed the CSS style into the page, I had tried to figure out why ng-cloak wasn’t doing its job. After wasting an hour getting nowhere I finally decided to just manually hide and show the container. The idea is simple – initially hide the container, then show it once Angular has done its initial processing and removal of the template markup from the page. You can manually hide the content and make it visible after Angular has gotten control. 
To do this I used:<div id="mainContainer" class="mainContainer boxshadow" ng-app="app" style="display:none"> Notice the display: none style that explicitly hides the element initially on the page. Then once Angular has run its initialization and effectively processed the template markup on the page you can show the content. For Angular this ‘ready’ event is the app.run() function:app.run( function ($rootScope, $location, cellService) { $("#mainContainer").show(); … }); This effectively removes the display:none style and the content displays. By the time app.run() fires the DOM is ready to be displayed with filled data or at least empty data – Angular has gotten control. Edge Case Clearly this is an edge case. In general the initial HTML pages tend to be reasonably sized and the load time for the HTML and Angular are fast enough that there’s no flicker between the rendering times. This only becomes an issue as the initial pages get rather large. Regardless – if you have an Angular application it’s probably a good idea to add the CSS style into your application’s CSS (or a common shared one) just to make sure that content is always hidden. You never know how slow of a browser somebody might be running and while your super fast dev machine might not show any flicker, grandma’s old XP box very well might… © Rick Strahl, West Wind Technologies, 2005-2014. Posted in Angular, JavaScript, CSS, HTML

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }
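
What the excerpt never shows is the Record class, the JoinType enum, or the read helper the join methods call throughout. Below is a minimal sketch of what those supporting pieces could look like; the tab-delimited "key<TAB>data" line layout and string keys are assumptions, not the author's actual classes (the first, RandomAccessFile-based version evidently compared keys as numbers).

    import java.io.BufferedReader;
    import java.io.IOException;

    // Assumed join types referenced by the buffered-file version of join().
    enum JoinType {
        InnerJoin, LeftOuterJoin, RightOuterJoin, FullOuterJoin,
        LeftExclusiveJoin, RightExclusiveJoin, FullExclusiveJoin
    }

    // Assumed record layout: one record per line, key and data separated by a tab.
    class Record {
        private final String key;
        private final String data;

        Record(String key, String data) { this.key = key; this.data = data; }

        String getKey()  { return key; }
        String getData() { return data; }

        // One line per record, so dumpBufferToFile() output can be re-read with read().
        String dump()    { return key + "\t" + data + "\n"; }
    }

    class Joiner {
        // Reads the next record from a sorted input, or returns null at end of file.
        private Record read(BufferedReader reader) throws IOException {
            String line = reader.readLine();
            if (line == null) {
                return null;
            }
            int tab = line.indexOf('\t');
            if (tab < 0) {
                return new Record(line, ""); // no data portion on this line
            }
            return new Record(line.substring(0, tab), line.substring(tab + 1));
        }
    }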

    Read the article

  • Singleton by Jon Skeet clarification

    - by amutha
    public sealed class Singleton { Singleton() { } public static Singleton Instance { get { return Nested.instance; } } class Nested { // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit static Nested() { } internal static readonly Singleton instance = new Singleton(); } } I wish to implement Jon Skeet's Singleton pattern in my current C# application. I have two doubts about this code: 1) How is it possible to access the outer class inside the nested class? I mean internal static readonly Singleton instance = new Singleton(); Is this what is called a closure? 2) I did not understand this comment: // Explicit static constructor to tell C# compiler // not to mark type as beforefieldinit What does this comment suggest to us?
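
    On the first doubt: Nested is an ordinary member of the Singleton type, so it can see Singleton's members (including its private constructor) and construct an instance; no closure is involved. On the second doubt, the explicit static constructor stops the C# compiler from marking Nested as beforefieldinit, which keeps the initialization lazy: the instance is only created on the first read of Instance. The same holder idea exists in Java as the initialization-on-demand holder idiom; the sketch below is an illustrative Java analogue of the pattern, not the author's C# code.

        // Initialization-on-demand holder idiom: a Java analogue of the nested-class
        // singleton above. Holder is a member of Singleton, so it may call the private
        // constructor; the JVM initializes Holder (and creates INSTANCE) only on the
        // first call to getInstance(), which is what makes the initialization lazy.
        public final class Singleton {
            private Singleton() { }

            public static Singleton getInstance() {
                return Holder.INSTANCE;
            }

            private static class Holder {
                static final Singleton INSTANCE = new Singleton();
            }
        }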

    Read the article

  • Set JavaScript events in WebBrowser control

    - by Neir0
    Hi, I have a WebBrowser control and I am trying to set the onclick and href attributes on all links. foreach (HtmlElement link in webBrowser1.Document.Links) { link.SetAttribute("href", "http://www.google.com"); link.SetAttribute("onclick", "return false;"); } It works well. When I output the source code of the outer HTML, I can see that the attributes exist. But the JavaScript code does not work. Why, and how can I force the WebBrowser control to execute the JavaScript code?

    Read the article

  • Vista/7 UAC: how to lower process privileges

    - by Sergey
    Hi all, Is it possible for a process to lower itself from an elevated UAC permission back to a standard user? If not, can the elevated process launch a copy of itself with a standard user token and then kill itself? Any code examples (C# preferred)? Details: Problem: - the user installs my product (written in C#) - the installer elevates its UAC permission to admin - at the end the installer launches my exe - the exe inherits the elevated admin permissions - the exe mounts network drives, which become invisible in Windows Explorer (which runs with regular permissions). Options I considered: 1) break the installer into an outer exe and an inner exe that runs with elevated permissions; the installer consists of 1000+ lines of NSIS code and I don't know anything about NSIS 2) mount the drives with lower permissions; if I do this, Windows Explorer can see the drives but my exe cannot 3) set the EnableLinkedConnections registry option to 1; this is a no-go because it requires a PC reboot during the installation. Please help! Sergey

    Read the article

  • SSRS 2005 Matrix and border styles when exporting to XLS

    - by Mufasa
    The Matrix in SSRS (SQL Server Reporting Services 2005) seems to have issues with certain border styles when exporting to XLS (but not to PDF or the web view; maybe other formats too, I'm not sure). For example: create a matrix, set the Matrix border style to Black Solid 1px, and set all 4 cells to a border style of Black None 1px. When viewed via the ASP.NET control, it looks correct. But after export to XLS, it creates borders around all of the header cells (the column and row headers, and the top-left cell), and even around the right-most data column. All the cells in the middle of the report correctly have no border set. Update: if the Matrix borders are set to None, then the borders on the cells don't show up in XLS. So, how do you set an outer border around the Matrix without having it apply an 'all sides' border to every cell that touches the edge of the Matrix when exported to Excel?

    Read the article

  • Counting and joining two tables

    - by Eikern
    Eventhosts – containing the three regular hosts and an "other" field (if someone is replacing them):

    eventid | host (SET[Steve,Tim,Brian,other])
    -------------------------------------------
    1       | Steve
    2       | Tim
    3       | Brian
    4       | other
    5       | other

    Event:

    id | other | name etc.
    ----------------------
    1  |       | …
    2  |       | …
    3  |       | …
    4  | Billy | …
    5  | Irwin | …

    This query:

    SELECT h.host, COUNT(*) AS hostcount
    FROM host AS h
    LEFT OUTER JOIN event AS e ON h.eventid = e.id
    GROUP BY h.host

    returns:

    Steve | 1
    Tim   | 1
    Brian | 1
    other | 2

    I want it to return:

    Steve | 1
    Tim   | 1
    Brian | 1
    Billy | 1
    Irwin | 1

    OR

    Steve |       | 1
    Tim   |       | 1
    Brian |       | 1
    other | Billy | 1
    other | Irwin | 1

    Can someone tell me how I can achieve this or point me in a direction?

    Read the article

  • Access violation after GetInterface/QueryInterface in Delphi

    - by W55tKQbuRu28Q4xv
    Hi everyone! First, I'm very new to Delphi and COM, but I need to build a COM application in Delphi. I have read a lot of articles and notes on the internet, but COM and COM in Delphi are still not clear to me. My source is http://www.everfall.com/paste/id.php?wisdn8hyhzkt (about 80 lines). I am trying to make a COM interface and an implementing class. It works if I call an interface method from Delphi (I create an implementation object via TestClient.Create), but if I try to create an object from the outside world (from Java, via com4j), my application crashes with the following exception: Project Kernel.exe raised exception class $C0000005 with message 'access violation at 0x00000002: read of address 0x00000002'. If I set a breakpoint in QueryInterface, it is hit, but when I step out of the function, everything crashes. What am I doing wrong? What am I still missing? What can/should I read about COM (in Delphi) to avoid dumb questions like this?

    Read the article
