Search Results

Search found 17867 results on 715 pages for 'delete row'.


  • How to maintain GridPane's fixed size after adding elements dynamically

    - by Eviatar G.
    I need to create a board game whose size can change dynamically: 5x5, 6x6, 7x7 or 8x8. I am using JavaFX with NetBeans and Scene Builder for the GUI. When the user chooses a board size greater than 5x5, this is what happens: This is the template in Scene Builder before adding cells dynamically: To every cell in the GridPane I am adding a StackPane + a label with the cell number: @FXML GridPane boardGame; public void CreateBoard() { int boardSize = m_Engine.GetBoard().GetBoardSize(); int num = boardSize * boardSize; int maxColumns = m_Engine.GetNumOfCols(); int maxRows = m_Engine.GetNumOfRows(); for(int row = 0; row < maxRows ; row++) { for(int col = maxColumns - 1; col >= 0 ; col--) { StackPane stackPane = new StackPane(); stackPane.setPrefSize(150.0, 200.0); stackPane.getChildren().add(new Label(String.valueOf(num))); boardGame.add(stackPane, col, row); num--; } } boardGame.setGridLinesVisible(true); boardGame.autosize(); } The problem is that the stack panes on the GridPane keep getting smaller. I tried setting their minimum and maximum sizes to the same value, but it didn't help; they still shrink. I searched the web but didn't really find the same problem as mine. The only similar problem I found was here: Dynamically add elements to a fixed-size GridPane in JavaFX. But the suggestion there is to use a TilePane, and I need to use a GridPane because this is a board game and it is easier with a GridPane to do tasks such as getting the cell at row = 1 and column = 2, for example. EDIT: I removed the GridPane from the FXML and created it manually in the Controller, but now it prints a blank board: @FXML GridPane boardGame; public void CreateBoard() { int boardSize = m_Engine.GetBoard().GetBoardSize(); int num = boardSize * boardSize; int maxColumns = m_Engine.GetNumOfCols(); int maxRows = m_Engine.GetNumOfRows(); boardGame = new GridPane(); boardGame.setAlignment(Pos.CENTER); Collection<StackPane> stackPanes = new ArrayList<StackPane>(); for(int row = 0; row < maxRows ; row++) { for(int col = maxColumns - 1; col >= 0 ; col--) { StackPane stackPane = new StackPane(); stackPane.setPrefSize(150.0, 200.0); stackPane.getChildren().add(new Label(String.valueOf(num))); boardGame.add(stackPane, col, row); stackPanes.add(stackPane); num--; } } this.buildGridPane(boardSize); boardGame.setGridLinesVisible(true); boardGame.autosize(); boardGamePane.getChildren().addAll(stackPanes); } public void buildGridPane(int i_NumOfRowsAndColumns) { RowConstraints rowConstraint; ColumnConstraints columnConstraint; for(int index = 0 ; index < i_NumOfRowsAndColumns; index++) { rowConstraint = new RowConstraints(3, Control.USE_COMPUTED_SIZE, Double.POSITIVE_INFINITY, Priority.ALWAYS, VPos.CENTER, true); boardGame.getRowConstraints().add(rowConstraint); columnConstraint = new ColumnConstraints(3, Control.USE_COMPUTED_SIZE, Double.POSITIVE_INFINITY, Priority.ALWAYS, HPos.CENTER, true); boardGame.getColumnConstraints().add(columnConstraint); } }
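    A minimal sketch of one way to keep the cells from shrinking, assuming the same boardGame, maxRows and maxColumns as in the question: give the GridPane one fixed-size ColumnConstraints/RowConstraints per column and row, so the cells keep their 150x200 size and the grid itself grows (wrap it in a ScrollPane if it must not outgrow the window). If the grid instead has to keep its template size, percentage constraints (setPercentWidth / setPercentHeight) at least keep all cells equal.

        // Sketch only: fix every column at 150px and every row at 200px before adding the cells
        boardGame.getColumnConstraints().clear();
        boardGame.getRowConstraints().clear();
        for (int col = 0; col < maxColumns; col++) {
            // ColumnConstraints(double) sets min, pref and max width to the same fixed value
            boardGame.getColumnConstraints().add(new ColumnConstraints(150));
        }
        for (int row = 0; row < maxRows; row++) {
            // RowConstraints(double) does the same for the row height
            boardGame.getRowConstraints().add(new RowConstraints(200));
        }
        // ...then add the StackPanes exactly as in CreateBoard()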


  • How do I simplify this PHP script?

    - by user225269
    Any suggestions on how I can simplify the php script below?This was my previous question: http://stackoverflow.com/questions/2712237/how-to-check-if-a-checkbox-radio-button-is-checked-in-php that is linked to this one, What I'm trying to do here is to output the data depending on the checkbox that is checked. But my code isn't really good, it shows 2 tables if the condition is met by the 2 results. As you can see in the code below, any suggestions on how I can simplify this? <?php $con = mysql_connect("localhost","root","nitoryolai123$%^"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("school", $con); $id = mysql_real_escape_string($_POST['idnum']); if ( $_POST['yr'] == 'year' and $_POST['sec'] == 'section' ){ $result2 = mysql_query("SELECT * FROM student WHERE IDNO='$id'"); echo "<table border='1'> <tr> <th>IDNO</th> <th>YEAR</th> <th>SECTION</th> </tr>"; while($row = mysql_fetch_array($result2)) { echo "<tr>"; echo "<td>" . $row['IDNO'] . "</td>"; echo "<td>" . $row['YEAR'] . "</td>"; echo "<td>" . $row['SECTION'] . "</td>"; echo "</tr>"; } echo "</table>"; } if ( $_POST['yr'] == 'year' and $_POST['sec'] == 'section' and $_POST['lname'] == 'lastname'){ $result3 = mysql_query("SELECT * FROM student WHERE IDNO='$id'"); echo "<table border='1'> <tr> <th>IDNO</th> <th>YEAR</th> <th>SECTION</th> <th>LASTNAME</th> </tr>"; while($row = mysql_fetch_array($result3)) { echo "<tr>"; echo "<td>" . $row['IDNO'] . "</td>"; echo "<td>" . $row['YEAR'] . "</td>"; echo "<td>" . $row['SECTION'] . "</td>"; echo "<td>" . $row['LASTNAME'] . "</td>"; echo "</tr>"; } echo "</table>"; } mysql_close($con); ?>
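    One way to collapse the two nearly identical blocks is sketched below, assuming the same form field names and values as in the question (and keeping the deprecated mysql_* calls only to stay close to the original code): build the column list from whichever checkboxes were posted, then emit a single table.

        <?php
        // Sketch only: assumes $con and $id are set up exactly as in the original script
        $columns = array('IDNO');
        if (isset($_POST['yr']) && $_POST['yr'] == 'year') { $columns[] = 'YEAR'; }
        if (isset($_POST['sec']) && $_POST['sec'] == 'section') { $columns[] = 'SECTION'; }
        if (isset($_POST['lname']) && $_POST['lname'] == 'lastname') { $columns[] = 'LASTNAME'; }

        $result = mysql_query("SELECT * FROM student WHERE IDNO='$id'");

        echo "<table border='1'><tr>";
        foreach ($columns as $col) { echo "<th>$col</th>"; }
        echo "</tr>";
        while ($row = mysql_fetch_array($result)) {
            echo "<tr>";
            foreach ($columns as $col) { echo "<td>" . $row[$col] . "</td>"; }
            echo "</tr>";
        }
        echo "</table>";
        ?>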


  • Visual Studio 2010 Productivity Tips and Tricks - Part 2: Key Shortcuts

    - by ToStringTheory
    Ask anyone that knows me, and they will confirm that I hate the mouse.  This isn’t because I deny affection to objects that don’t look like their mammalian-named self, but rather for a much more simple and not-insane reason: I have terrible eyesight.  Introduction Thanks to a degenerative eye disease known as Choroideremia, I have learned to rely more on the keyboard which I can feel digital/static positions of keys relative to my fingers, than the much more analog/random position of the mouse.  Now, I would like to share some of the keyboard shortcuts with you now, as I believe that they not only increase my productivity, but yours as well once you know them (if you don’t already of course)...  I share one of my biggest tips for productivity in the conclusion at the end. Visual Studio Key Shortcuts Global Editor Shortcuts These are shortcuts that are available from almost any application running in Windows, however are many times forgotten. Shortcut Action Visual Studio 2010 Functionality Ctrl + X Cut This shortcut works without a selection. If nothing is selected, the entire line that the caret is on is cut from the editor. Ctrl + C Copy This shortcut works without a selection. If nothing is selected, the entire line that the caret is on is copied from the editor. Ctrl + V Paste If you copied an entire line by the method above, the data is pasted in the line above the current caret line. Ctrl + Shift + V Next Clipboard Element Cut/Copy multiple things, and then hit this combo repeatedly to switch to the next clipboard item when pasting. Ctrl + Backspace Delete Previous Will delete the previous word from the editor directly before the caret. If anything is selected, will just delete that. Ctrl + Del Delete Next Word Will delete the next word/space from the editor directly after the caret. If anything is selected, will just delete that. Shift + Del Delete Focused Line Will delete the line from the editor that the caret is on. If something is selected, will just delete that. Ctrl + ? or Ctrl + ? Left/Right by Word This will move the caret left or right by word or special character boundary. Holding Shift will also select the word. Ctrl + F Quick Find Either the Quick Find panel, or the search bar if you have the Productivity Power Tools installed. Ctrl + Shift + F Find in Solution Opens up the 'Find in Files' window, allowing you to search your solution, as well as using regex for pattern matching. F2 Rename File... While not debugging, selecting a file in the solution explorer\navigator and pressing F2 allows you to rename the selected file. Global Application Shortcuts These are shortcuts that are available from almost any application running in Windows, however are many times forgotten... Again... Shortcut Action Visual Studio 2010 Functionality Ctrl + N New File dialog Opens up the 'New File' dialog to add a new file to the current directory in the Solution\Project. Ctrl + O Open File dialog Opens up the 'Open File' dialog to open a file in the editor, not necessarily in the solution. Ctrl + S Save File dialog Saves the currently focused editor tab back to your HDD/SSD. Ctrl + Shift + S Save All... Quickly save all open/edited documents back to your disk. Ctrl + Tab Switch Panel\Tab Tapping this combo switches between tabs quickly. Holding down Ctrl when hitting tab will bring up a chooser window. Building Shortcuts These are shortcuts that are focused on building and running a solution. These are not usable when the IDE is in Debug mode, as the shortcut changes by context. 
Shortcut Action Visual Studio 2010 Functionality Ctrl + Shift + B Build Solution Starts a build process on the solution according to the current build configuration manager settings. Ctrl + Break Cancel a Building Solution Will cancel a build operation currently in progress. Good for long running builds when you think of one last change. F5 Start Debugging Will build the solution if needed and launch debugging according to the current configuration manager settings. Ctrl + F5 Start Without Debugger Will build the solution if needed and launch the startup project without attaching a debugger. Debugging Shortcuts These are shortcuts that are used when debugging a solution. Shortcut Action Visual Studio 2010 Functionality F5 Continue Execution Continues execution of code until the next breakpoint. Ctrl + Alt + Break Pause Execution Pauses the program execution. Shift + F5 Stop Debugging Stops the current debugging session. NOTE: Web apps will still continue processing after stopping the debugger. Keep this in mind if working on code such as credit card processing. Ctrl + Shift + F5 Restart Debugging Stops the current debugging session and restarts the debugging session from the beginning. F9 Place Breakpoint Toggles/Places a breakpoint in the editor on the current line. Set a breakpoint in condensed code by highlighting the statement first. F10 Step Over Statement When debugging, executes all code in methods/properties on the current line until the next line. F11 Step Into Statement When debugging, steps into a method call so you can walk through the code executed there (if available). Ctrl + Alt + I Immediate Window Open the Immediate Window to execute commands when execution is paused. Navigation Shortcuts These are shortcuts that are used for navigating in the IDE or editor panel. Shortcut Action Visual Studio 2010 Functionality F4 Properties Panel Opens the properties panel for the selected item in the editor/designer/solution navigator (context driven). F12 Go to Definition Press F12 with the caret on a member to navigate to its declaration. With the Productivity tools, Ctrl + Click works too. Ctrl + K Ctrl + T View Call Hierarchy View the call hierarchy of the member the caret is on. Great for going through n-tier solutions and interface implementations! Ctrl + Alt + B Breakpoint Window View the breakpoint window to manage breakpoints and their advanced options. Allows easy toggling of breakpoints. Ctrl + Alt + L Solution Navigator Open the solution explorer panel. Ctrl + Alt + O Output Window View the output window to see build\general output from Visual Studio. Ctrl + Alt + Enter Live Web Preview Only available with the Web Essential plugin. Launches the auto-updating Preview panel. Testing Shortcuts These are shortcuts that are used for running tests in the IDE. Please note, Visual Studio 2010 is all about context. If your caret is within a test method when you use one of these combinations, the combination will apply to that test. If your caret is within a test class, it will apply to that class. If the caret is outside of a test class, it will apply to all tests. Shortcut Action Visual Studio 2010 Functionality Ctrl + R T Run Test(s) Run all tests in the current context without a debugger attached. Breakpoints will not be stopped on. Ctrl + R Ctrl + T Run Test(s) (Debug) Run all tests in the current context with a debugger attached. This allows you to use breakpoints. Substitute A for T from the preceding combos to run/debug ALL tests in the current context. 
Substitute Y for T from the preceding combos to run/debug ALL impacted/covering tests for a method in the current context. Advanced Editor Shortcuts These are shortcuts that are used for more advanced editing in the editor window. Shortcut Action Visual Studio 2010 Functionality Shift + Alt + ? Shift + Alt + ? Multiline caret up/down Use this combo to edit multiple lines at once. Not too many uses for it, but once in a blue moon one comes along. Ctrl + Alt + Enter Insert Line Above Inserts a blank line above the line the caret is currently on. No need to be at end or start of line, so no cutting off words/code. Ctrl + K Ctrl + C Comment Selection Comments the current selection out of compilation. Ctrl + K Ctrl + U Uncomment Selection Uncomments the current selection into compilation. Ctrl + K Ctrl + D Format Document Automatically formats the document into a structured layout. Lines up nodes or code into columns intelligently. Alt + ? Alt + ? Code line up/down *Use this combo to move a line of code up or down quickly. Great for small rearrangements of code. *Requires the Productivity Power pack from Microsoft. Conclusion This list is by no means meant to be exhaustive, but these are the shortcuts I use regularly every hour/minute of the day. There are still 100s more in Visual Studio that you can discover through the configuration window, or by tooltips. Something that I started doing months ago seems to have interest in my office.. In my last post, I talked about how I hated a cluttered UI. One of the ways that I aimed to resolve that was by systematically cleaning up the toolbars week by week. First day, I removed ALL icons that I already knew shortcuts to, or would never use them (Undo in a toolbar?!). Then, every week from that point on, I make it a point to remove an icon/two from the toolbar and make an effort to remember its key combination. I gain extra space in the toolbar area, AND become more productive at the same time! I hope that you found this article interesting or at least somewhat informative.. Maybe a shortcut or two you didn't know. I know some of them seem trivial, but I often see people going to the edit menu for Copy/Paste... Thought a refresher might be helpful!


  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not the be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows. And when is it needed? The way I see it, spools are needed for two reasons. 1 – When data is going to be needed AGAIN. 2 – When data needs to be kept away from the original source. If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done. There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that the a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query. Table v Index When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure. 
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. The details about whether or not a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than on the underlying structure. Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like). Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? 
At some point I’ll write a summary post – once I have you should find a comment below pointing at it. @rob_farley
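    For readers who want to reproduce the recursive-CTE behaviour described above, here is a minimal query along those lines, assuming the AdventureWorks HumanResources.Employee table with its ManagerID column (as the post notes, not AdventureWorks2012); run it with the actual execution plan enabled and the lazy Index Spool / Table Spool pair the post describes should appear:

        WITH Emps AS
        (
            -- anchor member: the top of the hierarchy
            SELECT EmployeeID, ManagerID, 0 AS Lvl
            FROM HumanResources.Employee
            WHERE ManagerID IS NULL
            UNION ALL
            -- recursive member: rows are fed onto the spool and read back off it each round
            SELECT e.EmployeeID, e.ManagerID, m.Lvl + 1
            FROM HumanResources.Employee AS e
            JOIN Emps AS m ON e.ManagerID = m.EmployeeID
        )
        SELECT EmployeeID, ManagerID, Lvl
        FROM Emps
        OPTION (MAXRECURSION 0);  -- removes the Assert operator, as mentioned above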


  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read Open Document spreadsheet in C#. Importer as example My previous post about importers showed you how to build flexible importers support to your web application. This post introduces you practical example of one of my importers. Of course, sensitive code is omitted. We start with ODS importer class and we add new methods as we go. public class OdsImporter : ImporterBase {     public OdsImporter()     {     }       public override string[] SupportedFileExtensions     {         get { return new[] { "ods" }; }     }       public override ImportResult Import(Stream fileStream, long companyId, short year)     {         string contentXml = GetContentXml(fileStream);           var result = new ImportResult();         var doc = XDocument.Parse(contentXml);           var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);           foreach (var row in rows)         {             ImportRow(row, companyId, year, result);         }           return result;     } } The class given here just extends base class for importers (previous post uses interface but as I already told there you move to abstract base class when writing code for real projects). Import method reads data from *.ods file, parses it (it is XML), finds all data rows and imports data. As you may see then first row is skipped. This is because the first row on my sheet is always headers row. Reading ODS file Our import method starts with getting XML from *.ods file. ODS files like OpenXml files are zipped containers that contain different files. We need content.xml as all data is kept there. To get the contents of file we use SharpZipLib library to read uploaded file as *.zip file. private static string GetContentXml(Stream fileStream) {     var contentXml = "";       using (var zipInputStream = new ZipInputStream(fileStream))     {         ZipEntry contentEntry = null;         while ((contentEntry = zipInputStream.GetNextEntry()) != null)         {             if (!contentEntry.IsFile)                 continue;             if (contentEntry.Name.ToLower() == "content.xml")                 break;         }           if (contentEntry.Name.ToLower() != "content.xml")         {             throw new Exception("Cannot find content.xml");         }           var bytesResult = new byte[] { };         var bytes = new byte[2000];         var i = 0;           while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)         {             var arrayLength = bytesResult.Length;             Array.Resize<byte>(ref bytesResult, arrayLength + i);             Array.Copy(bytes, 0, bytesResult, arrayLength, i);         }         contentXml = Encoding.UTF8.GetString(bytesResult);     }     return contentXml; } If here is content.xml file then we stop browsing the file. We read this file to memory and return it as UTF-8 format string. Importing rows Our last task is to import rows. We use special method for this as we have to handle some tricks here. To keep files smaller the cell count on row is not always the same. If we have more than one empty cell one after another then ODS keeps only one cell for sequential empty cells. 
This cell has attribute called number-columns-repeated and it’s value is set to the number of sequential empty cells. This is why we use two indexers for cells collection. private void ImportRow(XElement row, ImportResult result) {     var cells = (from c in row.Descendants()                 where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"                 select c).ToList();       var dto = new DataDto();       var count = cells.Count;     var j = -1;       for (var i = 0; i < count; i++)     {         j++;         var cell = cells[i];         var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");         if (attr != null)         {             var numToSkip = 0;             if (int.TryParse(attr.Value, out numToSkip))             {                 j += numToSkip - 1;             }         }           if (i > 30) break;         if (j == 0)         {             dto.SomeProperty = cells[i].Value;         }         if (j == 1)         {             dto.SomeOtherProperty = cells[i].Value;         }         // some more data reading     }       // save data } You can define your own class for import results and add there all problems found during data import. Your application gets the results and shows them to user. Conclusion Reading ODS files may seem to complex task but actually it is very easy if we need only data from those documents. We can use some zip-library to get the content file and then parse it to XML. It is not hard to go through the XML but there are some optimization tricks we have to know. The code here is safe to use in web applications as it is not using any API-s that may have special needs to server and infrastructure.
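    A short usage sketch for the importer above; the method name, the caller and the companyId/year values are hypothetical, while OdsImporter, ImportResult and the Import signature come from the post (System.IO and System.Linq are assumed to be imported):

        // Hypothetical call site for the importer described in this post
        public ImportResult ImportUploadedOds(Stream uploadedStream, long companyId, short year)
        {
            var importer = new OdsImporter();

            // SupportedFileExtensions lets the caller route each upload to the right importer
            if (!importer.SupportedFileExtensions.Contains("ods"))
                throw new NotSupportedException("Expected an *.ods upload");

            // returns the ImportResult, including any problems found while reading the rows
            return importer.Import(uploadedStream, companyId, year);
        }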


  • Use an Ubuntu Live CD to Securely Wipe Your PC’s Hard Drive

    - by Trevor Bekolay
    Deleting files or quickly formatting a drive isn’t enough for sensitive personal information. We’ll show you how to get rid of it for good using an Ubuntu Live CD. When you delete a file in Windows, Ubuntu, or any other operating system, it doesn’t actually destroy the data stored on your hard drive; it just marks that data as “deleted.” If you overwrite it later, then that data is generally unrecoverable, but if the operating system doesn’t happen to overwrite it, then your data is still stored on your hard drive, recoverable by anyone who has the right software. By securely deleting files or entire hard drives, your data will be gone for good. Note: Modern hard drives are extremely sophisticated, as are the experts who recover data for a living. There is no guarantee that the methods covered in this article will make your data completely unrecoverable; however, they will make your data unrecoverable to the majority of recovery methods, and to all methods that are readily available to the general public. Shred individual files Most of the data stored on your hard drive is harmless, and doesn’t reveal anything about you. If there are just a few files that you know you don’t want someone else to see, then the easiest way to get rid of them is a built-in Linux utility called shred. Open a terminal window by clicking on Applications at the top-left of the screen, then expanding the Accessories menu and clicking on Terminal. Navigate to the file that you want to delete using cd to change directories and ls to list the files and folders in the current directory. As an example, we’ve got a file called BankInfo.txt on a Windows NTFS-formatted hard drive. We want to delete it securely, so we’ll call shred by entering the following in the terminal window: shred <file> which is, in our example: shred BankInfo.txt Notice that our BankInfo.txt file still exists, even though we’ve shredded it. A quick look at the contents of BankInfo.txt makes it obvious that the file has indeed been securely overwritten. We can use some command-line arguments to make shred delete the file from the hard drive as well. We can also be extra-careful about the shredding process by upping the number of times shred overwrites the original file. To do this, in the terminal, type in: shred --remove --iterations=<num> <file> By default, shred overwrites the file 25 times. We’ll double this, giving us the following command: shred --remove --iterations=50 BankInfo.txt BankInfo.txt has now been securely wiped from the physical disk, and also no longer shows up in the directory listing. Repeat this process for any sensitive files on your hard drive! Wipe entire hard drives If you’re disposing of an old hard drive, or giving it to someone else, then you might instead want to wipe your entire hard drive. shred can be invoked on hard drives, but on modern file systems, the shred process may be reversible. We’ll use the program wipe to securely delete all of the data on a hard drive. Unlike shred, wipe is not included in Ubuntu by default, so we have to install it. Open up the Synaptic Package Manager by clicking on System in the top-left corner of the screen, then expanding the Administration folder and clicking on Synaptic Package Manager. wipe is part of the Universe repository, which is not enabled by default. We’ll enable it by clicking on Settings > Repositories in the Synaptic Package Manager window. Check the checkbox next to “Community-maintained Open Source software (universe)”. Click Close.
    You’ll need to reload Synaptic’s package list. Click on the Reload button in the main Synaptic Package Manager window. Once the package list has been reloaded, the text over the search field will change to “Rebuilding search index”. Wait until it reads “Quick search,” and then type “wipe” into the search field. The wipe package should come up, along with some other packages that perform similar functions. Click on the checkbox to the left of the label “wipe” and select “Mark for Installation”. Click on the Apply button to start the installation process. Click the Apply button on the Summary window that pops up. Once the installation is done, click the Close button and close the Synaptic Package Manager window. Open a terminal window by clicking on Applications in the top-left of the screen, then Accessories > Terminal. You need to figure out the correct hard drive to wipe. If you wipe the wrong hard drive, that data will not be recoverable, so exercise caution! In the terminal window, type in: sudo fdisk -l A list of your hard drives will show up. A few factors will help you identify the right hard drive. One is the file system, found in the System column of the list – Windows hard drives are usually formatted as NTFS (which shows up as HPFS/NTFS). Another good identifier is the size of the hard drive, which appears after its identifier (highlighted in the following screenshot). In our case, the hard drive we want to wipe is only around 1 GB large, and is formatted as NTFS. We make a note of the label found under the Device column heading. If you have multiple partitions on this hard drive, then there will be more than one device in this list. The wipe developers recommend wiping each partition separately. To start the wiping process, type the following into the terminal: sudo wipe <device label> In our case, this is: sudo wipe /dev/sda1 Again, exercise caution – this is the point of no return! Your hard drive will be completely wiped. It may take some time to complete, depending on the size of the drive you’re wiping. Conclusion If you have sensitive information on your hard drive – and chances are you probably do – then it’s a good idea to securely delete sensitive files before you give away or dispose of your hard drive. The most secure way to delete your data is with a few swings of a hammer, but shred and wipe from an Ubuntu Live CD are a good alternative!
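    If there are many files to shred, the same options can be applied in bulk; a small sketch, assuming a hypothetical ~/sensitive directory (shred's long options are --remove and --iterations):

        # Example only: 50 overwrite passes on every regular file under ~/sensitive, then unlink them
        find ~/sensitive -type f -print0 | xargs -0 shred --remove --iterations=50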


  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. Well, the understanding is absolutely correct, but the implementation of the same is not as easy as it seems. Similarly most of the people think lookup component is just component which does look up for additional information and does not pay much attention to it. Due to the same reason they do not pay attention to the same and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to write a good lookup component with Cache Mode. In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion.  The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values.  Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode.  This selection will determine whether and how the distinct lookup values are cached during package execution.  It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results. Full Cache The full cache mode setting is the default cache mode selection in the SSIS lookup transformation.  Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location.  As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason.  The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well.  Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode.  First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data.  If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data.  Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation.  This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).  
There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation.  Again, neither of these present a reason to avoid full cache mode, but should be used to determine whether full cache mode should be used in a given situation. Full cache mode is ideally useful when one or all of the following conditions exist: The size of the reference data set is small to moderately sized The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data Partial Cache When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow.  Initially, each distinct value will be retrieved individually from the specified source, and then cached.  To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios.  Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set.  If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)).  Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Using partial cache mode is ideally suited for the conditions below: The size of the data in the pipeline (more specifically, the number of distinct key column) is relatively small The size of the lookup data is too large to effectively store in cache The lookup source is well indexed to allow for fast retrieval of row-by-row values No Cache As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS.  As a result, every single row in the pipeline data set will require a query against the lookup source.  Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused.  In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it’s critical to know your data before choosing this option.  Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true: The reference data set is too large to reasonably be loaded into SSIS memory The pipeline data set is small and is not expected to grow There are expected to be very few or no duplicates of the key values(s) in the pipeline data set (i.e., there would be no benefit from caching these values) Conclusion The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow.  Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.  
Know how this selection impacts your ETL loads, and you’ll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS
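    As a small illustration of the UPPER() workaround mentioned above, a lookup source query might look like the sketch below (DimCustomer and its columns are hypothetical); the matching pipeline column would be upper-cased and trimmed the same way in a Derived Column transformation before the lookup, so the full-cache .NET comparison no longer trips over case or trailing-space differences:

        -- Hypothetical lookup source query; the pipeline column gets the same treatment upstream
        SELECT
            CustomerKey,
            UPPER(LTRIM(RTRIM(CustomerAlternateKey))) AS CustomerAlternateKey
        FROM dbo.DimCustomer;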


  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data Have you been hearing about “big data”, “map reduce” and other large scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to setup a “test” hadoop environment on your machine? More recently, I have read about some of Microsoft’s work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclinded to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies other another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach and it occured to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment. Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine. C:> winrm quickconfig WinRM is not set up to receive requests on this machine. The following changes must be made: Set the WinRM service type to auto start. Start the WinRM service. Make these changes [y/n]? y Alternatively, you could take the approach described in the Remotely enable PSRemoting post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node. Invoke-MapRedux Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet: $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell Module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined. Map Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows. 
A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function. $aMap = { Param ( [PsObject] $dataset ) # Indicate the job is running on the remote node. Write-Host ($env:computername + "::Map"); # The hashtable to return $list = @{}; # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject... # ... The $dataset has a single 'Data' property which contains an array of data rows # which is a subset of the originally submitted data set. # Return the hashtable (Key, PSObject) Write-Output $list; } Reduce Likewise, with the Reduce function a simple prototype must be followed which takes a $key and a result $dataset from the MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section. $aReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) # The hashtable to return $redux = @{}; # Return Write-Output $redux; } All Together Now When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started: # Import the MapRedux and SQL Server providers Import-Module "MapRedux" Import-Module “sqlps” -DisableNameChecking # Query the database for a dataset Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey"; Write-Host "Query: $query" $dataset = Invoke-SqlCmd -query $query # Build the Map function $MyMap = { Param ( [PsObject] $dataset ) Write-Host ($env:computername + "::Map"); $list = @{}; foreach($row in $dataset.Data) { # Write-Host ("Key: " + $row.MyKey.ToString()); if($list.ContainsKey($row.MyKey) -eq $true) { $s = $list.Item($row.MyKey); $s.Sum += $row.Value1; $s.Count++; } else { $s = New-Object PSObject; $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey; $s | Add-Member -type NoteProperty -Name Sum -Value $row.Value1; $list.Add($row.MyKey, $s); } } Write-Output $list; } $MyReduce = { Param ( [object] $key, [PSObject] $dataset ) Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count) $redux = @{}; $count = 0; foreach($s in $dataset.Data) { $sum += $s.Sum; $count += 1; } # Reduce $redux.Add($s.MyKey, $sum / $count); # Return Write-Output $redux; } # Create the item data $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce # Array of processing nodes... $MyNodes = ("node1", "node2", "node3", "node4", "localhost") # Run the Map Reduce routine... $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose # Show the results Set-Location C:\ $MyMrResults | Out-GridView Conclusion I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done and PowerShell could prove to be the the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic excerise at home or school. 
Follow me on Twitter to stay up to date on the continuing progress of my Powershell MapRedux module, and thanks for reading! Daniel


  • Linq to Datarow, Select multiple columns as distinct?

    - by Beta033
    Basically I'm trying to reproduce the following MSSQL query as LINQ: SELECT DISTINCT [TABLENAME], [COLUMNNAME] FROM [DATATABLE] The closest I've got is Dim query = (From row As DataRow In ds.Tables("DATATABLE").Rows _ Select row("COLUMNNAME") ,row("TABLENAME").Distinct When I do the above I get the error Range variable name can be inferred only from a simple or qualified name with no arguments. I was sort of expecting it to return a collection that I could then iterate through and perform actions on for each entry. Maybe a DataRow collection? As a complete LINQ newb, I'm not sure what I'm missing. I've tried variations on Select new with { row("COLUMNNAME") ,row("TABLENAME")} and get: Anonymous type member name can be inferred only from a simple or qualified name with no arguments. Also, does anyone know of any good books/resources to get fluent?
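    One common way to get the distinct table/column pairs is to project into an anonymous type with Key members, so Distinct() compares the pairs by value; a sketch, assuming a reference to System.Data.DataSetExtensions for AsEnumerable and Field:

        ' Sketch only: Key members give the anonymous type value equality, which Distinct() relies on
        Dim query = (From row In ds.Tables("DATATABLE").AsEnumerable() _
                     Select New With {Key .TableName = row.Field(Of String)("TABLENAME"), _
                                      Key .ColumnName = row.Field(Of String)("COLUMNNAME")}).Distinct()

        For Each item In query
            Console.WriteLine("{0}.{1}", item.TableName, item.ColumnName)
        Next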


  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
    Snapshot database is one of the most interesting concepts that I have used at some places recently. Here is a quick definition of the subject from Book On Line: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and can always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot’s creation. A snapshot persists until it is explicitly dropped by the database owner. If you do not know how Snapshot database work, here is a quick note on the subject. However, please refer to the official description on Book-on-Line for accuracy. Snapshot database is a read-only database created from an original database called the “source database”. This database operates at page level. When Snapshot database is created, it is produced on sparse files; in fact, it does not occupy any space (or occupies very little space) in the Operating System. When any data page is modified in the source database, that data page is copied to Snapshot database, making the sparse file size increases. When an unmodified data page is read in the Snapshot database, it actually reads the pages of the original database. In other words, the changes that happen in the source database are reflected in the Snapshot database. Let us see a simple example of Snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only- there are a few considerations of CPU, DISK I/O and memory, which will be discussed in the future posts. Create Snapshot Delete Data from Original DB Restore Data from Snapshot First, let us create the first Snapshot database and observe the sparse file details. USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO Now let us see the resultset for the same. Now let us do delete something from the Original DB and check the same details we checked before. -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO When we check the details of sparse file created by Snapshot database, we will find some interesting details. The details of Regular DB remain the same. It clearly shows that when we delete data from Regular/Source DB, it copies the data pages to Snapshot database. This is the reason why the size of the snapshot DB is increased. Now let us take this small exercise to  the next level and restore our deleted data from Snapshot DB to Original Source DB. 
-- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Now let us check the details of the select statement and we can see that we are successful able to restore the database from Snapshot Database. We can clearly see that this is a very useful feature in case you would encounter a good business that needs it. I would like to request the readers to suggest more details if they are using this feature in their business. Also, let me know if you think it can be potentially used to achieve any tasks. Complete Script of the afore- mentioned operation for easy reference is as follows: USE master GO -- Create Regular Database CREATE DATABASE RegularDB GO USE RegularDB GO -- Populate Regular Database with Sample Table CREATE TABLE FirstTable (ID INT, Value VARCHAR(10)) INSERT INTO FirstTable VALUES(1, 'First'); INSERT INTO FirstTable VALUES(2, 'Second'); INSERT INTO FirstTable VALUES(3, 'Third'); GO -- Create Snapshot Database CREATE DATABASE SnapshotDB ON (Name ='RegularDB', FileName='c:\SSDB.ss1') AS SNAPSHOT OF RegularDB; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Delete from Regular Database DELETE FROM RegularDB.dbo.FirstTable; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Restore Data from Snapshot Database USE master GO RESTORE DATABASE RegularDB FROM DATABASE_SNAPSHOT = 'SnapshotDB'; GO -- Select from Regular and Snapshot Database SELECT * FROM RegularDB.dbo.FirstTable; SELECT * FROM SnapshotDB.dbo.FirstTable; GO -- Clean up DROP DATABASE [SnapshotDB]; DROP DATABASE [RegularDB]; GO Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
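    The sparse file details referred to above (shown as screenshots in the original post) can be inspected with a query along these lines; size_on_disk_bytes reports the space the snapshot's sparse file actually consumes, so it grows as pages are copied into the snapshot:

        -- Check how much disk space the snapshot's sparse file actually uses
        SELECT DB_NAME(vfs.database_id) AS database_name,
               mf.name AS logical_file,
               vfs.size_on_disk_bytes
        FROM sys.dm_io_virtual_file_stats(DB_ID('SnapshotDB'), NULL) AS vfs
        JOIN sys.master_files AS mf
          ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;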

    Read the article

  • Is there a dojo enhanced grid example with context menu

    - by user102023
I am looking for an example of a dojo EnhancedGrid that has a context menu on either a cell or a row, where the menu handler can access the cell or row data. I have managed to create an EnhancedGrid with a row context menu, and I can create a function that captures the click on the row menu item. However, I am not sure how to access the row data from within the menu item handler. I have not seen any example of this in the tests of the nightly build. Is there an example available online?

    Read the article

  • Difference between LASTDATE and MAX for semi-additive measures in #DAX

    - by Marco Russo (SQLBI)
I recently wrote an article on SQLBI about semi-additive measures in DAX. I included the formulas for common calculations, and there is an interesting point that is worth a longer digression: the difference between LASTDATE and MAX (which is similar to the difference between FIRSTDATE and MIN – I just describe the former; for the latter, replace the corresponding names). LASTDATE is a DAX function that receives an argument that has to be a date column and returns the last date active in the current filter context. Apparently, it is the same value returned by MAX, which returns the maximum value of the argument in the current filter context. Of course, MAX can receive any numeric type (including date), whereas LASTDATE only accepts a column of type date. But overall, they seem identical in their result. However, the difference is a semantic one. In fact, this expression: LASTDATE ( 'Date'[Date] ) could also be rewritten as: FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) LASTDATE is a function that returns a table with a single column and one row, whereas MAX returns a scalar value. In DAX, any expression with one row and one column can be automatically converted into the corresponding scalar value of the single cell returned. The opposite is not true. So you can use LASTDATE in any expression where a table or a scalar is required, but MAX can be used only where a scalar expression is expected. Since LASTDATE returns a table, you can use it in any expression that expects a table as an argument, such as COUNTROWS. In fact, you can write this expression: COUNTROWS ( LASTDATE ( 'Date'[Date] ) ) which will always return 1 or BLANK (if there are no dates active in the current filter context). You cannot pass MAX as an argument of COUNTROWS. You can pass to LASTDATE a reference to a column or any table expression that returns a column. The following two syntaxes are semantically identical: LASTDATE ( 'Date'[Date] ) and LASTDATE ( VALUES ( 'Date'[Date] ) ). The result is the same, and the use of VALUES is not required because it is implicit in the first syntax, unless you have a row context active. In that case, be careful: using the LASTDATE function with a direct column reference in a row context will produce a context transition (the row context is transformed into a filter context) that hides the external filter context, whereas using VALUES in the argument preserves the existing filter context without applying the context transition of the row context (see the columns LastDate and Values in the following query and result). You can use any other table expression (including a FILTER) as the LASTDATE argument. For example, the following expression will always return the last date available in the Date table, regardless of the current filter context: LASTDATE ( ALL ( 'Date'[Date] ) ) The following query recaps the results produced by the different syntaxes described.

EVALUATE
CALCULATETABLE(
    ADDCOLUMNS(
        VALUES ( 'Date'[Date] ),
        "LastDate", LASTDATE( 'Date'[Date] ),
        "Values", LASTDATE( VALUES ( 'Date'[Date] ) ),
        "Filter", LASTDATE( FILTER ( VALUES ( 'Date'[Date] ), 'Date'[Date] = MAX ( 'Date'[Date] ) ) ),
        "All", LASTDATE( ALL ( 'Date'[Date] ) ),
        "Max", MAX( 'Date'[Date] )
    ),
    'Date'[Calendar Year] = 2008
)
ORDER BY 'Date'[Date]

The LastDate column repeats the current date, because the context transition happens within the ADDCOLUMNS.
The Values column preserves the existing filter context, keeping it from being replaced by the context transition, so the result corresponds to the last day in year 2008 (which is filtered in the external CALCULATETABLE). The Filter column works like the Values one, even though we use the FILTER approach instead of the LASTDATE one. The All column shows the result of LASTDATE ( ALL ( 'Date'[Date] ) ), which ignores the filter on Calendar Year (in fact the date returned is in year 2010). Finally, the Max column shows the result of the MAX formula, which is the easiest to use; its only limitation is that it does not return a table when you need one (as in a filter argument of CALCULATE or CALCULATETABLE, where using LASTDATE is shorter). I know that using LASTDATE in complex expressions might create some issues. In my experience, the fact that a context transition happens automatically in the presence of a row context is the main reason for confusion and unexpected results in DAX formulas using this function. For a reference of DAX formulas using MAX and LASTDATE, read my article about semi-additive measures in DAX.

    Read the article

  • Horizontal UITableView

    - by imran
I want to implement a layout in my iPad application that has a UITableView that scrolls left and right rather than up and down. So rather than row 1, row 2, row 3 (scrolling vertically), it would be row 1, row 2, row 3 (scrolling horizontally). I've seen that UITableView is designed to do only vertical scrolling, so applying a transform does not give the desired effect. Is there a standard way to do this while still taking advantage of a datasource provider like UITableView provides? I basically want to do something similar to what the BBC News reader app ( http://itunes.apple.com/us/app/bbc-news/id364147881?mt=8 ) does on the iPad with the list of stories to select from. Thanks

    Read the article

  • jQuery UI Dialog Button Icons

    - by Cory Grimster
    Is it possible to add icons to the buttons on a jQuery UI Dialog? I've tried doing it this way: $("#DeleteDialog").dialog({ resizable: false, height:150, modal: true, buttons: { 'Delete': function() { /* Do stuff */ $(this).dialog('close'); }, Cancel: function() { $(this).dialog('close'); } }, open: function() { $('.ui-dialog-buttonpane').find('button:contains("Cancel")').addClass('ui-icon-cancel'); $('.ui-dialog-buttonpane').find('button:contains("Delete")').addClass('ui-icon-trash'); } }); The selectors in the open function seem to be working fine. If I add the following to "open": $('.ui-dialog-buttonpane').find('button:contains("Delete")').css('color', 'red'); then I do get a Delete button with red text. That's not bad, but I'd really like that little trash can sprite on the Delete button as well.

    Read the article

  • CHMOD To Prevent Deletion Of File Directory

    - by Sohnee
    I have some hosting on a Linux server and I have a few folders that I don't ever want to delete. There are sub folders within these that I do want to delete. How do I set the CHMOD permissions on the folders I don't want to delete? Of course, when I say "I don't ever want to delete" - what I mean is that the end customer shouldn't delete them by accident, via FTP or in a PHP script etc. As an example of directory structure... MainFolder/SubFolder MainFolder/Another I don't want "MainFolder" to be accidentally deleted, but I'm happy for "SubFolder" and "Another" to be removed!

    Read the article

  • SQL SERVER – Fundamentals of Columnstore Index

    - by pinaldave
There are two kinds of storage in a database: row store and column store. Row store does exactly what the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search – instead of a query scanning all the data in an entire row, whether it is relevant or not, column store queries need to read only a much smaller number of columns. This means major gains in search speed and far more efficient hard drive use. Additionally, column store indexes are heavily compressed, which translates to even greater memory savings and faster searches. I am sure this looks very exciting, but it does not mean that you should convert every single index from row store to column store. One has to understand the proper places to use row store or column store indexes. Let us understand in this article how the columnstore type of index differs. Column store indexes are powered by Microsoft’s VertiPaq technology. However, all you really need to know is that this method of storing data as columns on a single page is much faster and more efficient. Creating a column store index is very easy, and you don’t have to learn new syntax to create one. You just need to specify the keyword “COLUMNSTORE” and define the index as you normally would. Keep in mind that once you add a column store index to a table, though, you cannot delete, insert or update the data – it is READ ONLY. However, since column store will mainly be used for data warehousing, this should not be a big problem, and you can always use partitioning to avoid rebuilding the index. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. The difference between the column store and row store approaches is illustrated below: in the case of row store indexes, pages contain entire rows, with the columns of each row spanning the page; in the case of column store indexes, each page contains values from a single column. As a result, only the columns needed to solve a query are fetched from disk. Additionally, there is a good chance that a single column contains redundant data, which helps compress it further; this has a positive effect on the buffer hit rate, since more of the data fits in memory and does not need to be retrieved from disk. Let us see a small example of how a columnstore index improves the performance of a query on a large table. As a first step, let us create a data set large enough to show the performance impact of a columnstore index. The time taken to create the sample data may vary on different computers based on their resources.
USE AdventureWorks
GO
-- Create New Table
CREATE TABLE [dbo].[MySalesOrderDetail](
[SalesOrderID] [int] NOT NULL,
[SalesOrderDetailID] [int] NOT NULL,
[CarrierTrackingNumber] [nvarchar](25) NULL,
[OrderQty] [smallint] NOT NULL,
[ProductID] [int] NOT NULL,
[SpecialOfferID] [int] NOT NULL,
[UnitPrice] [money] NOT NULL,
[UnitPriceDiscount] [money] NOT NULL,
[LineTotal] [numeric](38, 6) NOT NULL,
[rowguid] [uniqueidentifier] NOT NULL,
[ModifiedDate] [datetime] NOT NULL
) ON [PRIMARY]
GO
-- Create clustered index
CREATE CLUSTERED INDEX [CL_MySalesOrderDetail] ON [dbo].[MySalesOrderDetail]
( [SalesOrderDetailID])
GO
-- Create Sample Data Table
-- WARNING: This query may run up to 2-10 minutes based on your system's resources
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.*
FROM Sales.SalesOrderDetail S1
GO 100

Now let us do a quick performance test. I have kept STATISTICS IO ON to measure how much IO the following queries take. In my test, I will first run the query that uses the regular index and note its IO usage. After that, we will create the columnstore index and measure the IO of the same query.

-- Performance Test
-- Comparing Regular Index with ColumnStore Index
USE AdventureWorks
GO
SET STATISTICS IO ON
GO
-- Select Table with regular Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO
-- Table 'MySalesOrderDetail'. Scan count 1, logical reads 342261, physical reads 0, read-ahead reads 0.
-- Create ColumnStore Index
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_MySalesOrderDetail_ColumnStore]
ON [MySalesOrderDetail]
(UnitPrice, OrderQty, ProductID)
GO
-- Select Table with Columnstore Index
SELECT ProductID, SUM(UnitPrice) SumUnitPrice, AVG(UnitPrice) AvgUnitPrice,
SUM(OrderQty) SumOrderQty, AVG(OrderQty) AvgOrderQty
FROM [dbo].[MySalesOrderDetail]
GROUP BY ProductID
ORDER BY ProductID
GO

It is very clear from the results that the query performs extremely fast after the columnstore index is created. The number of pages it has to read to run the query is drastically reduced, as the columns needed by the query are stored on the same pages and the query does not have to go through every single page to read them. If we enable the execution plan and compare, we can see that the columnstore index performs far better than the regular index in this case. Let us clean up the database.

-- Cleanup
DROP INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail]
GO
TRUNCATE TABLE dbo.MySalesOrderDetail
GO
DROP TABLE dbo.MySalesOrderDetail
GO

In future posts we will see cases where a columnstore index is not the appropriate solution, as well as a few other tricks and tips for columnstore indexes. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
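As an addendum to the article above (this sketch is not part of the original post): because the nonclustered columnstore index makes the table read-only, a common pattern for loading new data is to disable the index, load, and then rebuild it. A minimal sketch using the table and index created above:

-- Sketch (assumes the table and index from the demo above): loading data around a read-only columnstore index
ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] DISABLE;
GO
-- The table accepts INSERT/UPDATE/DELETE again while the columnstore index is disabled
INSERT INTO [dbo].[MySalesOrderDetail]
SELECT S1.* FROM Sales.SalesOrderDetail S1;
GO
-- Rebuild the columnstore index so queries can use it again
ALTER INDEX [IX_MySalesOrderDetail_ColumnStore] ON [dbo].[MySalesOrderDetail] REBUILD;
GO

Partition switching, which the article mentions, is the other common alternative for large warehouse loads.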

    Read the article

  • How to get MAX value of a version-number (varchar) column in T-SQL

    - by Ogre Psalm33
I have a table defined like this:

Column:   Version       Message
Type:     varchar(20)   varchar(100)
------------------------------------
Row 1:    2.2.6         Message 1
Row 2:    2.2.7         Message 2
Row 3:    2.2.12        Message 3
Row 4:    2.3.9         Message 4
Row 5:    2.3.15        Message 5

I want to write a T-SQL query that will get the message for the MAX version number, where the "Version" column represents a software version number, i.e., 2.2.12 is greater than 2.2.7, and 2.3.15 is greater than 2.3.9, etc. Unfortunately, I can't think of an easy way to do that without using CHARINDEX or some other complicated split-like logic. Running the query SELECT MAX(Version) FROM my_table will yield the erroneous result 2.3.9, when it should really be 2.3.15. Any bright ideas that don't get too complex?
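One common approach (sketched here as a suggestion, not taken from the original post, and assuming every Version value has exactly three numeric dot-separated parts) is to let PARSENAME split the string on the dots and sort each part numerically:

-- Sketch: order version strings numerically, assuming exactly three dot-separated parts
SELECT TOP (1) Version, Message
FROM my_table
ORDER BY CAST(PARSENAME(Version, 3) AS INT) DESC,
         CAST(PARSENAME(Version, 2) AS INT) DESC,
         CAST(PARSENAME(Version, 1) AS INT) DESC;

PARSENAME numbers the parts from right to left, so part 3 is the major version here; versions with two or four parts would need the expression adjusted.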

    Read the article

  • Can NSDictionary be used with TableView on iPhone?

    - by bobo
    In a UITableViewController subclass, there are some methods that need to be implemented in order to load the data and handle the row selection event: - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView { return 1; //there is only one section needed for my table view } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section { return [myList count]; //myList is a NSDictionary already populated in viewDidLoad method } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease ]; } // indexPath.row returns an integer index, // but myList uses keys that are not integer, // I don't know how I can retrieve the value and assign it to the cell.textLabel.text return cell; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath { // Handle row on select event, // but indexPath.row only returns the index, // not a key of the myList NSDictionary, // this prevents me from knowing which row is selected } How is NSDictionary supposed to work with TableView? What is the simplest way to get this done?

    Read the article

  • Convert table base design to table less design in best way

    - by Brij
What is the most optimized way to convert the following to a table-less design? The layout should be cross-browser compatible and SEO friendly. <table cellpadding="0" cellspacing="0"> <tr> <td>Row 1 Column 1</td> <td>Row 1 Column 2</td> <td>Row 1 Column 3</td> </tr> <tr> <td colspan="3" align="center">Row 2</td> </tr> <tr> <td>Row 3</td> <td align="right" colspan="2"><img src="test.jpg" alt="test" /></td> </tr> </table>

    Read the article

  • ASP.Net - DataRepeater with multiple templates and multiple datasources in one

    - by NicoJuicy
This is my problem: I'm using different SQL queries to fetch some people; some come from my database and some from an external database. They are all sorted into the same "list". The only difference is that the people who come from our database will have a different layout with fewer of them in a row (e.g. 1 in a row), while the people who come from the external database will be arranged 3 in a row. How can I implement this using a repeater? And how would the pagination work? Any "logical", working alternatives will be appreciated as well, but I prefer to keep my current workflow to solve this problem. Short: - Multiple datasources - Multiple templates for the different datasources (1 in a row, 3 in a row) - Pagination in this problem?

    Read the article

  • refresh table view iphone

    - by Florent
Hi all! I've set up a table view with a system that tracks whether a row has already been selected: I set a checkmark accessory on a row that has been seen, and I write the row to a plist as an int value. It works well, but only after I restart the app or reload the table view in my navigation controller. When I select a row it pushes a view controller, but when I go back to the table view the checkmark disappears, and whether the row was already selected only shows up again when the app restarts. So is there a way to refresh the table view, in viewWillAppear for example? Thanks to all!

    Read the article

  • Why does [NSOutlineView clickedRow] always return -1?

    - by jxpx777
    I have a fairly pedestrian non-editable NSOutlineView setup. In the bindings for the outline view, I have set the binding to my file's owner (MyDocument FWIW) with a selector of outlineViewWasDoubleClicked The method exists and is called, but when I call -clickedRow it consistently returns -1 rather than the row number of the row that I double clicked to trigger the method. My _outlineView is an IBOutlet and I've verified that it is hooked up correctly by using -selectedRow for the method rather than -clickedRow (I would rather use -clickedRow though because it seems unintuitive for the user to have a row selected, double click another row to do something with it and have the method triggered with the row they had selected.) My best guess right now is that the -clickedRow value is getting cleared out before my method fires, but I don't know where or what might be gobbling it up. Thanks in advance for any help.

    Read the article

  • recursive function to get all the child categories

    - by user253530
Here is what I'm trying to do: I need a function that, when passed an ID (for a category of things) as an argument, will provide all the subcategories, the sub-subcategories, the sub-sub-subcategories, etc. I was thinking of using a recursive function, since I don't know the number of subcategories, their sub-subcategories and so on. Here is what I've tried so far: function categoryChild($id) { $s = "SELECT * FROM PLD_CATEGORY WHERE PARENT_ID = $id"; $r = mysql_query($s); if(mysql_num_rows($r) > 0) { while($row = mysql_fetch_array($r)) echo $row['ID'].",".categoryChild($row['ID']); } else { $row = mysql_fetch_array($r); return $row['ID']; } } If I use return instead of echo, I won't get the same result. I need some help to fix this or rewrite it from scratch.

    Read the article

  • Sync a WinForm with DatagridView

    - by Ruben Trancoso
I have a Form with a DataGridView whose DataSource is a BindingSource bound to a table. This view will have single-row selection, buttons to delete or edit the currently selected row in a popup Form, and an insert button that will use the same Form as well. My question is: how can I sync the popup Form with the current row? I tried to use the RowStateChanged event to get and store the currently selected row to be used in the Form, but I couldn't; after the event I get the row that was selected before. Another thing I don't understand yet in C# is how to have a single recordset and know which is the current record (even if it is a new one being inserted), in such a way that, while working in the Form, all data being entered shows up at the same time in the DataGridView.

    Read the article

< Previous Page | 139 140 141 142 143 144 145 146 147 148 149 150  | Next Page >