Search Results

Search found 1510 results on 61 pages for 'cheat sheet'.


  • addSubview and autosizing

    - by neoneye
    How does one add views to a window so that the views are resized to fit within the window frame? The problem: I'm making a sheet window containing 2 views, where only one of them is visible at a time, so it's important that the views have the same size as the window. My problem is that either view0 fits correctly and view1 doesn't, or the other way around. I can't figure out how to give them the same size as the window.

    Possible solution: I could just make sure that both views have precisely the same size within Interface Builder; then it would work. However, I'm looking for a way to do this programmatically.

    Screenshot of view0: below you can see the autoresizing problem at the top and the right side, where the view is somehow clipped. Screenshot of view1: this view is resized correctly.

    Here is my code. Can the views be resized before adding them to the window? Or is it better to do as I do now, where the views are added one by one while changing the window frame? How do you do it?

        NSView* view0 = /* a view made with IB */;
        NSView* view1 = /* another view made with IB */;
        NSWindow* window = [self window];
        NSRect window_frame = [window frame];
        NSView* cv = [[[NSView alloc] initWithFrame:window_frame] autorelease];
        [window setContentView:cv];
        [cv setAutoresizesSubviews:YES];
        // add subview so it fits within the contentview frame
        {
            NSView* v = view0;
            [v setHidden:YES];
            [v setAutoresizesSubviews:NO];
            [cv addSubview:v];
            [v setFrameOrigin:NSZeroPoint];
            [window setFrame:[v frame] display:NO];
            [v setAutoresizesSubviews:YES];
        }
        // add subview so it fits within the contentview frame
        {
            NSView* v = view1;
            [v setHidden:YES];
            [v setAutoresizesSubviews:NO];
            [cv addSubview:v];
            [v setFrameOrigin:NSZeroPoint];
            [window setFrame:[v frame] display:NO];
            [v setAutoresizesSubviews:YES];
        }
        // restore original window frame
        [window setFrame:window_frame display:YES];
        [view0 setHidden:NO];
        [view1 setHidden:YES];
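    Editor's note: a minimal sketch of one common Cocoa approach, assuming the autoresizing springs are not already configured in IB. Instead of juggling the window frame, size each view to the content view's bounds once and give it a flexible autoresizing mask, which keeps it matched from then on:

        // Sketch: size each subview to the content view and let the mask
        // track future resizes. Uses retain/release style, matching the
        // question's non-ARC code.
        NSView *cv = [window contentView];
        NSArray *views = [NSArray arrayWithObjects:view0, view1, nil];
        for (NSView *v in views) {
            [v setFrame:[cv bounds]];   // match the content view right now
            [v setAutoresizingMask:(NSViewWidthSizable | NSViewHeightSizable)];
            [cv addSubview:v];          // the mask keeps it matched later
        }
        [view0 setHidden:NO];
        [view1 setHidden:YES];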

    Read the article

  • Reusing Xalan transformer causes its extension functions to break

    - by Leslie Norman
    I am using Xalan 2.7.1 to validate my XML docs with an XSLT style sheet. It works fine for the first document and returns error messages in case of error, along with the correct line and column number of the XML source, by making use of the NodeInfo.lineNumber and NodeInfo.columnNumber extensions. The problem is when I try to reuse the transformer to validate other XML docs: it successfully transforms the document but always returns lineNumber=columnNumber=-1 for all errors. Any idea? Here is my code:

        TransformerFactory tFactory = TransformerFactory.newInstance(
                "org.apache.xalan.processor.TransformerFactoryImpl", null);
        tFactory.setAttribute(TransformerFactoryImpl.FEATURE_SOURCE_LOCATION, Boolean.TRUE);
        StreamSource xsltStreamSource = new StreamSource(new File("E:\\Temp\\Test\\myXslt.xsl"));
        Transformer transformer = null;
        try {
            transformer = tFactory.newTransformer(xsltStreamSource);
            ByteArrayOutputStream outStream = new ByteArrayOutputStream();
            File srcFolder = new File("E:\\Temp\\Test");
            for (File file : srcFolder.listFiles()) {
                if (file.getName().endsWith("xml")) {
                    transformer.transform(new StreamSource(file), new StreamResult(outStream));
                    transformer.reset();
                }
            }
            System.out.println(outStream.toString());
        } catch (TransformerException e) {
            e.printStackTrace();
        }

    Edit: new code after implementing @rsp's suggestions:

        package mycompany;

        import java.io.File;
        import javax.xml.transform.ErrorListener;
        import javax.xml.transform.Source;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerException;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;
        import org.apache.xalan.processor.TransformerFactoryImpl;

        public class XsltTransformer {
            public static void main(String[] args) {
                TransformerFactory tFactory = TransformerFactory.newInstance(
                        "org.apache.xalan.processor.TransformerFactoryImpl", null);
                tFactory.setAttribute(TransformerFactoryImpl.FEATURE_SOURCE_LOCATION, Boolean.TRUE);
                StreamSource xsltStreamSource = new StreamSource(new File("E:\\Temp\\Test\\myXslt.xsl"));
                try {
                    Transformer transformer = tFactory.newTransformer(xsltStreamSource);
                    File srcFolder = new File("E:\\Temp\\Test");
                    for (File file : srcFolder.listFiles()) {
                        if (file.getName().endsWith("xml")) {
                            Source source = new StreamSource(file);
                            StreamResult result = new StreamResult(System.out);
                            XsltTransformer xsltTransformer = new XsltTransformer();
                            ErrorListenerImpl errorHandler = xsltTransformer.new ErrorListenerImpl();
                            transformer.setErrorListener(errorHandler);
                            transformer.transform(source, result);
                            if (errorHandler.e != null) {
                                System.out.println("Transformation Exception: " + errorHandler.e.getMessage());
                            }
                            transformer.reset();
                        }
                    }
                } catch (TransformerException e) {
                    e.printStackTrace();
                }
            }

            private class ErrorListenerImpl implements ErrorListener {
                public TransformerException e = null;
                public void error(TransformerException exception) { this.e = exception; }
                public void fatalError(TransformerException exception) { this.e = exception; }
                public void warning(TransformerException exception) { this.e = exception; }
            }
        }
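    Editor's note: one pattern worth trying (a sketch against the standard JAXP API, not verified against Xalan 2.7.1 internals) is to compile the stylesheet once into a Templates object and pull a fresh Transformer per input document instead of calling reset(), so per-run state such as source locators starts clean every time:

        import java.io.File;
        import javax.xml.transform.*;
        import javax.xml.transform.stream.*;

        public class FreshTransformerPerDoc {
            public static void main(String[] args) throws TransformerException {
                TransformerFactory tFactory = TransformerFactory.newInstance(
                        "org.apache.xalan.processor.TransformerFactoryImpl", null);
                tFactory.setAttribute(
                        org.apache.xalan.processor.TransformerFactoryImpl.FEATURE_SOURCE_LOCATION,
                        Boolean.TRUE);
                // Compile once; Templates is thread-safe and cheap to reuse.
                Templates templates = tFactory.newTemplates(
                        new StreamSource(new File("E:\\Temp\\Test\\myXslt.xsl")));
                for (File file : new File("E:\\Temp\\Test").listFiles()) {
                    if (file.getName().endsWith("xml")) {
                        // Fresh Transformer per document: no leftover locator state.
                        Transformer t = templates.newTransformer();
                        t.transform(new StreamSource(file), new StreamResult(System.out));
                    }
                }
            }
        }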

    Read the article

  • When does IE7 recompute styles? Doesn't work reliably when a class is added to the body.

    - by Kid A
    I have an interesting problem here. I'm using a class on the <body> element as a switch to drive a fair amount of layout behavior on my site. If the class is applied, certain things happen, and if the class isn't applied, they don't happen. The relevant CSS is roughly like this:

        .rightSide { display:none; }
        .showCommentsRight .rightSide { display:block; width:50%; }
        .showCommentsRight .leftSide { display:block; width:50%; }

    And the HTML:

        <body class="showCommentsRight">
          <div class="container">
            <div class="leftSide"></div>
            <div class="rightSide"></div>
          </div>
          <div class="container">
            <div class="leftSide"></div>
            <div class="rightSide"></div>
          </div>
          <div class="container">
            <div class="leftSide"></div>
            <div class="rightSide"></div>
          </div>
        </body>

    I've simplified things, but this is essentially the method. The whole page changes layout (hiding the right side in three different areas) when the flag is set on the body. This works in Firefox and IE8. It does not work in IE8 in compatibility mode (i.e. the IE7 rendering engine). What is fascinating is that if you sit there and refresh the page, the results can vary: it will pick a different section's right side to show. Sometimes it will show only the top section's right side, sometimes the middle one. I have tried a validator (to look for malformed HTML), double-checked the CSS formatting, and made sure my IE7 hack sheet wasn't having an effect. So my questions are:
    * Is there a way that this behavior can be made reliable?
    * When does IE7 decide to re-do styling?
    Thanks everyone.
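    Editor's note: a commonly cited workaround is to force IE7 into a fresh layout pass after the body class changes. A sketch (the forceReflow helper is hypothetical; the class name is the question's own):

        // Hypothetical helper: nudge IE7 into recomputing styles after
        // the body class changes. Re-assigning className re-runs selector
        // matching in some IE7 builds; reading offsetHeight forces a
        // synchronous layout pass.
        function forceReflow() {
            var body = document.body;
            body.className = body.className; // reassign to retrigger matching
            var ignored = body.offsetHeight; // force a layout
        }
        document.body.className = 'showCommentsRight';
        forceReflow();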

    Read the article

  • Execute SQL SP in Excel VBA

    - by TheOCD
    Hi, I am having a problem getting all the columns back when I execute the following code in Excel VBA. I only get 6 out of 23 columns back. Connection, command etc. work fine (I can see the exec command in the SQL Profiler), and data headers are created for all 23 columns, but I only get data for 6 columns. Side note: it's not prod-level code (I have left out error handling on purpose); the sp works fine in SQL Management Studio, ASP.NET, and a C# WinForms app; this is Excel 2003 connecting to SQL 2008. Can someone help me troubleshoot it?

        Dim connection As ADODB.connection
        Dim recordset As ADODB.recordset
        Dim command As ADODB.command
        Dim strProcName As String 'Stored Procedure name
        Dim strConn As String ' connection string.
        Dim selectedVal As String

        'Set ADODB requirements
        Set connection = New ADODB.connection
        Set recordset = New ADODB.recordset
        Set command = New ADODB.command

        If Workbooks("Book2.xls").MultiUserEditing = True Then
            MsgBox "You do not have Exclusive access to the workbook at this time." & _
                vbNewLine & "Please have all other users close the workbook and then try again.", _
                vbOKOnly + vbExclamation
            Exit Sub
        Else
            On Error Resume Next
            ActiveWorkbook.ExclusiveAccess
            'On Error GoTo No_Bugs
        End If

        'set the active sheet
        Set oSht = Workbooks("Book2.xls").Sheets(1)

        'get the connection string, if empty just exit
        strConn = ConnectionString()
        If strConn = "" Then
            Exit Sub
        End If

        ' selected value, if <NOTHING> just exit
        selectedVal = selectedValue()
        If selectedVal = "<NOTHING>" Then
            Exit Sub
        End If

        If Not oSht Is Nothing Then
            'Open database connection
            connection.ConnectionString = strConn
            connection.Open

            ' set command stuff.
            command.ActiveConnection = connection
            command.CommandText = "GetAlbumByName"
            command.CommandType = adCmdStoredProc
            command.Parameters.Refresh
            command.Parameters(1).Value = selectedVal

            'Execute stored procedure and return to a recordset
            Set recordset = command.Execute()
            If recordset.BOF = False And recordset.EOF = False Then
                Sheets("Sheet2").[A1].CopyFromRecordset recordset
                ' Create headers and copy data
                With Sheets("Sheet2")
                    For Column = 0 To recordset.Fields.Count - 1
                        .Cells(1, Column + 1).Value = recordset.Fields(Column).Name
                    Next
                    .Range(.Cells(1, 1), .Cells(1, recordset.Fields.Count)).Font.Bold = True
                    .Cells(2, 1).CopyFromRecordset recordset
                End With
            Else
                MsgBox "b4 BOF or after EOF.", vbOKOnly + vbExclamation
            End If

            'Close database connection and clean up
            If CBool(recordset.State And adStateOpen) = True Then recordset.Close
            Set recordset = Nothing
            If CBool(connection.State And adStateOpen) = True Then connection.Close
            Set connection = Nothing
        Else
            MsgBox "oSheet2 is Nothing.", vbOKOnly + vbExclamation
        End If
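    Editor's note: two things stand out, sketched below under the assumption that the stored procedure does not issue SET NOCOUNT ON. First, the recordset is copied twice; the first CopyFromRecordset at [A1] consumes the rows, so the second copy at row 2 gets whatever is left. Second, a procedure that emits row-count messages can surface closed informational recordsets ahead of the real data, which NextRecordset can skip:

        ' Sketch: skip any closed informational recordsets (e.g. from a
        ' proc without SET NOCOUNT ON), then copy the data exactly once.
        Set recordset = command.Execute()
        Do While Not recordset Is Nothing
            If recordset.State = adStateOpen Then Exit Do ' found the real rows
            Set recordset = recordset.NextRecordset
        Loop
        With Sheets("Sheet2")
            For Column = 0 To recordset.Fields.Count - 1
                .Cells(1, Column + 1).Value = recordset.Fields(Column).Name
            Next
            .Cells(2, 1).CopyFromRecordset recordset ' single copy, after headers
        End With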

    Read the article

  • XSLT: test if a variable value is contained in a node set

    - by Aamir
    I have the following two files:

        <?xml version="1.0" encoding="utf-8" ?>
        <!-- D E F A U L T   H O S P I T A L   P O L I C Y -->
        <xas DefaultPolicy="open" DefaultSubjectsFile="subjects.xss">
            <rule id="R1" access="deny" object="record" subject="roles/*[name()!='Staff']"/>
            <rule id="R2" access="deny" object="diagnosis" subject="roles//Nurse"/>
            <rule id="R3" access="grant" object="record[@id=$user]" subject="roles/member[@id=$user]"/>
        </xas>

    and the other XML file, called subjects.xss:

        <?xml version="1.0" encoding="utf-8" ?>
        <subjects>
            <users>
                <member id="dupont" password="4A-4E-E9-17-5D-CE-2C-DD-43-43-1D-F1-3F-5D-94-71">
                    <name>Pierre Dupont</name>
                </member>
                <member id="durand" password="3A-B6-1B-E8-C0-1F-CD-34-DF-C4-5E-BA-02-3C-04-61">
                    <name>Jacqueline Durand</name>
                </member>
            </users>
            <roles>
                <Staff>
                    <Doctor>
                        <member idref="dupont"/>
                    </Doctor>
                    <Nurse>
                        <member idref="durand"/>
                    </Nurse>
                </Staff>
            </roles>
        </subjects>

    I am writing an XSL sheet which will read the subject value for each rule in policy.xas, and if the currently logged-in user (accessible as the variable "user" in the stylesheet) is contained in that subject value (say roles//Nurse), then do something. I have not been able to test whether the currently logged-in user ($user, which is equal to, say, "durand") is contained in roles//Nurse in the subjects file (which is a different XML file). Hope that clarifies my question. Any ideas? Thanks in advance.
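    Editor's note: a sketch of one way to do the cross-document test with XSLT 1.0's document() function. The path below hard-codes roles//Nurse; evaluating each rule's subject attribute as a dynamic XPath string would need an extension such as EXSLT dyn:evaluate, which not every processor supports:

        <!-- Load the subjects file once; paths follow the question's
             subjects.xss structure. -->
        <xsl:variable name="subjects" select="document('subjects.xss')"/>
        <xsl:template match="rule">
            <xsl:if test="$subjects/subjects/roles//Nurse/member[@idref = $user]">
                <!-- $user is covered by roles//Nurse: do something -->
            </xsl:if>
        </xsl:template>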

    Read the article

  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities, so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach.

    I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, hyper-threaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. Getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Giving the virtual machine 4 GB of memory, it was slow; I could type complete sentences and watch the VM "catch up".

    My company has training laptops that we provide for our classes. They are Dell Precision M6400s (Intel Core 2 Duo P8700 running at 2.54GHz with 8 GB of memory), and the experience on these laptops is night and day compared to the E6510. They are crisp, and you're barely aware that you are running in a virtual environment. Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Benchmarks of the two GPUs (http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html) suggest that the FX 2700M is twice as fast as the 3100M:

        3100M = 111th (E6510)
        FX 2700M = 47th (Precision M6400)
        Radeon HD 5870 = 8th (Alienware)

    The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • Using NSXMLParser to parse Vimeo XML on iPhone.

    - by Sonny Fazio
    Hello, I'm working on an iPhone app that will use the Vimeo Simple API to give me a listing of videos by a certain user, in a convenient TableView format. I'm new to parsing XML and have tried TouchXML, TinyXML, and now NSXMLParser, with no luck. Most tutorials on parsing XML are for a blog, not for an API XML sheet. I've tried modifying the blog parsers to search for the specific tags, but it doesn't seem to work. Right now I'm working with NSXMLParser, and it seems to correctly find the value of an XML tag, but when it goes to append it to an NSMutableString, it writes a whole bunch of nulls in between (screenshot: http://grab.by/grabs/92d9cfc2df4fac3fe6579493b1a8e89f.png). I'm using a tutorial from theappleblog and modifying it to work with the Vimeo API:

        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if ([currentElement isEqualToString:@"video_title"]) {
                NSLog(@"String: %@", string);
                [currentTitle appendString:string];
            } else if ([currentElement isEqualToString:@"video_url"]) {
                [currentLink appendString:string];
            } else if ([currentElement isEqualToString:@"video_description"]) {
                [currentSummary appendString:string];
            } else if ([currentElement isEqualToString:@"date"]) {
                [currentDate appendString:string];
            }
        }

    Then, when it finishes, it has to add the NSMutableStrings into an NSMutableDictionary:

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
                namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName
                attributes:(NSDictionary *)attributeDict {
            //NSLog(@"found this element: %@", elementName);
            currentElement = [elementName copy];
            if ([elementName isEqualToString:@"item"]) {
                // clear out our story item caches...
                item = [[NSMutableDictionary alloc] init];
                currentTitle = [[NSMutableString alloc] init];
                currentDate = [[NSMutableString alloc] init];
                currentSummary = [[NSMutableString alloc] init];
                currentLink = [[NSMutableString alloc] init];
            }
        }

        - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName
                namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName {
            NSLog(@"Title: %@", currentTitle);
            if ([elementName isEqualToString:@"item"]) {
                // save values to an item, then store that item into the array...
                [item setObject:currentTitle forKey:@"video_title"];
                NSLog(@"Current Title %@", currentTitle);
                [item setObject:currentLink forKey:@"video_url"];
                [item setObject:currentSummary forKey:@"video_description"];
                [item setObject:currentDate forKey:@"date"];
                [stories addObject:[item copy]];
                NSLog(@"adding story: %@", currentTitle);
            }
        }

    I would really appreciate it if someone has any advice.
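    Editor's note: a sketch of one common fix, assuming the mutable strings are only created in didStartElement: for "item". Characters reported outside an <item> (or pure whitespace between tags) should not be appended at all, which is a frequent source of stray output in this tutorial pattern:

        // Sketch: only append when inside an <item>, and skip stray
        // whitespace between tags.
        - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if (currentTitle == nil) return; // not inside an <item> yet
            NSString *trimmed = [string stringByTrimmingCharactersInSet:
                [NSCharacterSet whitespaceAndNewlineCharacterSet]];
            if ([trimmed length] == 0) return;
            if ([currentElement isEqualToString:@"video_title"]) {
                [currentTitle appendString:trimmed];
            } else if ([currentElement isEqualToString:@"video_url"]) {
                [currentLink appendString:trimmed];
            } else if ([currentElement isEqualToString:@"video_description"]) {
                [currentSummary appendString:trimmed];
            } else if ([currentElement isEqualToString:@"date"]) {
                [currentDate appendString:trimmed];
            }
        }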

    Read the article

  • Merge Multiple Excel Workbooks

    - by IRHM
    I wonder whether someone may be able to help me, please. I'm trying to use the code below to allow the user to select multiple Excel workbooks, amalgamating the data into one 'Summary' sheet.

        Sub Merge()
            Dim DestWB As Workbook, WB As Workbook, WS As Worksheet, SourceSheet As String
            Set DestWB = ActiveWorkbook
            SourceSheet = "Input"
            startrow = 7
            FileNames = Application.GetOpenFilename( _
                filefilter:="Excel Files (*.xls*),*.xls*", _
                Title:="Select the workbooks to merge.", MultiSelect:=True)
            If IsArray(FileNames) = False Then
                If FileNames = False Then
                    Exit Sub
                End If
            End If
            For n = LBound(FileNames) To UBound(FileNames)
                Set WB = Workbooks.Open(Filename:=FileNames(n), ReadOnly:=True)
                For Each WS In WB.Worksheets
                    If WS.Name = SourceSheet Then
                        With WS
                            If .UsedRange.Cells.Count > 1 Then
                                dr = DestWB.Worksheets("Input").Range("C" & Rows.Count).End(xlUp).Row + 1
                                lastrow = .Range("C" & Rows.Count).End(xlUp).Row
                                For j = lastrow To startrow Step -1
                                    Select Case .Range("E" & j).Value
                                        Case "Manager", "Lead", "Technical", "Analyst"
                                            'do nothing
                                        Case Else
                                            .Rows(j).EntireRow.Delete
                                    End Select
                                Next
                                lastrow = .Range("C" & Rows.Count).End(xlUp).Row
                                If lastrow >= startrow Then
                                    .Range("B" & startrow & ":AD" & lastrow).Copy
                                    DestWB.Worksheets("Input").Cells(dr, "B").PasteSpecial xlValues
                                    .Range("AF" & startrow & ":AQ" & lastrow).Copy
                                    DestWB.Worksheets("Input").Cells(dr, "AF").PasteSpecial xlValues
                                    .Range("AS" & startrow & ":AS" & lastrow).Copy
                                    DestWB.Worksheets("Input").Cells(dr, "AS").PasteSpecial xlValues
                                End If
                            End If
                        End With
                        Exit For
                    End If
                Next WS
                WB.Close savechanges:=False
            Next n
        End Sub

    The code works fine except for one issue which I've been trying to solve for the last few weeks. The following line of code looks in column E of the source file, and if any of the entries match the values shown in the code, it copies that row of data to paste into the destination file:

        If Range("E" & j) <> "Manager" And Range("E" & j) <> "Lead" And Range("E" & j) <> "Technical" And Range("E" & j) <> "Analyst" Then Rows(j).Delete

    The problem I have is that if none of these values are found in the source file, I receive the following error:

        Run time error '1004': Delete method of Range class failed

    and in debug mode it highlights this part of the line as the source of the error, but I've no idea why:

        Rows(j).Delete

    I just wondered whether someone may be able to look at this please and let me know where I'm going wrong, or perhaps even suggest a more efficient process of allowing the user to merge the workbooks. Many thanks and kind regards
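    Editor's note: a sketch of one likely culprit. In the quoted one-line version, the unqualified Range("E" & j) and Rows(j) bind to the *active* sheet, not WS, so j can point at a row that is out of range there; qualifying every Range/Rows call with the worksheet removes that mismatch:

        ' Sketch: qualify the sheet on every reference so the loop and
        ' the delete always target the same worksheet.
        For j = lastrow To startrow Step -1
            Select Case WS.Range("E" & j).Value
                Case "Manager", "Lead", "Technical", "Analyst"
                    ' keep the row
                Case Else
                    WS.Rows(j).EntireRow.Delete
            End Select
        Next j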

    Read the article

  • VBA Excel - Workbook_SheetChange

    - by user2947014
    Hopefully this question hasn't already been asked; I tried searching for an answer and couldn't find anything. This is probably a simple question, but I am writing my first macro in Excel and am having a problem that I can't find a solution to. I wrote a couple of macros that basically sum up columns dynamically (so that the number of rows can change and the formula moves down automatically) based on a value in another column of the same row, and I call those macros from the Workbook_SheetChange event. The problem I'm having is: I change a cell's value from my macro to display the result of the sum, and this then calls Workbook_SheetChange again, which I do not want. Right now it works, but I can trace it and see that Workbook_SheetChange is being called multiple times. This is preventing me from adding other cell changes to the macros, because then it results in an infinite loop. I want the macros to run every time a change is made to the sheet, but I don't see any way around allowing the macros to change a cell's value, so I don't know what to do. I will paste my code below, in case it is helpful.

        Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
            Dim Row As Long
            Dim Col As Long
            Row = Target.Row
            Col = Target.Column
            If Col <> 7 Then
                Range("G" & Row).Select
                Selection.Formula = "=IF(F" & Row & "=""Win"",E" & Row & ",IF(F" & Row & "=""Loss"",-D" & Row & ",0))"
                Target.Select
            End If
            Call SumRiskColumn
        End Sub

        Private Sub Workbook_SheetCalculate(ByVal Sh As Object)
            Call SumOutcomeColumn
        End Sub

        Sub SumOutcomeColumn()
            Dim N As Long
            N = Cells(Rows.Count, "A").End(xlUp).Row
            Cells(N + 1, "G").Formula = "=SUM(G2:G" & N & ")"
        End Sub

        Sub SumRiskColumn()
            Dim N As Long
            N = Cells(Rows.Count, "A").End(xlUp).Row
            Dim CurrTotalRisk As Long
            CurrTotalRisk = 0
            For i = 2 To N
                If IsEmpty(ActiveSheet.Cells(i, 6)) And Not IsEmpty(ActiveSheet.Cells(i, 1)) _
                        And Not IsEmpty(ActiveSheet.Cells(i, 2)) And Not IsEmpty(ActiveSheet.Cells(i, 3)) Then
                    CurrTotalRisk = CurrTotalRisk + ActiveSheet.Cells(i, 4).Value
                End If
            Next i
            Cells(N + 1, "D").Value = CurrTotalRisk
        End Sub

    Thank you for any help you can give me! I really appreciate it.
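    Editor's note: the usual pattern for this is to disable events around your own writes so the handler never re-enters itself. A sketch based on the question's own handler:

        Private Sub Workbook_SheetChange(ByVal Sh As Object, ByVal Target As Range)
            If Target.Column = 7 Then Exit Sub ' ignore our own writes to G
            Application.EnableEvents = False   ' suppress re-entrant events
            On Error GoTo Cleanup              ' guarantee events come back on
            Range("G" & Target.Row).Formula = _
                "=IF(F" & Target.Row & "=""Win"",E" & Target.Row & _
                ",IF(F" & Target.Row & "=""Loss"",-D" & Target.Row & ",0))"
            Call SumRiskColumn
        Cleanup:
            Application.EnableEvents = True
        End Sub

    Writing the formula directly (no Select/Selection) also avoids moving the user's cursor.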

    Read the article

  • Changing an Action Type="click" to an auto action in SVG

    - by Dustin Myers
    I have an SVG document that I exported from Visio 2003 and that I would like to edit. This file has an action where, when you click a button, it navigates to a new screen. What I would like to do is have the navigation be based on a data point rather than having to click the button. As an example, I have dynamic point data being brought into the SVG file, and when that value changes from 0 to 1, I want the screen to automatically navigate to another screen. Below is the code I have for clicking the button:

        <title content="structured text">Sheet.1107</title>
        <desc content="structured text">Button 1</desc>
        <v:custProps>
            <v:cp v:ask="false" v:langID="1033" v:invis="false" v:cal="0" v:val="VT4(Test Graphic 2)" v:type="0" v:prompt="" v:nameU="ObjRef" v:sortKey="" v:lbl="" v:format=""/>
            <v:cp v:ask="false" v:langID="1033" v:invis="false" v:cal="0" v:val="VT4(SF-S)" v:type="0" v:prompt="" v:nameU="DataPt" v:sortKey="" v:lbl="" v:format=""/>
        </v:custProps>
        <v:userDefs>
            <v:ud v:prompt="" v:nameU="NAVIGATE" v:val="VT4(NAVIGATE Test Graphic 2,&apos;&apos;,)"/>
        </v:userDefs>
        <v:textBlock v:margins="rect(4,4,4,4)"/>
        <v:textRect width="125.01" height="35" cx="62.5" cy="517.5"/>
        <rect x="0" width="125" y="500" height="35" class="st3"/>
        <text x="40.15" y="521.1" v:langID="1033" class="st4"><v:paragraph v:horizAlign="1"/><v:tabList/>Button 1</text>
        <jci:action type="click" count="1">if(evt.button == 0) nav(&apos;Test Graphic 2&apos;,&apos;&apos;,&apos;&apos;);</jci:action></g></a>

    In the above code I am bringing in the data value from SF-S. Once that value is equal to 1, I want this screen to automatically navigate to Test Graphic 2. I don't have any experience with coding, so I am hoping someone here will be able to help me. Thanks, DMyers
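    Editor's note: a hypothetical sketch only, since the jci: actions are handled by a vendor-specific viewer whose data API isn't shown here. The idea is to poll the SF-S value and fire the same nav() call the click action already uses; getDataPoint() is a stand-in for whatever accessor your SVG runtime actually provides for the v:cp DataPt value:

        <script type="text/ecmascript"><![CDATA[
            // Hypothetical: check the SF-S data point once a second and
            // navigate when it becomes 1. getDataPoint() is a placeholder
            // for your runtime's real accessor; nav() is from the source.
            function checkAndNavigate() {
                if (getDataPoint('SF-S') == 1) {
                    nav('Test Graphic 2', '', '');
                }
            }
            setInterval(checkAndNavigate, 1000);
        ]]></script>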

    Read the article

  • Calculate total time between Dates in Hours and Minutes

    - by matthew parkes
    Hi, I'm trying to resolve a problem using VB and I need some assistance; I'm very new to the language (1 week). I have created a user form to show how many hours and minutes have elapsed between two different times, similar to a time sheet. The user form consists of two calendars; under each calendar there are two text boxes, one each to record the hour and minute they left, and two further boxes to record the time they arrived back.

    I subtract the calendars (calendar in - calendar out), then multiply by 24 to get the hours away. Under the "calendar out" I have a text box for the user to type in the hour they left. Then I subtract the hour out from the 24-hour figure, e.g. if it was 24 - 15 it shows 9 (9 hours of that day), and I add that to the figure entered in the "hour in" (return time) text box, e.g. 14. Adding them together, e.g. 9 + 14 = 23, the result is displayed in another "total hours" text box, so it would display 23, meaning 23 hours.

    I then have another two text boxes to indicate minutes: one for minutes out and one for minutes in. The problem is converting these minutes. For instance, if the out time is 15:50 and the in time the next day is at 15:55, it displays 24 (in one text box) and 105 minutes (in the other text box). I would like the minutes added to the hour, with the balance of the remaining minutes left in the minute text box; that should display 24 (in one text box) and 5 (in another text box). The ultimate aim is a result that shows a person was absent for a number of days, hours and minutes, e.g. 2 days, 5 hours and 10 minutes. Any ideas on how I can modify my code to achieve this? Here's my code. Please help.

        Dim number1 As Date
        Dim number2 As Date
        Dim number3 As Integer
        Dim number4 As Integer
        Dim Number5 As Integer
        Dim Number6 As Integer
        Dim answer As Integer
        Dim answer2 As Integer
        Dim answer3 As Integer
        Dim answer4 As Integer
        Dim answer5 As String

        number1 = DTPicker1
        number2 = DTPicker2
        number3 = Txthourout
        number4 = TxtHourin
        Number5 = TxtMinuteout
        Number6 = TxtMinuetIn
        answer = number2 - number1
        answer2 = answer * 24
        answer3 = answer2 - number3
        answer4 = answer3 + number4
        answer5 = Number5 + Number6
        TextBox1.Text = answer4
        TextBox2.Text = answer5
        End Sub
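    Editor's note: a sketch using DateDiff, assuming DTPicker1/DTPicker2 hold date-only values and the text boxes hold whole numbers (the control names are the question's own). Building full date-times and diffing once in minutes avoids the manual hour/minute bookkeeping entirely:

        ' Sketch: combine picker date + hour/minute boxes, then let
        ' DateDiff do all the arithmetic in minutes.
        Dim outTime As Date, inTime As Date, totalMinutes As Long
        outTime = DateAdd("n", CLng(TxtMinuteout.Text), _
                  DateAdd("h", CLng(Txthourout.Text), CDate(DTPicker1.Value)))
        inTime = DateAdd("n", CLng(TxtMinuetIn.Text), _
                 DateAdd("h", CLng(TxtHourin.Text), CDate(DTPicker2.Value)))
        totalMinutes = DateDiff("n", outTime, inTime)
        TextBox1.Text = totalMinutes \ 60    ' whole hours away (24 in the example)
        TextBox2.Text = totalMinutes Mod 60  ' remaining minutes (5 in the example)
        ' For days/hours/minutes instead:
        ' days = totalMinutes \ 1440 : hours = (totalMinutes Mod 1440) \ 60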

    Read the article

  • SQLite/iPhone read copyright symbol

    - by Marco A
    Hi all, I am having problems reading the copyright symbol from a SQLite DB that I have for the app I am developing. I import the information manually, i.e. from an Excel sheet. I have tried two ways of doing it and failed with both:
    1) Tried replacing the copyright symbol with "\u00ae" (a Unicode escape) within Excel and then importing the modified file. Result: I get the literal characters \u00ae as part of the string; the escape is not interpreted as Unicode.
    2) Tried leaving it as it is and importing the Excel sheet with the copyright symbol. Result: I get a symbol that is different from the copyright, something like an A and E put together. It looks like this: Æ

    Here's how I read from the DB:

        -(void) readCategoriesFromDatabase:(NSString *) rest_input {
            // Init the products Array
            categories = [[NSMutableArray alloc] init];
            // Open the database from the user's filesystem
            rest_input = [rest_input stringByAppendingString:@"'"];
            NSString *newString;
            newString = [@"select distinct category from food where restaurant='" stringByAppendingString:rest_input];
            const char *cat_sqlStatement = [newString UTF8String];
            sqlite3_stmt *cat_compiledStatement;
            if(sqlite3_prepare_v2(database, cat_sqlStatement, -1, &cat_compiledStatement, NULL) == SQLITE_OK) {
                // Loop through the results and add them to the feeds array
                while(sqlite3_step(cat_compiledStatement) == SQLITE_ROW) {
                    NSString *catName = [NSString stringWithUTF8String:(char *)sqlite3_column_text(cat_compiledStatement, 0)];
                    // Create a new product object with the data from the database
                    Product *category = [[Product alloc] initWithName:catName];
                    // Add the product object to the respective Array
                    [categories addObject:category];
                    [category release];
                }
                sqlite3_finalize(cat_compiledStatement);
            }
            NSLog(@"Finished Accessing Database to gather Categories....");
        }

    I open the DB with this function:

        -(void) checkAndCreateDatabase {
            NSLog(@"Checking/Creating Database....");
            NSFileManager *fileManager = [NSFileManager defaultManager];
            success = [fileManager fileExistsAtPath:databasePath];
            [fileManager removeFileAtPath:databasePath handler:nil];
            NSString *databasePathFromApp = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:databaseName];
            [fileManager copyItemAtPath:databasePathFromApp toPath:databasePath error:nil];
            [fileManager release];
            if (sqlite3_open([databasePath UTF8String], &database) != SQLITE_OK) {
                sqlite3_close(database);
                database = nil;
            }
            NSLog(@"Finished Checking/Creating Database....");
        }

    Thanks to anyone who can help me out.
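    Editor's note: the mangled symbol usually means an encoding mismatch between how the text was imported (often Latin-1 from Excel/CSV) and how it is read (UTF-8). A sketch that guards against NULL and falls back to Latin-1; as an aside, the copyright sign is U+00A9, while \u00ae is the registered-trademark sign:

        // Sketch: try UTF-8 first; if the stored bytes are not valid
        // UTF-8, assume they were imported as Latin-1.
        const unsigned char *raw = sqlite3_column_text(cat_compiledStatement, 0);
        NSString *catName = nil;
        if (raw != NULL) {
            catName = [NSString stringWithUTF8String:(const char *)raw];
            if (catName == nil) { // bytes were not UTF-8
                catName = [NSString stringWithCString:(const char *)raw
                                             encoding:NSISOLatin1StringEncoding];
            }
        }

    The cleaner long-term fix is to make sure the spreadsheet export and the import tool both use UTF-8, so the database holds one consistent encoding.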

    Read the article

  • organizing external libraries and include files

    - by stijn
    Over the years my projects have used more and more external libraries, and the way I handle them has started feeling more and more awkward (although, it has to be said, it does work flawlessly). I use VS on Windows, CMake elsewhere, and Code Composer for targeting DSPs on Windows. Except for the DSPs, both 32-bit and 64-bit platforms are used. Here's a sample of what I am doing now; note that, as shown, the external libraries themselves are not always organized in the same way. Some have separate lib/include/src folders, others have a single src folder. Some came ready to use with static and/or shared libraries, others had to be built:

        /path/to/projects
            /projectA
            /projectB
        /path/to/apis
            /apiA
                /src
                /include
                /lib
            /apiB
                /include
                /i386/lib
                /amd64/lib
        /path/to/otherapis
            /apiC
                /src
        /path/to/sharedlibs
            /apiA_x86.lib        --> some libs were built in all possible configurations
            /apiA_x86d.lib
            /apiA_x64.lib
            /apiA_x64d.lib
            /apiA_static_x86.lib
            /apiB.lib            --> other libs have just one import library
        /path/to/dlls            --> most of this directory also gets distributed
            /apiA_x86.dll            to clients, and it's in the PATH
            /apiB.dll

    Each time I add an external library, I roughly use this process:
    1. build it, if needed, for the different configurations (release/debug/platform)
    2. copy its static and/or import libraries to 'sharedlibs'
    3. copy its shared libraries to 'dlls'
    4. add an environment variable, e.g. 'API_A_DIR', that points to the root for apiA, like '/path/to/apis/apiA'
    5. create a VS property sheet and a CMake file to state the include path and eventually the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib
    6. add the property sheet/CMake file to the project needing the library

    It's especially steps 4 and 5 that are bothering me. I am pretty sure I am not the only one facing this problem, and I would like to see how others deal with it. I was thinking of getting rid of the per-library environment variables and using just one 'API_INCLUDE_DIR', populating it with the include files in an organized way:

        /path/to/api/include
            /apiA
            /apiB
            /apiC

    This way I do not need the include path in the property sheets nor the environment variables. For libs that are only used on Windows I wouldn't even need a property sheet at all, as I can use #pragmas to instruct the linker which library to link to. Also, in the code it will be clearer what gets included, with no need for wrappers around include files that have the same name but come from different libraries:

        #include <apiA/header.h>
        #include <apiB/header.h>
        #include <apiC_version1/header.h>

    The drawback is of course that I have to copy include files, and possibly introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it? (Actually, once the libraries are built, the only things I need from them are the include files and the libs. Since each of those would have a dedicated directory, the original source tree is no longer needed and can be deleted.)
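    Editor's note: a minimal sketch of the single-include-root idea in CMake (paths and target names are illustrative; target_link_directories needs CMake 3.13 or newer). One cached root for headers and one for libs replaces the per-library environment variables:

        # Sketch: one shared include root; per-library subfolders keep
        # #include <apiA/header.h> unambiguous across libraries.
        set(API_INCLUDE_DIR "/path/to/api/include" CACHE PATH "external headers")
        set(API_LIB_DIR     "/path/to/sharedlibs"  CACHE PATH "external libs")

        add_executable(projectA main.cpp)
        target_include_directories(projectA PRIVATE ${API_INCLUDE_DIR})
        target_link_directories(projectA PRIVATE ${API_LIB_DIR})
        target_link_libraries(projectA apiA_x86 apiB)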

    Read the article

  • Type Mismatch using VBScript to create Pivot Table/Chart

    - by Rodricks
    I get "Run-time error: Type mismatch" for the following code:

        Dim Field
        Field = "Gen8"

        '==================================================================
        ' EXCEL sheet: Errors - stacked chart by year and week -- ALL WEEKS
        '==================================================================
        objExcel.ActiveWorkbook.Worksheets.Add
        SheetNumber = SheetNumber ' Add inserts in front, so SheetNumber stays 1
        objExcel.Sheets(SheetNumber).Select
        objExcel.Sheets(SheetNumber).Activate
        objExcel.Sheets(SheetNumber).Name = "YRWk"
        SheetName = "SYS_Product_YRWeeks"

        strSQLCustomers = "select isnull(AB.Week,D.Week_Num) AS YRWk,ISNULL(AB.UnCorrectable,0) as UE," & _
            "isnull(AB.Correctable,0) as CE, isnull(AB.SYS_Product,'" & Field & "'" & _
            ") as SYS_Product from AHS_Dates D Left Join (select * from P_tot where " & _
            "SYS_Product = '" & Field & "'" & _
            " ) AB on AB.Year_=D.Year_ and AB.Week=D.Week_Num order by YRWk"

        FetchData2.Open strSQLCustomers, openConnection, adOpenStatic, adLockReadOnly
        If FetchData2.RecordCount > 0 Then
            ' *** The following call is where the Type mismatch is raised ***
            objExcel.ActiveWorkbook.Connections.Add SheetName, "", _
                Array(Array( _
                    "ODBC;DRIVER=SQL Server Native Client 10.0;SERVER=" & sServerIP & ";TimeOut=5000000; Trusted_Connection=Yes;Integrated Security=SSPI;" _
                ), Array("DATABASE=" & sDataBaseName & ";")), Array(strSQLCustomers), 2
            objExcel.ActiveWorkbook.PivotCaches.Create(SourceType:=xlExternal, SourceData:= _
                objExcel.ActiveWorkbook.Connections(SheetName), Version:= _
                xlPivotTableVersion14).CreatePivotTable TableDestination:=objExcel.Sheets(SheetNumber).Name & "!R3C7", _
                TableName:="PivotTable" & SheetNumber, DefaultVersion:=xlPivotTableVersion14
            Set ws = objExcel.ActiveWorkbook.Worksheets(objExcel.Sheets(SheetNumber).Name)
            objExcel.Cells(3, 7).Select
            ws.Shapes.AddChart.Select
            objExcel.ActiveWorkbook.ActiveChart.ChartType = xlAreaStacked
            objExcel.ActiveWorkbook.ActiveChart.SetSourceData Source:=ws.Range(objExcel.Sheets(SheetNumber).Name & "!$G$3:$I$20")
            With ws.PivotTables("PivotTable1").PivotFields("SYS_PRoduct")
                .Orientation = xlColumnField
                .Position = 1
            End With
            With ws.PivotTables("PivotTable1").PivotFields("YRWk")
                .Orientation = xlRowField
                .Position = 1
            End With
            ' With ws.PivotTables("PivotTable1").PivotFields("Year_")
            '     .Orientation = xlRowField
            '     .Position = 2
            ' End With
            objExcel.ActiveWorkbook.ActiveChart.ChartTitle.Text = "Errors by Week and Year - ALL WEEKS"
            ws.PivotTables("PivotTable1").AddDataField ws.PivotTables( _
                "PivotTable1").PivotFields("UE"), "Sum of UnCorrectable", xlSum
            ws.PivotTables("PivotTable1").AddDataField ws.PivotTables( _
                "PivotTable1").PivotFields("CE"), "Sum of Correctable", xlSum
        End If
        ''MsgBox (FetchData2.RecordCount)
        FetchData2.Close

    I have used the same pivot chart + table in other slides. The problem, I think, is the query length. My questions:
    1. Is there a better way for me to access the query results? I would appreciate the steps, if any.
    2. If I can make it a procedure, how do I modify the pivot chart/table creation?
    Thanks. The query results with all 52 weeks (first rows shown):

        Week  UE  CE  SYS_Product (or Field)
        1     0   0   Gen8
        2     0   0   Gen8
        3     0   0   Gen8
        4     0   0   Gen8
        5     0   0   Gen8
        6     0   0   Gen8
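    Editor's note: if this really runs as plain VBScript (not VBA hosted in Office), two things in the listing cannot work as written: VBScript supports neither named arguments (TableDestination:=...) nor Excel's enum constants (xlExternal, xlSum, ...), so constants must be declared and parameters passed positionally. A sketch that also sidesteps the fragile Connections.Add call by building the pivot cache from a worksheet range (sheet and range names are illustrative; the data would be dumped there first, e.g. with CopyFromRecordset):

        ' Sketch: declare the Excel constants VBScript doesn't know,
        ' then create cache and pivot table with positional arguments.
        Const xlDatabase = 1
        Const xlPivotTableVersion14 = 4
        Dim cache, pivot
        Set cache = objExcel.ActiveWorkbook.PivotCaches().Create( _
            xlDatabase, "DataSheet!R1C1:R53C4", xlPivotTableVersion14)
        Set pivot = cache.CreatePivotTable("YRWk!R3C7", _
            "PivotTable" & SheetNumber, True, xlPivotTableVersion14)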

    Read the article

  • Interesting articles and blogs on SPARC T4

    - by mv
    I have consolidated all the interesting information I could find on the SPARC T4 processor and its hardware cryptographic capabilities. Hope it's useful.

    1. Advantages of the SPARC T4 processor. The most important points in the T4 announcement are: "The SPARC T4 processor was designed from the ground up for high speed security and has a cryptographic stream processing unit (SPU) integrated directly into each processor core. These accelerators support 16 industry standard security ciphers and enable high speed encryption at rates 3 to 5 times that of competing processors. By integrating encryption capabilities directly inside the instruction pipeline, the SPARC T4 processor eliminates the performance and cost barriers typically associated with secure computing and makes it possible to deliver high security levels without impacting the user experience." The data sheet has more details: "New on-chip Encryption Instruction Accelerators with direct non-privileged support for 16 industry-standard cryptographic algorithms plus random number generation in each of the eight cores: AES, Camellia, CRC32c, DES, 3DES, DH, DSA, ECC, Kasumi, MD5, RSA, SHA-1, SHA-224, SHA-256, SHA-384, SHA-512". I ran the "isainfo -v" command on a Solaris 11 SPARC T4-1 system; it shows the new instructions as expected:

        $ isainfo -v
        64-bit sparcv9 applications
            crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
            kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
        32-bit sparc applications
            crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia
            kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc
            v8plus div32 mul32

    2. Dan Anderson's blog has some interesting points about how these can be used: "New T4 crypto instructions include: aes_kexpand0, aes_kexpand1, aes_kexpand2, aes_eround01, aes_eround23, aes_eround01_l, aes_eround_23_l, aes_dround01, aes_dround23, aes_dround01_l, aes_dround_23_l. Having SPARC T4 hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris on a SPARC T4. It is used internally in the kernel through kernel crypto modules. It is available in user space through the PKCS#11 library."

    3. Dan's post "Where's the Crypto Libraries?", although written in 2009, is still very useful: "Here's a brief tour of the major crypto libraries shown in the digraph: The libpkcs11 library contains the PKCS#11 API (C_*() functions, such as C_Initialize()). That in turn calls library pkcs11_softtoken or pkcs11_kernel, for userland or kernel crypto providers. The latter is used mostly for hardware-assisted cryptography (such as n2cp for Niagara2 SPARC processors), as that is performed more efficiently in kernel space with the 'kCF' module (Kernel Crypto Framework). Additionally, for Solaris 10, strong crypto algorithms were split off in separate libraries, pkcs11_softtoken_extra. libcryptoutil contains low-level utility functions to help implement cryptography. libsoftcrypto (OpenSolaris and Solaris Nevada only) implements several symmetric-key crypto algorithms in software, such as AES, RC4, and DES3, and the bignum library (used for RSA). libmd implements MD5, SHA, and SHA2 message digest algorithms."

    4. Differences between T3 and T4. The diagram in this blog is good and self-explanatory. Jeff's blog also highlights the differences: "The T4 servers have improved crypto acceleration, described at https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine. It is 'just built in' so administrators no longer have to assign crypto accelerator units to domains - it 'just happens'. Every physical or virtual CPU on a SPARC-T4 has full access to hardware based crypto acceleration at all times. ... For completeness sake, it's worth noting that the T4 adds more crypto algorithms, and accelerates Camellia, CRC32c, and more SHA-x."

    5. About performance counters. In this blog, performance counters are explained: "Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp; there is no visibility of crypto hardware with kstats or cryptoadm. T4 does provide hardware counters for crypto operations. You can see these using cpustat:

        cpustat -c pic0=Instr_FGU_crypto 5

    You can check the general crypto support of the hardware and OS with the command 'isainfo -v'. Since T4 crypto's implementation now allows direct userland access, there are no 'crypto units' visible to cryptoadm." For more details refer to Martin's blog as well.

    6. How to turn off SPARC T4 or Intel AES-NI crypto acceleration. I found this interesting post from Darren: "One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object that has multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative to this is having the application call the getisax(2) system call and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and libsoftcrypto.so). The Solaris linker/loader allows control of a lot of its functionality via environment variables; we can use that to control the version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 to not select the HWCAP section matching certain features even if isainfo says they are present. This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly."

        # For SPARC T4:
        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"
        # For Intel systems with AES-NI support:
        export LD_HWCAP="-aes"

    Note that LD_HWCAP is explained in http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html: "LD_HWCAP, LD_HWCAP_32, and LD_HWCAP_64 - Identifies an alternative hardware capabilities value... A '-' prefix results in the capabilities that follow being removed from the alternative capabilities."

    7. Whitepaper "SPARC T4 Servers—Optimized for End-to-End Data Center Computing". This whitepaper explains more details and has DTrace scripts which may come in handy: "To ensure the hardware-assisted cryptographic acceleration is configured and working with the security scenarios, it is recommended to use the following Solaris DTrace script."

        #!/usr/sbin/dtrace -s
        pid$1:libsoftcrypto:yf*:entry,
        pid$target:libsoftcrypto:rsa*:entry,
        pid$1:libmd:yf*:entry
        {
            @ops[probefunc] = count();
        }
        tick-1sec
        {
            printa(@ops);
            trunc(@ops);
        }

    Note that I have slightly modified the D script to cover RSA ("libsoftcrypto:rsa*:entry") as well, per recommendations from Chi-Chang Lin.

    8. References
    http://www.oracle.com/us/corporate/features/sparc-t4-announcement-494846.html
    http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-1-ds-487858.pdf
    https://blogs.oracle.com/DanX/entry/sparc_t4_openssl_engine
    https://blogs.oracle.com/DanX/entry/where_s_the_crypto_libraries
    https://blogs.oracle.com/darren/entry/howto_turn_off_sparc_t4
    http://docs.oracle.com/cd/E23823_01/html/816-5165/ld.so.1-1.html
    https://blogs.oracle.com/hardware/entry/unleash_the_power_of_cryptography
    https://blogs.oracle.com/cmt/entry/t4_crypto_cheat_sheet
    https://blogs.oracle.com/martinm/entry/t4_performance_counters_explained
    https://blogs.oracle.com/jsavit/entry/no_mau_required_on_a
    http://www.oracle.com/us/products/servers-storage/servers/sparc-enterprise/t-series/sparc-t4-business-wp-524472.pdf

    Read the article

  • CodePlex Daily Summary for Sunday, September 30, 2012

    CodePlex Daily Summary for Sunday, September 30, 2012
    Popular Releases
    CAPTCHA Solver: Initial Release: This is the initial release :) Still very much a WIP.
    MCEBuddy 2.x: MCEBuddy 2.2.17: Recommended update to 2.2.16. Changelog for 2.2.17 (32bit and 64bit): 1. Fixed bugs around thread synchronization with new remote model (fixes cause the app to crash or hang) 2. Updated UPnP code base, faster and more reliable now 3. Now you can get audio/video properties for multiple files on main page. Select multiple files and right click; all selected files' properties will be shown. 4. Fixed a bug: not able to enter a conversion task name in the GUI
    Aggravation: Version 1.0: This version 1.0 release is pretty stable. You need the Silverlight 4 runtime, developer tools, and Expression Blend 4 installed.
    Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes: Built against KeePass 2.20
    Windows 8 Toolkit - Charts and More: Beta 1.0: The first compiled version of my library
    PDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 open source code; includes a sample Web application project.
    D3 Loot Tracker: 1.4: Session name is displayed in the UI. Changes data directory for ClickOnce deployment so that session files are persisted between versions. Added a delete button in the sessions list window. Allow opening of the sessions local folder from the session list window. Display the session name in the main window. Ability to select which Diablo process to hook up to when pressing the new () function, BUT only if multi-process support is selected in the general settings tab menu. Session picker...
    CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor 1.1 Beta: What's new: Fixed scrolling issue in UnHide dialog. Added support for connecting via ADFS / IFD. Added support for more than one action for a button. Added support for empty StringParameter for Javascript functions. Fixed bug in rule CrmClientTypeRule when selecting Outlook option. Extended Prefix field in New Button dialog.
    Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. Contains the following improvements: Better support for detecting the installed languages. The extract & inject commands won't run if Visual Studio is running. You may now run in extract or inject mode. The p/invoke code was cleaned up based on Code Analysis recommendations. When a p/invoke method fails, the Win32 error message is now displayed. Error messages use red text. Status messages use green text.
    ZXing.Net: ZXing.Net 0.9.0.0: On the way to a release 1.0, the API should be stable now with this version. Sync with rev. 2393 of the Java version. Improved API. Better Unity support. Windows RT binaries. Windows CE binaries. New Windows Service demo. New WPF demo. WindowsCE hotfix: fixes an error with ISO8859-1 encoding and scanning of QR codes. The hotfix is only needed for the WindowsCE platform.
    C.B.R. : Comic Book Reader: CBR 0.7: Synthesis since 0.6: ePUB: complete refactoring. Add a new dedicated feed viewer for OPDS streams. PDF conversion: improved with image merge. Made all backstage panels scrollable. Integrated the new AvalonDock 2 library; supports multi-document. Library explorer and table of contents are now toolboxes. Designer for dynamic books is now MVVM and much better. New BrowserForControl. Customized XPS viewer to suppress toolbars and bind it to cbr commands. Add quick start manual and button ...
    menu4web: menu4web 1.0 - free javascript menu for web sites: menu4web 1.0 has been tested with all major browsers: Firefox, Chrome, IE, Opera and Safari. Minified m4w.js library is less than 9K. Includes 21 menu examples of different styles. Can be freely distributed under The MIT License (MIT).
    Rawr: Rawr 5.0.0: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php. You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes. Rawr Addon (NOT UPDATED YET FOR MOP): We now have a Rawr official addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    Coevery - Free CRM: Coevery 1.0.0.26: The zh-CN issue has been solved. We also added a project management module.
    VidCoder: 1.4.1 Beta: Updated to HandBrake 4971. This should fix some issues with stuck PGS subtitles. Fixed build break which prevented pre-compiled XML serializers from showing up. Fixed problem where a preset would get errantly marked as modified when re-opening the encode settings window or importing a new preset.
    Snake!: Snake 1.0: Version 1 Stable
    Paging SharePoint ListItems using listitems position: Paginglistitems V1.0: This is a console application which has two methods, both on CSOM and SOM, to display the list items in a paged manner.
    SharePoint Move Discussion Threads: SharePoint Move Discussion Threads ver 0.1: ver 0.1
    NTCPMSG: V1.1.1.0: Increased the performance. Supports .NET Framework 4.0.
    BlackJumboDog: Ver5.7.2: 2012.09.23 Ver5.7.2
    New Projects
    2D Sprite Editor: This is a 2D sprite editor. Import your sprite sheet, trace your animation frames and export the coordinate points in a simple txt file, ready to import.
    caifenweb1: test project.
    CatchThatException: This is a small logging library we created at developerpath.com to help us log exceptions. It writes them to a text file which you can easily open.
    FsxWs - WebServices for Microsoft FSX: WebServices for MS Flight Simulator. Get flights data as JSON, KML. !! Still in SetUp phase - be patient !!
    GetTPB: Some training in downloading and parsing web pages, with multithreading too.
    JSON-RPC Client Generator (for XBMC): The goal of this project is to provide a .Net client for the XBMC JSONRPC API. The main part is not XBMC dependent and may be used for any JSON-RPC client.
    matlab-silhouette-pose-wtf: Whatever
    mfp: this is random code
    MVC Grid: MVC Grid Example
    MyWebSocketTry: sssssssssssssssssssssssssssssssssssssss
    Netduino Console: Netduino Console is an interface with built-in messaging layers that allows you as a developer to dynamically create plugins following a provided interface to i
    SharePoint ASP.NET Verifier: Project will allow to verify SharePoint 2010 components using an ASP.NET web application
    Sharepoint Custom Upload: This is a SharePoint solution that allows an administrator to customize the upload page individually for each document library in a site. It allows you to make
    WinWeb Browser Deluxe: WinWeb Browser Deluxe is an open-source web browser based on Internet Explorer, made in Visual Basic .NET. Download it now!
    writethatoutput: This is the official release page for WriteThatOutPut from developerpath.com

    Read the article

  • CodePlex Daily Summary for Monday, March 22, 2010

    CodePlex Daily Summary for Monday, March 22, 2010
    New Projects
    [Tool] Vczh Non-public DLL Classes Caller: Generate C# code for you to call non-public classes in DLLs very easily.
    Artefact Animator: Artefact Animator provides an easy to use framework for procedural time-based animations in Silverlight and WPF.
    cacheroo: Cacheroo is a social networking community that will make it easier for people who love geocaching to get connected.
    Data Processing Toolkit: A utility app to collect data from different sources (i.e. bugzilla bug reports) in a structured way. We are currently setting up the site. Mo...
    eXternal SQL Bridge (PHP): The eXternal SQL Bridge (XSB) allows you to bridge two websites together in a secure manner through pre-shared keys. XSB is resilient against repla...
    'G' - Language to Define Gestures for Touch Based Applications: A cross-platform multi-touch application framework with a language to define gestures. The application is built on Silverlight 4.0 and the languag...
    IIS Network Diagnostic Tools: Web implementation of "looking glass" like services (ping, traceroute) as HTTP modules for Internet Information Services.
    Interop Router: This project establishes a communication framework and job dispatcher for a mixed operating system cluster environment.
    L2 Commander: L2Commander makes it easier for both new and old l2j users to manage your server. You no longer have to waste time on finding the files you need and...
    MediaHelper: A utility to help clean up empty/unwanted files and folders in your filesystem.
    mhinze: matt hinze stuff
    OneMan: Focus on Silverlight and WCF technology.
    Rss Photo Frame Android Widget: RSS Photo Frame Android Widget permits showing pictures from any RSS feed on your Android device's desktop
    Single Web Session: Web Tool Kits. The current project provides developers with different tools that help to enhance web site performance, security, and other common functio...
    Work Item Visualization: Use DGML to visualize and analyze your TFS Work Items. Included is the ability to perform basic risk/impact analysis. It helps answer the question,...
    New Releases
    [Tool] Vczh Non-public DLL Classes Caller: Wrapper Coder (beta): Click "<Click Me To Open Assembly File>"; WrapperCoder will load the assembly and referenced assemblies. Check the non-public classes that you want...
    APS - Automatic Print Screen: APS 1.0: APS automates the tasks of pasting the image into Paint and saving it after Print Screen or Alt+Print Screen. Choose directory, name and file extension...
    BTP Tools: e-Sword generator build 20100321: 1. Modify the indent after subtitle. 2. Add 2 spaces after subtitle.
    Combres - WebForm & MVC Client-side Resource Combine Library: Combres 2.0: Changes since last version (1.2): Support ignoring the Combres pipeline in debug mode - see issue #6088. Debug mode generates comments helping identify in...
    Desafio Office 2010 Brasil: DesafioOutlook: Controlling a robot with Outlook 2010
    dylan.NET: dylan.NET v. 9.4: Adding Platform Invocation Services support, full managed pointer support, Charset, Dllimport, Callconv setting for P/Invoke, MarshalAs for parameters
    Family Tree Analyzer: Version 1.3.2.0: Version 1.3.2.0. Add open folder button to IGI Search Form. Fixes to Fact Location processing - IGIName renamed to RegionID. Fix if Region ID not fou...
    Fasterflect - A Fast and Simple Reflection API: Fasterflect 2.0: We are pleased to release version 2.0 of Fasterflect, which contains a lot of additions and improvements from the previous version. Please refer t...
    IIS Network Diagnostic Tools: 1.0: Initial public release.
    Informant: Informant (Desktop) v0.1: This release allows users to send SMS messages to 1-many groups or 1-many contacts. It is a very basic release of the application. No styling has b...
    InfoService: InfoService v1.5 - MPE1 Package: InfoService Release v1.5.0.65. Please read Plugin installation for installation instructions.
    InfoService: InfoService v1.5 - RAR Package: InfoService Release v1.5.0.65. Please read Plugin installation for installation instructions.
    L2 Commander: Source Code Link: Where to find our source.
    ModularCMS: ModularCMS 1.2: Minor bug fixes.
    NMTools: NMTools-v40b0-20100321-0: The most noticeable aspect of this release is that NMTools is now an independent project. It will no longer be tied to OpenSLIM. Nevertheless, OpenSLI...
    SharePoint LogViewer: SharePoint LogViewer 1.5.3: Log loading performance enhanced. Search text box now has an auto-complete feature.
    Single Web Session: Single Web Session: !Single Web Session! <httpModules> <add name="SingleSession" type="SingleWebSession.Model.WebSessionModule, SingleWebSession"/> </httpModules>
    Sprite Sheet Packer: 2.1 Release: Made a few crucial fixes from 2.0: - Fixed error with paths having spaces. - Fixed error with UI not unlocking. - Fixed NullReferenceException on ...
    uManage - AD Self-Service Portal: uManage v1.1 (.NET 4.0 RC): Updated release v1.1 adds the primary ability to set up and configure the application through a setup wizard. The setup wizard will continue to evol...
    VCC: Latest build, v2.1.30321.0: Automatic drop of latest build
    VS ChessMania: VS ChessMania V2 March Beta: Second beta release, with move correction and making the application safer for the user. New features will be added soon.
    WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.9.00: What's new: Added new toolbar plugin (by Kent Safransk) 'MediaEmbed' to include embedded media from Youtube, Vimeo, etc. Media Embed Plugin. Added new ...
    WeatherBar: WeatherBar 1.0 [No Installation]: Extract the ZIP archive and run WeatherBar.exe. Current release contains some bugs that will be fixed in the next version. Check the Issue Tracker...
    Work Item Visualization: Release 1.0: This is the initial release of the Work Item Visualization tool. There are no known issues when it comes to the visualization aspects of the tool b...
    WPF Application Framework (WAF): WPF Application Framework (WAF) 1.0.0.10: Version: 1.0.0.10 (Milestone 10): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requi...
    WPF AutoComplete TextBox Control: Version 1.2: What's new: adds AutoAppend feature; adds a new provider: UrlHistoryDataProvider; sample application is updated to reflect the new things. Bug fixe...
    ZoomBarPlus: V2 (Beta): - Fixed bug: if the active window changed while you were in the middle of a single tap delay, long tap delay, or swipe-repeat, it would continue re...
    Most Popular Projects
    MetaSharp, Savvy DateTime, Rawr, WBFS Manager, Silverlight Toolkit, ASP.NET Ajax Library, Microsoft SQL Server Product Samples: Database, AJAX Control Toolkit, LiveUpload to Facebook, Windows Presentation Foundation (WPF)
    Most Active Projects
    LINQ to Twitter, Rawr, OData SDK for PHP, jQuery Library for SharePoint Web Services, DirectQ, PHPExcel, Farseer Physics Engine, patterns & practices – Enterprise Library, BlogEngine.NET, NB_Store - Free DotNetNuke Ecommerce Catalog Module
    Read the article

  • How to upload a file from client to server using OFBiz?

    - by SIVAKUMAR.J
    I'm new to ofbiz so try to keep your answer as simple as possibly. If you can give examples that would be kind. My problem is I created a project inside the ofbiz/hot-deploy folder namely productionmgntSystem. Inside the folder ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem I created a file app_details_1.ftl. The following are the code of this file <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Insert title here</title> <script TYPE="TEXT/JAVASCRIPT" language=""JAVASCRIPT"> function uploadFile() { //alert("Before calling upload.jsp"); window.location='<@ofbizUrl>testing_service1</@ofbizUrl>' } </script> </head> <!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> --> <form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> <center style="height: 299px; "> <table border="0" style="height: 177px; width: 788px"> <tr style="height: 115px; "> <td style="width: 103px; "> <td style="width: 413px; "><h1>APPLICATION DETAILS</h1> <td style="width: 55px; "> </tr> <tr> <td style="width: 125px; ">Application name : </td> <td> <input name="app_name_txt" id="txt_1" value=" " /> </td> </tr> <tr> <td style="width: 125px; ">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: </td> <td> <input type="file" name="filename"/> </td> </tr> <tr> <td> <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> --> <input type="submit" name="logout_cmd" value="logout"/> </td> <td> <!-- <input type="submit" name="upload_cmd" value="Submit" /> --> <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/> </td> </tr> </table> </center> </form> </html> the following coding is present in the file ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml ...... ....... ........ <request-map uri="testing_service1"> <security https="true" auth="true"/> <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/> <response name="ok" type="view" value="ok_view"/> <response name="exception" type="view" value="exception_view"/> </request-map> .......... ............ .......... <view-map name="ok_view" type="ftl" page="ok_view.ftl"/> <view-map name="exception_view" type="ftl" page="exception_view.ftl"/> ................ ............. ............. 
The file ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java contains:

package org.ofbiz.productionmgntSystem.web_app_req;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.DataInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class WebServices1 {

    public static String testingService(HttpServletRequest request, HttpServletResponse response) {
        //int i=0;
        String result = "ok";
        System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- Start");
        String contentType = request.getContentType();
        System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- contentType : " + contentType);
        String str = new String();
        // response.setContentType("text/html");
        //PrintWriter writer;
        if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) {
            System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) after if (contentType != null)");
            try {
                // writer=response.getWriter();
                System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try Start");
                DataInputStream in = new DataInputStream(request.getInputStream());
                int formDataLength = request.getContentLength();
                byte dataBytes[] = new byte[formDataLength];
                int byteRead = 0;
                int totalBytesRead = 0;
                // this loop reads the uploaded request body into the byte array
                while (totalBytesRead < formDataLength) {
                    byteRead = in.read(dataBytes, totalBytesRead, formDataLength);
                    totalBytesRead += byteRead;
                }
                String file = new String(dataBytes);
                // for saving the file name
                String saveFile = file.substring(file.indexOf("filename=\"") + 10);
                saveFile = saveFile.substring(0, saveFile.indexOf("\n"));
                saveFile = saveFile.substring(saveFile.lastIndexOf("\\") + 1, saveFile.indexOf("\""));
                int lastIndex = contentType.lastIndexOf("=");
                String boundary = contentType.substring(lastIndex + 1, contentType.length());
                int pos;
                // extracting the index of the file
                pos = file.indexOf("filename=\"");
                pos = file.indexOf("\n", pos) + 1;
                pos = file.indexOf("\n", pos) + 1;
                pos = file.indexOf("\n", pos) + 1;
                int boundaryLocation = file.indexOf(boundary, pos) - 4;
                int startPos = ((file.substring(0, pos)).getBytes()).length;
                int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length;
                // creating a new file with the same name and writing the content into the new file
                FileOutputStream fileOut = new FileOutputStream("/" + saveFile);
                fileOut.write(dataBytes, startPos, (endPos - startPos));
                fileOut.flush();
                fileOut.close();
                System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try End");
            } catch (IOException ioe) {
                System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch IOException");
                //ioe.printStackTrace();
                return ("exception");
            } catch (Exception ex) {
                System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch Exception");
                return ("exception");
            }
        } else {
            System.out.println("\n\n\t********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) else part");
            result = "exception";
        }
        System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- End");
        return (result);
    }
}

I want to upload a file to the server. The file comes from the user via the file input tag in app_details_1.ftl, and it should be saved on the server by the testingService(HttpServletRequest request, HttpServletResponse response) method of the WebServices1 class, but the file is never uploaded. Please give me a good solution for uploading a file to the server.

    Read the article

  • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS

    - by pinaldave
    Data Quality Services is a very important concept in SQL Server. I have recently started to explore it and I am really learning some good concepts. Here are two very important blog posts which one should go over before continuing with this one: Installing Data Quality Services (DQS) on SQL Server 2012, and Connecting Error to Data Quality Services (DQS) on SQL Server 2012. This article is an introduction to Data Quality Services for beginners; we will be using an Excel file (click on the images to enlarge them). In the first article we learned to install DQS. In this article we will see how to build a Knowledge Base and use it to help us identify the quality of the data, as well as help correct bad-quality data. Here are the two very important steps we will be learning in this tutorial: building a new Knowledge Base, and creating a new Data Quality Project. Let us start with building the Knowledge Base. Click on New Knowledge Base. In our project we will be using Excel as the knowledge base. Here is the Excel file we will be using. There are two columns: one is Colors and the other is Shade. They are independent columns, not related to each other. The point I am trying to show is that Column A contains unique data while Column B contains duplicate records. Clicking on New Knowledge Base will bring up the following screen. Enter the name of the new knowledge base. Clicking NEXT will bring up the following screen, where it will allow you to select the Excel file and the source columns. I have selected both Colors and Shade as source columns. Creating a domain is very important. Here you can create separate domains, or a domain built as a composite of Colors and Shade. As this is the first example, I will create separate domains: for Colors I will create the domain Colors, and for Shade I will create the domain Shade. Here is how the screen looks after creating the domains. Clicking NEXT brings you to the following screen, where you can do the data discovery. Clicking on START will start the processing of the source data provided. The pre-processed data shows various information related to the source data. In our case it shows that the Colors column has unique data whereas Shade has non-unique data, with only two unique rows. In the next screen you can add more rows, as well as see the frequency of the data, as the unique values are listed. Clicking NEXT will publish the knowledge base just created. Now the knowledge base is created. We will take some random data and attempt a DQS implementation over it. I am using another Excel sheet here for simplicity; in reality you can just as easily use a SQL Server table. Click on New Data Quality Project to start a DQS project. The next screen asks which knowledge base to use; we will be using our Colors knowledge base, which we just created. In the Colors knowledge base we had two columns: 1) Colors and 2) Shade. In our case we will be using both of the mappings here; users can select one or multiple column mappings. Now the most important phase of the whole project: click on Start and it will run the cleansing process and show various results. In our case there were two columns to be processed, and it completed the task with the necessary information. It shows that in the Colors column it did not correct any value by itself, but for a Shade value it has a suggestion.
    We can train DQS to correct values, but let us keep that subject for future blog posts. Now click NEXT and keep the domain Colors selected on the left side. It shows that there are two incorrect values which need to be corrected. This is the place where, once corrected, a value will be auto-corrected in the future. I manually corrected the values here and clicked the Approve radio buttons. As soon as I clicked the Approve buttons, the rows disappeared from this tab and moved to the Corrected tab. Had I clicked Reject, the rows would have moved to the Invalid tab instead. In this screen you can see the 2 corrected rows. You can click on the Correct tab and see the 6 previously validated rows which passed the DQS process. Now let us click on the Shade domain on the left side of the screen. This domain shows very interesting details, as the DQS system guessed the correct answer, Dark, with a confidence level of 77%. That is quite a high confidence level, and manual observation also demonstrates that Dark is the correct answer. I clicked on Approve and the row moved to the Corrected tab. On the next screen DQS shows the summary of all the activities. It also demonstrates how the correction of the quality of the data was performed. The user can export their data to a SQL Server table, CSV file, or Excel. The user also has the option to export either the data with all the associated cleansing info, or the data only. I will select Data only for demonstration purposes. Clicking Export will generate the file. Let us open the generated file; it looks complete and corrected. Well, we have successfully completed the DQS process. The process is indeed very easy. I suggest you try this out yourself and you will find it very easy to learn. In future posts we will go over advanced concepts. Are you using this feature on your production server? If yes, would you please leave a comment with your environment and business need? It will be indeed interesting to see where it is implemented. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Data Quality Services, DQS
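To make the walkthrough concrete, the knowledge-base sheet described above might look something like this; the values are hypothetical, since the post only tells us that Colors holds unique values while Shade repeats two values:

Colors   Shade
Red      Dark
Blue     Light
Green    Dark
White    Light
Yellow   Dark

A cleansing input that contains, say, "Drak" in the Shade column is then the kind of entry DQS flags with a suggested correction to Dark.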

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes. Query tuning is not complete as soon as the query returns results quickly in the development or test environments. In production, your query will compete for memory, CPU, locks, I/O and other resources on the server. Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data. In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row. The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type. A script to create a database with the three tables and load the sample data follows:

USE master;
GO
IF DB_ID('SortTest') IS NOT NULL
    DROP DATABASE SortTest;
GO
CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN;
GO
ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB);
GO
ALTER DATABASE SortTest MODIFY FILE (NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB);
GO
ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF;
ALTER DATABASE SortTest SET AUTO_CLOSE OFF;
ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE SortTest SET AUTO_SHRINK OFF;
ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON;
ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON;
ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE;
ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF;
ALTER DATABASE SortTest SET MULTI_USER;
ALTER DATABASE SortTest SET RECOVERY SIMPLE;
USE SortTest;
GO
CREATE TABLE dbo.TestCHAR
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding CHAR(3999) NOT NULL,
    CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id)
);
CREATE TABLE dbo.TestMAX
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding VARCHAR(MAX) NOT NULL,
    CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id)
);
CREATE TABLE dbo.TestTEXT
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding TEXT NOT NULL,
    CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id)
);
-- =============
-- Load TestCHAR (about 3s)
-- =============
INSERT INTO dbo.TestCHAR WITH (TABLOCKX) (padding)
SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999)
FROM
(
    SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1
    FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3
    ORDER BY n ASC
) AS Data
ORDER BY Data.n ASC;
-- ============
-- Load TestMAX (about 3s)
-- ============
INSERT INTO dbo.TestMAX WITH (TABLOCKX) (padding)
SELECT CONVERT(VARCHAR(MAX), padding)
FROM dbo.TestCHAR
ORDER BY id;
-- =============
-- Load TestTEXT (about 5s)
-- =============
INSERT INTO dbo.TestTEXT WITH (TABLOCKX) (padding)
SELECT CONVERT(TEXT, padding)
FROM dbo.TestCHAR
ORDER BY id;
-- ==========
-- Space used
-- ==========
EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR';
EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX';
EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT';
CHECKPOINT;

That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.
The basic shape of the test query is the same for each of the three test tables:

SELECT TOP (150) T.id, T.padding
FROM dbo.Test AS T
ORDER BY NEWID()
OPTION (MAXDOP 1);

Test 1 – CHAR(3999)

Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results. This seems slow, considering that the table only has 50,000 rows. Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time. Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring. The script below monitors STATISTICS IO output and the amount of tempdb used by the test query. We will also run a Profiler trace to capture any warnings generated during query execution.

DECLARE @read BIGINT, @write BIGINT;
SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150) TC.id, TC.padding
FROM dbo.TestCHAR AS TC
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB. The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row. With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first). This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts. In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted. SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed. Sorts typically require a very large number of comparisons, so this is usually a very effective optimization. One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself. SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect.
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use. The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory. Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant. The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources. More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post.

CHAR(3999) Performance Summary:
5 seconds elapsed time
243MB memory grant
195MB tempdb usage
192MB estimated sort set
25,043 logical reads
Sort Warning

Test 2 – VARCHAR(MAX)

We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column:

DECLARE @read BIGINT, @write BIGINT;
SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150) TM.id, TM.padding
FROM dbo.TestMAX AS TM
ORDER BY NEWID()
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

This time the query takes around 8 seconds to complete (3 seconds longer than Test 1). Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB. The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file. Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case. In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting.

MAX Performance Summary:
8 seconds elapsed time
245MB memory grant
391MB tempdb usage
193MB estimated sort set
25,043 logical reads
Sort warning

Test 3 – TEXT

The same test again, but using the deprecated TEXT data type for the padding column:

DECLARE @read BIGINT, @write BIGINT;
SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150) TT.id, TT.padding
FROM dbo.TestTEXT AS TT
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

This time the query runs in 500ms. If you look at the metrics we have been checking so far, it’s not hard to understand why:

TEXT Performance Summary:
0.5 seconds elapsed time
9MB memory grant
5MB tempdb usage
5MB estimated sort set
207 logical reads
596 LOB logical reads
Sort warning

SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much. Why is the data size so much smaller? The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here?

TEXT versus MAX Storage

The answer lies in how columns of the TEXT data type are stored. By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output. You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead. SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows. The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes. The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page. There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option. By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row. SQL Server Books Online has good coverage of both options in the topic In Row Data.

The MAXOOR Table

The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage. You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages:

CREATE TABLE dbo.TestMAXOOR
(
    id INTEGER IDENTITY (1,1) NOT NULL,
    padding VARCHAR(MAX) NOT NULL,
    CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id)
);

EXECUTE sys.sp_tableoption
    @TableNamePattern = N'dbo.TestMAXOOR',
    @OptionName = 'large value types out of row',
    @OptionValue = 'true';

SELECT large_value_types_out_of_row
FROM sys.tables
WHERE [schema_id] = SCHEMA_ID(N'dbo')
AND name = N'TestMAXOOR';

INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) (padding)
SELECT SPACE(0)
FROM dbo.TestCHAR
ORDER BY id;

UPDATE TM WITH (TABLOCK)
SET padding.WRITE (TC.padding, NULL, NULL)
FROM dbo.TestMAXOOR AS TM
JOIN dbo.TestCHAR AS TC
    ON TC.id = TM.id;

EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR';
CHECKPOINT;

Test 4 – MAXOOR

We can now re-run our test on the MAXOOR (MAX out of row) table:

DECLARE @read BIGINT, @write BIGINT;
SELECT @read = SUM(num_of_bytes_read),
       @write = SUM(num_of_bytes_written)
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

SET STATISTICS IO ON;

SELECT TOP (150) MO.id, MO.padding
FROM dbo.TestMAXOOR AS MO
ORDER BY NEWID()
OPTION (MAXDOP 1, RECOMPILE);

SET STATISTICS IO OFF;

SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024.,
       tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024.,
       internal_use_MB =
       (
           SELECT internal_objects_alloc_page_count / 128.0
           FROM sys.dm_db_task_space_usage
           WHERE session_id = @@SPID
       )
FROM tempdb.sys.database_files AS DBF
JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS
    ON FS.file_id = DBF.file_id
WHERE DBF.type_desc = 'ROWS';

MAXOOR Performance Summary:
0.3 seconds elapsed time
245MB memory grant
0MB tempdb usage
193MB estimated sort set
207 logical reads
446 LOB logical reads
No sort warning

The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query). SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort.

The Hidden Problem

There is still a huge problem with this query though – it requires a 245MB memory grant. No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers. Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’. Why? Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed. SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages. This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this? Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need. 245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily? That’s 32,000 8KB pages that might be put to much better use.

The Solution

The answer is not to use the TEXT data type for the padding column. That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal. I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator. Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted. We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need. The following script implements that idea for all four tables:

SET STATISTICS IO ON;

WITH TestTable AS
(
    SELECT * FROM dbo.TestCHAR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id = ANY (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAX
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestTEXT
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

WITH TestTable AS
(
    SELECT * FROM dbo.TestMAXOOR
),
TopKeys AS
(
    SELECT TOP (150) id
    FROM TestTable
    ORDER BY NEWID()
)
SELECT T1.id, T1.padding
FROM TestTable AS T1
WHERE T1.id IN (SELECT id FROM TopKeys)
OPTION (MAXDOP 1);

SET STATISTICS IO OFF;

All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb. The small remaining inefficiency is in reading the id column values from the clustered primary key index. As a clustered index, it contains all the in-row data at its leaf. The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead. The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure. This difference is reflected in the number of logical page reads performed by the four queries:

Table 'TestCHAR'   logical reads 25511  lob logical reads 000
Table 'TestMAX'    logical reads 25511  lob logical reads 000
Table 'TestTEXT'   logical reads 00412  lob logical reads 597
Table 'TestMAXOOR' logical reads 00413  lob logical reads 446

We can increase the density of the id values by creating a separate nonclustered index on the id column only. This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data.
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id);
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id);

The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data. The logical reads with the new indexes in place are:

Table 'TestCHAR'   logical reads 835  lob logical reads 0
Table 'TestMAX'    logical reads 835  lob logical reads 0
Table 'TestTEXT'   logical reads 686  lob logical reads 597
Table 'TestMAXOOR' logical reads 686  lob logical reads 448

With the new index, all four queries use the same query plan (click to enlarge):

Performance Summary:
0.3 seconds elapsed time
6MB memory grant
0MB tempdb usage
1MB sort set
835 logical reads (CHAR, MAX)
686 logical reads (TEXT, MAXOOR)
597 LOB logical reads (TEXT)
448 LOB logical reads (MAXOOR)
No sort warning

I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea.

Conclusion

This post is not about tuning queries that access columns containing big strings. It isn’t about the internal differences between TEXT and MAX data types either. It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load. No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance. Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows. 5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is! If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb. Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough. Tuning for logical reads and adding covering indexes is not enough. If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on. The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008. They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator. Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Best of Breed vs. Suite – Oracle’s SaaS Delivers Both

    - by yaldahhakim
    The debate over which is better, “best of breed” business applications vs. an integrated suite, is certainly not a new conversation. This has been argued between IT vendors and CIOs for years. It’s also important to clarify that “best of breed” does not necessarily translate into having the richest functionality; rather, it’s often just about having the best-fit solution to solve a specific business problem or need. So what does cloud have to do with the niche vs. suite debate? Consuming business applications in a cloud or SaaS deployment model can change the best of breed vs. suite discussion - if the cloud is done right. It’s having your cake and eating it too, only better: you don’t have to gather all the ingredients or wait to bake your cake, and you can adjust how big a slice you take. Before you eat, it’s worth pausing to recall much of what we learned about IT over the last decade. These basic IT principles still hold true even though the financial model has changed from buying to renting. In other words, what’s under the technology hood still matters. Architecture and development methodologies - like building an application based on open standards so it works with other systems - are still important. Data and information silos, complex integrations, and proprietary technologies that lock you in are still bad. While some may argue that IT no longer matters with cloud, the opposite is actually true. If anything, cloud can help return IT to its rightful place as a key strategic asset vs. a liability on the balance sheet. The “I” in CIO was never meant to stand for “integration”, yet it’s amazing how much time and money is poured into these types of initiatives for most organizations each year. Rather, the “I” needs to stand for “innovation”. This is where Oracle SaaS can uniquely help. Oracle’s application strategy has not really changed over the years. It’s always been about bringing the best and richest functionality across the enterprise to our customers while leveraging a common, standards-based, enterprise-grade platform. So not just best fit, but the best capabilities, based on the input of thousands of enterprise customers across the globe. Oracle invests billions in R&D every year to add new capabilities to the broadest cloud portfolio in the industry, spanning functional pillars like CRM, HCM, ERP, etc. And where it makes sense, Oracle combines key strategic acquisitions to complement organic functionality. The result is best of breed delivered in a suite. Again, this is not something new. The game changer now with cloud is that it impacts HOW Oracle customers adopt the richest, most modern applications across the business - and keep on getting them.
    Consuming Oracle applications in the cloud means you can adopt new capabilities and updates very quickly and easily. There’s no hardware to buy or software to manage; Oracle does it for you. Low upfront costs and an OpEx financial model are the easy part. Oracle Cloud Applications take it a big step further. For organizations that demand the latest and richest functionality and want to accelerate the time to value from their IT investment, Oracle Cloud is the right path. It’s about holistically changing the “hows” and the “whys” of the organization by leveraging transformational innovations like social, mobile, and big data in a consistent and more powerful way - not just sales force automation or talent management. These technologies should impact all parts of the company, and Oracle Cloud is the enterprise-grade delivery vehicle. Oracle SaaS helps break down barriers of adoption and eases the headache of upgrades, investing in new supporting hardware, or adding internal expertise to manage it all. With Oracle Cloud, customers can get best-of-breed capabilities in either a full suite model or à la carte. And because it’s entirely built on open standards, it’s built to co-exist with existing IT investments. Updates can be automatic or delayed based on a customer’s requirements. And it’s complete - a full suite of cross-pillar functionality. Even better, if you don’t like it, or need more or less, just turn the dial up or down. Just like your utility bill, you pay for what you use, and can consume more or less power whenever you need it. Lower cost, lower investment risk, without compromising on functionality, security, or performance. Technology still matters in the cloud. So our cloud customers also like that when they adopt our cloud applications, they get the best underlying technology too, from the middleware and database platform down to infrastructure and Oracle’s engineered systems. It’s not just the latest and greatest in application functionality; everything underneath that makes it work is also the latest and greatest. The best-of-breed technology stack powering best-of-breed business applications, all delivered in a subscription-based model. The best of both worlds. Yep, that’s the idea.

    Read the article

  • Product Development Investment: A Measure of Vendor Performance

    - by Jim Mcglothlin
    The relationship between a large, complex organization and its key suppliers of information technology is normally more than just "strategic". Expectations about the duration of the relationship typically exceed 20 years. Enterprise applications and technology infrastructure are not expected to be changed out like petunias. So how would you rate the due diligence processes as performed in Higher Education when selecting critical, transformational information technology? My observation: I see a lot of effort put into elaborate demonstrations of basic software functionality. I see a lot of attention paid to the cost element of technology acquisition, including the contracted cost of implementation consulting services. But the factor that receives only cursory analysis and due diligence is long-term performance: the ability of a vendor to grow, expand, and develop, and bring its customers along with it. So what should you look for in a long-term IT supplier? Oracle has a public track record for product development. The annual investment has been on a run rate of almost $3 billion in organic product development. Oracle's well-publicized acquisitions and mergers have been supplemental to its R&D. This is important for Higher Education. Another meaningful way to evaluate a company is to look at the tangible track record of enhancement. Consider the Oracle-PeopleSoft enterprise business platform in the six years since it was acquired by Oracle; each product or technology enhancement below is paired with its customer or user impact:

Service Oriented Architecture (SOA): 300+ new web services delivered in versions 9.0 & 9.1 provide flexibility, so that customers can integrate PeopleSoft with other applications. Campus Solutions has added Admissions and Constituent Web Services.

Constituent Relationship Management: PeopleSoft CRM 9.1 for Higher Education introduced new process flows for student recruiting and retention to support "Student Success" initiatives. A 360-degree view of the constituent is now delivered, and the concept of a single-stop Student Services Center is now in CRM 9.1, with tight integration to PeopleSoft Campus Solutions.

Human Capital Management: Contract Pay for Education, with flexibility for configuration and calculation, has been extended in HCM 9.1. New chartfield integration among Project Costing, Time & Labor, and Payroll serves the labor distribution requirements for Grants / Sponsored Research.

Talent Management: PeopleSoft 9.0 and 9.1 feature an integrated talent management approach centered on definitions in "Profile Manager", with all-new usability improvements. Internal and external candidate pools, and the entire recruitment process, are driven by delivered configurable selection and on-boarding processes. Interview scheduling and online job offers are newly delivered processes.

Performance Management: PeopleSoft HCM ePerformance 9.1 will include significant new functionality designed to help organizations more effectively align business objectives with employee goals. Using an Organization Chart view, your business goals can flow down to become tangible objectives per employee.

Succession Planning / Workforce Development: New in HCM 9.0, and enhanced in 9.1, is a planning capability for regular or unusual (major organizational change) succession of internal or external candidates. PeopleSoft supports employee-based career planning, which ultimately increases the integrity of the succession planning process (identifying career needs, plans, preferences, and interests).

Dashboards / Oracle Business Intelligence Application Suite: Oracle Human Resources Analytics provides the workforce information foundation that integrates data from HR functional areas and Finance, delivering 9 dashboards and over 200 reports. It gives your HR professionals and front-line managers the tools to analyze workforce staffing, retention, and productivity, to better source high-quality applicants, and to reduce absence costs.

Multi-year Planning and Commitment Control: External funding sources, especially Grants, require a multi-year encumbrance business process. PeopleSoft HCM 9.1 adds multi-year funding and commitment control, including budget checking. The newly designed real-time budget checking provides the customer with an updated snapshot of their budget and encumbrances at any given time.

Position Budgeting with Hyperion: Hyperion Planning world-class products now include delivered integration to PeopleSoft HCM. Position Budgeting is available in the new Public Sector Planning module of Hyperion.

Web 2.0 features for the latest in usability: PeopleSoft 9.1 features a contemporary internet user experience: partial-page refreshing, drag-and-drop pagelets, a new menu structure, navigation pagelets, modal popup message windows, favorites & recently used links, type-ahead, drag-and-drop grid columns, and pop-out grids.

Portal Workspaces: Enterprise 2.0 for your collaborative web communities, using new content management along with wikis, blogs, and discussion forums in PeopleSoft Portal 9.1.

PeopleTools enhanced by Oracle Fusion Middleware: Standards-based tools have been added to the PeopleTools application infrastructure: BI (XML) Publisher and Java tools. Certified for use with PeopleSoft: Oracle Business Intelligence (OBIEE), Oracle Enterprise Manager, Oracle WebLogic Server, and Oracle SOA Suite.

Hosting for PeopleSoft applications: A solid new deployment option: the Oracle On Demand remote hosting center for high scalability, security, and continuity of operations.

Business Process Outsourcing (BPO) for HCM / Payroll functions: A partnership with AT&T provides hosting of the HR/Payroll application along with payroll business process operations, and subscription-based service fees (SaaS). The AT&T BPO full service includes pay sheet processing, bank and 3rd-party file transfer, payroll tax handling, etc.

Continuous Delivery Model: Feature Packs provide faster time-to-benefit; new features become available in PeopleSoft 9.1 (or Campus Solutions 9.0) without the need to perform an upgrade.

Golden person data model across all campus applications: The Oracle Higher Education Constituent Hub provides synchronization and data governance of person data across any application, e.g. HR/Payroll, Student Information System, Housing, Emergency Contact, LMS, CRM.

Oracle's aggressive enhancement plans within the "Applications Unlimited" program continue, as new functionality is under development for a new version of a PeopleSoft release planned for 2012. Meanwhile, new capabilities are planned on an annual basis in Feature Packs. PeopleSoft just delivered the HCM 2010 Feature Pack and another is planned for 2011. In February we plan to have over 100 customers from our Customer Advisory Boards at our PeopleSoft Development Center in California to review designs for all of these releases. For those of you near New York City: the investment and progressive development story described above is the subject of an Oracle road show event on February 9, 2011.
Charting Your Course with Oracle Applications is a global event series designed to help business and IT executives assess the impact of new inflection points on their business and applications roadmap: changing workforces, shifting customer and constituent bases, and increased volatility. Learn how innovations ranging from new deployment models like cloud computing to the introduction of social applications and smart devices are delivering results across all areas of business and industry. THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY AND MAY NOT BE INCORPORATED INTO A CONTRACT OR AGREEMENT.

    Read the article

  • creating a hierarchy of terminals or workspaces

    - by intuited
    <rant This question occurred to me ('occurred' meaning 'whispered seductively in my ear for the 100th time') while using GNU-screen, so I'll make that my example. However this is a much more general question about user interfaces and what I perceive as a flawmissing feature in every implementation I've yet seen. I'm wondering if there is some way to create a heirarchy/tree of terminals in a screen session. EG I'd like to have something like 1 bash 1.1 bash 1.2 bash 2 bash 3 bash 3.1 bash 3.1.1 bash 3.1.2 bash It would be good if the terminals could be labelled instead of having to be navigated to via some arrangement that I suspect doesn't exist. So then you could jump to one using eg ^A:goto happydays or ^A:goto dykstra.angry. So to generalize that: Firefox, Chrome, Internet Explorer, gnome-terminal, roxterm, konsole, yakuake, OpenOffice, Microsoft Office, Mr. Snuffaluppagus's Funtime Carousel™, and Your Mom's Jam Browser™ all offer the ability to create a flat set of tabs containing documents of an identical nature: web pages, terminals, documents, fun rideable animals, and jams. GNU-screen implements the same functionality without using tabs. Linux and OS/X window managers provide the ability to organize windows into an array of workspaces, which amounts to again, the same deal. Over the past few years, this has become a more or less ubiquitous concept which has been righteously welcomed into the far reaches of the computer interface funfest. Heavy users of these systems quickly encounter a problem with it: the set of entities is flat. In the case of workspaces, an option may be available to create a 2d array. However none of these applications furnish their users with the ability to create heirarchies, similar to filesystem directory structures, containing instances of their particular contained type. I for one am consistently bothered by this, and am wondering if the community can offer some wisdom as to why this has not happened in any of the foremost collections of computational functionality our culture has yet produced. Or if perhaps it has and I'm just an ignorant savage. I'd like to be able to not only group things into a tree structure, but also to create references (aka symbolic links, aka pointers) from one part of the structure to another, as well as apply properties (eg default directory, colorscheme, ...) recursively downward from a given node. I see no reason why we shouldn't be able to save these structures as known sessions, and apply tags to particular instances. So then you can sort through them by tag, find them by name, or just use the arrow keys (with an appropriate modifier) to move left or right and in or out of a given level. Another key combo would serve to create a branch in the place of the current terminal/webpage/lifelike statue/spreadsheet/spreadsheet sheet/presentation/jam and move that entity into the new branch, then create a fresh one as a sibling to it: a second leaf node within the same branch node. They would get along well. I find it a bit astonishing that this hasn't happened yet, and the only reason I can venture as a guess is that the creators of these fine systems do not consider such functionality to be useful to a significant portion of their userbase. I posit that the probability that that such an assumption would be correct is pretty low. 
On the other hand, given the relative ease with which such structures can be implemented using modern libraries/languages, it doesn't seem likely that difficulty of implementation would be a major roadblock. If it could be done in 1972 or whenever within the constraints of a filesystem driver, it should be relatively painless to implement in 2010 in a fullblown application. Given that all of these systems are capable of maintaining a set of equivalent entities, it seems unlikely that a major infrastructure overhaul would be necessary in order to enable a navigable heirarchy of them. </rant Mostly I'm just looking to start up a discussion and/or brainstorming on this topic. Any ideas, examples, criticism, or analysis are quite welcome. * Mr. Snuffaluppagus's Funtime Carousel is a registered trademark of Children's Television Workshop Inc. * Your Mom's Jam Browser is a registered trademark of Your Mom Inc.
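On the point about ease of implementation: a label-addressable tree like the one described above really is a small amount of code. The sketch below is a minimal, hypothetical illustration (the class and method names are invented, and nothing here is tied to GNU screen) of a node type supporting ^A:goto-style dotted-path lookup:

import java.util.LinkedHashMap;
import java.util.Map;

// A label-addressable tree of workspaces, e.g. "dykstra.angry" resolves to a node.
class WorkspaceNode {
    final String label;
    final Map<String, WorkspaceNode> children = new LinkedHashMap<>();

    WorkspaceNode(String label) { this.label = label; }

    // Get or create the child with the given label.
    WorkspaceNode add(String childLabel) {
        return children.computeIfAbsent(childLabel, WorkspaceNode::new);
    }

    // Resolve a dotted path such as "happydays" or "dykstra.angry"; null if absent.
    WorkspaceNode goTo(String dottedPath) {
        WorkspaceNode node = this;
        for (String part : dottedPath.split("\\.")) {
            node = node.children.get(part);
            if (node == null) return null;
        }
        return node;
    }

    public static void main(String[] args) {
        WorkspaceNode root = new WorkspaceNode("root");
        root.add("dykstra").add("angry");   // builds the path 'dykstra.angry'
        root.add("happydays");
        System.out.println(root.goTo("dykstra.angry") != null);  // true
    }
}

A real implementation would of course need to attach a terminal (or web page, or jam) to each node, plus the symlink-style references and inherited properties the question asks for, but the navigable hierarchy itself is the easy part.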

    Read the article

  • DVD drive I/O error?

    - by bobby
    I get this error every time I burn a CD/DVD through my DVD drive:

Nero Burning ROM
bobby
4C85-200E-4005-0004-0000-7660-0800-35X3-0000-407M-MX37-**** (*)
Windows XP 6.1
IA32
WinAspi: -
NT-SPTI used
Nero Version: 7.11.3.
Internal Version: 7, 11, 3, (Nero Express)
Recorder: Version: UL01 - HA 1 TA 1 - 7.11.3.0
Adapter driver: HA 1
Drive buffer : 2048kB
Bus Type : default
CD-ROM: Version: 52PP - HA 1 TA 0 - 7.11.3.0
Adapter driver: HA 1

=== Scsi-Device-Map ===
=== CDRom-Device-Map ===
ATAPI-CD ROM-DRIVE-52MAX   F: CdRom0
HL-DT-ST DVDRAM GSA-H12N   G: CdRom1
=======================

AutoRun : 1
Excluded drive IDs:
WriteBufferSize: 83886080 (0) Byte
BUFE : 0
Physical memory : 958MB (981560kB)
Free physical memory: 309MB (317024kB)
Memory in use : 67 %
Uncached PFiles: 0x0
Use Inquiry : 1
Global Bus Type: default (0)
Check supported media : Disabled (0)

11.6.2010
CD Image
10:43:02 AM #1 Text 0 File SCSIPTICommands.cpp, Line 450
LockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL
10:43:02 AM #2 Text 0 File Burncd.cpp, Line 3186
HL-DT-ST DVDRAM GSA-H12N
Buffer underrun protection activated
10:43:02 AM #3 Text 0 File Burncd.cpp, Line 3500
Turn on Disc-At-Once, using CD-R/RW media
10:43:02 AM #4 Text 0 File DlgWaitCD.cpp, Line 307
Last possible write address on media: 359848 (79:59.73)
Last address to be written: 318783 (70:52.33)
10:43:02 AM #5 Text 0 File DlgWaitCD.cpp, Line 319
Write in overburning mode: NO (enabled: CD)
10:43:02 AM #6 Text 0 File DlgWaitCD.cpp, Line 2988
Recorder: HL-DT-ST DVDRAM GSA-H12N; CDR code: 00 97 27 18; OSJ entry from: Plasmon Data systems Ltd.
ATIP Data:
Special Info [hex] 1: D0 00 A0, 2: 61 1B 12 (LI 97:27.18), 3: 4F 3B 4A (LO 79:59.74)
Additional Info [hex] 1: 00 00 00 (invalid), 2: 00 00 00 (invalid), 3: 00 00 00 (invalid)
10:43:02 AM #7 Text 0 File DlgWaitCD.cpp, Line 493
Protocol of DlgWaitCD activities:
TRM_DATA_MODE1, 2048, config 0, wanted index0 0 blocks, length 318784 blocks [G: HL-DT-ST DVDRAM GSA-H12N]
--------------------------------------------------------------
10:43:02 AM #9 Text 0 File ThreadedTransferInterface.cpp, Line 986
Prepare [G: HL-DT-ST DVDRAM GSA-H12N] for write in CUE-sheet-DAO
DAO infos:
==========
MCN: ""
TOCType: 0x00; Session Closed, disc fixated
Tracks 1 to 1: Idx 0 Idx 1 Next Trk
1: TRM_DATA_MODE1, 2048/0x00, FilePos 0 307200 653176832, ISRC ""
DAO layout:
===========
___Start_|____Track_|_Idx_|_CtrlAdr_|_____Size_|______NWA_|_RecDep__________
    -150 | lead-in  |  0  |  0x41   |        0 |        0 | 0x00
    -150 |        1 |  0  |  0x41   |        0 |        0 | 0x00
       0 |        1 |  1  |  0x41   |   318784 |   318784 | 0x00
  318784 | lead-out |  1  |  0x41   |        0 |        0 | 0x00
10:43:02 AM #10 Text 0 File SCSIPTICommands.cpp, Line 240
SPTILockVolume - completed successfully for FSCTL_LOCK_VOLUME
10:43:02 AM #11 Text 0 File Burncd.cpp, Line 4286
Caching options: cache CDRom or Network-Yes, small files-Yes ( ON
10:43:03 AM #19 Text 0 File MMC.cpp, Line 18034
CueData, Len=32
41 00 00 14 00 00 00 00
41 01 00 10 00 00 00 00
41 01 01 10 00 00 02 00
41 aa 01 14 00 46 34 22
10:43:03 AM #20 Text 0 File ThreadedTransfer.cpp, Line 268
Pipe memory size 83836800
10:43:16 AM #21 Text 0 File Cdrdrv.cpp, Line 1405
10:43:16.806 - G: HL-DT-ST DVDRAM GSA-H12N : Queue again later
10:43:42 AM #22 SPTI -1502 File SCSIPassThrough.cpp, Line 181
CdRom1: SCSIStatus(x02) WinError(0) NeroError(-1502)
Sense Key: 0x04 (KEY_HARDWARE_ERROR)
Sense Code: 0x08
Sense Qual: 0x03
CDB Data: 0x2A 00 00 00 4D 00 00 00 20 00 00 00
Sense Area: 0x70 00 04 00 00 00 00 10 53 29 A1 80 08 03
Buffer x0c7d9a40: Len x10000
0xDC 87 EB 41 6E AC 61 5A 07 B2 DB 78 B5 D4 D9 24
0x8D BC 51 38 46 56 0F EE 16 15 5C 5B E3 B0 10 16
0x14 B1 C3 6E 30 2B C4 78 15 AB D5 92 09 B7 81 23
10:43:42 AM #23 CDR -1502 File Writer.cpp, Line 306
DMA-driver error, CRC error
G: HL-DT-ST DVDRAM GSA-H12N
10:43:55 AM #24 Phase 38 File dlgbrnst.cpp, Line 1767
Burn process failed at 48x (7,200 KB/s)
10:43:55 AM #25 Text 0 File SCSIPTICommands.cpp, Line 287
SPTIDismountVolume - completed successfully for FSCTL_DISMOUNT_VOLUME
10:44:01 AM #26 Text 0 File Cdrdrv.cpp, Line 11412
DriveLocker: UnLockVolume completed
10:44:01 AM #27 Text 0 File SCSIPTICommands.cpp, Line 450
UnLockMCN - completed sucessfully for IOCTL_STORAGE_MCN_CONTROL

Existing drivers:
Registry Keys:
HKLM\Software\Microsoft\Windows NT\CurrentVersion\WinLogon

    Read the article

  • What would make a noise in a PC on graphics operations on a passively-cooled system?

    - by T.J. Crowder
    I have this system based on the Intel D510MO motherboard, which is basically an Atom D510 (dual-core HT Atom with built-in GPU), an Intel NM10 chipset, and a Realtek Gigabit LAN controller. It's entirely passively cooled. I noticed almost immediately that there was a kind of very, very soft noise that corresponded with graphics operations, sort of the noise you'd get if you had a sheet of flat paper and slid something really light across it — but more electronic than that. I wrote it off as observation error and/or disk activity triggered by the graphics operation (although the latter seemed like a lot of unnecessary disk activity). It isn't. I got curious enough that I finally did a few controlled experiments, and here's what I've determined:

It isn't the HDD. For one thing, the sounds the HDD makes (when seeking, when reading or writing, when just sitting there spinning) are different. For another, I used sudo hdparm -y /dev/sda (I'm using Ubuntu 10.04 LTS) to temporarily put the disk on standby while making sure that a non-disk graphics op was happening in a loop. The disk spun down, but the other sound continued, corresponding perfectly with the timing of the graphics op. (Then the disk spun up again, but it takes long enough that I could rule out the HDD.)
It isn't the monitor; I ensured the two were well physically separated and the sound was definitely coming from the main box.
It isn't something else in the room; the sound is coming from the box.
It isn't cross-talk to an audio circuit coming out the speakers. (It doesn't have any speakers.)
It isn't my mouse (e.g., when I'm trying to make graphics ops happen); the sound happens if I set up a recurring operation and don't use the mouse at all, or if I lift the mouse off the table slightly (but enough that the laser still registers movement).
It isn't the voices in my head; they never whisper like that.

Other observations:

It doesn't seem to matter what the graphics operation is; anything that changes what's on the screen seems to do it. I get the sound when moving the mouse over the Chromium tab bar (which makes the tab backgrounds change); I get it when a web page has a counter on it that changes the text on the page; I get it when dragging window contents around.
The sound is very, very slightly louder if the graphics op is larger, like scrolling a text area when writing a question on superuser.com, than for smaller operations like the tick counter on the web page. But it's very slight.
It's fairly loud (and of good duration) when the op involves color changes to substantial surface areas. For instance, when asking a question here on superuser, you move the cursor between the question box and the tag box, and the help to the right fades out, changes, and fades back in. (Yet another example related to the web browser, so let me say: I hear it with operations completely unrelated to the web browser as well.)
It doesn't sound like arcing or anything like that (I'd've shut off the machine Right Quick Like if it did).
Moving windows does it. Scrolling windows (by and large) doesn't.

I have the feeling I've heard this sort of thing before, when all system fans were on low and such, with other systems — but (again) written it off as observational error. For all the world it's like I'm hearing the CPU working (as opposed to the GPU; note the window scroll thing above) or data being transferred somewhere, but that just seems... unlikely. So what am I hearing?
This may seem like a very localized question, but perhaps other silent PC enthusiasts may be interested as well...

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61  | Next Page >