Search Results

Search found 51778 results on 2072 pages for 'super columns'.


  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply.

To get started, we'll create a demo data set and load the plyr package:

```r
set.seed(1)
d <- data.frame(year = rep(2000:2014, each = 3),
                count = round(runif(45, 0, 20)))
dim(d)
library(plyr)
```

This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame:

```r
# Example 1
res <- ddply(d, "year", function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(cv.count = cv)
})
```

To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

```r
D <- ore.push(d)
res <- ore.groupApply(D, D$year, function(x) {
  mean.count <- mean(x$count)
  sd.count <- sd(x$count)
  cv <- sd.count/mean.count
  data.frame(year = x$year[1], cv.count = cv)
}, FUN.VALUE = data.frame(year = 1, cv.count = 1))
```

You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result.

The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

```
R> class(res)
[1] "ore.frame"
attr(,"package")
[1] "OREbase"
R> head(res)
  year  cv.count
1 2000 0.3984848
2 2001 0.6062178
3 2002 0.2309401
4 2003 0.5773503
5 2004 0.3069680
6 2005 0.3431743
```

To make ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used.

The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.
```r
# Example 2
ddply(d, "year", summarise, mean.count = mean(count))

res <- ore.groupApply(D, D$year, function(x) {
  mean.count <- mean(x$count)
  data.frame(year = x$year[1], mean.count = mean.count)
}, FUN.VALUE = data.frame(year = 1, mean.count = 1))
```

```
R> head(res)
  year mean.count
1 2000   7.666667
2 2001  13.333333
3 2002  15.000000
4 2003   3.000000
5 2004  12.333333
6 2005  14.666667
```

Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, which is returned as an ore.frame.

```r
# Example 3
ddply(d, "year", transform, total.count = sum(count))

res <- ore.groupApply(D, D$year, function(x) {
  total.count <- sum(x$count)
  data.frame(year = x$year[1], count = x$count, total.count = total.count)
}, FUN.VALUE = data.frame(year = 1, count = 1, total.count = 1))
```

```
> head(res)
  year count total.count
1 2000     5          23
2 2000     7          23
3 2000    11          23
4 2001    18          40
5 2001     4          40
6 2001    18          40
```

In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

```r
# Example 4
ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
      cv = sigma/mu)

res <- ore.groupApply(D, D$year, function(x) {
  mu <- mean(x$count)
  sigma <- sd(x$count)
  cv <- sigma/mu
  data.frame(year = x$year[1], count = x$count, mu = mu, sigma = sigma, cv = cv)
}, FUN.VALUE = data.frame(year = 1, count = 1, mu = 1, sigma = 1, cv = 1))
```

```
R> head(res)
  year count        mu    sigma        cv
1 2000     5  7.666667 3.055050 0.3984848
2 2000     7  7.666667 3.055050 0.3984848
3 2000    11  7.666667 3.055050 0.3984848
4 2001    18 13.333333 8.082904 0.6062178
5 2001     4 13.333333 8.082904 0.6062178
6 2001    18 13.333333 8.082904 0.6062178
```

In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

```r
# Example 5
baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
x <- ddply(baseball.dat, c("year", "team"), summarize,
           homeruns = sum(hr))
```

We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we will explicitly sort these results and view the first 6 rows.

```r
BB.DAT <- ore.push(baseball)
BB.DAT$index <- with(BB.DAT, paste(year, team, sep = "+"))
BB.DAT2 <- subset(BB.DAT, year > 2000)
X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
  data.frame(year = x$year[1], team = x$team[1], homeruns = sum(x$hr))
}, FUN.VALUE = data.frame(year = 1, team = "A", homeruns = 1), parallel = FALSE)
res <- ore.sort(X, by = c("year", "team"))
```

```
R> head(res)
  year team homeruns
1 2001  ANA        4
2 2001  ARI      155
3 2001  ATL       63
4 2001  BAL       58
5 2001  BOS       77
6 2001  CHA       63
```

Our next example is derived from the ggplot function documentation and illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph will be returned to the client window, as depicted below.
```r
# Example 6 with ggplot2
library(ggplot2)
df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                 y = rnorm(30))
# Compute sample mean and standard deviation in each group
library(plyr)
ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))

# Set up a skeleton ggplot object and add layers:
ggplot() +
  geom_point(data = df, aes(x = gp, y = y)) +
  geom_point(data = ds, aes(x = gp, y = mean),
             colour = 'red', size = 3) +
  geom_errorbar(data = ds, aes(x = gp, y = mean,
                               ymin = mean - sd, ymax = mean + sd),
                colour = 'red', width = 0.4)

DF <- ore.push(df)
ore.tableApply(DF, function(df) {
  library(ggplot2)
  library(plyr)
  ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
  ggplot() +
    geom_point(data = df, aes(x = gp, y = y)) +
    geom_point(data = ds, aes(x = gp, y = mean),
               colour = 'red', size = 3) +
    geom_errorbar(data = ds, aes(x = gp, y = mean,
                                 ymin = mean - sd, ymax = mean + sd),
                  colour = 'red', width = 0.4)
})
```

But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

```r
df2 <- rbind(df, df, df)
df2$y <- df2$y + rnorm(nrow(df2))
df2$index <- c(rep(1, 30), rep(2, 30), rep(3, 30))  # df2 has 90 rows, 30 per partition
DF2 <- ore.push(df2)
res <- ore.groupApply(DF2, DF2$index, function(df) {
  df <- df[, 1:2]
  library(ggplot2)
  library(plyr)
  ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
  ggplot() +
    geom_point(data = df, aes(x = gp, y = y)) +
    geom_point(data = ds, aes(x = gp, y = mean),
               colour = 'red', size = 3) +
    geom_errorbar(data = ds, aes(x = gp, y = mean,
                                 ymin = mean - sd, ymax = mean + sd),
                  colour = 'red', width = 0.4)
}, parallel = TRUE)
res[[1]]
res[[2]]
res[[3]]
```

To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply itself can be used within an ore.groupApply call.

    Read the article

  • LEMP Stack on Ubuntu Server 13.04 not parsing PHP Switch Statement Properly

    - by schester
On my Ubuntu 12.04 Server LTS box running nginx 1.1.19, the following PHP code works properly:

```php
switch ($_SESSION['user']['permissions']) {
    case 9:  echo "Super Admin Privileges";   break;
    case 0:  echo "Operator Privileges";      break;
    case 1:  echo "Line Leader Privileges";   break;
    case 2:  echo "Supervisor Privileges";    break;
    case 3:  echo "Engineer Privileges";      break;
    case 4:  echo "Manager Privileges";       break;
    case 5:  echo "Administrator Privileges"; break;
    default: echo "Operator Privileges";
}
```

However, I have a backup server running Ubuntu Server 13.04 with nginx 1.4.1 which has the exact same copy of the script (synced), but instead of breaking on the break; statement, it echoes the rest of the PHP source. The output on the 12.04 box is similar to this:

```
You are logged in with Super Admin Privileges
```

But on the 13.04 box, the output is like this:

```
You are logged in logged in with Super Admin Privileges"; break; case 0: echo "Operator Privileges"; break; case 1: echo "Line Leader Privileges"; break; case 2: echo "Supervisor Privileges"; break; case 3: echo "Engineer Privileges"; break; case 4: echo "Manager Privileges"; break; case 5: echo "Administrator Privileges"; break; default: echo "Operator Privileges"; } ?>
```

I have also tried changing the script from a switch statement to if statements, but with the same results. Any idea what is wrong?

    Read the article

  • Lightest weight ubuntu desktop for text editing in big terminal windows?

    - by Kevin Pauli
I have an older Windows laptop onto which I'm installing Ubuntu within a VM. My goal is just to use terminal-based Linux tools such as vim and shell scripting. I don't give a hoot about any GUI for this box. So I first installed the Ubuntu minimal CD and chose "Basic Ubuntu Server". Upon boot, the text-based terminal came up and I logged in, but the problem is it only gives me 80 columns. I want to run terminal-mode vim with a couple hundred columns to take advantage of my large monitor. If you happen to know how to do that, please see my question here. This post assumes that the other question is not answerable, and that I will need a desktop to get more than 80 columns in a terminal window. So if that is the case, I want the lightest-weight one possible, because this is older hardware and all I want is the ability to have nice big text-based terminal windows for editing text. From the Ubuntu minimal CD, I see options for Edubuntu, Kubuntu, etc. Which of the available desktops would be a good choice for my needs?

    Read the article

  • Cannot run update due to a dpkg error with burg-theme-minimal-sir

    - by boywithaxe
I cannot run an update, or indeed run apt-get remove, due to a dpkg error with a package that is part of super-boot-manager. Running an update returns:

```
dpkg: error processing burg-theme-minimal-sir (--configure):
 subprocess installed post-installation script returned error exit status 1
```

I tried removing this package alone, with the same error. Trying to remove super-boot-manager returns:

```
(Reading database ... 225474 files and directories currently installed.)
Removing burg-theme-minimal-sir ...
Generating burg.cfg ...
/usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
No path or device is specified.
Try `/usr/sbin/burg-probe --help' for more information.
dpkg: error processing burg-theme-minimal-sir (--remove):
 subprocess installed post-removal script returned error exit status 1
No apport report written because MaxReports is reached already
Removing super-boot-manager ...
Processing triggers for bamfdaemon ...
Rebuilding /usr/share/applications/bamf.index...
Processing triggers for desktop-file-utils ...
Processing triggers for gnome-menus ...
Processing triggers for hicolor-icon-theme ...
Errors were encountered while processing:
 burg-theme-minimal-sir
E: Sub-process /usr/bin/dpkg returned an error code (1)
```

I'm stuck now and Google has failed me. Has anyone encountered this problem before, or does anyone know a way to fix it?

    Read the article

  • OWB 11gR2 – JDBC Helper Utility

    - by David Allan
One of the common queries when importing tables via JDBC with 11gR2 is determining why the import wizard doesn't display the tables that you think it should. I often just use the script below to dump out the schemas, tables and columns that the JDBC driver is returning. This is useful in a few areas:

- to figure out what schema name is returned, so you can double-check it against the schema name you have used in the location (this is used in the DatabaseMetaData.getTables API call within the basic JDBC metadata import)
- to figure out the data types returned from the JDBC driver when you see columns skipped because of "no datatype supported" messages
- also... I can do it via scripting and don't need to recompile classes and stuff :-)

Edit the tcl script and set the JDBC driver, the connection URL and the username and password (they are at the bottom of the script). The script then calls a basic tcl procedure which writes to standard out the schemas, tables and columns with various properties. For example, I executed it using the XML JDBC driver from ODI over a simple customers XML file and it writes the following metadata. You can add more details as you need and execute it from the OMBPlus panel within OWB. Download the sample tcl jdbc script here.

There is a bunch of really useful stuff on OTN documenting this area (start with the white paper here) that is worth checking out, all related to the OWB SDK, covering everything from platform definitions, custom metadata importers, application adapters, code templates etc. You can find a bunch of goodies on the OWB SDK here.
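The post's script is tcl run through OMBPlus; for readers who want to check the same thing outside OWB, here is a minimal Java sketch of the same idea, dumping what DatabaseMetaData reports. The driver URL and credentials are placeholders you would replace for your own source.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Dumps the schemas, tables and columns a JDBC driver reports,
// mirroring what the OWB import wizard sees via DatabaseMetaData.
public class JdbcMetadataDump {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your driver, URL and credentials.
        String url = "jdbc:oracle:thin:@//localhost:1521/ORCL";
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
            DatabaseMetaData md = conn.getMetaData();
            // List schemas first, to check against the name used in the OWB location.
            try (ResultSet schemas = md.getSchemas()) {
                while (schemas.next()) {
                    System.out.println("SCHEMA: " + schemas.getString("TABLE_SCHEM"));
                }
            }
            // Then tables and their columns, with the data type the driver reports.
            try (ResultSet tables = md.getTables(null, null, "%", new String[] {"TABLE"})) {
                while (tables.next()) {
                    String schema = tables.getString("TABLE_SCHEM");
                    String table = tables.getString("TABLE_NAME");
                    System.out.println("TABLE: " + schema + "." + table);
                    try (ResultSet cols = md.getColumns(null, schema, table, "%")) {
                        while (cols.next()) {
                            System.out.println("  COLUMN: " + cols.getString("COLUMN_NAME")
                                    + " type=" + cols.getString("TYPE_NAME"));
                        }
                    }
                }
            }
        }
    }
}
```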

    Read the article

  • Metaphor for task synchronization [closed]

    - by nkint
I'm looking for a metaphor. A friend of mine taught me to use metaphors from nature, everyday life, and math, and to use them to design my projects. They can help in creating a better design or a better understanding of the problem, and they are cool. Now I'm working on a project with hardware and micro-controllers in C. For convenience, I have decided to use multiple micro-controllers as co-processor units for real-time work (the slaves) and a master. This has saved me a lot of headache: I can code the main logic in the master without paying too much attention to super-optimizing everything; I don't care if I need some blocking call; I don't worry about serial communication with the computer. I just send messages to the slaves and they are super fast and super real-time. I like my design and it seems to work well. So here are the important concepts that I'm trying to capture in the metaphor:

- hierarchy of processing
- not using one big brain but rather several small, distributed brain units
- using distributed power or resources

I'm looking for a good metaphor for this concept of having one unit synchronize the work of all the others. Preferably, the metaphor would come from nature, biology, or zoology.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
Deep in the trenches of ODI development, I raised my head above the parapet to read a few odds and ends and then thought: why don't they know this? Such as this article here – in the past, customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that:

- improves the out-of-the-box experience – just build the mapping and the appropriate KM is used
- improves out-of-the-box performance for file-to-file data movement

This improvement for out-of-the-box handling of file-to-file data integration cases (from the 11.1.1.5.2 companion CD and on) dramatically speeds up the file integration handling. In the past I had seen some consultants write perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here.

The KM uses pure Java to perform the integration, using java.io classes to read and write the file in a pipe. It uses Java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems.

So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles Java), and I also increased the thread count to 2, to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3 GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) – by no means was this on any super computer, by the way. The great thing here is that it worked well out of the box, from design to execution, without any funky configuration; plus, and it's a big plus, it was much faster than before. So if you are doing any file-to-file transformations, check it out!
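The IKM's internals aren't shown in the post, so the following is only a rough sketch of the java.io pipe-plus-threads pattern it describes: one thread reads the source file into a pipe while a second thread drains the pipe to the target. File names and buffer sizes are invented for illustration; this is not the actual KM code.

```java
import java.io.*;

// Minimal sketch of a piped, threaded file copy: a reader thread feeds
// a pipe from the source file while a writer thread drains it to the target.
public class PipedFileCopy {
    public static void main(String[] args) throws Exception {
        final PipedOutputStream pipeOut = new PipedOutputStream();
        final PipedInputStream pipeIn = new PipedInputStream(pipeOut, 64 * 1024);

        Thread reader = new Thread(() -> {
            try (InputStream in = new BufferedInputStream(new FileInputStream("source.dat"));
                 OutputStream out = pipeOut) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n); // feed the pipe as we read
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        Thread writer = new Thread(() -> {
            try (InputStream in = pipeIn;
                 OutputStream out = new BufferedOutputStream(new FileOutputStream("target.dat"))) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n); // drain the pipe to the target
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });

        reader.start();
        writer.start();
        reader.join();
        writer.join();
    }
}
```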

    Read the article

  • Pause and Resume and get the value of a countdown timer by savedInstanceState [closed]

    - by Catherine grace Balauro
I have developed a countdown timer and I am not sure how to pause and resume the timer as the TextView for the timer is being clicked. Click to start, then click again to pause, and to resume, click the timer's TextView again. This is my code:

```java
Timer = (TextView) this.findViewById(R.id.time); // TIMER
Timer.setOnClickListener(TimerClickListener);
counter = new MyCount(600000, 1000);
} // end of onCreate

private OnClickListener TimerClickListener = new OnClickListener() {
    public void onClick(View v) {
        updateTimeTask();
    }

    private void updateTimeTask() {
        if (decision == 0) {
            counter.start();
            decision = 1;
        } else if (decision == 2) {
            counter.onResume1();
            decision = 1;
        } else {
            counter.onPause1();
            decision = 2;
        } // end if
    }
};

class MyCount extends CountDownTimer {
    public MyCount(long millisInFuture, long countDownInterval) {
        super(millisInFuture, countDownInterval);
    } // MyCount

    public void onResume1() {
        onResume();
    }

    public void onPause1() {
        onPause();
    }

    public void onFinish() {
        Timer.setText("00:00");
        p1++;
        if (p1 <= 4) {
            TextView PScore = (TextView) findViewById(R.id.pscore);
            PScore.setText(p1 + "");
        } // end if
    } // finish

    public void onTick(long millisUntilFinished) {
        Integer milisec = new Integer(new Double(millisUntilFinished).intValue());
        Integer cd_secs = milisec / 1000;
        Integer minutes = (cd_secs % 3600) / 60;
        Integer seconds = (cd_secs % 3600) % 60;
        Timer.setText(String.format("%02d", minutes) + ":" + String.format("%02d", seconds));
        // long timeLeft = millisUntilFinished / 1000;
    } // on tick
} // class MyCount

protected void onResume() {
    super.onResume();
    // handler.removeCallbacks(updateTimeTask);
    // handler.postDelayed(updateTimeTask, 1000);
} // onResume

@Override
protected void onPause() {
    super.onPause();
    // do stuff
} // onPause
```

I am only a beginner in Android programming and I don't know how to get the value of the countdown timer using savedInstanceState. How do I do this?
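For context, CountDownTimer has no built-in pause. A common workaround, sketched below under the assumption that the remaining time is tracked in onTick, is to cancel() the timer on pause, remember the milliseconds left, and create a fresh timer from that value on resume; the same saved value can go into the savedInstanceState Bundle. The field and method names here are illustrative, not part of the question's code.

```java
// Sketch: pause/resume a CountDownTimer by cancelling and recreating it.
// millisLeft is captured on every tick; writing it into the instance-state
// Bundle also answers the savedInstanceState part of the question.
private long millisLeft = 600000; // remaining time, updated every tick
private CountDownTimer timer;

private void startTimer(long fromMillis) {
    timer = new CountDownTimer(fromMillis, 1000) {
        @Override
        public void onTick(long millisUntilFinished) {
            millisLeft = millisUntilFinished; // remember where we are
            // update the TextView here
        }

        @Override
        public void onFinish() {
            millisLeft = 0;
        }
    };
    timer.start();
}

private void pauseTimer() {
    if (timer != null) {
        timer.cancel(); // CountDownTimer cannot pause, so we cancel...
    }
}

private void resumeTimer() {
    startTimer(millisLeft); // ...and restart from the saved remainder
}

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putLong("millisLeft", millisLeft); // read it back in onCreate
}
```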

    Read the article

  • BI Publisher : Formatting Issues

    - by Manoj Madhusoodanan
While creating BI Publisher reports, formatting issues are quite common. Here I am discussing some common issues related to BIP report development.

1) The first issue is related to column formatting. When you want to display data which has leading zeros, or trailing zeros after the '.', in EXCEL output, you will not get the desired output; in PDF it will come out as you expect. This is not an issue with your data. It is due to the unique nature of the EXCEL cell format: when you put text data in a cell without making any change to the cell format, EXCEL treats it as a number and truncates all leading zeros and all trailing zeros after the '.'. So what you have to do is convert that data into a format which EXCEL treats as text.

Eg: If you want to display 0020100, convert this data into ="0020100". Same way, 23789.02300 becomes ="23789.02300".

Note: This is applicable to EXCEL output only. If you have multiple output types, apply it only for EXCEL.

2) The second is related to a report size issue with the PDF output type. If the number of columns is large and you want to show most of the columns in one row, and it is a PDF output, you can choose the paper size Legal (8.5 x 14''). You will get more space in the template to accommodate more columns.

3) If your XML data contains special characters like &, <, > etc., pass the data to the DBMS_XMLGEN.CONVERT function. It will replace special characters with the corresponding XML notations.

Eg: (a>b) & (c!=d) becomes (a&gt;b) &amp; (c!=d)
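If the data is produced outside the database, the same escaping can be done in application code. Here is a minimal, hypothetical Java helper mirroring what DBMS_XMLGEN.CONVERT does for the three characters mentioned above; it is a sketch, not part of any BI Publisher API.

```java
// Minimal sketch: escape the XML special characters the post mentions,
// mirroring the effect of DBMS_XMLGEN.CONVERT for &, < and >.
public final class XmlEscape {
    public static String escape(String s) {
        // '&' must be replaced first, or we would re-escape our own entities.
        return s.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(escape("(a>b) & (c!=d)")); // prints (a&gt;b) &amp; (c!=d)
    }
}
```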

    Read the article

  • How do I work around an HTML Table rendering bug in IE 7?

    - by osmaniac
I have a table. Some cells span multiple columns. Some cells span multiple rows and columns. But one row (which spans all columns but the rightmost one) creates an artifact: part of the text in the cell is erroneously repeated, left-justified, on a new row just below the table. I'm baffled. I tried rendering with and without "table-layout: fixed;", with the same result.

When I originally composed the design using just HTML and CSS, it looked great. But then I worked it into a page and had to add more columns on the right of my master table to hold buttons. These buttons are in three groups, each having their own div to control floating and rewrapping when the window gets narrower. One div has another table inside it that groups a single row of buttons. Thus I have a table inside a div inside a td inside the outer table. I would prefer a simpler design, but how? This is what I want to have:

```
...................................................................................
.                               .                                                 .
. Four rows of data             . Three groups of buttons that can reflow        .
. with several columns          . if window gets narrower                        .
. meticulously layed out,       .                                                .
. that should not resize        .                                                .
. when window gets narrower     .                                                .
...................................................................................
. One more row of data spanning the whole screen which stays below the buttons   .
...................................................................................
```

What I was doing was putting the three divs with the buttons in the upper right in a single cell that spanned four rows. What other opportunities does CSS offer? The buttons are not allowed to overlap the data on the left or go past the data line below. The original design had the divs with the buttons NOT in a table with the data, but when the window gets narrow, some of the buttons flow such that they go underneath the data, which looks bad.

I would post actual HTML, except it is generated by ASP, huge, with lots of CSS styling, and the feature that lets me view the final HTML is not working at the moment. (Built-in security in the application.)

    Read the article

  • Generic styles for DataGridTemplateColumn Headers & Cells

    - by user557765
I am struggling to define templates for my DataGrid columns. Here is the code that I have working at the moment:

```xml
<t:DataGrid.Columns>
    <t:DataGridTemplateColumn Width="75">
        <t:DataGridTemplateColumn.HeaderTemplate>
            <DataTemplate>
                <TextBlock Style="{StaticResource FieldNameVertical}" Text="Date" />
            </DataTemplate>
        </t:DataGridTemplateColumn.HeaderTemplate>
        <t:DataGridTemplateColumn.CellTemplate>
            <DataTemplate>
                <TextBlock Style="{StaticResource FieldValue}"
                           Text="{Binding ModifiedDate, StringFormat='{}{0:MM/dd/yyyy}'}" />
            </DataTemplate>
        </t:DataGridTemplateColumn.CellTemplate>
    </t:DataGridTemplateColumn>
    ...
</t:DataGrid.Columns>
```

I would like to define HeaderTemplate and CellTemplate as reusable styles, so that each column would be as brief as something like this:

```xml
<t:DataGrid.Resources>
    <DataTemplate x:Key="dgHeaderStyle">
        <TextBlock Style="{StaticResource FieldNameVertical}" Text="{Binding}" />
    </DataTemplate>
    <DataTemplate x:Key="dgCellStyle">
        <TextBlock Style="{StaticResource FieldValue}" Text="{Binding}" />
    </DataTemplate>
</t:DataGrid.Resources>

<t:DataGrid.Columns>
    <t:DataGridTemplateColumn Width="75" Header="Date"
                              Binding="{Binding ModifiedDate, StringFormat='{}{0:MM/dd/yyyy}'}"
                              HeaderTemplate="{StaticResource dgHeaderStyle}"
                              CellTemplate="{StaticResource dgCellStyle}" />
    <t:DataGridTemplateColumn Width="100" Header="Dealer"
                              HeaderTemplate="{StaticResource dgHeaderStyle}"
                              CellTemplate="{StaticResource dgCellStyle}" />
    ...
</t:DataGrid.Columns>
```

Every attempt I make has failed. I had hoped to implement something like the "solution" snippet in the initial entry of WPF DataGrid HeaderTemplate Mysterious Padding. However, I can't seem to adapt it to what I'm doing.

    Read the article

  • Semi-complex aggregate select statement confusion

    - by Ian Henry
Alright, this problem is a little complicated, so bear with me. I have a table full of data. One of the table columns is an EntryDate. There can be multiple entries per day. However, I want to select all rows that are the latest entry on their respective days, and I want to select all the columns of said table. One of the columns is a unique identifier column, but it is not the primary key (I have no idea why it's there; this is a pretty old system). For purposes of demonstration, say the table looks like this:

```sql
create table ExampleTable (
    ID int identity(1,1) not null,
    PersonID int not null,
    StoreID int not null,
    Data1 int not null,
    Data2 int not null,
    EntryDate datetime not null
)
```

The primary key is on PersonID and StoreID, which logically defines uniqueness. Now, like I said, I want to select all the rows that are the latest entries on that particular day (for each Person-Store combination). This is pretty easy:

```sql
--Figure 1
select PersonID, StoreID, max(EntryDate)
from ExampleTable
group by PersonID, StoreID, dbo.dayof(EntryDate)
```

Where dbo.dayof() is a simple function that strips the time component from a datetime. However, doing this loses the rest of the columns! I can't simply include the other columns, because then I'd have to group by them, which would produce the wrong results (especially since ID is unique). I have found a dirty hack that will do what I want, but there must be a better way. Here's my current solution:

```sql
select cast(null as int) as ID, PersonID, StoreID,
       cast(null as int) as Data1, cast(null as int) as Data2,
       max(EntryDate) as EntryDate
into #StagingTable
from ExampleTable
group by PersonID, StoreID, dbo.dayof(EntryDate)

update Target
set ID = Source.ID,
    Data1 = Source.Data1,
    Data2 = Source.Data2
from #StagingTable as Target
inner join ExampleTable as Source
    on Source.PersonID = Target.PersonID
    and Source.StoreID = Target.StoreID
    and Source.EntryDate = Target.EntryDate
```

This gets me the correct data in #StagingTable but, well, look at it! Creating a table with null values, then doing an update to get the values back; surely there's a better way to do this? A single statement that will get me all the values the first time? It is my belief that the correct join on that original select (Figure 1) would do the trick, like a self-join or something... but how do you do that with the group by clause? I cannot find the right syntax to make the query execute. I am pretty new with SQL, so it's likely that I'm missing something obvious. Any suggestions? (Working in T-SQL, if it makes any difference.)

    Read the article

  • Android: Using onStart() method in Bluetooth application

    - by Nii
Hello, I am getting a NullPointerException when my onStart() method is called. Here is the breakdown of my Android app: opening the app brings the user to the home screen, where they are presented with the first 6 icons to choose from. When the user presses the "Sugar" icon, it takes them to the SugarTabActivity. The SugarTabActivity is a tabbed layout with two tabs; I'm concerned with the first tab. The first tab calls the getDefaultAdapter() method in its onCreate() method. Once it calls this, it checks if the Bluetooth adapter is null on the phone, and if it is null it shows a toast saying "Bluetooth is not available". This works just fine. Then the onStart() method is called. In the onStart() method I check if Bluetooth is enabled, and if it isn't, I start a new activity from the BluetoothAdapter enable-Bluetooth intent; otherwise, I start my Bluetooth service. The exact error I'm getting is:

```
04-19 00:44:45.674: ERROR/AndroidRuntime(225): Caused by: java.lang.NullPointerException
04-19 00:44:45.674: ERROR/AndroidRuntime(225):     at com.nii.glucose.Glucose.onStart(Glucose.java:313)
```

```java
@Override
public void onCreate(Bundle icicle) {
    super.onCreate(icicle);
    if (D) Log.d(TAG, "+++ ON CREATE +++");
    setContentView(R.layout.glucose_layout);
    mBluetoothAdapter = BluetoothAdapter.getDefaultAdapter();
    if (mBluetoothAdapter == null) {
        Toast.makeText(this, "Bluetooth not available", Toast.LENGTH_LONG).show();
        //finish();
        return;
    }
}

@Override
public void onStart() {
    super.onStart();
    if (D) Log.e(TAG, "++ ON START ++");
    // If BT is not on, request that it be enabled.
    // setupChat() will then be called during onActivityResult
    if (!mBluetoothAdapter.isEnabled()) {
        Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
        startActivityForResult(enableIntent, REQUEST_ENABLE_BT);
    // Otherwise, setup the chat session
    } else {
        if (mGlucoseService == null)
            mGlucoseService = new BluetoothService(this, mHandler);
    }
}

@Override
public synchronized void onResume() {
    super.onResume();
    if (D) Log.e(TAG, "==== ON RESUME ======");
    // Performing this check in onResume() covers the case in which BT was
    // not enabled during onStart(), so we were paused to enable it...
    // onResume() will be called when ACTION_REQUEST_ENABLE activity returns.
    if (mGlucoseService != null) {
        // Only if the state is STATE_NONE, do we know that we haven't started already
        if (mGlucoseService.getState() == BluetoothService.STATE_NONE) {
            // Start the Bluetooth chat services
            mGlucoseService.start();
        }
    }
}

@Override
public synchronized void onPause() {
    super.onPause();
    //isActive.set(false);
    if (D) Log.e(TAG, "==== ON PAUSE ======");
}

@Override
public void onDestroy() {
    super.onDestroy();
    // Stop the Bluetooth chat services
    if (mGlucoseService != null) mGlucoseService.stop();
    if (D) Log.e(TAG, "--- ON DESTROY ---");
}
```
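One plausible reading of the excerpt (an assumption on my part, since only the question is shown here): when getDefaultAdapter() returns null, onCreate() merely returns after the toast, but the activity keeps running, so onStart() still executes and dereferences the null mBluetoothAdapter at the isEnabled() call. A minimal guard would look like this:

```java
@Override
public void onStart() {
    super.onStart();
    // Guard: onStart() runs even when onCreate() bailed out early,
    // so mBluetoothAdapter may still be null on devices without Bluetooth.
    if (mBluetoothAdapter == null) {
        finish(); // or disable the Bluetooth-dependent UI instead
        return;
    }
    if (!mBluetoothAdapter.isEnabled()) {
        Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE);
        startActivityForResult(enableIntent, REQUEST_ENABLE_BT);
    } else if (mGlucoseService == null) {
        mGlucoseService = new BluetoothService(this, mHandler);
    }
}
```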

    Read the article

  • Page expired issue with back button and wicket SortableDataProvider and DataTable

    - by David
Hi, I've got an issue with SortableDataProvider and DataTable in Wicket. I've defined my DataTable as such:

```java
IColumn<Column>[] columns = new IColumn[9];
// column values are mapped to the private attributes listed in ColumnImpl.java
columns[0] = new PropertyColumn(new Model("#"), "columnPosition", "columnPosition");
columns[1] = new PropertyColumn(new Model("Description"), "description");
columns[2] = new PropertyColumn(new Model("Type"), "dataType", "dataType");
```

Adding it to the table:

```java
DataTable<Column> dataTable = new DataTable<Column>("columnsTable", columns, provider, maxRowsPerPage) {
    @Override
    protected Item<Column> newRowItem(String id, int index, IModel<Column> model) {
        return new OddEvenItem<Column>(id, index, model);
    }
};
```

My data provider:

```java
public class ColumnSortableDataProvider extends SortableDataProvider<Column> {

    private static final long serialVersionUID = 1L;
    private List list = null;

    public ColumnSortableDataProvider(Table table, String sortProperty) {
        this.list = Arrays.asList(table.getColumns().toArray(new Column[0]));
        setSort(sortProperty, true);
    }

    public ColumnSortableDataProvider(List list, String sortProperty) {
        this.list = list;
        setSort(sortProperty, true);
    }

    @Override
    public Iterator iterator(int first, int count) {
        /* first - first row of data
           count - minimum number of elements to retrieve
           So this method returns an iterator capable of iterating
           over {first, first+count} items */
        Iterator iterator = null;
        try {
            if (getSort() != null) {
                Collections.sort(list, new Comparator() {
                    private static final long serialVersionUID = 1L;

                    @Override
                    public int compare(Column c1, Column c2) {
                        int result = 1;
                        PropertyModel<Comparable> model1 = new PropertyModel<Comparable>(c1, getSort().getProperty());
                        PropertyModel<Comparable> model2 = new PropertyModel<Comparable>(c2, getSort().getProperty());
                        if (model1.getObject() == null && model2.getObject() == null)
                            result = 0;
                        else if (model1.getObject() == null)
                            result = 1;
                        else if (model2.getObject() == null)
                            result = -1;
                        else
                            result = ((Comparable) model1.getObject()).compareTo(model2.getObject());
                        result = getSort().isAscending() ? result : -result;
                        return result;
                    }
                });
            }
            if (list.size() > (first + count))
                iterator = list.subList(first, first + count).iterator();
            else
                iterator = list.iterator();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return iterator;
    }
```

The problem is the following:

- I click a column header to sort by that column.
- I navigate to a different page.
- I click Back (or Forward, if I do the opposite scenario).
- Page has expired.

It'd be nice to generate the page using PageParameters, but I somehow need to intercept the sort event to do so. Any pointers would be greatly appreciated. Thanks a ton!! David

    Read the article

  • iPhone: didSelectRowAtIndexPath not invoked

    - by soletan
Hi, I know this issue has been mentioned before, but the resolutions there didn't apply. I have a UINavigationController with an embedded UITableViewController set up using IB. In IB, the UITableView's delegate and dataSource are both set to my derivation of UITableViewController. This class was added using Xcode's templates for UITableViewController classes. There is no custom UITableViewCell, and the table view uses the default plain style with a single title only. In the simulator the list is rendered properly, with two elements provided by the dataSource, so the dataSource is linked properly. If I remove the outlet link for dataSource in IB, an empty table is rendered instead. As soon as I tap one of these two items, it flashes blue and GDB encounters an interruption in __forwarding__ in the scope of UITableView::_selectRowAtIndexPath. It never reaches the breakpoint set in my non-empty method didSelectRowAtIndexPath. I checked the arguments and the method's name to exclude typos resulting in a different selector. I haven't yet verified whether the delegate is set properly, but as it is set the same way as the dataSource, which is getting two elements from the same class, I expect it to be set properly. So, what's wrong? I'm running the iPhone/iPad SDK 3.1.2... but tried with iPhone SDK 3.1 in the simulator as well.

EDIT: This is the code of my UITableViewController derivation:

```objc
#import "LocalBrowserListController.h"
#import "InstrumentDescriptor.h"

@implementation LocalBrowserListController

- (void)viewDidLoad {
    [super viewDidLoad];
    [self listLocalInstruments];
}

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
}

- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
}

- (void)viewDidUnload {
    [super viewDidUnload];
}

- (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
    return 1;
}

- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
    return [entries count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil)
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
    if (([entries count] > 0) && ([indexPath length] > 0))
        cell.textLabel.text = [[[entries objectAtIndex:[indexPath indexAtPosition:[indexPath length] - 1]] label] retain];
    return cell;
}

- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
    if (([entries count] > 0) && ([indexPath length] > 0)) {
        ...
    }
}

- (void)dealloc {
    [super dealloc];
}

- (void)listLocalInstruments {
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:10];
    [result addObject:[InstrumentDescriptor descriptorOn:[[NSBundle mainBundle] pathForResource:@"example" ofType:@"idl"] withLabel:@"Default 1"]];
    [result addObject:[InstrumentDescriptor descriptorOn:[[NSBundle mainBundle] pathForResource:@"example" ofType:@"xml"] withLabel:@"Default 2"]];
    [entries release];
    entries = [[NSArray alloc] initWithArray:result];
}

@end
```

    Read the article

  • OperationalError: foreign key mismatch

    - by Niek de Klein
I have two tables that I'm filling, 'msrun' and 'feature'. 'feature' has a foreign key pointing to the 'msrun_name' column of the 'msrun' table. Inserting into the tables works fine, but when I try to delete from the 'feature' table I get the following error:

```
pysqlite2.dbapi2.OperationalError: foreign key mismatch
```

From the rules for foreign keys in the SQLite manual, this error occurs when:

- The parent table does not exist, or
- The parent key columns named in the foreign key constraint do not exist, or
- The parent key columns named in the foreign key constraint are not the primary key of the parent table and are not subject to a unique constraint using the collating sequence specified in the CREATE TABLE, or
- The child table references the primary key of the parent without specifying the primary key columns, and the number of primary key columns in the parent does not match the number of child key columns.

I can see nothing that I'm violating. My create tables look like this:

```sql
DROP TABLE IF EXISTS `msrun`;
-- -----------------------------------------------------
-- Table `msrun`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `msrun` (
  `msrun_name` VARCHAR(40) PRIMARY KEY NOT NULL,
  `description` VARCHAR(500) NOT NULL
);

DROP TABLE IF EXISTS `feature`;
-- -----------------------------------------------------
-- Table `feature`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `feature` (
  `feature_id` VARCHAR(40) PRIMARY KEY NOT NULL,
  `intensity` DOUBLE NOT NULL,
  `overallquality` DOUBLE NOT NULL,
  `charge` INT NOT NULL,
  `content` VARCHAR(45) NOT NULL,
  `msrun_msrun_name` VARCHAR(40) NOT NULL,
  CONSTRAINT `fk_feature_msrun1`
    FOREIGN KEY (`msrun_msrun_name`)
    REFERENCES `msrun` (`msrun_name`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION
);

CREATE UNIQUE INDEX `id_UNIQUE` ON `feature` (`feature_id` ASC);
CREATE INDEX `fk_feature_msrun1` ON `feature` (`msrun_msrun_name` ASC);
```

As far as I can see, the parent table exists, the foreign key points to the right parent key, the parent key is a primary key, and the foreign key specifies the primary key column. The script that produces the error:

```python
from pysqlite2 import dbapi2 as sqlite
import parseFeatureXML

connection = sqlite.connect('example.db')
cursor = connection.cursor()
cursor.execute("PRAGMA foreign_keys=ON")

inputValues = ('example', 'description')
cursor.execute("INSERT INTO `msrun` VALUES(?, ?)", inputValues)

featureXML = parseFeatureXML.Reader('../example_scripts/example_files/input/featureXML_example.featureXML')
for feature in featureXML.getSimpleFeatureInfo():
    inputValues = (featureXML['id'], featureXML['intensity'],
                   featureXML['overallquality'], featureXML['charge'],
                   featureXML['content'], 'example')
    # insert the values into feature, using ? placeholders for SQL injection safety
    cursor.execute("INSERT INTO `feature` VALUES(?,?,?,?,?,?)", inputValues)
connection.commit()

for feature in featureXML.getSimpleFeatureInfo():
    cursor.execute("DELETE FROM `feature` WHERE feature_id = ?", (str(featureXML['id']),))
```

    Read the article

  • Excel string manipulation to check data consistency

    - by chefsmart
Background information: there are nearly 7000 individuals, with data about their performances in one, two, or three tests. Every individual has taken the 1st test (let's call it Test M). Some of those who have taken Test M have also taken Test I, and some of those who have taken Test I have also taken Test B. For the first two tests (M and I), students can score grades I, II, or III. Depending on the grades they are awarded points: 3 for grade I, 2 for grade II, 1 for grade III. The last Test B has just a pass or fail result with no grades. Those passing this test get 1 point, with no points for failure. (Well actually, grades are awarded, but all grades are given a common 1 point.)

An amateur has entered data to represent these students and their grades in an Excel file. The problem is, this person has done the worst thing possible: he has developed his own notation, entered all test information in a single cell, and made my life hell. The file originally had two text columns, one for an individual's id, and the second for test info, if one could call it that. It's horrible, I know, and I am suffering. In the image, if you see "M-II-2 I-III-1" it means the person got grade II in Test M for 2 points and grade III in Test I for 1 point. Some have taken only one test, some two, and some three. When the file came to me for processing and analyzing the performance of students, I sent it back with instructions to insert 3 additional columns with only the grades for the three tests. The file now looks as follows: columns C and D represent grades I, II, and III using 1, 2 and 3 respectively. Column C is for Test M, column D for Test I. Column E says BA (B Achieved!) if the individual has passed Test B.

Now that you have the above information, let's get to the problem. I don't trust this data and want to check whether the data in column B matches the data in columns C, D and E. That is, I want to examine the string in column B and find out whether the figures in columns C, D and E are correct. All help is really appreciated.

P.S. - I had exported this to MySQL via ODBC, and that is why you are seeing those NULLs. I tried doing this in MySQL too, and will really accept either a MySQL or an Excel solution; I don't have a preference.

Edit: see file with sample data.

    Read the article

  • Add new row to asp .net grid view using button

    - by SARAVAN
Hi, I am working in ASP.NET 2.0. I am a learner. I have a GridView which has a button in it. Please find the ASP markup below:

```aspx
<form id="form1" runat="server">
  <div>
    <asp:GridView ID="myGridView" runat="server">
      <Columns>
        <asp:TemplateField>
          <ItemTemplate>
            <asp:Button CommandName="AddARowBelow" Text="Add A Row Below" runat="server" />
          </ItemTemplate>
        </asp:TemplateField>
      </Columns>
    </asp:GridView>
  </div>
</form>
```

Please find the code-behind below:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Data;
using System.Web.UI.WebControls;

namespace GridViewDemo
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            DataTable dt = new DataTable("myTable");
            dt.Columns.Add("col1");
            dt.Columns.Add("col2");
            dt.Columns.Add("col3");
            dt.Rows.Add(1, 2, 3);
            dt.Rows.Add(1, 2, 3);
            dt.Rows.Add(1, 2, 3);
            dt.Rows.Add(1, 2, 3);
            dt.Rows.Add(1, 2, 3);
            myGridView.DataSource = dt;
            myGridView.DataBind();
        }

        protected void myGridView_RowCommand(object sender, GridViewCommandEventArgs e)
        {
        }
    }
}
```

I was thinking that when I click the command button, it would fire myGridView_RowCommand(), but instead it threw the following error:

```
Invalid postback or callback argument. Event validation is enabled using <pages enableEventValidation="true"/>
in configuration or <%@ Page EnableEventValidation="true" %> in a page. For security purposes, this feature
verifies that arguments to postback or callback events originate from the server control that originally
rendered them. If the data is valid and expected, use the ClientScriptManager.RegisterForEventValidation
method in order to register the postback or callback data for validation.
```

Can anyone let me know where I am going wrong?

    Read the article

  • Why is the W3C box model considered better?

    - by Mel
Why do most developers consider the W3C box model to be better than the box model used by Internet Explorer? I know it's very frustrating developing pages that look the way you want them to in Internet Explorer, but I find the W3C box model counter-intuitive. For example, if margins, padding, and border were factored into the width, I could assign fixed width values to all my columns without having to worry about how many columns I have and what changes I make to my padding and margins. Under the W3C box model I have to worry about how many columns I have, and develop something akin to a mathematical formula to calculate my values. Changing them would be nightmarish, especially for complex layouts. My novice opinion is that it's more restrictive. Consider this small framework I wrote:

```css
#content {
    margin: 0 auto 30px auto;
    padding: 0 30px 30px 30px;
    width: 900px;
}
#content .column {
    float: left;
    margin: 0 20px 20px 20px;
}
#content .first {
    margin-left: 0;
}
#content .last {
    margin-right: 0;
}
.width_1-4 { width: 195px; }
.width_1-3 { width: 273px; }
.width_1-2 { width: 430px; }
.width_3-4 { width: 645px; }
.width_1-1 { width: 900px; }
```

The values I have assigned here will falter unless there are three columns, with the margins at 0 (first) + 20 + 20 + 20 + 20 + 0 (last). It would be a disaster if I wanted to add padding to my columns too, as my entire setup would have to be recalibrated. Now imagine if the column width incorporated all the other elements: all I would need to do is change one associated value, and I have my layout. I'm less criticizing it and more hoping to understand why it's better, or why I'm finding it more difficult. Am I doing this whole thing wrong? I don't know very much about this topic, but it seems counter-intuitive to use the W3C box model. Some advice would be really appreciated. Thanks

    Read the article

  • What is the best WebControl to create this

    - by balexandre
Current output / wanted output: (screenshots)

Current code:

```csharp
public partial class test : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
            populateData();
    }

    private void populateData()
    {
        List<temp> ls = new List<temp>();
        ls.Add(new temp { a = "AAA", b = "aa", c = "a", dt = DateTime.Now });
        ls.Add(new temp { a = "BBB", b = "bb", c = "b", dt = DateTime.Now });
        ls.Add(new temp { a = "CCC", b = "cc", c = "c", dt = DateTime.Now.AddDays(1) });
        ls.Add(new temp { a = "DDD", b = "dd", c = "d", dt = DateTime.Now.AddDays(1) });
        ls.Add(new temp { a = "EEE", b = "ee", c = "e", dt = DateTime.Now.AddDays(2) });
        ls.Add(new temp { a = "FFF", b = "ff", c = "f", dt = DateTime.Now.AddDays(2) });

        TemplateField tc = (TemplateField)gv.Columns[0]; // <-- want to assign here just day
        gv.Columns.Add(tc);                              // <-- want to assign here just day + 1
        gv.Columns.Add(tc);                              // <-- want to assign here just day + 2

        gv.DataSource = ls;
        gv.DataBind();
    }
}

public class temp
{
    public temp() { }
    public string a { get; set; }
    public string b { get; set; }
    public string c { get; set; }
    public DateTime dt { get; set; }
}
```

And in HTML:

```aspx
<asp:GridView ID="gv" runat="server" AutoGenerateColumns="false">
  <Columns>
    <asp:TemplateField>
      <ItemTemplate>
        <asp:Label ID="Label1" runat="server" Text='<%# Eval("a") %>' Font-Bold="true" /><br />
        <asp:Label ID="Label2" runat="server" Text='<%# Eval("b") %>' Font-Italic="true" /><br />
        <asp:Label ID="Label3" runat="server" Text='<%# Eval("dt") %>' />
      </ItemTemplate>
    </asp:TemplateField>
  </Columns>
</asp:GridView>
```

What I'm trying to avoid is repeated code, so that I can use only one unique TemplateField. I can accomplish this with 3 x GridView, one per day, but I'm really trying to simplify the code, as the grid will be exactly the same (as far as the HTML goes); just the DataSource changes. Any help is greatly appreciated. Thank you.

    Read the article

  • dialog.show() crashes my application, why?

    - by user1739462
I'm new to Android. I'd like to do things when the color reaches a value; for example, show an alert if r is bigger than 30, but the application crashes. Thanks for even very simple answers.

```java
public class MainActivity extends Activity {

    private AlertDialog dialog;
    private AlertDialog.Builder builder;
    private BackgroundColors view;

    public class BackgroundColors extends SurfaceView implements Runnable {
        public int grand = 0;
        public int step = 0;
        private boolean flip = true;
        private Thread thread;
        private boolean running;
        private SurfaceHolder holder;

        public BackgroundColors(Context context) {
            super(context);
        }

        // Inside this loop, while running is true:
        // is it impossible to show dialogs??
        public void run() {
            int r = 0;
            while (running) {
                if (holder.getSurface().isValid()) {
                    Canvas canvas = holder.lockCanvas();
                    if (r > 250) r = 0;
                    r += 10;
                    if (r > 30 && flip) {
                        flip = false;
                        // *********************************
                        dialog.show();
                        // *********************************
                        // CRASH !!
                    }
                    try {
                        Thread.sleep(300);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    canvas.drawARGB(255, r, 255, 255);
                    holder.unlockCanvasAndPost(canvas);
                }
            }
        }

        public void start() {
            running = true;
            thread = new Thread(this);
            holder = this.getHolder();
            thread.start();
        }

        public void stop() {
            running = false;
            boolean retry = true;
            while (retry) {
                try {
                    thread.join();
                    retry = false;
                } catch (InterruptedException e) {
                    retry = true;
                }
            }
        }

        public boolean onTouchEvent(MotionEvent e) {
            dialog.show();
            return false;
        }

        protected void onSizeChanged(int xNew, int yNew, int xOld, int yOld) {
            super.onSizeChanged(xNew, yNew, xOld, yOld);
            grand = xNew;
            step = grand / 15;
        }
    }

    public void onCreate(Bundle b) {
        super.onCreate(b);
        view = new BackgroundColors(this);
        this.setContentView(view);
        builder = new AlertDialog.Builder(this);
        builder.setMessage("ciao");
        builder.setPositiveButton("OK", new DialogInterface.OnClickListener() {
            public void onClick(DialogInterface dialog, int which) {
                Log.d("Basic", "It worked");
            }
        });
        dialog = builder.create();
    }

    public void onPause() {
        super.onPause();
        view.stop();
    }

    public void onResume() {
        super.onResume();
        view.start();
    }
}
```
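A likely cause, judging from the snippet alone: dialog.show() is called from the background rendering thread, and Android UI calls must happen on the main thread. A minimal fix along those lines (the surrounding names follow the question's code) would marshal the call onto the UI thread:

```java
// Sketch: show the dialog from the UI thread instead of the
// SurfaceView's rendering thread.
if (r > 30 && flip) {
    flip = false;
    post(new Runnable() {   // View.post() runs the Runnable on the UI thread
        @Override
        public void run() {
            dialog.show();
        }
    });
}
```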

    Read the article

  • SQL Query takes about 10 - 20 minutes

    - by masfenix
I have a select (nothing too complex): SELECT * FROM VIEW. This view has about 6000 records and about 40 columns. It comes from a Lotus Notes SQL database, so my ODBC driver is the LotusNotesSQL driver. The query takes about 30 seconds to execute. The company I worked for used EXCEL to run the query and write everything to the worksheet. Since I am assuming it writes everything cell by cell, it used to take up to 30-40 minutes to complete. I then used MS Access: I made a replica local table in Access to store the data. My first try was

INSERT INTO [columns of local table] FROM (SELECT * FROM VIEW)

(note that this is pseudocode). This ran successfully, but again took up to 20-30 minutes. Then I used VBA to loop through the data and insert each record manually (using an INSERT statement). This took about 10-15 minutes. This has been my best case yet.

What I need to do after: once I have the data, I need to filter it by department. The thing is, if I put a WHERE clause in the SQL query, the time jumps from 30 seconds to execute the query to about 10 minutes, plus the time to write to the local table/Excel. I don't know why. Maybe because the columns are all text columns? If we changed some of the columns to integer, would that make the WHERE clause faster?

I am looking for suggestions on how to approach this. My boss has said we could employ a Java-based solution. Will this help? I am not a Java person but a C# one, and maybe I'll convince them to use C# as well, but I am mainly looking for suggestions on how to cut down the time. I've already cut it down from 40 minutes to 10 minutes, but they want it under 2 minutes. Just to recap:

- The query takes about 30 seconds to execute.
- The query takes about 15-40 minutes to be used locally in Excel/Access.
- I need it under 2 minutes.
- A Java-based solution could be used.
- You may suggest other solutions instead of Java.
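If the Java route is explored, the usual way to beat cell-by-cell or row-by-row inserts is JDBC batching. A minimal sketch follows; the connection URLs, table and column names are invented for illustration, and the Lotus Notes side is assumed reachable through an ODBC/JDBC bridge of some kind.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch: copy rows from the source view into a local table using
// batched inserts, typically far faster than row-at-a-time writes.
public class BulkCopy {
    public static void main(String[] args) throws Exception {
        // Placeholder URLs/credentials -- replace with the Lotus Notes
        // source and whatever local database is in use.
        try (Connection src = DriverManager.getConnection("jdbc:odbc:NotesSource");
             Connection dst = DriverManager.getConnection("jdbc:h2:./local")) {
            dst.setAutoCommit(false); // commit once per batch, not per row
            try (Statement read = src.createStatement();
                 ResultSet rs = read.executeQuery("SELECT col1, col2 FROM VIEW");
                 PreparedStatement write = dst.prepareStatement(
                         "INSERT INTO local_table (col1, col2) VALUES (?, ?)")) {
                int pending = 0;
                while (rs.next()) {
                    write.setString(1, rs.getString(1));
                    write.setString(2, rs.getString(2));
                    write.addBatch();
                    if (++pending % 1000 == 0) { // flush every 1000 rows
                        write.executeBatch();
                        dst.commit();
                    }
                }
                write.executeBatch(); // flush the remainder
                dst.commit();
            }
        }
    }
}
```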

    Read the article

  • How to retrieve Bundle Version in a label from project test-Info.plist in iPhone?

    - by aman-gupta
Hi, here I am pasting my code, where I want to retrieve the bundle version from my test-Info.plist.

```objc
//
//  testAppDelegate.h
//  test
//
//  Created by Fortune1 on 20/04/10.
//  Copyright __MyCompanyName__ 2010. All rights reserved.
//

#import <UIKit/UIKit.h>

@class testViewController;

@interface testAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow *window;
    testViewController *viewController;
}

@property (nonatomic, retain) IBOutlet UIWindow *window;
@property (nonatomic, retain) IBOutlet testViewController *viewController;

@end
```

```objc
//
//  testAppDelegate.m
//  test
//

#import "testAppDelegate.h"
#import "testViewController.h"

@implementation testAppDelegate

@synthesize window;
@synthesize viewController;

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    // Override point for customization after app launch
    [window addSubview:viewController.view];
    [window makeKeyAndVisible];
}

- (void)dealloc {
    [viewController release];
    [window release];
    [super dealloc];
}

@end
```

```objc
//
//  testViewController.h
//  test
//

#import <UIKit/UIKit.h>

@interface testViewController : UIViewController {
    UILabel *label;
}

@property (nonatomic, retain) IBOutlet UILabel *label;

@end
```

```objc
//
//  testViewController.m
//  test
//

#import "testViewController.h"

@implementation testViewController

@synthesize label;

/*
// The designated initializer. Override to perform setup that is required before the view is loaded.
- (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil {
    if (self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]) {
        // Custom initialization
    }
    return self;
}
*/

/*
// Implement loadView to create a view hierarchy programmatically, without using a nib.
- (void)loadView {
}
*/

- (void)viewDidLoad {
    [super viewDidLoad];
    NSString *path = [[NSBundle mainBundle] pathForResource:@"test-info" ofType:@"plist"];
    NSString *versionString = [NSString stringWithFormat:@"v%d", [plistData objectForKey:@"Bundle version"]];
    label.text = versionString;
}

/*
// Override to allow orientations other than the default portrait orientation.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation {
    // Return YES for supported orientations
    return (interfaceOrientation == UIInterfaceOrientationPortrait);
}
*/

- (void)didReceiveMemoryWarning {
    // Releases the view if it doesn't have a superview.
    [super didReceiveMemoryWarning];
    // Release any cached data, images, etc that aren't in use.
}

- (void)viewDidUnload {
    // Release any retained subviews of the main view.
    // e.g. self.myOutlet = nil;
}

- (void)dealloc {
    [super dealloc];
}

@end
```

But I still get a null value. Where am I going wrong? Please help me out. Thanks in advance.

    Read the article

  • Convert Object Hierarchy to Object Array

    - by Killercam
All, I want to create an object array Foo[], where the constructor for Foo is:

```csharp
public Foo(string name, string discription) { }
```

I have a database object which has a structure (not including stored procedures, functions or views, for simplicity) like:

```csharp
public class Database
{
    public string name { get; set; }
    public string filename { get; set; }
    public List<Table> tables { get; set; }

    public Database(string name, string filename)
    {
        this.name = name;
        this.filename = filename;
    }
}

protected internal class Table
{
    public string name { get; set; }
    public List<Column> columns { get; set; }

    public Table(string name, List<Column> columns)
    {
        this.name = name;
        this.columns = columns;
    }
}

protected internal class Column
{
    public string name { get; set; }
    public string type { get; set; }

    public Column(string name, string type, int maxLength, bool isNullable)
    {
        this.name = name;
        this.type = type;
    }
}
```

I would like to know the quickest way to add Column and Table information to the Foo[] array. Clearly I can do:

```csharp
List<Foo> fooList = new List<Foo>();
foreach (Table t in database.tables)
{
    fooList.Add(new Foo(t.Name, "Some Description"));
    foreach (Column c in t.columns)
        fooList.Add(new Foo(c.Name, "Some Description"));
}
Foo[] fooArr = fooList.ToArray<Foo>();
```

But is there a quicker way? Clearly LINQ is likely to be slower for a query that does a similar operation, but I care a lot about speed here, so any advice would be appreciated. Perhaps the use of a HashSet would be the way to go, as there will not be duplicate entries... Thanks for your time.

    Read the article
