Search Results

Search found 4705 results on 189 pages for 'export to csv'.

Page 160/189 | < Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >

  • A graph-based tuple merge?

    - by user1644030
    I have paired values in tuples that are related matches (and technically still in CSV files). Neither of the paired values is necessarily unique.

        tupleAB = (A####, B####), (A####, B####), (A####, B####)...
        tupleBC = (B####, C####), (B####, C####), (B####, C####)...
        tupleAC = (A####, C####), (A####, C####), (A####, C####)...

    My ideal output would be a dictionary with a unique ID and a list of "reinforced" matches. The way I think about it is in a graph-based context. For example, if:

        tupleAB[x] = (A0001, B0012)
        tupleBC[y] = (B0012, C0230)
        tupleAC[z] = (A0001, C0230)

    this would produce:

        output = {uniquekey0001: [A0001, B0012, C0230]}

    Ideally, this would also scale up to more than three tuples (for example, adding a "D" match would result in three additional pair lists - AD, BD, and CD - and merged lists four items long; and so forth). When scaling up, I am open to having "graphs" that aren't necessarily fully connected, i.e., where not every node is connected to every other node. My hunch is that I could easily filter based on the list lengths. I am open to any suggestions. I think, with a few cups of coffee, I could work out a brute-force solution, but I thought I'd ask the community if anyone was aware of a more elegant one. Thanks for any feedback.
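    A minimal sketch of the graph idea, assuming the pairs are already loaded from the CSV files and that the networkx package is acceptable (the pair lists and key format here are hypothetical stand-ins):

    Code:

        import itertools
        import networkx as nx

        # Hypothetical pair lists standing in for the CSV contents
        tuple_ab = [("A0001", "B0012")]
        tuple_bc = [("B0012", "C0230")]
        tuple_ac = [("A0001", "C0230")]

        g = nx.Graph()
        g.add_edges_from(itertools.chain(tuple_ab, tuple_bc, tuple_ac))

        # Each connected component becomes one merged match list
        output = {"uniquekey%04d" % i: sorted(comp)
                  for i, comp in enumerate(nx.connected_components(g), start=1)}
        print(output)  # {'uniquekey0001': ['A0001', 'B0012', 'C0230']}

    Components that are not fully connected still come out as single lists, so the filter-by-length idea remains available.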

    Read the article

  • Best way to dynamically get column names from oracle tables

    - by MNC
    Hi, We are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL as the data has to be extracted from more than one table. To satisfy the UNION ALL requirement we pad with NULLs to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever the table projection changes (i.e., a new column is added, an existing column is modified, or a column is dropped) we have to manually change the code in the application. Can you please give some suggestions on how to fetch the column names dynamically, so that changes in the table structure do not require code changes? My concern is the condition that decides which table to query. The condition logic is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more wrinkle: some columns need to be excluded, and these columns are different for each table. I am trying to solve the problem only for dynamically generating the column list, but my manager told me to design a solution at the conceptual level rather than just patch this one case. This is a very big system with providers and consumers constantly loading and consuming data, so he wants a solution that is general. So what is the best way to store the condition, table name, and excluded columns? One option is storing them in the database. Are there any other ways, and if so, which is best? I have to present at least a couple of ideas before finalizing. Thanks,
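    One hedged sketch of the data-dictionary approach, assuming a cx_Oracle connection and a hypothetical config mapping (condition -> table and excluded columns) that could just as well live in its own database table:

    Code:

        import cx_Oracle  # assumed driver; any DB-API cursor works the same way

        # Hypothetical config: condition -> (table, columns to exclude)
        CONFIG = {"A": ("TABLEX", {"AUDIT_TS"}),
                  "B": ("TABLEY", {"INTERNAL_ID"})}

        def columns_for(cursor, table, excluded):
            cursor.execute(
                "SELECT column_name FROM all_tab_columns "
                "WHERE table_name = :t ORDER BY column_id", t=table)
            return [name for (name,) in cursor if name not in excluded]

        def build_select(cursor, condition):
            table, excluded = CONFIG[condition]
            cols = columns_for(cursor, table, excluded)
            return "SELECT %s FROM %s" % (", ".join(cols), table)

    Because the column list is read from all_tab_columns at run time, projection changes no longer require code changes; only the exclusion config needs maintaining.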

    Read the article

  • Intermittent bug - IE6 showing file as text in browser, rather than as file download

    - by Richard Ev
    In an ASP.NET WebForms 2.0 site we are encountering an intermittent bug in IE6 whereby a file download attempt results in the contents of the file being shown directly in the browser as text, rather than the file save dialog being displayed. Our application allows the user to download both PDF and CSV files. The code we're using is:

        HttpResponse response = HttpContext.Current.Response;
        response.Clear();
        response.AddHeader("Content-Disposition", "attachment;filename=\"theFilename.pdf\"");
        response.ContentType = "application/pdf";
        response.BinaryWrite(MethodThatReturnsFileContents());
        response.End();

    This is called from the code-behind click event handler of a button server control. Where are we going wrong with this approach?

    Edit: Following James' answer to this posting, the code I'm using now looks like this:

        HttpResponse response = HttpContext.Current.Response;
        response.ClearHeaders();
        // Setting cache to NoCache was recommended, but doing so results in a security
        // warning in IE6
        //response.Cache.SetCacheability(HttpCacheability.NoCache);
        response.AppendHeader("Content-Disposition", "attachment; filename=\"theFilename.pdf\"");
        response.ContentType = "application/pdf";
        response.BinaryWrite(MethodThatReturnsFileContents());
        response.Flush();
        response.End();

    However, I don't believe that any of these changes will fix the issue.

    Read the article

  • python tkinter gui

    - by Lewis Townsend
    I want to make a small Python program for yearly temperatures. I can get nearly everything working in the standard console, but I want to implement it in a GUI. The program opens a CSV file, reads it into lists, and works out the average, min, and max temps. Then, on closing, the application saves a summary to a new text file. I want the default start-up screen to show All Years; when a button is clicked it should show just that year's data. Here is what I want it to look like: a pretty simple layout with just the 5 buttons and the outputs for each. I can make the buttons for the top fine with:

    Code:

        class App:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                self.hi_there = Button(frame, text="All Years", command=self.All)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2011", command=self.Y1)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2012", command=self.Y2)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="2013", command=self.Y3)
                self.hi_there.pack(side=LEFT)
                self.hi_there = Button(frame, text="Save & Exit", command=self.Exit)
                self.hi_there.pack(side=LEFT)

    I'm not sure how to make the other elements, such as the title and table. I was going to post the code of the small program but decided not to. Once I have the structure/framework I think I can populate the fields, and I might learn better this way. Using Python 2.7.3.
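    A minimal sketch of one way to do the title and table, assuming Python 2.7's Tkinter (the row format and names here are hypothetical):

    Code:

        from Tkinter import Frame, Label, Listbox, END

        class ResultsPanel:
            def __init__(self, master):
                frame = Frame(master)
                frame.pack()
                # Title above the table
                self.title = Label(frame, text="All Years", font=("Helvetica", 14))
                self.title.pack()
                # A Listbox serves as a crude table: one formatted row per line
                self.table = Listbox(frame, width=40)
                self.table.pack()

            def show(self, title, rows):
                self.title.config(text=title)
                self.table.delete(0, END)
                for year, avg, lo, hi in rows:
                    self.table.insert(END, "%s  avg %.1f  min %.1f  max %.1f"
                                           % (year, avg, lo, hi))

    Each button's command would then call show() with the rows for that year. For anything fancier than a Listbox, ttk.Treeview (available in Python 2.7's ttk module) gives real columns.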

    Read the article

  • Parallel.Foreach loop creating multiple db connections throws connection errors?

    - by shawn.mek
    Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

    I wanted to get my code running in parallel, so I changed my foreach loop to a Parallel.ForEach loop. It seemed simple enough. Each iteration connects to the database, looks up some stuff, performs some logic, adds some stuff, and closes the connection. But I get the above error. I'm using my local SQL Server and Entity Framework (each iteration uses its own context). Is there some problem with connecting multiple times using the same local login or something? How do I get around this? Before trying to convert to a Parallel.ForEach loop, I had split my list of objects into four groups (separate CSV files) and run four concurrent instances of my program (which ran faster overall than just one, hence the idea to parallelize). So it seems connecting to the db shouldn't be a problem? Any ideas?

    EDIT: Here's before:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        foreach (var cloneIdAndAccessions in allAccessionsFromObs)
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString);

    and after:

        var gtgGenerator = new CustomGtgGenerator();
        var connectionString = ConfigurationManager.ConnectionStrings["BioEntities"].ConnectionString;
        var allAccessionsFromObs = _GetAccessionListFromDataFiles(collectionId);
        Parallel.ForEach(allAccessionsFromObs, cloneIdAndAccessions =>
            DoWork(gtgGenerator, taxonId, organismId, cloneIdAndAccessions, connectionString));

    Inside DoWork I use the BioEntities context:

        using (var bioEntities = new BioEntities(connectionString)) {...}

    Read the article

  • Architecture for database analytics

    - by David Cournapeau
    Hi, We have an architecture where we provide each customer (internet merchants) Business Intelligence-like services for their website. Now, I need to analyze those data internally (for algorithmic improvement, performance tracking, etc.), and the volumes are potentially quite heavy: we have up to millions of rows / customer / day, and I may want to know how many queries we had in the last month, compared week by week, and so on - that is on the order of billions of entries if not more. The way it is currently done is quite standard: daily scripts which scan the databases and generate big CSV files. I don't like this solution for several reasons:

      - as is typical with those kinds of scripts, they fall into the write-once, never-touched-again category
      - tracking things in "real time" is necessary (we have a separate toolset to query the last few hours at the moment)
      - it is slow and non-"agile"

    Although I have some experience in dealing with huge datasets for scientific usage, I am a complete beginner as far as traditional RDBMSs go. It seems that using a column-oriented database for analytics could be a solution (the analytics don't need most of the data we have in the app database), but I would like to know what other options are available for this kind of problem.

    Read the article

  • What's a good way to provide additional decoration/metadata for Python function parameters?

    - by Will Dean
    We're considering using Python (IronPython, but I don't think that's relevant) to provide a sort of 'macro' support for another application, which controls a piece of equipment. We'd like to write fairly simple functions in Python, which take a few arguments - these would be things like times, temperatures, and positions. Different functions would take different arguments, and the main application would contain a user interface (something like a property grid) which allows the users to provide values for the Python function arguments. So, for example, function1 might take a time and a temperature, and function2 might take a position and a couple of times. We'd like to be able to dynamically build the user interface from the Python code. Things which are easy to do are to find the list of functions in a module and (using inspect.getargspec) to get a list of arguments to each function. However, just a list of argument names is not really enough - ideally we'd like to be able to include some more information about each argument - for instance, its 'type' (a high-level type - time, temperature, etc., not a language-level type) and perhaps a 'friendly name' or description. So, the question is: what are good 'pythonic' ways of adding this sort of information to a function? The two possibilities I have thought of are:

      - Use a strict naming convention for arguments, and then infer stuff about them from their names (fetched using getargspec)
      - Invent our own docstring meta-language (could be little more than CSV) and use the docstring for our metadata

    Because Python seems pretty popular for building scripting into large apps, I imagine this is a solved problem with some common conventions, but I haven't been able to find them.
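    A third, hedged possibility: attach the metadata explicitly with a decorator, so the host application can read it back without parsing names or docstrings (the parameter vocabulary below is hypothetical):

    Code:

        import inspect

        def params(**meta):
            """Attach per-argument metadata to a function."""
            def wrap(fn):
                fn.param_meta = meta
                return fn
            return wrap

        @params(duration={"type": "time", "label": "Soak time"},
                temp={"type": "temperature", "label": "Target temp"})
        def function1(duration, temp):
            pass

        # The host app can then build its property grid:
        for name in inspect.getargspec(function1).args:
            print("%s -> %r" % (name, function1.param_meta.get(name, {})))

    This keeps the metadata next to the function it describes, survives renames better than a naming convention, and leaves the docstring free for actual documentation.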

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:

      - Invoice arrives, in the form of an email attachment, an Excel spreadsheet
      - Monkey opens invoice, copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550
      - Monkey sends this new spreadsheet to boss (email if lucky, printer otherwise), who sets the retail price
      - Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing
      - Monkey fires up HyperTerminal, types in "AT", disconnects
      - Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time

    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wishlist (a sketch of the first two items follows below):

      - Monkey asks Thunderbird (a mail server, perhaps?) for the attachment
      - Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something
      - Monkey parses the output, does the complex calculations // TODO: find a way to get the boss-generated prices with minimal manual labor involved
      - Monkey connects to the database, inserts data
      - Monkey spams customers

    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
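    A hedged sketch of the first two wishlist items in Python, assuming an IMAP mailbox and the xlrd package for reading the .xls attachment (server, credentials, search terms, and column positions are all hypothetical):

    Code:

        import email
        import imaplib
        import xlrd

        imap = imaplib.IMAP4_SSL("mail.example.com")
        imap.login("monkey", "secret")
        imap.select("INBOX")
        _, ids = imap.search(None, '(SUBJECT "Invoice")')
        _, data = imap.fetch(ids[0].split()[-1], "(RFC822)")
        msg = email.message_from_string(data[0][1])

        for part in msg.walk():
            name = part.get_filename()
            if name and name.endswith(".xls"):
                book = xlrd.open_workbook(file_contents=part.get_payload(decode=True))
                sheet = book.sheet_by_index(0)
                for r in range(1, sheet.nrows):  # skip the header row
                    print(sheet.cell_value(r, 1) * 550)  # the "complex calculation"

    The database insert and the SMS step depend entirely on what the production database and messaging path actually are, so those are best attacked one at a time.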

    Read the article

  • Google Adwords API response parse

    - by Yun Ling
    I am trying to figure out how to parse the AdWords API query response without exceptions, and one issue I came across is that sometimes the data itself contains commas besides the commas between the columns. Say I do a query on ad group, campaign and impressions using:

        <reportDefinition xmlns="https://adwords.google.com/api/adwords/cm/v201209">
          <selector>
            <fields>CampaignName</fields>
            <fields>AdgroupName</fields>
            <fields>Impressions</fields>
            <predicates>
              <field>Status</field>
              <operator>IN</operator>
              <values>ENABLED</values>
              <values>PAUSED</values>
            </predicates>
          </selector>
          <reportName>Custom Adgroup Performance Report</reportName>
          <reportType>ADGROUP_PERFORMANCE_REPORT</reportType>
          <dateRangeType>LAST_7_DAYS</dateRangeType>
          <downloadFormat>CSV</downloadFormat>
        </reportDefinition>

    Since my campaign name contains a comma, the response looks like:

        Adroup,Campaign,Impressions
        Premiun Beer,Beer, Chicago,1000

    where the ad group is "Premiun Beer" and the campaign is "Beer, Chicago". That will cause an issue if we split each line on commas. Does anyone know how to solve this problem?
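    If the download actually quotes fields that contain commas (a CSV-format report generally should), then a CSV-aware parser instead of a plain split resolves it. A minimal sketch with Python's csv module, with the quoting assumed:

    Code:

        import csv

        line = 'Premiun Beer,"Beer, Chicago",1000'
        row = next(csv.reader([line]))
        print(row)  # ['Premiun Beer', 'Beer, Chicago', '1000']

    If the fields arrive genuinely unquoted, no parser can disambiguate them, and the fix is on the request side - e.g., a downloadFormat whose delimiter cannot appear in the data, such as TSV.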

    Read the article

  • Error handling in C++, constructors vs. regular methods

    - by Dennis Ritchie
    I have a cheesesales.txt CSV file with all of my recent cheese sales. I want to create a class CheeseSales that can do things like these:

        CheeseSales sales("cheesesales.txt"); // has no default constructor
        cout << sales.totalSales() << endl;
        sales.outputPieChart("piechart.pdf");

    The above code assumes that no failures will happen. In reality, failures will take place. In this case, two kinds of failure could occur:

      - Failure in the constructor: the file may not exist, may not have read permissions, may contain invalid/unparsable data, etc.
      - Failure in the regular method: the output file may already exist, there may not be write access, there may be too little sales data to create a pie chart, etc.

    My question is simply: how would you design this code to handle failures? One idea: return a bool from the regular method indicating failure - but I'm not sure how to deal with the constructor. How would seasoned C++ coders do these kinds of things?

    Read the article

  • R: Are there any alternatives to loops for subsetting from an optimization standpoint?

    - by Adam
    A recurring analysis paradigm I encounter in my research is the need to subset based on all different group id values, performing statistical analysis on each group in turn, and putting the results in an output matrix for further processing/summarizing. How I typically do this in R is something like the following:

        data.mat <- read.csv("...")
        groupids <- unique(data.mat$ID)  # Assume there are then 100 unique groups
        results <- matrix(rep("NA",300), ncol=3, nrow=100)
        for(i in 1:100) {
          tempmat <- subset(data.mat, ID==groupids[i])
          # Run various stats on tempmat (correlations, regressions, etc), checking to
          # make sure this specific group doesn't have NAs in the variables I'm using
          # and assign results to x, y, and z, for example.
          results[i,1] <- x
          results[i,2] <- y
          results[i,3] <- z
        }

    This ends up working for me, but depending on the size of the data and the number of groups I'm working with, this can take up to three days. Besides branching out into parallel processing, is there any "trick" for making something like this run faster? For instance, converting the loops into something else (something like an apply with a function containing the stats I want to run inside the loop), or eliminating the need to actually assign the subset of data to a variable?

    Read the article

  • Python 2.7.3 memory error

    - by Tom Baker
    I have a specific case with Python code. Every time I run it, RAM usage grows until it reaches 1.8 GB and the program crashes.

        import itertools
        import csv
        import pokersleuth

        cards = ['2s', '3s', '4s', '5s', '6s', '7s', '8s', '9s', 'Ts', 'Js', 'Qs', 'Ks', 'As',
                 '2h', '3h', '4h', '5h', '6h', '7h', '8h', '9h', 'Th', 'Jh', 'Qh', 'Kh', 'Ah',
                 '2c', '3c', '4c', '5c', '6c', '7c', '8c', '9c', 'Tc', 'Jc', 'Qc', 'Kc', 'Ac',
                 '2d', '3d', '4d', '5d', '6d', '7d', '8d', '9d', 'Td', 'Jd', 'Qd', 'Kd', 'Ad']

        flop = itertools.combinations(cards, 3)

        a1 = 'Ks'; a2 = 'Qs'
        b1 = 'Jc'; b2 = 'Jd'
        cards1 = a1 + a2
        cards2 = b1 + b2

        number = 0
        n = 0
        m = 0

        for row1 in flop:
            if (row1[0] <> a1 and row1[0] <> a2 and row1[0] <> b1 and row1[0] <> b2) and \
               (row1[1] <> a1 and row1[1] <> a2 and row1[1] <> b1 and row1[1] <> b2) and \
               (row1[2] <> a1 and row1[2] <> a2 and row1[2] <> b1 and row1[2] <> b2):
                for row2 in cards:
                    if (row2 <> a1 and row2 <> a2 and row2 <> b1 and row2 <> b2 and
                            row2 <> row1[0] and row2 <> row1[1] and row2 <> row1[2]):
                        s = pokersleuth.compute_equity(row1[0]+row1[1]+row1[2]+row2, (cards1, cards2))
                        if s[0] >= 0.5:
                            number += 1
                            del s[:]
                        del s[:]
                print number/45.0
                number = 0
                n += 1
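    Not a fix for whatever compute_equity holds onto internally, but a hedged simplification: set-based membership tests make the dead-card checks shorter and cheaper, which at least rules the loop logic out as the culprit:

    Code:

        dead = {a1, a2, b1, b2}
        for row1 in flop:
            if dead.isdisjoint(row1):
                for row2 in cards:
                    if row2 not in dead and row2 not in row1:
                        s = pokersleuth.compute_equity(''.join(row1) + row2, (cards1, cards2))
                        # ... same bookkeeping as before

    If memory still climbs after that, running each batch of flops in a subprocess (via the multiprocessing module) is a common way to contain a leak in an extension module, since the memory is returned when the worker exits.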

    Read the article

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked this question before, but in a different manner. I am taking form data, compiling it into a temporary CSV file, and trying to send it over to a client via FTP over SSL (this is the only route I am interested in hearing solutions for; unless there is a workaround, I cannot make changes). I have tried the following:

      - ftp_connect - nothing happens, the page just times out
      - ftp_ssl_connect - nothing happens, the page just times out
      - curl library - same thing; given the URL it also gives an error

    I am given the following information:

      - FTPS Server IP Address
      - TCP Port (1234)
      - Username
      - Password
      - Data Directory to dump the file into
      - FTP Mode: Passive

    Here is very, very basic code (which I believe should at minimum initiate a connection):

    Code:

        <?php
        $ftp_server = "00.000.00.000"; // masked for security
        $ftp_port = "1234"; // masked but not 990
        $ftp_user_name = "username";
        $ftp_user_pass = "password";

        // set up basic ssl connection
        $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, "20");

        // login with username and password
        $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);

        echo ftp_pwd($conn_id); // /
        echo "hello";

        // close the ssl connection
        ftp_close($conn_id);
        ?>

    When I run this over a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
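    For comparison, a hedged sketch of the same transfer in Python's ftplib (connection details are the placeholders from above); running it from the same web server can help confirm whether the box can reach the FTPS port at all before digging further into the PHP side:

    Code:

        from ftplib import FTP_TLS

        ftps = FTP_TLS()
        ftps.connect("00.000.00.000", 1234, timeout=20)
        ftps.login("username", "password")
        ftps.prot_p()        # encrypt the data channel too
        ftps.set_pasv(True)  # passive mode, per the client's spec
        with open("export.csv", "rb") as f:
            ftps.storbinary("STOR /data/export.csv", f)
        ftps.quit()

    A timeout on connect() here as well would point at a firewall, or at the server requiring implicit TLS - which neither ftplib's FTP_TLS nor PHP's ftp_ssl_connect speaks; both do explicit FTPS.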

    Read the article

  • creating arrays in for loops.... without creating an endless loop that ruins my day!

    - by Peter
    Hey guys, I'm starting with a CSV variable of column names. This is then exploded into an array, counted, and tossed into a for loop that is supposed to create another array. Every time I run it, it goes into this endless loop that just hammers away at my browser... until it dies. :( Here is the code:

        $columns = 'id, name, phone, blood_type';
        $column_array = explode(',', $columns);
        $column_length = count($column_array);

        // loop through the column length, create post vars and set default
        for($i = 0; $i <= $column_length; $i++)
        {
            // create the array iSortCol_1 => $column_array[1]...
            $array[] = 'iSortCol_'.$i = $column_array[0];
        }

    What I would like to get out of all this is a new array that looks like so:

        $goal = array(
            "iSortCol_1" => "id",
            "iSortCol_2" => "name",
            "iSortCol_3" => "phone",
            "iSortCol_4" => "blood_type"
        );

    Read the article

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized in bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres db; however, I need to find or implement a statistical tool that'll take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, one user could search for all items that are blue and shipped between week x and week y, while another could sort the weight distribution of all items by hour across the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but this seems inefficient. I'm looking forward to hearing your ideas. Thanks
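    A hedged sketch of one option: push the aggregation into PostgreSQL itself, so only bin counts cross the wire. This assumes a psycopg2 connection and a hypothetical items table; width_bucket is the stock Postgres histogram helper:

    Code:

        import psycopg2

        conn = psycopg2.connect("dbname=logger_data")
        cur = conn.cursor()
        # 20 weight bins between 0 and 100, counted server-side
        cur.execute("""
            SELECT width_bucket(item_weight, 0, 100, 20) AS bin, count(*)
            FROM items
            WHERE item_color = %s AND date_collected BETWEEN %s AND %s
            GROUP BY bin ORDER BY bin
        """, ("blue", "2010-01-04", "2010-01-11"))
        for bin_no, n in cur.fetchall():
            print(bin_no, n)

    With indexes on the filter columns, that pattern keeps user-defined histograms inside the database and leaves the PHP (or Python) layer as a thin presenter.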

    Read the article

  • Reserve space for initially hidden widget in QVBoxLayout

    - by Skinniest Man
    I am using a QVBoxLayout to arrange a vertical stack of widgets. I want some of them to be initially hidden and only show up when a check box is checked. Here is an example of the code I'm using:

        MyWidget::MyWidget(QWidget *parent) : QWidget(parent)
        {
            QVBoxLayout *layout = new QVBoxLayout(this);
            QLabel *labelLogTypes = new QLabel(tr("Log Types"));
            m_checkBoxCsv = new QCheckBox(tr("&Delimited File (CSV)"));
            m_labelDelimiter = new QLabel(tr("Delimiter:"));
            m_lineEditDelimiter = new QLineEdit(",");
            checkBoxCsv_Toggled(m_checkBoxCsv->isChecked());
            connect(m_checkBoxCsv, SIGNAL(toggled(bool)), SLOT(checkBoxCsv_Toggled(bool)));
            QHBoxLayout *layoutDelimitedChar = new QHBoxLayout();
            layoutDelimitedChar->addWidget(m_labelDelimiter);
            layoutDelimitedChar->addWidget(m_lineEditDelimiter);
            m_checkBoxXml = new QCheckBox(tr("&XML File"));
            m_checkBoxText = new QCheckBox(tr("Plain &Text File"));
            // Now that everything is constructed, put it all together
            // in the main layout.
            layout->addWidget(labelLogTypes);
            layout->addWidget(m_checkBoxCsv);
            layout->addLayout(layoutDelimitedChar);
            layout->addWidget(m_checkBoxXml);
            layout->addWidget(m_checkBoxText);
            layout->addStretch();
        }

        void MyWidget::checkBoxCsv_Toggled(bool checked)
        {
            m_labelDelimiter->setVisible(checked);
            m_lineEditDelimiter->setVisible(checked);
        }

    I want m_labelDelimiter and m_lineEditDelimiter both to be initially invisible, and I want their visibility to toggle with the state of m_checkBoxCsv. This code achieves the functionality I desire, but it doesn't seem to reserve space for the two initially hidden widgets. When I check the checkbox, they become visible, but everything is kind of scrunched to accommodate them. If I leave them initially visible, everything is laid out just the way I would like it. Is there any way to make the QVBoxLayout reserve space for these widgets even if they're initially invisible?

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants to have the data in the MySQL DBs used for our website "linked and synced" with a financial server that has its DB in SQL Server. Sooooo... even though I have no idea how to accomplish this, it just sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the financial server. Any ideas how to accomplish this (within reason)? Also, his big thing is he wants to pull up the data from any record a user enters and, using data pulled from that, do all sorts of calculations using ANOTHER program that stores its data (apparently) in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me ODBC eliminates complicated junk like this. :/ I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or program I have no say over being connected DIRECTLY to my MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of it? (I'm a low-level IT guy with a decent grasp on programming... but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.
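    A hedged sketch of the nightly-sync idea, assuming the MySQLdb and pyodbc drivers and a hypothetical orders table (every connection detail below is a placeholder):

    Code:

        import MySQLdb   # assumed MySQL driver
        import pyodbc    # assumed SQL Server driver

        src = MySQLdb.connect(host="web-db.example.com", user="u", passwd="p", db="site")
        cur = src.cursor()
        cur.execute("SELECT id, name, total FROM orders")
        rows = cur.fetchall()

        dst = pyodbc.connect("DSN=FinancialServer;UID=u;PWD=p")
        dcur = dst.cursor()
        dcur.execute("TRUNCATE TABLE staging_orders")  # load a staging table, not live data
        dcur.executemany("INSERT INTO staging_orders (id, name, total) VALUES (?, ?, ?)", rows)
        dst.commit()

    Going through a staging table keeps the financial server's live data out of reach of the sync job, which is itself a decent argument against a direct live link.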

    Read the article

  • how to add data to ARRAYLIST

    - by Chamal
    try {
        ArrayList ar = new ArrayList();
        PRIvariable pri = new PRIvariable();
        BufferedReader reader = new BufferedReader(new InputStreamReader(new FileInputStream("C:/cdr2.csv")));
        while (reader.ready()) {
            String line = reader.readLine();
            String[] values = line.split(",");
            pri.dateText = values[2] + " " + values[4];
            pri.count = pri.count + 1;
            pri.sum = pri.sum + Integer.parseInt(values[7]);
            System.out.println(pri.dateText + " " + pri.sum + " " + pri.count);
            ar.add(pri);
        }
        String[] columnNames = {"Date", "TOTAL", "COUNTS"};
        String[][] cells = new String[ar.size()][3];
        for (int i = 0; i < ar.size(); i++) {
            cells[i][0] = ((PRIvariable) ar.get(i)).dateText;
            cells[i][1] = "" + ((PRIvariable) ar.get(i)).sum;
            cells[i][2] = "" + ((PRIvariable) ar.get(i)).count;
        }
        table = new JTable(cells, columnNames);
        table.setSize(400, 400);
        table.setVisible(true);
        JScrollPane js = new JScrollPane();
        js.setViewportView(table);
        js.setSize(400, 400);
        js.setVisible(true);
        add(js, java.awt.BorderLayout.CENTER);
    } catch (Exception e) {
        System.out.println(e);
    }

    This is my code. I want to read a text file and put that data into a JTable. But with this code, every row of the JTable is filled with the same data - the data from the last row added to the ArrayList. (I think there is a problem with my ArrayList.) How can I solve this?

    Read the article

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object with a unique identifier that I have obtained from a CSV file exists in my Core Data database, and this is the code I deemed suitable for the task:

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity;
        entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext];
        [fetchRequest setEntity:entity];
        NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier];
        [fetchRequest setPredicate:pred];
        NSError *err;
        NSArray *icd9s = [passedContext executeFetchRequest:fetchRequest error:&err];
        [fetchRequest release];
        if ([icd9s count] > 0) {
            for (int i = 0; i < [icd9s count]; i++) {
                NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
                NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"];
                if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) {
                    [pool release];
                    return [icd9s objectAtIndex:i];
                }
                [pool release];
            }
        }
        return nil;

    After more thorough testing, it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent of the way through the 1459 items). I feel like this isn't the most efficient way to do this; any suggestions for a more memory-efficient way? Thanks in advance!

    Read the article

  • Passing session between jsf backing bean and model

    - by Rachel
    Background: I have a backing bean with an upload method that listens for when a file is uploaded. I pass this file to a parser, and in the parser I validate each row of the CSV file. If validation fails, I have to log the information and save it to a logging table in the database.

    My end goal: to get session information in the logging bean so that I can get the InitialContext and call an EJB to save the data to the database.

    What is happening: In my upload backing bean I am getting the session, but when I call the parser I do not pass the session information along, as I do not want the parser to depend on the session - I want to be able to unit test the parser individually. So my parser has no session information. From the parser I call the logging bean (just a bean with some EJB methods), but in this logging bean I need the session because I need to get the InitialContext.

    Question: Is there a way in JSF to get, in my logging bean, the session that I have in my upload backing bean? I tried:

        FacesContext ctx = FacesContext.getCurrentInstance();
        HttpSession session = (HttpSession) ctx.getExternalContext().getSession(false);

    but the session value was null. The more generic question would be: how can I get session information in model beans or other beans that are referenced from backing beans in which we have the session? Is there a generic method in JSF for accessing session information throughout a JSF application?

    Read the article

  • Are background threads a bad idea? Why?

    - by Matt Grande
    So I've been told what I'm doing here is wrong, but I'm not sure why. I have a webpage that imports a CSV file with document numbers to perform an expensive operation on. I've put the expensive operation into a background thread to prevent it from blocking the application. Here's what I have in a nutshell:

        protected void ButtonUpload_Click(object sender, EventArgs e)
        {
            if (FileUploadCSV.HasFile)
            {
                string fileText;
                using (var sr = new StreamReader(FileUploadCSV.FileContent))
                {
                    fileText = sr.ReadToEnd();
                }
                var documentNumbers = fileText.Split(new[] {',', '\n', '\r'}, StringSplitOptions.RemoveEmptyEntries);
                ThreadStart threadStart = () => AnotherClass.ExpensiveOperation(documentNumbers);
                var thread = new Thread(threadStart) {IsBackground = true};
                thread.Start();
            }
        }

    (obviously with some error checking and messages for users thrown in) So my three-fold question is: a) Is this a bad idea? b) Why is this a bad idea? c) What would you do instead?

    Read the article

  • R: building a simple command line plotting tool/Capturing window close events

    - by user275455
    I am trying to use R within a script that will act as a simple command-line plot tool, i.e., the user pipes in a CSV file and they get a plot. I can get to R fine and get the plot to display through various temp-file machinations, but I have hit a roadblock: I cannot figure out how to get R to keep running until the user closes the window. If I plot and exit, the plot disappears immediately. If I plot and use some kind of infinite loop, the user cannot close the plot; he must exit with an interrupt, which I don't like. I see there is a getGraphicsEvent function, but it claims that the device is not supported (X11). Anyway, it doesn't appear to actually support an onClose event, only onMouseDown. Any ideas on how to solve this?

    Edit: Thanks to Dirk for the advice to check out the tk interface. Here is the test code that works:

        require(tcltk)
        library(tkrplot)

        ## function to display plot, called by tkrplot and embedded in a window
        plotIt <- function() {
            plot(x=1:10, y=1:10)
        }

        ## create top-level window
        tt <- tktoplevel()

        ## variable to wait on, like a condition variable, to be set by the event handler
        done <- tclVar(0)

        ## bind to the window destroy event; set the done variable when destroyed
        tkbind(tt, "<Destroy>", function() tclvalue(done) <- 1)

        ## have tkrplot embed the plot window, then realize it with tkgrid
        tkgrid(tkrplot(tt, plotIt))

        ## wait until done is true
        tkwait.variable(done)

    Read the article

  • can I use NSDictionary values as [@"[array abjectAtIndex:value]", key1,nil] ?

    - by srikanth rongali
    I have a string of comma-separated values, like (0.2,0.3,0.4,1.0). I used componentsSeparatedByString: to store the values in an NSArray. Now I need to store them in an NSDictionary as (1st value from the NSArray), key1, (2nd value of the NSArray), key2, ... nil. My program looks like this:

        NSArray *split1;
        NSDictionary *enemy1;
        float value1;
        for (id line in lines) {
            string1 = line;
            split1 = [string1 componentsSeparatedByString:@","];
            numberOfValues = [split1 count];
            for (id value in split1) {
                enemy1 = [NSDictionary dictionaryWithObjectsAndKeys:@"[[split1 objectAtIndex:0]flaotValue]", @"A1", nil];
            }
        }
        value1 = [enemy1 objectForKey:@"A1"];

    Here, lines is an NSArray consisting of 10 CSV strings. I am getting an error: incompatible types in assignment. Should I not write my code this way? Where am I going wrong? Please tell me my mistakes. Thank you.

    Read the article

  • JQuery-AJAX: No further request after timeout and delay in form post

    - by Nogga
    I have a form containing multiple checkboxes. This form shall be sent to the server to receive appropriate results from a server-side script. This is already working. What I would like to achieve now:

      1) Implementing a timeout: this already works, but as soon as a timeout occurs, no further request works anymore.
      2) Implementing a delay in requesting results: a delay shall be added so that not every checkbox click results in a POST request.

    This is what I have right now:

        function update_listing() {
            // remove postings from table
            $('.tbl tbody').children('tr').remove();

            // get the results through AJAX
            var request = $.ajax({
                type: "POST",
                url: "http://localhost/hr/index.php/listing/ajax_csv",
                data: $("#listing_form").serialize(),
                timeout: 5000,
                success: function(data) {
                    $(".tbl tbody").append(data);
                },
                error: function(objAJAXRequest, strError) {
                    $(".tbl tbody").append("<tr><td>failed " + strError + "</td></tr>");
                }
            });
            return true;
        }

    Results are for now passed as HTML table rows - I will transform them to CSV/JSON in the next step. Thanks so much for your advice.

    Read the article

  • In Excel 2010, how can I show a count of occurrences on a specific date within multiple time ranges?

    - by Justin
    Here's what I'm trying to do. I have three columns of data: ID, Date (MM/DD/YY), Time (00:00). I need to create a chart or table that shows the number of occurrences on, say, 12/10/2010 between 00:00 and 00:59, 1:00 and 1:59, etc., for each hour of the day. I can do a COUNTIF and get results for the date, but I cannot figure out how to show a summary of the count of occurrences per hour for the 24-hour period. I have months of data and many times each day. An example of the data set is below. Any help is greatly appreciated.

        ID   Date        Time
        221  12/10/2010  00:01
        223  12/10/2010  00:45
        227  12/10/2010  01:13
        334  12/11/2010  14:45

    I would like the results to read:

        Date        Time               Count
        12/10/2010  00:00AM - 00:59AM  2
        12/10/2010  01:00AM - 01:59AM  1
        12/10/2010  02:00AM - 02:59AM  0
        ......(continues for every hour of the day)
        12/11/2010  00:00AM - 00:59AM  0
        .........
        12/11/2010  14:00PM - 14:59PM  1

    And so on. Sorry for the length, but I wanted to be clear.

    EDIT: Here is a sample spreadsheet. Very little data, but I couldn't figure out a better way without having a huge file. Tested in Notepad for formatting and it worked OK on import as CSV.

        PID,Date,Time
        2888759,12/10/2010,0:10
        2888760,12/10/2010,0:10
        2888761,12/10/2010,0:10
        2888762,12/10/2010,0:11
        2889078,12/10/2010,15:45
        2889079,12/10/2010,15:57
        2889080,12/10/2010,15:57
        2889081,12/10/2010,15:58
        2889082,12/10/2010,16:10
        2889083,12/10/2010,16:11
        2889084,12/10/2010,16:11
        2889085,12/10/2010,16:12
        2889086,12/10/2010,16:12
        2889087,12/10/2010,16:12
        2889088,12/10/2010,16:13
        2891529,12/14/2010,16:21

    Read the article
