Search Results

Search found 6407 results on 257 pages for 'reorder columns'.


  • Insert Registration Data in MySQL using PHP

    - by J M 4
    I may not be asking this in the best way possible, but I will try my hardest. Thank you ahead of time for your help. I am creating an enrollment website which allows an individual OR a manager to enroll for medical testing services for professional athletes. I will NOT be using the site as a query DB which anybody can use to view information stored within the database. The information is instead simply stored, and passed along in CSV format to our network provider so they can use it as needed after the fact. There are two possible scenarios:

    Scenario 1 - Individual Enrollment: If an individual athlete chooses to enroll him/herself, they enter their personal information, submit their payment information (credit/bank account) for processing, and their information is stored in an online database as Athlete1.

    Scenario 2 - Manager Enrollment: If a manager chooses to enroll several athletes he manages/promotes for, he enters his personal information, then enters the personal information for each athlete he wishes to pay for (name, address, ssn, dob, etc.), then submits payment information for ALL athletes he is enrolling. This number can range from 1 single athlete up to 20 athletes per single enrollment (he can return and complete a follow-up enrollment for additional athletes).

    Initially, I was building the database to house ALL information, regardless of enrollment type, in a single table which housed over 400 columns (think 20 athletes with over 10 fields per athlete such as name, dob, ssn, etc.). Now that I think about it more, I believe creating multiple tables (manager(s), athlete(s)) may be a better idea here, but I am still not quite sure how to go about it for the following very important reasons:

    Issue 1: If I list the manager as the parent table, I am afraid the individually enrolling athlete will not show up in the primary table and will not be included in the overall registration file which needs to be sent on to the network providers.

    Issue 2: All athletes being enrolled by a manager are being stored in SESSION as F1FirstName, F2FirstName, where F1 and F2 relate to the id of the fighter. I am not sure, technically speaking, how to store multiple pieces of information within the same table under separate rows using PHP. For example, all athletes will have a first name. The very basic theory of what I am trying to do is: if number_of_athletes > 1, store F1FirstName in row 1, column 1 of table "Athletes"; store F1LastName in row 1, column 2 of table "Athletes"; store F2FirstName in row 2, column 1 of table "Athletes"; store F2LastName in row 2, column 2 of table "Athletes". Does this make sense? I know this question is very long and probably difficult, so I appreciate the guidance.

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, Python newbie; be patient/kind. Next, once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine whether a record is duplicated, I strip all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code (compress the full part to a compressed part and check for duplicates on the supplier code and compressed part combination) looks like this:

        import sys
        import re
        import time

        start=time.time()
        try:
            file1 = open("C:\Accounting\May Accounting\May.txt", "r")
        except IOError:
            print sys.stderr, "Cannot Open Read File"
            sys.exit(1)
        try:
            file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
        except IOError:
            print sys.stderr, "Cannot Open Write File"
            sys.exit(1)

        hdrList="CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
        file2.write(hdrList+chr(10))
        lines_seen=set()
        affirm="YES"
        records = file1.readlines()
        for record in records:
            fields = record.split(chr(124))
            if fields[0]=="CIGSupplier":
                continue  #If incoming file has a header line, skip it
            file2.write(fields[0]+"|"),  #Supplier Code
            file2.write(fields[1]+"|"),  #Full_Part
            file2.write(fields[2]+"|"),  #Part Status
            file2.write(fields[3]+"|"),  #Alias Flag
            file2.write(re.sub("[$\r\n]", "", fields[4])+"|"),  #Acquisition Flag
            file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1])+"|"),  #Compressed_Part
            dupechk=fields[0]+"|"+re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if dupechk not in lines_seen:
                file2.write(chr(10))
                lines_seen.add(dupechk)
            else:
                file2.write(affirm+chr(10))
        print "it took", time.time() - start, "seconds."
        file2.close()
        file1.close()

    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading/querying/exporting a file this size in Access takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file instead. Confusing enough? Thanks.

    Read the article

  • What is wrong with this C++ code?

    - by mr.bio
    Hi, I am a beginner and I have a problem: this code doesn't compile.

    main.cpp:

        #include <stdlib.h>
        #include "readdir.h"
        #include "mysql.h"
        #include "readimage.h"

        int main(int argc, char** argv) {
            if (argc>1){
                readdir(argv[1]);
                // test();
                return (EXIT_SUCCESS);
            }
            std::cout << "Bitte Pfad angeben !" << std::endl ;
            return (EXIT_FAILURE);
        }

    readimage.cpp:

        #include <Magick++.h>
        #include <iostream>
        #include <vector>
        using namespace Magick;
        using namespace std;

        void readImage(std::vector<string> &filenames) {
            for (unsigned int i = 0; i < filenames.size(); ++i) {
                try {
                    Image img("binary/" + filenames.at(i));
                    for (unsigned int y = 1; y < img.rows(); y++) {
                        for (unsigned int x = 1; x < img.columns(); x++) {
                            ColorRGB rgb(img.pixelColor(x, y));
                            // cout << "x: " << x << " y: " << y << " : " << rgb.red() << endl;
                        }
                    }
                    cout << "done " << i << endl;
                } catch (Magick::Exception & error) {
                    cerr << "Caught Magick++ exception: " << error.what() << endl;
                }
            }
        }

    readimage.h:

        #ifndef _READIMAGE_H
        #define _READIMAGE_H
        #include <Magick++.h>
        #include <iostream>
        #include <vector>
        #include <string>
        using namespace Magick;
        using namespace std;

        void readImage(vector<string> &filenames)

        #endif /* _READIMAGE_H */

    I want to compile it with this command: g++ main.cpp Magick++-config --cflags --cppflags --ldflags --libs readimage.cpp and I get this error message: main.cpp:5: error: expected initializer before 'int'. I have no clue why. :( Can somebody help me? :)

    Read the article

  • Create lags with a for-loop in R

    - by cptn
    I've got a data.frame with stock data of several companies (here it's only two). I want 10 additional columns in my stock data.frame df with lagged dates (from -5 days to +5 days) for both companies in my event data.frame. I'm using a for loop which is probably not the best solution, but it works partially. DATE <- c("01.01.2000","02.01.2000","03.01.2000","06.01.2000","07.01.2000","09.01.2000","10.01.2000","01.01.2000","02.01.2000","04.01.2000","06.01.2000","07.01.2000","09.01.2000","10.01.2000") RET <- c(-2.0,1.1,3,1.4,-0.2, 0.6, 0.1, -0.21, -1.2, 0.9, 0.3, -0.1,0.3,-0.12) COMP <- c("A","A","A","A","A","A","A","B","B","B","B","B","B","B") df <- data.frame(DATE, RET, COMP, stringsAsFactors=F) df # DATE RET COMP # 1 01.01.2000 -2.00 A # 2 02.01.2000 1.10 A # 3 03.01.2000 3.00 A # 4 06.01.2000 1.40 A # 5 07.01.2000 -0.20 A # 6 09.01.2000 0.60 A # 7 10.01.2000 0.10 A # 8 01.01.2000 -0.21 B # 9 02.01.2000 -1.20 B # 10 04.01.2000 0.90 B # 11 06.01.2000 0.30 B # 12 07.01.2000 -0.10 B # 13 09.01.2000 0.30 B # 14 10.01.2000 -0.12 B this loop works fine comp <- as.vector(unique(df$COMP)) mylist <- vector('list', length(comp)) # create lags in DATE for(i in 1:length(comp)) { print(i) comp_i <- comp[i] df_k <- df[df$COMP %in% comp_i, ] # all trading days of one firm df_k <- transform(df_k, DATEm1 = c(NA, head(DATE, -1)), DATEm2 = c(NA, NA, head(DATE, -2)), DATEm3 = c(NA, NA, NA, head(DATE, -3)), DATEm4 = c(NA, NA, NA, NA,head(DATE, -4)), DATEm5 = c(NA, NA, NA, NA, NA, head(DATE, -5)), DATEp1 = c(DATE[-1], NA)) #DATEp2 = c(DATE[-2], NA, NA), #DATEp3 = c(DATE[-3], NA, NA, NA), #DATEp4 = c(DATE[-4], NA, NA, NA, NA), #DATEp5 = c(DATE[-5], NA, NA, NA, NA, NA)) mylist[[i]] = df_k } df1 <- do.call(rbind, mylist) But if I add the lines with DATEp2, DATEp3, DATEp4, DATEp5. the code doesn't work. Can anybody tell me what I'm doing wrong here? Here the code with all the lagged dates. # create lags in DATE for(i in 1:length(comp)) { print(i) comp_i <- comp[i] df_k <- df[df$COMP %in% comp_i, ] # all trading days of one firm df_k <- transform(df_k, DATEm1 = c(NA, head(DATE, -1)), DATEm2 = c(NA, NA, head(DATE, -2)), DATEm3 = c(NA, NA, NA, head(DATE, -3)), DATEm4 = c(NA, NA, NA, NA,head(DATE, -4)), DATEm5 = c(NA, NA, NA, NA, NA, head(DATE, -5)), DATEp1 = c(DATE[-1], NA), DATEp2 = c(DATE[-2], NA, NA), DATEp3 = c(DATE[-3], NA, NA, NA), DATEp4 = c(DATE[-4], NA, NA, NA, NA), DATEp5 = c(DATE[-5], NA, NA, NA, NA, NA)) mylist[[i]] = df_k } df1 <- do.call(rbind, mylist)

    Read the article

  • Database Schema Versioning Strategies

    - by Jack Ryan
    I work on a project that uses a reasonably large database, the live version weighing in at somewhere around 60-80GB. The live database is the only real definitive source of our schema, and because of its size, duplicating this database is too slow to be done often. This means we have ended up developing our database schema in a pretty ad hoc way, using SQL Compare to migrate changes from dev DBs to the live system, and only wiping our dev DBs every month or two. I am hoping to get some pointers on how to improve our database development workflow so that we have a little more control. Some things to think about:

    - Currently nobody is really in charge of the database schema; all developers can change it if they need to, though generally these decisions are talked about before they are made.
    - There are stored procedures, functions, and views in the database. These should probably be dumped to files so they can be reloaded on every build.
    - Schema changes should probably be checked in as scripts. We have started to do this recently. However, all our scripts must then be numbered (because there may be dependencies between them) and must be re-runnable (because our build script currently runs them all in order). This makes them hard to read because they are full of conditionals that check whether tables or columns already exist. This is a step that is often forgotten by developers.
    - Getting a new database should be quick and easy. This is currently a big problem; it takes several hours to get a copy of last night's backup and restore it onto a dev machine.
    - Some mechanism needs to be in place to allow developers to update static data. We have tables that contain data that is never updated through the application but does potentially need to be changed when we do a new release (often this data drives dropdowns).
    - The whole thing needs to be runnable as part of a build script.

    Are there any tools that can be used to help with this? Eventually I would like to be at a point where a new DB can be built from scratch without copying any data from the live system. I don't mind writing some scripts to glue all the steps together, but each part should be easily editable so that we continue to use it rather than make changes directly on DBs.

    Read the article

  • EJB / JSF java.lang.ClassNotFoundException: com.ericsantanna.jobFC.dao.DAOFactoryRemote from [Module "com.sun.jsf-impl:main" from local module loader

    - by Eric Sant'Anna
    I'm in my first time using EJB and JSF, and I can't resolve this: 20:23:12,457 Grave [javax.enterprise.resource.webcontainer.jsf.application] (http-localhost-127.0.0.1-8081-2) com.ericsantanna.jobFC.dao.DAOFactoryRemote from [Module "com.sun.jsf-impl:main" from local module loader @439db2b2 (roots: C:\jboss-as-7.1.1.Final\modules)]: java.lang.ClassNotFoundException: com.ericsantanna.jobFC.dao.DAOFactoryRemote from [Module "com.sun.jsf-impl:main" from local module loader @439db2b2 (roots: C:\jboss-as-7.1.1.Final\modules)] I'm getting this when I do an action like a selectOneMenu or a commandButton click. DAOFactory.class @Singleton @Remote(DAOFactoryRemote.class) public class DAOFactory implements DAOFactoryRemote { private static final long serialVersionUID = 6030538139815885895L; @PersistenceContext private EntityManager entityManager; @EJB private JobDAORemote jobDAORemote; /** * Default constructor. */ public DAOFactory() { // TODO Auto-generated constructor stub } @Override public JobDAORemote getJobDAO() { JobDAO jobDAO = (JobDAO) jobDAORemote; jobDAO.setEntityManager(entityManager); return jobDAO; } JobDAO.class @Stateless @Remote(JobDAORemote.class) public class JobDAO implements JobDAORemote { private static final long serialVersionUID = -5483992924812255349L; private EntityManager entityManager; /** * Default constructor. */ public JobDAO() { // TODO Auto-generated constructor stub } @Override public void insert(Job t) { entityManager.persist(t); } @Override public Job findById(Class<Job> classe, Long id) { return entityManager.getReference(classe, id); } @Override public Job findByName(Class<Job> clazz, String name) { return entityManager .createQuery("SELECT job FROM " + clazz.getName() + " job WHERE job.nome = :nome" , Job.class) .setParameter("name", name) .getSingleResult(); } ... TriggerFormBean.class @ManagedBean @ViewScoped @Stateless public class TriggerFormBean implements Serializable { private static final long serialVersionUID = -3293560384606586480L; @EJB private DAOFactoryRemote daoFactory; @EJB private TriggerManagerRemote triggerManagerRemote; ... triggerForm.xhtml (a portion with problem) </p:layoutUnit> <p:layoutUnit id="eastConditionPanel" position="center" size="50%"> <p:panel header="Conditions to Release" style="width:97%;height:97%;"> <h:panelGrid columns="2" cellpadding="3"> <h:outputLabel value="Condition Name:" for="conditionName" /> <p:inputText id="conditionName" value="#{triggerFormBean.newCondition.name}" /> </h:panelGrid> <p:commandButton value="Add Condition" update="conditionsToReleaseList" id="addConditionToRelease" actionListener="#{triggerFormBean.addNewCondition}" /> <p:orderList id="conditionsToReleaseList" value="#{triggerFormBean.trigger.conditionsToRelease}" var="condition" controlsLocation="none" itemLabel="#{condition.name}" itemValue="#{condition}" iconOnly="true" style="width:97%;heigth:97%;"/> </p:panel> </p:layoutUnit> In TriggerFormBean.class if comments daoFactory we get the same exception with triggerManagerRemote, both annotated with @EJB. I'm don't understand the relationship between my DAOFactory and the "Module com.sun.jsf-impl:main"... Thanks.

    Read the article

  • C++ Stack Overflow

    - by PhilMAN
    Here is some code: void main() { GameEngine ge("phil", "anotherguy"); string response; do { ge.playGame(); cout << endl << "Do you want to (r)eplay the same battle, (s)tart a new battle, or (q)uit? "; cin >> response; } while(response == "r" || response == "R" || response == "s" || response == "S" ); } GameEngine::GameEngine(string name1, string name2) { p1Name = name1; p2Name = name2; } void GameEngine::playGame() { cout << "PLAY GAME" << endl; Army p1, p2; Battlefield testField; RuleSet rs; int xSize = 13; // Number of rows int ySize = 13; // Number of columns loadData(p1, p2, testField, rs, xSize, ySize); ... } void GameEngine::loadData(Army& p1, Army& p2, Battlefield& testField, RuleSet& rs, int& xSize, int& ySize) { string terrain = BattlefieldUtils::pickTerrain(); string armySplit[14];//id index 1 string ruleSplit[19];//in index 7 string armyP1, armyP2, ruleSet; Skill p1Skills[8]; Skill p2Skills[8]; CreatureStack p1Stacks[20]; CreatureStack p2Stacks[20]; ... } CreatureStack(){quantity = 0; isLive = false; id = -1;}; Army(){}; Battlefield(){}; RuleSet(){}; I have posted every line of code that executes until the program crashes. This code ran fine for a long time, I added some stuff that does not even execute until way after the code I have posted here, and bam stack overflow that occurs at GameEngine::loadData() line: CreatureStack p2Stacks[20]; will not go away. What am I doing wrong here? Is that all the stack can handle? I increased the stack size in Visual Studio and got the error to go away, but that slowed things down considerably, so I would rather just get to the source of the issue and fix that.

    Read the article

  • Hibernate annotation - extending base class - values are not being set - strange error

    - by gt_ebuddy
    I was following Hibernate: Use a Base Class to Map Common Fields and openjpa inheritance tutorial to put common columns like ID, lastModifiedDate etc in base table. My annotated mappings are as follow : BaseTable : @MappedSuperclass public abstract class BaseTable { @Id @GeneratedValue @Column(name = "id") private int id; @Column(name = "lastmodifieddate") private Date lastModifiedDate; ... Person table - @Entity @Table(name = "Person ") public class Person extends BaseTable implements Serializable{ ... Create statement generated : create table Person (id integer not null auto_increment, lastmodifieddate datetime, name varchar(255), primary key (id)) ; After I save a Person object to db, Person p = new Person(); p.setName("Test"); p.setLastModifiedDate(new Date()); .. getSession().save(p); I am setting the date field but, it is saving the record with generated ID and LastModifiedDate = null, and Name="Test". Insert Statement : insert into Person (lastmodifieddate, name) values (?, ?) binding parameter [1] as [TIMESTAMP] - <null> binding parameter [2] as [VARCHAR] - Test Read by ID query : When I do hibernate query (get By ID) as below, It reads person by given ID. Criteria c = getSession().createCriteria(Person.class); c.add(Restrictions.eq("id", id)); Person person= c.list().get(0); //person has generated ID, LastModifiedDate is null select query select person0_.id as id8_, person0_.lastmodifieddate as lastmodi8_, person0_.name as person8_ from Person person0_ - Found [1] as column [id8_] - Found [null] as column [lastmodi8_] - Found [Test] as column [person8_] ReadAll query : //read all Query query = getSession().createQuery("from " + Person.class.getName()); List allPersons=query.list(); Corresponding SQL for read all select query select person0_.id as id8_, person0_.lastmodifieddate as lastmodi8_, person0_.name as person8_ from Person person0_ - Found [1] as column [id8_] - Found [null] as column [lastmodi8_] - Found [Test] as column [person8_] - Found [2] as column [id8_] - Found [null] as column [lastmodi8_] - Found [Test2] as column [person8_] But when I print out the list in console, its being more weird. it is selecting List of Person object with ID fields = all 0 (why all 0 ?) LastModifiedDate = null Name fields have valid values I don't know whats wrong here. Could you please look at it? FYI, My Hibernate-core version : 4.1.2, MySQL Connector version : 5.1.9 . In summary, There are two issues here Why I am getting All ID Fields =0 when using read all? Why the LastModifiedDate is not being inserted?
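
    For comparison only, a commonly used shape for such a mapped superclass keeps the inherited fields together with accessors and an explicit @Temporal mapping for the java.util.Date column. This is a generic JPA sketch with an invented class name, not a diagnosis of the behaviour described above:

        import java.util.Date;
        import javax.persistence.Column;
        import javax.persistence.GeneratedValue;
        import javax.persistence.Id;
        import javax.persistence.MappedSuperclass;
        import javax.persistence.Temporal;
        import javax.persistence.TemporalType;

        // Generic base class sketch: id and audit column shared by all entities.
        @MappedSuperclass
        public abstract class AuditedEntity {

            @Id
            @GeneratedValue
            @Column(name = "id")
            private int id;

            @Temporal(TemporalType.TIMESTAMP)   // java.util.Date -> datetime column
            @Column(name = "lastmodifieddate")
            private Date lastModifiedDate;

            public int getId() { return id; }

            public Date getLastModifiedDate() { return lastModifiedDate; }

            public void setLastModifiedDate(Date lastModifiedDate) {
                this.lastModifiedDate = lastModifiedDate;
            }
        }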

    Read the article

  • How does SQLite on Android handle long strings?

    - by Levara
    Hi! I'm wondering how Android's implementation of SQLite handles long strings. Reading the online SQLite documentation, it says that strings in SQLite are limited to 1 million characters. My strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting the text, I'm having a problem saving it to the database. I have 2 tables in the database, feeds and articles. RSS feeds are correctly saved and retrieved from the feeds table, but when saving to the articles table, logcat says that it cannot save the extracted text to its column. I don't know if other columns are causing problems too; there is no mention of them in logcat. I'm wondering, since the text is from an article on the web, are characters like ", ', and ; creating problems? Is Android automatically escaping them, or do I have to do that? I'm using an insert technique similar to the one in the Notepad tutorial:

        public long insertArticle(long feedid, String title, String link, String description,
                String h1, String h2, String h3, String p, String image, long date) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_FEEDID, feedid);
            initialValues.put(KEY_TITLE, title);
            initialValues.put(KEY_LINK, link);
            initialValues.put(KEY_DESCRIPTION, description);
            initialValues.put(KEY_H1, h1);
            initialValues.put(KEY_H2, h2);
            initialValues.put(KEY_H3, h3);
            initialValues.put(KEY_P, p);
            initialValues.put(KEY_IMAGE, image);
            initialValues.put(KEY_DATE, date);
            return mDb.insert(DATABASE_TABLE_ARTICLES, null, initialValues);
        }

    Column p is for the extracted text; h1, h2 and h3 are for headers from the page. Logcat reports only column p to be the problem. The table is created with the following statement:

        private static final String DATABASE_CREATE_ARTICLES =
            "create table articles( _id integer primary key autoincrement, feedid integer, title text, link text not null, description text,"
            + "h1 text, h2 text, h3 text, p text, image text, date integer);";

    Thanks!
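
    Since the values go in through ContentValues, they are bound to the generated INSERT as parameters rather than spliced into the SQL string, so quote characters in the article text normally do not need manual escaping. For illustration only (not the asker's code), a minimal sketch of the same idea written with an explicitly compiled, parameter-bound statement; the column subset shown here is an assumption:

        // Hypothetical helper: binds the long extracted text as a parameter,
        // so characters like " ' ; never become part of the SQL text itself.
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteStatement;

        public class ArticleWriter {
            public static long insertBody(SQLiteDatabase db, long feedId, String link, String p) {
                SQLiteStatement stmt = db.compileStatement(
                        "INSERT INTO articles (feedid, link, p) VALUES (?, ?, ?)");
                try {
                    stmt.bindLong(1, feedId);
                    stmt.bindString(2, link);
                    stmt.bindString(3, p);        // full extracted text, no escaping required
                    return stmt.executeInsert();  // row id of the inserted row
                } finally {
                    stmt.close();
                }
            }
        }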

    Read the article

  • Rails Active Record find(:all, :order => ) issue.

    - by CodingWithoutComments
    I seem to be unable to use :order_by for more than one column at a time. For example, I have a "Show" model with date and attending columns. If I run the following code: @shows = Show.find(:all, :order => "date") I get the following results: [#<Show id: 7, date: "2009-04-18", attending: 2>, #<Show id: 1, date: "2009-04-18", attending: 78>, #<Show id: 2, date: "2009-04-19", attending: 91>, #<Show id: 3, date: "2009-04-20", attending: 16>, #<Show id: 4, date: "2009-04-21", attending: 136>] If I run the following code: @shows = Show.find(:all, :order => "attending DESC") [#<Show id: 4, date: "2009-04-21", attending: 136>, #<Show id: 2, date: "2009-04-19", attending: 91>, #<Show id: 1, date: "2009-04-18", attending: 78>, #<Show id: 3, date: "2009-04-20", attending: 16>, #<Show id: 7, date: "2009-04-18", attending: 2>] But, if I run: @shows = Show.find(:all, :order => "date, attending DESC") OR @shows = Show.find(:all, :order => "date, attending ASC") OR @shows = Show.find(:all, :order => "date ASC, attending DESC") I get the same results as only sorting by date: [#<Show id: 7, date: "2009-04-18", attending: 2>, #<Show id: 1, date: "2009-04-18", attending: 78>, #<Show id: 2, date: "2009-04-19", attending: 91>, #<Show id: 3, date: "2009-04-20", attending: 16>, #<Show id: 4, date: "2009-04-21", attending: 136>] Where as, I want to get these results: [#<Show id: 1, date: "2009-04-18", attending: 78>, #<Show id: 7, date: "2009-04-18", attending: 2>, #<Show id: 2, date: "2009-04-19", attending: 91>, #<Show id: 3, date: "2009-04-20", attending: 16>, #<Show id: 4, date: "2009-04-21", attending: 136>] This is the query being generated from the logs: [4;35;1mUser Load (0.6ms)[0m [0mSELECT * FROM "users" WHERE ("users"."id" = 1) LIMIT 1[0m [4;36;1mShow Load (3.0ms)[0m [0;1mSELECT * FROM "shows" ORDER BY date ASC, attending DESC[0m [4;35;1mUser Load (0.6ms)[0m [0mSELECT * FROM "users" WHERE ("users"."id" = 1) [0m Finally, here is my model: create_table "shows", :force => true do |t| t.string "headliner" t.string "openers" t.string "venue" t.date "date" t.text "description" t.datetime "created_at" t.datetime "updated_at" t.decimal "price" t.time "showtime" t.integer "attending", :default => 0 t.string "time" end What am I missing? What am I doing wrong? UPDATE: Thanks for all your help, but it seems that all of you were stumped as much as I was. What solved the problem was actually switching databases. I switched from the default sqlite3 to mysql.

    Read the article

  • Calculating Growth-Rates by applying log-differences

    - by mropa
    I am trying to transform my data.frame by calculating the log-differences of each column and controlling for the rows id. So basically I like to calculate the growth rates for each id's variable. So here is a random df with an id column, a time period colum p and three variable columns: df <- data.frame (id = c("a","a","a","c","c","d","d","d","d","d"), p = c(1,2,3,1,2,1,2,3,4,5), var1 = rnorm(10, 5), var2 = rnorm(10, 5), var3 = rnorm(10, 5) ) df id p var1 var2 var3 1 a 1 5.375797 4.110324 5.773473 2 a 2 4.574700 6.541862 6.116153 3 a 3 3.029428 4.931924 5.631847 4 c 1 5.375855 4.181034 5.756510 5 c 2 5.067131 6.053009 6.746442 6 d 1 3.846438 4.515268 6.920389 7 d 2 4.910792 5.525340 4.625942 8 d 3 6.410238 5.138040 7.404533 9 d 4 4.637469 3.522542 3.661668 10 d 5 5.519138 4.599829 5.566892 Now I have written a function which does exactly what I want BUT I had to take a detour which is possibly unnecessary and can be removed. However, somehow I am not able to locate the shortcut. Here is the function and the output for the posted data frame: fct.logDiff <- function (df) { df.log <- dlply (df, "code", function(x) data.frame (p = x$p, log(x[, -c(1,2)]))) list.nalog <- llply (df.log, function(x) data.frame (p = x$p, rbind(NA, sapply(x[,-1], diff)))) ldply (list.nalog, data.frame) } fct.logDiff(df) id p var1 var2 var3 1 a 1 NA NA NA 2 a 2 -0.16136569 0.46472004 0.05765945 3 a 3 -0.41216720 -0.28249264 -0.08249587 4 c 1 NA NA NA 5 c 2 -0.05914281 0.36999681 0.15868378 6 d 1 NA NA NA 7 d 2 0.24428771 0.20188025 -0.40279188 8 d 3 0.26646102 -0.07267311 0.47041227 9 d 4 -0.32372771 -0.37748866 -0.70417351 10 d 5 0.17405309 0.26683625 0.41891802 The trouble is due to the added NA-rows. I don't want to collapse the frame and reduce it, which would be automatically done by the diff() function. So I had 10 rows in my original frame and am keeping the same amount of rows after the transformation. In order to keep the same length I had to add some NAs. I have taken a detour by transforming the data.frame into a list, add the NAs, and afterwards transform the list back into a data.frame. That looks tedious. Any ideas to avoid the data.frame-list-data.frame class transformation and optimize the function?

    Read the article

  • Converting text into numeric in xls using Java

    - by Work World
    When I create an Excel sheet through Java, the column which has the NUMBER datatype in the Oracle table gets converted to text format in Excel. I want it to remain in number format. Below is my code snippet for the Excel creation:

        FileWriter fw = new FileWriter(tempFile.getAbsoluteFile(),true);
        // BufferedWriter bw = new BufferedWriter(fw);
        HSSFWorkbook wb = new HSSFWorkbook();
        HSSFSheet sheet = wb.createSheet("Excel Sheet");
        //Column Size of excel
        for(int i=0;i<10;i++) {
            sheet.setColumnWidth((short) i, (short)8000);
        }
        String userSelectedValues=result;
        HSSFCellStyle style = wb.createCellStyle();
        ///HSSFDataFormat df = wb.createDataFormat();
        style.setFillForegroundColor(HSSFColor.GREY_25_PERCENT.index);
        style.setFillPattern(HSSFCellStyle.SOLID_FOREGROUND);
        //style.setDataFormat(df.getFormat("0"));
        HSSFFont font = wb.createFont();
        font.setColor(HSSFColor.BLACK.index);
        font.setBoldweight((short) 700);
        style.setFont(font);
        int selecteditems=userSelectedValues.split(",").length;
        // HSSFRow rowhead = sheet.createRow((short)0);
        //System.out.println("**************selecteditems************" +selecteditems);
        for(int k=0; k<selecteditems;k++) {
            HSSFRow rowhead = sheet.createRow((short)k);
            if(userSelectedValues.contains("O_UID")) {
                HSSFCell cell0 = rowhead.createCell((short) k);
                cell0.setCellValue("O UID");
                cell0.setCellStyle(style);
                k=k+1;
            }
            ///some columns here..
        }
        int index=1;
        for (int i = 0; i<dataBeanList.size(); i++) {
            odb=(OppDataBean)dataBeanList.get(i);
            HSSFRow row = sheet.createRow((short)index);
            for(int j=0;j<selecteditems;j++) {
                if(userSelectedValues.contains("O_UID")) {
                    row.createCell((short)j).setCellValue(odb.getUID());
                    j=j+1;
                }
            }
            index++;
        }
        FileOutputStream fileOut = null;
        try {
            fileOut = new FileOutputStream(path.toString()+"/temp.xls");
        } catch (FileNotFoundException e1) {
            // TODO Auto-generated catch block
            e1.printStackTrace();
        }
        try {
            wb.write(fileOut);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        try {
            fileOut.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
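
    Whether Excel treats a cell as text or as a number follows from which setCellValue overload is called: passing a String produces a text cell, while passing a double produces a numeric one. As an illustration only (not the original code, and assuming getUID() returns a numeric string; the format string "0" is just an example), a minimal HSSF sketch for writing a numeric cell:

        import org.apache.poi.hssf.usermodel.HSSFCell;
        import org.apache.poi.hssf.usermodel.HSSFCellStyle;
        import org.apache.poi.hssf.usermodel.HSSFDataFormat;
        import org.apache.poi.hssf.usermodel.HSSFRow;
        import org.apache.poi.hssf.usermodel.HSSFSheet;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;

        public class NumericCellSketch {
            public static void writeNumeric(HSSFWorkbook wb, HSSFSheet sheet, int rowIndex, String rawValue) {
                HSSFDataFormat df = wb.createDataFormat();
                HSSFCellStyle numberStyle = wb.createCellStyle();
                numberStyle.setDataFormat(df.getFormat("0"));     // plain integer display

                HSSFRow row = sheet.createRow((short) rowIndex);
                HSSFCell cell = row.createCell((short) 0);
                cell.setCellValue(Double.parseDouble(rawValue));  // numeric overload, not the String one
                cell.setCellStyle(numberStyle);
            }
        }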

    Read the article

  • Table row height in Internet Explorer

    - by Fritz H
    I have the following table: <table> <tr> <td style="height: 7px; width: 7px"> A1 </td> <td style="height: 7px"> B1 </td> <td style="height: 7px; width: 7px"> C1 </td> </tr> <tr> <td style="width: 7px"> A2 </td> <td> B2 </td> <td style="width: 7px"> C2 </td> </tr> <tr> <td style="height: 7px; width: 7px"> A3 </td> <td style="height: 7px"> B3 </td> <td style="height: 7px; width: 7px"> C3 </td> </tr> </table> The basic idea is that the first row must be 7 pixels high. The left- and rightmost cells (A1 and C1) must be 7px wide, and the middle cell (B1) must scale according to the width of the table. The same goes for the bottom row (A3, B3, C3). The middle row, however, needs to scale in height - in other words, it needs to be (tableheight - 14px). The left- and rightmost cells (A2, C2) need to be 7 pixels wide. An example: 7px x 7px |------|-------------------------|------| --- +------+-------------------------+------+ | | | | | | 7px | | | | | | | | | --- +------+-------------------------+------+ | | | | | | | | | | | | | | | | | | | | | y | | | | | | | | | | | | | | | | | | | | | | | | --- +------+-------------------------+------+ | | | | | | 7px | | | | | | | | | --- +------+-------------------------+------+ HOWEVER: In Internet Explorer, the widths work fine (columns A and C are 7px, column B scales dynamically) - but the heights don't. Rows 1, 2 and 3 turn out to be exactly 33% of the height of the table, no matter what I do. Unfortunately I have to use this table, so replacing it with a set of DIVs is not an option. I have the following DOCTYPE: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> I need to keep this, as some other elements on the page rely on some complex CSS-based layouts. Can anyone point me in the right direction to whip this into shape for IE? EDIT: Should have mentioned earlier - this table is resized on the fly using javascript.

    Read the article

  • Complex multiple join query across 3 tables

    - by Keir Simmons
    I have 3 tables: shops, PRIMARY KEY cid,zbid shop_items, PRIMARY KEY id shop_inventory, PRIMARY KEY id shops a is related to shop_items b by the following: a.cid=b.cid AND a.zbid=b.szbid shops is not directly related to shop_inventory shop_items b is related to shop_inventory c by the following: b.cid=c.cid AND b.id=c.iid Now, I would like to run a query which returns a.* (all columns from shops). That would be: SELECT a.* FROM shops a WHERE a.cid=1 AND a.zbid!=0 Note that the WHERE clause is necessary. Next, I want to return the number of items in each shop: SELECT a.*, COUNT(b.id) items FROM shops a LEFT JOIN shop_items b ON b.cid=a.cid AND b.szbid=a.zbid WHERE a.cid=1 GROUP BY b.szbid,b.cid As you can see, I have added a GROUP BY clause for this to work. Next, I want to return the average price of each item in the shop. This isn't too hard: SELECT a.*, COUNT(b.id) items, AVG(COALESCE(b.price,0)) average_price FROM shops a LEFT JOIN shop_items b ON b.cid=a.cid AND b.szbid=a.zbid WHERE a.cid=1 GROUP BY b.szbid,b.cid My next criteria is where it gets complicated. I also want to return the unique buyers for each shop. This can be done by querying shop_inventory c, getting the COUNT(DISTINCT c.zbid). Now remember how these tables are related; this should only be done for the rows in c which relate to an item in b which is owned by the respective shop, a. I tried doing the following: SELECT a.*, COUNT(b.id) items, AVG(COALESCE(b.price,0)) average_price, COUNT(DISTINCT c.zbid) FROM shops a LEFT JOIN shop_items b ON b.cid=a.cid AND b.szbid=a.zbid LEFT JOIN shop_inventory c ON c.cid=b.cid AND c.iid=b.id WHERE a.cid=1 GROUP BY b.szbid,b.cid However, this did not work as it messed up the items value. What is the proper way to achieve this result? I also want to be able to return the total number of purchases made in each shop. This would be done by looking at shop_inventory c and adding up the c.quantity value for each shop. How would I add that in as well?

    Read the article

  • How to use an adjacency matrix to determine which rows to 'pass' to a function in r?

    - by dubhousing
    New to R, and I have a long-ish question: I have a shapefile/map, and I'm aiming to calculate a certain index for every polygon in that map, based on attributes of that polygon and each polygon that neighbors it. I have an adjacency matrix -- which I think is the same as a "1st-order queen contiguity weights matrix", although I'm not sure -- that describes which polygons border which other polygons, e.g., POLYID A B C D E A 0 0 1 0 1 B 0 0 1 0 0 C 1 1 0 1 0 D 0 0 1 0 1 E 1 0 0 1 0 The above indicates, for instance, that polygons 'C' and 'E' adjoin polygon 'A'; polygon 'B' adjoins only polygon 'C', etc. The attribute table I have has one polygon per row: POLYID TOT L10K 10_15K 15_20K ... A 500 24 30 77 ... Where TOT, L10K, etc. are the variables I use to calculate an index. There are 525 polygons/rows in my data, so I'd like to use the adjacency matrix to determine which rows' attributes to incorporate into the calculation of the index of interest. For now, I can calculate the index when I subset the rows that correspond to one 'bundle' of neighboring polygons, and then use a loop (if it's of interest, I'm calculating the Centile Gap Index, a measure of local income segregation). E.g., subsetting the 'neighborhood' of the Detroit City Schools: Detroit <- UNSD00[c(142,150,164,221,226,236,295,327,157,177,178,364,233,373,418,424,449,451,487),] Then record the marginal column proportions and a running total: catprops <- vector() for(i in 4:19) { catprops[(i-3)]<-sum(Detroit[,i])/sum(Detroit[,3]) } catprops <- as.data.frame(catprops) catprops[,2]<-cumsum(catprops[,1]) Columns 4:19 are the necessary ones in the attribute table. Then I use the following code to calculate the index -- note that the loop has "i in 1:19" because the Detroit subset has 19 polygons. cgidistsum <- 0 for(i in 1:19) { pranks <- vector() for(j in 4:19) { if (Detroit[i,j]==0) pranks <- append(pranks,0) else if (j == 4) pranks <- append(pranks,seq(0,catprops[1,2],by=catprops[1,2]/Detroit[i,j])) else pranks <- append(pranks,seq(catprops[j-4,2],catprops[j-3,2],by=catprops[j-3,1]/Detroit[i,j])) } distpranks <- vector() distpranks<-abs(pranks-median(pranks)) cgidistsum <- cgidistsum + sum(distpranks) } cgi <- (.25-(cgidistsum/sum(Detroit[,3])))/.25 My apologies if I've provided more information than is necessary. I would really like to exploit the adjacency matrix in order to calculate the CGI for each 'bundle' of these rows. If you happen to know how I could started with this, that would be great. and my apologies for any novice mistakes, I'm new to R!

    Read the article

  • Java array of arry [matrix] of an integer partition with fixed term

    - by user335209
    Hello, for my study purpose I need to build an array of array filled with the partitions of an integer with fixed term. That is given an integer, suppose 10 and given the fixed number of terms, suppose 5 I need to populate an array like this 10 0 0 0 0 9 0 0 0 1 8 0 0 0 2 7 0 0 0 3 ............ 9 0 0 1 0 8 0 0 1 1 ............. 7 0 1 1 0 6 0 1 1 1 ............ ........... 0 6 1 1 1 ............. 0 0 0 0 10 am pretty new to Java and am getting confused with all the for loops. Right now my code can do the partition of the integer but unfortunately it is not with fixed term public class Partition { private static int[] riga; private static void printPartition(int[] p, int n) { for (int i= 0; i < n; i++) System.out.print(p[i]+" "); System.out.println(); } private static void partition(int[] p, int n, int m, int i) { if (n == 0) printPartition(p, i); else for (int k= m; k > 0; k--) { p[i]= k; partition(p, n-k, n-k, i+1); } } public static void main(String[] args) { riga = new int[6]; for(int i = 0; i<riga.length; i++){ riga[i] = 0; } partition(riga, 6, 1, 0); } } the output I get it from is like this: 1 5 1 4 1 1 3 2 1 3 1 1 1 2 3 1 2 2 1 1 2 1 2 1 2 1 1 1 what i'm actually trying to understand how to proceed is to have it with a fixed terms which would be the columns of my array. So, am stuck with trying to get a way to make it less dynamic. Any help?

    Read the article

  • error C2059: syntax error : ']', I can't figure out why this is coming up in C++

    - by user320950
    void display_totals(); int exam1[100][3];// array that can hold 100 numbers for 1st column int exam2[100][3];// array that can hold 100 numbers for 2nd column int exam3[100][3];// array that can hold 100 numbers for 3rd column int main() { int go,go2,go3; go=read_file_in_array; go2= calculate_total(exam1[],exam2[],exam3[]); go3=display_totals; cout << go,go2,go3; return 0; } void display_totals() { int grade_total; grade_total=calculate_total(exam1[],exam2[],exam3[]); } int calculate_total(int exam1[],int exam2[],int exam3[]) { int calc_tot,above90=0, above80=0, above70=0, above60=0,i,j; calc_tot=read_file_in_array(exam[100][3]); exam1[][]=exam[100][3]; exam2[][]=exam[100][3]; exam3[][]=exam[100][3]; for(i=0;i<100;i++); { if(exam1[i] <=90 && exam1[i] >=100) { above90++; cout << above90; } } return exam1[i],exam2[i],exam3[i]; } int read_file_in_array(int exam[100][3]) { ifstream infile; int num, i=0,j=0; infile.open("grades.txt");// file containing numbers in 3 columns if(infile.fail()) // checks to see if file opended { cout << "error" << endl; } while(!infile.eof()) // reads file to end of line { for(i=0;i<100;i++); // array numbers less than 100 { for(j=0;j<3;j++); // while reading get 1st array or element infile >> exam[i][j]; cout << exam[i][j] << endl; } } infile.close(); return exam[i][j]; }

    Read the article

  • Set required attribute of two h:selectManyCheckbox

    - by BRabbit27
    I have two h:selectManyCheckBox with the required attribute set to true. What I want is that the required attribute of both of the components work together. Only display the error message if and only if both of the selected items list are empty. Right now my problem is that the message displays if either one of them is empty. Here's my code: <rich:panel> <f:facet name="header"> <h:outputText value="Actualización de catálogos"/> </f:facet> <h:panelGrid columns="4"> <h:outputLabel for="actualizarCatalogoPEC" value="Actualizar catálogos PEC"/> <h:selectBooleanCheckbox id="actualizarCatalogoPEC" value="#{administrationBean.actualizaTodosPecChecked}"> <f:ajax event="click" render="todosCatalogosPEC"/> </h:selectBooleanCheckbox> <h:outputLabel for="actualizarCatalogoSAGARPA" value="Actualizar catálogos SAGARPA"/> <h:selectBooleanCheckbox id="actualizarCatalogoSAGARPA" value="#{administrationBean.actualizaTodosSagarpaChecked}"> <f:ajax event="click" render="todosCatalogosSAGARPA"/> </h:selectBooleanCheckbox> <a4j:outputPanel id="todosCatalogosPEC"> <h:selectManyCheckbox id="selectCatalogosPEC" disabled="#{administrationBean.actualizaTodosPecChecked}" required="true" value="#{administrationBean.catalogosPecSeleccionados}" requiredMessage="Seleccione al menos un catálogo" layout="pageDirection"> <f:selectItems value="#{administrationBean.catalogosPecOptions}"/> </h:selectManyCheckbox> </a4j:outputPanel> <h:panelGroup/> <a4j:outputPanel id="todosCatalogosSAGARPA"> <h:selectManyCheckbox id="selectCatalogosSAGARPA" disabled="#{administrationBean.actualizaTodosSagarpaChecked}" required="true" value="#{administrationBean.catalogosSagarpaSeleccionados}" requiredMessage="Seleccione al menos un catálogo" layout="pageDirection" > <f:selectItems value="#{administrationBean.catalogosSagarpaOptions}"/> </h:selectManyCheckbox> </a4j:outputPanel> <h:panelGroup/> <rich:message id="messageCatalogosPEC" for="selectCatalogosPEC"/> <h:panelGroup/> <rich:message id="messageCatalogosSAGARPA" for="selectCatalogosSAGARPA"/> <h:panelGroup/> <a4j:commandButton value="Actualizar catálogos" render="messageCatalogosPEC" action="#{administrationBean.doActualizaCatalogos}"/> </h:panelGrid> </rich:panel> Cheers

    Read the article

  • User control events not getting to their handlers

    - by PhrkOnLsh
    I am trying to create a user control to wrap around the Membership API (A set of custom Gridviews to display the data better) and, while the code and the controls worked fine in the page, when I moved them to an .ascx, the events stopped firing to it. <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="CustomMembership.ascx.cs" Inherits="CCGlink.CustomMembership" %> <asp:Panel ID="mainPnl" runat="server"> <asp:Label id="lblError" ForeColor="Red" Font-Bold="true" runat="server" /> <asp:GridView id="grdUsers" HeaderStyle-cssclass="<%# _headercss %>" RowStyle-cssclass="<%# _rowcss %>" AlternatingRowStyle-cssclass="<%# _alternatingcss %>" OnRowUpdating="grdUsers_RowUpdating" OnRowDeleting="grdUsers_RowDeleting" OnRowCancelingEdit="grdUsers_cancelEdit" autogeneratecolumns="false" allowsorting="true" AllowPaging="true" EmptyDataText="No users..." pagesize="<%# PageSizeForBoth %>" runat="server"> <!-- ...columns... --> </asp:GridView> <asp:Button id="btnAllDetails" onclick="btnAllDetails_clicked" text="Full Info" runat="server" /> <asp:GridView DataKeyNames="UserName" HeaderStyle-cssclass="<%# _headercss %>" RowStyle-cssclass="<%# _rowcss %>" AlternatingRowStyle-cssclass="<%# _alternatingcss %>" id="grdAllDetails" visible="false" allowsorting="true" EmptyDataText="No users in DB." pagesize="<%# PageSizeForBoth %>" runat="server" /> <asp:Button id="btnDoneAllDetails" onclick="btnAllDetails_clicked" text="Done." Visible="false" runat="server" /> </asp:Panel> However, none of the events in the first two controls (the gridview grdUsers and the button btnAllDetails) simply do NOT occur, I have verified this in the debugger. If they occured just fine in the aspx page, why do they die on moving to the ascx? My code in the aspx now is: <div class="admin-right"> <asp:ScriptManager ID="sm1" runat="server" /> <h1>User Management</h1> <div class="admin-right-users"> <asp:UpdatePanel ID="up1" runat="server"> <ContentTemplate> <cm1:CustomMembership id="showUsers" PageSizeForBoth="9" AlternatingRowStylecssclass="alternating" RowStylecssclass="row" DataSource="srcUsers" HeaderStylecssclass="header" runat="server" /> </ContentTemplate> </asp:UpdatePanel> </div> Thanks.

    Read the article

  • Code to generate random numbers in C++

    - by user1678927
    Basically I have to write a program to generate random numbers to simulate the rolling of a pair of dice. This program should be constructed in multiple files. The main function should be in one file, the other functions should be in a second source file, and their prototypes should be in a header file. First I write a short function that returns a random value between 1 and 6 to simulate the rolling of a single 6-sided die.Second, i write a function that pretends to roll a pair of dice by calling this function twice. My program starts by asking the user how many rolls should be made. Then I write a function to simulate rolling the dice this many times, keeping a count of exactly how many times the values 2,3,4,5,6,7,8,9,10,11,12(each number is the sum of a pair of dice) occur in an array. Later I write a function to display a small bar chart using these counts that ideally would look something like below for a sample of 144 rolls, where the number of asterisks printed corresponds to the count: 2 3 4 5 6 7 8 9 10 11 12 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Next, to see how well the random number generator is doing, I write a function to compute the average value rolled. Compare this to the ideal average of 7. Also, print out a small table showing the counts of each roll made by the program, the ideal count based on the frequencies above given the total number of rolls, and the difference between these values in separate columns. This is my incomplete code so far: "Compiler visual studio 2010" int rolling(){ //Function that returns a random value between 1 and 6 rand(unsigned(time(NULL))); int dice = 1 + (rand() %6); return dice; } int roll_dice(int num1,int num2){ //it calls 'rolling function' twice int result1,result2; num1 = rolling(); num2 = rolling(); result1 = num1; result2 = num2; return result1,result2; } int main(void){ int times,i,sum,n1,n2; int c1,c2,c3,c4,c5,c6,c7,c8,c9,c10,c11;//counters for each sum printf("Please enter how many times you want to roll the dice.\n") scanf_s("%i",&times); I pretend to use counters to count each sum and store the number(the count) in an array. I know i need a loop (for) and some conditional statements (if) but m main problem is to get the values from roll_dice and store them in n1 and n2 so then i can sum them up and store the sum in 'sum'.

    Read the article

  • Allow users to pull temporary data then delete table?

    - by JM4
    I don't know the best way to title this question but am trying to accomplish the following goal: When a client logs into their profile, they are presented with a link to download data from an existing database in CSV format. The process works, however, I would like for this data to be 'fresh' each time they click the link so my plan was - once a user has clicked the link and downloaded the CSV file, the database table would 'erase' all of its data and start fresh (be empty) until the next set of data populated it. My EXISTING CSV creation code: <?php $host = 'localhost'; $user = 'username'; $pass = 'password'; $db = 'database'; $table = 'tablename'; $file = 'export'; $link = mysql_connect($host, $user, $pass) or die("Can not connect." . mysql_error()); mysql_select_db($db) or die("Can not connect."); $result = mysql_query("SHOW COLUMNS FROM ".$table.""); $i = 0; if (mysql_num_rows($result) > 0) { while ($row = mysql_fetch_assoc($result)) { $csv_output .= $row['Field'].", "; $i++; } } $csv_output .= "\n"; $values = mysql_query("SELECT * FROM ".$table.""); while ($rowr = mysql_fetch_row($values)) { for ($j=0;$j<$i;$j++) { $csv_output .= '"'.$rowr[$j].'",'; } $csv_output .= "\n"; } $filename = $file."_".date("Y-m-d",time()); header("Content-type: application/vnd.ms-excel"); header("Content-disposition: csv" . date("Y-m-d") . ".csv"); header( "Content-disposition: filename=".$filename.".csv"); print $csv_output; exit; ?> any ideas?

    Read the article

  • dojo dgrid tree, subrows in wrong position

    - by Ventura
    I have a dgrid, working with tree column plugin. Every time that the user click on the tree, I call the server, catch the subrows(json) and bind it. But when it happens, these subrows are show in wrong position, like the image bellow. The most strange is when I change the pagination, after go back to first page, the subrows stay on the correct place. (please, tell me if is possible to understand my english, then I can try to improve the text) My dgrid code: var CustomGrid = declare([OnDemandGrid, Keyboard, Selection, Pagination]); var grid = new CustomGrid({ columns: [ selector({label: "#", disabled: function(object){ return object.type == 'DOCx'; }}, "radio"), {label:'Id', field:'id', sortable: false}, tree({label: "Title", field:"title", sortable: true, indentWidth:20, allowDuplicates:true}), //{label:'Title', field:'title', sortable: false}, {label:'Count', field:'count', sortable: false} ], store: this.memoryStore, collapseOnRefresh:true, pagingLinks: false, pagingTextBox: true, firstLastArrows: true, pageSizeOptions: [10, 15, 25], selectionMode: "single", // for Selection; only select a single row at a time cellNavigation: false // for Keyboard; allow only row-level keyboard navigation }, "grid"); My memory store: loadMemoryStore: function(items){ this.memoryStore = Observable(new Memory({ data: items, getChildren: function(parent, options){ return this.query({parent: parent.id}, options); }, mayHaveChildren: function(parent){ return (parent.count != 0) && (parent.type != 'DOC'); } })); }, This moment I am binding the subrows: success: function(data){ for(var i=0; i<data.report.length; i++){ this.memoryStore.put({id:data.report[i].id, title:data.report[i].created, type:'DOC', parent:this.designId}); } }, I was thinking, maybe every moment that I bind the subrows, I could do like a refresh on the grid, maybe works. I think that the pagination does the same thing. Thanks. edit: I forgot the question. Well, How can I correct this bug? If The refresh in dgrid works. How can I do it? Other thing that I was thinking, maybe my getChildren is wrong, but I could not identify it. thanks again.

    Read the article

  • matrix multiplication with MPI [on hold]

    - by user3695701
    I'm working on an assignment on matrix multiplication with MPI. A*B=C. the requirement is that B should be vertically partitioned. Here's what I intend to do: broadcast matrix A to all processes and scatter B into several slices with each slice containing n/p columns. The following code only works when the number of process(p) is 1. when p1(say 2), I got [cluster2:21080] *** Process received signal *** [cluster2:21080] Signal: Segmentation fault (11) [cluster2:21080] Signal code: Address not mapped (1) [cluster2:21080] Failing at address: (nil) [cluster2:21080] [ 0] /lib/libpthread.so.0(+0xf8f0) [0x7f49f38108f0] [cluster2:21080] [ 1] /lib/libc.so.6(memcpy+0xe1) [0x7f49f35024c1] [cluster2:21080] [ 2] /usr/lib/libmpi.so.0(ompi_convertor_unpack+0x121)[0x7f49f47c88e1] [cluster2:21080] [ 3] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x8a26) [0x7f49f0dcea26] [cluster2:21080] [ 4] /usr/lib/openmpi/lib/openmpi/mca_btl_tcp.so(+0x662c) [0x7f49efce462c] [cluster2:21080] [ 5] /usr/lib/libopen-pal.so.0(+0x1ede8) [0x7f49f42e0de8] [cluster2:21080] [ 6] /usr/lib/libopen-pal.so.0(opal_progress+0x99) [0x7f49f42d5369] [cluster2:21080] [ 7] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x5585) [0x7f49f0dcb585] [cluster2:21080] [ 8] /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so(+0xcc01) [0x7f49eeeb1c01] [cluster2:21080] [ 9] /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so(+0x266c) [0x7f49eeea766c] [cluster2:21080] [10] /usr/lib/openmpi/lib/openmpi/mca_coll_sync.so(+0x1388) [0x7f49ef0c0388] [cluster2:21080] [11] /usr/lib/libmpi.so.0(MPI_Bcast+0x10e) [0x7f49f47d025e] [cluster2:21080] [12] ./out(main+0x259) [0x401571] [cluster2:21080] [13] /lib/libc.so.6(__libc_start_main+0xfd) [0x7f49f3498c8d] [cluster2:21080] [14] ./out() [0x400f29] [cluster2:21080] *** End of error message *** Can someone help me? Thanks. //matrices A and B //double* A =(double *)malloc(n*n*sizeof(double)); //double* B =(double *)malloc(n*n*sizeof(double)); //code initializing A,B... //n is the size of the matrix //p is the number of processes //myrank is the rank of calling process MPI_Init (&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &myrank); MPI_Comm_size(MPI_COMM_WORLD, &p); //broadcast A to all processes MPI_Bcast (A, n*n, MPI_DOUBLE, 0, MPI_COMM_WORLD); MPI_Datatype tmp_type, col_type; // extract a slice from B MPI_Type_vector(n, num_of_col_per_slice, n, MPI_DOUBLE, &tmp_type); // position of the first (0) and each next (stride * sizeof(double) ) slice MPI_Type_create_resized(tmp_type, 0, n * sizeof(double), &col_type); MPI_Type_commit(&col_type); //scatter a slice of B to each process MPI_Scatter(B, 1, col_type, B+myrank*n/p, n * n/p, MPI_DOUBLE, 0, MPI_COMM_WORLD); //use blas function to calculate A*sliceOfB and store the resulting slice to C cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, n, n/p, n, 1.0, A, n, B+myrank*n/p, n, 0.0, C+myrank*n/p, n); //gather all those resulting slices into C MPI_Gather (C+myrank*n/p, n*n/p, MPI_DOUBLE, C, n*n/p, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    Read the article

  • <o:massAttribute> affects other components in the same <h:panelGrid>

    - by Ignacio Ayuste
    I'm using the new version of OmniFaces 1.8.1, and particullary I start to use the new tag: <o:massAttribute>. Basically, I have the following form with conditionally rendered and disabled fields: <h:form id="formABMProducto"> <h:panelGrid id="datosProducto" columns="4"> <o:massAttribute name="rendered" value="#{cc.attrs.page != 'baja'}"> <h:outputLabel for="codigo" ... /> <h:inputText id="codigo" ... /> <rich:message for="codigo" /> <h:panelGroup /> </o:massAttribute> <o:massAttribute name="rendered" value="#{cc.attrs.page eq 'baja'}"> <h:outputLabel for="codigo" .../> <rich:autocomplete id="codigoProducto" ... /> <rich:message for="codigo" /> <h:panelGroup /> </o:massAttribute> <o:massAttribute name="disabled" value="#{cc.attrs.disableComponents}"> <h:outputLabel for="nombre" ... /> <h:inputTextarea id="nombre" ... /> <rich:message for="nombre" /> <span /> <h:outputLabel for="descripcion" ... /> <h:inputTextarea id="descripcion" ... /> <rich:message for="descripcion" /> <span /> </o:massAttribute> <h:outputLabel value="#{msgs['producto.abm.panel.proveedor.tipo']}" for="CmbTipoProveedor"/> <rich:select id="CmbTipoProveedor" ... /> <rich:message for="CmbTipoProveedor" /> <a4j:commandButton ... /> </h:panelGrid> </h:form> However, when I open the page, the third <o:massAttribute> is also disabling another input fields codigo and codigoProducto. I think this isn't the expected behaviour.

    Read the article

  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it. I'm developing a solution that retrieves data from a remote SOAP-based web service and then pushes this data to an Oracle database on the network. This will be a scheduled task that executes every 15 minutes. The remote service has event queues containing the INSERT/UPDATE/DELETE operations that have been done since the last retrieval; once I retrieve the events for the last 15 minutes, it again queues events for the next retrieval. It's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements. There are around 60 tables in Oracle, some of them with 100+ columns. Moreover, each 15-minute cycle produces around 60-70 inserts, 100+ updates, and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole.

    Current test code (on every cycle):

    - Connects to the remote service to get events.
    - Creates a connection with the DB (single connection object).
    - Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done.
    - Calls the respective method based on the type of operation and table.
    - Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service, and assigns it to the statement parameters.
    - Commits the statement and returns to the get-event class to process the next event.

    This is repeated until all the retrieved events are processed, after which the program closes and then starts again on the next cycle, and everything repeats. Thanks for help!
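
    One lever that usually matters for this kind of workload is JDBC batching: grouping many parameterised statements into a single round trip and committing once per batch instead of once per statement. The sketch below is only an illustration of that pattern, not the project's actual code; the table and column names are invented:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        public class BatchWriter {

            // Inserts all rows with one PreparedStatement, flushing in chunks and
            // committing once at the end instead of after every single statement.
            public static void insertEvents(Connection conn, List<String[]> rows) throws SQLException {
                String sql = "INSERT INTO remote_events (event_id, payload) VALUES (?, ?)";
                boolean previousAutoCommit = conn.getAutoCommit();
                conn.setAutoCommit(false);
                PreparedStatement ps = conn.prepareStatement(sql);
                try {
                    int pending = 0;
                    for (String[] row : rows) {
                        ps.setString(1, row[0]);
                        ps.setString(2, row[1]);
                        ps.addBatch();
                        if (++pending % 100 == 0) {
                            ps.executeBatch();          // keep the batch size bounded
                        }
                    }
                    ps.executeBatch();                  // flush the remainder
                    conn.commit();
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                } finally {
                    ps.close();
                    conn.setAutoCommit(previousAutoCommit);
                }
            }
        }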

    Read the article
