Search Results

Search found 30071 results on 1203 pages for 'numbers table'.


  • Is SELECT INTO able to affect data from its original table during UPDATE

    - by driveby
    While asking this question (asp.net scheduling timed events), user murph posted some insightful information:

    The point about this is that it's very, very simple - you have a process for exchange that is performing a clearly defined task, and you have a high-frequency task that is not doing anything particularly complex. It's a straightforward query (select from table where sent = false and send_at < value) - probably into a temporary table, so that you can run a single update query after you've done the sends - that you can optimise the index for. You're not trying to queue up a huge pile of event triggers, just one that fires once a minute and processes things that are due.

    Is it possible to SELECT data from table X INTO table Y and have the UPDATEs that are performed on table Y pushed into table X? I guess the alternative would be that the data gets updated in table Y, and then an update command can be run on table X based on the data in table Y. What would be the advantage of selecting into another table? Thank you,
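    For reference, a minimal T-SQL sketch of the copy-then-update-back pattern being discussed (table and column names are hypothetical): SELECT INTO copies rows into a new table, so changes made to the copy never propagate back on their own; a final joined UPDATE is what pushes them back.

        -- Hypothetical queue table: messages(id, recipient, sent, send_at)
        SELECT id, recipient, sent
        INTO #due                          -- #due is a brand-new temp table holding copies
        FROM messages
        WHERE sent = 0 AND send_at < GETDATE();

        -- ... perform the sends using #due ...

        -- Nothing flows back to messages automatically; one joined UPDATE does it.
        UPDATE m
        SET m.sent = 1
        FROM messages AS m
        JOIN #due AS d ON d.id = m.id;

        DROP TABLE #due;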


  • Insert Record by Drag & Drop from ADF Tree to ADF Tree Table

    - by arul.wilson(at)oracle.com
    If you want to create a record based on the values dragged from an ADF Tree and dropped on an ADF Tree Table, then here you go.

    Use case description: the user drags a tree node from an ADF Tree and drops it on an ADF Tree Table node. A new row gets added to the Tree Table based on the source tree node; subsequently a record gets added to the database table on which the Tree Table is based. The following steps achieve this using ADF BC:

    1. Run DragDropSchema.sql to create the required tables.
    2. Create Business Components from the tables (PRODUCTS, COMPONENTS, SUB_COMPONENTS, USERS, USER_COMPONENTS) created above.
    3. Add a custom method to the App Module Impl; this method will be used to insert the record from the view layer:

        public String createUserComponents(String p_bugdbId, String p_productId,
                                           String p_componentId, String p_subComponentId) {
            Row newUserComponentsRow = this.getUserComponentsView1().createRow();
            try {
                newUserComponentsRow.setAttribute("Bugdbid", p_bugdbId);
                newUserComponentsRow.setAttribute("ProductId", new oracle.jbo.domain.Number(p_productId));
                newUserComponentsRow.setAttribute("Component1", p_componentId);
                newUserComponentsRow.setAttribute("SubComponent", p_subComponentId);
            } catch (Exception e) {
                e.printStackTrace();
                return "Failure";
            }
            return "Success";
        }

    4. Expose this method to the client interface.
    5. To display the root node we need a custom VO, which can be achieved using the query below:

        SELECT Users.ACTIVE, Users.BUGDB_ID, Users.EMAIL, Users.FIRSTNAME,
               Users.GLOBAL_ID, Users.LASTNAME, Users.MANAGER_ID, Users.MANAGER_PRIVILEGE
        FROM USERS Users
        WHERE Users.MANAGER_ID IS NULL

    6. Create a view link (VL) between the UsersView and UsersRootNodeView VOs.
    7. Drop ProductsView from the Data Controls panel onto the jspx page as an ADF Tree, and add tree level rules based on ComponentsView and SubComponentsView.
    8. Drop UsersRootNodeView as an ADF Tree Table, and add tree level rules based on UserComponentsView and UsersView.
    9. Add a DragSource to the ADF Tree and a CollectionDropTarget to the ADF Tree Table, respectively.
    10. Bind the CollectionDropTarget's DropTarget to a backing bean and implement a method with the signature DnDAction (DropEvent); this method gets invoked when the Tree Table encounters a drop action. Here the details required for creating the new record are captured from the drag source and passed to the createUserComponents method.
        public DnDAction onTreeDrop(DropEvent dropEvent) {
            String newBugdbId = "";
            String msgtxt = "";
            try {
                // Getting the target node bugdb id
                Object serverRowKey = dropEvent.getDropSite();
                if (serverRowKey != null) {
                    // Code for Tree Table as target
                    String dropcomponent = dropEvent.getDropComponent().toString();
                    dropcomponent = (String)dropcomponent.subSequence(0, dropcomponent.indexOf("["));
                    if (dropcomponent.equals("RichTreeTable")) {
                        RichTreeTable richTreeTable = (RichTreeTable)dropEvent.getDropComponent();
                        richTreeTable.setRowKey(serverRowKey);
                        int rowIndexTreeTable = richTreeTable.getRowIndex();
                        // Drop Target Logic
                        if (((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue() == null) {
                            // Get parent
                            newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getAttributeValue();
                        } else {
                            if (isNum(((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue().toString())) {
                                // Get parent's parent
                                newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getParent().getParent().getAttributeValue();
                            } else {
                                // Dropped on USER
                                newBugdbId = (String)((JUCtrlHierNodeBinding)richTreeTable.getRowData(rowIndexTreeTable)).getAttributeValue();
                            }
                        }
                    }
                }

                DataFlavor<RowKeySet> df = DataFlavor.getDataFlavor(RowKeySet.class);
                RowKeySet droppedValue = dropEvent.getTransferable().getData(df);
                Object[] keys = droppedValue.toArray();
                Key componentKey = null;
                Key subComponentKey = null;

                // Binding for the createUserComponents method defined in the AppModuleImpl
                // class to insert the record in the database.
                operationBinding = bindings.getOperationBinding("createUserComponents");

                // Get the Product, Component, SubComponent details and insert into the
                // UserComponents table. Loop through the keys in case more than one
                // component/subcomponent is selected.
                for (int i = 0; i < keys.length; i++) {
                    System.out.println("in for :" + i);
                    List list = (List)keys[i];
                    System.out.println("list " + i + " : " + list);
                    System.out.println("list size " + list.size());
                    if (list.size() == 1) {
                        // We cannot drag and drop the highest node!
                        msgtxt = "You cannot drop Products, please drop Component or SubComponent from the Tree.";
                        System.out.println(msgtxt);
                        this.showInfoMessage(msgtxt);
                    } else {
                        if (list.size() == 2) {
                            // We're doing the first branch, in this case all components.
                            componentKey = (Key)list.get(1);
                            Object[] droppedProdCompValues = componentKey.getAttributeValues();
                            operationBinding.getParamsMap().put("p_bugdbId", newBugdbId);
                            operationBinding.getParamsMap().put("p_productId", droppedProdCompValues[0]);
                            operationBinding.getParamsMap().put("p_componentId", droppedProdCompValues[1]);
                            operationBinding.getParamsMap().put("p_subComponentId", "ALL");
                            Object result = operationBinding.execute();
                        } else {
                            subComponentKey = (Key)list.get(2);
                            Object[] droppedProdCompSubCompValues = subComponentKey.getAttributeValues();
                            operationBinding.getParamsMap().put("p_bugdbId", newBugdbId);
                            operationBinding.getParamsMap().put("p_productId", droppedProdCompSubCompValues[0]);
                            operationBinding.getParamsMap().put("p_componentId", droppedProdCompSubCompValues[1]);
                            operationBinding.getParamsMap().put("p_subComponentId", droppedProdCompSubCompValues[2]);
                            Object result = operationBinding.execute();
                        }
                    }
                }

                /* this.getCil1().setDisabled(false);
                   this.getCil1().setPartialSubmit(true); */
                return DnDAction.MOVE;
            } catch (Exception ex) {
                System.out.println("drop failed with : " + ex.getMessage());
                ex.printStackTrace();
                /* this.getCil1().setDisabled(true); */
                return DnDAction.NONE;
            }
        }

    Run the jspx page and drop a Component or SubComponent from the Products Tree onto the UserComponents Tree Table.


  • Problem creating a database with PHP PDO

    - by Leandro Alonso
    Hello guys, I'm having a problem with a SQL query in my PHP application. When the user accesses it for the first time, the app executes this query to create the whole database:

        CREATE TABLE `databases` (
          `id` bigint(20) NOT NULL auto_increment,
          `driver` varchar(45) NOT NULL, `server` text NOT NULL,
          `user` text NOT NULL, `password` text NOT NULL,
          `database` varchar(200) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=2;

        -- Table structure for table `modules`
        CREATE TABLE `modules` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `title` varchar(100) NOT NULL, `type` varchar(150) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=29;

        -- Table structure for table `modules_data`
        CREATE TABLE `modules_data` (
          `id` bigint(20) NOT NULL auto_increment,
          `module_id` bigint(20) unsigned NOT NULL,
          `key` varchar(150) NOT NULL, `value` tinytext,
          PRIMARY KEY (`id`),
          KEY `fk_modules_data_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=184;

        -- Table structure for table `modules_position`
        CREATE TABLE `modules_position` (
          `user_id` bigint(20) unsigned NOT NULL,
          `tab_id` bigint(20) unsigned NOT NULL,
          `module_id` bigint(20) unsigned NOT NULL,
          `column` smallint(1) default NULL, `line` smallint(1) default NULL,
          PRIMARY KEY (`user_id`,`tab_id`,`module_id`),
          KEY `fk_modules_order_users` (`user_id`),
          KEY `fk_modules_order_tabs` (`tab_id`),
          KEY `fk_modules_order_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `tabs`
        CREATE TABLE `tabs` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `title` varchar(60) NOT NULL, `columns` smallint(1) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=12;

        -- Table structure for table `tabs_has_modules`
        CREATE TABLE `tabs_has_modules` (
          `tab_id` bigint(20) unsigned NOT NULL,
          `module_id` bigint(20) unsigned NOT NULL,
          PRIMARY KEY (`tab_id`,`module_id`),
          KEY `fk_tabs_has_modules_tabs` (`tab_id`),
          KEY `fk_tabs_has_modules_modules` (`module_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `users`
        CREATE TABLE `users` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `login` varchar(60) NOT NULL, `password` varchar(64) NOT NULL,
          `email` varchar(100) NOT NULL, `name` varchar(250) default NULL,
          `user_level` bigint(20) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_users_user_levels` (`user_level`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Table structure for table `users_has_tabs`
        CREATE TABLE `users_has_tabs` (
          `user_id` bigint(20) unsigned NOT NULL,
          `tab_id` bigint(20) unsigned NOT NULL,
          `order` smallint(2) NOT NULL, `columns_width` varchar(255) default NULL,
          PRIMARY KEY (`user_id`,`tab_id`),
          KEY `fk_users_has_tabs_users` (`user_id`),
          KEY `fk_users_has_tabs_tabs` (`tab_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

        -- Table structure for table `user_levels`
        CREATE TABLE `user_levels` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `level` smallint(2) NOT NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=3;

        -- Table structure for table `user_meta`
        CREATE TABLE `user_meta` (
          `id` bigint(20) unsigned NOT NULL auto_increment,
          `user_id` bigint(20) unsigned default NULL,
          `key` varchar(150) NOT NULL, `value` longtext NOT NULL,
          PRIMARY KEY (`id`),
          KEY `fk_user_meta_users` (`user_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=4;

        -- Constraints for table `modules_data`
        ALTER TABLE `modules_data`
          ADD CONSTRAINT `fk_modules_data_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        -- Constraints for table `modules_position`
        ALTER TABLE `modules_position`
          ADD CONSTRAINT `fk_modules_order_modules` FOREIGN KEY (`module_id`) REFERENCES `modules` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
          ADD CONSTRAINT `fk_modules_order_tabs` FOREIGN KEY (`tab_id`) REFERENCES `tabs` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION,
          ADD CONSTRAINT `fk_modules_order_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        -- Constraints for table `users`
        ALTER TABLE `users`
          ADD CONSTRAINT `fk_users_user_levels` FOREIGN KEY (`user_level`) REFERENCES `user_levels` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION;

        -- Constraints for table `user_meta`
        ALTER TABLE `user_meta`
          ADD CONSTRAINT `fk_user_meta_users` FOREIGN KEY (`user_id`) REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION;

        INSERT INTO `user_levels` VALUES(1, 10);
        INSERT INTO `user_levels` VALUES(2, 1);
        INSERT INTO `users` VALUES(1, 'admin', 'password', '[email protected]', NULL, 1);
        INSERT INTO `user_meta` VALUES (NULL, 1, 'last_tab', 1);

    In some environments I get this error:

        SQLSTATE[HY000]: General error: 1005 Can't create table 'dms.databases' (errno: 150)

    I tried everything that I could find on Google, but nothing works. The strange part is that if I run this query in phpMyAdmin, it creates the database without any error.
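    For what it's worth when debugging this class of failure: errno 150 is InnoDB's foreign-key creation error, and the server keeps a more detailed reason than the 1005 message exposes. A minimal diagnostic sketch (assuming an InnoDB server and sufficient privileges):

        -- The output contains a "LATEST FOREIGN KEY ERROR" section
        -- explaining why the constraint could not be created.
        SHOW ENGINE INNODB STATUS;

        -- A common culprit: referenced and referencing columns must match exactly
        -- in type and signedness, e.g. bigint(20) unsigned on both sides.
        SHOW CREATE TABLE `users`;
        SHOW CREATE TABLE `user_levels`;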


  • MSSQL Server using multiple ID Numbers

    - by vincer
    I have a web application that creates printable forms; these forms have a unique number on them. The problem is I have 2 forms that separate numbers need to be created for, i.e.:

    - Form1: numbered 2000000-2999999
    - Form2: numbered 3000000-3999999

    dbo.test2 is my form information table, Tsel is my auto-increment table for the 3000000-series numbers, and Tadv is my auto-increment table for the 2000000-series numbers.

    What I have done is create 2 tables with just an auto-increment row (one for the 2000000-series numbers and one for the 3000000-series numbers). I then created a trigger to add a record to the corresponding table, read back the auto-increment number, and add it to the table that stores the form information, including the just-created auto-increment number for the right series of forms. Although it does work, I'm concerned that the numbers will get messed up under load. I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates, and I need to use the numbering scheme shown above.) Thanks for any help. See code below.

    ** TRIGGER **

        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON
        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (Select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel Default values
            select @someid2 = @@IDENTITY
        end

        if @test1 = 'adv'
        begin
            insert into Tadv Default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid
        SET NOCOUNT OFF
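    As an aside on the @@IDENTITY concern raised above: @@IDENTITY returns the last identity generated on the connection in any scope, so concurrent work or nested triggers can hand back the wrong value, while SCOPE_IDENTITY() and the OUTPUT clause are the usual safer alternatives. A hedged sketch of the same read-back (not the poster's code; the identity column name id is an assumption):

        -- Inside the trigger body, after inserting into the numbering table:
        insert into Tsel default values;
        select @someid2 = SCOPE_IDENTITY();   -- identity created in this scope only

        -- Or, avoiding identity variables entirely (SQL Server 2005+):
        declare @ids table (id int);
        insert into Tsel
        output inserted.id into @ids          -- captures exactly the row just inserted
        default values;
        select @someid2 = id from @ids;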


  • How does the number of indexes built on a table impact performance?

    - by Davide Mauri
    We all know that putting too many indexes (I'm talking of non-clustered indexes only, of course) on a table may produce performance problems, due to the overhead that each index brings to all insert/update/delete operations on that table. But how much? I mean, we all agree - I think - that, generally speaking, having many indexes on a table is "bad". But how bad can it be? How much will performance degrade? And on a concurrent system, how much can this situation also hurt SELECT performance? If SQL Server takes more time to update a row in a table due to the number of indexes it also has to update, this also means that locks will be held for more time, slowing down the perceived performance of all queries involved.

    I was quite curious to measure this, also because when teaching it's by far more impressive and effective to show attendees a chart with the measured impact, so that they can really "feel" what it means!

    To do the tests, I've created a script that creates a table (with a clustered index on the primary key, which is an identity column), loads 1000 rows into the table (inserting the 1000 rows using only one insert, instead of issuing 1000 inserts of one row each, in order to minimize the transaction-handling overhead that would otherwise be incurred), and measures the time taken to do it. The process is then repeated 16 times, each time adding a new index to the table, using columns from the table in a round-robin fashion. Tests are done against different row sizes, so that it's possible to check whether performance changes depending on row size.

    The results are interesting, although expected. This is the chart showing how much time it takes to insert 1000 rows into a table that has from 0 to 16 non-clustered indexes. Each test has been run 20 times in order to have an average value, and the values have been cleaned of outliers due to unpredictable performance fluctuations from machine activity.

    The test shows that in a table with a row size of 80 bytes, 1000 rows can be inserted in 9.05 msec if no indexes are present on the table, and the value grows up to 88 (!!!) msec when you have 16 indexes on it. This means an impact on performance of 975%. That's *huge*!

    Now, what happens if we have a bigger row size? Say that we have a table with a row size of 1520 bytes. Here's the data, from 0 to 16 indexes on that table: in this case we need nearly 22 msec to insert 1000 rows into a table with no indexes, but we need more than 500 msec if the table has 16 active indexes! Now we're talking of a 2410% impact on performance!

    Now we have a tangible idea of the impact of having (too?) many indexes on a table, and also of how the size of a row impacts performance. That's why the golden rule of OLTP databases - "few indexes, but good" - is so true! (And in fact last week I saw a database with tables with 1700-byte row sizes and 23 (!!!) indexes on them!) This also means that a too-heavy denormalization is really not a good idea (we're always talking about OLTP systems, keep that in mind), since performance gets worse as row size increases.

    So, be careful out there, and keep in mind that "equilibrium" is the key word for a database professional: equilibrium between read and write performance, between normalization and denormalization, between too few and too many indexes.

    PS: Tests were done on a VMware Workstation 7 VM with 2 CPUs and 4 GB of memory. The host machine is a Dell Precision M6500 with an i7 Extreme X920 quad-core HT 2.0 GHz and 16 GB of RAM. The database is stored on an Intel X25-E SSD, Simple Recovery Model, running on SQL Server 2008 R2. If you want to run the tests on your own, you can download the test script here: TestIndexPerformance.sql
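    The original script is only available via the download link, but a minimal T-SQL sketch of the same measurement idea (table, column names, and the timing approach are my assumptions, not the author's script) looks like this:

        -- Time a 1000-row insert as non-clustered indexes accumulate.
        CREATE TABLE dbo.IndexTest (
            id  INT IDENTITY PRIMARY KEY CLUSTERED,
            c1 INT, c2 INT, c3 INT, c4 INT,
            pad CHAR(80) NOT NULL DEFAULT ''      -- vary this to change the row size
        );

        DECLARE @i INT = 0, @sql NVARCHAR(200), @t0 DATETIME2;
        WHILE @i <= 16
        BEGIN
            SET @t0 = SYSDATETIME();
            INSERT INTO dbo.IndexTest (c1, c2, c3, c4)
            SELECT TOP (1000) 1, 2, 3, 4
            FROM sys.all_objects;                 -- any row source with >= 1000 rows
            PRINT CAST(@i AS VARCHAR(2)) + ' indexes: '
                + CAST(DATEDIFF(MICROSECOND, @t0, SYSDATETIME()) / 1000.0 AS VARCHAR(20)) + ' ms';

            IF @i < 16                            -- add one more index, round-robin over columns
            BEGIN
                SET @sql = N'CREATE INDEX ix_' + CAST(@i + 1 AS NVARCHAR(2))
                         + N' ON dbo.IndexTest (c' + CAST(1 + @i % 4 AS NVARCHAR(1)) + N')';
                EXEC (@sql);
            END
            SET @i += 1;
        END

    A proper harness would also truncate the table between rounds and average over many runs, as the article describes.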


  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article, I've come up with a table schema that would separate highly volatile data from other tables subjected to heavy reads, and at the same time lower the number of tables needed in the whole database schema. However, I'm not sure if this is a good idea, since it doesn't follow the rules of normalization, and I would like to hear your advice. Here is the general idea:

    I've four types of users modeled in a Class Table Inheritance structure. In the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote tip #16 from the Nettuts+ article mentioned above:

    Example 2: You have a "last_login" field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum.

    Now it gets even trickier. I need to keep track of some user statistics like:

    - how many unique times a user profile was seen
    - how many unique times an ad from a specific type of user was clicked
    - how many unique times a post from a specific type of user was seen

    and so on... In my fully normalized database this adds up to about 8 to 10 additional tables. It's not a lot, but I would like to keep things simple if I could, so I've come up with the following "events" table:

        |------|----------|-----------|--------------|-----------|
        |  ID  |  TABLE   |   EVENT   |     DATE     |    IP     |
        |------|----------|-----------|--------------|-----------|
        |   1  | user     | login     | 201004190030 | 127.0.0.1 |
        |   1  | user     | login     | 201004190230 | 127.0.0.1 |
        |   2  | user     | created   | 201004190031 | 127.0.0.2 |
        |   2  | user     | activated | 201004190234 | 127.0.0.2 |
        |   2  | user     | approved  | 201004190930 | 217.0.0.1 |
        |   2  | user     | login     | 201004191200 | 127.0.0.2 |
        |  15  | user_ads | created   | 201004191230 | 127.0.0.1 |
        |  15  | user_ads | impressed | 201004191231 | 127.0.0.2 |
        |  15  | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        |  15  | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        |  15  | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        |  15  | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        |  15  | user_ads | clicked   | 201004191231 | 127.0.0.2 |
        |   2  | user     | blocked   | 201004200319 | 217.0.0.1 |
        |   2  | user     | deleted   | 201004200320 | 217.0.0.1 |
        |------|----------|-----------|--------------|-----------|

    Basically the ID refers to the primary key (id) field in the TABLE table; I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data.

    Due to the nature of the events table I also thought of making some optimizations, such as:

    - #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENT columns could be set up as ENUMs instead of VARCHARs to save some space.
    - #14: Store IPs as UNSIGNED INTs with INET_ATON() instead of VARCHARs.
    - Store DATEs as TIMESTAMPs instead of DATETIMEs.
    - Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM.

    Overall, each event would only consume 14 bytes, which is okay for my traffic I guess.

    Pros:

    - Ability to store more detailed data (such as logins).
    - No need to design (and code for) almost a dozen additional tables (dates and statistics).
    - Reduces a few columns per table and keeps volatile data separated.

    Cons:

    - Non-relational (still not as bad as EAV): SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC;
    - 6 bytes of overhead per event (ID, TABLE and EVENT).

    I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant... Am I missing something? What are your thoughts on this? Thanks!
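    For concreteness, a sketch of what that events table could look like in MySQL with the listed optimizations applied (the ENUM values and the engine choice are illustrative assumptions, not a tested design):

        -- Hypothetical DDL applying the ENUM / INET_ATON / TIMESTAMP ideas above.
        -- Note: `table` is a reserved word in MySQL, hence table_name here.
        CREATE TABLE events (
            id         INT UNSIGNED NOT NULL,      -- PK of the row in table_name
            table_name ENUM('user', 'user_ads') NOT NULL,
            event      ENUM('login', 'created', 'activated', 'approved',
                            'impressed', 'clicked', 'blocked', 'deleted') NOT NULL,
            date       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
            ip         INT UNSIGNED NOT NULL       -- store INET_ATON('127.0.0.2')
        ) ENGINE=ARCHIVE;

        -- Reading the IP back in dotted form:
        SELECT id, table_name, event, date, INET_NTOA(ip) FROM events;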


  • Normalizing a table

    - by Alex
    I have a legacy table which I can't change. The values in it can be modified from a legacy application (the application also can't be changed). Due to a lot of access to the table from a new application (new requirement), I'd like to create a temporary table, which would hopefully speed up the queries.

    The actual requirement is to calculate the number of business days from X to Y. For example, give me all business days from Jan 1st 2001 until Dec 24th 2004. (The table is used to mark which days are off, as different companies may have different days off - it isn't just Saturday + Sunday.)

    The temporary table would be created from a .NET program each time the user enters the screen for this query (the user may run the query multiple times with different values; the table is created once), so I'd like it to be as fast as possible. The approach below runs in under a second, but I've only tested it with a small dataset, and it still takes probably close to half a second, which isn't great for UI - even though it's just the overhead for the first query.

    The legacy table looks like this:

        CREATE TABLE [business_days](
            [country_code] [char](3),
            [state_code] [varchar](4),
            [calendar_year] [int],
            [calendar_month] [varchar](31),
            [calendar_month2] [varchar](31),
            [calendar_month3] [varchar](31),
            [calendar_month4] [varchar](31),
            [calendar_month5] [varchar](31),
            [calendar_month6] [varchar](31),
            [calendar_month7] [varchar](31),
            [calendar_month8] [varchar](31),
            [calendar_month9] [varchar](31),
            [calendar_month10] [varchar](31),
            [calendar_month11] [varchar](31),
            [calendar_month12] [varchar](31),
            -- misc.
        )

    Each month has 31 characters, and any day off (Saturday + Sunday + holiday) is marked with an X. Each half day is marked with an H. For example, if a month starts on a Thursday, then it will look like this (Thursday + Friday workdays, Saturday + Sunday marked with X):

        '  XX     XX ...'

    I'd like the new table to look like this:

        create table #Temp (country varchar(3), state varchar(4), date datetime, hours int)

    and I'd like it to only have rows for days which are off (marked with X or H in the legacy table).

    What I ended up doing so far is this: create a temporary intermediate table that looks like this:

        create table #Temp_2 (country_code varchar(3), state_code varchar(4), calendar_year int, calendar_month varchar(31), month_code int)

    To populate it, I have a union which basically unions calendar_month, calendar_month2, calendar_month3, etc. Then I have a loop which loops through all the rows in #Temp_2; after each row is processed, it is removed from #Temp_2. To process a row, there is a loop from 1 to 31, and substring(calendar_month, counter, 1) is checked for either X or H, in which case there is an insert into the #Temp table.
    [edit: added code]

        Declare @country_code char(3)
        Declare @state_code varchar(4)
        Declare @calendar_year int
        Declare @calendar_month varchar(31)
        Declare @month_code int
        Declare @calendar_date datetime
        Declare @day_code int

        WHILE EXISTS(SELECT * From #Temp_2) -- where processed = 0
        BEGIN
            Select Top 1
                @country_code = t2.country_code,
                @state_code = t2.state_code,
                @calendar_year = t2.calendar_year,
                @calendar_month = t2.calendar_month,
                @month_code = t2.month_code
            From #Temp_2 t2 -- where processed = 0

            set @day_code = 1
            while @day_code <= 31
            begin
                if substring(@calendar_month, @day_code, 1) = 'X'
                begin
                    set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    insert into #Temp (country, state, date, hours)
                    values (@country_code, @state_code, @calendar_date, 8)
                end
                if substring(@calendar_month, @day_code, 1) = 'H'
                begin
                    set @calendar_date = convert(datetime, (cast(@month_code as varchar) + '/' + cast(@day_code as varchar) + '/' + cast(@calendar_year as varchar)))
                    insert into #Temp (country, state, date, hours)
                    values (@country_code, @state_code, @calendar_date, 4)
                end
                set @day_code = @day_code + 1
            end

            delete from #Temp_2
            where @country_code = country_code
              AND @state_code = state_code
              AND @calendar_year = calendar_year
              AND @calendar_month = calendar_month
              AND @month_code = month_code

            --update #Temp_2 set processed = 1 where @country_code = country_code AND @state_code = state_code AND @calendar_year = calendar_year AND @calendar_month = calendar_month AND @month_code = month_code
        END

    I am not an expert in SQL, so I'd like to get some input on my approach, and maybe even a much better suggestion. After having the temp table, I'm planning to do this (the dates would be coming from a table):

        select cast(convert(datetime, ('01/31/2012'), 101) - convert(datetime, ('01/17/2012'), 101) as int)
             - ((select sum(hours) from #Temp
                 where date between convert(datetime, ('01/17/2012'), 101)
                                and convert(datetime, ('01/31/2012'), 101)) / 8)

    Besides the solution of normalizing the table, the other solution I implemented for now is a function which does all this logic of getting the business days by scanning the current table. It runs pretty fast, but I'm hesitant to call a function if I can instead add a simpler query to get the result. (I'm currently trying this on MSSQL, but I would need to do the same for Sybase ASE and Oracle.)
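    As a side note, the nested loops above can usually be replaced by one set-based pass using a numbers table (1 to 31) joined against the month strings. A hedged sketch against the same #Temp_2 structure (SQL Server 2005+ syntax; Sybase ASE and Oracle would need their own row generators):

        -- One insert instead of row-by-row loops, assuming the #Temp_2 layout above.
        ;WITH days AS (
            SELECT TOP (31) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS day_code
            FROM sys.all_objects
        )
        INSERT INTO #Temp (country, state, date, hours)
        SELECT t2.country_code, t2.state_code,
               CONVERT(datetime, CAST(t2.month_code AS varchar(2)) + '/'
                               + CAST(d.day_code AS varchar(2)) + '/'
                               + CAST(t2.calendar_year AS varchar(4))),
               CASE SUBSTRING(t2.calendar_month, d.day_code, 1) WHEN 'X' THEN 8 ELSE 4 END
        FROM #Temp_2 t2
        JOIN days d
          ON SUBSTRING(t2.calendar_month, d.day_code, 1) IN ('X', 'H');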


  • On asp:Table control, how do we create a thead?

    - by balexandre
    From the MSDN article on the subject, we can see that we create a TableHeaderRow that contains TableHeaderCells. But they add the table header like this:

        myTable.Rows.AddAt(0, headerRow);

    which outputs the HTML:

        <table id="Table1" ... >
          <tr>
            <th scope="column" abbr="Col 1 Head">Column 1 Header</th>
            <th scope="column" abbr="Col 2 Head">Column 2 Header</th>
            <th scope="column" abbr="Col 3 Head">Column 3 Header</th>
          </tr>
          <tr>
            <td>(0,0)</td>
            <td>(0,1)</td>
            <td>(0,2)</td>
          </tr>
          ...

    and it should have <thead> and <tbody> (so it works seamlessly with tablesorter) :)

        <table id="Table1" ... >
          <thead>
            <tr>
              <th scope="column" abbr="Col 1 Head">Column 1 Header</th>
              <th scope="column" abbr="Col 2 Head">Column 2 Header</th>
              <th scope="column" abbr="Col 3 Head">Column 3 Header</th>
            </tr>
          </thead>
          <tbody>
            <tr>
              <td>(0,0)</td>
              <td>(0,1)</td>
              <td>(0,2)</td>
            </tr>
            ...
          </tbody>

    The HTML aspx code is:

        <asp:Table ID="Table1" runat="server" />

    How can I output the correct syntax? Just as information, the GridView control has this built in; we just need to set the accessibility options and use the HeaderRow:

        gv.UseAccessibleHeader = true;
        gv.HeaderRow.TableSection = TableRowSection.TableHeader;
        gv.HeaderRow.CssClass = "myclass";

    but the question is for the Table control.


  • css display:table first column too wide

    - by Mestore
    I have a CSS table set up like this:

        <div class='table'>
          <div>
            <span>name</span>
            <span>details</span>
          </div>
        </div>

    The CSS for the table is:

        .table {
          display: table;
          width: 100%;
        }
        .table div {
          text-align: right;
          display: table-row;
          border-collapse: separate;
          border-spacing: 0px;
        }
        .table div span:first-child {
          text-align: right;
        }
        .table div span {
          vertical-align: top;
          text-align: left;
          display: table-cell;
          padding: 2px 10px;
        }

    As it stands, the two columns are split evenly across the width of the table. I'm trying to get the first column to be only as wide as the text occupying its cells requires. The table is an unknown width, as are the columns/cells.


  • Entire Table is pushed to the next page when rendering a SSRS 2005 Report (as .pdf) in SSRS 2008

    - by Pwninstein
    I have a SSRS 2005 report that I'm rendering in SSRS 2008 as a .pdf. The report contains (among other things) a table that's very simple: header row, details, no footer, no aggregation, no grouping, keep together = false, pageBreakAtStart = false, pageBreakAtEnd = false, repeatHeaderOnNewPage = true. I resized the table to be much narrower than the body of the report, just to be sure it wasn't extending beyond the bounds of the report and pushing everything down. But no matter what I try, if some of the detail rows in that table would need to be pushed to the next page, then the ENTIRE TABLE is pushed to the next page, not just the extra rows. So my question is: Is there a workaround for this problem, is this a known issue, or is it even possible to get this 2005 report to render properly in 2008? NOTE: this is related to a question that I previously asked here, and is based on this MSDN forum post started by a coworker. This question is not the same as my previous question, as I'd like to see things work properly with a 2005 report. If it's not possible, that would be good to know, as it would indicate that we need to upgrade one of our servers to SQL 2008. Thanks!


  • jQuery: add a table row after the row which calls the script

    - by marharépa
    Hello! I've got a table:

        <table id="servers" ...>
          ...
          {section name=i loop=$ownsites}
          <tr id="site_id_{$ownsites[i].id}">
            ...
            <td>{$ownsites[i].phone}</td>
            <td class="icon"><a id="{$ownsites[i].id}" onClick="return makedeleterow(this.getAttribute('id'));" ...></a></td>
          </tr>
          {/section}
          <tbody>
        </table>

    And this JavaScript:

        <script type="text/javascript">
        function makedeleterow(id) {
          $('#delete').remove();
          $('#servers').append($(document.createElement("tr")).attr({id: "delete"}));
          $('#delete').append($(document.createElement("td")).attr({colspan: "9", id: "deleter"}));
          // Hungarian: "Are you sure you want to delete this website?"
          $('#deleter').text('Biztosan törölni szeretnéd ezt a weblapod?');
          $('#deleter').append($(document.createElement("input")).attr({type: "submit", id: id, onClick: "return truedeleterow(this.getAttribute('id'))"}));
          $('#deleter').append($(document.createElement("input")).attr({type: "hidden", name: "website_del", value: id}));
        }
        </script>

    It's working fine: it creates a tr after the table's last tr, puts the info into it, and the delete function also works fine. But I'd like to make this append AFTER the tr (the one with td class="icon") which calls the script. How can I do this?


  • one table is shared between several websites

    - by sami
    I have a static table that's shared by several websites. By static, I mean that the data is read but never updated by the websites. Currently, all websites are served from the same server, but that may change. I want to minimize the need for creating/maintaining this table for each of the websites, so I thought about turning it into an XML file that's stored in a shared library that all websites have access to. The problem is I use an ORM and use foreign key constraints to ensure the integrity of the ids used from that table, so if I move that table out of the MySQL database into an XML file, will this affect the integrity of the ids coming from that table?

    My table looks like this:

        <table name="entry">
          <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
          <column name="title" type="VARCHAR" size="500" required="true" />
        </table>

    and I use it as a foreign key in other tables:

        <table name="refer">
          <column name="id" type="INTEGER" primaryKey="true" autoIncrement="true" />
          <column name="linkto" type="INTEGER"/>
          <foreign-key foreignTable="entry">
            <reference local="linkto" foreign="id" />
          </foreign-key>
        </table>

    So I'm wondering: if I remove that table from the database, is there a way to retain that referential integrity? And of course, are there any other efficient ways to do the same thing? I just don't want to have to repeat that table for several websites.


  • How do I use price data in one table for a calculation that is stored in another table?

    - by shane
    I'm still learning PHP/MySQL but have learned quite a bit thanks to the coders on Stack Overflow. I'm trying to set up a sort of room reservations system using two tables:

    Setup:

    Room price table: has prices for a type of room a client may want to rent, as well as the dates (day of week) they wish to use it. Pricing varies based on day of the week and per room. I've set up a different table for each room type, as each room type carries different pricing for each day of the week. So there is an Alpha room table, a Bravo room table, etc. Within the Alpha table are headers for the days of the week, with pricing pre-entered into the rows.

    Client info table: has the name, address, date of room use, etc. for the specific client.

    Example:

    - Alpha-room price table: Sun = $100; Mon = $200; Tue = $300 and so on.
    - Bravo-room price table: Sun = $100; Mon = $200; Tue = $300 and so on.
    - Client data table: ClientName; date-of-room-use; address; day_subtotal; grand_total.

    Question: I'm trying to find PHP code that will look at the date of room use in the client data table, look up the associated cost for that date in the specific room pricing table, record that unit cost in the day_subtotal of the client data table, and sum a grand total in the grand_total row of the client data table (assuming the room may be used by the customer for more than one day). I know there's something to do with JOIN, but I'm finding it difficult to grasp the concept, and if someone can demonstrate using this example, I think I will have a better understanding of how to work this sort of transaction. Thank you all in advance for your suggestions or alternative approaches.
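    For what it's worth, the join is much easier to demonstrate if the per-room pricing lives in one normalized table rather than one table per room. A hedged sketch with hypothetical table and column names:

        -- Hypothetical normalized schema:
        --   room_prices(room_type, weekday, price)
        --   bookings(client_name, room_type, use_date)
        SELECT b.client_name,
               b.use_date,
               p.price AS day_subtotal
        FROM bookings b
        JOIN room_prices p
          ON p.room_type = b.room_type
         AND p.weekday   = DAYOFWEEK(b.use_date);   -- MySQL: 1 = Sunday ... 7 = Saturday

        -- Grand total per client across all booked days
        SELECT b.client_name, SUM(p.price) AS grand_total
        FROM bookings b
        JOIN room_prices p
          ON p.room_type = b.room_type
         AND p.weekday   = DAYOFWEEK(b.use_date)
        GROUP BY b.client_name;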


  • Which text margin does SWT Table use when drawing text?

    - by Zordid
    I got a relatively easy question - but I cannot find anything anywhere to answer it. I use a simple SWT table widget in my application that displays only text in the cells. I got an incremental search feature and want to highlight text snippets in all cells if they match. So when typing "a", all "a"s should be highlighted. To get this, I add an SWT.EraseItem listener to interfere with the background drawing. If the current cell's text contains the search string, I find the positions and calculate relative x-coordinates within the text using event.gc.stringExtent - easy. With that I just draw rectangles "behind" the occurrences. Now, there's a flaw in this. The table does not draw the text without a margin, so my x coordinate does not really match - it is slightly off by a few pixels! But how many?? Where do I retrieve the cell's text margins that table's own drawing will use? No clue. Cannot find anything. :-( Bonus question: the table's draw method also shortens text and adds "..." if it does not fit into the cell. Hmm. My occurrence finder takes the TableItem's text and thus also tries to mark occurrences that are actually not visible because they are consumed by the "...". How do I get the shortened text and not the "real" text within the EraseItem draw handler? Thanks!


  • Building static (but complicated) lookup table using templates.

    - by MarkD
    I am currently in the process of optimizing a numerical analysis code. Within the code, there is a 200x150-element lookup table (currently a static std::vector<std::vector<double>>) that is constructed at the beginning of every run. The construction of the lookup table is actually quite complex - the values in it are computed using an iterative secant method on a complicated set of equations. Currently, for a simulation, the construction of the lookup table is 20% of the run time (run times are on the order of 25 seconds; lookup table construction takes 5 seconds). While 5 seconds might not seem like a lot, when running our MC simulations, where we run 50k+ simulations, it suddenly becomes a big chunk of time. Along with some other ideas, one thing that has been floated: can we construct this lookup table using templates at compile time? The table itself never changes. Hard-coding a large array isn't a maintainable solution (the equations that go into generating the table are constantly being tweaked), but it seems that if the table could be generated at compile time, it would give us the best of both worlds (easily maintainable, no overhead during runtime). So, I propose the following (much simplified) scenario. Let's say you wanted to generate a static array (use whatever container suits you best - 2D C array, vector of vectors, etc.) at compile time. You have a function defined:

        double f(int row, int col);

    where the return value is the entry in the table, row is the lookup table row, and col is the lookup table column. Is it possible to generate this static array at compile time using templates, and how?


  • DB Architecture : Linking to intersection or to main tables?

    - by Jean-Nicolas
    Hi, I'm creating a fantasy football system on my website, but I'm very confused about how I should link some of my tables.

    Tables:

    - The main table is Pool, which has all the info about the rules of the fantasy draft.
    - A standard table User, which contains the usual stuff.
    - An intersection table called pools_users, which contains id, pool_id, user_id - because a user could be in more than one pool, and a pool contains more than one user.

    The problem: the table Selections is the one causing trouble. It holds the selections the user chose for his pool. It is related to the Player table, but that's not relevant for this problem. Should I link this table to the intersection table pools_users, or should I link it to both main tables, Pool and User? The table contains id, pool_id, user_id, player_id, ...

    What is the best way to link my tables? When I want to retrieve my data, I normally want the information to be divided BY user: "this user has those selections, this one those selections", etc.
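    For illustration, one common shape is to point Selections at the intersection row, so a selection can only exist for a valid (pool, user) pair. A hedged sketch with hypothetical column types:

        -- Option: reference the pools_users row directly
        CREATE TABLE selections (
            id             INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            pools_users_id INT UNSIGNED NOT NULL,   -- FK to the intersection table
            player_id      INT UNSIGNED NOT NULL,
            FOREIGN KEY (pools_users_id) REFERENCES pools_users (id),
            FOREIGN KEY (player_id)      REFERENCES players (id)
        );

    The alternative (separate pool_id and user_id columns with FKs to the two main tables) also works, but it allows a selection for a user who is not actually in that pool unless you add a composite FK back onto pools_users (pool_id, user_id).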


  • Prototype: Form.serialize missing some inputs (due to table?)

    - by Chris
    I'm using JavaScript Prototype (through Ruby on Rails) to handle some Ajax calls, but in one particular case I'm missing a field from the form. I have a layout like this:

        +---------+---------+
        | Thing 1 | Thing 2 |
        +---------+---------+-----------+
        | o Opt 1 | o Opt 1 | <Confirm> |
        | o Opt 2 | o Opt 2 |           |
        +---------+---------+-----------+

    Opt 1 and 2 are radio buttons; Confirm is a button. The entire table is wrapped in a form, with code like:

        <form action="javascript:void(0)">
          <input type="hidden" name="context" value="foo" />
          <input type="hidden" name="subcontext" value="bar" />
          <table>
            <tr><td>Thing 1</td><td>Thing 2</td></tr>
            <tr><td>
              <input type="radio" name="choice" value="1.1" />Opt 1<br />
              <input type="radio" name="choice" value="1.2" />Opt 2<br />
            </td><td>
              <input type="radio" name="choice" value="2.1" />Opt 1<br />
              <input type="radio" name="choice" value="2.2" />Opt 2<br />
            </td><td>
              <input name="choice_btn" type="button" value="Confirm"
                     onclick="new Ajax.Updater('my_form', '/process_form', {asynchronous:true, evalScripts:true, parameters:Form.serialize(this.form)}); return false;" />
            </td></tr>
          </table>
        </form>

    But I can see that the POST generated by clicking the Confirm button contains the foo and bar values for the hidden fields, but not the choice of the radio buttons. Is this because I've got a table inside my form? How can I get around this?


  • 'table' undeclared (first use this in a function)

    - by user2318083
    So I'm not sure why this isn't working: I'm creating a new table and setting it to the variable 'table'. Is there something I'm doing wrong? This is the error I get when trying to run it:

        src/simpleshell.c:19:3: error: 'table' undeclared (first use in this function)
        src/simpleshell.c:19:3: note: each undeclared identifier is reported only once for each function it appears in

    My code is as follows:

        #include "parser.h"
        #include "hash_table.h"
        #include "variables.h"
        #include "shell.h"
        #include <stdio.h>

        int main(void)
        {
            char input[MAXINPUTLINE];

            table = Table_create();
            signal_c_init();

            printf("\nhlsh$ ");
            while (fgets(input, sizeof(input), stdin)) {
                stripcrlf(input);
                parse(input);
                printf("\nhlsh$ ");
            }

            Table_free(table);
            return 0;
        }

    This is the create-a-table function in the hash_table file:

        struct Table *Table_create(void)
        {
            struct Table *t;
            t = (struct Table *)calloc(1, sizeof(struct Table));
            return t;
        }

    From hash_table.c:

        #include "hash_table.h"
        #include "parser.h"
        #include "shell.h"
        #include "variables.h"
        #include <stdio.h>
        #include <sys/resource.h>
        #include <sys/wait.h>
        #include <sys/types.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <pwd.h>
        #include <fcntl.h>
        #include <limits.h>
        #include <signal.h>

        struct Table *table;

        unsigned int hash(const char *x)
        {
            int i;
            unsigned int h = 0U;

            for (i = 0; x[i] != '\0'; i++) {
                h = h * 65599 + (unsigned char)x[i];
            }
            return h % 1024;
        }


  • Gparted can't create partition table

    - by William
    Here's what the problem is. About a day or so ago I used the GParted live CD to create 3 NTFS primary partitions on my external 500 GB GoFlex, plus one extended partition with 2 logical partitions. I had planned to install Windows 8 on the first partition, then Ubuntu and Kubuntu on the other 2.

    After I finished partitioning my drive with GParted, I booted into Windows Vista to make my bootable Windows 8 USB to install it with, and I also decided to check that all my partitions were working properly. Then I found they were, and they weren't. My 50 GB first partition that I had planned to install Windows on showed up normal, and the 300 GB of space left in the extended partition did as well; the rest showed up as RAW.

    So I figured something went awry while making the partitions, and I booted up GParted once again. To my surprise, GParted showed the entire drive as unallocated, and when I refreshed the list it showed all the partitions I had made earlier, but with an exclamation mark by them all. I figured it might be a problem with the partition table, as I'd seen a similar problem in the past on a drive that was not partitioned at all, so I decided to create a new partition table and take the time out again to sit and wait. Then I got a message saying GParted could not create the partition table, followed by it showing the entire drive as formatted NTFS.

    After that I figured I'd take a break and come back in an hour, in case it was something I did. An hour later, after booting up Windows, I plugged the drive in to see if by some miracle Windows could access it. In Disk Management, when I plugged the drive in, it would freeze attempting to read the drive, as I'd seen in the past with RAW disks; yet when I unplugged it, I got a glimpse of Disk Management showing a perfectly fine NTFS file system on the drive, followed by a "you must format disk K in order to use it". So I was then assured the disk was RAW, as that is what had happened in the past, to be fixed with a new partition table through GParted and a 10-hour format in Windows.

    So I once again booted up GParted, only to get the message "error fsyncing/closing /dev/sdg: input/output error" followed by "error opening /dev/sdg: No such file or directory" after I refreshed and somehow saw the disk show up as perfectly fine NTFS; I then tried to create a new partition table to wipe out all my problems and start over again. Now GParted only shows the drive in about 1 out of 10 refreshes; the rest of the time I get the directory error. If anybody can assist me in any way, shape or form, I will be thankful.


  • Membership numbers

    - by Ron Bruce
    I currently use phpMyAdmin 3.2.4 to monitor and manage the membership numbers for my organization's members website. Not too long ago the member numbers jumped from 750 to 1,000,000 just overnight. I am not sure how to fix this. I am new at this, and I am not that familiar with how this all works. This is working with a MySQL database. Also, where do I find online manuals for phpMyAdmin and MySQL? Respectfully, Ron
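    A jump like that is usually the table's AUTO_INCREMENT counter getting bumped (a failed bulk insert, or an explicit insert with a large id, can do it). A hedged sketch for checking and resetting it, assuming the table is called members:

        -- See the counter's current value in the Auto_increment column
        SHOW TABLE STATUS LIKE 'members';

        -- Reset it to just above the highest real member number
        ALTER TABLE members AUTO_INCREMENT = 751;

    If 751 is below the current maximum id in the table, MySQL simply uses max + 1 instead, so existing rows are safe.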


  • Load-causing processes disappearing from "top"; ps -o pcpu shows bogus numbers

    - by Alec Matusis
    I administer a large number of servers, and I have this problem only with Ubuntu 10.04 LTS. I run a server under normal load (say load average 3.0 on an 8-core server). The "top" command shows the processes taking a certain % of CPU that cause this load average, say:

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
        11008 mysql    20   0 25.9g  22g 5496 S   67 76.0 643539:38 mysqld

        # ps -o pcpu,pid -p11008
        %CPU   PID
        53.1 11008

    Everything is consistent. Then all of a sudden, the process causing the load average disappears from "top", but the process continues to run normally (albeit with a slight performance decrease), and the system load average becomes somewhat higher. The output of ps -o pcpu becomes bogus:

        # ps -o pcpu,pid -p11008
        %CPU   PID
        317910278 1587

    This has happened to at least 5 different servers (different brand-new IBM System x hardware), each running different software: one httpd 2.2, one mysqld 5.1, and one Twisted Python TCP server. Each time the kernel was between 2.6.32-32-server and 2.6.32-40-server. I updated some machines to 2.6.32-41-server, and it has not happened on those yet, but the bug is rare (once every 60 days or so).

    This is from an affected machine:

        top - 10:39:06 up 73 days, 17:57, 3 users, load average: 6.62, 5.60, 5.34
        Tasks: 207 total, 2 running, 205 sleeping, 0 stopped, 0 zombie
        Cpu(s): 11.4%us, 18.0%sy, 0.0%ni, 66.3%id, 4.3%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 74341464k total, 71985004k used, 2356460k free, 236456k buffers
        Swap: 3906552k total, 328k used, 3906224k free, 24838212k cached

          PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
          805 root     20   0     0    0    0 S    3  0.0   1493:09 fct0-worker
          982 root     20   0     0    0    0 S    1  0.0 111:35.05 fioa-data-groom
          914 root     20   0     0    0    0 S    0  0.0 884:42.71 fct1-worker
         1068 root     20   0 19364 1496 1060 R    0  0.0   0:00.02 top

    Nothing causing high load shows up in top, but I have two highly loaded mysqld instances on the machine that suddenly show crazy %CPU:

        # ps -o pcpu,pid,cmd -p1587
        %CPU   PID CMD
        317713124 1587 /nail/encap/mysql-5.1.60/libexec/mysqld

    and

        # ps -o pcpu,pid,cmd -p1624
        %CPU   PID CMD
        2802 1624 /nail/encap/mysql-5.1.60/libexec/mysqld

    Here are the numbers from

        # cat /proc/1587/stat
        1587 (mysqld) S 1212 1088 1088 0 -1 4202752 14307313 0 162 0 85773299069 4611685932654088833 0 0 20 0 52 0 3549 27255418880 5483524 18446744073709551615 4194304 11111617 140733749236976 140733749235984 8858659 0 552967 4102 26345 18446744073709551615 0 0 17 5 0 0 0 0 0

    The 14th and 15th fields, according to man proc, are supposed to be:

        utime %lu  Amount of time that this process has been scheduled in user mode,
                   measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). This
                   includes guest time, guest_time (time spent running a virtual CPU,
                   see below), so that applications that are not aware of the guest
                   time field do not lose that time from their calculations.
        stime %lu  Amount of time that this process has been scheduled in kernel mode,
                   measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    On a normal server, these numbers advance every time I check /proc/PID/stat. On a buggy server, these numbers are stuck at a ridiculously high value like 4611685932654088833, and are not changing. Has anyone encountered this bug?


  • Reading Lotto Numbers

    - by neiling
    When we buy a large quantity of Lotto tickets, is there a way to read all those numbers into a spreadsheet so that they can be checked against the winning numbers through formulas/macros?

