Search Results

Search found 33124 results on 1325 pages for 'table valued functions'.

Page 29 of 1325

  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?
      Option 1: SELECT * FROM L INNER JOIN S ON L.id = S.id;
      Option 2: SELECT * FROM S INNER JOIN L ON L.id = S.id;
    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?
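
    A quick empirical check in MySQL is to compare the execution plans of the two forms; a minimal sketch reusing the tables from the question (Access has no direct equivalent of EXPLAIN):
      -- With a cost-based optimizer the two statements should normally
      -- collapse to the same plan, regardless of the written join order.
      EXPLAIN SELECT * FROM L INNER JOIN S ON L.id = S.id;
      EXPLAIN SELECT * FROM S INNER JOIN L ON L.id = S.id;
    If the plans match, the join order in the text makes no difference; if they differ, an index on the join column of the larger table is usually the first thing to check.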

    Read the article

  • MySQL Select Statement - Two Tables, Sort One Table by Count of Other Table

    - by Robert Boka
    So I have built a voting system for a custom post system I wrote, and I want to be able to sort by "most voted", "most liked", etc. I have two tables. Entry: ID, Title, Post. Vote: ID, EntryID, Result. I want to query the Vote table for each entry to see how many votes there are, and then sort the entries by how many votes each entry has. I have messed around with joins, etc. and cannot seem to figure it out. Any suggestions?
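
    One common shape for this kind of query is a LEFT JOIN with a grouped count; a sketch using the table and column names given in the question, sorted by "most voted":
      SELECT e.ID, e.Title, COUNT(v.ID) AS VoteCount
      FROM Entry e
      LEFT JOIN Vote v ON v.EntryID = e.ID
      GROUP BY e.ID, e.Title
      ORDER BY VoteCount DESC;
    Putting a filter on v.Result inside the join condition (for example AND v.Result = 1) would give a "most liked" variant without dropping entries that have no votes at all.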

    Read the article

  • The Columns in table <table> do not match an existing primary key or unique constraint

    - by Sven
    I have 2 tables. Stores has storeId (int) and year (int(4)), both part of the primary key. Fruit has fruitId (primary key) and storeId. I need to create a one-to-many relationship between Stores and Fruit (with the foreign key held in Fruit), however I always get the error "The columns in table <table> do not match an existing primary key or unique constraint". The types are both int and the columns are named the same. Any help would be appreciated in advance, many thanks.
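
    The usual cause of that error is the composite primary key: a foreign key has to reference every column of the key it points to, not just storeId. A hedged sketch with the names from the question; the year column on Fruit is an assumed addition:
      -- Stores has a composite primary key (storeId, year),
      -- so the foreign key must cover both columns.
      ALTER TABLE Fruit ADD year INT;
      ALTER TABLE Fruit
        ADD CONSTRAINT FK_Fruit_Stores
        FOREIGN KEY (storeId, year) REFERENCES Stores (storeId, year);
    The alternative is to put a separate unique constraint on Stores(storeId) and reference that column alone, if a store can only ever appear for one year.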

    Read the article

  • MySQL query: all records of one table plus count of another table

    - by Ricardo
    Hello guys! I have 2 tables: User and Picture. The Picture table has the key of the user, so each user can have multiple pictures and each picture belongs to one user. Now I am trying to make the following query: I want to select all the user info plus the total number of pictures that the user has (even if it's 0). How can I do that? It probably sounds quite simple, but I am trying and trying and can't seem to find the right query. The only thing I could select is this info for users that have at least 1 picture, meaning that the Picture table has at least one record for that key. But I also want to consider the users that don't have any. Any idea? Thanks!
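
    The zero-picture case is what a LEFT JOIN handles: an INNER JOIN drops users without pictures, a LEFT JOIN keeps them. A sketch under assumed column names (User.id as the key, Picture.user_id as the reference to it):
      SELECT u.*, COUNT(p.user_id) AS picture_count
      FROM User u
      LEFT JOIN Picture p ON p.user_id = u.id
      GROUP BY u.id;
    COUNT(p.user_id) counts only matching rows (the NULLs produced by the LEFT JOIN are ignored), so users with no pictures come back with 0 instead of disappearing.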

    Read the article

  • Problems with real-valued input deep belief networks (of RBMs)

    - by Junier
    I am trying to recreate the results reported in "Reducing the Dimensionality of Data with Neural Networks" by autoencoding the Olivetti face dataset with an adapted version of the MNIST digits Matlab code, but am having some difficulty. It seems that no matter how much tweaking I do on the number of epochs, rates, or momentum, the stacked RBMs enter the fine-tuning stage with a large amount of error and consequently fail to improve much during fine-tuning. I am also experiencing a similar problem on another real-valued dataset. For the first layer I am using an RBM with a smaller learning rate (as described in the paper) and with negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); I'm fairly confident I am following the instructions found in the supporting material, but I cannot achieve the correct errors. Is there something I am missing? See the code I'm using for the real-valued visible unit RBM below, and for the whole deep training. The rest of the code can be found here.
    rbmvislinear.m:
      epsilonw  = 0.001; % Learning rate for weights
      epsilonvb = 0.001; % Learning rate for biases of visible units
      epsilonhb = 0.001; % Learning rate for biases of hidden units
      weightcost = 0.0002;
      initialmomentum = 0.5;
      finalmomentum   = 0.9;
      [numcases numdims numbatches] = size(batchdata);
      if restart == 1,
        restart = 0;
        epoch = 1;
        % Initializing symmetric weights and biases.
        vishid = 0.1*randn(numdims, numhid);
        hidbiases = zeros(1,numhid);
        visbiases = zeros(1,numdims);
        poshidprobs = zeros(numcases,numhid);
        neghidprobs = zeros(numcases,numhid);
        posprods = zeros(numdims,numhid);
        negprods = zeros(numdims,numhid);
        vishidinc = zeros(numdims,numhid);
        hidbiasinc = zeros(1,numhid);
        visbiasinc = zeros(1,numdims);
        sigmainc = zeros(1,numhid);
        batchposhidprobs = zeros(numcases,numhid,numbatches);
      end
      for epoch = epoch:maxepoch,
        fprintf(1,'epoch %d\r',epoch);
        errsum = 0;
        for batch = 1:numbatches,
          if (mod(batch,100)==0)
            fprintf(1,' %d ',batch);
          end
          %%% START POSITIVE PHASE %%%
          data = batchdata(:,:,batch);
          poshidprobs = 1./(1 + exp(-data*vishid - repmat(hidbiases,numcases,1)));
          batchposhidprobs(:,:,batch) = poshidprobs;
          posprods = data' * poshidprobs;
          poshidact = sum(poshidprobs);
          posvisact = sum(data);
          %%% END OF POSITIVE PHASE %%%
          poshidstates = poshidprobs > rand(numcases,numhid);
          %%% START NEGATIVE PHASE %%%
          negdata = poshidstates*vishid' + repmat(visbiases,numcases,1); % + randn(numcases,numdims) if not using mean
          neghidprobs = 1./(1 + exp(-negdata*vishid - repmat(hidbiases,numcases,1)));
          negprods = negdata'*neghidprobs;
          neghidact = sum(neghidprobs);
          negvisact = sum(negdata);
          %%% END OF NEGATIVE PHASE %%%
          err = sum(sum( (data-negdata).^2 ));
          errsum = err + errsum;
          if epoch > 5,
            momentum = finalmomentum;
          else
            momentum = initialmomentum;
          end;
          %%% UPDATE WEIGHTS AND BIASES %%%
          vishidinc = momentum*vishidinc + epsilonw*( (posprods-negprods)/numcases - weightcost*vishid);
          visbiasinc = momentum*visbiasinc + (epsilonvb/numcases)*(posvisact-negvisact);
          hidbiasinc = momentum*hidbiasinc + (epsilonhb/numcases)*(poshidact-neghidact);
          vishid = vishid + vishidinc;
          visbiases = visbiases + visbiasinc;
          hidbiases = hidbiases + hidbiasinc;
          %%% END OF UPDATES %%%
        end
        fprintf(1, '\nepoch %4i error %f \n', epoch, errsum);
      end
    dofacedeepauto.m:
      clear all
      close all
      maxepoch = 200; % In the Science paper we use maxepoch=50, but it works just fine.
      numhid = 2000; numpen = 1000; numpen2 = 500; numopen = 30;
      fprintf(1,'Pretraining a deep autoencoder. \n');
      fprintf(1,'The Science paper used 50 epochs. This uses %3i \n', maxepoch);
      load fdata % makeFaceData;
      [numcases numdims numbatches] = size(batchdata);
      fprintf(1,'Pretraining Layer 1 with RBM: %d-%d \n',numdims,numhid);
      restart = 1;
      rbmvislinear;
      hidrecbiases = hidbiases;
      save mnistvh vishid hidrecbiases visbiases;
      maxepoch = 50;
      fprintf(1,'\nPretraining Layer 2 with RBM: %d-%d \n',numhid,numpen);
      batchdata = batchposhidprobs;
      numhid = numpen;
      restart = 1;
      rbm;
      hidpen = vishid; penrecbiases = hidbiases; hidgenbiases = visbiases;
      save mnisthp hidpen penrecbiases hidgenbiases;
      fprintf(1,'\nPretraining Layer 3 with RBM: %d-%d \n',numpen,numpen2);
      batchdata = batchposhidprobs;
      numhid = numpen2;
      restart = 1;
      rbm;
      hidpen2 = vishid; penrecbiases2 = hidbiases; hidgenbiases2 = visbiases;
      save mnisthp2 hidpen2 penrecbiases2 hidgenbiases2;
      fprintf(1,'\nPretraining Layer 4 with RBM: %d-%d \n',numpen2,numopen);
      batchdata = batchposhidprobs;
      numhid = numopen;
      restart = 1;
      rbmhidlinear;
      hidtop = vishid; toprecbiases = hidbiases; topgenbiases = visbiases;
      save mnistpo hidtop toprecbiases topgenbiases;
      backpropface;
    Thanks for your time.

    Read the article

  • Adjust CSS of all cells of a specific table without giving each cell a unique id?

    - by javanix
    Is there any way to modify the CSS properties of one table's cells based on the table's id, rather than on specific child ids? I would like to change one table's appearance one way (giving each cell a colored border, for instance) and another table another way, but I'd like to avoid specifying an id for each cell. To be clear, I don't need to individually access each cell in the table - I just want to set the properties of the "child" cells of various tables.

    Read the article

  • Un-table a cell range in Excel 2007

    - by Joe
    In Excel 2007, if you highlight a block of cells and then choose "Format as Table", it doesn't just apply colors and formatting - it somehow marks those cells as being a table. Now I want to get rid of the table but keep all the cells (i.e. keep the data). So I tried clearing the table style and formatting, but Excel still recognizes those cells as a table. I can tell because: (1) when I select a cell that was in the table, Excel still displays the "Table Tools / Design" tab, and (2) I cannot merge cells that were in the table - this is what's annoying me. So, how do I un-table those cells? I want to keep all the cell data and formatting, but have Excel not recognize them as a table.

    Read the article

  • Single Table Per Class Hierarchy with an abstract superclass using Hibernate Annotations

    - by Andy Hull
    I have a simple class hierarchy, similar to the following:
      @Entity
      @Table(name="animal")
      @Inheritance(strategy=InheritanceType.SINGLE_TABLE)
      @DiscriminatorColumn(name="animal_type", discriminatorType=DiscriminatorType.STRING)
      public abstract class Animal { }

      @Entity
      @DiscriminatorValue("cat")
      public class Cat extends Animal { }

      @Entity
      @DiscriminatorValue("dog")
      public class Dog extends Animal { }
    When I query "from Animal" I get this exception: "org.hibernate.InstantiationException: Cannot instantiate abstract class or interface: Animal". If I make Animal concrete and add a dummy discriminator, such as @DiscriminatorValue("animal"), my query returns my cats and dogs as instances of Animal. I remember this being trivial with HBM-based mappings, but I think I'm missing something when using annotations. Can anyone help? Thanks!

    Read the article

  • SQL RENAME TABLE command

    - by Nano HE
    Hello. I can run RENAME TABLE student TO student_new; and the command is simple and easy to follow. Is there a way to rename a lot of tables with a single, simple command? Assume all the tables belong to the same database. I don't want to have to write a lot of code like the below:
      RENAME TABLE pre_access TO pre_new_access;
      RENAME TABLE pre_activities TO pre_new_activities;
      RENAME TABLE pre_activityapplies TO pre_new_activityapplies;
      RENAME TABLE pre_adminactions TO pre_new_adminactions;
      RENAME TABLE pre_admincustom TO pre_new_admincustom;
      RENAME TABLE pre_admingroups TO pre_new_admingroups;
      RENAME TABLE pre_adminnotes TO pre_new_adminnotes;
      ... (there are still many more tables that need to be renamed)
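
    MySQL's RENAME TABLE already accepts several renames in one statement, separated by commas, so the list can at least be collapsed into a single command; a short sketch with the table names from the question:
      RENAME TABLE
        pre_access       TO pre_new_access,
        pre_activities   TO pre_new_activities,
        pre_adminactions TO pre_new_adminactions;
    For a long list, a query against information_schema.tables can be used to generate the 'old TO new' pairs, but the generated statement still has to be executed as a second step.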

    Read the article

  • Rails single table inheritance/subclass find condition in parent

    - by slythic
    Hi all, I have a table called Users (class User < ActiveRecord::Base) and a subclass/STI of it for Clients (class Client < User). Client "filtering" works as expected, in other words Client.find(:all) works to find all the clients. However, for users I need to filter the result to only find users that are NOT clients (where type is null or blank). I've tried the following in my index controller but no matter what I put for the type it returns all users regardless of type. User.find(:all, :conditions => { :type => nil }, :order => 'name') Any clue on how to get this condition to work? Thanks!

    Read the article

  • List of objects to JSON to HTML table?

    - by DaveDev
    If I have a list of objects, IEnumerable<MyType> myTypes; is it possible for me to return this to the client as JSON with return Json(myTypes); and, if so, is it possible for me to convert this (now JSON-format) list to a table when it gets to the client? Is there any jQuery plugin to do this? The thing is, there's loads of other stuff I need to send as JSON as well, so generating a table with a PartialView and embedding that into the JSON is an extra complexity that I'd like to avoid. Thanks.

    Read the article

  • How do I create a table dynamically with a dynamic datatype from a PL/SQL procedure

    - by Swapna
    CREATE OR REPLACE PROCEDURE p_create_dynamic_table
    IS
      v_qry_str    VARCHAR2 (100);
      v_data_type  VARCHAR2 (30);
    BEGIN
      SELECT data_type || '(' || data_length || ')'
        INTO v_data_type
        FROM all_tab_columns
       WHERE table_name = 'TEST1' AND column_name = 'ZIP';

      FOR sql_stmt IN (SELECT * FROM test1 WHERE zip IS NOT NULL)
      LOOP
        IF v_qry_str IS NOT NULL THEN
          v_qry_str := v_qry_str || ',' || 'zip_' || sql_stmt.zip || ' ' || v_data_type;
        ELSE
          v_qry_str := 'zip_' || sql_stmt.zip || ' ' || v_data_type;
        END IF;
      END LOOP;

      IF v_qry_str IS NOT NULL THEN
        v_qry_str := 'create table test2 ( ' || v_qry_str || ' )';
      END IF;

      EXECUTE IMMEDIATE v_qry_str;
      COMMIT;
    END p_create_dynamic_table;
    Is there any better way of doing this?
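
    One possible simplification, sketched under the assumption of Oracle 11gR2 or later where LISTAGG is available, is to build the whole column list in a single query instead of a cursor loop; test1, test2 and the zip column are taken from the question:
      DECLARE
        v_ddl VARCHAR2(4000);
      BEGIN
        SELECT 'create table test2 ( '
               || LISTAGG('zip_' || t.zip || ' ' || c.data_type || '(' || c.data_length || ')', ', ')
                    WITHIN GROUP (ORDER BY t.zip)
               || ' )'
          INTO v_ddl
          FROM test1 t
          CROSS JOIN (SELECT data_type, data_length
                        FROM all_tab_columns
                       WHERE table_name = 'TEST1' AND column_name = 'ZIP') c
         WHERE t.zip IS NOT NULL;

        EXECUTE IMMEDIATE v_ddl;
      END;
    The behaviour matches the loop version (including duplicate columns if zip values repeat); the main practical difference is the 4000-byte limit on the generated string.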

    Read the article

  • HTML Table <thead> Position Fixed

    - by Bry4n
    Variations of this question have been answered, but none of them deal with my issue. I don't know if this can be achieved or not, but I thought that if anyone knew, they would know it on here. I have a table with a thead and one row with many columns. I want the thead to remain fixed (the row is extremely long); however, when the thead is fixed, it gets cut off at the edge of the browser. Is there a way I can keep it fixed but still see the rest of the thead when scrolling horizontally across the page?

    Read the article

  • ALTER TABLE on dependent column

    - by Sharmi
    I am trying to alter the datatype of a primary key column from int to tinyint. This column is also a foreign key in other tables, so I get the following errors:
      Msg 5074, Level 16, State 1, Line 1  The object 'PK_User_tbl' is dependent on column 'appId'.
      Msg 5074, Level 16, State 1, Line 1  The object 'FK_Details_tbl_User_tbl' is dependent on column 'appId'.
      Msg 5074, Level 16, State 1, Line 1  The object 'FK_Log_tbl_User_tbl' is dependent on column 'appId'.
      Msg 4922, Level 16, State 9, Line 1  ALTER TABLE ALTER COLUMN appId failed because one or more objects access this column.
    How should I rectify this?
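
    The constraints that depend on the column have to be dropped first, the column (and the referencing columns) altered, and the constraints recreated. A hedged sketch: the table names User_tbl, Details_tbl and Log_tbl are guesses read off the constraint names in the error messages, and the key definitions must match the real schema:
      ALTER TABLE Details_tbl DROP CONSTRAINT FK_Details_tbl_User_tbl;
      ALTER TABLE Log_tbl     DROP CONSTRAINT FK_Log_tbl_User_tbl;
      ALTER TABLE User_tbl    DROP CONSTRAINT PK_User_tbl;

      ALTER TABLE User_tbl    ALTER COLUMN appId TINYINT NOT NULL;
      ALTER TABLE Details_tbl ALTER COLUMN appId TINYINT NOT NULL;  -- referencing columns must match the new type
      ALTER TABLE Log_tbl     ALTER COLUMN appId TINYINT NOT NULL;

      ALTER TABLE User_tbl    ADD CONSTRAINT PK_User_tbl PRIMARY KEY (appId);
      ALTER TABLE Details_tbl ADD CONSTRAINT FK_Details_tbl_User_tbl
        FOREIGN KEY (appId) REFERENCES User_tbl (appId);
      ALTER TABLE Log_tbl     ADD CONSTRAINT FK_Log_tbl_User_tbl
        FOREIGN KEY (appId) REFERENCES User_tbl (appId);
    If any existing appId value is larger than 255, the ALTER COLUMN will fail anyway, so that is worth checking before starting.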

    Read the article

  • How to build an index table (like a Python dict) in Python with sqlite3

    - by Registered User KC
    Suppose I have a string list that may contain duplicated items: A B C A A C D E F F. I want to build a list that assigns a unique index to each item, which looks like: 1 A, 2 B, 3 C, 4 D, 5 E, 6 F. I have created a sqlite3 database with the SQL statement below:
      CREATE TABLE aa (
        myid INTEGER PRIMARY KEY AUTOINCREMENT,
        name STRING,
        UNIQUE (myid) ON CONFLICT FAIL,
        UNIQUE (name) ON CONFLICT FAIL);
    The plan is to insert each row into the database in Python. My question is how to handle the error when a conflict happens during an insert from Python's sqlite3 module. For example, can the program print a warning message saying which item conflicted and then continue with the next insert? Thanks
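
    Catching sqlite3.IntegrityError around each insert is one option; the other is to resolve the conflict on the SQL side so no exception is raised at all. A minimal sketch using SQLite's OR IGNORE clause against the aa table from the question:
      -- A duplicate name is silently skipped instead of raising an error.
      INSERT OR IGNORE INTO aa (name) VALUES ('A');
      INSERT OR IGNORE INTO aa (name) VALUES ('B');
      INSERT OR IGNORE INTO aa (name) VALUES ('A');

      -- Look up the index that was assigned to an item.
      SELECT myid FROM aa WHERE name = 'A';
    Printing a per-item warning still needs the Python-side try/except on sqlite3.IntegrityError, since OR IGNORE is deliberately silent.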

    Read the article

  • Does a table alias have an effect on execution time?

    - by RedFux227
    So I have this query:
      SELECT A.e_cd "INFLATE", A.ccgpt "1",
             ( SELECT J.comp
                 FROM JBB J
                WHERE J.e_cd = A.e_cd
                  AND emplo_RCD = :1
                  AND J.Dte = ( SELECT MAX(J1.Dte)
                                  FROM JBB J1
                                 WHERE J.e_cd = J1.e_cd
                                   AND TO_CHAR(J1.Dte,'MM') <= A.mth_to
                                   AND TO_CHAR(J1.Dte,'YYYY') <= A.year )
                  AND J.seq = ( SELECT MAX(J2.seq)
                                  FROM PS_JOB J2
                                 WHERE J2.e_cd = J.e_cd
                                   AND J2.Dte = J.Dte ) ) "Company"
        FROM PPS A
       WHERE orcd = :2 AND rcd = :3 AND tx_cd = :4 AND year = :5
    And this one, which differs only in the alias used by the innermost subquery (J1 instead of J2):
      SELECT A.e_cd "INFLATE", A.ccgpt "1",
             ( SELECT J.comp
                 FROM JBB J
                WHERE J.e_cd = A.e_cd
                  AND emplo_RCD = :1
                  AND J.Dte = ( SELECT MAX(J1.Dte)
                                  FROM JBB J1
                                 WHERE J.e_cd = J1.e_cd
                                   AND TO_CHAR(J1.Dte,'MM') <= A.mth_to
                                   AND TO_CHAR(J1.Dte,'YYYY') <= A.year )
                  AND J.seq = ( SELECT MAX(J1.seq)
                                  FROM PS_JOB J1
                                 WHERE J1.e_cd = J.e_cd
                                   AND J1.Dte = J.Dte ) ) "Company"
        FROM PPS A
       WHERE orcd = :2 AND rcd = :3 AND tx_cd = :4 AND year = :5
    The first query runs for about 3.20 sec with 8,134 buffer gets, and the second runs for about 1.73 sec with 7,006 buffer gets. So, does the table alias somehow have an impact on the execution time / buffer gets? Thanks in advance!

    Read the article

  • Table-level diff and sync procedure for T-SQL

    - by Ville Koskinen
    I'm interested in T-SQL source code for synchronizing a table (or perhaps a subset of it) with data from another, similar table. The two tables could contain any columns, for example:
      base table          source table
      id  val             id  val
      --  ---             --  ---
      0   1               0   3
      1   2               1   2
      2   3               3   4
    or
      base table              source table
      key  val1  val2         key  val1  val2
      ---  ----  ----         ---  ----  ----
      A    1     0            A    1     1
      B    2     1            C    2     2
      C    3     3            E    4     0
    or any two tables containing similar columns with similar names. I'd like to be able to:
      - check that the two tables have matching columns: the source table has exactly the same columns as the base table and the datatypes match
      - make a diff from the base table to the source table
      - do the necessary updates, deletes and inserts to change the data in the base table so that it corresponds to the source table
      - optionally limit the diff to a subset of the base table
    preferably with a stored procedure. Has anyone written a stored proc for this, or could you point me to a source?
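
    For the update/delete/insert step specifically, T-SQL's MERGE statement (SQL Server 2008 and later) covers all three cases in one pass; a minimal sketch against the first example, assuming the tables are named base_table and source_table:
      MERGE base_table AS b
      USING source_table AS s
         ON b.id = s.id
      WHEN MATCHED AND b.val <> s.val THEN
           UPDATE SET val = s.val
      WHEN NOT MATCHED BY TARGET THEN
           INSERT (id, val) VALUES (s.id, s.val)
      WHEN NOT MATCHED BY SOURCE THEN
           DELETE;
    The column-matching check and a fully generic any-two-tables version would still need dynamic SQL driven by sys.columns, and the optional subset can usually be handled by merging into a filtered view of the base table.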

    Read the article

  • Adding STI to Existing Table...

    - by keruilin
    I want to add STI to an existing table using a custom type column. Let's call it taste_type, and the corresponding model Fruit. In the Fruit model I have:
      set_inheritance_column :taste_type
    In my migration to add STI I have:
      class AddSTI < ActiveRecord::Migration
        def self.up
          add_column :fruits, :taste_type, :string, :limit => 100, :null => false
          Fruit.reset_column_information
          Fruit.find_by_id(1).update_attributes({:taste_type => 'Sour'})
        end

        def self.down
          remove_column :fruits, :taste_type
        end
      end
    When I run the migration, I get the following error:
      Mysql::Error: Column 'taste_type' cannot be null: ...
    Any idea what's going on? I can get the migration to run if I comment out set_inheritance_column in the Fruit model and then uncomment it after running the migration, but obviously I don't want to do that.

    Read the article

  • Change color of a table cell using javascript using dropdown menu

    - by Mike Burzycki
    I'd like to use some JavaScript to change the background color of a single cell within a table. I have some code below which allows me to change the page background color. This is similar in concept to what I would like to do, but I would really like to be able to change just one cell, not the whole page. I have thought about making the rest of the cell borders and background colors white, leaving the cell I want to manipulate transparent, but I think this is probably a brute-force method that will cause me trouble down the road. Does anyone have any advice on doing this with JavaScript? The page background color changing code is here:
      <form name="bgcolorForm">Try it now:
        <select onChange="if(this.selectedIndex!=0) document.bgColor=this.options[this.selectedIndex].value">
          <option value="choose">set background color
          <option value="FFFFCC">light yellow
          <option value="CCFFFF">light blue
          <option value="CCFFCC">light green
          <option value="CCCCCC">gray
          <option value="FFFFFF">white
        </select>
      </form>
    Thanks for the help, Mike

    Read the article

  • HTML input type="submit" doubles row height in table

    - by FelixM
    I have an HTML table where some rows have a button like this:
      <td>
        <form action="..." method="GET">
          <input type="submit" value="..."/>
        </form>
      </td>
    The rows with the input are about twice the height of other rows that hold otherwise similar data. When I remove just the input, the row height goes back to normal. I see the same behavior in Firefox and IE. Is there any way I can have normal row height AND the button?

    Read the article

  • Fact table with multiple facts

    - by Jeff Meatball Yang
    I have a dimension (SiteItem) with two important facts: perUserClicks and perBrowserClicks. Within this dimension, however, I have groups of dimension members based on an attribute column (let's call the groups AboveFoldItems, LeftNavItems, OnTheFlyItems, etc.), and each group has additional facts that are specific to that group:
      AboveFoldItems: eyeTime, loadTime
      LeftNavItems: mouseOverTime
      OnTheFlyItems: no extra facts yet, but may have some in the future
    Is the following fact table schema ok?
      DateKey, SessionKey, SiteItemKey, perUserClicks, perBrowserClicks, eyeTime, loadTime, mouseOverTime
    It seems a little wasteful, since only some columns pertain to some dimension keys (the irrelevant facts are left NULL). But this seems like it would be a common problem, so there should be a common solution for it, right?

    Read the article

  • Using javascript on an HTML table?

    - by Michael McKay
    Hi, a few questions - just wondering if anyone can help. I have a table with one long row (1000 pixels) and a single column. How do I go about creating a method whereby, when the mouse cursor is on the leftmost side of the cell, a variable (let's say X) is set to 0, and the further right the mouse cursor moves within the cell, the more the value of X increases? I know that sounds like a strange question, but I'm working on a project where this type of functionality is desired. Is there a JavaScript method to create this feature? Thanks for any help.

    Read the article

  • Loading Fact Table + Lookup / UnionAll for SK lookups.

    - by Nev_Rahd
    I have to populate a fact table with 12 lookups to dimension tables to get SKs. Six of them go to different dimension tables, and the other six are lookups against the same type II dimension table (DimObject), all on the same natural key. For example: PrimeObjectID = lookup to DimObject.ObjectID = get ObjectSK, and I have other columns which do the same:
      OtherObjectID1 = lookup to DimObject.ObjectID = get ObjectSK
      OtherObjectID2 = lookup to DimObject.ObjectID = get ObjectSK
      OtherObjectID3 = lookup to DimObject.ObjectID = get ObjectSK
      OtherObjectID4 = lookup to DimObject.ObjectID = get ObjectSK
      OtherObjectID5 = lookup to DimObject.ObjectID = get ObjectSK
    For multiple lookups like this, how should this go in my SSIS package? For now I am using a Lookup / Union All pair for each lookup. Is there a better way to do this?
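
    One alternative worth sketching is to resolve the repeated surrogate-key lookups in the source query itself, joining DimObject once per business-key column; the staging table name and the IsCurrent flag marking the current type II row are assumptions here:
      SELECT  s.*,
              d0.ObjectSK AS PrimeObjectSK,
              d1.ObjectSK AS OtherObject1SK,
              d2.ObjectSK AS OtherObject2SK      -- repeat for the remaining OtherObjectIDs
      FROM    StagingFact s
      LEFT JOIN DimObject d0 ON d0.ObjectID = s.PrimeObjectID  AND d0.IsCurrent = 1
      LEFT JOIN DimObject d1 ON d1.ObjectID = s.OtherObjectID1 AND d1.IsCurrent = 1
      LEFT JOIN DimObject d2 ON d2.ObjectID = s.OtherObjectID2 AND d2.IsCurrent = 1;
    Inside the package itself, one fully cached Lookup per column is the other common pattern; the Union All is normally only needed when no-match rows are redirected and have to be brought back into the main path.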

    Read the article

  • MySQL Inserting into locked aliased table

    - by Whitey
    I am trying to insert data into an InnoDB MySQL table which is locked using an alias, and I cannot for the life of me get it to work! The following works:
      LOCK TABLES Problems p1 WRITE, Problems p2 WRITE, Server READ;
      SELECT * FROM Problems p1;
      UNLOCK TABLES;
    But try to do an insert and it doesn't work (it claims there is a syntax error around the 'p1' in my INSERT):
      LOCK TABLES Problems p1 WRITE, Problems p2 WRITE, Server READ;
      INSERT INTO Problems p1 (SomeCol) VALUES(43534);
      UNLOCK TABLES;
    Help please!
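
    The likely sticking point is that MySQL's INSERT syntax does not accept an alias after the target table name, so the INSERT has to name the table directly, while LOCK TABLES requires a lock under whichever name a statement later uses. A sketch of the combination that should satisfy both rules (worth verifying on your MySQL version):
      -- Lock the real name for the INSERT plus each alias used elsewhere.
      LOCK TABLES Problems WRITE, Problems p1 WRITE, Problems p2 WRITE, Server READ;
      INSERT INTO Problems (SomeCol) VALUES (43534);
      SELECT * FROM Problems p1;
      UNLOCK TABLES;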

    Read the article
