Search Results

Search found 10719 results on 429 pages for 'temp tables'.


  • Avoiding Agnostic Jagged Array Flattening in Powershell

    - by matejhowell
    Hello, I'm running into an interesting problem in Powershell, and haven't been able to find a solution to it. When I google (and find things like this post), nothing quite as involved as what I'm trying to do comes up, so I thought I'd post the question here. The problem has to do with multidimensional arrays with an outer array length of one. It appears Powershell is very adamant about flattening arrays like @( @('A') ) becomes @( 'A' ). Here is the first snippet (prompt is , btw): > $a = @( @( 'Test' ) ) > $a.gettype().isarray True > $a[0].gettype().isarray False So, I'd like to have $a[0].gettype().isarray be true, so that I can index the value as $a[0][0] (the real world scenario is processing dynamic arrays inside of a loop, and I'd like to get the values as $a[$i][$j], but if the inner item is not recognized as an array but as a string (in my case), you start indexing into the characters of the string, as in $a[0][0] -eq 'T'). I have a couple of long code examples, so I have posted them at the end. And, for reference, this is on Windows 7 Ultimate with PSv2 and PSCX installed. Consider code example 1: I build a simple array manually using the += operator. Intermediate array $w is flattened, and consequently is not added to the final array correctly. I have found solutions online for similar problems, which basically involve putting a comma before the inner array to force the outer array to not flatten, which does work, but again, I'm looking for a solution that can build arrays inside a loop (a jagged array of arrays, processing a CSS file), so if I add the leading comma to the single element array (implemented as intermediate array $y), I'd like to do the same for other arrays (like $z), but that adversely affects how $z is added to the final array. Now consider code example 2: This is closer to the actual problem I am having. When a multidimensional array with one element is returned from a function, it is flattened. It is correct before it leaves the function. And again, these are examples, I'm really trying to process a file without having to know if the function is going to come back with @( @( 'color', 'black') ) or with @( @( 'color', 'black'), @( 'background-color', 'white') ) Has anybody encountered this, and has anybody resolved this? I know I can instantiate framework objects, and I'm assuming everything will be fine if I create an object[], or a list<, or something else similar, but I've been dealing with this for a little bit and something sure seems like there has to be a right way to do this (without having to instantiate true framework objects). Code Example 1 function Display($x, [int]$indent, [string]$title) { if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline } if(!$x.GetType().IsArray) { write-host "'$x'" -foregroundcolor cyan } else { write-host '' $s = new-object string(' ', $indent) for($i = 0; $i -lt $x.length; $i++) { write-host "$s[$i]: " -nonewline -foregroundcolor cyan Display $x[$i] $($indent+1) } } if($title -ne '') { write-host '' } } ### Start Program $final = @( @( 'a', 'b' ), @('c')) Display $final 0 'Initial Value' ### How do we do this part ??? ########### ## $w = @( @('d', 'e') ) ## $x = @( @('f', 'g'), @('h') ) ## # But now $w is flat, $w.length = 2 ## ## ## # Even if we put a leading comma (,) ## # in front of the array, $y will work ## # but $w will not. 
This can be a ## # problem inside a loop where you don't ## # know the length of the array, and you ## # need to put a comma in front of ## # single- and multidimensional arrays. ## $y = @( ,@('D', 'E') ) ## $z = @( ,@('F', 'G'), @('H') ) ## ## ## ########################################## $final += $w $final += $x $final += $y $final += $z Display $final 0 'Final Value' ### Desired final value: @( @('a', 'b'), @('c'), @('d', 'e'), @('f', 'g'), @('h'), @('D', 'E'), @('F', 'G'), @('H') ) ### As in the below: # # Initial Value: # [0]: # [0]: 'a' # [1]: 'b' # [1]: # [0]: 'c' # # Final Value: # [0]: # [0]: 'a' # [1]: 'b' # [1]: # [0]: 'c' # [2]: # [0]: 'd' # [1]: 'e' # [3]: # [0]: 'f' # [1]: 'g' # [4]: # [0]: 'h' # [5]: # [0]: 'D' # [1]: 'E' # [6]: # [0]: 'F' # [1]: 'G' # [7]: # [0]: 'H' Code Example 2 function Display($x, [int]$indent, [string]$title) { if($title -ne '') { write-host "$title`: " -foregroundcolor cyan -nonewline } if(!$x.GetType().IsArray) { write-host "'$x'" -foregroundcolor cyan } else { write-host '' $s = new-object string(' ', $indent) for($i = 0; $i -lt $x.length; $i++) { write-host "$s[$i]: " -nonewline -foregroundcolor cyan Display $x[$i] $($indent+1) } } if($title -ne '') { write-host '' } } function funA() { $ret = @() $temp = @(0) $temp[0] = @('p', 'q') $ret += $temp Display $ret 0 'Inside Function A' return $ret } function funB() { $ret = @( ,@('r', 's') ) Display $ret 0 'Inside Function B' return $ret } ### Start Program $z = funA Display $z 0 'Return from Function A' $z = funB Display $z 0 'Return from Function B' ### Desired final value: @( @('p', 'q') ) and same for r,s ### As in the below: # # Inside Function A: # [0]: # [0]: 'p' # [1]: 'q' # # Return from Function A: # [0]: # [0]: 'p' # [1]: 'q' Thanks, Matt
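
    Not from the original post, but for reference: the two workarounds most often suggested for this are the unary comma operator (PowerShell v2-compatible) and, on PowerShell 3.0+, Write-Output -NoEnumerate. A minimal sketch of the comma approach, with made-up function and data names:

    ```powershell
    # Hedged sketch of the unary-comma workarounds; names and data are illustrative.
    function Get-Rules {
        $ret = @()
        # Wrapping the inner array in a unary comma keeps += from unrolling it,
        # so $ret stays an array-of-arrays even while it holds a single element.
        $ret += ,@('color', 'black')
        $ret += ,@('background-color', 'white')
        # The unary comma on return wraps $ret one level deeper; the pipeline
        # unrolls exactly one level on the way out, so the caller receives the
        # jagged array intact even when it contains only one inner array.
        return ,$ret
    }

    $rules = Get-Rules
    $rules[0][0]   # 'color' rather than 'c'
    ```

    The same return ,$ret pattern keeps a one-element result such as @( @('color', 'black') ) from collapsing into a flat array when it leaves the function.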

    Read the article

  • 2 large databases - worth merging into 1?

    - by Ardman
    I have 2 large databases that were sharded before. I have now removed the sharding and created a new database with all of the data except for the tables that were originally sharded. Is it worth importing this data into the new database, or keeping them as separate entities that I can just scan through? We are talking around 60 million records in each sharded table, of which there are 2 tables. Also, while I have an empty table, should I be adding indexes which weren't thought of when the database was originally constructed, and which the tables are now too large to have added easily?
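
    As a hedged sketch (table and column names invented, not from the question): a common pattern is to create the clustered index on the empty target table before the bulk import, and defer the nonclustered indexes until after the load.

    ```sql
    -- Hedged sketch: clustered index defined up front on the empty table,
    -- secondary indexes added after the ~60 million-row import.
    CREATE TABLE dbo.MergedRecords (
        RecordId   BIGINT       NOT NULL,
        ShardId    TINYINT      NOT NULL,
        CreatedAt  DATETIME     NOT NULL,
        Payload    VARCHAR(200) NULL,
        CONSTRAINT PK_MergedRecords PRIMARY KEY CLUSTERED (RecordId, ShardId)
    );

    -- Bulk-copy the rows from each former shard here (INSERT ... SELECT, SSIS, or bcp),
    -- then build the secondary indexes in one pass:
    CREATE NONCLUSTERED INDEX IX_MergedRecords_CreatedAt
        ON dbo.MergedRecords (CreatedAt);
    ```

    Building the nonclustered indexes after the load is usually faster overall than maintaining them row by row during it, though either order works.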

    Read the article

  • Scrolling with CSS

    - by Jordan Trulen
    I have 4 tables that need to scroll; they are set up as follows: Table1 (static), Table2 (horizontal scrolling), Table3 (vertical scrolling), Table4 (horizontal and vertical scrolling), laid out in a grid as Table1 Table2 / Table3 Table4. The tricky part is that Table 3 and Table 4 need to stay in sync, as they are a single listing of data broken out into two tables. Table 2 and Table 4 are in the same situation. Any ideas? No JavaScript please; we have a script that works, but it is far too slow to be usable. Thanks.

    Read the article

  • How to Design a SaaS Database

    - by Josh Curren
    I have a web app that I built for a trucking company that I would like to offer as SaaS. What is the best way to design the database? Should I create a new database for each company? Or should I use one database with tables that have a prefix of the company name? Or should I use one database with one of each table and just add a company id field to the tables? Or is there some other way to do it?
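
    For illustration only (table and column names are assumptions, not from the question), the "shared database, shared schema" option mentioned above looks like this: a company id on every tenant-owned table, filtered in every query.

    ```sql
    -- Hedged sketch of the shared-schema multi-tenant option.
    CREATE TABLE company (
        id   INT PRIMARY KEY,
        name VARCHAR(200) NOT NULL
    );

    CREATE TABLE shipment (
        id          INT PRIMARY KEY,
        company_id  INT NOT NULL REFERENCES company(id),
        origin      VARCHAR(200),
        destination VARCHAR(200)
    );

    -- Every query then scopes to the current tenant:
    SELECT * FROM shipment WHERE company_id = 42;
    ```

    The trade-off against database-per-company is the usual one: one schema to maintain and back up versus stronger isolation and easier per-customer restore.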

    Read the article

  • SQL Server CLR Integration to achieve Encryption/Decryption

    - by Aakash
    I have a requirement to store data in encrypted form in database tables. I want to do it at the database level, but these are the problems I am facing: (a) the data type of the field should be Varbinary; (b) encryption is not supported by the Workgroup edition; (c) is it possible to encrypt numeric fields? I want to access the encrypted data in tables to use in views and stored procedures for some processing, but due to the above problems I am not able to. Here is my environment: Development platform - ASP.Net, .Net Framework 3.5, Visual Studio 2008; Server operating system - Windows Server 2008; Database - SQL Server 2008 Workgroup edition. I was also thinking of adopting a different approach to resolve this issue (yet to test its feasibility). I was wondering if I could create a CLR function (which could take parameters and encrypt and decrypt data using the cryptography types provided in the .Net Framework), use the CLR integration feature of SQL Server, and call that function from stored procedures and views. I am not sure if I am thinking in the right direction? Any advice on this as well, please.
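
    A minimal sketch of that SQLCLR idea, not a drop-in implementation: key and IV management still need real design, and the assembly may need a permission set above SAFE for the crypto classes. Function and class names are invented.

    ```csharp
    // Hedged sketch: SQLCLR scalar functions wrapping System.Security.Cryptography.Aes.
    using System.Data.SqlTypes;
    using System.Security.Cryptography;
    using Microsoft.SqlServer.Server;

    public static class CryptoFunctions
    {
        [SqlFunction(IsDeterministic = false, DataAccess = DataAccessKind.None)]
        public static SqlBytes EncryptValue(SqlString plainText, SqlBytes key, SqlBytes iv)
        {
            if (plainText.IsNull) return SqlBytes.Null;
            using (Aes aes = Aes.Create())
            using (ICryptoTransform enc = aes.CreateEncryptor(key.Value, iv.Value))
            {
                byte[] input = System.Text.Encoding.UTF8.GetBytes(plainText.Value);
                return new SqlBytes(enc.TransformFinalBlock(input, 0, input.Length));
            }
        }

        [SqlFunction(IsDeterministic = false, DataAccess = DataAccessKind.None)]
        public static SqlString DecryptValue(SqlBytes cipherText, SqlBytes key, SqlBytes iv)
        {
            if (cipherText.IsNull) return SqlString.Null;
            using (Aes aes = Aes.Create())
            using (ICryptoTransform dec = aes.CreateDecryptor(key.Value, iv.Value))
            {
                byte[] output = dec.TransformFinalBlock(cipherText.Value, 0, cipherText.Value.Length);
                return new SqlString(System.Text.Encoding.UTF8.GetString(output));
            }
        }
    }
    ```

    This also addresses points (a) and (c) above in the obvious way: the ciphertext naturally lives in VARBINARY columns, and numeric values can be converted to strings before encryption and cast back after decryption in the calling procedure or view.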

    Read the article

  • MSSQL: Efficiently dropping a group of rows from a table with millions and millions of rows

    - by Net Citizen
    I recently asked this question: http://stackoverflow.com/questions/2519183/ms-sql-share-identity-seed-amongst-tables (many people wondered why). I have the following layout of a table: Table: Stars starId bigint categoryId bigint starname varchar(200) But my problem is that I have millions and millions of rows. So when I want to delete stars from the table Stars it is too intense on MS SQL. I cannot use built-in partitioning for 2005+ because I do not have an Enterprise license. When I do delete though, I always delete a whole categoryId at a time. I thought of doing a design like this: Table: Star_1 starId bigint CategoryId bigint constraint rock=1 starname varchar(200) Table: Star_2 starId bigint CategoryId bigint constraint rock=2 starname varchar(200) In this way I can delete a whole category, and hence millions of rows, in O(1) by doing a simple drop table. My question is, is it a problem to have thousands of tables in your MS SQL? The drop in O(1) is extremely desirable to me. Maybe there's a completely different solution I'm not thinking of?
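
    One hedged alternative to table-per-category, sketched below with placeholder values: delete the category in small batches so each transaction (and the log it generates) stays manageable, instead of one huge delete or thousands of tables.

    ```sql
    -- Hedged sketch: batched delete of one category. Batch size and the
    -- @categoryId value are placeholders; column names follow the question.
    DECLARE @categoryId BIGINT;
    DECLARE @rows INT;
    SET @categoryId = 123;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        DELETE TOP (50000)
        FROM dbo.Stars
        WHERE categoryId = @categoryId;

        SET @rows = @@ROWCOUNT;   -- stop once a batch deletes nothing
    END
    ```

    It is not O(1) like DROP TABLE, but with a nonclustered index on categoryId each batch is a seek, and the work can run in the background without blocking the table for the whole duration.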

    Read the article

  • Join query in doctrine symfony

    - by THOmas
    I have two tables, user_details and blog_question. The schema is: UserDetails: connection: doctrine tableName: user_details columns: id: type: integer(8) fixed: false name: type: string(255) fixed: false BlogQuestion: connection: doctrine tableName: blog_question columns: question_id: type: integer(8) fixed: false unsigned: false primary: true autoincrement: true blog_id: type: integer(8) fixed: false user_id: type: integer(8) fixed: false question_title: type: string(255) I am using one join query to retrieve all the questions and user details from these two tables. My join query is: $q = Doctrine_Query::create() ->select('*') ->from('BlogQuestion u') ->leftJoin('u.UserDetails p'); $q->execute(); But it is showing this error: Unknown relation alias UserDetails. Please, can anybody help me? Thanks in advance
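
    Not from the question, but the "Unknown relation alias" error usually means the relation was never declared in the schema, so Doctrine cannot resolve u.UserDetails. A hedged sketch of what the declaration might look like in a Doctrine 1.x schema.yml (verify the exact option names against the Doctrine 1 documentation):

    ```yaml
    # Hedged sketch: relation declared on the BlogQuestion side.
    BlogQuestion:
      connection: doctrine
      tableName: blog_question
      columns:
        # ... existing columns ...
      relations:
        UserDetails:
          class: UserDetails
          local: user_id
          foreign: id
          foreignAlias: BlogQuestions
    ```

    Once the relation alias exists, the leftJoin('u.UserDetails p') in the query above should resolve.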

    Read the article

  • Join column with different collation issue

    - by George2
    Hello everyone, I am using SQL Server 2005. I have two tables, and they use different collations. It is not allowed to concatenate columns from tables with different collations; for example, the following SQL is not allowed: select table1column1 + table2column2 from ... My question is, why is concatenation of two columns with different collations not allowed, from a database engine design perspective? I do not know why collation would impact the result; the result is just concatenated strings -- it should be simple enough and not dependent on collation... thanks in advance, George
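
    For reference (table and column names below are invented): every string expression carries a collation label, and the engine refuses to guess which label the concatenated result should get, so it raises a collation-conflict error instead. The usual workaround is an explicit COLLATE clause on one or both operands:

    ```sql
    -- Hedged sketch: force both sides to one collation (here the database default)
    -- so the result's collation is unambiguous.
    SELECT t1.col1 COLLATE DATABASE_DEFAULT
         + t2.col2 COLLATE DATABASE_DEFAULT AS combined
    FROM dbo.Table1 AS t1
    JOIN dbo.Table2 AS t2 ON t1.id = t2.id;
    ```

    The same conflict (and the same fix) applies to comparisons and joins on the differently collated columns, not just concatenation.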

    Read the article

  • ORA-06502: PL/SQL: numeric or value error: character string buffer too small with Oracle aggregate functions

    - by Tunde
    Good day gurus, I have a script that populates tables on a regular basis that crashed and gave the above error. The strange thing is that it has been running for close to 3 months on the production system with no problems and suddenly crashed last week. There has not been any changes on the tables as far as I know. Has anyone encountered something like this before? I believe it has something to do with the aggregate functions I'm implementing in it; but it worked initially. please; kindly find attached the part of the script I've developed into a procedure that I reckon gives the error. CREATE OR REPLACE PROCEDURE V1 IS --DECLARE v_a VARCHAR2(4000); v_b VARCHAR2(4000); v_c VARCHAR2(4000); v_d VARCHAR2(4000); v_e VARCHAR2(4000); v_f VARCHAR2(4000); v_g VARCHAR2(4000); v_h VARCHAR2(4000); v_i VARCHAR2(4000); v_j VARCHAR2(4000); v_k VARCHAR2(4000); v_l VARCHAR2(4000); v_m VARCHAR2(4000); v_n NUMBER(10); v_o VARCHAR2(4000); -- -- Procedure that populates DEMO table BEGIN -- Delete all from the DEMO table DELETE FROM DEMO; -- Populate fields in DEMO from DEMOV1 INSERT INTO DEMO(ID, D_ID, CTR_ID, C_ID, DT_NAM, TP, BYR, ENY, ONG, SUMM, DTW, REV, LD, MD, STAT, CRD) SELECT ID, D_ID, CTR_ID, C_ID, DT_NAM, TP, TO_NUMBER(TO_CHAR(BYR,'YYYY')), TO_NUMBER(TO_CHAR(NVL(ENY,SYSDATE),'YYYY')), CASE WHEN ENY IS NULL THEN 'Y' ELSE 'N' END, SUMMARY, DTW, REV, LD, MD, '1', SYSDATE FROM DEMOV1; -- LOOP THROUGH DEMO TABLE FOR j IN (SELECT ID, CTR_ID, C_ID FROM DEMO) LOOP Select semic_concat(TXTDESC) INTO v_a From GEOT WHERE ID = j.ID; SELECT COUNT(*) INTO v_n FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID AND PROAC IS NULL; IF (v_n > 0) THEN Select semic_concat(PRO) INTO v_b FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID; ELSE Select semic_concat(PRO || '(' || PROAC || ')' ) INTO v_b FROM MERP M, PROJ P WHERE M.MID = P.COD AND ID = j.ID; END IF; Select semic_concat(VOCNAME('P02',COD)) INTO v_c From PAR WHERE ID = j.ID; Select semic_concat(VOCNAME('L05',COD)) INTO v_d From INST WHERE ID = j.ID; Select semic_concat(NVL(AUTHOR,'Anon') ||' ('||to_char(PUB,'YYYY')||') '||TITLE||', '||EDT) INTO v_e From REFE WHERE ID = j.ID; Select semic_concat(NAM) INTO v_f FROM EDM E, EDO EO WHERE E.EDMID = EO.EDOID AND ID = j.ID; Select semic_concat(VOCNAME('L08', COD)) INTO v_g FROM AVA WHERE ID = j.ID; SELECT or_concat(NAM) INTO v_o FROM CON WHERE ID = j.ID AND NAM = 'Unknown'; IF (v_o = 'Unknown') THEN Select or_concat(JOBTITLE || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID; ELSE Select or_concat(NAM || ' (' || EMAIL || ')') INTO v_h FROM CON WHERE ID = j.ID; END IF; Select commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID; IF (v_i = ',') THEN v_i := null; ELSE Select commaencap_concat(COD) INTO v_i FROM PAR WHERE ID = j.ID; END IF; Select commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID; IF (v_j = ',') THEN v_j := null; ELSE Select commaencap_concat(COD) INTO v_j FROM INST WHERE ID = j.ID; END IF; Select commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID; IF (v_k = ',') THEN v_k := null; ELSE Select commaencap_concat(COD) INTO v_k FROM SAR WHERE ID = j.ID; END IF; Select commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID; IF (v_l = ',') THEN v_l := null; ELSE Select commaencap_concat(CONID) INTO v_l FROM CON WHERE ID = j.ID; END IF; Select commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID; IF (v_m = ',') THEN v_m := null; ELSE Select commaencap_concat(PROID) INTO v_m FROM PRO WHERE ID = j.ID; END IF; -- UPDATE DEMO TABLE UPDATE DEMO SET GEOC = v_a, PRO = v_b, 
PAR = v_c, INS = v_d, REFER = v_e, ORGR = v_f, AVAY = v_g, CON = v_h, DTH = v_i, INST = v_j, SA = v_k, CC = v_l, EDPR = v_m, CTR = (SELECT NAM FROM EDM WHERE EDMID = j.CTR_ID), COLL = (SELECT NAM FROM EDM WHERE EDMID = j.C_ID) WHERE ID = j.ID; END LOOP; END V1; / The aggregate functions, commaencap_concat (encapsulates with a comma), or_concat (concats with an or) and semic_concat(concats with a semi-colon). the remaining tables used are all linked to the main table DEMO. I have checked the column sizes and there seems to be no problem. I tried executing the SELECT statements alone and they give the same error without populating the tables. Any clues? Many thanks for your anticipated support.
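
    A hedged guess at the failure mode, since nothing in the procedure changed: ORA-06502 here typically means one of the custom aggregates (semic_concat, or_concat, commaencap_concat) finally produced a string longer than the VARCHAR2(4000) variable it is assigned to, as the underlying data grew. A tiny standalone illustration of that error, plus the usual fixes:

    ```sql
    -- Hedged illustration: assigning more characters than the variable holds
    -- raises ORA-06502. In the procedure above, the fixes would be either
    -- SUBSTR(semic_concat(...), 1, 4000) around each aggregate call, or
    -- declaring the v_* variables (and the aggregate's internal buffer) as CLOB.
    DECLARE
      v_buf VARCHAR2(10);           -- deliberately too small
    BEGIN
      v_buf := RPAD('x', 11, 'x');  -- ORA-06502: character string buffer too small
    END;
    /
    ```

    Running the failing SELECT ... INTO statements one at a time against the row with the longest concatenated result should confirm which aggregate is overflowing.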

    Read the article

  • Provide "Paste Link" Functionality in C# Winforms App

    - by Tim
    I would like to add Copy-Paste Link functionality to an application. The application replaces a complex Excel workbook. I would like to be able to copy tables, text, and charts from the application and use Paste Link in MS Word. For the uninitiated: With Excel, when you use Paste Link for the tables, text, charts, etc. the items update in Word when you change them in Excel. Does anyone know for sure if this is/is not possible (is it some proprietary feature of MS Word-Excel)? If not, can anyone point me to some resources that will help (either an app that does this or a tutorial/write-up). Thanks!

    Read the article

  • Database design: circular references

    - by SlappyTheFish
    I have three database tables: users emails invitations Emails are linked to users by a user_id field. Invitations are also linked to users by a user_id field Emails can be created without an invitation, but every invitation must have an email. I would like to link the emails and invitations tables so it is possible to find the email for a particular invitation. However this creates a circular reference, both an invitation and an email record hold the id for the same user. Is this bad design and if so, how could I improve it? My feeling is that with use of foreign keys and good business logic, it is fine.
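
    One hedged way to remove the redundancy, sketched with assumed column names: have the invitation reference the email row directly and reach the user through it, so the user is stored only once.

    ```sql
    -- Hedged sketch: invitations point at emails; the owning user is derived
    -- via emails.user_id instead of being duplicated on the invitation.
    CREATE TABLE users (
        id INT PRIMARY KEY
    );

    CREATE TABLE emails (
        id      INT PRIMARY KEY,
        user_id INT NOT NULL REFERENCES users(id),
        address VARCHAR(255) NOT NULL
    );

    CREATE TABLE invitations (
        id       INT PRIMARY KEY,
        email_id INT NOT NULL REFERENCES emails(id)
    );

    -- Finding the email (and user) for a particular invitation:
    SELECT e.address, e.user_id
    FROM invitations i
    JOIN emails e ON e.id = i.email_id
    WHERE i.id = 1;
    ```

    Keeping user_id on both tables also works, as the question suggests, but then the business logic has to guarantee the two user ids never disagree; the single-path design makes that impossible by construction.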

    Read the article

  • Can a PL/pgSQL function contain a dynamic subquery?

    - by morpheous
    I am writing a PL/pgSQL function. The function has input parameters which specify (indirectly) which tables to read filtering information from. The function embeds business logic which allows it to select data from different tables based on the input arguments. The function dynamically builds a subquery which returns filtering data, which is then used to run the main query. My questions are: Is it 'legal' to use a dynamic subquery in a PL/pgSQL function? I can't see why not - but this question is related to the next one. AFAIK, PL/pgSQL functions are cached or precompiled by the query engine. How does having a function that generates dynamic subqueries impact the work of the query engine?
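
    For reference, a hedged sketch of how a dynamic subquery typically looks in PL/pgSQL (table and column names are invented): the statement text is built with format() and run through EXECUTE.

    ```sql
    -- Hedged sketch: the table supplying the filter is chosen at runtime.
    CREATE OR REPLACE FUNCTION get_filtered_orders(filter_source text, min_value int)
    RETURNS SETOF orders AS $$
    BEGIN
        RETURN QUERY EXECUTE format(
            'SELECT o.*
               FROM orders o
              WHERE o.customer_id IN (SELECT customer_id FROM %I WHERE score >= $1)',
            filter_source)      -- %I quotes the identifier safely
        USING min_value;        -- $1 is bound as a parameter, not concatenated
    END;
    $$ LANGUAGE plpgsql;
    ```

    As for the second question: only the static SQL inside a PL/pgSQL function gets cached prepared plans; an EXECUTE string is parsed and planned each time it runs, so dynamic subqueries are legal but trade plan caching for flexibility on that one statement.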

    Read the article

  • log4j.xml configuration with <rollingPolicy> and <triggeringPolicy>

    - by Mike Smith
    I try to configure log4j.xml in such a way that file will be rolled upon file size, and the rolled file's name will be i.e: "C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" I followed this discussion: http://web.archiveorange.com/archive/v/NUYyjJipzkDOS3reRiMz Finally it worked for me only when I add: try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } to the method: public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength) which make it works. The question is if there is a better way to make it work? since this method call many times and slow my program. Here is the code: package com.mypack.rolling; import org.apache.log4j.rolling.RollingPolicy; import org.apache.log4j.rolling.RolloverDescription; import org.apache.log4j.rolling.TimeBasedRollingPolicy; /** * Same as org.apache.log4j.rolling.TimeBasedRollingPolicy but acts only as * RollingPolicy and NOT as TriggeringPolicy. * * This allows us to combine this class with a size-based triggering policy * (decision to roll based on size, name of rolled files based on time) * */ public class CustomTimeBasedRollingPolicy implements RollingPolicy { TimeBasedRollingPolicy timeBasedRollingPolicy = new TimeBasedRollingPolicy(); /** * Set file name pattern. * @param fnp file name pattern. */ public void setFileNamePattern(String fnp) { timeBasedRollingPolicy.setFileNamePattern(fnp); } /* public void setActiveFileName(String fnp) { timeBasedRollingPolicy.setActiveFileName(fnp); }*/ /** * Get file name pattern. * @return file name pattern. */ public String getFileNamePattern() { return timeBasedRollingPolicy.getFileNamePattern(); } public RolloverDescription initialize(String file, boolean append) throws SecurityException { return timeBasedRollingPolicy.initialize(file, append); } public RolloverDescription rollover(String activeFile) throws SecurityException { return timeBasedRollingPolicy.rollover(activeFile); } public void activateOptions() { timeBasedRollingPolicy.activateOptions(); } } package com.mypack.rolling; import org.apache.log4j.helpers.OptionConverter; import org.apache.log4j.Appender; import org.apache.log4j.rolling.TriggeringPolicy; import org.apache.log4j.spi.LoggingEvent; import org.apache.log4j.spi.OptionHandler; /** * Copy of org.apache.log4j.rolling.SizeBasedTriggeringPolicy but able to accept * a human-friendly value for maximumFileSize, eg. "10MB" * * Note that sub-classing SizeBasedTriggeringPolicy is not possible because that * class is final */ public class CustomSizeBasedTriggeringPolicy implements TriggeringPolicy, OptionHandler { /** * Rollover threshold size in bytes. */ private long maximumFileSize = 10 * 1024 * 1024; // let 10 MB the default max size /** * Set the maximum size that the output file is allowed to reach before * being rolled over to backup files. * * <p> * In configuration files, the <b>MaxFileSize</b> option takes an long * integer in the range 0 - 2^63. You can specify the value with the * suffixes "KB", "MB" or "GB" so that the integer is interpreted being * expressed respectively in kilobytes, megabytes or gigabytes. For example, * the value "10KB" will be interpreted as 10240. 
* * @param value * the maximum size that the output file is allowed to reach */ public void setMaxFileSize(String value) { maximumFileSize = OptionConverter.toFileSize(value, maximumFileSize + 1); } public long getMaximumFileSize() { return maximumFileSize; } public void setMaximumFileSize(long maximumFileSize) { this.maximumFileSize = maximumFileSize; } public void activateOptions() { } public boolean isTriggeringEvent(Appender appender, LoggingEvent event, String filename, long fileLength) { try { Thread.sleep(1); } catch (InterruptedException e) { e.printStackTrace(); } boolean result = (fileLength >= maximumFileSize); return result; } } and the log4j.xml: <?xml version="1.0" encoding="UTF-8" ?> <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd"> <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/" debug="true"> <appender name="console" class="org.apache.log4j.ConsoleAppender"> <param name="Target" value="System.out" /> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" /> </layout> </appender> <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender"> <param name="file" value="C:/temp/test/test_log4j.log" /> <rollingPolicy class="com.mypack.rolling.CustomTimeBasedRollingPolicy"> <param name="fileNamePattern" value="C:/temp/test/test_log4j-%d{yyyy-MM-dd-HH_mm_ss}.log" /> </rollingPolicy> <triggeringPolicy class="com.mypack.rolling.CustomSizeBasedTriggeringPolicy"> <param name="MaxFileSize" value="200KB" /> </triggeringPolicy> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d [%t] %-5p %c -> %m%n" /> </layout> </appender> <logger name="com.mypack.myrun" additivity="false"> <level value="debug" /> <appender-ref ref="FILE" /> </logger> <root> <priority value="debug" /> <appender-ref ref="console" /> </root> </log4j:configuration>

    Read the article

  • Consolidate data from many different databases into one with minimum latency

    - by NTDLS
    I have 12 databases totaling roughly 1.0TB, each on a different physical server running SQL 2005 Enterprise - all with the exact same schema. I need to offload this data into a separate single database that we can use for other purposes (reporting, web services, etc.) with a maximum of 1 hour latency. It should also be noted that these servers are all in the same rack, connected by gigabit connections, and that the inserts to the databases are minimal (avg. 2500 records/hour). The current method is very flaky: the data is currently being replicated (SQL Server Transactional Replication) from each of the 12 servers to a database on another server (yes, 12 different employee tables from 12 different servers into a single employee table on a different server). Every table has a primary key and the rows are unique across all tables (there is a FacilityID in each table). What are my options? There has to be a simple way to do this.

    Read the article

  • SQL Query - Count column values separately

    - by user575535
    I have a hard time getting a Query to work right. This is the DDL for my Tables CREATE TABLE Agency ( id SERIAL not null, city VARCHAR(200) not null, PRIMARY KEY(id) ); CREATE TABLE Customer ( id SERIAL not null, fullname VARCHAR(200) not null, status VARCHAR(15) not null CHECK(status IN ('new','regular','gold')), agencyID INTEGER not null REFERENCES Agency(id), PRIMARY KEY(id) ); Sample Data from the Tables AGENCY id|'city' 1 |'London' 2 |'Moscow' 3 |'Beijing' CUSTOMER id|'fullname' |'status' |agencyid 1 |'Michael Smith' |'new' |1 2 |'John Doe' |'regular'|1 3 |'Vlad Atanasov' |'new' |2 4 |'Vasili Karasev'|'regular'|2 5 |'Elena Miskova' |'gold' |2 6 |'Kim Yin Lu' |'new' |3 7 |'Hu Jintao' |'regular'|3 8 |'Wen Jiabao' |'regular'|3 I want to produce the following output, but i need to count separately for ('new','regular','gold') 'city' |new_customers|regular_customers|gold_customers 'Moscow' |1 |1 |1 'Beijing'|1 |2 |0 'London' |1 |1 |0
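
    One common way to get exactly that output shape is conditional aggregation over a join of the two tables above; a sketch:

    ```sql
    -- Conditional aggregation: one row per city, one counter column per status.
    SELECT a.city,
           SUM(CASE WHEN c.status = 'new'     THEN 1 ELSE 0 END) AS new_customers,
           SUM(CASE WHEN c.status = 'regular' THEN 1 ELSE 0 END) AS regular_customers,
           SUM(CASE WHEN c.status = 'gold'    THEN 1 ELSE 0 END) AS gold_customers
    FROM Agency a
    JOIN Customer c ON c.agencyID = a.id
    GROUP BY a.city;
    ```

    Each CASE contributes 1 only for rows with the matching status, so the three sums count each status separately within the same GROUP BY.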

    Read the article

  • MYSQL TRIGGER LOOP

    - by Lee
    Hey all, I am going through the painstaking process of sorting out someone else's code. So I have decided to create a new database to sit alongside the old one and then use triggers to transfer data between both tables. Now I have an issue with it looping, i.e. a trigger on each table updates the other. Once one updates, it should update the other, but as both tables have triggers it will just loop, which will cause an issue. Is there a way to stop this from happening? Hope this makes sense and hope you can advise.
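
    A hedged sketch of the usual guard (table and column names are invented): a session user variable is set before a trigger writes to the other table and checked at the top of each trigger, so the echoed change is skipped instead of bouncing back.

    ```sql
    -- Hedged sketch: @sync_in_progress acts as a re-entrancy guard between the
    -- two mirrored tables.
    DELIMITER $$

    CREATE TRIGGER old_clients_au AFTER UPDATE ON old_clients
    FOR EACH ROW
    BEGIN
        IF @sync_in_progress IS NULL THEN
            SET @sync_in_progress = 1;
            UPDATE new_clients SET name = NEW.name WHERE id = NEW.id;
            SET @sync_in_progress = NULL;
        END IF;
    END$$

    CREATE TRIGGER new_clients_au AFTER UPDATE ON new_clients
    FOR EACH ROW
    BEGIN
        IF @sync_in_progress IS NULL THEN
            SET @sync_in_progress = 1;
            UPDATE old_clients SET name = NEW.name WHERE id = NEW.id;
            SET @sync_in_progress = NULL;
        END IF;
    END$$

    DELIMITER ;
    ```

    The variable is per-connection, so it only suppresses the second trigger within the same statement chain; updates from other connections still synchronise normally.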

    Read the article

  • Views performance in MySQL for denormalization

    - by Gianluca Bargelli
    I am currently writing my truly first PHP Application and i would like to know how to project/design/implement MySQL Views properly; In my particular case User data is spread across several tables (as a consequence of Database Normalization) and i was thinking to use a View to group data into one large table: CREATE VIEW `Users_Merged` ( name, surname, email, phone, role ) AS ( SELECT name, surname, email, phone, 'Customer' FROM `Customer` ) UNION ( SELECT name, surname, email, tel, 'Admin' FROM `Administrator` ) UNION ( SELECT name, surname, email, tel, 'Manager' FROM `manager` ); This way i can use the View's data from the PHP app easily but i don't really know how much this can affect performance. For example: SELECT * from `Users_Merged` WHERE role = 'Admin'; Is the right way to filter view's data or should i filter BEFORE creating the view itself? (I need this to have a list of users and the functionality to filter them by role). EDIT Specifically what i'm trying to obtain is Denormalization of three tables into one. Is my solution correct? See Denormalization on wikipedia

    Read the article

  • vb.net | Update DB with OleDB

    - by liron
    i wrote a module of a connection to DB with OleDB and the 'sub UpdateClients' doesn't work, the DB don't update. what's missing or wrong? Module mdlDB Const CONNECTION_STRING As String = _ "provider= Microsoft.Jet.OleDB.4.0;Data Source=DbHalf.mdb;mode= Share Deny None" Dim daClient As New OleDb.OleDbDataAdapter Dim dsClient As New DataSet Dim cmClient As CurrencyManager Public Sub OpenClients(ByVal txtId, ByVal txtName, ByVal BindingContext) Dim Con As New OleDb.OleDbConnection(CONNECTION_STRING) Dim sqlClient As New OleDb.OleDbCommand Con.Open() sqlClient.CommandText = "SELECT*" sqlClient.CommandText += "FROM tblClubClient" sqlClient.Connection = Con daClient.SelectCommand = sqlClient dsClient.Clear() daClient.Fill(dsClient, "CLUB_CLIENT") cmClient = BindingContext(dsClient, "CLUB_CLIENT") cmClient.Position = 0 txtId.DataBindings.Add("text", dsClient, "CLUB_CLIENT.ClntId") txtName.DataBindings.Add("text", dsClient, "CLUB_CLIENT.ClntName") Con.Close() End Sub Public Sub UpdateClients(ByVal txtId, ByVal txtName, ByVal BindingContext) Dim cb As New OleDb.OleDbCommandBuilder(daClient) cmClient = BindingContext(dsClient, "CLUB_CLIENT") dsClient.Tables("CLUB_CLIENT").Rows(cmClient.Position).Item("ClntId") = txtId.Text dsClient.Tables("CLUB_CLIENT").Rows(cmClient.Position).Item("ClntName") = txtName.Text daClient.Update(dsClient, "CLUB_CLIENT") End Sub End Module

    Read the article

  • Dynamically Added CheckBox Column is Disabled in GridView

    - by Mark Maslar
    I'm dynamically adding a Boolean column to a DataSet. The DataSet's table is the DataSource for a GridView, which AutoGenerates the columns. Issue: The checkboxes for this dynamically generated column are all disabled. How can I enable them? ds.Tables["Transactions"].Columns.Add("Retry", typeof(System.Boolean)); ds.Tables["Transactions"].Columns["Retry"].ReadOnly = false; In other words, how can I control how GridView generates the CheckBoxes for a Boolean field? (And why does setting ReadOnly to False have no effect?) Thanks!
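
    A hedged explanation and sketch, not the definitive answer: the disabled state comes from how the auto-generated CheckBoxField renders read-only rows, not from the DataColumn's ReadOnly flag, which is why setting ReadOnly = false has no visible effect. One workaround is to re-enable the rendered CheckBox controls in RowDataBound (class and control names below are assumptions); a TemplateField with its own data-bound CheckBox is the more controllable alternative.

    ```csharp
    // Hedged sketch: re-enable auto-generated checkboxes after data binding.
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class TransactionsPage : Page
    {
        protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType != DataControlRowType.DataRow) return;

            // Auto-generated Boolean columns place a single CheckBox in each cell;
            // walk the cells and enable any CheckBox found.
            foreach (TableCell cell in e.Row.Cells)
            {
                foreach (Control control in cell.Controls)
                {
                    CheckBox box = control as CheckBox;
                    if (box != null)
                    {
                        box.Enabled = true;   // make the checkbox clickable
                    }
                }
            }
        }
    }
    ```

    Note that enabling the control only makes it clickable; reading the checked values back on postback still has to be done by hand, since the auto-generated column does not two-way bind.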

    Read the article

  • How to properly name a record creation (insertion) datetime field?

    - by alpav
    If I create a table with datetime default getdate() field that is intended to keep date&time of record insertion, which name is better to use for that field ? I like to use Created and I've seen people use DateCreated or CreateDate. Other possible candidates that I can think of are: CreatedDate, CreateTime, TimeCreated, CreateDateTime, DateTimeCreated, RecordCreated, Inserted, InsertedDate, ... From my point of view anything with Date inside name looks bad because it can be confused with date part in case if I have 2 fields: CreateDate,CreateTime, so I wonder if there are any specific recommendations/standards in that area based on real reasons, not just style, mood or consistency. Of course, if there are 100 existing tables and this is table 101 then I would use same naming convention as used in those 100 tables for the sake of consistency, but this question is about first table in first database in first server in first application.

    Read the article

  • MDX performance vs. T-SQL

    - by SubPortal
    I have a database containing tables with more than 600 million records and a set of stored procedures that make complex search operations on the database. The performance of the stored procedures is so slow even with suitable indexes on the tables. The design of the database is a normal relational db design. I want to change the database design to be multidimensional and use the MDX queries instead of the traditional T-SQL queries but the question is: Is the MDX query better than the traditional T-SQL query with regard to performance? and if yes, to what extent will that improve the performance of the queries? Thanks for any help.

    Read the article

  • Linux: Mplayer instead of Flash, any distro/browser

    - by data_jepp
    The thing is that Flash sucks for Linux on 64-bit, performance-wise. Because of this I actually download every clip to a temp folder and play the file with mplayer while it's being downloaded. This works really nicely; Flash is not needed just to play a video. Does a plugin for this exist for any browser? I tried Google, but I can't find the right words to search for.

    Read the article

  • Dynamic type for List<T>?

    - by Brett
    Hi All, I've got a method that returns a List for a DataSet table: public static List<string> GetListFromDataTable(DataSet dataSet, string tableName, string rowName) { int count = dataSet.Tables[tableName].Rows.Count; List<string> values = new List<string>(); // Loop through the table and row and add them into the array for (int i = 0; i < count; i++) { values.Add(dataSet.Tables[tableName].Rows[i][rowName].ToString()); } return values; } Is there a way I can dynamically set the datatype for the list and have this one method cater for all datatypes so I can specify upon calling this method that it should be a List<int> or List<string> or List<AnythingILike>? Also, what would the return type be when declaring the method? Thanks in advance, Brett
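
    A hedged generic variant of the method above: the type parameter T becomes both the element type and the return type List<T>, and Convert.ChangeType covers the common primitive/string conversions (a custom type like AnythingILike would need its own conversion logic). Class and column names in the usage lines are invented.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data;

    public static class DataSetHelper
    {
        // Generic sketch: caller picks the element type at the call site.
        public static List<T> GetListFromDataTable<T>(DataSet dataSet, string tableName, string columnName)
        {
            List<T> values = new List<T>();

            foreach (DataRow row in dataSet.Tables[tableName].Rows)
            {
                object raw = row[columnName];
                values.Add(raw == DBNull.Value
                    ? default(T)                               // or skip nulls, as needed
                    : (T)Convert.ChangeType(raw, typeof(T)));
            }

            return values;
        }
    }

    // Usage:
    // List<int>    ids   = DataSetHelper.GetListFromDataTable<int>(ds, "Orders", "OrderId");
    // List<string> names = DataSetHelper.GetListFromDataTable<string>(ds, "Orders", "CustomerName");
    ```

    If the column's runtime type already matches T, a straight cast works and Convert.ChangeType is just a pass-through.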

    Read the article

  • PHP: How to use mysql fulltext search and handle fulltext search result

    - by garcon1986
    Hello, I have tried to use mysql fulltext search in my intranet. I wanted to use it to search in multiple tables, and get the independent results depending on tables in the result page. This is what I did for searching: $query = " SELECT * FROM testtable t1, testtable2 t2, testtable3 t3 WHERE match(t1.firstName, t1.lastName, t1.details) against(' ".$value."') or match(t2.others, t2.information, t2.details) against(' ".$value."') or match(t3.other, t2.info, t2.details) against(' ".$value."') "; $result = mysql_query($query)or die('query error'.mysql_error()); while($row = mysql_fetch_assoc($result)){ echo $row['firstName']; echo $row['lastName']; echo $row['details'].'<br />'; } Do you have any ideas about optimizing the query and formatting the output of the search results?
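
    One hedged approach, sketched in the same mysql_* style as the question: the single query above cross-joins the three tables, so running one MATCH query per table keeps the result sets independent and lets each be rendered under its own heading. Section labels ('People', 'Other') are invented; column names come from the question, and an open mysql connection is assumed.

    ```php
    <?php
    // Hedged sketch: one fulltext query per table, results grouped by source.
    $value = mysql_real_escape_string($value);

    $searches = array(
        'People' => "SELECT firstName, lastName, details,
                            MATCH(firstName, lastName, details) AGAINST('$value') AS score
                     FROM testtable
                     WHERE MATCH(firstName, lastName, details) AGAINST('$value')
                     ORDER BY score DESC",
        'Other'  => "SELECT others, information, details,
                            MATCH(others, information, details) AGAINST('$value') AS score
                     FROM testtable2
                     WHERE MATCH(others, information, details) AGAINST('$value')
                     ORDER BY score DESC",
    );

    foreach ($searches as $label => $sql) {
        $result = mysql_query($sql) or die('query error' . mysql_error());
        echo "<h3>$label</h3>";
        while ($row = mysql_fetch_assoc($result)) {
            echo implode(' ', $row) . '<br />';
        }
    }
    ```

    Selecting the MATCH expression as a score column also gives a natural relevance order for each section.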

    Read the article

  • Data model for an MVC learning project

    - by Dofs
    Hi, I am trying to learn Microsoft MVC 2, and have found a small project I wanted to try it on. My idea was to simulate a restaurant where you can reserve a table. Basics: a user can only reserve a full table, so I don't have the trouble of merging people onto different tables. A person can reserve a table for a certain number of hours. My question is how I could design the data model the smartest way. I thought of just having my database like this: Table { Id, TableName } Reservations { Id TableId ReservedFrom ReservedTo UserId } User { UserId UserName ... } By doing it this way I would have to program a lot of the logic in, e.g., the business layer, to work out which tables are occupied at what time, instead of having the data model handle it. So do you guys have a better way to do this?
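
    For what it's worth, with that model the "which tables are free" logic is a single range-overlap query rather than extra schema; a hedged sketch using the tables above (the @from/@to parameter names are illustrative):

    ```sql
    -- A table is free for the requested slot if no existing reservation overlaps it.
    SELECT t.Id, t.TableName
    FROM [Table] t
    WHERE NOT EXISTS (
        SELECT 1
        FROM Reservations r
        WHERE r.TableId = t.Id
          AND r.ReservedFrom < @to     -- existing booking starts before the new one ends
          AND r.ReservedTo   > @from   -- and ends after the new one starts
    );
    ```

    Two intervals overlap exactly when each starts before the other ends, so this one condition covers all the partial- and full-overlap cases; the data model itself can stay as simple as proposed.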

    Read the article
