Search Results

Search found 8943 results on 358 pages for 'audit tables'.


  • Drupal CCK 3 Multigroup tables with views

    - by henrijs.seso
    Hi. Each node has a CCK-3-dev multigroup with three fields. Can I use one of those fields as a table header? The values of this field are taken from Taxonomy. As a simple recipe example, foods Z and Y are two nodes with recipes, and each node has a multigroup with an ingredient name (milk, fish), an amount (a number) and a unit (g, kg). Is there a way to create a table like the one below with Views? I want to use Views Calc to calculate the total amount of each ingredient needed to prepare multiple recipes:

                    Ingredient A    Ingredient B    Ingredient C
            Food Z  200 g           700 g           0 g
            Food Y  500 g           1000 g          50 g

    I don't even need the units next to the numbers, but that would be good too. Anyway, how do I get this?

    Read the article

  • Lua - Iterate Through Table With nil Values

    - by Tony Trozzo
    My Lua function receives a table in array form: { field1, field2, nil, field3 } -- no keys, only values. I'm trying to convert this to a pseudo-CSV form by grabbing all the fields and concatenating them into a string. Here is my function:

            function ArenaRewind:ConvertToCSV(tableName)
                csvRecord = "\n"
                for i, v in pairs(tableName) do
                    if v == nil then v = "nil" end
                    csvRecord = csvRecord .. "\"" .. v .. "\""
                    if i ~= #tableName then
                        csvRecord = csvRecord .. ","
                    end
                end
                return csvRecord
            end

    Not the prettiest code by any means, but it seems to iterate through the table and grab all the non-nil values. The other iteration function, ipairs(), stops as soon as it hits a nil value. Is there an easy way to grab all of these fields, including the nil values? The tables are various sizes, so I'd like to avoid accessing each element like an array (i.e., tableName[1] through tableName[4]) and grabbing the nil values that way. Thanks in advance.
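    A minimal sketch of one workaround, assuming the caller knows (or the table records) the intended length, since both pairs() and the # operator are unreliable once the array contains holes; toCSV and the n argument are illustrative names:

            -- Sketch only: iterate up to an explicit length so nil holes are visited too.
            -- n could come from the caller, or from table.pack (Lua 5.2+), which stores it in the "n" field.
            local function toCSV(t, n)
                n = n or t.n or #t
                local fields = {}
                for i = 1, n do
                    local v = t[i]                     -- nil entries are handled explicitly
                    fields[#fields + 1] = '"' .. tostring(v == nil and "nil" or v) .. '"'
                end
                return "\n" .. table.concat(fields, ",")
            end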

    Read the article

  • MS Access 2003 - VBA for altering a table after a "SELECT * INTO tblTemp FROM tblMain" statement

    - by Justin
    Hi. I use functions like the following to make temporary tables out of crosstab queries:

            Function SQL_Tester()
                Dim sql As String

                If DCount("*", "MSysObjects", "[Name]='tblTemp'") Then
                    DoCmd.DeleteObject acTable, "tblTemp"
                End If

                sql = "SELECT * INTO tblTemp FROM tblMain;"
                Debug.Print (sql)
                Set db = CurrentDb
                db.Execute (sql)
            End Function

    I do this so that I can then use more VBA to take the temporary table to Excel, use some of Excel's functionality (formulas and such), and then return the values to the original table (tblMain). The simple spot where I am getting tripped up: after the SELECT ... INTO statement I need to add a brand new column to that temporary table, and I do not know how to do this. sql = "CREATE TABLE ..." is the only way I know, and of course it doesn't work too well with the above approach: I can't create a table after the fact that has already been created, and I cannot create it beforehand because the SELECT ... INTO approach will then return a "table already exists" message. Any help? Thanks guys!
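    One way this is often handled (a sketch only; the column name and type are placeholders) is to issue an ALTER TABLE against the temp table right after the SELECT ... INTO:

            ' Sketch: add a brand new column to the freshly created temp table.
            ' "ExtraCol" and TEXT(50) are placeholders for whatever the export needs.
            Sub AddColumnToTemp()
                Dim db As DAO.Database
                Set db = CurrentDb
                db.Execute "ALTER TABLE tblTemp ADD COLUMN ExtraCol TEXT(50);", dbFailOnError
            End Sub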

    Read the article

  • Scrape HTML tables from a given URL into CSV

    - by dreeves
    I seek a tool that can be run on the command line like so:

            tablescrape 'http://someURL.foo.com' [n]

    If n is not specified and there's more than one HTML table on the page, it should summarize them (header row, total number of rows) in a numbered list. If n is specified, or if there's only one table, it should parse the table and spit it to stdout as CSV or TSV.

    Potential additional features:

    - To be really fancy you could parse a table within a table, but for my purposes -- fetching data from wikipedia pages and the like -- that's overkill. The Perl module HTML::TableExtract can do this and may be a good place to start for writing the tool I have in mind.
    - An option to asciify any unicode.
    - An option to apply an arbitrary regex substitution for fixing weirdnesses in the parsed table.

    Related questions:

    - http://stackoverflow.com/questions/259091/how-can-i-scrape-an-html-table-to-csv
    - http://stackoverflow.com/questions/1403087/how-can-i-convert-an-html-table-to-csv
    - http://stackoverflow.com/questions/2861/options-for-html-scraping
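    Not the command-line tool being sought, but a rough sketch of the same behaviour in Python, assuming pandas (with lxml or beautifulsoup4) is installed; the script and its minimal argument handling are illustrative only:

            #!/usr/bin/env python
            # Sketch: list the HTML tables on a page, or dump one of them as CSV.
            import sys
            import pandas as pd

            def main():
                url = sys.argv[1]
                n = int(sys.argv[2]) if len(sys.argv) > 2 else None
                tables = pd.read_html(url)                    # one DataFrame per <table> on the page
                if n is None and len(tables) > 1:
                    for i, t in enumerate(tables, 1):         # summary: header row and row count
                        print("%d: %s (%d rows)" % (i, list(t.columns), len(t)))
                else:
                    tables[(n or 1) - 1].to_csv(sys.stdout, index=False)

            if __name__ == "__main__":
                main()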

    Read the article

  • What to consider if using triggers on tables in a sql-server merge replication

    - by Ice
    Hi, I have been running a SQL Server 2000 merge replication across three locations for some years. Triggers do a lot of work in this database, and I have had no trouble. Now, migrating the database to a brand new SQL Server 2008, I have some issues with the triggers: they fire even while the merge agent is doing its work. Does anybody have experience with that kind of thing on SQL Server 2008? Can anybody confirm that the behaviour differs from SQL 2000? Peace, Ice

    Read the article

  • SQL Server 2008 - Editing Tables: Bit columns require 'True' or 'False'

    - by CJM
    Not so much a question as an observation... I'm just upgrading to SQL Server 2008 on my development machine in anticipation of upgrading my live applications. I didn't anticipate any problems, since [I think] I generally use standard T-SQL, probably not too far from ANSI standard SQL. So far so good, but I was really thrown by a very simple change: I was creating a simple, small look-up table to store a list of codes, including a bit column to indicate the current default code. But when I used the new/modified 'Edit Top 200 Rows' option and entered my 0s and 1s in the bit column, I got an error:

            Invalid value for cell - String was not recognised as a valid boolean

    After a bit of head-scratching, I tried True and False - and they worked. So it seems this new Edit feature requires 4 or 5 characters to be typed, rather than the previous 1. Checking further, we can still use '...where bitval = 1' but can now also use '...where bitval = 'true''. Any results returned still render these bit columns as 0 or 1, though. It all sounds like half a step backwards. Not the end of the world, but an unnecessary annoyance. Does anybody have any insight on this issue? Or are there any other new gotchas with SQL Server 2008?
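    For reference, a quick sketch of the two filter styles described above, against a hypothetical lookup table with a bit column named IsDefault:

            -- Sketch against a hypothetical lookup table; both forms are accepted by SQL Server 2008.
            SELECT Code FROM dbo.CodeLookup WHERE IsDefault = 1;        -- classic 0/1 form still works
            SELECT Code FROM dbo.CodeLookup WHERE IsDefault = 'true';   -- string literal also converts to bit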

    Read the article

  • hibernate fail mapping two tables

    - by sebbalex
    Hi guys, I'd like to understand how this is possible: until I was working with one table everything worked fine, but when I mapped another table it fails as shown below.

    Glassfish start:

            INFO: configuring from resource: /hibernate.cfg.xml
            INFO: Configuration resource: /hibernate.cfg.xml
            INFO: Reading mappings from resource : hibernate_centrale.hbm.xml   //first table
            INFO: Mapping class: com.italtel.patchfinder.objects.centrale - centrale
            INFO: Reading mappings from resource : hibernate_impianti.hbm.xml   //second table
            INFO: Mapping class: com.italtel.patchfinder.objects.Impianto - impianti
            INFO: Configured SessionFactory: null
            INFO: schema update complete
            INFO: Hibernate: select centrale0_.id as id0_, centrale0_.name as name0_, centrale0_.impianto as impianto0_, centrale0_.servizio as servizio0_ from centrale centrale0_ group by centrale0_.name
            INFO: Hibernate: select centrale0_.id as id0_, centrale0_.name as name0_, centrale0_.impianto as impianto0_, centrale0_.servizio as servizio0_ from centrale centrale0_ where centrale0_.name='ANCONA' order by centrale0_.name asc

    Error:

            org.hibernate.hql.ast.QuerySyntaxException: impianti is not mapped [from impianti where impianto='SD' order by modulo asc]
                at org.hibernate.hql.ast.util.SessionFactoryHelper.requireClassPersister(SessionFactoryHelper.java:181)
            .....

    Config, table 1:

            <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
            <hibernate-mapping>
                <class name="com.italtel.patchfinder.objects.Impianto" table="impianti">
                    <id column="id" name="id">
                        <generator class="increment"/>
                    </id>
                    <property name="impianto"/>
                    <property name="modulo"/>
                </class>
            </hibernate-mapping>

    Table 2:

            <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
            <hibernate-mapping>
                <class name="com.italtel.patchfinder.objects.centrale" table="centrale">
                    <id column="id" name="id">
                        <generator class="increment"/>
                    </id>
                    <property name="name"/>
                    <property name="impianto"/>
                    <property name="servizio"/>
                </class>
            </hibernate-mapping>

    Connection stuff:

            ...
            <property name="hbm2ddl.auto">update</property>
            <mapping resource="hibernate_centrale.hbm.xml"/>
            <mapping resource="hibernate_impianti.hbm.xml"/>
            </session-factory>
            </hibernate-configuration>

    Class:

            public List loadAll() {
                Session session = sessionFactory.getCurrentSession();
                session.beginTransaction();
                return session.createQuery("from centrale group by name").list();
            }

            public List<centrale> loadImplants(String centrale) {
                Session session = sessionFactory.getCurrentSession();
                session.beginTransaction();
                return session.createQuery("from centrale where name='" + centrale + "' order by name asc").list();
            }

            public List<Impianto> loadModules(String implant) {
                Session session = sessionFactory.getCurrentSession();
                session.beginTransaction();
                return session.createQuery("from impianti where impianto='" + implant + "' order by modulo asc").list();
            }

    Do you have some advice? Thanks in advance
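    Worth noting alongside the error above: HQL queries name the mapped class rather than the database table, so (assuming the mappings shown) a module query would look something like this sketch; the named parameter is an added illustration, not part of the original code:

            // Sketch only: HQL refers to the mapped entity (Impianto), not the table name (impianti).
            // A named parameter also avoids concatenating user input into the query string.
            public List<Impianto> loadModules(String implant) {
                Session session = sessionFactory.getCurrentSession();
                session.beginTransaction();
                return session
                        .createQuery("from Impianto where impianto = :impianto order by modulo asc")
                        .setString("impianto", implant)
                        .list();
            }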

    Read the article

  • database splitting; multiple tables

    - by Ben
    I am coding a classified-ads web app. What is the optimal way to structure the database for this? Because of the high repeatability, would it be faster (in terms of searching/indexing) to have a separate table in the database for each city? Or would it be okay to have just one table covering every city (it would have a lot of rows)? The classifieds table has id, user_id, city_name, category, [description and detail fields].
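    For reference, a sketch of the single-table layout described, in MySQL-flavoured SQL; the types and lengths are guesses, and the composite index illustrates how per-city searches stay cheap without per-city tables:

            -- Sketch of the single-table approach; column types and lengths are guesses.
            CREATE TABLE classifieds (
                id          INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
                user_id     INT UNSIGNED NOT NULL,
                city_name   VARCHAR(80)  NOT NULL,
                category    VARCHAR(40)  NOT NULL,
                description TEXT,
                detail      TEXT,
                INDEX idx_city_category (city_name, category)   -- keeps per-city lookups from scanning every row
            );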

    Read the article

  • How to differentiate between two similar fields in LINQ joined tables

    - by Azhar
    How do I differentiate between the two identically named fields in "select new", e.g. Description: c.Description and lt.Description?

            DataTable lDt = new DataTable();
            try
            {
                lDt.Columns.Add(new DataColumn("AreaTypeID", typeof(Int32)));
                lDt.Columns.Add(new DataColumn("CategoryRef", typeof(Int32)));
                lDt.Columns.Add(new DataColumn("Description", typeof(String)));
                lDt.Columns.Add(new DataColumn("CatDescription", typeof(String)));

                EzEagleDBDataContext lDc = new EzEagleDBDataContext();
                var lAreaType = (from lt in lDc.tbl_AreaTypes
                                 join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                                 where lt.AreaTypeID == pTypeId
                                 select new { lt.AreaTypeID, lt.Description, lt.CategoryRef, c.Description }).ToArray();

                for (int j = 0; j < lAreaType.Count; j++)
                {
                    DataRow dr = lDt.NewRow();
                    dr["AreaTypeID"] = lAreaType[j].LandmarkTypeID;
                    dr["CategoryRef"] = lAreaType[j].CategoryRef;
                    dr["Description"] = lAreaType[j].Description;
                    dr["CatDescription"] = lAreaType[j].;
                    lDt.Rows.Add(dr);
                }
            }
            catch (Exception ex)
            {
            }
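    A hedged sketch of the usual workaround: give one of the colliding properties an explicit name in the anonymous type (CatDescription below is simply an illustrative alias matching the DataTable column):

            // Sketch: alias one of the clashing properties in the anonymous type,
            // so both descriptions are addressable when filling the DataRow.
            var lAreaType = (from lt in lDc.tbl_AreaTypes
                             join c in lDc.tbl_AreaCategories on lt.CategoryRef equals c.CategoryID
                             where lt.AreaTypeID == pTypeId
                             select new
                             {
                                 lt.AreaTypeID,
                                 lt.CategoryRef,
                                 Description = lt.Description,
                                 CatDescription = c.Description   // explicit alias avoids the name collision
                             }).ToArray();

            // ...
            dr["Description"]    = lAreaType[j].Description;
            dr["CatDescription"] = lAreaType[j].CatDescription;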

    Read the article

  • How do I select the max value from multiple tables in one column

    - by Derick
    I would like to get the last date the records were modified. Here is a simple sample SELECT:

            SELECT t01.name,
                   t01.last_upd date1,
                   t02.last_upd date2,
                   t03.last_upd date3,
                   'maxof123'   maxdate
              FROM s_org_ext t01, s_org_ext_x t02, s_addr_org t03
             WHERE t02.par_row_id(+) = t01.row_id
               AND t03.row_id(+) = t01.pr_addr_id
               AND t01.int_org_flg = 'n';

    How can I get the maxdate column to display the max of the three dates? Note: no UNION or sub-SELECT statements ;)
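    Assuming this is Oracle (which the (+) outer-join syntax suggests), one common approach is GREATEST(), which compares values across columns within each row; a sketch that also guards against NULLs coming from the outer joins:

            -- Sketch, assuming Oracle. GREATEST() returns NULL if any argument is NULL,
            -- so NVL covers rows where the outer-joined tables have no match.
            SELECT t01.name,
                   GREATEST(t01.last_upd,
                            NVL(t02.last_upd, t01.last_upd),
                            NVL(t03.last_upd, t01.last_upd)) maxdate
              FROM s_org_ext t01, s_org_ext_x t02, s_addr_org t03
             WHERE t02.par_row_id(+) = t01.row_id
               AND t03.row_id(+) = t01.pr_addr_id
               AND t01.int_org_flg = 'n';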

    Read the article

  • oracle global temporary tables

    - by mrp
    I created a global temporary table. When I execute the code as individual statements it works fine, but when I execute it as a single script in TOAD no records are created; there is just an empty global temp table. E.g.:

            CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN
            (
                COL1 NUMBER(9),
                COL2 VARCHAR2(30),
                COL3 DATE
            )
            ON COMMIT PRESERVE ROWS
            /
            INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate);
            /
            INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate);
            /
            INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate);
            /
            COMMIT;

    When I run the above code one statement at a time it works fine, but when I execute it as a script it runs without errors and yet there are no records in the temp table. Can anyone help me with this, please?

    Read the article

  • Rows dropping when I try to join data from two tables

    - by blcArmadillo
    I have a fairly simple query I'm trying to write. If I run the following query:

            SELECT parts.id, parts.type_id
              FROM parts
             WHERE parts.type_id = 1 OR parts.type_id = 2 OR parts.type_id = 4
             ORDER BY parts.type_id;

    I get all the rows I expect to be returned. Now, when I try to grab the parent_unit from another table with the following query, six rows suddenly drop out of the result:

            SELECT parts.id, parts.type_id, sp.parent_unit
              FROM parts, serialized_parts sp
             WHERE (parts.type_id = 1 OR parts.type_id = 2 OR parts.type_id = 4)
               AND sp.parts_id = parts.id
             ORDER BY parts.type_id;

    In the past I've never really dealt with ORs in my queries, so maybe I'm just doing it wrong. That said, I'm guessing it's just a simple mistake. Let me know if you need sample data and I'll post some. Thanks.
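    For comparison, a sketch of the same query as an explicit outer join; the implicit inner join above drops any part that has no matching serialized_parts row, which is one plausible cause of the missing rows:

            -- Sketch: LEFT JOIN keeps parts without a serialized_parts row (parent_unit comes back NULL for those).
            SELECT parts.id, parts.type_id, sp.parent_unit
              FROM parts
              LEFT JOIN serialized_parts sp ON sp.parts_id = parts.id
             WHERE parts.type_id IN (1, 2, 4)
             ORDER BY parts.type_id;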

    Read the article

  • Using sectioned tables from an sqlite file with an iPhone uitableview

    - by Chris
    I am currently trying to make a sectioned table like this:

            Section 1:
                Entry 1
                Entry 2
                Entry 3
            Section 2:
                Entry 4
                Entry 5
                Entry 6
            Section 3:
                Entry 7
                Entry 8
                ...

    However, using this code:

            Event *lists = (Event *)[eventList objectAtIndex:indexPath.row];
            accessStatement = "select * from DatabaseName";
            [self findEvents];   // Code that builds each object's strings from the database
            cell.textLabel.text = lists.name;

    I get this instead:

            Section 1:
                Entry 1
                Entry 2
                Entry 3
            Section 2:
                Entry 1
                Entry 2
                Entry 3
            Section 3:
                Entry 1
                Entry 2
                ...

    It keeps repeating itself. How do I make the index continue across sections rather than restart in each one? Thanks in advance.

    Read the article

  • Auto width on tables

    - by Hulk
    An HTML table's columns and rows are generated dynamically, i.e. on one pass it could be two rows and three columns, and the next time it could be rows of ten columns. My question is how to adjust the width of the table automatically so that the table always fills 100% of the page, adjusting the column and row sizes:

            <table>
              <tr><td></td><td></td><td></td></tr>
              <tr><td></td><td></td><td></td></tr>
            </table>

            <table>
              <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
              <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
              <tr><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
            </table>

    Thanks..
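    A sketch of one common approach, assuming plain CSS is acceptable: width:100% makes the table span the page, and table-layout:fixed shares that width evenly across however many columns are generated:

            <!-- Sketch: the table fills the page and its columns share the width evenly,
                 regardless of how many columns are generated. -->
            <table style="width: 100%; table-layout: fixed; border-collapse: collapse;">
              <tr><td>a</td><td>b</td><td>c</td></tr>
              <tr><td>d</td><td>e</td><td>f</td></tr>
            </table>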

    Read the article

  • Mysql - Summary Tables

    - by jwzk
    Which method do you suggest, and why? Creating a summary table and...

    1) updating the table as the action occurs, in real time;
    2) running GROUP BY queries every 15 minutes to update the summary table;
    3) something else?

    The data must be near real time; it can't wait an hour, a day, etc.
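    For illustration only, option 1 is often implemented as an upsert issued alongside the action itself; the table and column names below are made up, and a UNIQUE key on the grouping columns is assumed:

            -- Sketch of option 1 (real-time maintenance), with made-up names.
            -- Assumes a UNIQUE key on (summary_date, page_id); run in the same transaction as the action.
            INSERT INTO daily_click_summary (summary_date, page_id, clicks)
            VALUES (CURRENT_DATE, 42, 1)
            ON DUPLICATE KEY UPDATE clicks = clicks + 1;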

    Read the article

  • How do I make multi-page landscape tables in LaTeX

    - by Tim
    The title is pretty much the extent of my question. I am trying to insert a large table into a document using the xtabular environment. If I wrap the xtabular environment in a landscape environment, the bottom of my table gets chopped off. Does anyone have any better suggestions? Thanks

            \begin{landscape}
            \singlespace
            \begin{xtabular}{|c|c|c|c|c|}
            \hline
            some & stuff & ... & \\
            \end{xtabular}
            \end{landscape}

    Tim
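    One combination that is often suggested for multi-page landscape tables (untested against this particular document) is longtable inside pdflscape's landscape environment, since longtable can break across pages; the column spec below is copied from the fragment above:

            % Sketch only: longtable breaks across pages, and pdflscape rotates those pages.
            \usepackage{pdflscape}   % in the preamble
            \usepackage{longtable}

            \begin{landscape}
            \singlespace             % as in the fragment above (requires setspace)
            \begin{longtable}{|c|c|c|c|c|}
            \hline
            some & stuff & ... & & \\
            \hline
            % ... many more rows ...
            \end{longtable}
            \end{landscape}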

    Read the article

  • Paging enormous tables on DB2

    - by grenade
    We have a view that, without constraints, will return 90 million rows, and a reporting application that needs to display paged datasets of that view. We're using NHibernate and recently noticed that its paging mechanism looks like this:

            select * from
                (select rownumber() over() as rownum,
                        this_.COL1 as COL1_20_0_,
                        this_.COL2 as COL2_20_0_
                   FROM SomeSchema.SomeView this_
                  WHERE this_.COL1 = 'SomeValue') as tempresult
            where rownum between 10 and 20

    The query brings the db server to its knees. I think what's happening is that the nested query is assigning a row number to every row satisfying the where clause before selecting the subset (rows 10 - 20). Since the nested query will return a lot of rows, the mechanism is not very efficient. I've seen lots of tips and tricks for doing this efficiently on other SQL platforms, but I'm struggling to find a DB2 solution. In fact, an article on IBM's own site recommends the approach that NHibernate has taken. Is there a better way?

    Read the article

  • Two asp.net applications to use the same membership tables - specifically user login data

    - by Lk
    Hi, I have created an ASP.NET solution with two applications. They both use the same database, which is set up with .NET membership and roles. Application 1 uses the membership for authentication to an administration area - this works fine. Application 2 has a different applicationID to App1. I want to be able to use the existing user accounts to manage App2's authentication needs. How is this best achieved? Do I just match App2's applicationID to App1's, or is there another way? Many thanks, Lk
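    For reference, a hedged sketch of the usual way two applications end up sharing one set of membership users: give both membership providers the same applicationName in web.config (the provider, connection string, and application names below are placeholders):

            <!-- Sketch: the same applicationName in both web.config files makes the two apps
                 resolve to one ApplicationId, and therefore one set of membership users.
                 Names here are placeholders. -->
            <membership defaultProvider="SharedSqlMembership">
              <providers>
                <clear/>
                <add name="SharedSqlMembership"
                     type="System.Web.Security.SqlMembershipProvider"
                     connectionStringName="MembershipDb"
                     applicationName="/SharedApp" />
              </providers>
            </membership>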

    Read the article

  • Using hash tables in racket

    - by user2963128
    I'm working on an n-grams program and I'm having trouble filling out my hash table. I want to write a recursive function that takes the words and adds them to the hash table. The way it's supposed to work: given the data set 1 2 3 4 5 6 7, the first entry in the hash table should have a key of [1 2] and the data should be 3; the second entry should have key [2 3] and data 4, continuing on until the end of the text file. We are given a predefined function called readword that simply returns one word from the text, but I'm not sure how to make these calls overlap each other. The calls would look something like this if the data were hard-coded in:

            (hash-set! (list "1" "2") 3)
            (hash-set! (list "2" "3") 4)

    Two calls that I've tried look like this:

            (hash-set! Ngram-table (list ((word1) (word2)) readword in))
            (hash-set! Ngram-table (append ((cdr data) word1)) readword in)

    Apparently the "in" after readword is supposed to tell the computer that this is an input instead of an output, or something like that. How would I call this so that the keys in the hash table overlap like this? And what would the recursive call look like? Edit: also, we are not allowed to use assignment statements in this program.
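    A minimal sketch of the overlapping recursion; readword, its input-port argument, and the end-of-file convention are assumptions based on the description above:

            ;; Sketch only. Assumes (readword in) returns the next word from the input port,
            ;; or an eof object when the text runs out; the table is a mutable hash.
            (define (fill-ngrams! table w1 w2 in)
              (define w3 (readword in))
              (unless (eof-object? w3)
                (hash-set! table (list w1 w2) w3)   ; key overlaps: [w1 w2] -> w3
                (fill-ngrams! table w2 w3 in)))     ; next step files [w2 w3] -> following word

            ;; usage sketch:
            ;; (define Ngram-table (make-hash))
            ;; (fill-ngrams! Ngram-table (readword in) (readword in) in)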

    Read the article

  • PHP MySQLi timeout on locked tables?

    - by Marcin
    Hi guys, I have a weird problem with the mysqli timeout options. I am using mysqli_init() and real_connect() in order to set MYSQLI_OPT_CONNECT_TIMEOUT:

            $this->__mysqli = mysqli_init();
            if (!$this->__mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 1))
                throw new Exception('Timeout settings failed');
            $this->__mysqli->real_connect(host, user, pass, db);
            ....

    Then I initiate a query on a locked table (LOCK TABLE users WRITE) and it just hangs, ignoring all my settings, even:

            set_time_limit(1);
            ini_set('max_execution_time', 1);
            ini_set('default_socket_timeout', 1);
            ini_set('mysql.connect_timeout', 1);

    I understand why set_time_limit(1) and max_execution_time are ignored, but why are the other timeouts, and especially MYSQLI_OPT_CONNECT_TIMEOUT, ignored, and how do I solve it? I am using PHP 5.3.1 on Windows and Linux boxes; please help.

    Read the article

  • mysql medium int vs. int performance?

    - by aviv
    Hi, I have a simple users table; I guess the maximum number of users I am going to have is 300,000. Currently I am using:

            CREATE TABLE users (
                id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
                ....

    Of course I have many other tables in which users(id) is a FOREIGN KEY. I read that since the id is never going to use the full range of INT, it is better to use MEDIUMINT, and that it will give better performance. Is it true? (I am using MySQL on Windows Server 2008.) Thanks.
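    For scale, a sketch of the MEDIUMINT variant: MEDIUMINT UNSIGNED is 3 bytes with a maximum of 16,777,215, so 300,000 ids fit comfortably; note that any foreign key column referencing users(id) would need the same type:

            -- Sketch: MEDIUMINT UNSIGNED (3 bytes, max 16,777,215) instead of INT UNSIGNED (4 bytes).
            -- Every table referencing users(id) should use the matching column type.
            CREATE TABLE users (
                id MEDIUMINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
                -- ...
            );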

    Read the article

  • Users in database server or database tables

    - by Batcat
    Hi all, I came across an interesting issue in client-server application design. We have a browser-based management application with many users, so obviously it has a user management module. I have always thought that having a user table in the database to keep all the login details was good enough. However, a senior developer said user management should be done in the database server layer, and that otherwise the design is poor. What he meant was: if a user wants to use the application, then a user should be created in the user table AND as a user account on the database server as well. So if I have 50 users using my application, then I should have 50 database server logins. I personally think having just one database server account for this database is enough: just grant that account the privileges needed for all the operations the application performs. The users that interact with the application should have their accounts created and managed within the database table, as they belong to the application layer. I don't see a need to create a database server account for every user created for the application in the user table; a single database server user should be enough to handle all the queries sent by the application. I'd really like to hear some suggestions and opinions, and whether I'm missing something - performance or security issues? Thank you very much.

    Read the article
