Search Results

Search found 30061 results on 1203 pages for 'table layout'.

Page 246/1203 | < Previous Page | 242 243 244 245 246 247 248 249 250 251 252 253  | Next Page >

  • How does MySQL's ORDER BY RAND() work?

    - by Eugene
    Hi, I've been doing some research and testing on how to do fast random selection in MySQL. In the process I've run into some unexpected results, and now I am not fully sure I know how ORDER BY RAND() really works. I always thought that when you do ORDER BY RAND() on a table, MySQL adds a new column filled with random values, sorts the data by that column, and then you take, say, the top row, which got there randomly. I've done lots of googling and testing and finally found that the query Jay offers in his blog is indeed the fastest solution:

        SELECT * FROM Table T
        JOIN (SELECT CEIL(MAX(ID)*RAND()) AS ID FROM Table) AS x
          ON T.ID >= x.ID
        LIMIT 1;

    While a common ORDER BY RAND() takes 30-40 seconds on my test table, his query does the work in 0.1 seconds. He explains how this works in the blog, so I'll skip that and move on to the odd thing. My table is a common table with a PRIMARY KEY id and other non-indexed stuff like username, age, etc. Here's the thing I am struggling to explain:

        SELECT * FROM table ORDER BY RAND() LIMIT 1;            /* 30-40 seconds */
        SELECT id FROM table ORDER BY RAND() LIMIT 1;           /* 0.25 seconds */
        SELECT id, username FROM table ORDER BY RAND() LIMIT 1; /* 90 seconds */

    I was expecting roughly the same time for all three queries, since I am always sorting on a single column, but for some reason this didn't happen. Please let me know if you have any ideas about this. I have a project where I need to do fast ORDER BY RAND(), and personally I would prefer to use

        SELECT id FROM table ORDER BY RAND() LIMIT 1;
        SELECT * FROM table WHERE id=ID_FROM_PREVIOUS_QUERY LIMIT 1;

    which, yes, is slower than Jay's method, but is smaller and easier to understand. My queries are rather big ones with several JOINs and a WHERE clause, and while Jay's method still works, the query grows really big and complex, because I need to repeat all the JOINs and the WHERE clause in the joined subquery (called x in his query). Thanks for your time!
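    For reference, a minimal sketch of how the two-step version could be wired together in a single round trip, assuming MySQL user variables (@rand_id is an illustrative name, and `table` stands in for the real table name, as above):

        -- step 1: pick a random primary key value
        SELECT id INTO @rand_id FROM `table` ORDER BY RAND() LIMIT 1;
        -- step 2: fetch the full row cheaply via a primary key lookup
        SELECT * FROM `table` WHERE id = @rand_id;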

    Read the article

  • Why is MySQL with InnoDB doing a table scan when key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's the expected query plan from EXPLAIN:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:16';
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected by this query. However, if I query for just one more minute of data, I get this query plan:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        1 row in set (0.00 sec)

    The optimizer believes the scan will be better, but that is over 70x more rows to examine, so I have a hard time believing the table scan is faster. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help; I'm more than happy to provide more info and answer questions.
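    For reference, a sketch of a stronger hint than USE INDEX (MySQL's FORCE INDEX makes the optimizer treat a table scan as very expensive, so the range access should win if the index is usable at all):

        SELECT * FROM table1 FORCE INDEX (tdcol)
        WHERE tdcol BETWEEN '2010-04-13 00:00' AND '2010-04-14 03:17';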

    Read the article

  • Need Help on entity framework

    - by Sarathi1904
    I have three tables (Roles, Actions and RoleActionLinks). The Roles table has a few columns (RoleID, RoleName, Desc). The Actions table has a few columns (ActionID, ActionName, Desc). RoleActionLinks is created to store the association between Roles and Actions, and has just the columns RoleID and ActionID. When I created the data model (edmx), it shows only Role and Action as entities; I did not find a RoleActionLink entity. Even though there is no direct relation between the Roles and Actions tables, the two are automatically related through the RoleActionLink table. When I create a new Action, an action record is populated in the Actions table (this works fine). At the same time, I need to populate a record in the RoleActionLinks table, but I have no entity to populate. Please tell me how to accomplish this.

    Read the article

  • deadlock because of foreign key?

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. I ran into a deadlock in the following stored procedure, but through my own fault I did not record the deadlock graph, and now I cannot reproduce the issue. I want to do a postmortem to find the root cause of the deadlock so I can avoid it in the future. The deadlock happens on the DELETE statement. @Param1 maps to a column of table FooTable; that column is a foreign key to another table (it references the clustered primary key column of that other table). There is no index on the Param1 column itself in FooTable. FooTable has a different column as its clustered primary key, not Param1. Here is my guess as to why there is a deadlock, and I would like people to review whether my analysis is correct:

    (1) Since the Param1 column has no index, the DELETE does a table scan and acquires a table-level lock; because of the foreign key, the delete operation also needs to check the master table, i.e. acquire a lock on the master table.
    (2) Some operation on the master table acquires a lock on the master table but then wants a lock on FooTable.
    (3) (1) and (2) form a lock cycle, which makes the deadlock happen.

    Is my analysis correct? Any ideas for a repro scenario?

        create PROCEDURE [dbo].[FooProc]
        (
            @Param1 int
           ,@Param2 int
           ,@Param3 int
        )
        AS
        DELETE FooTable WHERE Param1 = @Param1

        INSERT INTO FooTable
        (
            Param1
           ,Param2
           ,Param3
        )
        VALUES
        (
            @Param1
           ,@Param2
           ,@Param3
        )

        DECLARE @ID bigint
        SET @ID = ISNULL(@@Identity,-1)
        IF @ID > 0
        BEGIN
            SELECT IdentityStr FROM FooTable WHERE ID = @ID
        END

    thanks in advance, George
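    If the table-scan theory holds, a standard mitigation would be to index the foreign-key column so the DELETE can seek rather than scan and lock the whole table. A minimal sketch (the index name is made up):

        CREATE NONCLUSTERED INDEX IX_FooTable_Param1
            ON FooTable (Param1);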

    Read the article

  • EJB Persist On Master Child Relationship

    - by deepak.siddappa(at)oracle.com
    Let us take a scenario where a user wants to persist a master-child relationship. Here we have two tables, dept and emp (using the Scott schema), which have a master-child relation. Model Diagram: In the model diagram, Dept is the master table, Emp is the child table, and Dept is related to Emp by a one-to-n relationship. Let's assume we need to make new entries in the emp table using the EJB persist method. Create an Emp form by manually dropping the fields, where deptno (a foreign key in the emp table) is dropped as Single Selection -> ADF Select One Choice from the deptFindAll data control. Make sure to bind all field variables in the backing bean. Employee Form: Once the Emp form is created, if the persistEmp() method is used to commit the record, it will persist all the Emp fields into the emp table except deptno, because deptno is passed as an object reference in the persistEmp method (it is a foreign key reference). So deptno can't be passed to persistEmp directly; instead, deptno should be set explicitly on the emp object, and then persist will save the deptno to the emp table. The solution below is one way to work around this scenario. Create a method in the session bean for adding emp records and expose this method in the data control. Here "em" is an EntityManager, a member variable of the session bean:

        private EntityManager em;

        public void addEmpRecord(String ename, String job, BigDecimal deptno) {
            Emp emp = new Emp();
            emp.setEname(ename);
            emp.setJob(job);
            // setting the deptno explicitly
            Dept dept = new Dept();
            dept.setDeptno(deptno);
            // passing the dept object
            emp.setDept(dept);
            // persist the emp object data to the Emp table
            em.persist(emp);
        }

    From the Data Controls palette, drop addEmpRecord as an ADF Method button. In the Edit Action Binding window, enter the parameter values that are bound in the backing bean. For example, if the deptno text field is bound to the "deptno" variable in the backing bean, then in the EL Expression Builder pass the value as "#{backingbean.deptno.value}". Binding:

    Read the article

  • Slow retrieval of data in SQLite takes a long time when using a ContentProvider

    - by Arlyn
    I have an application on Android (running 4.0.3) that stores a lot of data in Table A, which resides in an SQLite database. I am using a ContentProvider as an abstraction layer above the database. Lots of data here means almost 80,000 records per month. Table A is structured like this:

        String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
            + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
            + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
            + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
            + "," + COLUMN_TAG + " TEXT"
            + "," + COLUMN_VALUE + " REAL NOT NULL"
            + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
            + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
            + " )";

    Here is the index statement:

        String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
            + TABLE_A + "_" + COLUMN_TIMESTAMP
            + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";

    I have defined the columns as well as the table name as String constants. I am already experiencing a significant slowdown when retrieving data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then display it. Obviously this may be the wrong way of doing things, and I am trying to find a better approach using a ContentProvider. But that is not the problem that bothers me. The problem is that, for some reason, it takes much longer to retrieve data from the other tables, which have only up to 12 records each, and I see this delay increase as the number of records in Table A increases. This does not make any sense. I can understand a delay when retrieving data from Table A, but why the delay in retrieving data from other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3000 records. What could be the problem?
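    As a first diagnostic step, the plans for the slow queries against the small tables could be inspected with SQLite's built-in EXPLAIN QUERY PLAN. A sketch (other_table and some_column are placeholder names):

        EXPLAIN QUERY PLAN
        SELECT * FROM other_table WHERE some_column = 1;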

    Read the article

  • SQL query. An unusual join. DB implemented in sqlite-3

    - by user02814
    This is essentially a question about constructing an SQL query. The db is implemented with sqlite3. I am a relatively new user of SQL. I have two tables and want to join them in an unusual way. The following is an example to explain the problem.

        Table 1 (t1):
        id    year   name
        -------------------------
        297   2010   Charles
        298   2011   David
        300   2010   Peter
        301   2011   Richard

        Table 2 (t2):
        id    year   food
        ---------------------------
        296   2009   Bananas
        296   2011   Bananas
        297   2009   Melon
        297   2010   Coffee
        297   2012   Cheese
        298   2007   Sugar
        298   2008   Cereal
        298   2012   Chocolate
        299   2000   Peas
        300   2007   Barley
        300   2011   Beans
        300   2012   Chickpeas
        301   2010   Watermelon

    I want to join the tables on id and year. The catch is that id must match exactly, but if there is no exact match in Table 2 for the year in Table 1, then I want to choose the next (lower) available year. A selection of the kind that I want would give the following result:

        id    year   matchyr   name      food
        -------------------------------------------------
        297   2010   2010      Charles   Coffee
        298   2011   2008      David     Cereal
        300   2010   2007      Peter     Barley
        301   2011   2010      Richard   Watermelon

    To summarise: id=297 has an exact match for year=2010 given in Table 1, so the corresponding line for id=297, year=2010 is chosen from Table 2. id=298, year=2011 does not have a matching year in Table 2, so the next available year (less than 2011) is chosen. As you can see, I would also like to know what the matched year (whether exact or inexact) actually was. I would very much appreciate (1) an indication (yes/no) of whether this is possible in SQL alone, or whether I need to look outside SQL, and (2) a solution, if that is not too onerous.
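    For what it's worth, one shape such a query could take is a pair of correlated subqueries, sketched below (untested; t1 and t2 as in the example above, and LIMIT inside a scalar subquery is sqlite3-friendly):

        SELECT t1.id, t1.year,
               (SELECT MAX(t2.year) FROM t2
                 WHERE t2.id = t1.id AND t2.year <= t1.year) AS matchyr,
               t1.name,
               (SELECT t2.food FROM t2
                 WHERE t2.id = t1.id AND t2.year <= t1.year
                 ORDER BY t2.year DESC LIMIT 1) AS food
        FROM t1;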

    Read the article

  • IPHONE - number of row being shown

    - by Mike
    I have a table inside a UIImageView. The table occupies the entire UIImageView and contains a series of images. Scrolling and paging are enabled. As the table is the same size as its parent view, I can see just one cell of the table at a time; when I scroll the table, I see the next or previous cell. I have a variable that must contain the number of the row being shown, but how can I make the table update that variable? I can see that I could use didSelectRowAtIndexPath to change the variable when a cell is selected, but that is not the case here: I need to update the variable when the table is scrolled. I want to do the same thing I could do with a UIScrollView and scrollViewDidScroll:, but even then I would have to know the number of the row that is visible. How can I do that? Thanks.

    Read the article

  • R: Why does read.table stop reading a file?

    - by Mike Dewar
    I have a file, called genes.txt, which I'd like to read into a data.frame. It has a lot of lines, and each line has three tab-delimited fields:

        mike$ wc -l genes.txt
        42476 genes.txt

    I read this file into R with the command read.table, like this:

        genes = read.table(
            genes_file,
            sep="\t",
            na.strings="-",
            fill=TRUE,
            col.names=c("GeneSymbol","synonyms","description")
        )

    where genes_file points at genes.txt. This seems to work fine. However, the number of rows in my data.frame is significantly less than the number of lines in the text file:

        > nrow(genes)
        [1] 27896

    and things I can find in the text file:

        mike$ grep "SELL" genes.txt
        SELL    CD62L|LAM1|LECAM1|LEU8|LNHR|LSEL|LYAM1|PLNHR|TQ1    selectin L

    don't seem to be in the data.frame:

        > grep("SELL",genes$GeneSymbol)
        integer(0)

    It turns out that

        genes = read.delim(
            genes_file,
            header=FALSE,
            na.strings="-",
            fill=TRUE,
            col.names=c("GeneSymbol","synonyms","description")
        )

    works just fine. Why does read.delim work when read.table does not? If it's of use, you can recreate genes.txt using the following commands, which you should run from a command line:

        curl -O ftp://ftp.ncbi.nlm.nih.gov/gene/DATA/gene_info.gz
        gzip -cd gene_info.gz | awk -F'\t' '$1==9606 {print $3 "\t" $5 "\t" $9}' > genes.txt

    Be warned, though, that gene_info.gz is 101MB-ish.

    Read the article

  • Dynamic scoping in Clojure?

    - by j-g-faustus
    Hi, I'm looking for an idiomatic way to get dynamically scoped variables in Clojure (or a similar effect) for use in templates and such. Here is an example problem using a lookup table to translate tag attributes from some non-HTML format to HTML, where the table needs access to a set of variables supplied from elsewhere:

        (def *attr-table*
          ; Key: [attr-key tag-name] or [boolean-function]
          ; Value: [attr-key attr-value] (empty array to ignore)
          ; Context: Variables "tagname", "akey", "aval"
          '(
            ; translate :LINK attribute in <a> to :href
            [:LINK "a"] [:href aval]
            ; translate :LINK attribute in <img> to :src
            [:LINK "img"] [:src aval]
            ; throw exception if :LINK attribute in any other tag
            [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
            ; ... more rules
            ; ignore string keys, used for internal bookkeeping
            [(string? akey)] [] )) ; ignore

    I want to be able to evaluate the rules (left hand side) as well as the result (right hand side), and need some way to put the variables in scope at the location where the table is evaluated. I also want to keep the lookup and evaluation logic independent of any particular table or set of variables. I suppose there are similar issues involved in templates (for example for dynamic HTML), where you don't want to rewrite the template processing logic every time someone puts a new variable in a template. Here is one approach using global variables and bindings. I have included some logic for the table lookup:

        ;; Generic code, works with any table on the same format.
        (defn rule-match? [rule-val test-val]
          "true if a single rule matches a single argument value"
          (cond
            (not (coll? rule-val)) (= rule-val test-val) ; plain value
            (list? rule-val) (eval rule-val)             ; function call
            :else false ))

        (defn rule-lookup [test-val rule-table]
          "looks up rule match for test-val. Returns result or nil."
          (loop [rules (partition 2 rule-table)]
            (when-not (empty? rules)
              (let [[select result] (first rules)]
                (if (every? #(boolean %) (map rule-match? select test-val))
                  (eval result) ; evaluate and return result
                  (recur (rest rules)) )))))

        ;; Code specific to *attr-table*
        (def tagname) ; need these globals for the binding in html-attr
        (def akey)
        (def aval)

        (defn html-attr [tagname h-attr]
          "converts to html attributes"
          (apply hash-map
                 (flatten
                  (map (fn [[k v :as kv]]
                         (binding [tagname tagname akey k aval v]
                           (or (rule-lookup [k tagname] *attr-table*) kv)))
                       h-attr ))))

        (defn test-attr []
          "test conversion"
          (prn "a" (html-attr "a" {:LINK "www.google.com"
                                   "internal" 42
                                   :title "A link" }))
          (prn "img" (html-attr "img" {:LINK "logo.png" })))

        user=> (test-attr)
        "a" {:href "www.google.com", :title "A link"}
        "img" {:src "logo.png"}

    This is nice in that the lookup logic is independent of the table, so it can be reused with other tables and different variables. (Plus, of course, the general table approach is about a quarter of the size of the code I had when I did the translations "by hand" in a giant cond.) It is not so nice in that I need to declare every variable as a global for the binding to work. Here is another approach using a "semi-macro", a function with a syntax-quoted return value, that doesn't need globals:

        (defn attr-table [tagname akey aval]
          `(
            [:LINK "a"]   [:href ~aval]
            [:LINK "img"] [:src ~aval]
            [:LINK] (throw (RuntimeException. (str "No match for " tagname)))
            ; ... more rules
            [(string? ~akey)] [] ))

    Only a couple of changes are needed to the rest of the code. In rule-match?, the function call is no longer a list when syntax-quoted:

        - (list? rule-val) (eval rule-val)
        + (seq? rule-val) (eval rule-val)

    And in html-attr:

        - (binding [tagname tagname akey k aval v]
        -   (or (rule-lookup [k tagname] *attr-table*) kv)))
        + (or (rule-lookup [k tagname] (attr-table tagname k v)) kv)))

    And we get the same result without globals. (And without dynamic scoping.) Are there other alternatives for passing along sets of variable bindings declared elsewhere, without the globals required by Clojure's binding? Is there an idiomatic way of doing it, like Ruby's binding or JavaScript's function.apply(context)?

    Read the article

  • Windows Mobile : How to bind dropdown's selectedvalue to a column in table A and the list data to a

    - by Rob
    Hi, I am trying to learn the basics of Windows Mobile development against SQL CE and have come across a basic problem. I have two tables. One called Customers that stores customer info and has an identity column called ID as the primary key. The other table is called Orders which has a column called CustomerID (the FK constraint is present). I have added a DataSet to the project that contains both tables and have autogenerated the edit/view forms. This has produced a text control for the CustomerID column in the Order table for the new/edit form and I deleted it and replaced it with a dropdown list. Then, using the 'Advanced' databinding options (in Properties) I set the datasource of the list to the Customers table setting the value to the ID field and the text to the CustomerName field. I then set the SelectedValue of the list box to the CustomerID field of the Orders dataset. So far so good. When I run the app in the emulator and view the 'New' form for Orders the Customer dropdown is indeed populated with a list of customer names and I can select one and happily create a new order successfully. This is confirmed when I see the order appear in the Orders Grid form. However, when I then click on the order in the grid and then select 'Edit' the order loads but the dropdown always shows the first customer in the list and doesn't seem to bind the SelectedValue to the Orders dataset CustomerID field. Now I am an ASP.NET guy and normally hand craft the DAL and it's binding to the UI so I'm not entirely sure where to look to investigate what is going wrong here as this is all generated code. I am sure it is something very trivial but any pointers would be appreciated. My gut feeling is that the SelectedValue and the Customers.CustomerID values do not match for some reason? Many thanks, Rob.

    Read the article

  • GPTsync mismatch problem

    - by user86762
    I have a hybrid disk. After trying to copy some files from another disk to this one, I lost my OS X and Ubuntu boot capability. I ran gptsync and got:

        Current GPT partition table:
        #      Start LBA      End LBA     Type
        1             34         1987     BIOS Boot Partition
        2           1988   1029662719     Basic Data
        3     1029662720   2108995583     Basic Data
        4     2108995584   2109405183     EFI System (FAT)
        5     2109405184   2517004287     Mac OS X HFS+
        6     2517266432   2667417599     Mac OS X HFS+
        7     2667417600   3900229631     Basic Data
        8     3900230504   3907029118     Linux Swap

        Current MBR partition table:
        # A    Start LBA      End LBA     Type
        1              1   3907029167     ee  EFI Protective

        Status: MBR table must be updated.

        Proposed new MBR partition table:
        # A    Start LBA      End LBA     Type
        1              1           33     ee  EFI Protective
        2             34         1987     da  Non-FS data
        3           1988   1029662719     83  Linux
        4 *   1029662720   2108995583     07  NTFS/HPFS

        May I update the MBR as printed above? [y/N]

    Clearly the MBR table is damaged or mismatched, but it does not reflect the correct GPT table partitions at all. How do I get the MBR repaired to match the GPT table (up to the 4-partition limit of MBR, of course)? The question is simply: do I blindly say Yes to gptsync's suggestion? It looks sort of OK, but not exactly so... Advice on interpreting the above output to get my disk usable would be greatly appreciated. Thank you!

    Read the article

  • basic SQL atomicity "UPDATE ... SET .. WHERE ..."

    - by elgcom
    I have a rather basic and general question about the atomicity of an "UPDATE ... SET ... WHERE ..." statement. Suppose we have a table (without extra constraints):

        +----+------+
        | id | name |
        +----+------+
        |  1 | a    |
        +----+------+

    Now I execute the following four statements "at the same time" (concurrently):

        UPDATE table SET name='b1' WHERE name='a'
        UPDATE table SET name='b2' WHERE name='a'
        UPDATE table SET name='b3' WHERE name='a'
        UPDATE table SET name='b4' WHERE name='a'

    Will only one UPDATE statement actually update the table? Or is it possible that more than one UPDATE statement really updates the table? Do I need an extra transaction or lock to make sure only one UPDATE writes its value into the table? Thanks.
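    A minimal way to observe the behaviour, assuming MySQL (ROW_COUNT() reports how many rows the previous statement actually changed; exactly one of the four concurrent sessions should see 1):

        UPDATE `table` SET name='b1' WHERE name='a';
        SELECT ROW_COUNT();  -- 1 in exactly one session, 0 in the others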

    Read the article

  • Will SQL Server Partitioning increase performance without changing filegroups

    - by Tom
    Scenario: I have a 10 million row table. I partition it into 10 partitions, which results in 1 million rows per partition, but I do not do anything else (like move the partitions to different filegroups or spindles). Will I see a performance increase? Is this in effect like creating 10 smaller tables? If I have queries that perform key lookups or scans, will performance increase as if they were operating against a much smaller table? I'm trying to understand how partitioning is different from just having a well-indexed table, and where it can be used to improve performance. Would a better scenario be to move the old data (using partition switching) out of the primary table to a read-only archive table? Is having a table with a 1 million row partition and a 9 million row partition analogous (performance-wise) to moving the 9 million rows to another table and leaving only 1 million rows in the original table?
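    For concreteness, a sketch of the kind of setup being asked about, assuming SQL Server and a range partition on an int key; the names and boundary values are made up, and every partition stays on the PRIMARY filegroup:

        -- ten ~1M-row partitions over keys 1..10,000,000, all on one filegroup
        CREATE PARTITION FUNCTION pfTenWay (int)
            AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000, 4000000,
                                       5000000, 6000000, 7000000, 8000000, 9000000);
        CREATE PARTITION SCHEME psTenWay
            AS PARTITION pfTenWay ALL TO ([PRIMARY]);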

    Read the article

  • Simple OpenGL program major slow down at high resolution

    - by Grieverheart
    I have created a small OpenGL 3.3 (Core) program using freeglut. The whole geometry is two boxes and one plane with some textures. I can move around like in an FPS, and that's it. The problem is that I see a big slowdown in fps when I make my window large (i.e. above 1920x1080). I have monitored GPU usage in full-screen, and it shows a GPU load of nearly 100% and a Memory Controller load of ~85%. At 600x600, these numbers are at about 45%; my CPU is also at full load. I use deferred rendering at the moment, but even with forward rendering the slowdown was nearly as severe. I can't imagine my GPU is not powerful enough for something this simple when I play many games at 1080p (I have a GeForce GT 120M, btw). Below are my shaders.

    First pass, vertex shader:

        #version 330 core

        uniform mat4 ModelViewMatrix;
        uniform mat3 NormalMatrix;
        uniform mat4 MVPMatrix;
        uniform float scale;

        layout(location = 0) in vec3 in_Position;
        layout(location = 1) in vec3 in_Normal;
        layout(location = 2) in vec2 in_TexCoord;

        smooth out vec3 pass_Normal;
        smooth out vec3 pass_Position;
        smooth out vec2 TexCoord;

        void main(void){
            pass_Position = (ModelViewMatrix * vec4(scale * in_Position, 1.0)).xyz;
            pass_Normal = NormalMatrix * in_Normal;
            TexCoord = in_TexCoord;
            gl_Position = MVPMatrix * vec4(scale * in_Position, 1.0);
        }

    First pass, fragment shader:

        #version 330 core

        uniform sampler2D inSampler;

        smooth in vec3 pass_Normal;
        smooth in vec3 pass_Position;
        smooth in vec2 TexCoord;

        layout(location = 0) out vec3 outPosition;
        layout(location = 1) out vec3 outDiffuse;
        layout(location = 2) out vec3 outNormal;

        void main(void){
            outPosition = pass_Position;
            outDiffuse = texture(inSampler, TexCoord).xyz;
            outNormal = pass_Normal;
        }

    Second pass, vertex shader:

        #version 330 core

        uniform float scale;

        layout(location = 0) in vec3 in_Position;

        void main(void){
            gl_Position = mat4(1.0) * vec4(scale * in_Position, 1.0);
        }

    Second pass, fragment shader:

        #version 330 core

        struct Light{
            vec3 direction;
        };

        uniform ivec2 ScreenSize;
        uniform Light light;
        uniform sampler2D PositionMap;
        uniform sampler2D ColorMap;
        uniform sampler2D NormalMap;

        out vec4 out_Color;

        vec2 CalcTexCoord(void){
            return gl_FragCoord.xy / ScreenSize;
        }

        vec4 CalcLight(vec3 position, vec3 normal){
            vec4 DiffuseColor = vec4(0.0);
            vec4 SpecularColor = vec4(0.0);
            vec3 light_Direction = -normalize(light.direction);
            float diffuse = max(0.0, dot(normal, light_Direction));
            if(diffuse > 0.0){
                DiffuseColor = diffuse * vec4(1.0);
                vec3 camera_Direction = normalize(-position);
                vec3 half_vector = normalize(camera_Direction + light_Direction);
                float specular = max(0.0, dot(normal, half_vector));
                float fspecular = pow(specular, 128.0);
                SpecularColor = fspecular * vec4(1.0);
            }
            return DiffuseColor + SpecularColor + vec4(0.1);
        }

        void main(void){
            vec2 TexCoord = CalcTexCoord();
            vec3 Position = texture(PositionMap, TexCoord).xyz;
            vec3 Color = texture(ColorMap, TexCoord).xyz;
            vec3 Normal = normalize(texture(NormalMap, TexCoord).xyz);
            out_Color = vec4(Color, 1.0) * CalcLight(Position, Normal);
        }

    Is it normal for the GPU to be used that much under the described circumstances? Is it due to poor performance of freeglut? I understand that the problem could be specific to my code, but I can't paste the whole code here; if you need more info, please tell me.

    Read the article

  • Problem using Embarcadero ER/Studio with Postgres (With Serial PK)

    - by Paul
    Hello... I created a table using ER/Studio 8.0.3. The table has a serial PK (SERIAL/INTEGER in ER/Studio), but the physical model that ER/Studio generated converts the SERIAL to INTEGER, and the generated table in the database has an integer PK without the auto-increment functionality... Any idea?

    The generated table:

        CREATE TABLE test
        (
            id integer NOT NULL
        )

    It should be:

        CREATE TABLE test
        (
            id serial NOT NULL
        )
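    A sketch of a manual workaround, assuming PostgreSQL, where SERIAL is just shorthand for an owned sequence plus a column default:

        CREATE SEQUENCE test_id_seq OWNED BY test.id;
        ALTER TABLE test ALTER COLUMN id SET DEFAULT nextval('test_id_seq');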

    Read the article

  • Updating multiple tables with LinqToSql in one unit of work

    - by zsharp
        Table 1: int ID-a (PK)
        Table 2: int ID-a (PK), int ID-b (PK)
        Table 3: int ID-b (PK), string C

    I have the data to insert into Table 1, but I do not have ID-a, which is autogenerated. I have many strings C to insert into Table 3. I am trying to insert a row into Table 1, then get the ID-a to insert into Table 2 along with the ID-b that is auto-generated in Table 3 when I submit each string C, all in one submission to the db. Right now I am calling dc.SubmitChanges twice in the same call. Is it efficient to have to submit changes twice on the same DataContext, or can this be combined further?

    Read the article

  • Windows 7/Ubuntu 10.10 Dual-Triple Boot Partitioning Recommendation for HP Laptop OEM

    - by Denja
    Hi Linux community, I find myself struggling with the ever slow and buggy Windows OS once again. It's time for me to go the Ubuntu/Linux way for a better and faster operating system. As a computer technician I want to learn and use both systems, but also possibly introduce new users to more affordable Linux-based systems. For now, I'm in the process of creating dual-boot or even triple-boot layouts on my laptop machine.

    Here's the layout in use now:

        * (C:) Windows 7 system partition, NTFS - 284.89GB (Primary, Boot, Pagefile, Dump)
        * HP_TOOLS system partition, FAT32 - 99MB (Primary)
        * (D:) RECOVERY partition, NTFS - 12.90GB (Primary)
        * SYSTEM partition, NTFS - 199MB (Primary)

    Here's the layout I want to make:

        * (C:) Windows 7 system partition, NTFS - 60GB (Primary) (sda1)
        * (D:) Windows data partition (user files), NTFS - 60GB (Extended or Primary) (sda2); want to share with Linux
        * Linux root, ext4 - 10GB (Primary) (sda3)
        * Linux swap, swap - RAM size, 3GB (sda4)
        * Linux home, ext4 - 164.9GB (Extended) (sda5)

    Question 1: Based on my layout, what is your suggestion for a triple-boot layout with an additional Linux OS (like Puppy)? Thank you in advance for your advice and suggestions.

    Read the article

  • Auto Increment feature of SQL Server

    - by Rahul Tripathi
    I have created a table named ABC. It has three columns; the column Number_pk (int) is the primary key of my table, and I have turned the auto-increment feature on for that column. Now I have deleted two rows from the table, say Number_pk=5 and Number_pk=6. If I then enter two new rows into this table, the two new Number_pk values start from 7 and 8. My question is: what is the logic behind this, since I have deleted those two rows from the table? I know the simple answer is that I have set auto-increment on for the primary key of my table, but I want to know whether there is any way I can insert the two new entries starting from the last Number_pk without changing the design of my table. And how does SQL Server manage this record, since I have deleted the rows from the database?
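    For what it's worth, a sketch of one way to reuse the freed values without redesigning the table, assuming SQL Server's DBCC CHECKIDENT and that 4 is the highest Number_pk still in the table:

        -- reseed: the next identity value handed out will be 4 + 1 = 5
        DBCC CHECKIDENT ('ABC', RESEED, 4);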

    Read the article

  • I am not able to drop a foreign key in MySQL (Error 150). Please help

    - by Shantanu Gupta
    I am trying to create a foreign key on my table, but when I execute my query it shows me error 150:

        Error Code : 1005
        Can't create table '.\vts#sql-6ec_1.frm' (errno: 150)
        (0 ms taken)

    My query to create the foreign key is:

        alter table `vts`.`tblguardian`
          add constraint `FK_tblguardian`
          FOREIGN KEY (`GuardianPickPointId`)
          REFERENCES `tblpickpoint` (`PickPointId`)

    EDIT: Now I am trying to drop this constraint, but it fails again and shows me the same error as when I was trying to create the foreign key:

        alter table `vts`.`tblguardian` drop index `FK_tblguardian`

    The primary key table:

        CREATE TABLE `tblpickpoint` (
          `PickPointId` int(4) NOT NULL auto_increment,
          `PickPointName` varchar(500) default NULL,
          `PickPointLabel` varchar(500) default NULL,
          `PickPointLatLong` varchar(100) NOT NULL,
          PRIMARY KEY (`PickPointId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1 CHECKSUM=1 DELAY_KEY_WRITE=1 ROW_FORMAT=DYNAMIC

    The foreign key table:

        CREATE TABLE `tblguardian` (
          `GuardianId` int(4) NOT NULL auto_increment,
          `GuardianName` varchar(500) default NULL,
          `GuardianAddress` varchar(500) default NULL,
          `GuardianMobilePrimary` varchar(15) NOT NULL,
          `GuardianMobileSecondary` varchar(15) default NULL,
          `GuardianPickPointId` int(4) default NULL,
          PRIMARY KEY (`GuardianId`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
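    Note that in MySQL a foreign key constraint and its supporting index are dropped with different clauses; a sketch of the usual sequence, assuming the constraint was actually created:

        ALTER TABLE `vts`.`tblguardian` DROP FOREIGN KEY `FK_tblguardian`;
        -- then, if a leftover index with the same name remains:
        ALTER TABLE `vts`.`tblguardian` DROP INDEX `FK_tblguardian`;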

    Read the article

  • How to get database table header information into a CSV file

    - by Rachel
    I am trying to connect to the database, get the current state of a table, and write that information into a CSV file. With the piece of code below I am able to get the data into the CSV file, but I am not able to get the header information from the database table into the CSV file. So my question is: how can I get the database table header information into a CSV file?

        $config['database'] = 'sakila';
        $config['host'] = 'localhost';
        $config['username'] = 'root';
        $config['password'] = '';

        $d = new PDO('mysql:dbname='.$config['database'].';host='.$config['host'],
                     $config['username'], $config['password']);

        $query = "SELECT * FROM actor";
        $stmt = $d->prepare($query);

        // Execute the statement
        $stmt->execute();

        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        $data = fopen('file.csv', 'w');
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    Header information meaning:

        Vehicle   Build   Model
        car       2009    Toyota
        jeep      2007    Mahindra

    So the header information for this would be: Vehicle, Build, Model. Any guidance would be highly appreciated.
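    One possible source for the header row is the database itself rather than the result set; a sketch, assuming MySQL (either statement returns one row per column, with the column name ready to be written out first with fputcsv):

        SHOW COLUMNS FROM actor;

        SELECT COLUMN_NAME
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'sakila' AND TABLE_NAME = 'actor';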

    Read the article

  • javascript innerHTML without childNodes?

    - by John Doe
    Hi all, I'm having a Firefox issue where I don't see the wood for the trees. Using Ajax I get HTML source from a PHP script. This HTML code contains a <tbody> tag, and within the tbody some more tr/td's. Now I want to append this tbody plain code to an existing table. But there is one more condition: the table is part of a form and thus contains checkboxes and dropdowns. If I used

        table.innerHTML += content;

    Firefox would reload the table and reset all elements within it, which isn't very user-friendly, as I'd like to keep what I have. What I have is this:

        // content equals transport.responseText from the Ajax request
        function appendToTable(content){
            var wrapper = document.createElement('table');
            wrapper.innerHTML = content;
            wrapper.setAttribute('id', 'wrappid');
            wrapper.style.display = 'none';
            document.body.appendChild(wrapper);

            // get the parsed element - well, it should be
            wrapper = document.getElementById('wrappid');
            // the destination table
            table = document.getElementById('tableid');

            // firebug prints a table element - seems right
            console.log(wrapper);
            // firebug prints the content I've inserted - seems right
            console.log(wrapper.innerHTML);

            var i = 0;
            // childNodes is iterated 2 times, both are textnodes;
            // the second one seems to be a simple '\n'
            for(i=0; i<wrapper.childNodes.length; i++){
                // firebug prints 'undefined' - wth!??
                console.log(wrapper.childNodes[i].innerHTML);
                // firebug prints a textnode element - <TextNode textContent=" ">
                console.log(wrapper.childNodes[i]);
                table.appendChild(wrapper.childNodes[i]);
            }
            // WEIRD: firebug has no problems showing the 'wrappid' table and its
            // contents in the HTML view - which suggests the elements I want are
            // there, and not text elements
        }

    Either this is so trivial that I don't see the problem, or it's a corner case, and I hope someone here has enough experience to give advice on this. Can anyone imagine why I get textnodes and not the finally parsed DOM elements I expect? BTW: I can't give a full example because I can't write a smaller non-working piece of code; it's one of those bugs that occur in the wild and not in my test set. Thx all.

    Read the article

  • ASP.net application advice needed

    - by c11ada
    Hey all, I'm building an ASP.NET application which is connected to a database. The database design is as follows:

        **Users Table**
        UserID `(PK) autonumber`
        Username

        **Question Table**
        QuestionID `(PK) autonumber`
        QuestionNumber
        QuestionText

        **Questionnaire Table**
        QuestionnaireID `(PK) autonumber`
        UserID `(FK) Users Table`
        Date

        **Feedback Table**
        FeedbackID `(PK) autonumber`
        QuestionnaireID `(FK) Questionnaire Table`
        QuestionID `(FK) Question Table`
        Answer
        Comment

    Please can someone advise me on how I should go about inserting data into the Questionnaire table and the Feedback table? I know that the Questionnaire table needs to be updated first, but the QuestionnaireID is linked to the Feedback table, so how do I go about updating both tables?
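    A common pattern for this parent-then-child insert, sketched in T-SQL on the assumption that the backend is SQL Server and "autonumber" is an IDENTITY column (the parameter names are illustrative):

        -- insert the parent row first, then capture its generated key
        INSERT INTO Questionnaire (UserID, [Date]) VALUES (@UserID, GETDATE());
        DECLARE @QuestionnaireID int;
        SET @QuestionnaireID = SCOPE_IDENTITY();

        -- the child rows can now reference the captured QuestionnaireID
        INSERT INTO Feedback (QuestionnaireID, QuestionID, Answer, Comment)
        VALUES (@QuestionnaireID, @QuestionID, @Answer, @Comment);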

    Read the article

  • Load an html table from a mysql database when an onChange event is triggered from a select tag

    - by Crew Peace
    So here's the deal. I have an HTML table that I want to populate. Specifically, the first row is the one that is filled with elements from a MySQL database. To be exact, the table is a questionnaire about mobile phones. The first row is the header, where the cellphone names are loaded from the database. There is also a select tag that has company names as options in it. I need to trigger an onChange event on the select tag to reload the page and refill the first row with the names of phones from the company that is currently selected in the dropdown list. This is roughly what my select looks like:

        <select name="select" class="companies" onChange="reloadPageWithNewElements()">
        <?php
            $sql = "SELECT cname FROM companies;";
            $rs = mysql_query($sql);
            while($row = mysql_fetch_array($rs)) {
                echo "<option value=\"".$row['cname']."\">".$row['cname']."</option>\n";
            }
        ?>
        </select>

    So... is there a way to refresh this page on onChange and pass the selected value to the same page again, assign it to a new PHP variable, and then do the query I need to fill my table?

        <?php
            //$mobileCompanies = $_GET["selectedValue"];
            $sql = "SELECT mname FROM ".$mobileCompanies.";";
            $rs = mysql_query($sql);
            while ($row = mysql_fetch_array($rs)) {
                echo "<td><div class=\"q1\">".$row['mname']."</div></td>";
            }
        ?>

    Something like this. (The reloadPageWithNewElements() and selectedValue are just an idea for now.)

    Read the article
