Search Results

Search found 17924 results on 717 pages for 'z order'.


  • Multiplying a field value with a retrieved value in jQuery

    - by Efe
    I pull product price from another page. Then I multiply it with quantity entered in the quantity input field. The problem is, if I forget to enter quantity value before I pull product price data or if I change the quantity field later, the final total price is not updated. I hope I clearly explained it. javascript: $('#AddProduct').click(function() { var i = 0; var adding = $(+(i++)+'<div class="row'+(i)+'"><div class="column width50"><input type="text" id="PRODUCTNAME" name="PRODUCTNAME'+(i)+'" value="" class="width98" /><input type="hidden" class="PRODUCTID" name="PRODUCTID" /><input type="hidden" class="UNITPRICE" name="UNITPRICE'+(i)+'" /><small>Search Products</small></div><div class="column width20"><input type="text" class="UNITQUANTITY" name="UNITQUANTITY'+(i)+'" value="" class="width98" /><small>Quantity</small></div><div class="column width30"><span class="prices">Unit Price:<span class="UNITPRICE"></span><br />Total Price:<span class="TOTALPRICE"></span><br /><a href="#" id="RemoveProduct(".row'+(i)+'");">Remove</a></span></div>'); $('#OrderProducts').append(adding); adding.find("#PRODUCTNAME").autocomplete("orders.cs.asp?Process=ListProducts", { selectFirst: false }).result(function(event, data, formatted) { if (data) { adding.find(".UNITPRICE").html(data[1]); adding.find(".PRODUCTID").val(data[2]); adding.find(".TOTALPRICE").html(data[1] * $('.UNITQUANTITY').val()); } }); return false; }); $('#RemoveProduct').click(function() { $().remove(); return false; }); my html: <fieldset> <h2>Order Items</h2> <div id="OrderProducts"> <a href="#" id="AddProduct"><img src="icons/add.png" alt="Add" /></a> </div> </fieldset> edit Now I totaly messed it up. Removing a row is not working anymore as well... Test link: http://refinethetaste.com/html/cp/?Section=orders&Process=AddOrder#
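    One way to keep the total current is to recompute it from a change handler on the quantity field itself, rather than only inside the autocomplete result callback (which also reads $('.UNITQUANTITY') globally, so it picks up the first quantity field on the page instead of the one in the current row). A minimal sketch, assuming it sits inside the AddProduct handler where the adding element is still in scope:

        // Hedged sketch: recompute this row's total whenever its quantity changes.
        adding.find(".UNITQUANTITY").bind("keyup change", function () {
            var price = parseFloat(adding.find("span.UNITPRICE").text()) || 0;
            var qty   = parseFloat($(this).val()) || 0;
            adding.find(".TOTALPRICE").html((price * qty).toFixed(2));
        });

    The span.UNITPRICE selector assumes the price span in the row markup above is the element the autocomplete callback fills; if the hidden UNITPRICE input is the authoritative value, read its .val() instead.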

    Read the article

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application. One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that after a while of running the memory allocated and freed by the heap was never returned and reported by the ullAvailVirtual number. After hours of intensive working, the ullAvailVirtual number no longer would accurately report the amount of memory available. However, this method proved not ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing, escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that". So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space adding up all regions that weren't MEM_FREE and returned a value for GetMappedFileNameA(). My thinking was that the PageFileUsage was essentially 'private bytes' so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sorta) work, at least it doesn't cause crashes like the heap walker method. However, when both methods are enabled, the values are not the same. So one of the methods is wrong. So, StackOverflow world...how would you implement this? which method is more promising, or do you have a third, better method? should I go back to the original method, and further debug the odd errors? should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that it is getting 'dangerous', and they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color based system (red, yellow, green, which I could base on more factors than just a single number)
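    For comparison, a rough sketch of a third option built only on documented counters: take the process's private bytes from GetProcessMemoryInfo and divide by the size of the user-mode virtual address space from GlobalMemoryStatusEx, avoiding both HeapWalk() and the VirtualQueryEx walk. This is only an approximation of "percentage used", not the author's method, and PrivateUsage requires the PROCESS_MEMORY_COUNTERS_EX structure (XP SP2 / Server 2003 or later, link against psapi.lib):

        #include <windows.h>
        #include <psapi.h>

        // Rough percentage of the process's user-mode address space held as private bytes.
        double UsedMemoryPercent()
        {
            PROCESS_MEMORY_COUNTERS_EX pmc = {0};
            GetProcessMemoryInfo(GetCurrentProcess(),
                                 reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                                 sizeof(pmc));

            MEMORYSTATUSEX ms = {0};
            ms.dwLength = sizeof(ms);
            GlobalMemoryStatusEx(&ms);

            // ullTotalVirtual is the total user-mode address space (~2 GB for 32-bit).
            return 100.0 * static_cast<double>(pmc.PrivateUsage)
                         / static_cast<double>(ms.ullTotalVirtual);
        }

    A red/yellow/green indicator could then be a pair of thresholds on this value (say 70% and 90%), which sidesteps having to explain an exact number to the user.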

    Read the article

  • R glm standard error estimate differences to SAS PROC GENMOD

    - by Michelle
    I am converting a SAS PROC GENMOD example into R, using glm in R. The SAS code was: proc genmod data=data0 namelen=30; model boxcoxy=boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 + RACE1 + RACE3 + WEEKEND + SEQ/dist=normal; FREQ REPLICATE_VAR; run; My R code is: parmsg2 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 + RACE1 + RACE3 + WEEKEND + SEQ , data=data0, family=gaussian, weights = REPLICATE_VAR) When I use summary(parmsg2) I get the same coefficient estimates as in SAS, but my standard errors are wildly different. The summary output from SAS is: Name df Estimate StdErr LowerWaldCL UpperWaldCL ChiSq ProbChiSq Intercept 1 6.5007436 .00078884 6.4991975 6.5022897 67911982 0 agegrp4 1 .64607262 .00105425 .64400633 .64813891 375556.79 0 agegrp5 1 .4191395 .00089722 .41738099 .42089802 218233.76 0 agegrp6 1 -.22518765 .00083118 -.22681672 -.22355857 73401.113 0 agegrp7 1 -1.7445189 .00087569 -1.7462352 -1.7428026 3968762.2 0 agegrp8 1 -2.2908855 .00109766 -2.2930369 -2.2887342 4355849.4 0 race1 1 -.13454883 .00080672 -.13612997 -.13296769 27817.29 0 race3 1 -.20607036 .00070966 -.20746127 -.20467944 84319.131 0 weekend 1 .0327884 .00044731 .0319117 .03366511 5373.1931 0 seq2 1 -.47509583 .00047337 -.47602363 -.47416804 1007291.3 0 Scale 1 2.9328613 .00015586 2.9325559 2.9331668 -127 The summary output from R is: Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 6.50074 0.10354 62.785 < 2e-16 AGEGRP4 0.64607 0.13838 4.669 3.07e-06 AGEGRP5 0.41914 0.11776 3.559 0.000374 AGEGRP6 -0.22519 0.10910 -2.064 0.039031 AGEGRP7 -1.74452 0.11494 -15.178 < 2e-16 AGEGRP8 -2.29089 0.14407 -15.901 < 2e-16 RACE1 -0.13455 0.10589 -1.271 0.203865 RACE3 -0.20607 0.09315 -2.212 0.026967 WEEKEND 0.03279 0.05871 0.558 0.576535 SEQ -0.47510 0.06213 -7.646 2.25e-14 The importance of the difference in the standard errors is that the SAS coefficients are all statistically significant, but the RACE1 and WEEKEND coefficients in the R output are not. I have found a formula to calculate the Wald confidence intervals in R, but this is pointless given the difference in the standard errors, as I will not get the same results. Apparently SAS uses a ridge-stabilized Newton-Raphson algorithm for its estimates, which are ML. The information I read about the glm function in R is that the results should be equivalent to ML. What can I do to change my estimation procedure in R so that I get the equivalent coefficents and standard error estimates that were produced in SAS? To update, thanks to Spacedman's answer, I used weights because the data are from individuals in a dietary survey, and REPLICATE_VAR is a balanced repeated replication weight, that is an integer (and quite large, in the order of 1000s or 10000s). The website that describes the weight is here. I don't know why the FREQ rather than the WEIGHT command was used in SAS. I will now test by expanding the number of observations using REPLICATE_VAR and rerunning the analysis.
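    For reference, the follow-up the author describes (treating REPLICATE_VAR as a frequency count, which is what SAS's FREQ statement does, rather than as a precision weight, which is how glm's weights argument treats it for a gaussian family) might look like the sketch below; it is untested here and only illustrates the row-expansion idea:

        # Expand each row REPLICATE_VAR times so glm() sees frequencies, as FREQ does in SAS
        data0_expanded <- data0[rep(seq_len(nrow(data0)), times = data0$REPLICATE_VAR), ]
        parmsg3 <- glm(boxcoxxy ~ AGEGRP4 + AGEGRP5 + AGEGRP6 + AGEGRP7 + AGEGRP8 +
                         RACE1 + RACE3 + WEEKEND + SEQ,
                       data = data0_expanded, family = gaussian)
        summary(parmsg3)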

    Read the article

  • How do I add a column that displays the number of distinct rows to this query?

    - by Fake Code Monkey Rashid
    Hello good people! I don't know how to ask my question clearly so I'll just show you the money. To start with, here's a sample table: CREATE TABLE sandbox ( id integer NOT NULL, callsign text NOT NULL, this text NOT NULL, that text NOT NULL, "timestamp" timestamp with time zone DEFAULT now() NOT NULL ); CREATE SEQUENCE sandbox_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1; ALTER SEQUENCE sandbox_id_seq OWNED BY sandbox.id; SELECT pg_catalog.setval('sandbox_id_seq', 14, true); ALTER TABLE sandbox ALTER COLUMN id SET DEFAULT nextval('sandbox_id_seq'::regclass); INSERT INTO sandbox VALUES (1, 'alpha', 'foo', 'qux', '2010-12-29 16:51:09.897579+00'); INSERT INTO sandbox VALUES (2, 'alpha', 'foo', 'qux', '2010-12-29 16:51:36.108867+00'); INSERT INTO sandbox VALUES (3, 'bravo', 'bar', 'quxx', '2010-12-29 16:52:36.370507+00'); INSERT INTO sandbox VALUES (4, 'bravo', 'foo', 'quxx', '2010-12-29 16:52:47.584663+00'); INSERT INTO sandbox VALUES (5, 'charlie', 'foo', 'corge', '2010-12-29 16:53:00.742356+00'); INSERT INTO sandbox VALUES (6, 'delta', 'foo', 'qux', '2010-12-29 16:53:10.884721+00'); INSERT INTO sandbox VALUES (7, 'alpha', 'foo', 'corge', '2010-12-29 16:53:21.242904+00'); INSERT INTO sandbox VALUES (8, 'alpha', 'bar', 'corge', '2010-12-29 16:54:33.318907+00'); INSERT INTO sandbox VALUES (9, 'alpha', 'baz', 'quxx', '2010-12-29 16:54:38.727095+00'); INSERT INTO sandbox VALUES (10, 'alpha', 'bar', 'qux', '2010-12-29 16:54:46.237294+00'); INSERT INTO sandbox VALUES (11, 'alpha', 'baz', 'qux', '2010-12-29 16:54:53.891606+00'); INSERT INTO sandbox VALUES (12, 'alpha', 'baz', 'corge', '2010-12-29 16:55:39.596076+00'); INSERT INTO sandbox VALUES (13, 'alpha', 'baz', 'corge', '2010-12-29 16:55:44.834019+00'); INSERT INTO sandbox VALUES (14, 'alpha', 'foo', 'qux', '2010-12-29 16:55:52.848792+00'); ALTER TABLE ONLY sandbox ADD CONSTRAINT sandbox_pkey PRIMARY KEY (id); Here's the current SQL query I have: SELECT * FROM ( SELECT DISTINCT ON (this, that) id, this, that, timestamp FROM sandbox WHERE callsign = 'alpha' AND CAST(timestamp AS date) = '2010-12-29' ) playground ORDER BY timestamp DESC This is the result it gives me: id this that timestamp ----------------------------------------------------- 14 foo qux 2010-12-29 16:55:52.848792+00 13 baz corge 2010-12-29 16:55:44.834019+00 11 baz qux 2010-12-29 16:54:53.891606+00 10 bar qux 2010-12-29 16:54:46.237294+00 9 baz quxx 2010-12-29 16:54:38.727095+00 8 bar corge 2010-12-29 16:54:33.318907+00 7 foo corge 2010-12-29 16:53:21.242904+00 This is what I want to see: id this that timestamp count ------------------------------------------------------------- 14 foo qux 2010-12-29 16:55:52.848792+00 3 13 baz corge 2010-12-29 16:55:44.834019+00 2 11 baz qux 2010-12-29 16:54:53.891606+00 1 10 bar qux 2010-12-29 16:54:46.237294+00 1 9 baz quxx 2010-12-29 16:54:38.727095+00 1 8 bar corge 2010-12-29 16:54:33.318907+00 1 7 foo corge 2010-12-29 16:53:21.242904+00 1 EDIT: I'm using PostgreSQL 9.0.* (if that helps any).
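    Since PostgreSQL 9.0 supports window functions, one direction worth trying (a sketch built on that assumption, not a verified answer) is to attach COUNT(*) OVER a partition on the same columns the DISTINCT ON uses, so each surviving row carries its group size:

        SELECT * FROM (
            SELECT DISTINCT ON (this, that)
                   id, this, that, "timestamp",
                   COUNT(*) OVER (PARTITION BY this, that) AS count
            FROM sandbox
            WHERE callsign = 'alpha'
              AND CAST("timestamp" AS date) = '2010-12-29'
            ORDER BY this, that, "timestamp" DESC
        ) playground
        ORDER BY "timestamp" DESC;

    The inner ORDER BY keeps the latest row of each (this, that) pair, which matches the ids shown in the desired output above.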

    Read the article

  • Apache: How can I see my localhost on 192.168.1.101 from 192.168.1.102?

    - by takpar
    Hi, I'm running Apache on Ubuntu. My IP address is 192.168.1.101 While http://localhost and http://192.168.1.101 work fine in my PC, I cannot access it from within my laptop using http://192.168.1.102 It's strange. I can ping 192.168.1.101 but I got "The connection has timed out." in browser. I'm using default apache config. so this is what my sites-available/default looks like: NameVirtualHost *:80 <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /home/www/public_html <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/www/public_html> Options Indexes FollowSymLinks MultiViews #AllowOverride None AllowOverride all Order allow,deny allow from all </Directory> /etc/apache2/posrts.conf NameVirtualHost *:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> my laptop runs Ubuntu as well. so I don't think this is a firewall issue. commands executed in Laptop (192.168.1.102): adp@adp-laptop:~$ ping 192.168.1.101 PING 192.168.1.101 (192.168.1.101) 56(84) bytes of data. 64 bytes from 192.168.1.101: icmp_seq=1 ttl=64 time=32.1 ms 64 bytes from 192.168.1.101: icmp_seq=2 ttl=64 time=54.8 ms 64 bytes from 192.168.1.101: icmp_seq=3 ttl=64 time=77.0 ms 64 bytes from 192.168.1.101: icmp_seq=4 ttl=64 time=100 ms ^C --- 192.168.1.101 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms adp@adp-laptop:~$ telnet 192.168.1.101 80 Trying 192.168.1.101... telnet: Unable to connect to remote host: Connection timed out commands executed in PC (192.168.1.101): adp@adp-desktop:~$ ps afx | grep http 12672 pts/4 S+ 0:00 | \_ grep --color=auto http adp@adp-desktop:~$ ping 192.168.1.102 PING 192.168.1.102 (192.168.1.102) 56(84) bytes of data. 64 bytes from 192.168.1.102: icmp_seq=1 ttl=64 time=32.1 ms 64 bytes from 192.168.1.102: icmp_seq=2 ttl=64 time=54.8 ms 64 bytes from 192.168.1.102: icmp_seq=3 ttl=64 time=77.0 ms 64 bytes from 192.168.1.102: icmp_seq=4 ttl=64 time=100 ms ^C --- 192.168.1.102 ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3003ms rtt min/avg/max/mdev = 32.193/66.193/100.717/25.463 ms adp@adp-desktop:~$ telnet 192.168.1.102 80 Trying 192.168.1.102... telnet: Unable to connect to remote host: Connection refused adp@adp-desktop:~$ telnet 192.168.1.102 Trying 192.168.1.102... telnet: Unable to connect to remote host: Connection refused What should i do?
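    Since ping works but port 80 times out from the laptop (a timeout, rather than "connection refused", usually points at packet filtering rather than a missing listener), and since ps shows nothing for "http" only because Debian/Ubuntu name the processes apache2, two quick checks on the PC are whether Apache is listening on all interfaces and whether a host firewall is dropping port 80. A sketch of those checks, assuming stock Ubuntu tooling:

        # Is Apache listening on 0.0.0.0:80, or only on 127.0.0.1:80?
        sudo netstat -tlnp | grep ':80'
        ps afx | grep apache2

        # Is any firewall rule dropping inbound traffic to port 80?
        sudo iptables -L -n -v
        sudo ufw status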

    Read the article

  • How to make this sub-sub-query work?

    - by Josh Weissbock
    I am trying to do this in one query. I asked a similar question a few days ago but my personal requirements have changed. I have a game type website where users can attend "classes". There are three tables in my DB. I am using MySQL. I have four tables: hl_classes (int id, int professor, varchar class, text description) hl_classes_lessons (int id, int class_id, varchar lessonTitle, varchar lexiconLink, text lessonData) hl_classes_answers (int id, int lesson_id, int student, text submit_answer, int percent) hl_classes stores all of the classes on the website. The lessons are the individual lessons for each class. A class can have infinite lessons. Each lesson is available in a specific term. hl_classes_terms stores a list of all the terms and the current term has the field active = '1'. When a user submits their answers to a lesson it is stored in hl_classes_answers. A user can only answer each lesson once. Lessons have to be answered sequentially. All users attend all "classes". What I am trying to do is grab the next lesson for each user to do in each class. When the users start they are in term 1. When they complete all 10 lessons in each class they move on to term 2. When they finish lesson 20 for each class they move on to term 3. Let's say we know the term the user is in by the PHP variable $term. So this is my query I am currently trying to massage out but it doesn't work. Specifically because of the hC.id is unknown in the WHERE clause SELECT hC.id, hC.class, (SELECT MIN(output.id) as nextLessonID FROM ( SELECT id, class_id FROM hl_classes_lessons hL WHERE hL.class_id = hC.id ORDER BY hL.id LIMIT $term,10 ) as output WHERE output.id NOT IN (SELECT lesson_id FROM hl_classes_answers WHERE student = $USER_ID)) as nextLessonID FROM hl_classes hC My logic behind this query is first to For each class; select all of the lessons in the term the current user is in. From this sort out the lessons the user has already done and grab the MINIMUM id of the lessons yet to be done. This will be the lesson the user has to do. I hope I have made my question clear enough.

    Read the article

  • Structs, strtok, segmentation fault

    - by FILIaS
    I'm trying to make a programme with structs and files.The following is just a part of my code(it;s not all). What i'm trying to do is: ask the user to write his command. eg. delete John eg. enter John James 5000 ipad purchase. The problem is that I want to split the command in order to save its 'args' for a struct element. That's why i used strtok. BUT I'm facing another problem in who to 'put' these on the struct. #include <stdio.h> #include <stdlib.h> #include <string.h> #define MAX 100 char command[1500]; struct catalogue { char short_name[50]; char surname[50]; signed int amount; char description[1000]; }*catalog[MAX]; int main ( int argc, char *argv[] ) { int i,n; char choice[3]; printf(">sort1: Print savings sorted by surname\n"); printf(">sort2: Print savings sorted by amount\n"); printf(">search+name:Print savings of each name searched\n"); printf(">delete+full_name+amount: Erase saving\n"); printf(">enter+full_name+amount+description: Enter saving \n"); printf(">quit: Update + EXIT program.\n"); printf("Choose your selection:\n>"); gets(command); //it save the whole command /*in choice it;s saved only the first 2 letters(needed for menu choice again)*/ strncpy(choice,command,2); choice[2]='\0'; char** args = (char**)malloc(strlen(command)*sizeof(char*)); memset(args, 0, sizeof(char*)*strlen(command)); char* curToken = strtok(command, " \t"); for (n = 0; curToken != NULL; ++n) { args[n] = strdup(curToken); curToken = strtok(NULL, " \t"); *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; catalog[n]->amount=atoi(args[3]); *catalog[n]->description=args[4]; } return 0; } I get a warning (warning: assignment makes integer from pointer without a cast) for the lines: *catalog[n]->short_name=*args[1]; *catalog[n]->surname=args[2]; *catalog[n]->description=args[4]; As a result, after running the program i get a Segmentation Fault... Any help? Any ideas?
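    Two things stand out in the loop above, noted here as a hedged sketch of a fix rather than the definitive one: catalog[n] is an uninitialized pointer (the global array holds pointers to struct catalogue, but nothing is ever allocated), and C strings cannot be assigned with = — they need strcpy/strncpy. Assuming args[1]..args[4] hold name, surname, amount and description once tokenizing has finished, the record could be filled in after the loop like this:

        /* Hedged sketch: allocate one record and copy the token strings into it. */
        if (n >= 5) {                                   /* command word + 4 arguments */
            struct catalogue *rec = malloc(sizeof *rec);
            if (rec != NULL) {
                strncpy(rec->short_name, args[1], sizeof rec->short_name - 1);
                rec->short_name[sizeof rec->short_name - 1] = '\0';
                strncpy(rec->surname, args[2], sizeof rec->surname - 1);
                rec->surname[sizeof rec->surname - 1] = '\0';
                rec->amount = atoi(args[3]);
                strncpy(rec->description, args[4], sizeof rec->description - 1);
                rec->description[sizeof rec->description - 1] = '\0';
                catalog[0] = rec;                       /* or the next free slot */
            }
        }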

    Read the article

  • A question about the basics of LINQ to SQL

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the easy of use and good performance. I used to think that when doing LINQ queries like from Customer in DB.Customers where Customer.Age > 30 select Customer LINQ gets all customers from the database ("SELECT * FROM Customers"), moves them to the Customers array and then makes a search in that Array using .NET methods. This is very inefficient, what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now after experiencing how actually fast LINQ to SQL is, I start to suspect that when doing that query I just wrote, LINQ somehow converts it to a SQL Query string SELECT * FROM Customers WHERE Age > 30 And only when necessary it will run the query. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build good optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in past 10 days. It's done with this simple query: from Book in DB.Books where (from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID).Contains(Book.ID) select Book The point is, I have to use the checking part in several queries and I decided to create an array with IDs of all popular books: var popularBooksIDs = from Sale in DB.Sales where Sale.SalesAmount >= 50 && Sale.DateOfSale >= DateTime.Now.AddDays(-10) select Sale.BookID; BUT when I try to do the query now: from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book It doesn't work! That's why I think that we can't use thins kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
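    On the "when is the query actually run" part, the behaviour being described is deferred execution: the query expression is translated to SQL and sent to the server only when the results are enumerated (DataContext.Log or a SQL trace can confirm the generated statement). A small illustrative sketch, with names matching the question:

        // Nothing hits the database yet: 'query' is only an expression (IQueryable<Customer>).
        var query = from customer in DB.Customers
                    where customer.Age > 30
                    select customer;

        // SQL is generated and executed here, when the query is enumerated...
        foreach (var customer in query) { /* ... */ }

        // ...or here, when it is forced to materialize.
        var list = query.ToList();   // SELECT ... FROM Customers WHERE Age > 30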

    Read the article

  • What Can I Do About One Of My Team Members (A Good Friend As Well) Who Lost His Passion?

    - by skyflyer
    It seems this question is not programming related, but there are a lot of similar questions, so please bear with me! By the way, I am a programmer and my team is in charge of a software project, and SO is the only place that has solved a lot of thorny troubles for me. THANK YOU GUYS! I joined my company with him years ago. At that time he was quite passionate about his job, which is front-end development. He gave us a lot of useful suggestions concerning his work, such as design, and I believed he was a smart guy. I believe he still is smart, by the way. A year later, however, he seemed to have lost his passion: fooling around every day, not caring about his work any more, and producing poor work. Even worse, he literally stopped learning new skills and honing his work-related skills. To me that is horrible; we have to keep abreast of new technology or we will be thrown out. Since we were just coworkers, I did not worry about it too much beyond mentioning my thoughts several times. But last month we assembled a new group and were assigned a very important project, and I am the team leader, sadly! My boss gave me a lot of support and has high expectations as well. I did a pretty good job before and I am very optimistic about our future. But as a team, if my team does not work hard, we are doomed to failure no matter how hard I work and push. In order to revive his passion, I tried a couple of approaches, like talking to him about my concerns and my boss's anger. I offered him a task which is quite new to him. I even persuaded my boss to give him a new incentive package. But all of this hit a wall: his reaction was simply that he did not care. Even worse, he did not want to talk about his situation. I want to be hard on him, but since we are friends and coworkers, I really cannot see that working. Even if it worked, I cannot change myself from friend and coworker into manager that quickly. As a novice in management, I am really overwhelmed! I do not want to get him fired; we are friends and I do not want to see him fired as my team member. What can I do? Thank you guys!

    Read the article

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access op

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the called by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine. On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map. The uncontroversial (?) code that sets up the map is this: /** Set up the map from the given filename and position */ protected void open() throws IOException { // Set up buffer, is this all the flexibility we'll need? channel = new FileInputStream(file).getChannel(); MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size()); map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes? map = map1; // assumes the host writing the files is little-endian (x86), ought to be configurable map.order(java.nio.ByteOrder.LITTLE_ENDIAN); map.position(position); } and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map. I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data. Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host is Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM, and are running with the same java heap settings. I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515 which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that was the longest it had gone before crashing! If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)
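    If the rewrite ends up being "without mmap" rather than "without nio", a hedged sketch of the shape it might take: stream the file through an ordinary heap ByteBuffer with FileChannel.read, which turns NFS read errors into checked IOExceptions instead of faults inside compiled code. Class and buffer size are illustrative only, and the record-parsing state machine from the existing parser is assumed to slot in where the comment sits:

        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.channels.FileChannel;

        class StreamingParser {
            // Read records through a plain heap buffer instead of a MappedByteBuffer.
            void parse(File file) throws IOException {
                FileChannel channel = new FileInputStream(file).getChannel();
                ByteBuffer buf = ByteBuffer.allocate(1 << 20);   // 1 MB chunks
                buf.order(ByteOrder.LITTLE_ENDIAN);
                try {
                    while (channel.read(buf) != -1) {
                        buf.flip();
                        // hand complete records in buf to the nextRecord() state machine
                        buf.compact();                           // keep any partial record
                    }
                } finally {
                    channel.close();
                }
            }
        }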

    Read the article

  • Exception opening TAdoDataset: Arguments are of the wrong type, are out of acceptable range, or are

    - by Dave Falkner
    I've been trying to debug the following problem for several weeks now - this method is called from several places within the same datamodule, but this exception (from the subject line of this post) only occurs when integers for a certain purpose (pickup orders vs. orders that we ship through a carrier) are used - and don't ask me how the application can tell the difference between one integer's purpose and another! Furthermore, I cannot duplicate this issue on my machine - the error occurs on a warehouse machine but not my own development machine, even when working with the same production database. I have suspected an MDAC version conflict between the two machines, but have run a version checker and confirmed that both machines are running 2.8, and additionally have confirmed this by logging the TAdoDataset's .Version property at runtime. function TdmESShip.SecondaryID(const PrimaryID : Integer ): String; begin try with qESPackage2 do begin if Active then Close; LogMessage('-----------------------------------'); LogMessage('Version: ' + FConnection.Version); LogMessage('DB Info: ' + FConnection.Properties['Initial Catalog'].Value + ' ' + FConnection.Properties['Data Source'].Value); LogMessage('Setting the parameter.'); Parameters.ParamByName('ParameterName').Value := PrimaryID; LogMessage('Done setting the parameter.'); Open; Ninety-nine times out of 100 this logging code logs a successful operation as follows: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. Opened the dataset. But then whenever a "pickup" order is processed, this exception gets thrown whenever the dataset is opened: Version: 2.8 DB Info: (database name and instance) Setting the parameter. Done setting the parameter. GetESPackageID() threw an exception. Type: EOleException, Message: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another Error: Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another for packageID 10813711 I've tried eliminating the parameter and have built the commandtext for this dataset programmatically, suspecting that some part of the TParameter's configuration might be out of whack, but the same error occurs under the same circumstances. I've tried every combination of TParameter properties that I can think of - this is the millionth TParameter I've created for my millionth dataset, and I've never encountered this error. I've even created a second dataset from scratch and removed all references to the original dataset in case some property of the original dataset in the .dfm might be corrupted, but the same error occurs under the same circumstances. The commandtext for this dataset is a simple select ValueA from TableName where ValueB = @ParameterB I'm about ready to do something extreme, such as writing a web service to look these values up - it feels right now as though I could destroy my machine, rebuild it, rewrite this entire application from scratch, and the application would still know to throw an exception whenever I try to look up a secondary value from a primary value, but only for pickup orders, and only from the one machine in the warehouse, but I'm probably missing something simple. So, any help anyone could provide would be greatly appreciated.

    Read the article

  • ASP.NET MVC & EF4 Entity Framework - Are there any performance concerns in using the entities vs. retrieving only the fields I need?

    - by Ant
    Lets say we have 3 tables, Users, Products, Purchases. There is a view that needs to display the purchases made by a user. I could lookup the data required by doing: from p in DBSet<Purchases>.Include("User").Include("Product") select p; However, I am concern that this may have a performance impact because it will retrieve the full objects. Alternatively, I could select only the fields i need: from p in DBSet<Purchases>.Include("User").Include("Product") select new SimplePurchaseInfo() { UserName = p.User.name, Userid = p.User.Id, ProductName = p.Product.Name ... etc }; So my question is: Whats the best practice in doing this? == EDIT Thanks for all the replies. [QUESTION 1]: I want to know whether all views should work with flat ViewModels with very specific data for that view, or should the ViewModels contain the entity objects. Real example: User reviews Products var query = from dr in productRepository.FindAllReviews() where dr.User.UserId = 'userid' select dr; string sql = ((ObjectQuery)query).ToTraceString(); SELECT [Extent1].[ProductId] AS [ProductId], [Extent1].[Comment] AS [Comment], [Extent1].[CreatedTime] AS [CreatedTime], [Extent1].[Id] AS [Id], [Extent1].[Rating] AS [Rating], [Extent1].[UserId] AS [UserId], [Extent3].[CreatedTime] AS [CreatedTime1], [Extent3].[CreatorId] AS [CreatorId], [Extent3].[Description] AS [Description], [Extent3].[Id] AS [Id1], [Extent3].[Name] AS [Name], [Extent3].[Price] AS [Price], [Extent3].[Rating] AS [Rating1], [Extent3].[ShopId] AS [ShopId], [Extent3].[Thumbnail] AS [Thumbnail], [Extent3].[Creator_UserId] AS [Creator_UserId], [Extent4].[Comment] AS [Comment1], [Extent4].[DateCreated] AS [DateCreated], [Extent4].[DateLastActivity] AS [DateLastActivity], [Extent4].[DateLastLogin] AS [DateLastLogin], [Extent4].[DateLastPasswordChange] AS [DateLastPasswordChange], [Extent4].[Email] AS [Email], [Extent4].[Enabled] AS [Enabled], [Extent4].[PasswordHash] AS [PasswordHash], [Extent4].[PasswordSalt] AS [PasswordSalt], [Extent4].[ScreenName] AS [ScreenName], [Extent4].[Thumbnail] AS [Thumbnail1], [Extent4].[UserId] AS [UserId1], [Extent4].[UserName] AS [UserName] FROM [ProductReviews] AS [Extent1] INNER JOIN [Users] AS [Extent2] ON [Extent1].[UserId] = [Extent2].[UserId] LEFT OUTER JOIN [Products] AS [Extent3] ON [Extent1].[ProductId] = [Extent3].[Id] LEFT OUTER JOIN [Users] AS [Extent4] ON [Extent1].[UserId] = [Extent4].[UserId] WHERE N'615005822' = [Extent2].[UserId] or from d in productRepository.FindAllProducts() from dr in d.ProductReviews where dr.User.UserId == 'userid' orderby dr.CreatedTime select new ProductReviewInfo() { product = new SimpleProductInfo() { Id = d.Id, Name = d.Name, Thumbnail = d.Thumbnail, Rating = d.Rating }, Rating = dr.Rating, Comment = dr.Comment, UserId = dr.UserId, UserScreenName = dr.User.ScreenName, UserThumbnail = dr.User.Thumbnail, CreateTime = dr.CreatedTime }; SELECT [Extent1].[Id] AS [Id], [Extent1].[Name] AS [Name], [Extent1].[Thumbnail] AS [Thumbnail], [Extent1].[Rating] AS [Rating], [Extent2].[Rating] AS [Rating1], [Extent2].[Comment] AS [Comment], [Extent2].[UserId] AS [UserId], [Extent4].[ScreenName] AS [ScreenName], [Extent4].[Thumbnail] AS [Thumbnail1], [Extent2].[CreatedTime] AS [CreatedTime] FROM [Products] AS [Extent1] INNER JOIN [ProductReviews] AS [Extent2] ON [Extent1].[Id] = [Extent2].[ProductId] INNER JOIN [Users] AS [Extent3] ON [Extent2].[UserId] = [Extent3].[UserId] LEFT OUTER JOIN [Users] AS [Extent4] ON [Extent2].[UserId] = [Extent4].[UserId] WHERE N'userid' = [Extent3].[UserId] ORDER BY 
    [Extent2].[CreatedTime] ASC [QUESTION 2]: What's with the ugly outer joins?

    Read the article

  • NHibernate FetchMode.Lazy

    - by RyanFetz
    I have an object which has a property on it that has then has collections which i would like to not load in a couple situations. 98% of the time i want those collections fetched but in the one instance i do not. Here is the code I have... Why does it not set the fetch mode on the properties collections? [DataContract(Name = "ThemingJob", Namespace = "")] [Serializable] public class ThemingJob : ServiceJob { [DataMember] public virtual Query Query { get; set; } [DataMember] public string Results { get; set; } } [DataContract(Name = "Query", Namespace = "")] [Serializable] public class Query : LookupEntity<Query>, DAC.US.Search.Models.IQueryEntity { [DataMember] public string QueryResult { get; set; } private IList<Asset> _Assets = new List<Asset>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Asset> Assets { get { return _Assets; } set { _Assets = value; } } private IList<Theme> _Themes = new List<Theme>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Theme> Themes { get { return _Themes; } set { _Themes = value; } } private IList<Affinity> _Affinity = new List<Affinity>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Affinity> Affinity { get { return _Affinity; } set { _Affinity = value; } } private IList<Word> _Words = new List<Word>(); [IgnoreDataMember] [System.Xml.Serialization.XmlIgnore] public IList<Word> Words { get { return _Words; } set { _Words = value; } } } using (global::NHibernate.ISession session = NHibernateApplication.GetCurrentSession()) { global::NHibernate.ICriteria criteria = session.CreateCriteria(typeof(ThemingJob)); global::NHibernate.ICriteria countCriteria = session.CreateCriteria(typeof(ThemingJob)); criteria.AddOrder(global::NHibernate.Criterion.Order.Desc("Id")); var qc = criteria.CreateCriteria("Query"); qc.SetFetchMode("Assets", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Themes", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Affinity", global::NHibernate.FetchMode.Lazy); qc.SetFetchMode("Words", global::NHibernate.FetchMode.Lazy); pageIndex = Convert.ToInt32(pageIndex) - 1; // convert to 0 based paging index criteria.SetMaxResults(pageSize); criteria.SetFirstResult(pageIndex * pageSize); countCriteria.SetProjection(global::NHibernate.Criterion.Projections.RowCount()); int totalRecords = (int)countCriteria.List()[0]; return criteria.List<ThemingJob>().ToPagedList<ThemingJob>(pageIndex, pageSize, totalRecords); }

    Read the article

  • Recursive N-way merge/diff algorithm for directory trees?

    - by BobMcGee
    What algorithms or Java libraries are available to do N-way, recursive diff/merge of directories? I need to be able to generate a list of folder trees that have many identical files, and have subdirectories with many similar files. I want to be able to use 2-way merge operations to quickly remove as much redundancy as possible. Goals: Find pairs of directories that have many similar files between them. Generate short list of directory pairs that can be synchronized with 2-way merge to eliminate duplicates Should operate recursively (there may be nested duplicates of higher-level directories) Run time and storage should be O(n log n) in numbers of directories and files Should be able to use an embedded DB or page to disk for processing more files than fit in memory (100,000+). Optional: generate an ancestry and change-set between folders Optional: sort the merge operations by how many duplicates they can elliminate I know how to use hashes to find duplicate files in roughly O(n) space, but I'm at a loss for how to go from this to finding partially overlapping sets between folders and their children. EDIT: some clarification The tricky part is the difference between "exact same" contents (otherwise hashing file hashes would work) and "similar" (which will not). Basically, I want to feed this algorithm at a set of directories and have it return a set of 2-way merge operations I can perform in order to reduce duplicates as much as possible with as few conflicts possible. It's effectively constructing an ancestry tree showing which folders are derived from each other. The end goal is to let me incorporate a bunch of different folders into one common tree. For example, I may have a folder holding programming projects, and then copy some of its contents to another computer to work on it. Then I might back up and intermediate version to flash drive. Except I may have 8 or 10 different versions, with slightly different organizational structures or folder names. I need to be able to merge them one step at a time, so I can chose how to incorporate changes at each step of the way. This is actually more or less what I intend to do with my utility (bring together a bunch of scattered backups from different points in time). I figure if I can do it right I may as well release it as a small open source util. I think the same tricks might be useful for comparing XML trees though.
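    As an illustration of the kind of similarity measure that turns per-file hashes into candidate merge pairs (a sketch of one possible metric, not a full answer to the recursion or ancestry parts): treat each directory as the set of content hashes of the files under it and rank directory pairs by their Jaccard overlap.

        import java.util.HashSet;
        import java.util.Set;

        final class DirSimilarity {
            // Jaccard similarity between two directories, each represented as the set
            // of content hashes (e.g. SHA-1 hex strings) of the files it contains.
            static double jaccard(Set<String> a, Set<String> b) {
                if (a.isEmpty() && b.isEmpty()) return 0.0;
                Set<String> intersection = new HashSet<String>(a);
                intersection.retainAll(b);
                Set<String> union = new HashSet<String>(a);
                union.addAll(b);
                return (double) intersection.size() / union.size();
            }
        }

    Pairs scoring above some threshold become candidates for a 2-way merge; hitting the O(n log n) target would mean indexing the hash sets (MinHash/LSH style) rather than scoring every pair.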

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this: Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries). The analyst analyzes the data created in (2), gets close to her goal, but sees that needs more data and so goes back to (1). Rinse repeat until the tables and graphics meet QA/QC and satisfy the client. Write report incorporating tables and graphics. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change. At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools. Thanks! PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (w/ ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files / targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for putting into SQL database, and a step for a templating language like sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top R=/home/wsprague/R-2.9.2/bin/R persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R $R --slave -f ImportData.R persondata.Munged.RData : MungeData.R persondata.RData Functions.R $R --slave -f MungeData.R report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R $R --slave -f TabulateAndGraph.R report.txt

    Read the article

  • Read variable-length records from a buffer - weird memory issues

    - by bsg
    Hi, I'm trying to implement an i/o intensive quicksort (C++ qsort) on a very large dataset. In the interests of speed, I'd like to read in a chunk of data at a time into a buffer and then use qsort to sort it inside the buffer. (I am currently working with text files but would like to move to binary soon.) However, my data is composed of variable-length records, and qsort needs to be told the length of the record in order to sort. Is there any way to standardize this? The only thing I could think of was rather convoluted: my program currently reads from the buffer until it hits a linefeed character ('10' in ascii), transferring each character over to another array. When it finds a linefeed (the delimiter in the input file), it fills the number of spaces remaining in the buffer for that record (record size is set to 30) with null characters. This way, I should end up with a buffer full of fixed-size records to give qsort. I know there are several problems with my approach, one being that it's just clumsy, another that the record size might conceivably be larger than 30, but is generally much less. Is there a better way of doing this? As well, my current code doesn't even work. When I debug it, it seems to be transferring characters from one buffer to the other, but when I try to print out the buffer, it contains only the first record. Here is my code: FILE *fp; unsigned char *buff; unsigned char *realbuff; FILE *inputFiles[NUM_INPUT_FILES]; buff = (unsigned char *) malloc(2048); realbuff = (unsigned char *) malloc(NUM_RECORDS * RECORD_SIZE); fp = fopen("postings0.txt", "r"); if(fp) { fread(buff, 1, 2048, fp); /*for(int i=0; i <30; i++) cout << buff[i] <<endl;*/ int y=0; int recordcounter = 0; //cout << buff; for(int i=0;i <100; i++) { if(buff[i] != char(10)) { realbuff[y] = buff[i]; y++; recordcounter++; } else { if(recordcounter < RECORD_SIZE) for(int j=recordcounter; j < RECORD_SIZE;j++) { realbuff[y] = char(0); y++; } recordcounter = 0; } } cout << realbuff <<endl; cout << buff; } else cout << "sorry"; Thank you very much, bsg
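    Two observations, then a sketch. First, the reason only the first record prints is that cout << on an unsigned char* treats it as a NUL-terminated string, so output stops at the first of the '\0' padding bytes. Second, for newline-delimited text, letting std::string own each record avoids the fixed-size padding (and the qsort record-length question) entirely; this is a minimal alternative sketch, assuming text input rather than the binary format to come:

        #include <algorithm>
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <vector>

        int main() {
            std::ifstream in("postings0.txt");
            std::vector<std::string> records;
            std::string line;
            while (std::getline(in, line))      // one variable-length record per line
                records.push_back(line);

            std::sort(records.begin(), records.end());  // no fixed record size needed

            for (size_t i = 0; i < records.size(); ++i)
                std::cout << records[i] << '\n';
            return 0;
        }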

    Read the article

  • difference equations in MATLAB - why the need to switch signs?

    - by jefflovejapan
    Perhaps this is more of a math question than a MATLAB one, not really sure. I'm using MATLAB to compute an economic model - the New Hybrid ISLM model - and there's a confusing step where the author switches the sign of the solution. First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t": %% --------------------------[2] MODEL proc-----------------------------%% % Define endogenous vars ('a' denotes t+1 values) syms y2a pi2a ya pia va y2t pi2t yt pit vt ; % Monetary policy rule ia = q1*ya+q2*pia; % ia = q1*(ya-yt)+q2*pia; %%option speed limit policy % Model equations IS = rho*y2a+(1-rho)yt-sigma(ia-pi2a)-ya; AS = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va; dum1 = ya-y2t; dum2 = pia-pi2t; MPs = phi*vt-va; optcon = [IS ; AS ; dum1 ; dum2; MPs]; He then computes the matrix A: %% ------------------ [3] Linearization proc ------------------------%% % Differentiation xx = [y2a pi2a ya pia va y2t pi2t yt pit vt] ; % define vars jopt = jacobian(optcon,xx); % Define Linear Coefficients coef = eval(jopt); B = [ -coef(:,1:5) ] ; C = [ coef(:,6:10) ] ; % B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)] A = inv(C)*B ; %(Linearized reduced form ) As far as I understand, this A is the solution to the system. It's the matrix that turns time t+1 and t+2 variables into t and t+1 variables (it's a forward-looking model). My question is essentially why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step: B = [ -coef(:,1:5) ] ; Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
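    One way to see where the sign flip comes from, assuming each row of optcon is an equation that equals zero at the solution (so the linear system is just the Jacobian times the stacked variables):

        % Each row of optcon is F_i(x_{t+1}, x_t) = 0, and the system is linear, so
        %   coef(:,1:5) * x_{t+1} + coef(:,6:10) * x_t = 0 .
        % Moving the t+1 block to the other side of the equals sign flips its sign:
        \underbrace{-\,\mathrm{coef}(:,1\!:\!5)}_{B}\, x_{t+1}
            \;=\; \underbrace{\mathrm{coef}(:,6\!:\!10)}_{C}\, x_t
        \quad\Longrightarrow\quad
        x_t = C^{-1} B \, x_{t+1} = A \, x_{t+1}.

    So B = -coef(:,1:5) is not an extra convention; it is simply the t+1 terms moved across the equality, which is what lets A map the (t+1, t+2) variables back into the (t, t+1) variables as the comment in the code says.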

    Read the article

  • .NET: Avoidance of custom exceptions by utilising existing types, but which?

    - by Mr. Disappointment
    Consider the following code (ASP.NET/C#): private void Application_Start(object sender, EventArgs e) { if (!SetupHelper.SetUp()) { throw new ShitHitFanException(); } } I've never been too hesitant to simply roll my own exception type, basically because I have found (bad practice, or not) that mostly a reasonable descriptive type name gives us enough as developers to go by in order to know what happened and why something might have happened. Sometimes the existing .NET exception types even accommodate these needs - regardless of the message. In this particular scenario, for demonstration purposes only, the application should die a horrible, disgraceful death should SetUp not complete properly (as dictated by its return value), but I can't find an already existing exception type in .NET which would seem to suffice; though, I'm sure one will be there and I simply don't know about it. Brad Abrams posted this article that lists some of the available exception types. I say some because the article is from 2005, and, although I try to keep up to date, it's a more than plausible assumption that more have been added to future framework versions that I am still unaware of. Of course, Visual Studio gives you a nicely formatted, scrollable list of exceptions via Intellisense - but even on analysing those, I find none which would seem to suffice for this situation... ApplicationException: ...when a non-fatal application error occurs The name seems reasonable, but the error is very definitely fatal - the app is dead. ExecutionEngineException: ...when there is an internal error in the execution engine of the CLR Again, sounds reasonable, superficially; but this has a very definite purpose and to help me out here certainly isn't it. HttpApplicationException: ...when there is an error processing an HTTP request Well, we're running an ASP.NET application! But we're also just pulling at straws here. InvalidOperationException: ...when a call is invalid for the current state of an instance This isn't right but I'm adding it to the list of 'possible should you put a gun to my head, yes'. OperationCanceledException: ...upon cancellation of an operation the thread was executing Maybe I wouldn't feel so bad using this one, but I'd still be hijacking the damn thing with little right. You might even ask why on earth I would want to raise an exception here but the idea is to find out that if I were to do so then do you know of an appropriate exception for such a scenario? And basically, to what extent can we piggy-back on .NET while keeping in line with rationality?

    Read the article

  • [How-to] Make a working XML file in PHP (encoded properly) to export data from an online database to iPhone

    - by gotye
    Hey guys, I am trying to make an XML file to export my data from my database to my iPhone. Each time I create a new post, I need to execute a PHP file to create an XML file containing the latest post ;) Okay so far? :D Here is the current PHP code ... but my NSXMLParser gives me an error code (33 - String is not started). I have no idea what kind of PHP functions I have to use ... <?php // write the start of the XML file $xml .= '<?xml version=\"1.0\" encoding=\"UTF-8\"?>'; $sml .= '<channel>'; $xml .= '<title>Infonul</title>'; $xml .= '<link>aaa</link>'; $xml .= '<description>aaa</description>'; // connect to the database (update the password) $connect = mysql_connect('...-12','...','...'); /* select the mysql database */ mysql_select_db('...'); // select the 20 latest news posts $res=mysql_query("SELECT u.display_name as author,p.post_date as date,p.comment_count as commentCount, p.post_content as content,p.post_title as title FROM wp_posts p, wp_users u WHERE p.post_status = 'publish' and p.post_type = 'post' and p.post_author = u.id ORDER BY p.id DESC LIMIT 0,20"); // extract the information and append it to the content while($tab=mysql_fetch_array($res)){ $title=$tab[title]; $author=$tab[author]; $content=$tab[content]; //html stuff $commentCount=$tab[commentCount]; $date=$tab[date]; $xml .= '<item>'; $xml .= '<title>'.$title.'</title>'; $xml .= '<content><![CDATA['.$content.']]></content>'; $xml .= '<date>'.$date.'</date>'; $xml .= '<author>'.$author.'</author>'; $xml .= '<commentCount>'.$commentCount.'</commentCount>'; $xml .= '</item>'; } // write the end of the XML file $xml .= '</channel>'; $xml = utf8_encode($xml); echo $xml; // write the result to the file if ($fp = fopen("20news.xml",'w')) { fputs($fp,$xml); fclose($fp); } //mysql_close(); ?> I noticed a few things when I open 20news.xml in my browser: I get squares instead of single quotes ... I can't see <[CDATA[ but ]] is clearly visible ... why ?!? Thanks for any input ;) Gotye.
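    Two details in the snippet would explain a parser complaining right at the start of the document (both are visible above, though untested here): inside a single-quoted PHP string, \" is not an escape, so the declaration is emitted literally as version=\"1.0\", which is not well-formed XML; and the <channel> line is appended to $sml rather than $xml, so the root element never opens. A corrected sketch of just the header portion:

        <?php
        // Single-quoted strings don't process \" ; plain double quotes are fine inside them.
        $xml  = '<?xml version="1.0" encoding="UTF-8"?>';
        $xml .= '<channel>';   // was "$sml .= ..." above, so <channel> never reached the output
        $xml .= '<title>Infonul</title>';
        $xml .= '<link>aaa</link>';
        $xml .= '<description>aaa</description>';

    The squares in place of quotes also suggest the database content may not need utf8_encode() at all; if it is already UTF-8, encoding it again will mangle multi-byte characters. That is a guess worth checking against the connection charset.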

    Read the article

  • Is this scatter-brained workflow realizable in Git?

    - by Luke Maurer
    This is what I'd like my workflow to look like at a conceptual level: I hack on my new feature for a while I notice a typo in a comment I change it Since the typo is completely unrelated to anything else, I put that change in a pile of comment fixes I keep working on the code I realize I need to flesh out a few utility functions I do so I put that change in its own pile Steps 2, 3, and 4 each repeat throughout the day I finish the new feature and put the changes for that feature in a pile I push nice patches upstream: One with the new feature, a few for the other tweaks, and one with a bunch of comment fixes if enough have accumulated Since I'm both lazy and a perfectionist, I want to be able to do some things out of order: I might correct a typo but forget to put it in the comment fix pile; when I prepare the upstream patches (I'm using git-svn, so I need to be pretty deliberate about these), I'll then pull out the comment fixes at that point. I might forget to separate things altogether until the very end. But I might /also/ have committed some of the piles along the way (sorry, the metaphor is breaking down …). This is all rather like just using Eclipse changesets with SVN, only I can have different changes to the same file in different piles (having to disentangle changes into different commits is what motivated me to move to git-svn, in fact …), and with Git I can have my full discombobulated change history, experimental branches and all, but still make a nice, neat patch. I've just recently started with Git after having wanted to for a good while, and I'm quite happy so far. The biggest way in which the above workflow doesn't really map into Git, though, is that a “bin” can't really be just a local branch, since the working tree only ever reflects the state of a single branch. Or maybe the Git index is a “pile,” and what I want is to have more than one somehow (effectively). I can think of a few ways to approximate what I want (maybe creative use of stash? Intricate stash-checkout-merge dances?), but my grasp on Git isn't solid enough to be sure of how best to put all the pieces together. It's said that Git is more a toolkit than a VCS, so I guess the question comes down to: How do I build this thing with these tools?
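    One way to build the "piles" with stock Git, without stash/checkout dances, is interactive staging: git add -p picks individual hunks, so unrelated changes in the same file can go into different commits, and interactive rebase tidies them into clean patches before they go upstream. A sketch, assuming git-svn as the bridge described above; commit messages and the rebase range are illustrative:

        # Stage only the comment-typo hunks (even from files with unrelated edits).
        git add -p
        git commit -m "Fix comment typos"

        # Repeat for the utility-function pile, then for the feature itself.
        git add -p
        git commit -m "Flesh out utility functions"

        # Reorder or squash the day's commits into tidy patches, then push upstream.
        git rebase -i HEAD~5
        git svn dcommit

    The forgotten-typo case is covered too: an interactive rebase with edit/fixup can pull a stray hunk into the comment-fix commit at patch-preparation time.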

    Read the article

  • Grails Twitter Bootstrap Plugin Issue with navbar-fixed-top

    - by Philip Tenn
    I am using Grails 2.1.0 and Twitter Bootstrap Plugin 2.1.1 and am encountering an issue with navbar-fixed-top. In order to get the Navbar fixed to the top of the page to behave correctly during resize, the Twitter Bootstrap Docs states: Add .navbar-fixed-top and remember to account for the hidden area underneath it by adding at least 40px padding to the . Be sure to add this after the core Bootstrap CSS and before the optional responsive CSS. How can I do this when using the Grails Plugin for Twitter Bootstrap? Here is what I have tried: main.gsp <head> ... <r:require modules="bootstrap-css"/> <style type="text/css"> body { padding-top: 60px; padding-bottom: 40px; } .sidebar-nav { padding: 9px 0; } </style> <r:require modules="bootstrap-responsive-css"/> <r:layoutResources/> </head> The problem is that Grails Plugin for Twitter Bootstrap takes the content of bootstrap.css and bootstrap-responsive.css and combines them into the following merged file: static/bundle-bundle_bootstrap_head.css. Thus, I am not able to put the body padding style "after core Bootstrap CSS and before Responsive CSS" as per Twitter Bootstrap docs. Here is the View Source HTML that I get from the main.gsp above <style type="text/css"> body { padding-top: 60px; padding-bottom: 40px; } .sidebar-nav { padding: 9px 0; } </style> <link href="/homes/static/bundle-bundle_bootstrap_head.css" type="text/css" rel="stylesheet" media="screen, projection" /> If there is no way to do this, I could always just drop the Grails Twitter Bootstrap Plugin and manually download Twitter Bootstrap and put it my Grails Project's web-app/css, web-app/images, and web-app/js. However, I would like to be able to use the Grails Twitter Bootstrap Plugin. Thank you very much in advance, I appreciate it!

    Read the article

  • GAE Datastore: persisting referenced objects

    - by David
    Two "I'm sorries" to begin with: 1) I've looked for a solution (here, and elsewhere), and couldn't find the answer. 2) English is not my mother tongue, so I may have some typos and the sort - please ignore them. To the point: I am trying to persist Java objects to the GAE datastore. I am not sure as to how to persist object having ("non-trivial") referenced object. That is, assume I have the following. public class Father { String name; int age; Vector<Child> offsprings; //this is what I call "non-trivial" reference //ctor, getters, setters... } public class Child { String name; int age; Father father; //this is what I call "non-trivial" reference //ctor, getters, setters... } The name field is unique in each type domain, and is considered a Primary-Key. In order to persist the "trivial" (String, int) fields, all I need is to add the correct annotation. So far so good. However, I don't understand how should I persist the home-brewed (Child, Father) types referenced. Should I: Convert each such reference to hold the Primary-Key (a name String, in this example) instead of the "actual" object. So, Vector<Child> offsprings; becomes Vector<String> offspringsNames; ? If that is the case, how do I handle the object at run-time? Do I just query for the Primary-Key from Class.getName, to retrieve the refrenced objects? Convert each such reference to hold the actual Key provided to me by the Datastore upon the proper put() operation? So, Vector<Child> offsprings; becomes Vector<Key> offspringsHashKeys; ? I would very much appreciate all kinds of comments. I have read ALL the offical relevant GAE docs/example. Throughout, they always persist "trivial" references, natively supported by the Datastore (e.g. in the Guestbook example, only Strings, and Longs). Many thanks, David

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like this aren't available by default on most phones: full text search: searches all address book contents, messages, anything else being a plus better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something etc.) bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone) no multitasking capabilities worth mentioning and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network). I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year old phone has a CPU an order of magnitude faster than my first PC, about as much storage space and it's ridiculous how bad (slow, unwieldy) the software is and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up? Edit: I believe a clarification on the multitasking point might be beneficial. I'll use my phone as an example, although the point is much more general. The phone can multitask and in fact does: you can listen to music and do something else at the same time. On the other hand, the way the software has been designed makes multitasking next to useless. (Ditto with the external touch screen: it can take touch commands, but only one application makes use of it, and only with 3 commands.) To take the multitasking example to the extreme, if I plug my phone into my laptop and it registers as an external disk, it doesn't allow any kind of operation: messages, calling, calendar, everything out of reach, although I can receive a call. No "battery life" issue there: it's charging while connected. BTW, another example of design below the current state of the art: I don't see a phone on the horizon which will remember where in an audio or video file you were when you stopped listening/watching it last time (podcasts are a good use case). Simplistic rewind/fast forward functionality only aggravates the problem.

    Read the article

  • Appending data to a ListView control

    - by strakastroukas
    In my webpage i use the following in order filling the listview control <asp:ListView ID="ListView1" runat="server"> <layouttemplate> <asp:PlaceHolder id="itemPlaceholder" runat="server" /></layouttemplate> <ItemTemplate> <tr> <td><asp:Label ID="Label1" runat="server" Text = '<%# DataBinder.Eval(Container.DataItem, "Ans1") %>' Visible = '<%# DataBinder.Eval(Container.DataItem, "Ans1Visible") %>'></asp:Label> <br /> <asp:Label ID="Label2" runat="server" Text = '<%# DataBinder.Eval(Container.DataItem, "Ans2") %>' Visible = '<%# DataBinder.Eval(Container.DataItem, "Ans2Visible") %>'></asp:Label> <br /> <asp:Label ID="Label3" runat="server" Text = '<%# DataBinder.Eval(Container.DataItem, "Ans3") %>' Visible = '<%# DataBinder.Eval(Container.DataItem, "Ans3Visible") %>'></asp:Label> <br /> <asp:Label ID="Label4" runat="server" Text = '<%# DataBinder.Eval(Container.DataItem, "Ans4") %>' Visible = '<%# DataBinder.Eval(Container.DataItem, "Ans4Visible") %>'></asp:Label> <br /> <asp:Label ID="Label5" runat="server" Text = '<%# DataBinder.Eval(Container.DataItem, "Ans5") %>' Visible = '<%# DataBinder.Eval(Container.DataItem, "Ans5Visible") %>'></asp:Label> <br /> <asp:Label ID="Label6" runat="server" Text = '<%# DataBinder.Eval(Container.DataItem, "Ans6") %>' Visible = '<%# DataBinder.Eval(Container.DataItem, "Ans6Visible") %>'></asp:Label> </td> </tr> </ItemTemplate> </asp:ListView> Now i would like to add numbers to the labels before they are rendered. For example currently the data displayed are like Tennis Football Basketball Nfl Nba Polo and the output i would like to have is 1. Tennis 2. Football 3. Basketball 4. Nfl 5. Nba 6. Polo Could i use ListView1_ItemCreated or the ListView1_ItemDataBound event to achieve this? If that is true, could you point me a place to start?
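    ItemDataBound is indeed one place to do this; a sketch under the assumption that the six Label IDs from the template above are stable and that prefixing the bound text is acceptable (wire it up with OnItemDataBound="ListView1_ItemDataBound" on the ListView tag):

        // Sketch: prefix each label in the row with its ordinal after binding.
        protected void ListView1_ItemDataBound(object sender, ListViewItemEventArgs e)
        {
            if (e.Item.ItemType != ListViewItemType.DataItem)
                return;

            for (int i = 1; i <= 6; i++)
            {
                var label = (Label)e.Item.FindControl("Label" + i);
                if (label != null && label.Visible)
                    label.Text = i + ". " + label.Text;
            }
        }

    An alternative with no code-behind is to put the number straight into each binding expression, e.g. Text='<%# "1. " + Eval("Ans1") %>'.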

    Read the article

  • I need help with creating a data structure in PHP

    - by alex
    What I need to do is have a data structure that shows jobs organised into 14 day periods, but only when an id is the same. I've implemented all sorts of stuff, but they have failed miserably. Ideally, maybe a SQL expert could handle all of this in the query. Here is some of my code. You can assume all library stuff works as expected. $query = 'SELECT date, rig_id, comments FROM dor ORDER BY date DESC'; $dors = Db::query(Database::SELECT, $query)->execute()->as_array(); This will return all jobs, but I need to have them organised by 14 day period with the same rig_id value. $hitches = array(); foreach($dors as $dor) { $rigId = $dor['rig_id']; $date = strtotime($dor['date']); if (empty($hitches)) { $hitches[] = array( 'rigId' => $rigId, 'startDate' => $date, 'dors' => array($dor) ); } else { $found = false; foreach($hitches as $key => $hitch) { $hitchStartDate = $hitch['startDate']; $dateDifference = abs($hitchStartDate - $date); $isSameHitchTimeFrame = $dateDifference < (Date::DAY * 14); if ($rigId == $hitch['rigId'] AND $isSameHitchTimeFrame) { $found = true; $hitches[$key]['dors'][] = $dor; } } if ($found === false) { $hitches[] = array( 'rigId' => $rigId, 'startDate' => $date, 'dors' => array($dor) ); } } } This seems to work OK splitting up by rig_id, but not by date. I also think I'm doing it wrong because I need to check the earliest date. Is it possible at all to do any of this in the database query? To recap, here is my problem I have a list of jobs with all have a rig_id (many jobs can have the same) and a date. I need the data to be organised into hitches. That is, the rig_id must be the same per hitch, and they must span a 14 day period, in which the next 14 days with the same rig_id will be a new hitch. Can someone please point me on the right track? Cheers

    Read the article
