Search Results

Search found 1637 results on 66 pages for 'ids'.

Page 60/66 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66  | Next Page >

  • Using MySQL to generate daily sales reports with filled gaps, grouped by currency

    - by Shane O'Grady
    I'm trying to create what I think is a relatively basic report for an online store, using MySQL 5.1.45 The store can receive payment in multiple currencies. I have created some sample tables with data and am trying to generate a straightforward tabular result set grouped by date and currency so that I can graph these figures. I want to see each currency that is available per date, with a 0 in the result if there were no sales in that currency for that day. If I can get that to work I want to do the same but also grouped by product id. In the sample data I have provided there are only 3 currencies and 2 product ids, but in practice there can be any number of each. I can correctly group by date, but then when I add a grouping by currency my query does not return what I want. I based my work off this article. My reporting query, grouped only by date: SELECT calendar.datefield AS date, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date Now grouped by date and currency: SELECT calendar.datefield AS date, orders.currency_id, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date, orders.currency_id The results I am getting (grouped by date and currency): +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | NULL | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The results I want: +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | 3 | 0.00 | | 2009-08-16 | 45 | 0.00 | | 2009-08-16 | 49 | 0.00 | | 2009-08-17 | 3 | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 45 | 0.00 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The schema and data I am using in my tests: CREATE TABLE orders ( id INT PRIMARY KEY AUTO_INCREMENT, order_date DATETIME, order_id INT, product_id INT, currency_id INT, order_value DECIMAL(9,2), customer_id INT ); INSERT INTO orders (order_date, order_id, product_id, currency_id, order_value, customer_id) VALUES ('2009-08-15 10:20:20', '123', '1', '45', '12.50', '322'), ('2009-08-15 12:30:20', '124', '1', '49', '122.60', '400'), ('2009-08-15 13:41:20', '125', '1', '3', '40.97', '324'), ('2009-08-15 10:20:20', '126', '2', '45', '12.50', '345'), ('2009-08-15 13:41:20', '131', '2', '3', '40.97', '756'), ('2009-08-17 10:20:20', '3234', '1', '45', '12.50', '1322'), ('2009-08-17 10:20:20', '4642', '2', '45', '12.50', '1345'), ('2009-08-17 12:30:20', '23', '2', '49', '122.60', '3142'), ('2009-08-18 12:30:20', '2131', '1', '49', '122.60', '4700'), ('2009-08-18 13:41:20', '4568', '1', '3', '40.97', '3274'), ('2009-08-18 12:30:20', '956', '2', '49', '122.60', '3542'), ('2009-08-18 13:41:20', '443', '2', '3', '40.97', '7556'); CREATE 
TABLE currency ( id INT PRIMARY KEY, name VARCHAR(255) ); INSERT INTO currency (id, name) VALUES (3, 'Euro'), (45, 'US Dollar'), (49, 'CA Dollar'); CREATE TABLE calendar (datefield DATE); DELIMITER | CREATE PROCEDURE fill_calendar(start_date DATE, end_date DATE) BEGIN DECLARE crt_date DATE; SET crt_date=start_date; WHILE crt_date < end_date DO INSERT INTO calendar VALUES(crt_date); SET crt_date = ADDDATE(crt_date, INTERVAL 1 DAY); END WHILE; END | DELIMITER ; CALL fill_calendar('2008-01-01', '2011-12-31');
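
    A sketch of one way to fill the gaps, assuming the sample schema above: build the full date-by-currency grid first with a CROSS JOIN of calendar and currency, then LEFT JOIN the orders onto that grid so missing combinations still produce a 0 row (untested against the sample data, but standard MySQL 5.1 syntax):

      SELECT calendar.datefield AS date,
             currency.id AS currency_id,
             IFNULL(SUM(orders.order_value), 0) AS total_value
      FROM calendar
      CROSS JOIN currency
      LEFT JOIN orders
             ON DATE(orders.order_date) = calendar.datefield
            AND orders.currency_id = currency.id
      WHERE calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders)
                                   AND (SELECT MAX(DATE(order_date)) FROM orders)
      GROUP BY calendar.datefield, currency.id
      ORDER BY calendar.datefield, currency.id

    Grouping by product as well would mean cross joining a product list (or a DISTINCT product_id subquery) into the same grid and adding it to the join condition and the GROUP BY.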

    Read the article

  • How to create 2 different scripts for 2 browsers (IE+all the rest)

    - by Barnabas Nagy
    I'm trying to solve an IE CSS issue by using conditional tags. My jQuery that does the job of tabbed box effect calls an id that is inside the conditional tags. I've given new ids to IE so on IE the jQuery is not working. I tried to duplicate the script with the new id in one of the script but the 2 scripts seems to get in conflict. My jQuery (in between the head tags) is (for all all browsers except IE): <script> var currentTab = 0; // Set to a different number to start on a different tab. function openTab(clickedTab) { var thisTab = $(".tabbed-box .tabs a").index(clickedTab); $(".tabbed-box .tabs li a").removeClass("active"); $(".tabbed-box .tabs li a:eq("+thisTab+")").addClass("active"); $(".tabbed-box .tabbed-content").hide(); $(".tabbed-box .tabbed-content:eq("+thisTab+")").show(); currentTab = thisTab; } $(document).ready(function() { $(".tabs li:eq(0) a").css("border-left", "none"); $(".tabbed-box .tabs li a").click(function() { openTab($(this)); return false; }); $(".tabbed-box .tabs li a:eq("+currentTab+")").click() }); </script> And I would need this for IE: <script> var currentTab = 0; // Set to a different number to start on a different tab. function openTab(clickedTab) { var thisTab = $(".tabbed-box .tabs a").index(clickedTab); $(".tabbed-box ie-.tabs li a").removeClass("active"); $(".tabbed-box ie-.tabs li a:eq("+thisTab+")").addClass("active"); $(".tabbed-box .tabbed-content").hide(); $(".tabbed-box .tabbed-content:eq("+thisTab+")").show(); currentTab = thisTab; } $(document).ready(function() { $(".ie-tabs li:eq(0) a").css("border-left", "none"); $(".tabbed-box .ie-tabs li a").click(function() { openTab($(this)); return false; }); $(".tabbed-box .ie-tabs li a:eq("+currentTab+")").click() }); </script> Having the 2 scripts in the head conflicts with each other. Maybe there is a way to combine them?
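
    Since jQuery selectors accept a comma-separated list of alternatives, one hedged way to avoid the duplicate script is a single version that targets both class names (assuming the IE markup differs only by the ie- prefix and only one variant exists in the page at a time):

      var currentTab = 0;

      function openTab(clickedTab) {
          var tabLinks = $(".tabbed-box .tabs li a, .tabbed-box .ie-tabs li a");
          var thisTab = tabLinks.index(clickedTab);
          tabLinks.removeClass("active");
          tabLinks.eq(thisTab).addClass("active");
          $(".tabbed-box .tabbed-content").hide();
          $(".tabbed-box .tabbed-content:eq(" + thisTab + ")").show();
          currentTab = thisTab;
      }

      $(document).ready(function() {
          $(".tabs li:eq(0) a, .ie-tabs li:eq(0) a").css("border-left", "none");
          $(".tabbed-box .tabs li a, .tabbed-box .ie-tabs li a").click(function() {
              openTab($(this));
              return false;
          });
          $(".tabbed-box .tabs li a, .tabbed-box .ie-tabs li a").eq(currentTab).click();
      });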

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (eg. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created, have a default length of 255. I would prefer to have them on max instead of that. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some other, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be calld "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?

    Read the article

  • .NET Entity Framework SaveChanges is adding records without the Add method

    - by tmfkmoney
    I'm new to the entity framework and I'm really confused about how savechanges works. There's probably a lot of code in my example which could be improved, but here's the problem I'm having. The user enters a bunch of picks. I make sure the user hasn't already entered those picks. Then I add the picks to the database. var db = new myModel() var predictionArray = ticker.Substring(1).Split(','); // Get rid of the initial comma. var user = Membership.GetUser(); var userId = Convert.ToInt32(user.ProviderUserKey); // Get the member with all his predictions for today. var memberQuery = (from member in db.Members where member.user_id == userId select new { member, predictions = from p in member.Predictions where p.start_date == null select p }).First(); // Load all the company ids. foreach (var prediction in memberQuery.predictions) { prediction.CompanyReference.Load(); } var picks = from prediction in predictionArray let data = prediction.Split(':') let companyTicker = data[0] where !(from i in memberQuery.predictions select i.Company.ticker).Contains(companyTicker) select new Prediction { Member = memberQuery.member, Company = db.Companies.Where(c => c.ticker == companyTicker).First(), is_up = data[1] == "up", // This turns up and down into true and false. }; // Save the records to the database. // HERE'S THE PART I DON'T UNDERSTAND. // This saves the records, even though I don't have db.AddToPredictions(pick) foreach (var pick in picks) { db.SaveChanges(); } // This does not save records when the db.SaveChanges outside of a loop of picks. db.SaveChanges(); foreach (var pick in picks) { } // This saves records, but it will insert all the picks exactly once no matter how many picks you have. //The fact you're skipping a pick makes no difference in what gets inserted. var counter = 1; foreach (var pick in picks) { if (counter == 2) { db.SaveChanges(); } counter++; } There's obviously something going on with the context I don't understand. I'm guessing I've somehow loaded my new picks as pending changes, but even if that's true I don't understand I have to loop over them to save changes. Can someone explain this to me?
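
    A plausible explanation (not verified against this exact model): picks is a deferred LINQ-to-Objects query, so every enumeration builds fresh Prediction objects whose Member property is set to an entity the context already tracks, which attaches each new Prediction as Added. SaveChanges then persists whatever has been attached so far, which is why its position relative to the enumeration changes what gets saved. A hedged rework that materializes the query once and makes the intent explicit:

      // Build the Prediction objects exactly once.
      var picksList = picks.ToList();

      foreach (var pick in picksList)
      {
          // Explicit add; assigning Member above already attaches the entity,
          // but stating it here makes the behaviour obvious.
          db.AddToPredictions(pick);
      }

      // One save after all picks are attached.
      db.SaveChanges();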

    Read the article

  • jQuery: modify hidden form field value before submit

    - by Jason Miesionczek
    I have the following code in a partial view (using Spark): <span id="selectCount">0</span> video(s) selected. <for each="var video in Model"> <div style="padding: 3px; margin:2px" class="video_choice" id="${video.YouTubeID}"> <span id="video_name">${video.Name}</span><br/> <for each="var thumb in video.Thumbnails"> <img src="${thumb}" /> </for> </div> </for> # using(Html.BeginForm("YouTubeVideos","Profile", FormMethod.Post, new { id = "youTubeForm" })) # { <input type="hidden" id="video_names" name="video_names" /> <input type="submit" value="add selected"/> # } <ScriptBlock> $(".video_choice").click(function() { $(this).toggleClass('selected'); var count = $(".selected").length; $("#selectCount").html(count); }); var options = { target: '#videos', beforeSubmit: function(arr, form, opts) { var names = []; $(".selected").each(function() { names[names.length] = $(this).attr('id'); }); var namestring = names.join(","); $("#video_names").attr('value',namestring); //alert(namestring); //arr["video_names"] = namestring; //alert($.param(arr)); //alert($("#video_names").attr('value')); return true; } }; $("#youTubeForm").ajaxForm(options); </ScriptBlock> Essentially i display a series of divs that contain information pulled from the YouTube API. I use jQuery to allow the the user to select which videos they would like to add to their profile. When i submit the form i would like to populate the hidden field with a comma separated list of video ids. Everything works except that when i try to set the value of the field, in the controller on post, the field comes back empty. I am using the jQuery ajax form plugin. What am i doing wrong that is not allowing the value i set in the field to be sent to the server?
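
    One likely cause, assuming the jQuery Form plugin: by the time beforeSubmit runs, the form has already been serialized into arr (an array of {name, value} objects), so changing the DOM input at that point no longer affects what is posted. A sketch that updates arr itself instead:

      beforeSubmit: function(arr, form, opts) {
          var names = [];
          $(".selected").each(function() {
              names.push($(this).attr('id'));
          });
          var namestring = names.join(",");

          // Update the already-serialized entry rather than the DOM element.
          var found = false;
          $.each(arr, function(i, field) {
              if (field.name === "video_names") {
                  field.value = namestring;
                  found = true;
              }
          });
          if (!found) {
              arr.push({ name: "video_names", value: namestring });
          }
          return true;
      }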

    Read the article

  • Populating a PHP array within a foreach loop

    - by patrick
    I am wanting to add each user into an array and check for duplicates before I do. $spotcount = 10; for ($topuser_count = 0; $topuser_count < $spotcount; $topuser_count++) //total spots { $spottop10 = $ids[$topuser_count]; $top_10 = $gowalla->getSpotInfo($spottop10); $usercount = 0; $c = 0; $array = array(); foreach($top_10['top_10'] as $top10) //loop each spot { //$getuser = substr($top10['url'],7); //strip the url $getuser = ltrim($top10['url'], " users/" ); if ($usercount < 3) //loop only certain number of top users { if (($getuser != $userurl) && (array_search($getuser, $array) !== true)) { //echo " no duplicates! <br /><br />"; echo ' <a href= "http://gowalla.com'.$top10['url'].'"><img width="90" height="90" src= " '.$top10['image_url'].' " title="'.$top10['first_name'].'" alt="Error" /></a> '; $array[$c++] = $getuser; } else { //echo "duplicate <br /><br />"; } } $usercount++; } print_r($array); } The previous code prints: Array ( [0] => 62151 [1] => 204501 [2] => 209368 ) Array ( [0] => 62151 [1] => 33116 [2] => 122485 ) Array ( [0] => 120728 [1] => 205247 [2] => 33116 ) Array ( [0] => 150883 [1] => 248551 [2] => 248558 ) Array ( [0] => 157580 [1] => 77490 [2] => 52046 ) Which is wrong. It does check for duplicates, but only the contents of each foreach loop instead of the entire array. How is this if I am storing everything into $array?
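
    Two details in the snippet would produce exactly this behaviour: $array = array(); sits inside the outer for loop, so the duplicate list restarts for every spot, and array_search() returns the matching key or false (never true), so the !== true comparison is not a reliable duplicate test. A minimal reworked sketch keeping one shared array and using in_array():

      $spotcount = 10;
      $seen = array();   // one list of user ids shared across every spot

      for ($topuser_count = 0; $topuser_count < $spotcount; $topuser_count++) {
          $spottop10 = $ids[$topuser_count];
          $top_10 = $gowalla->getSpotInfo($spottop10);
          $usercount = 0;

          foreach ($top_10['top_10'] as $top10) {
              $getuser = ltrim($top10['url'], " users/");

              if ($usercount < 3 && $getuser != $userurl && !in_array($getuser, $seen)) {
                  echo '<a href="http://gowalla.com' . $top10['url'] . '">'
                     . '<img width="90" height="90" src="' . $top10['image_url'] . '"'
                     . ' title="' . $top10['first_name'] . '" alt="Error" /></a>';
                  $seen[] = $getuser;
              }
              $usercount++;
          }
      }
      print_r($seen);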

    Read the article

  • NHibernate unintentional lazy property loading

    - by chiccodoro
    I introduced a mapping for a business object which has (among others) a property called "Name": public class Foo : BusinessObjectBase { ... public virtual string Name { get; set; } } For some reason, when I fetch "Foo" objects, NHibernate seems to apply lazy property loading (for simple properties, not associations): The following code piece generates n+1 SQL statements, whereof the first only fetches the ids, and the remaining n fetch the Name for each record: ISession session = ...IQuery query = session.CreateQuery(queryString); ITransaction tx = session.BeginTransaction(); List<Foo> result = new List<Foo>(); foreach (Foo foo in query.Enumerable()) { result.Add(foo); } tx.Commit(); session.Close(); produces: NHibernate: select foo0_.FOO_ID as col_0_0_ from V1_FOO foo0_<br/> NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 81<br/> NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36470<br/> NHibernate: SELECT foo0_.FOO_ID as FOO1_2_0_, foo0_.NAME as NAME2_0_ FROM V1_FOO foo0_ WHERE foo0_.FOO_ID=:p0;:p0 = 36473 Similarly, the following code leads to a LazyLoadingException after session is closed: ISession session = ... ITransaction tx = session.BeginTransaction(); Foo result = session.Load<Foo>(id); tx.Commit(); session.Close(); Console.WriteLine(result.Name); Following this post, "lazy properties ... is rarely an important feature to enable ... (and) in Hibernate 3, is disabled by default." So what am I doing wrong? I managed to work around the LazyLoadingException by doing a NHibernateUtil.Initialize(foo) but the even worse part are the n+1 sql statements which bring my application to its knees. This is how the mapping looks like: <class name="Foo" table="V1_FOO"> ... <property name="Name" column="NAME"/> </class> BTW: The abstract "BusinessObjectBase" base class encapsulates the ID property which serves as the internal identifier.
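
    This looks less like lazy property mapping and more like the documented behaviour of IQuery.Enumerable(), which (like Hibernate's iterate()) selects only the identifiers and then loads each entity on first access - exactly the id-only SELECT followed by n per-row SELECTs shown above. If that is the case, hydrating the entities in one round trip with List<T>() should avoid the n+1 pattern; a sketch:

      public IList<Foo> LoadFoos(ISessionFactory sessionFactory, string queryString)
      {
          using (ISession session = sessionFactory.OpenSession())
          using (ITransaction tx = session.BeginTransaction())
          {
              IQuery query = session.CreateQuery(queryString);

              // List() hydrates the full entities in a single SELECT,
              // so Name is populated before the session is closed.
              IList<Foo> result = query.List<Foo>();

              tx.Commit();
              return result;
          }
      }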

    Read the article

  • Is this a legitimate implementation of a 'remember me' function for my web app?

    - by user246114
    Hi, I'm trying to add a "remember me" feature to my web app to let a user stay logged in between browser restarts. I think I got the bulk of it. I'm using google app engine for the backend which lets me use java servlets. Here is some pseudo-code to demo: public class MyServlet { public void handleRequest() { if (getThreadLocalRequest().getSession().getAttribute("user") != null) { // User already has session running for them. } else { // No session, but check if they chose 'remember me' during // their initial login, if so we can have them 'auto log in' // now. Cookie[] cookies = getThreadLocalRequest().getCookies(); if (cookies.find("rememberMePlz").exists()) { // The value of this cookie is the cookie id, which is a // unique string that is in no way based upon the user's // name/email/id, and is hard to randomly generate. String cookieid = cookies.find("rememberMePlz").value(); // Get the user object associated with this cookie id from // the data store, would probably be a two-step process like: // // select * from cookies where cookieid = 'cookieid'; // select * from users where userid = 'userid fetched from above select'; User user = DataStore.getUserByCookieId(cookieid); if (user != null) { // Start session for them. getThreadLocalRequest().getSession() .setAttribute("user", user); } else { // Either couldn't find a matching cookie with the // supplied id, or maybe we expired the cookie on // our side or blocked it. } } } } } // On first login, if user wanted us to remember them, we'd generate // an instance of this object for them in the data store. We send the // cookieid value down to the client and they persist it on their side // in the "rememberMePlz" cookie. public class CookieLong { private String mCookieId; private String mUserId; private long mExpirationDate; } Alright, this all makes sense. The only frightening thing is what happens if someone finds out the value of the cookie? A malicious individual could set that cookie in their browser and access my site, and essentially be logged in as the user associated with it! On the same note, I guess this is why the cookie ids must be difficult to randomly generate, because a malicious user doesn't have to steal someone's cookie - they could just randomly assign cookie values and start logging in as whichever user happens to be associated with that cookie, if any, right? Scary stuff, I feel like I should at least include the username in the client cookie such that when it presents itself to the server, I won't auto-login unless the username+cookieid match in the DataStore. Any comments would be great, I'm new to this and trying to figure out a best practice. I'm not writing a site which contains any sensitive personal information, but I'd like to minimize any potential for abuse all the same, Thanks
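
    The scheme is broadly the standard one; the usual hardening steps are to generate the token from a cryptographically secure RNG (so it cannot be guessed or enumerated) and to store only a hash of it server-side, so a leaked datastore does not hand out working cookies. A small sketch of those two pieces (plain Java, independent of the App Engine datastore code above; class and method names are illustrative):

      import java.nio.charset.StandardCharsets;
      import java.security.MessageDigest;
      import java.security.NoSuchAlgorithmException;
      import java.security.SecureRandom;

      public class RememberMeTokens {

          private static final SecureRandom RANDOM = new SecureRandom();

          // Generate the value that goes into the "rememberMePlz" cookie.
          public static String newToken() {
              byte[] raw = new byte[32];   // 256 bits of entropy
              RANDOM.nextBytes(raw);
              return toHex(raw);
          }

          // Store only this hash in the datastore; compare hashes on login.
          public static String hashToken(String token) throws NoSuchAlgorithmException {
              MessageDigest digest = MessageDigest.getInstance("SHA-256");
              return toHex(digest.digest(token.getBytes(StandardCharsets.UTF_8)));
          }

          private static String toHex(byte[] bytes) {
              StringBuilder sb = new StringBuilder(bytes.length * 2);
              for (byte b : bytes) {
                  sb.append(String.format("%02x", b & 0xff));
              }
              return sb.toString();
          }
      }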

    Read the article

  • Sequence generators are getting ignored

    - by luvfort
    I'm getting the following error while saving a object. However similar configuration is working for other model objects in my projects. Any help would be greatly appreciated. @Entity @Table(name = "ENROLLMENT_GROUP_MEMBERSHIPS", schema = "LEAD_ROUTING") public class EnrollmentGroupMembership implements Serializable, Comparable,Auditable { @javax.persistence.SequenceGenerator(name = "enrollmentGroupMemID", sequenceName = "S_ENROLLMENT_GROUP_MEMBERSHIPS") @Id @GeneratedValue(strategy = GenerationType.AUTO, generator = "enrollmentGroupMemID") @Column(name = "ID") private Long id; @ManyToOne() @JoinColumn(name = "TIER_WEIGHT_OID", referencedColumnName = "OID", updatable = false, insertable = false) private TierWeight tierWeight; public EnrollmentGroupMembership() { } } Code: @Entity @Table(name = "TIER_WEIGHT", schema = "LEAD_ROUTING") public class TierWeight implements Serializable, Auditable { @SequenceGenerator(name = "tierSequence",sequenceName = "S_TIER_WEIGHT") @Column(name = "OID") @Id @GeneratedValue(strategy = GenerationType.AUTO, generator = "tierSequence") private Long id; @OneToMany @JoinColumn(name = "TIER_WEIGHT_OID", referencedColumnName = "OID") private Set<EnrollmentGroupMembership> memberships; public TierWeight() { } } The logic layer's code is @Override public void createTier(String tierName, float weight) { TierWeight tier = new TierWeight(); tier.setWeight(weight); tier.setTier(tierName); tierWeightDAO.create(tier); } Similar Many-one configuration is working through out the project. I don't know why this one instance is failing. Any help would be greatly appreciated. The following is the error that I'm getting Caused by: org.hibernate.id.IdentifierGenerationException: ids for this class must be manually assigned before calling save(): edu.apollogrp.d2ec.model.TierWeight at org.hibernate.id.Assigned.generate(Assigned.java:3 3) at org.hibernate.event.def.AbstractSaveEventListener. saveWithGeneratedId(AbstractSaveEventListener.java :99) The log file is telling that the sequence generator tierSequence is not getting created. However other sequence generators are getting created. 2010-06-03 11:24:51,834 DEBUG [org.hibernate.cfg.AnnotationBinder:] Processing annotations of edu.apollogrp.d2ec.model.TierWeight.dateCreated 2010-06-03 11:24:51,834 DEBUG [org.hibernate.cfg.AnnotationBinder:] Processing annotations of edu.apollogrp.d2ec.model.TierWeight.dateCreated 2010-06-03 11:24:51,834 DEBUG [org.hibernate.cfg.Ejb3Column:] Binding column DATE_CREATED unique false ....................................... ....................................... 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] Processing annotations of edu.apollogrp.d2ec.model.CounselorAvailability.id 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.Ejb3Column:] Binding column OID unique false 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.Ejb3Column:] Binding column OID unique false 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] id is an id 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] id is an id 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] Add sequence generator with name: counselorAvailabilityID 2010-06-03 11:24:51,756 DEBUG [org.hibernate.cfg.AnnotationBinder:] Add sequence generator with name: counselorAvailabilityID While debugging, I see that the org.hibernate.impl.SessionFactoryImpl is returning the "Assigned" identifierGenerator. This is horrible. I've specified the identifierGenerator as "Auto". 
Please see the above code. As a sidenote, I was trying to debug and seeing how the objects are getting retrieved from the database. Looks like the enrollmentgroupmembership records have the tierweight value populated. However if I look at the tierweight object, it doesn't have the enrollmentgroupmembership records. I'm puzzled. I think these two problems must be related. Maddy.

    Read the article

  • Foreign key pointing to different tables

    - by Álvaro G. Vicario
    I'm implementing a table per subclass design I discussed in a previous question. It's a product database where products can have very different attributes depending on their type, but attributes are fixed for each type and types are not manageable at all. I have a master table that holds common attributes: product_type ============ product_type_id INT product_type_name VARCHAR E.g.: 1 'Magazine' 2 'Web site' product ======= product_id INT product_name VARCHAR product_type_id INT -> Foreign key to product_type.product_type_id valid_since DATETIME valid_to DATETIME E.g. 1 'Foo Magazine' 1 '1998-12-01' NULL 2 'Bar Weekly Review' 1 '2005-01-01' NULL 3 'E-commerce App' 2 '2009-10-15' NULL 4 'CMS' 2 '2010-02-01' NULL ... and one subtable for each product type: item_magazine ============= item_magazine_id INT title VARCHAR product_id INT -> Foreign key to product.product_id issue_number INT pages INT copies INT close_date DATETIME release_date DATETIME E.g. 1 'Foo Magazine Regular Issue' 1 89 52 150000 '2010-06-25' '2010-06-31' 2 'Foo Magazine Summer Special' 1 90 60 175000 '2010-07-25' '2010-07-31' 3 'Bar Weekly Review Regular Issue' 2 12 16 20000 '2010-06-01' '2010-06-02' item_web_site ============= item_web_site_id INT name VARCHAR product_id INT -> Foreign key to product.product_id bandwidth INT hits INT date_from DATETIME date_to DATETIME E.g. 1 'The Carpet Store' 3 10 90000 '2010-06-01' NULL 2 'Penauts R Us' 3 20 180000 '2010-08-01' NULL 3 'Springfield Cattle Fair' 4 15 150000 '2010-05-01' '2010-10-31' Now I want to add some fees that relate to one specific item. Since there are very little subtypes, it's feasible to do this: fee === fee_id INT fee_description VARCHAR item_magazine_id INT -> Foreign key to item_magazine.item_magazine_id item_web_site_id INT -> Foreign key to item_web_site.item_web_site_id net_price DECIMAL E.g.: 1 'Front cover' 2 NULL 1999.99 2 'Half page' 2 NULL 500.00 3 'Square banner' NULL 3 790.50 4 'Animation' NULL 3 2000.00 I have tight foreign keys to handle cascaded editions and I presume I can add a constraint so only one of the IDs is NOT NULL. However, my intuition suggests that it would be cleaner to get rid of the item_WHATEVER_id columns and keep a separate table: fee_to_item =========== fee_id INT -> Foreign key to fee.fee_id product_id INT -> Foreign key to product.product_id item_id INT -> ??? But I can't figure out how to create foreign keys on item_id since the source table varies depending on product_id. Should I stick to my original idea?
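
    If the two nullable FK columns are kept, the "exactly one of them is set" rule can be stated as a table constraint; a sketch (note that MySQL only began enforcing CHECK constraints in 8.0.16, so on older versions this would need a trigger or an application-level check):

      ALTER TABLE fee
        ADD CONSTRAINT chk_fee_item_xor
        CHECK (
            (item_magazine_id IS NOT NULL AND item_web_site_id IS NULL)
         OR (item_magazine_id IS NULL     AND item_web_site_id IS NOT NULL)
        );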

    Read the article

  • JNI cached jclass global reference variables being garbage collected?

    - by bubbadoughball
    I'm working in the JNI Invocation API, calling into Java from C. I have some upfront initialization to cache 30+ Java classes into global references. The results of FindClass are passed into NewGlobalRef to acquire a global reference to the class. I'm caching these class variables to reuse them later. I have 30+ global references to classes (and 30+ global methodIDs for the class constructors). In the following sample, I've removed exception handling as well as JNI invocation for the purpose of shortening the code snippet. My working code has exception checks after every JNI call and I'm running with -Xcheck:jni. Here's the snippet: jclass aClass; jclass bClass; jmethodID aCtor; jmethodID bCtor; void getGlobalRef(const char* clazz, jclass* globalClass) { jclass local = (*jenv)->FindClass(jenv,clazz); if (local) { *globalClass = (jclass) (*jenv)->NewGlobalRef(jenv,local); (*jenv)->DeleteLocalRef(jenv,local); } } methodID getMethodID(jclass clazz, const char* method, const char* sig) { return (*jenv)->GetMethodID(jenv,clazz,method,sig); } void initializeJNI() { getGlobalRef("MyProj/Testclass1", &aclass); getGlobalRef("MyProj/Testclass2", &bclass); . . aCtor = getMethodID(aclass,"<init>","()V"); bCtor = getMethodID(bclass,"<init>","(I)V"); } The initializeJNI() function sets the global references for jclasses and method IDs for constructors as well as some jfieldID's and some initialization of C data structures. After initialization, when I call into a JNI function using some of the cached jclasses and ctor jmethodIDs, I get a bad global or local reference calling reported from the -Xcheck:jni. In gdb, I break at the last line of initializeJNI(), and print all jclasses and jmethodIDs and the ones causing problems look to have been turned into garbage or garbage-collected (i.e. 0x00 or 0x06). Is it possible for global references to be gc'ed? Any suggestions?

    Read the article

  • How do I handle the Maybe result of at in Control.Lens.Indexed without a Monoid instance

    - by Matthias Hörmann
    I recently discovered the lens package on Hackage and have been trying to make use of it now in a small test project that might turn into a MUD/MUSH server one very distant day if I keep working on it. Here is a minimized version of my code illustrating the problem I am facing right now with the at lenses used to access Key/Value containers (Data.Map.Strict in my case) {-# LANGUAGE OverloadedStrings, GeneralizedNewtypeDeriving, TemplateHaskell #-} module World where import Control.Applicative ((<$>),(<*>), pure) import Control.Lens import Data.Map.Strict (Map) import qualified Data.Map.Strict as DM import Data.Maybe import Data.UUID import Data.Text (Text) import qualified Data.Text as T import System.Random (Random, randomIO) newtype RoomId = RoomId UUID deriving (Eq, Ord, Show, Read, Random) newtype PlayerId = PlayerId UUID deriving (Eq, Ord, Show, Read, Random) data Room = Room { _roomId :: RoomId , _roomName :: Text , _roomDescription :: Text , _roomPlayers :: [PlayerId] } deriving (Eq, Ord, Show, Read) makeLenses ''Room data Player = Player { _playerId :: PlayerId , _playerDisplayName :: Text , _playerLocation :: RoomId } deriving (Eq, Ord, Show, Read) makeLenses ''Player data World = World { _worldRooms :: Map RoomId Room , _worldPlayers :: Map PlayerId Player } deriving (Eq, Ord, Show, Read) makeLenses ''World mkWorld :: IO World mkWorld = do r1 <- Room <$> randomIO <*> (pure "The Singularity") <*> (pure "You are standing in the only place in the whole world") <*> (pure []) p1 <- Player <$> randomIO <*> (pure "testplayer1") <*> (pure $ r1^.roomId) let rooms = at (r1^.roomId) ?~ (set roomPlayers [p1^.playerId] r1) $ DM.empty players = at (p1^.playerId) ?~ p1 $ DM.empty in do return $ World rooms players viewPlayerLocation :: World -> PlayerId -> RoomId viewPlayerLocation world playerId= view (worldPlayers.at playerId.traverse.playerLocation) world Since rooms, players and similar objects are referenced all over the code I store them in my World state type as maps of Ids (newtyped UUIDs) to their data objects. To retrieve those with lenses I need to handle the Maybe returned by the at lens (in case the key is not in the map this is Nothing) somehow. In my last line I tried to do this via traverse which does typecheck as long as the final result is an instance of Monoid but this is not generally the case. Right here it is not because playerLocation returns a RoomId which has no Monoid instance. No instance for (Data.Monoid.Monoid RoomId) arising from a use of `traverse' Possible fix: add an instance declaration for (Data.Monoid.Monoid RoomId) In the first argument of `(.)', namely `traverse' In the second argument of `(.)', namely `traverse . playerLocation' In the second argument of `(.)', namely `at playerId . traverse . playerLocation' Since the Monoid is required by traverse only because traverse generalizes to containers of sizes greater than one I was now wondering if there is a better way to handle this that does not require semantically nonsensical Monoid instances on all types possibly contained in one my objects I want to store in the map. Or maybe I misunderstood the issue here completely and I need to use a completely different bit of the rather large lens package?
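
    One way to keep the "missing key" case explicit instead of relying on a Monoid is to return a Maybe, using ix (the traversal into an existing Map entry) together with preview / (^?) from the same lens package; a sketch against the types above:

      -- Nothing when the player id is absent, Just the room otherwise.
      viewPlayerLocation :: World -> PlayerId -> Maybe RoomId
      viewPlayerLocation world pid =
          world ^? worldPlayers . ix pid . playerLocation

      -- equivalently: preview (worldPlayers . ix pid . playerLocation) world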

    Read the article

  • Robust way to save/load objects with dependencies?

    - by mrteacup
    I'm writing an Android game in Java and I need a robust way to save and load application state quickly. The question seems to apply to most OO languages. To understand what I need to save: I'm using a Strategy pattern to control my game entities. The idea is I have a very general Entity class which e.g. stores the location of a bullet/player/enemy and I then attach a Behaviour class that tells the entity how to act: class Entiy { float x; float y; Behavior b; } abstract class Behavior { void update(Entity e); {} // Move about at a constant speed class MoveBehavior extends Behavior { float speed; void update ... } // Chase after another entity class ChaseBehavior extends Behavior { Entity target; void update ... } // Perform two behaviours in sequence class CombineBehavior extends Behavior { Behaviour a, b; void update ... } Essentially, Entity objects are easy to save but Behaviour objects can have a semi-complex graph of dependencies between other Entity objects and other Behaviour objects. I also have cases where a Behaviour object is shared between entities. I'm willing to change my design to make saving/loading state easier, but the above design works really well for structuring the game. Anyway, the options I've considered are: Use Java serialization. This is meant to be really slow in Android (I'll profile it sometime). I'm worried about robustness when changes are made between versions however. Use something like JSON or XML. I'm not sure how I would cope with storing the dependencies between objects however. Would I have to give each object a unique ID and then use these IDs on loading to link the right objects together? I thought I could e.g. change the ChaseBehaviour to store a ID to an entity, instead of a reference, that would be used to look up the Entity before performing the behaviour. I'd rather avoid having to write lots of loading/saving code myself as I find it really easy to make mistakes (e.g. forgetting to save something, reading things out in the wrong order). Can anyone give me any tips on good formats to save to or class designs that make saving state easier?
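
    For the cross-references specifically, the ID indirection already described in the question tends to work well; a rough Java sketch of what ChaseBehavior might look like with an entity id instead of an object reference (the World parameter and findEntity lookup are hypothetical and would need to be threaded into update somehow):

      class ChaseBehavior extends Behavior {
          int targetId;   // persisted id instead of an Entity reference

          ChaseBehavior(int targetId) {
              this.targetId = targetId;
          }

          void update(Entity e, World world) {
              // Resolve the reference at run time; null if the target
              // no longer exists in the loaded state.
              Entity target = world.findEntity(targetId);
              if (target != null) {
                  // chase logic using target.x / target.y goes here
              }
          }
      }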

    Read the article

  • B-trees, databases, sequential inputs, and speed.

    - by IanC
    I know from experience that b-trees have awful performance when data is added to them sequentially (regardless of the direction). However, when data is added randomly, best performance is obtained. This is easy to demonstrate with the likes of an RB-Tree. Sequential writes cause a maximum number of tree balances to be performed. I know very few databases use binary trees, but rather used n-order balanced trees. I logically assume they suffer a similar fate to binary trees when it comes to sequential inputs. This sparked my curiosity. If this is so, then one could deduce that writing sequential IDs (such as in IDENTITY(1,1)) would cause multiple re-balances of the tree to occur. I have seen many posts argue against GUIDs as "these will cause random writes". I never use GUIDs, but it struck me that this "bad" point was in fact a good point. So I decided to test it. Here is my code: SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE TABLE [dbo].[T1]( [ID] [int] NOT NULL CONSTRAINT [T1_1] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO CREATE TABLE [dbo].[T2]( [ID] [uniqueidentifier] NOT NULL CONSTRAINT [T2_1] PRIMARY KEY CLUSTERED ([ID] ASC) ) GO declare @i int, @t1 datetime, @t2 datetime, @t3 datetime, @c char(300) set @t1 = GETDATE() set @i = 1 while @i < 2000 begin insert into T2 values (NEWID(), @c) set @i = @i + 1 end set @t2 = GETDATE() WAITFOR delay '0:0:10' set @t3 = GETDATE() set @i = 1 while @i < 2000 begin insert into T1 values (@i, @c) set @i = @i + 1 end select DATEDIFF(ms, @t1, @t2) AS [Int], DATEDIFF(ms, @t3, getdate()) AS [GUID] drop table T1 drop table T2 Note that I am not subtracting any time for the creation of the GUID nor for the considerably extra size of the row. The results on my machine were as follows: Int: 17,340 ms GUID: 6,746 ms This means that in this test, random inserts of 16 bytes was almost 3 times faster than sequential inserts of 4 bytes. Would anyone like to comment on this? Ps. I get that this isn't a question. It's an invite to discussion, and that is relevant to learning optimum programming.

    Read the article

  • Sort a list of pointers.

    - by YuppieNetworking
    Hello all, Once again I find myself failing at some really simple task in C++. Sometimes I wish I could de-learn all I know from OO in java, since my problems usually start by thinking like Java. Anyways, I have a std::list<BaseObject*> that I want to sort. Let's say that BaseObject is: class BaseObject { protected: int id; public: BaseObject(int i) : id(i) {}; virtual ~BaseObject() {}; }; I can sort the list of pointer to BaseObject with a comparator struct: struct Comparator { bool operator()(const BaseObject* o1, const BaseObject* o2) const { return o1->id < o2->id; } }; And it would look like this: std::list<BaseObject*> mylist; mylist.push_back(new BaseObject(1)); mylist.push_back(new BaseObject(2)); // ... mylist.sort(Comparator()); // intentionally omitted deletes and exception handling Until here, everything is a-ok. However, I introduced some derived classes: class Child : public BaseObject { protected: int var; public: Child(int id1, int n) : BaseObject(id1), var(n) {}; virtual ~Child() {}; }; class GrandChild : public Child { public: GrandChild(int id1, int n) : Child(id1,n) {}; virtual ~GrandChild() {}; }; So now I would like to sort following the following rules: For any Child object c and BaseObject b, b<c To compare BaseObject objects, use its ids, as before. To compare Child objects, compare its vars. If they are equal, fallback to rule 2. GrandChild objects should fallback to the Child behavior (rule 3). I initially thought that I could probably do some casts in Comparator. However, this casts away constness. Then I thought that probably I could compare typeids, but then everything looked messy and it is not even correct. How could I implement this sort, still using list<BaseObject*>::sort ? Thank you
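
    A dynamic_cast to a pointer-to-const does not cast away constness, so the comparator itself can branch on the runtime type; a sketch of the four rules, assuming public getId()/getVar() accessors are added (GrandChild needs no special case because it is-a Child):

      struct Comparator {
          bool operator()(const BaseObject* o1, const BaseObject* o2) const {
              const Child* c1 = dynamic_cast<const Child*>(o1);
              const Child* c2 = dynamic_cast<const Child*>(o2);

              if (!c1 && c2) return true;    // rule 1: plain BaseObject sorts before any Child
              if (c1 && !c2) return false;

              if (c1 && c2 && c1->getVar() != c2->getVar())
                  return c1->getVar() < c2->getVar();   // rule 3 (and 4, via inheritance)

              return o1->getId() < o2->getId();         // rule 2, and the shared fallback
          }
      };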

    Read the article

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks etc), here are some of the performance requirements, Very fast turn-around time (by this I mean that any new data (such as a new message on a forum) should be available in the search results very soon (less than a minute)) I need to discard old documents on a fairly regular basis to ensure that the search results are not dated. Last but not least, the search application needs to be responsive. (latency on the order of 100 milliseconds, and should support at least 10 qps) All of the requirements I have currently can be met w/o using Lucene (and that would let me satisfy all 1,2 and 3), but I am anticipating other requirements in the future (like search relevance etc) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions, a. I read that the optimize() method in the IndexWriter class is expensive, and should not be used by applications that do frequent updates, what are the alternatives? b. In order to do incremental updates, I need to keep committing new data, and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem? c. I know that Lucene provides a delete method, which lets you delete all documents that match a certain query, in my case, I need to delete all documents which are older than a certain age, now one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field since I think that the one created by lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings? I know these are very open questions, so I am not looking for a detailed answer, I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.

    Read the article

  • javascript error: variable is not defined

    - by Hristo
    to is not defined [Break on this error] setTimeout('updateChat(from, to)', 1); I'm getting this error... I'm using Firebug to test and this comes up in the Console. The error corresponds to line 71 of chat.js and the whole function that wraps this line is: function updateChat(from, to) { $.ajax({ type: "POST", url: "process.php", data: { 'function': 'getFromDB', 'from': from, 'to': to }, dataType: "json", cache: false, success: function(data) { if (data.text != null) { for (var i = 0; i < data.text.length; i++) { $('#chat-box').append($("<p>"+ data.text[i] +"</p>")); } document.getElementById('chat-box').scrollTop = document.getElementById('chat-box').scrollHeight; } instanse = false; state = data.state; setTimeout('updateChat(from, to)', 1); // gives error }, }); } This links to process.php with function call getFromDB and the code for that is: case ('getFromDB'): // get the sender and receiver user IDs from their user names $from = mysql_real_escape_string($_POST['from']); $query = "SELECT `user_id` FROM `Users` WHERE `user_name` = '$from' LIMIT 1"; $result = mysql_query($query) or die(mysql_error()); $row = mysql_fetch_assoc($result); $fromID = $row['user_id']; $to = mysql_real_escape_string($_POST['to']); $query = "SELECT `user_id` FROM `Users` WHERE `user_name` = '$to' LIMIT 1"; $result = mysql_query($query) or die(mysql_error()); $row = mysql_fetch_assoc($result); $toID = $row['user_id']; $query = "SELECT * FROM `Messages` WHERE `from_id` = '$fromID' AND `to_id` = '$toID' LIMIT 1"; $result = mysql_query($query); while($row = mysql_fetch_assoc($result)) { $text[] = $line = $row['message']; $log['text'] = $text; } break; So I'm confused with the line that is giving the error. setTimeout('updateChat(from,to)',1); aren't the parameters to updateChat the same parameters that came into the function? Or are they being pulled in from somewhere else and I have to define to and from else where? Any ideas how to fix this error? Thanks, Hristo
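
    The string form of setTimeout is evaluated later in the global scope, where from and to no longer exist - which matches the "to is not defined" error. Passing a function keeps both variables alive in the closure; a sketch of just that line:

      setTimeout(function() {
          updateChat(from, to);
      }, 1);

    The trailing comma after the error handler in the $.ajax options is also worth removing, since older IE versions choke on it.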

    Read the article

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem At my workplace, we're trying to find the best way to create automated-tests for an almost wholly javascript-driven intranet application. Right now we're stuck trying to find a good tradeoff between: Application code in reusable and nest-able GUI components. Tests which are easily created by the testing team Tests which can be recorded once and then automated Tests which do not break after small cosmetic changes to the site XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM-element on the page... well, that is its own headache, complicated by re-usable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface? Limitations We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x. The test-making folks are mostly trained to use Selenium IDE to directly record things. The test leads would prefer a page-unique HTML ID on each clickable element on the page... Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go. We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique. Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing. It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use. Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved Current thoughts I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.

    Read the article

  • Sending a string to a WCF service using jQuery AJAX: why can I only send strings of numbers?

    - by Robodude
    Hi Guys, For some reason, I'm only able to pass strings containing numbers to my web service when using jquery ajax. This hasn't been an issue so far because I was always just passing IDs to my wcf service. But I'm trying to do something more complex now but I can't figure it out. In my interface: [OperationContract] [WebInvoke(ResponseFormat = WebMessageFormat.Json)] DataTableOutput GetDataTableOutput(string json); My webservice: public DataTableOutput GetDataTableOutput(string json) { DataTableOutput x = new DataTableOutput(); x.iTotalDisplayRecords = 9; x.iTotalRecords = 50; x.sColumns = "1"; x.sEcho = "1"; x.aaData = null; return x; } Javascript/Jquery: var x = "1"; $.ajax({ type: "POST", async: false, url: "Services/Service1.svc/GetDataTableOutput", contentType: "application/json; charset=utf-8", data: x, dataType: "json", success: function (msg) { }, error: function (XMLHttpRequest, textStatus, errorThrown) { //alert(XMLHttpRequest.status); //alert(XMLHttpRequest.responseText); } }); The above code WORKS perfectly. But when I change x to "t" or even to "{'test':'test'}" I get a Error 400 Bad Request error in Firebug. Thanks, John EDIT: Making some progress! data: JSON.stringify("{'test':'test'}"), Sends the string to my function! EDIT2: var jsonAOData = JSON.stringify(aoData); $.ajax({ type: "POST", async: false, url: sSource, contentType: "application/json; charset=utf-8", data: "{'Input':" + jsonAOData + "}", dataType: "json", success: function (msg) { }, error: function (XMLHttpRequest, textStatus, errorThrown) { //alert(XMLHttpRequest.status); //alert(XMLHttpRequest.responseText); } }); EDIT3: I modified the code block I put in EDIT2 up above. Swapping the " and ' did the trick! $.ajax({ type: "POST", async: false, url: sSource, contentType: "application/json; charset=utf-8", data: '{"Input":' + jsonAOData + '}', dataType: "json", success: function (msg) { }, error: function (XMLHttpRequest, textStatus, errorThrown) { //alert(XMLHttpRequest.status); //alert(XMLHttpRequest.responseText); } }); However, I have a new problem: public DataTableOutput GetDataTableOutput(DataTableInputOverview Input) { The input here is completely null. The values I passed from jsonAOData didn't get assigned to the DataTableInputOverview Input variable. :(
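
    A likely reason only digit strings work: the POST body must itself be valid JSON, and a raw 1 parses as a JSON number while a raw t or a single-quoted '{'test':'test'}' does not (assuming the operation ends up using the bare body style, which nothing in the attribute shown overrides). Encoding the value with JSON.stringify - as the later edits converge on - keeps the body valid for any string; a sketch of the original call:

      var x = "t";                      // arbitrary string now
      $.ajax({
          type: "POST",
          async: false,
          url: "Services/Service1.svc/GetDataTableOutput",
          contentType: "application/json; charset=utf-8",
          data: JSON.stringify(x),      // produces "\"t\"", i.e. a valid JSON string
          dataType: "json",
          success: function (msg) { },
          error: function (XMLHttpRequest, textStatus, errorThrown) { }
      });

    For the complex-parameter version in EDIT2/EDIT3, the {"Input": ...} wrapper only maps onto the Input parameter if the operation uses BodyStyle = WebMessageBodyStyle.WrappedRequest; with the bare style the serialized object itself would be expected as the body.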

    Read the article

  • How to populate the span tags between specific tags.

    - by Rachel
    I want to populate span tags for all text nodes between the self closing tags s1 and s2 having the same id.If the transformation of the input to the output form requires the list of ids of the s1 , s2 tag pairs say 1, 2 etc i can assume that it is in place in any form if required.The input can have any structure and not exactly the same as seen in the sample input. It can have any number of s1, s2 tag pairs but each pair will have a unique id. Hope i am clear. How can this be achieved using XSL? Input: <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>This is my title</title> </head> <body> <h1 align="center">This <s1 id="1" />is my <s2 id="1" />heading</h1> <p> Sample content <s1 id="2" />Some text here. Some content here. </p> <p> Here you <s2 id="2" />go. </p> </body> </html> Desired output: <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>This is my title</title> </head> <body> <h1 align="center">This <span id="1">is my </span>heading</h1> <p> Sample content <span id="2">Some text here. Some content here.</span> </p> <p> <span id="2">Here you </span>go. </p> </body> </html>

    Read the article

  • Is it possible to use .data() as a search criterion?

    - by Andrew
    I have a pretty complex chat application going on, and there are multiple chat panes, chat entries, chat submits, etc. going on in the same window. At first I was going to do something like.... <input type="text" class="chattext" id="chattext-42"> <input type="text" class="chattext" id="chattext-93"> <input type="button" class="chatsubmit" id="chatsubmit-42"> <input type="button" class="chatsubmit" id="chatsubmit-93"> ... etc. (of course this is vastly simplified, they'd be in separate divs, separate visibilities, etc) So, when they clicked on a .chatsubmit, it would then get the id of that and find the last two characters for the chat ID. This presents some problems, as it would require rewrites if IDs changed lengths, and seems just plain inelegant to me. I then remembered the .data() facility in jQuery... I thought, maybe I could do it more like this: <input type="text" class="chattext"> ... and add a .data("id", 42) to this one <input type="button" class="chatsubmit"> ... and add a .data("id", 42) So that when they click chatsubmit, it gets the ID, and then finds the chattext with that ID and processes it. But looking at the documentation, I don't see an easy way to search by this. For example, let's say the event target in this case is the chatsubmit with the data('id') of 42... var ID = $(event.target).data('id'); // Sets it to 42 var chattext = ... And here I run into the trouble. How do I find which DOM element matches a class of chattext and a data('id') of 42? Is there any easy method, or do I have to search every .chattext for the one with an id of 42? Or is there another easy way of doing this? I did consider the possibility of the container div having the ID, which would make it, I think,? slightly easier to get. But if this works, it could be dealing with things in other container divs as well, making that not a long-term solution. Edit: Literally seconds after posting this, I found this: http://james.padolsey.com/javascript/extending-jquerys-selector-capabilities/ which includes information on extending the selector to data. So I'll try that out, and in the meantime, is this a completely foolhardy way of handling this?
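
    Even without extending the selector engine, jQuery's .filter() with a function can match on .data(); a sketch for the chat example:

      var ID = $(event.target).data('id');    // e.g. 42

      var chattext = $(".chattext").filter(function() {
          return $(this).data('id') === ID;
      });

    If the id were instead written into the markup as a data-id="42" attribute, a plain attribute selector like $('.chattext[data-id="42"]') would also work, and jQuery 1.4.3+ reads such attributes through .data() as well.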

    Read the article

  • .htaccess mod_rewrite URL query

    - by 1001001
    I was hoping someone could help me out. I'm building a CRM application and need help modifying the .htaccess file to clean up the URLs. I've read every post regarding .htaccess and mod_rewrite and I've even tried using http://www.generateit.net/mod-rewrite/ to obtain the results with no success. Here is what I am attempting to do. Let's call the base URL www.domain.com We are using php with a mysql back-end and some jQuery and javascript In that "root" folder is my .htaccess file. I'm not sure if I need a .htaccess file in each subdirectory or if one in the root is enough. We have several actual directories of files including "crm", "sales", "finance", etc. First off we want to strip off all the ".php" extensions which I am able to do myself thanks to these posts. However, the querying of the company and contact IDs are where I am stuck. Right now if I load www.domain.com/crm/companies.php it displays all the companies in a list. If I click on one of the companies it uses javascript to call a "goto_company(x)" jQuery script that writes a form and submit that form based on the ID (x) of the company. This works fine and keeps the links clean as all the end user sees is www.domain.com/crm/company.php. However you can't navigate directly to a company. So we added a few lines in PHP to see if the POST is null and try a GET instead allowing us to do www.domain.com/crm/company.php?companyID=40 which displays company #40 out of the database. I need to rewrite this link, and all other associated links to www.domain.com/crm/company/40 I've tried everything and nothing seems to work. Keep in mind that I need to do this for "contacts" and also on the sales portion of the app will need to do something for "deals". To summarize here's what I am looking to do: Change www.domain.com/crm/dash.php to www.domain.com/crm/dash Change www.domain.com/crm/company.php?companyID=40 to www.domain.com/crm/company/40 Change www.domain.com/crm/contact.php?contactID=27 to www.domain.com/crm/contact/27 Change www.domain.com/sales/dash.php to www.domain.com/sales/dash Change www.domain.com/sales/deal.php?dealID=6 to www.domain.com/sales/deal/6 (40, 27, and 6 are just arbitrary numbers as examples) Just for reference, when I used the generateit.net/mod-rewrite site using www.domain.com/crm/company.php?companyID=40 as an example, here is what it told me to put in my .htaccess file: Options +FollowSymLinks RewriteEngine On RewriteRule ^crm/company/([^/]*)$ /crm/company.php?companyID=$1 [L] Needless to say that didn't work.
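
    A hedged sketch of a single root .htaccess covering the examples listed (assuming the rules live only in the document root; the conditions may need adjusting to the real directory layout):

      Options +FollowSymLinks
      RewriteEngine On
      RewriteBase /

      # /crm/company/40 -> /crm/company.php?companyID=40 (same pattern for the others)
      RewriteRule ^crm/company/([0-9]+)$  crm/company.php?companyID=$1  [L,QSA]
      RewriteRule ^crm/contact/([0-9]+)$  crm/contact.php?contactID=$1  [L,QSA]
      RewriteRule ^sales/deal/([0-9]+)$   sales/deal.php?dealID=$1      [L,QSA]

      # /crm/dash, /sales/dash, ... -> the matching .php file when one exists
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{DOCUMENT_ROOT}/$1/$2.php -f
      RewriteRule ^([^/]+)/([^/.]+)$ $1/$2.php [L]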

    Read the article

  • Optimizing sparse dot-product in C#

    - by Haggai
    Hello. I'm trying to calculate the dot-product of two very sparse associative arrays. The arrays contain an ID and a value, so the calculation should be done only on those IDs that are common to both arrays, e.g. <(1, 0.5), (3, 0.7), (12, 1.3) * <(2, 0.4), (3, 2.3), (12, 4.7) = 0.7*2.3 + 1.3*4.7 . My implementation (call it dict) currently uses Dictionaries, but it is too slow to my taste. double dot_product(IDictionary<int, double> arr1, IDictionary<int, double> arr2) { double res = 0; double val2; foreach (KeyValuePair<int, double> p in arr1) if (arr2.TryGetValue(p.Key, out val2)) res += p.Value * val2; return res; } The full arrays have about 500,000 entries each, while the sparse ones are only tens to hundreds entries each. I did some experiments with toy versions of dot products. First I tried to multiply just two double arrays to see the ultimate speed I can get (let's call this "flat"). Then I tried to change the implementation of the associative array multiplication using an int[] ID array and a double[] values array, walking together on both ID arrays and multiplying when they are equal (let's call this "double"). I then tried to run all three versions with debug or release, with F5 or Ctrl-F5. The results are as follows: debug F5: dict: 5.29s double: 4.18s (79% of dict) flat: 0.99s (19% of dict, 24% of double) debug ^F5: dict: 5.23s double: 4.19s (80% of dict) flat: 0.98s (19% of dict, 23% of double) release F5: dict: 5.29s double: 3.08s (58% of dict) flat: 0.81s (15% of dict, 26% of double) release ^F5: dict: 4.62s double: 1.22s (26% of dict) flat: 0.29s ( 6% of dict, 24% of double) I don't understand these results. Why isn't the dictionary version optimized in release F5 as do the double and flat versions? Why is it only slightly optimized in the release ^F5 version while the other two are heavily optimized? Also, since converting my code into the "double" scheme would mean lots of work - do you have any suggestions how to optimize the dictionary one? Thanks! Haggai
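
    For reference, a minimal sketch of the two-array ("double") variant being timed above, assuming both ID arrays are sorted ascending - a straight two-pointer merge with no hashing or allocation inside the loop:

      static double DotProduct(int[] ids1, double[] vals1, int[] ids2, double[] vals2)
      {
          double res = 0;
          int i = 0, j = 0;

          while (i < ids1.Length && j < ids2.Length)
          {
              if (ids1[i] == ids2[j])
              {
                  res += vals1[i] * vals2[j];
                  i++;
                  j++;
              }
              else if (ids1[i] < ids2[j])
              {
                  i++;
              }
              else
              {
                  j++;
              }
          }
          return res;
      }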

    Read the article

  • What is the best way to reduce code and loop through a hierarchical commission script?

    - by JM4
    I have a script which currently "works" but is nearly 3600 lines of code and makes well over 50 database calls within a single script. From my experience, there is no way to really "loop" the script and minimize it because each call to the database is a subquery of the ones before based on referral ids. Perhaps I can give a very simple example of what I am trying to accomplish and see if anybody has experience with something similar. In my example, there are three tables: Table 1 - Sellers ID | Comm_level | Parent ----------------------------------- 1 | 4 | NULL 2 | 3 | 1 3 | 2 | 1 4 | 2 | 2 5 | 2 | 2 6 | 1 | 3 Where ID is the id of one of our sales agents, comm_level will determine what his commission percentage is for each product he sells, parent indicates the ID for whom recruited that particular agent. In the example above, 1 is the top agent, he recruited two agents, 2 and 3. 2 recruited two agents, 4 and 5. 3 recruited one agent, 6. NOTE: An agent can NEVER recruit anybody equal to or higher than their own level. Table 2 - Commissions Level | Item 1 | Item 2 | Item 3 ----------------------------------------------------- 4 | .5 | .4 | .3 3 | .45 | .35 | .25 2 | .4 | .3 | .2 1 | .35 | .25 | .15 This table lays out the commission percentages for each agent based on their actual comm_level (if an agent is at a level 4, he will receive 50% on every item 1 sold, 40% on every item 2, 30% on every item 3 and so on. Table 3 - Items Sold ID | Item --------------------- 4 | item_1 4 | item_2 1 | item_1 2 | item_3 6 | item_2 1 | item_3 This table pairs the actual item sold with the seller who sold the item. When generating the commission report, calculating individual values is very simple. Calculating their commission based on their sub_sellers however is very difficult. In this example, Seller ID 1 gets a piece of every single item sold. The commission percentages indicate individual sales or the height of their commission. For example: When seller ID 6 sold one of item_2 above, the tree for commissions will look like the following: -ID 6 - 25% of cost(item_1) -ID 3 - 5% of cost(item_1) - (30% is his comm - 25% comm of seller id 6) -ID 1 - 10% of cost(item_1) - (40% is his comm - 30% of seller id 3) This must be calculated for every agent in the system from the top down (hence the DB calls within while loops throughout my enormous script). Anybody have a good suggestion or samples they may have used in the past?

    Read the article

  • PHP getting Twitter API JSON file contents without OAuth (Almost have it)

    - by DexCurl
    Hey guys, I have this script working fine with OAuth, but I accidentally nuked my 350 API hits with a stupid while statement :( I'm trying to get data from the Twitter API without OAuth, I can't figure it out (still pretty new), heres what I have <html> <body> <center> <hr /> <br /> <table border="1"> <tr><td>ScreenName</td><td>Followed back?</td></tr> <?php //twitter oauth deets $consumerKey = 'x'; $consumerSecret = 'x'; $oAuthToken = 'x'; $oAuthSecret = 'x'; // Create Twitter API objsect require_once("twitteroauth.php"); $oauth = new TwitterOAuth($consumerKey, $consumerSecret, $oAuthToken, $oAuthSecret); //get home timeline tweets and it is stored as an array $youfollow = $oauth->get('http://api.twitter.com/1/friends/ids.json?screen_name=lccountdown'); $i = 0; //start loop to print our results cutely in a table while ($i <= 20){ $youfollowid = $youfollow[$i]; $resolve = "http://api.twitter.com/1/friendships/exists.json?user_a=".$youfollow[$i]."&user_b=jwhelton"; $followbacktest = $oauth->get($resolve); //$homedate= $hometimeline[$i]->created_at; //$homescreenname = $hometimeline[$i]->user->screen_name; echo "<tr><td>".$youfollowid."</td><td>".$followbacktest."</td></tr>"; $i++; } ?> </table> </center> </body> </html> Neither of the two Twitter functions require authentication, so how can I get the same results? Thanks guys, Dex

    Read the article
