Search Results

Search found 28744 results on 1150 pages for 'higher order functions'.

Page 137/1150 | < Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >

  • Complex SQL query help on aggregating values for nested subquery

    - by François Beausoleil
    Hi! I have people, companies, employees, events and event kinds. I'm making a report/follow-up sheet where people, companies and employees are the rows, and the columns are event kinds. Event kinds are simple values describing: "Promised Donation", "Received Donation", "Phoned", "Followed up" and such. Event kinds are ordered:

        CREATE TABLE event_kinds (
            id,
            name,
            position);

    Events hold the actual reference to the event:

        CREATE TABLE events (
            id,
            person_id,
            company_id,
            referrer_id,
            event_kind_id,
            created_at);

    referrer_id is another reference to people. It is the person who sent the information/tip along, and is an optional field, although I sometimes want to filter on an event_kind that has a specific referrer, while I don't for other event kinds. Notice I don't have an employee ID reference. The reference exists, but is implied. I have application code to validate that person_id and company_id really reference an employee record. The other tables are pretty basic:

        CREATE TABLE people (id, name);
        CREATE TABLE companies (id, name);
        CREATE TABLE employees (id, person_id, company_id);

    I'm trying to achieve the following report:

        Referrer  Phoned  Promised  Donated
        Francois  Feb 16th  Feb 20th  Mar 1st
        Apple (Steve Jobs)  Steve Ballmer  Mar 3rd
        IBM  Bill Gates  Mar 7th

    The first row is a people record, the 2nd is an employee, and the 3rd is a company. If I asked for referrer Bill Gates for Phoned event kinds, I'd only see the 3rd row, while asking for Steve and Phoned would return no rows. Right now, I do 3 queries: one for companies, one for people and a last one for employees. I want the event kind columns to be ordered, but I do that in application code and show it properly there. Here's where I'm at so far:

        SELECT companies.id, companies.name,
          (SELECT events.id FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = companies.id
              AND events.person_id IS NULL AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9,
          (SELECT events.id FROM events
            WHERE events.company_id = companies.id AND events.person_id IS NULL
              AND events.event_kind_id = 10
            ORDER BY created_at DESC LIMIT 1) event_kind_10,
          (SELECT events.created_at FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = companies.id
              AND events.person_id IS NULL AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "companies"

        SELECT people.id, people.name,
          (SELECT events.id FROM events
            WHERE events.referrer_id = 1470 AND events.company_id IS NULL
              AND events.person_id = people.id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9,
          (SELECT events.id FROM events
            WHERE events.company_id IS NULL AND events.person_id = people.id
              AND events.event_kind_id = 10
            ORDER BY created_at DESC LIMIT 1) event_kind_10,
          (SELECT events.created_at FROM events
            WHERE events.referrer_id = 1470 AND events.company_id IS NULL
              AND events.person_id = people.id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "people"

        SELECT employees.id, employees.company_id, employees.person_id,
          (SELECT events.id FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id
              AND events.person_id = employees.person_id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9,
          (SELECT events.id FROM events
            WHERE events.company_id = employees.company_id
              AND events.person_id = employees.person_id AND events.event_kind_id = 10
            ORDER BY created_at DESC LIMIT 1) event_kind_10,
          (SELECT events.created_at FROM events
            WHERE events.referrer_id = 1470 AND events.company_id = employees.company_id
              AND events.person_id = employees.person_id AND events.event_kind_id = 9
            ORDER BY created_at DESC LIMIT 1) event_kind_9_order
        FROM "employees"

    I rather suspect I'm doing this wrong. There should be an "easier" way to do it. One other filter criterion would be to filter on people/company names: WHERE LOWER(companies.name) LIKE '%apple%'. Note that I'm ordering by the dates of event_kind_9 here, and a secondary sort is by person/company name. To summarize: I want to paginate the result set, find the latest event for each cell, order the result set by the date of the latest event and by company/person name, and filter by referrer in some event kinds, but not others. For reference, I'm using PostgreSQL, from Ruby, ActiveRecord/Rails. The solution is pure SQL though.
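    One way to collapse the per-kind subqueries into a single pass is conditional aggregation. The sketch below is not the poster's solution, just an illustration of the idea against the schema shown above; the kind ids 9/10 and referrer id 1470 are taken from the posted queries, only the companies variant is shown, and only the latest dates (not the event ids) are returned:

    ```sql
    -- Latest event date per event kind, pivoted into columns (PostgreSQL).
    SELECT c.id,
           c.name,
           MAX(CASE WHEN e.event_kind_id = 9
                     AND e.referrer_id = 1470 THEN e.created_at END) AS event_kind_9_at,
           MAX(CASE WHEN e.event_kind_id = 10 THEN e.created_at END) AS event_kind_10_at
    FROM companies c
    LEFT JOIN events e
           ON e.company_id = c.id
          AND e.person_id IS NULL
    GROUP BY c.id, c.name
    ORDER BY event_kind_9_at DESC NULLS LAST, c.name;
    ```

    The same shape works for the people and employees variants by changing the join condition, and LIMIT/OFFSET can be applied to the grouped result for pagination.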

    Read the article

  • PHP - How to Pass Multiple Values in a Form Field

    - by Tall boY
    Hi, I have a PHP-based sorting method with a drop-down menu to choose the number of rows, and it works fine. I also have sorting links to sort by id and title, which also work fine. But together they don't work: when I sort (say by title) using the links, the result gets sorted by title, but if I then change the number of rows using the drop-down menu, the rows-per-page changes and the result falls back to the default id sort. The sorting code for id and title is:

        if ($orderby == 'title' && $sortby == 'asc') {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=title&sort=asc'>title-asc:</a></li> ";}
        else {echo " <li><a href='?rpp=$rowsperpage&order=title&sort=asc'>title-asc:</a></li> ";}
        if ($orderby == 'title' && $sortby == 'desc') {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=title&sort=desc'>title-desc:</a></li> ";}
        else {echo " <li><a href='?rpp=$rowsperpage&order=title&sort=desc'>title-desc:</a></li> ";}
        if ($orderby == 'id' && $sortby == 'asc') {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=id&sort=asc'>id-asc:</a></li> ";}
        else {echo " <li><a href='?rpp=$rowsperpage&order=id&sort=asc'>id-asc:</a></li> ";}
        if ($orderby == 'id' && $sortby == 'desc') {echo " <li id='scurrent'><a href='?rpp=$rowsperpage&order=id&sort=desc'>id-desc:</a></li> ";}
        else {echo " <li><a href='?rpp=$rowsperpage&order=id&sort=desc'>id-desc:</a></li> ";}
        ?>

    The sorting code for rows is:

        <form action="is-test.php" method="get">
          <select name="rpp" onchange="this.form.submit()">
            <option value="10" <?php if ($rowsperpage == 10) echo 'selected="selected"' ?>>10</option>
            <option value="20" <?php if ($rowsperpage == 20) echo 'selected="selected"' ?>>20</option>
            <option value="30" <?php if ($rowsperpage == 30) echo 'selected="selected"' ?>>30</option>
          </select>
        </form>

    This method passes only the rows per page (rpp) into the URL. I want it to pass order, sort and rpp. Is there a way to pass multiple values in form fields like this?

        <form action="is-test.php" method="get">
          <select name="rpp, order, sort" onchange="this.form.submit()">
            <option value="10, $orderby, $sortby" <?php if ($rowsperpage == 10) echo 'selected="selected"' ?>>10</option>
            <option value="20, $orderby, $sortby" <?php if ($rowsperpage == 20) echo 'selected="selected"' ?>>20</option>
            <option value="30, $orderby, $sortby" <?php if ($rowsperpage == 30) echo 'selected="selected"' ?>>30</option>
          </select>
        </form>

    This may seem silly, but it's just to give you an idea of what I'm trying to implement (I'm very new to PHP). Please suggest any way to make this work. Thanks.
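    The usual fix for a GET form like this is not to cram several values into one field, but to submit the extra values as hidden inputs. A sketch, assuming the $orderby, $sortby and $rowsperpage variables from the question (not tested against the poster's script):

    ```php
    <form action="is-test.php" method="get">
      <!-- Carry the current sort state along with the rows-per-page choice -->
      <input type="hidden" name="order" value="<?php echo htmlspecialchars($orderby); ?>">
      <input type="hidden" name="sort" value="<?php echo htmlspecialchars($sortby); ?>">
      <select name="rpp" onchange="this.form.submit()">
        <?php foreach (array(10, 20, 30) as $n): ?>
          <option value="<?php echo $n; ?>" <?php if ($rowsperpage == $n) echo 'selected="selected"'; ?>><?php echo $n; ?></option>
        <?php endforeach; ?>
      </select>
    </form>
    ```

    On submission the URL becomes ?order=...&sort=...&rpp=..., so the existing $_GET handling keeps both the sort and the page size.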

    Read the article

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented on Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve. There have never been two independent service curves for either one, as you currently find in BSD and Linux.

    Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist). This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one, as seen in the image below.

    Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper I can give them a higher priority by using a curve; that's what HFSC is all about after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both automatically have twice the priority of A2 and B2 (while still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g. if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference if A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase priorities of classes, why would I need "curves" at all? Why isn't real-time a flat value and link-share also a flat value? Why are they both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), why do I still need curves on each one? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants in order to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: Is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g. every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understand HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
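    For readers trying to map the question's hierarchy onto concrete configuration syntax, here is a rough sketch of how it might look with Linux tc (iproute2). This only illustrates the rt/ls/ul curve parameters under the assumption of a 512 kbit/s root on eth0; it is not an answer to any of the questions above:

    ```sh
    # Root qdisc plus the A/B inner classes and the four leaves from the example.
    tc qdisc add dev eth0 root handle 1: hfsc default 12
    tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit
    tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit   # class A
    tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit   # class B
    # A1/B1: a two-segment real-time curve (m1 for the first 20 ms, then m2) plus link-share
    tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
    tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
    # A2/B2: link-share only
    tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
    tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit
    ```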

    Read the article

  • NHibernate flush should save only dirty objects

    - by Emilian
    Why does NHibernate fire an update on firstOrder when saving secondOrder in the code below? I'm using optimistic locking on Order. Is there a way to tell NHibernate to update firstOrder when saving secondOrder only if firstOrder was modified?

        // Configure
        var cfg = new Configuration();
        var configFile = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory,
            "NHibernate.MySQL.config");
        cfg.Configure(configFile);

        // Create session factory
        var sessionFactory = cfg.BuildSessionFactory();

        // Create session
        var session = sessionFactory.OpenSession();

        // Set session to flush on transaction commit
        session.FlushMode = FlushMode.Commit;

        // Create first order
        var firstOrder = new Order();
        var firstOrder_OrderLine = new OrderLine
        {
            ProductName = "Bicycle",
            ProductPrice = 120.00M,
            Quantity = 1
        };
        firstOrder.Add(firstOrder_OrderLine);

        // Save first order
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.Save(firstOrder);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
            }
        }

        // Create second order
        var secondOrder = new Order();
        var secondOrder_OrderLine = new OrderLine
        {
            ProductName = "Hat",
            ProductPrice = 12.00M,
            Quantity = 1
        };
        secondOrder.Add(secondOrder_OrderLine);

        // Save second order
        using (var tx = session.BeginTransaction())
        {
            try
            {
                session.Save(secondOrder);
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
            }
        }

        session.Close();
        sessionFactory.Close();
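    Not an explanation of the spurious update, but one way to take firstOrder out of the equation, sketched under the assumption that it no longer needs to be tracked after its commit: evict it from the session before starting the second transaction.

    ```csharp
    // Sketch only: detach the already-committed order so the next flush cannot touch it.
    using (var tx = session.BeginTransaction())
    {
        try { session.Save(firstOrder); tx.Commit(); }
        catch { tx.Rollback(); }
    }
    session.Evict(firstOrder);   // ISession.Evict removes the instance from the session cache
    ```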

    Read the article

  • XSLT: use parameters in xsl:sort attributes

    - by fireeyedboy
    How do I apply a parameter to the select and order attributes of an xsl:sort element? I'd like to do this dynamically from PHP with something like this:

        $xsl = new XSLTProcessor();
        $xslDoc = new DOMDocument();
        $xslDoc->load( $this->_xslFilePath );
        $xsl->importStyleSheet( $xslDoc );
        $xsl->setParameter( '', 'sortBy', 'viewCount' );
        $xsl->setParameter( '', 'order', 'descending' );

    But I'd first have to know how to get this to work. I tried the following, but it gives me a 'compilation error': 'invalid value $order for order'. $sortBy doesn't seem to do anything either:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
            <xsl:output method="xml" indent="yes"/>
            <xsl:param name="sortBy" select="viewCount"/>
            <xsl:param name="order" select="descending"/>
            <xsl:template match="/">
                <media>
                    <xsl:for-each select="media/medium">
                        <xsl:sort select="$sortBy" order="$order"/>
                        // <someoutput>
                    </xsl:for-each>
                </media>
            </xsl:template>
        </xsl:stylesheet>
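    For what it's worth, in XSLT 1.0 the order attribute of xsl:sort is an attribute value template, so a parameter has to be wrapped in braces, and a dynamic sort key can be matched by element name. A sketch of the relevant line under those assumptions (paired with the setParameter calls shown above, and assuming viewCount is a child element of medium):

    ```xml
    <!-- order is an attribute value template; select compares child element names to $sortBy -->
    <xsl:sort select="*[name() = $sortBy]" order="{$order}" data-type="text"/>
    ```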

    Read the article

  • Ordering by formula fields in NHibernate

    - by Darin Dimitrov
    Suppose that I have the following mapping with a formula property:

        <class name="Planet" table="planets">
            <id name="Id" column="id">
                <generator class="native" />
            </id>
            <!-- somefunc() is a native SQL function -->
            <property name="Distance" formula="somefunc()" />
        </class>

    I would like to get all planets and order them by the Distance calculated property:

        var planets = session
            .CreateCriteria<Planet>()
            .AddOrder(Order.Asc("Distance"))
            .List<Planet>();

    This is translated to the following query:

        SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY somefunc()

    Desired query:

        SELECT Id as id0, somefunc() as formula0 FROM planets ORDER BY formula0

    If I set a projection with an alias it works fine:

        var planets = session
            .CreateCriteria<Planet>()
            .SetProjection(Projections.Alias(Projections.Property("Distance"), "dist"))
            .AddOrder(Order.Asc("dist"))
            .List<Planet>();

    SQL:

        SELECT somefunc() as formula0 FROM planets ORDER BY formula0

    but it populates only the Distance property in the result, and I'd really like to avoid projecting manually over all the other properties of my object (there could be many other properties). Is this achievable with NHibernate? As a bonus I would like to pass parameters to the native somefunc() SQL function. Anything producing the desired SQL is acceptable (replacing the formula field with subselects, etc...); the important thing is to have the calculated Distance property in the resulting object and to order by this distance inside SQL.

    Read the article

  • functional dependencies vs type families

    - by mhwombat
    I'm developing a framework for running experiments with artificial life, and I'm trying to use type families instead of functional dependencies. Type families seem to be the preferred approach among Haskellers, but I've run into a situation where functional dependencies seem like a better fit. Am I missing a trick? Here's the design using type families. (This code compiles OK.)

        {-# LANGUAGE TypeFamilies, FlexibleContexts #-}

        import Control.Monad.State (StateT)

        class Agent a where
          agentId :: a -> String
          liveALittle :: Universe u => a -> StateT u IO a
          -- plus other functions

        class Universe u where
          type MyAgent u :: *
          withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
          -- plus other functions

        data SimpleUniverse = SimpleUniverse
          { mainDir :: FilePath
            -- plus other fields
          }

        defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
        defaultWithAgent = undefined -- stub
        -- plus default implementations for other functions

        --
        -- In order to use my framework, the user will need to create a typeclass
        -- that implements the Agent class...
        --
        data Bug = Bug String deriving (Show, Eq)

        instance Agent Bug where
          agentId (Bug s) = s
          liveALittle bug = return bug -- stub

        --
        -- .. and they'll also need to make SimpleUniverse an instance of Universe
        -- for their agent type.
        --
        instance Universe SimpleUniverse where
          type MyAgent SimpleUniverse = Bug
          withAgent = defaultWithAgent -- boilerplate
          -- plus similar boilerplate for other functions

    Is there a way to avoid forcing my users to write those last two lines of boilerplate? Compare with the version using fundeps, below, which seems to make things simpler for my users. (The use of UndecidableInstances may be a red flag.) (This code also compiles OK.)

        {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
                     FlexibleInstances, UndecidableInstances #-}

        import Control.Monad.State (StateT)

        class Agent a where
          agentId :: a -> String
          liveALittle :: Universe u a => a -> StateT u IO a
          -- plus other functions

        class Universe u a | u -> a where
          withAgent :: Agent a => (a -> StateT u IO a) -> String -> StateT u IO ()
          -- plus other functions

        data SimpleUniverse = SimpleUniverse
          { mainDir :: FilePath
            -- plus other fields
          }

        instance Universe SimpleUniverse a where
          withAgent = undefined -- stub
          -- plus implementations for other functions

        --
        -- In order to use my framework, the user will need to create a typeclass
        -- that implements the Agent class...
        --
        data Bug = Bug String deriving (Show, Eq)

        instance Agent Bug where
          agentId (Bug s) = s
          liveALittle bug = return bug -- stub

        --
        -- And now my users only have to write stuff like...
        --
        u :: SimpleUniverse
        u = SimpleUniverse "mydir"
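    One possible way to trim the boilerplate in the type-family version, sketched here rather than taken from the question: give the class methods default implementations, so an instance only has to supply the associated type. (The associated type line itself still has to be written by the user.)

    ```haskell
    {-# LANGUAGE TypeFamilies #-}

    import Control.Monad.State (StateT)

    class Universe u where
      type MyAgent u :: *
      withAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
      withAgent = defaultWithAgent   -- default keeps the instance body empty
      -- other methods can be defaulted the same way

    defaultWithAgent :: (MyAgent u -> StateT u IO (MyAgent u)) -> String -> StateT u IO ()
    defaultWithAgent = undefined     -- stub, as in the question

    -- The user-side instance then shrinks to:
    -- instance Universe SimpleUniverse where
    --   type MyAgent SimpleUniverse = Bug
    ```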

    Read the article

  • DDD and MVC Models hold ID of separate entity or the entity itself?

    - by Dr. Zim
    If you have an Order that references a customer, does the model include the ID of the customer or a copy of the customer object, like a value object (thinking DDD)? I would like to do this:

        public class Order
        {
            public int ID { get; set; }
            public Customer customer { get; set; }
            ...
        }

    Right now I do this:

        public class Order
        {
            public int ID { get; set; }
            public int customerID { get; set; }
            ...
        }

    It would be more convenient to include the complete customer object, rather than an ID, in the view model passed to the form. Otherwise, I need to figure out how to get the vendor information to the view that the order references by ID. This also implies that the repository understands how to deal with the customer object that it finds within the order object when they call save (if we select the first option). If we select the second option, we will need to know where in the view model to put it. It is certain they will select an existing customer. However, it is also certain they may want to change the information in-place on the display form. One could argue to have the controller extract the customer object, submit customer changes separately to the repository, then submit changes to the order, keeping the customerID in the order.
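    A common middle ground, sketched here only as an illustration and not as the poster's design: keep the domain Order referencing the customer by ID, and let a dedicated view model carry both pieces for the form. The type and repository names below are hypothetical.

    ```csharp
    // Hypothetical view model: bundles the order with its customer for display/editing.
    public class OrderEditViewModel
    {
        public Order Order { get; set; }        // still holds customerID internally
        public Customer Customer { get; set; }  // loaded separately for the form
    }

    // Controller sketch: fetch both and hand them to the view together.
    // var order = orderRepository.GetById(orderId);
    // var model = new OrderEditViewModel
    // {
    //     Order = order,
    //     Customer = customerRepository.GetById(order.customerID)
    // };
    ```

    The controller can then persist customer edits and order edits through their own repositories, which matches the last option discussed in the question.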

    Read the article

  • LINQ to SQL - Insert yielding strange behavior.

    - by Isaac
    Hi, I'm trying to insert several newly created items into the database. I have a LINQ2SQL generated class called "Order". Inside Order there's a property called "OrderItems", which is also generated by LINQ2SQL and represents the items of that order. So far so good. The problem I'm having right now is when I try to add more than one newly created OrderItem to an Order, i.e.:

        Order o = orderWorker.GetById( 10 );
        for( int i = 0; i < 5; ++i )
        {
            OrderItem oi = new OrderItem
            {
                Order = order,
                Price = 100,
                ShippingPrice = 100,
                ShippingMethod = ...
            };
            o.OrderItems.Add( oi );
        }
        context.SubmitChanges();

    Unfortunately, only a single entity is being added. Yes, I checked the generated SQL by adding Context.Log = Console.Out, and yes, only one statement was created. Any clues? By the way, I know I'm not using InsertOnSubmit, but the documentation says:

        You can explicitly request Inserts by using InsertOnSubmit. Alternatively, LINQ to SQL can infer Inserts by finding objects connected to one of the known objects that must be updated. For example, if you add an Untracked object to an EntitySet(TEntity) or set an EntityRef(TEntity) to an Untracked object, you make the Untracked object reachable by way of tracked objects in the graph. While processing SubmitChanges, LINQ to SQL traverses the tracked objects and discovers any reachable persistent objects that are not tracked. Such objects are candidates for insertion into the database.

    Thank you very much for your time.

    Read the article

  • Require reasonably random results from an SQL SELECT query within a Joomla article (Cache enabled)

    - by Shrinivas
    Setup: Joomla website on a LAMP stack. I have a MySQL table containing some records; these are queried by a simple SELECT in the Joomla article, as pasted below. This specific Joomla website has Caching turned on in Joomla's Global Configuration. I need to randomize the order in which I display the result set each time the page is loaded. Regular PHP/MySQL would offer me two approaches for this:

    1. use 'ORDER BY RAND()' or any of a number of methods to allow a SELECT query to return reasonably random results;
    2. once PHP gets the result from the SELECT into an array, shuffle the array to get a reasonably random order of array items.

    However, as this Joomla instance has Caching turned ON in its Global Configuration, either of the above approaches fails. The first time I load the page the order is randomized, but any further reloads do not cause the order to change, as the page is delivered from cache. The instant the Cache is disabled, both approaches (shuffle / ORDER BY RAND) work perfectly. What am I missing? How do I override the global cache for this specific article? A very simple requirement, that is met by both PHP and MySQL reasonably well, is blocked by the Joomla cache that I cannot turn off. The PHP that returns results from the database:

        $db = JFactory::getDBO();
        $select = "SELECT id FROM jos_mytable;"; //order by RAND()
        $db->setQuery($select);
        echo $db->getQuery(); //Show me the Query!
        $rows = $db->loadObjectList();
        //shuffle($rows);
        foreach($rows as $row) {
            echo "$row->id";
        }

    Read the article

  • undefined method `code' for nil:NilClass message with rails and a legacy database

    - by Jude Osborn
    I'm setting up a very simple Rails 3 application to view data in a legacy MySQL database. The legacy database is mostly Rails-ORM compatible, except that foreign key fields are pluralized. For example, my "orders" table has a foreign key field to the "companies" table called "companies_id" (rather than "company_id"). So naturally I'm having to use the ":foreign_key" attribute of "belongs_to" to set the field name manually. I haven't used Rails in a few years, but I'm pretty sure I'm doing everything right, yet I get the following error when trying to access "order.currency.code":

        undefined method `code' for nil:NilClass

    This is a very simple application so far. The only thing I've done is generate the application and a bunch of scaffolds for each of the legacy database tables. Then I've gone into some of the models to make adjustments to accommodate the above-mentioned difference in database naming conventions, and added some fields to the views. That's it. No funny business. So my database tables look like this (relevant fields only):

        orders
        ------
        id
        description
        invoice_number
        currencies_id

        currencies
        ----------
        id
        code
        description

    My Order model looks like this:

        class Order < ActiveRecord::Base
          belongs_to :currency, :foreign_key => 'currencies_id'
        end

    My Currency model looks like this:

        class Currency < ActiveRecord::Base
          has_many :orders
        end

    The relevant view snippet looks like this:

        <% @orders.each do |order| %>
          <tr>
            <td><%= order.description %></td>
            <td><%= order.invoice_number %></td>
            <td><%= order.currency.code %></td>
          </tr>
        <% end %>

    I'm completely out of ideas. Any suggestions?
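    One thing worth checking, offered as a guess rather than a confirmed diagnosis: the error only says that order.currency returned nil for one particular row, so either that order's currencies_id doesn't match any currency, or the association needs the legacy column spelled out on both sides. A sketch of the models with the foreign key on both associations, plus a nil-safe view call:

    ```ruby
    class Order < ActiveRecord::Base
      belongs_to :currency, :foreign_key => 'currencies_id'
    end

    class Currency < ActiveRecord::Base
      # the has_many side also needs to be told about the legacy column name
      has_many :orders, :foreign_key => 'currencies_id'
    end

    # In the view, guard against orders with no matching currency row:
    # <td><%= order.currency.try(:code) %></td>
    ```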

    Read the article

  • PHP ZendOptimizer on Red Hat Enterprise Linux

    - by Jacob Kristensen
    I would like to install FlashMoto, and the requirements are not unreasonable: PHP 5.2.1 or higher, Zend Optimizer 3.3 or higher. However, my RHEL 5.4 provides me with PHP 5.1.6. So I tried the remi repository (http://rpms.famillecollet.com/), but it gave me PHP 5.3.1, and Zend Optimizer from zend.com does not support anything higher than 5.2.x. I also tried the dag repo, but it does not have PHP in any version. I also tried some RPMs that Oracle provides on their homepage, but they don't provide php-mbstring, which I also need. Does anyone know how to get PHP 5.2.1 installed on RHEL 5.4? Then I can probably sort out the Zend Optimizer install. Thanks in advance.

    Read the article

  • SQL Server Express Failed to Install

    - by JasCav
    I am attempting to install SQL Server Express (as part of the Visual Studio 2010 Professional installation), but it is failing. I am receiving this error log:

        [06/22/11,16:31:39] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/22/11,16:31:40] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/22/11,16:32:12] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/22/11,16:32:12] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/22/11,16:32:12] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/22/11,17:07:55] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/22/11,17:07:55] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,10:39:33] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/23/11,10:39:33] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,10:40:22] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/23/11,10:40:22] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/23/11,10:40:22] setup.exe: [2] Duplicate component in install order: SQL EULAs
        [06/23/11,10:53:48] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/23/11,10:53:48] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,13:19:26] Microsoft Visual Studio 2010 Professional - ENU: [2] UpdateFileFetcherFromMsi: Warning: Missing fwlink entry for cabinet: #SP.cab
        [06/23/11,13:19:26] setup.exe: [2] Duplicate module ID: {0AFE11CA-57AA-4F66-90BE-284F0F3A5ABD}
        [06/23/11,16:47:36] Microsoft SQL Server 2008 Express Service Pack 1 (x64): [2] Error code -2068643839 for this component is not recognized.
        [06/23/11,16:47:36] Microsoft SQL Server 2008 Express Service Pack 1 (x64): [2] Component Microsoft SQL Server 2008 Express Service Pack 1 (x64) returned an unexpected value.
        ***EndOfSession***

    I'm reading various articles which point towards something being wrong in the registry, but I can't find anything specific. Any suggestions for what I can do to fix this?

    Read the article

  • kernel software trap handling

    - by Tony
    I'm reading a book on Windows Internals and there's something I don't understand: "The kernel handles software interrupts either as part of hardware interrupt handling or synchronously when a thread invokes kernel functions related to the software interrupt." So does this mean that software interrupts or exceptions will only be handled under these conditions: a. When the kernel is executing a function from said thread related to the software exception(trap) b. when it is already handling a hardware trap Is my understanding of this correct? The next bit: "In most cases, the kernel installs front-end trap handling functions that perform general trap handling tasks before and after transferring control to other functions that field the trap." I don't quite understand what it means by 'front-end trap handling functions' and 'field the trap'? Can anyone help me?

    Read the article

  • Running two different website domains on one IP address

    - by Akshar Prabhu Desai
    Here is my Apache configuration file. I have two domain names running on the same IP, but I want them to point to different webapps. In this case both point to the one intended for e-yantra.org. If I copy-paste the akshar.co.in part before the e-yantra.org one, both start pointing to akshar.co.in. I have two A DNS records (one per domain name) pointing to the same IP.

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.e-yantra.org
            ServerAdmin [email protected]
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/ci/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            <Directory /var/www/db2/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.akshar.co.in
            ServerAdmin [email protected]
            DocumentRoot /var/akshar.co.in
            <Directory /var/akshar.co.in/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>
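    A hedged guess at the usual culprit for this symptom: when a request's Host header matches none of the ServerName/ServerAlias values (for example the bare domains without www), Apache serves it from the first name-based virtual host, which here is the e-yantra.org one. A sketch of the second vhost with an alias added, assuming the bare domain should also be served:

    ```apache
    <VirtualHost *:80>
        ServerName www.akshar.co.in
        ServerAlias akshar.co.in
        DocumentRoot /var/akshar.co.in
    </VirtualHost>
    ```

    Adding ServerAlias e-yantra.org to the first vhost, and running `apache2ctl -S` to see how the name-based vhosts were parsed, are the usual follow-up steps.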

    Read the article

  • How to Programmatically Split Data Using VBA Using Specific Logic

    - by Charlene
    This is an addition to my previous post here. The code that was previously supplied to me worked like a charm, but I am having issues modifying it to add some additional logic. I am creating a macro in VBA to do the following: I have raw order data that I need to transform based on some logic.

    Raw Data:

        order-id       product-num     date       buyer-name  prod-name  qty-purc  sales-tax  freight  order-st
        0000000000-00  10000000000000  5/29/2014  John Doe    Product 0  1         1.00       1.50     GA
        0000000000-00  10000000000001  5/29/2014  John Doe    Product 1  2         1.00       1.50     GA
        0000000000-00  10000000000002  5/29/2014  John Doe    Product 2  1         1.00       2.00     GA
        0000000000-01  10000000000002  5/30/2014  Jane Doe    Product 2  1         0.00       0.00     PA
        0000000000-01  10000000000003  5/30/2014  Jane Doe    Product 3  1         0.00       0.00     PA

    Desired Outcome:

        HDR 0000000000-00 John Doe 5/29/2014
        CHG Tax 3.00
        CHG Freight 5.00
        ITM 10000000000000 Product 0 1
        ITM 10000000000001 Product 1 2
        ITM 10000000000002 Product 2 1
        HDR 0000000000-01 Jane Doe 5/30/2014
        ITM 10000000000002 Product 2 1
        ITM 10000000000003 Product 3 1

    The "CHG" rows are created based on the following logic: if the order-st is CA or GA, add the total of sales-tax and freight for each of the rows with the same order-id. If the order-st is NOT CA or GA, no CHG rows should be created. Any help would be appreciated - let me know if I left any details out!

    Read the article

  • Can't escape single quotes in shell

    - by user13743
    I'm trying to make a command to do a Perl substitution on a batch of PHP files in a directory. The string I want to replace has single quotes in it, and I can't get them to escape properly in the shell. I tried echoing the string with unescaped quotes, to see what Perl would be getting:

        echo 's/require_once('include.constants.php');/require_once('include.constants.php');require_once("./functions/include.session.inc.php");/g'

    and it doesn't have the single quotes in the result:

        s/require_once\(include.constants.php\);/require_once\(include.constants.php\);require_once\("\./functions/include\.session\.inc\.php"\);/g

    However, when I try to escape the single quotes:

        echo 's/require_once\(\'include\.constants\.php\'\);/require_once\(\'include\.constants\.php\'\);require_once\("\./functions/include\.session\.inc\.php"\);/g'

    I get the prompt to complete the command:

        >

    What I want it to parse to is this:

        s/require_once\('include.constants.php'\);/require_once\('include.constants.php'\);require_once\("\./functions/include\.session\.inc\.php"\);/g

    What am I doing wrong?
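    The usual answer here, shown as a sketch rather than a drop-in command: inside a single-quoted shell string a backslash does not escape anything, so each embedded quote has to be written as '\'' (close the quoted string, emit a literal quote, reopen it):

    ```sh
    # '\'' = end the single-quoted string, a literal ', then start a new single-quoted string
    echo 's/require_once\('\''include.constants.php'\''\);/require_once\('\''include.constants.php'\''\);require_once\("\.\/functions\/include\.session\.inc\.php"\);/g'
    ```

    The same string can then be handed to perl -pi -e as one argument. (Note that the /functions/ slashes also need escaping, or a different substitution delimiter, since / is the s/// delimiter here.)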

    Read the article

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to log in:

        # phpMyAdmin - Web based MySQL browser written in php
        #
        # Allows only localhost by default
        #
        # But allowing phpMyAdmin to anyone other than localhost should be considered
        # dangerous unless properly secured by SSL

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order Allow,Deny
            Allow from all
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/>
            <IfModule mod_authz_core.c>
                # Apache 2.4
                <RequireAny>
                    Require ip 127.0.0.1
                    Require ip ::1
                </RequireAny>
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Deny,Allow
                Allow from All
                Allow from 127.0.0.1
                Allow from ::1
            </IfModule>
        </Directory>

        # These directories do not require access over HTTP - taken from the original
        # phpMyAdmin upstream tarball
        #
        <Directory /usr/share/phpMyAdmin/libraries/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/lib/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/frames/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        # This configuration prevents mod_security at phpMyAdmin directories from
        # filtering SQL etc. This may break your mod_security implementation.
        #
        #<IfModule mod_security.c>
        #    <Directory /usr/share/phpMyAdmin/>
        #        SecRuleInheritance Off
        #    </Directory>
        #</IfModule>

    When I go to the phpMyAdmin web page I am not prompted for a user and password; instead I get the error message: Forbidden: You don't have permission to access /phpmyadmin on this server. My system is Fedora 20.
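    Fedora 20 ships Apache 2.4, where the old Order/Allow directives are only honoured through mod_access_compat, so a plausible fix (offered as a sketch, not a verified Fedora recipe) is to give the main phpMyAdmin directory an explicit 2.4-style rule:

    ```apache
    <Directory "/usr/share/phpMyAdmin/">
        <IfModule mod_authz_core.c>
            # Apache 2.4: allow any host to reach the login page
            Require all granted
        </IfModule>
        <IfModule !mod_authz_core.c>
            # Apache 2.2 fallback
            Order Allow,Deny
            Allow from all
        </IfModule>
    </Directory>
    ```

    After editing, `systemctl restart httpd` applies the change; phpMyAdmin's own config.inc.php still controls which MySQL users may actually log in.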

    Read the article

  • Excel or Access: how to group several lines in a table and insert contents in columns? ("split column")

    - by Martin
    I have a table containing data on sold products (shown in the example on the left). Its columns are:

    - Number of the order
    - Product Name
    - Attribute - specifies what is given in the following field "Value", e.g. Customer Name or Product Variant
    - Value - the value of the Attribute
    - Count - the number of products of this variant sold in the order

    That means: Product B has 2 variants, "c" and "d". Note that in Order 1 Product B was sold in Variant d only, because the letter "N" in field "D4" means "none". Note that in OrderNo 3 Product B was sold only in Variant c, because for Variant d field "D9" is "N"!! This is confusing, but it is the structure of the original data (which I cannot change). I need a way to convert the table on the left into a table like the one on the right, with one line for each product type and the columns:

    - Order Number
    - Product Name
    - Customer Name
    - Count (number of products sold in this order)
    - Variant - this is the problem, as it has to be filled with the ...

    So all rows with the same OrderNo and same product have to be grouped into one, and I hope it is clear what I need. I tried to do it with Pivot Tables, but that fails, as the Count is always in each line, no matter whether it has Value "N" or not, and for the products without variants there is only one line for each order, while for products with variants there are several... So how could I create the right table with a VBA macro in MS Excel, or maybe there is a trick in MS Access to do it directly or with an SQL query?

    Read the article

  • Bonnie does not provide speed for Sequential Input / Block

    - by Lqp1
    I'm using ProxmoxVE and I would like to run some benchmarks regarding the performance of this product. One of these benchmarks is bonnie++; it runs very well in a VM (qemu-kvm), but when I run it in a container (openVZ) it does not give me a reading speed (only writing). I don't understand why... Does anyone know what's happening? VMs and containers are Debian 7.4. Here's the output of bonnie in the container:

        root@ct2:/# bonnie++ -u root
        Using uid:0, gid:0.
        Writing a byte at a time...done
        Writing intelligently...done
        Rewriting...done
        Reading a byte at a time...done
        Reading intelligently...done
        start 'em...done...done...done...done...done...
        Create files in sequential order...done.
        Stat files in sequential order...done.
        Delete files in sequential order...done.
        Create files in random order...done.
        Stat files in random order...done.
        Delete files in random order...done.
        Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
        Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
        Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
        ct2              1G   843  99 59116   8 60351   4  4966  99 +++++ +++  2745   8
        Latency              9558us    3582ms     527ms    1672us     936us    5248us
        Version  1.96       ------Sequential Create------ --------Random Create--------
        ct2                 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                      files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                         16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
        Latency             19567us     358us     368us     107us      59us      25us
        1.96,1.96,ct2,1,1401810323,1G,,843,99,59116,8,60351,4,4966,99,+++++,+++,2745,8,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,9558us,3582ms,527ms,1672us,936us,5248us,19567us,358us,368us,107us,59us,25us

    The filesystem for / is of type "simfs", which is a pseudo filesystem for openVZ. Maybe it's related to this issue, but I can't find anyone with the same problem with bonnie and openVZ... Thanks for your help. Regards, Thomas.

    Read the article

  • In Wireshark's Protocol Hierarchy Statistics screen, is the total byte count of a capture the sum of the Bytes column or just the top line (Frame)?

    - by Howiecamp
    Part 1 - I'm looking at Wireshark's Protocol Hierarchy Statistics screen (sample below): is the total byte count of the capture the sum of the Bytes column or just the top line (Frame)? I'm 99% sure that it's the latter because of protocol rollup, but I wanted to confirm.

    Part 2 - From the Wireshark documentation on this screen: "Protocol layers can consist of packets that won't contain any higher layer protocol, so the sum of all higher layer packets may not sum up to the protocols packet count. Example: In the screenshot TCP has 85,83% but the sum of the subprotocols (HTTP, ...) is much less. This may be caused by TCP protocol overhead, e.g. TCP ACK packets won't be counted as packets of the higher layer)." Can you explain this?

    Read the article

  • Platform Builder: Cloning – the Linker is your Friend

    - by Bruce Eitman
    I was tasked this week with making a minor change to NetMsgBox() behavior. NetMsgBox() is a little function in NETUI that handles MessageBox() for the Network User Interface. The obvious solution is to clone the entire NETUI directory from Public\Common\Oak\Drivers (see Platform Builder: Clone Public Code for more on cloning). If you haven't already, take a minute to look in that folder. There are a lot of files in the folder, but I only needed to modify one function in one of those files. There must be a better way.

    Enter the linker. Instead of cloning the entire folder, here is what I did:

    1. Create a new folder in my Platform named NETUI (but the name isn't important)
    2. Copy the C file that I needed to modify to the new folder, in this case netui.c
    3. Copy a makefile from one of the other folders (really they are all the same)
    4. Run Sysgen_capture:
       - Open a build window (see Platform Builder: Build Tools, Opening a Build Window)
       - Change directories to the new folder
       - Run "Sysgen_capture netui"
    5. Rename sources.netui to sources
    6. Add the C file to sources as SOURCES=netui.c
    7. Modify the code
    8. Build the code

    Done. That is it; the functions from my new folder now replace the functions from the Public code and link with the rest to create NETUI.dll. There is one catch: if you remove any of the functions from the C file, linking will fail because the remaining functions will be found twice.

    Copyright © 2010 – Bruce Eitman All Rights Reserved

    Read the article

  • Why Oracle Delivers More Value than IBM in Data Integration Solutions

    - by irem.radzik(at)oracle.com
    For data integration projects, IT organizations look for a robust but easy-to-use solution, one that simplifies enterprise data architecture while providing exceptional value, not one that adds complexity and costs. This is a major challenge today for customers who are using IBM InfoSphere products like DataStage or Change Data Capture, whereas Oracle consistently delivers higher value with its data integration products such as Oracle Data Integrator and Oracle GoldenGate. There are many differentiators for Oracle's Data Integration offering in comparison to IBM. Here are the top five:

    1. Lower cost of ownership
    2. Higher performance in both real-time and bulk data movement
    3. Ease of use and flexibility
    4. Reliability
    5. Complete, Open, and Integrated Middleware Offering

    Architectural differences between the products contribute a great deal to these differences. First of all, Oracle's ETL architecture does not require a middle-tier transformation server, something IBM does require. Not only does it cost more to manage an additional transformation server (including energy costs), but it adds a performance bottleneck as well. In addition, IBM's data integration products are complex and often require lengthy professional services engagements to integrate. This translates to higher costs and delayed time to market. Then there's the reliability factor. Our customers choose Oracle GoldenGate over IBM's InfoSphere Change Data Capture product because Oracle GoldenGate is designed for mission-critical systems that require guaranteed data delivery and automatic recovery in case of process interruptions. On Thursday we will discuss these key differentiators in detail and provide examples of customers that chose Oracle over IBM in data integration projects. Join us on Thursday Feb 10th at 11am PT to learn how Oracle delivers more value than IBM in data integration solutions.

    Read the article
