Search Results

Search found 17924 results on 717 pages for 'z order'.


  • If you are forced to use an Anemic domain model, where do you put your business logic and calculated fields?

    - by Jess
    Our current O/RM tool does not really allow for rich domain models, so we are forced to utilize anemic (DTO) entities everywhere. This has worked fine, but I continue to struggle with where to put basic object-based business logic and calculated fields. Current layers: Presentation Service Repository Data/Entity Our repository layer has most of the basic fetch/validate/save logic, although the service layer does a lot of the more complex validation & saving (since save operations also do logging, checking of permissions, etc). The problem is where to put code like this: Decimal CalculateTotal(LineItemEntity li) { return li.Quantity * li.Price; } or Decimal CalculateOrderTotal(OrderEntity order) { Decimal orderTotal = 0; foreach (LineItemEntity li in order.LineItems) { orderTotal += CalculateTotal(li); } return orderTotal; } Any thoughts?
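
    One common workaround, sketched under the assumption that the entity types are the ones named above: keep such calculations in a static extension-method class that lives in a small domain-logic (or service) layer, so calling code reads as if the entities were rich while the DTOs stay O/RM-friendly.

      public static class OrderCalculations
      {
          public static decimal CalculateTotal(this LineItemEntity li)
          {
              return li.Quantity * li.Price;
          }

          public static decimal CalculateOrderTotal(this OrderEntity order)
          {
              decimal total = 0;
              foreach (LineItemEntity li in order.LineItems)
              {
                  total += li.CalculateTotal();   // reads like a rich-domain call: order.CalculateOrderTotal()
              }
              return total;
          }
      }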

    Read the article

  • SQL Server: export data via SQL query?

    - by rlb.usa
    I have FKs and PKs all over my database, and table data needs to be inserted in a certain order or else I get FK/PK insertion errors. I'm tired of executing the wizard again and again to transfer data one table at a time. In the SQL Server export data wizard there is an option to "Write a query to specify the data to transfer". I'd like to write the query myself and specify the correct order. Will this solve my problem? How do I do this? Can you provide a sample query (or a link to one)? The databases are on two different servers, SQL Server 2008 on each; the database names and permissions are the same; each table name and column is the same; and I need Identity Insert for each table.
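
    The query option in the wizard still maps one query to one destination table, so it may not remove the one-table-at-a-time pain by itself. One alternative people use instead of the wizard, sketched with a hypothetical Customers table and a linked server named SourceServer (both assumptions): load each table in dependency order, parents first, with IDENTITY_INSERT switched on.

      -- run against the destination database, one block per table, parents before children
      SET IDENTITY_INSERT dbo.Customers ON;

      INSERT INTO dbo.Customers (CustomerID, Name)
      SELECT CustomerID, Name
      FROM   [SourceServer].[SourceDb].dbo.Customers
      ORDER  BY CustomerID;

      SET IDENTITY_INSERT dbo.Customers OFF;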

    Read the article

  • Grouping with operands question

    - by Filip
    I have a table:

      mysql> desc kursy_bid;
      +-----------+-------------+------+-----+---------+-------+
      | Field     | Type        | Null | Key | Default | Extra |
      +-----------+-------------+------+-----+---------+-------+
      | datetime  | datetime    | NO   | PRI | NULL    |       |
      | currency  | varchar(6)  | NO   | PRI | NULL    |       |
      | value     | varchar(10) | YES  |     | NULL    |       |
      +-----------+-------------+------+-----+---------+-------+
      3 rows in set (0.01 sec)

    I would like to select some rows from the table, grouped by some time interval (it can be one day), where for each group I get the first row and the last row of the group, plus max(value) and min(value). I tried: select datetime, (select value order by datetime asc limit 1) open, (select value order by datetime desc limit 1) close, max(value), min(value) from kursy_bid_test where datetime > '2009-09-14 00:00:00' and currency = 'eurpln' group by month(datetime), day(datetime), hour(datetime); but the output is:

      | open   | close  | datetime            | max(value) | min(value) |
      +--------+--------+---------------------+------------+------------+
      | 1.4581 | 1.4581 | 2009-09-14 00:00:05 | 4.1712     | 1.4581     |
      | 1.4581 | 1.4581 | 2009-09-14 01:00:01 | 1.4581     | 1.4581     |

    As you can see, open and close are the same (but they shouldn't be). What should the query be to do what I want?
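
    One common MySQL workaround (a sketch against the kursy_bid table above): pull open and close out of an ordered GROUP_CONCAT per group instead of the bare correlated subselects, which is why they currently come back identical; value is a varchar, so it is also cast before MAX/MIN.

      SELECT MIN(datetime)                                                       AS bucket_start,
             SUBSTRING_INDEX(GROUP_CONCAT(value ORDER BY datetime ASC),  ',', 1) AS open,
             SUBSTRING_INDEX(GROUP_CONCAT(value ORDER BY datetime DESC), ',', 1) AS close,
             MAX(CAST(value AS DECIMAL(10,4)))                                   AS high,
             MIN(CAST(value AS DECIMAL(10,4)))                                   AS low
      FROM   kursy_bid
      WHERE  datetime > '2009-09-14 00:00:00'
        AND  currency = 'eurpln'
      GROUP  BY MONTH(datetime), DAY(datetime), HOUR(datetime);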

    Read the article

  • SQL syntax error

    - by Robert
    I'm using Microsoft SQL Server, which I think uses T-SQL or ANSI SQL. I want to search a database with a string. Matches that start with the string should come first, then sort alphabetically. I.e. if the table contains FOO, BAR and RAP, a search for the string 'R' should yield: RAP BAR, in that order. Here is my attempt: SELECT Name FROM MyTable WHERE (Name LIKE '%' + @name + '%') ORDER BY (IF(Name LIKE @name + '%',1,0)) The error message is: "must declare scalar variable @name"
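
    IF(...) is MySQL syntax; in T-SQL the conditional goes into a CASE expression, and @name has to be declared (or arrive as a procedure/command parameter), which is what the error is about. A sketch:

      DECLARE @name NVARCHAR(50);   -- normally supplied as a parameter rather than set here
      SET @name = N'R';

      SELECT Name
      FROM   MyTable
      WHERE  Name LIKE '%' + @name + '%'
      ORDER  BY CASE WHEN Name LIKE @name + '%' THEN 0 ELSE 1 END,
                Name;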

    Read the article

  • Accessing different connection strings at runtime in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HTTPContext. From everything that I have read so far, the HTTPContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve? UPDATE: I decided to abandon the modification of Web.config in lieu of a "request-scoped DataContext" pattern that I found here. From the looks of it, I believe it should give me the results I'm looking for. However, during the TextFixtureSetUp, I try to create a new copy of the database for testing purposes, and it fails silently. When I get to the tests, the repository still uses the production database connection string to load data.
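
    A sketch of one way to make that property work without touching HttpContext at all, so the same code runs under NUnit and under the web server; the web.config location and the "TestDb" connection-string name are assumptions.

      // requires references/usings for System.Configuration and System.IO
      private System.Configuration.Configuration WebConfig
      {
          get
          {
              var fileMap = new ExeConfigurationFileMap
              {
                  // resolve web.config relative to the running directory (test bin or web root)
                  ExeConfigFilename = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "web.config")
              };
              return ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
          }
      }

      // usage:
      // string cs = WebConfig.ConnectionStrings.ConnectionStrings["TestDb"].ConnectionString;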

    Read the article

  • NHibernate AssertException: Interceptor.OnPrepareStatement(SqlString) returned null or empty SqlString.

    - by jwynveen
    I am trying to switch a table from being a many-to-one mapping to being many-to-many with an intermediate mapping table. However, when I switched it over and tried to do a query on it with NHibernate, it's giving me this error: "Interceptor.OnPrepareStatement(SqlString) returned null or empty SqlString." My query was originally something more complex, but I switched it to a basic fetch all and I'm still having the problem: Session.QueryOver<T>().Future(); It would seem to either be a problem in my model mapping files or something in my database. Here are my model mappings: <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="GBI.Core" namespace="GBI.Core.Models"> <class name="Market" table="gbi_Market"> <id name="Id" column="MarketId"> <generator class="identity" /> </id> <property name="Name" /> <property name="Url" /> <property name="Description" type="StringClob" /> <property name="Rating" /> <property name="RatingComment" /> <property name="RatingCommentedOn" /> <many-to-one name="RatingCommentedBy" column="RatingCommentedBy" lazy="proxy"></many-to-one> <property name="ImageFilename" /> <property name="CreatedOn" /> <property name="ModifiedOn" /> <property name="IsDeleted" /> <many-to-one name="CreatedBy" column="CreatedBy" lazy="proxy"></many-to-one> <many-to-one name="ModifiedBy" column="ModifiedBy" lazy="proxy"></many-to-one> <set name="Content" where="IsDeleted=0 and ParentContentId is NULL" order-by="Ordering asc, CreatedOn asc, Name asc" lazy="extra"> <key column="MarketId" /> <one-to-many class="MarketContent" /> </set> <set name="FastFacts" where="IsDeleted=0" order-by="Ordering asc, CreatedOn asc, Name asc" lazy="extra"> <key column="MarketId" /> <one-to-many class="MarketFastFact" /> </set> <set name="NewsItems" table="gbi_NewsItem_Market_Map" lazy="true"> <key column="MarketId" /> <many-to-many class="NewsItem" fetch="join" column="NewsItemId" where="IsDeleted=0"/> </set> <!--<set name="MarketUpdates" table="gbi_Market_MarketUpdate_Map" lazy="extra"> <key column="MarketId" /> <many-to-many class="MarketUpdate" fetch="join" column="MarketUpdateId" where="IsDeleted=0" order-by="CreatedOn desc" /> </set>--> <set name="Documents" table="gbi_Market_Document_Map" lazy="true"> <key column="MarketId" /> <many-to-many class="Document" fetch="join" column="DocumentId" where="IsDeleted=0"/> </set> </class> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="GBI.Core" namespace="GBI.Core.Models"> <class name="MarketUpdate" table="gbi_MarketUpdate"> <id name="Id" column="MarketUpdateId"> <generator class="identity" /> </id> <property name="Description" /> <property name="CreatedOn" /> <property name="ModifiedOn" /> <property name="IsDeleted" /> <!--<many-to-one name="Market" column="MarketId" lazy="proxy"></many-to-one>--> <set name="Comments" where="IsDeleted=0" order-by="CreatedOn desc" lazy="extra"> <key column="MarketUpdateId" /> <one-to-many class="MarketUpdateComment" /> </set> <many-to-one name="CreatedBy" column="CreatedBy" lazy="proxy"></many-to-one> <many-to-one name="ModifiedBy" column="ModifiedBy" lazy="proxy"></many-to-one> </class> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="GBI.Core" namespace="GBI.Core.Models"> <class name="MarketUpdateMarketMap" table="gbi_Market_MarketUpdate_Map"> <id name="Id" column="MarketUpdateMarketMapId"> <generator class="identity" /> </id> <property name="CreatedOn" /> <property 
name="ModifiedOn" /> <property name="IsDeleted" /> <many-to-one name="CreatedBy" column="CreatedBy" lazy="proxy"></many-to-one> <many-to-one name="ModifiedBy" column="ModifiedBy" lazy="proxy"></many-to-one> <many-to-one name="MarketUpdate" column="MarketUpdateId" lazy="proxy"></many-to-one> <many-to-one name="Market" column="MarketId" lazy="proxy"></many-to-one> </class> As I mentioned, MarketUpdate was originally a many-to-one with Market (MarketId column is still in there, but I'm ignoring it. Could this be a problem?). But I've added in the Market_MarketUpdate_Map table to make it a many-to-many. I'm running in circles trying to figure out what this could be. I couldn't find any reference to this error when searching. And it doesn't provide much detail. Using: NHibernate 2.2 .NET 4.0 SQL Server 2005

    Read the article

  • Question about rights for svn

    - by diadiora
    I have a web application written in ASP.NET MVC. The MyApp.Web assembly holds the views and the content files (images, scripts, CSS, and so on). MyApp.WebBase holds the rest of the functionality (controllers, domain: entities, repositories, services). The question is the following: I want to give a third-party HTML coder access only to the MyApp.Web source code, so that he can compile the application locally and see the results. On the other hand, the developer team should have access to the full source code. The problem is that for the HTML coder to compile the application locally, his project needs a reference to MyApp.WebBase.dll. Can anyone help me? Thanks.
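
    If the repository is served by svnserve or Apache with path-based authorization, a hedged sketch of an authz file (repository, group and account names are hypothetical) that limits the HTML coder to MyApp.Web; a compiled MyApp.WebBase.dll would still have to be checked in under MyApp.Web (e.g. a lib folder) or delivered separately so his local build can reference it.

      [groups]
      developers = alice, bob

      [myrepo:/trunk]
      @developers = rw

      [myrepo:/trunk/MyApp.Web]
      @developers = rw
      htmlcoder   = rw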

    Read the article

  • Out of the box approach to upload XML file to the BIRT Server for Processing

    - by Paul
    Hello, I have the BIRT Report Server configured in Tomcat and it works fine when running reports that require an XML datasource, but that XML file has to be available on the network in order for the server to find it and run the report. Is there an out-of-the-box configuration in the BIRT server that will prompt the user to upload the XML file directly to the server when they try to run a given report that requires an XML data source? This would be handy for users that have the XML datasource stored locally on their C drive, so they would not have to move it to a network server in order for it to be read by BIRT. Thanks in advance. Paul

    Read the article

  • AS2 acts randomly when changing scenes on the first frame

    - by fabieno
    I have a Flash movie containing two scenes: scene1 and scene2. I have chosen the order so that scene1 starts first. I was asked to add a feature that allows flashvars to be passed: if fv_change equals one, then scene2 should be the first to appear when the movie is loaded. I have included the following code in the first frame of a layer in scene1: this.onEnterFrame = function() { delete this.onEnterFrame; if (isset==undefined && _root.fv_change && _root.fv_change==1) { isset = true; gotoAndStop("scene2",1); } } When testing in my Flash environment everything worked fine, but when I exported it to an HTML & SWF combo I got random results: I refreshed the page several times, and some of the times scene2 appeared and some of the times it stayed on scene1. Am I doing something wrong? What is the correct way to change scene order using AS2 and external data (flashvars, for that matter)?

    Read the article

  • Regenerate the ID column in a MySQL table with a MySQL script or PHP

    - by DomingoSL
    Hello, I have a database filled with information about the users of my webpage. Like many MySQL tables, the table has an auto-increment ID column. The issue is that when somebody deletes his account from the site, a gap remains in the ID sequence, which I don't want because I have a script that fails if it finds a gap in the IDs. E.g. ID Name PASS 1 Jhon 1234 2 Max 2233 3 Jorge 2232 If Max leaves and a new user joins, this is what will happen: ID Name PASS 1 Jhon 1234 4 NewU 1133 3 Jorge 2232 So what is the best way to remove somebody from the database in order to avoid this issue? Or, if there is no such way, is it possible to write a PHP or MySQL script that clears the ID column and regenerates it in order? Thanks a lot! Sorry for my English.
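
    The usual advice is to keep the gaps and make the script tolerate them, since the ID is a surrogate key; but if a dense sequence is really required, a sketch of a MySQL-only renumbering (the users table and id column are hypothetical, and any foreign keys still pointing at the old IDs would break):

      SET @n := 0;
      UPDATE users SET id = (@n := @n + 1) ORDER BY id;   -- ascending order avoids key collisions
      ALTER TABLE users AUTO_INCREMENT = 1;                -- MySQL bumps this back up to MAX(id)+1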

    Read the article

  • apache front end using mod_proxy_ajp to tomcat on different servers

    - by user302307
    Does anyone know the steps to run Apache on server A as a front end and use mod_proxy_ajp to connect to Tomcat instances on server B? I want to run Apache on server A to do name-based vhosts that connect to many Tomcat servers. I can only get mod_proxy_ajp to work if Apache and Tomcat are on the same server. What I've tried so far: on server A, running Apache 2.2: NameVirtualHost *:80 ServerName tc0.domo.lan ErrorLog "C:\Apache\Apache2.2\logs\tc0.ajp.error.log" CustomLog "C:\Apache\Apache2.2\logs\tc0.ajp.access.log" combined DocumentRoot C:/htdocs0 AddDefaultCharset Off Order deny,allow Allow from all ProxyPass / ajp://192.168.77.233:8009/ ProxyPassReverse / ajp://192.168.77.233:8009/ Options FollowSymLinks AllowOverride None Order deny,allow Allow from all Server B: 192.168.77.233, Tomcat 6 connector: I can confirm that Tomcat works when going to http://192.168.77.233:8080/manager/html. When I use a packet sniffer on server A, I found that server A is trying to connect to server B on port 80 when I connect to http://tc0.domo.lan/manager/html on server A.
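
    For comparison, a minimal sketch of the vhost as it usually looks when the AJP modules are loaded (Apache 2.2 syntax); if AJP were actually in play the sniffer would show traffic to 192.168.77.233:8009 rather than port 80, so it is worth confirming that mod_proxy/mod_proxy_ajp are loaded and that this vhost is the one answering tc0.domo.lan.

      LoadModule proxy_module     modules/mod_proxy.so
      LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName tc0.domo.lan
          ProxyRequests Off
          ProxyPass        / ajp://192.168.77.233:8009/
          ProxyPassReverse / ajp://192.168.77.233:8009/
      </VirtualHost>
      # Tomcat side: the AJP connector on port 8009 must be enabled in server.xml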

    Read the article

  • Ordering the calling of asynchronous methods in c#

    - by Peter Kelly
    Hi, Say I have 4 classes ControllerClass MethodClass1 MethodClass2 MethodClass3 and each MethodClass has an asynchronous method DoStuff() and each has a CompletedEvent. The ControllerClass is responsible for invoking the 3 asynchronous methods on the 3 MethodClasses in a particular order. So ControllerClass invokes MethodClass1.DoStuff() and subscribes to MethodClass1.CompletedEvent. When that event is fired, ControllerClass invokes MethodClass2.DoStuff() and subscribes to MethodClass2.CompletedEvent. When that event is fired, the ControllerClass invokes MethodClass3.DoStuff() Is there a best practice for a situation like this? Is this bad design? I believe it is because I am finding it hard to unit test (a sure sign) It is not easy to change the order I have an uneasy, code-smell feeling about it What are the alternatives in a situation like this?
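
    One common refactoring, sketched with a hypothetical IAsyncStep interface: give the controller an ordered queue of steps and a single completion handler. The order then lives in one list, is trivial to change, and each step can be unit-tested behind the interface.

      using System;
      using System.Collections.Generic;

      public interface IAsyncStep
      {
          event EventHandler Completed;
          void DoStuff();
      }

      public class ControllerClass
      {
          private readonly Queue<IAsyncStep> _steps;

          public ControllerClass(IEnumerable<IAsyncStep> stepsInOrder)
          {
              _steps = new Queue<IAsyncStep>(stepsInOrder);
          }

          public void Run()
          {
              if (_steps.Count == 0) return;          // all done
              IAsyncStep step = _steps.Dequeue();
              step.Completed += (s, e) => Run();      // advance to the next step when this one finishes
              step.DoStuff();
          }
      }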

    Read the article

  • filter to reverse lines of a text file

    - by Greg Hewgill
    I'm writing a small shell script that needs to reverse the lines of a text file. Is there a standard filter command to do this sort of thing? My specific application is that I'm getting a list of Git commit identifiers, and I want to process them in reverse order: git log --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1 | reverse The best I've come up with is to implement reverse like this: ... | cat -b | sort -rn | cut -f2- This uses cat to number every line, then sort to sort them in descending numeric order (which ends up reversing the whole file), then cut to remove the unneeded line number. The above works for my application, but may fail in the general case because cat -b only numbers nonblank lines. Is there a better, more general way to do this?
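
    There is a standard filter for this: tac from GNU coreutils (cat reversed); BSD and macOS ship tail -r instead, and git itself can emit commits oldest-first. A sketch of the options:

      # GNU coreutils:
      git log --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1 | tac

      # BSD / macOS:
      git log --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1 | tail -r

      # or let git do the reversing:
      git log --reverse --pretty=oneline work...master | grep -v DEBUG: | cut -d' ' -f1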

    Read the article

  • SQL Server: collect values in an aggregation temporarily and reuse in the same query

    - by Erwin Brandstetter
    How do I accumulate values in t-SQL? AFAIK there is no ARRAY type. I want to reuse the values like demonstrated in this PostgreSQL example using array_agg(). SELECT a[1] || a[i] AS foo ,a[2] || a[5] AS bar -- assuming we have >= 5 rows for simplicity FROM ( SELECT array_agg(text_col ORDER BY text_col) AS a ,count(*)::int4 AS i FROM tbl WHERE id between 10 AND 100 ) x How would I best solve this with t-SQL? Best I could come up with are two CTE and subselects: ;WITH x AS ( SELECT row_number() OVER (ORDER BY name) AS rn ,name AS a FROM #t WHERE id between 10 AND 100 ), i AS ( SELECT count(*) AS i FROM x ) SELECT (SELECT a FROM x WHERE rn = 1) + (SELECT a FROM x WHERE rn = i) AS foo ,(SELECT a FROM x WHERE rn = 2) + (SELECT a FROM x WHERE rn = 5) AS bar FROM i Test setup: CREATE TABLE #t( id INT PRIMARY KEY ,name NVARCHAR(100)) INSERT INTO #t VALUES (3 , 'John') ,(5 , 'Mary') ,(8 , 'Michael') ,(13, 'Steve') ,(21, 'Jack') ,(34, 'Pete') ,(57, 'Ami') ,(88, 'Bob') Is there a simpler way?
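
    A somewhat flatter sketch against the same #t setup: a single CTE with ROW_NUMBER() and COUNT(*) OVER (), then conditional aggregation instead of four correlated subselects.

      ;WITH x AS (
          SELECT name,
                 ROW_NUMBER() OVER (ORDER BY name) AS rn,
                 COUNT(*)     OVER ()              AS cnt
          FROM   #t
          WHERE  id BETWEEN 10 AND 100
      )
      SELECT MAX(CASE WHEN rn = 1   THEN name END)
           + MAX(CASE WHEN rn = cnt THEN name END) AS foo,
             MAX(CASE WHEN rn = 2   THEN name END)
           + MAX(CASE WHEN rn = 5   THEN name END) AS bar
      FROM   x;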

    Read the article

  • PostgreSQL - fetch the row which has the Max value for a column

    - by Joshua Berry
    I'm dealing with a Postgres table (called "lives") that contains records with columns for time_stamp, usr_id, transaction_id, and lives_remaining. I need a query that will give me the most recent lives_remaining total for each usr_id There are multiple users (distinct usr_id's) time_stamp is not a unique identifier: sometimes user events (one by row in the table) will occur with the same time_stamp. trans_id is unique only for very small time ranges: over time it repeats remaining_lives (for a given user) can both increase and decrease over time example: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 07:00 | 1 | 1 | 1 09:00 | 4 | 2 | 2 10:00 | 2 | 3 | 3 10:00 | 1 | 2 | 4 11:00 | 4 | 1 | 5 11:00 | 3 | 1 | 6 13:00 | 3 | 3 | 1 As I will need to access other columns of the row with the latest data for each given usr_id, I need a query that gives a result like this: time_stamp|lives_remaining|usr_id|trans_id ----------------------------------------- 11:00 | 3 | 1 | 6 10:00 | 1 | 2 | 4 13:00 | 3 | 3 | 1 As mentioned, each usr_id can gain or lose lives, and sometimes these timestamped events occur so close together that they have the same timestamp! Therefore this query won't work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp) AS max_timestamp FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp = b.time_stamp Instead, I need to use both time_stamp (first) and trans_id (second) to identify the correct row. I also then need to pass that information from the subquery to the main query that will provide the data for the other columns of the appropriate rows. This is the hacked up query that I've gotten to work: SELECT b.time_stamp,b.lives_remaining,b.usr_id,b.trans_id FROM (SELECT usr_id, max(time_stamp || '*' || trans_id) AS max_timestamp_transid FROM lives GROUP BY usr_id ORDER BY usr_id) a JOIN lives b ON a.max_timestamp_transid = b.time_stamp || '*' || b.trans_id ORDER BY b.usr_id Okay, so this works, but I don't like it. It requires a query within a query, a self join, and it seems to me that it could be much simpler by grabbing the row that MAX found to have the largest timestamp and trans_id. The table "lives" has tens of millions of rows to parse, so I'd like this query to be as fast and efficient as possible. I'm new to RDBM and Postgres in particular, so I know that I need to make effective use of the proper indexes. I'm a bit lost on how to optimize. I found a similar discussion here. Can I perform some type of Postgres equivalent to an Oracle analytic function? Any advice on accessing related column information used by an aggregate function (like MAX), creating indexes, and creating better queries would be much appreciated! P.S. You can use the following to create my example case: create TABLE lives (time_stamp timestamp, lives_remaining integer, usr_id integer, trans_id integer); insert into lives values ('2000-01-01 07:00', 1, 1, 1); insert into lives values ('2000-01-01 09:00', 4, 2, 2); insert into lives values ('2000-01-01 10:00', 2, 3, 3); insert into lives values ('2000-01-01 10:00', 1, 2, 4); insert into lives values ('2000-01-01 11:00', 4, 1, 5); insert into lives values ('2000-01-01 11:00', 3, 1, 6); insert into lives values ('2000-01-01 13:00', 3, 3, 1);
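
    The Postgres-specific tool for greatest-row-per-group is DISTINCT ON, which keeps exactly one row per usr_id according to the ORDER BY; a sketch against the lives table above, with a matching index (the index choice is an assumption about the workload):

      SELECT DISTINCT ON (usr_id)
             time_stamp, lives_remaining, usr_id, trans_id
      FROM   lives
      ORDER  BY usr_id, time_stamp DESC, trans_id DESC;

      -- supporting index, assuming latest-row lookups are the hot path
      CREATE INDEX lives_usr_latest_idx ON lives (usr_id, time_stamp DESC, trans_id DESC);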

    Read the article

  • How to create a backup from SqlAlchemy?

    - by swilliams
    I'm writing a Pylons app, and am trying to create a simple backup system where every table is serialized and tarred up into a single file for an administrator to download, and use to restore the app should something bad happen. I can serialize my table data just fine using the SqlAlchemy serializer, and I can deserialize it fine as well, but I can't figure out how to commit those changes back to the database. In order to serialize my data I am doing this: from myproject.model.meta import Session from sqlalchemy.ext.serializer import loads, dumps q = Session.query(MyTable) serialized_data = dumps(q.all()) In order to test things out, I go ahead and truncate MyTable, and then attempt to restore using serialized_data: from myproject.model import meta restore_q = loads(serialized_data, meta.metadata, Session) This doesn't seem to do anything... I've tried calling Session.commit after the fact and individually walking through all the objects in restore_q and adding them, but nothing seems to work. What am I missing? Or is there a better way to do what I'm aiming for? I don't want to shell out and directly touch the database, since SqlAlchemy supports different database engines.
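
    loads() only rebuilds detached objects; nothing touches the database until they are put back into a session. A sketch of the commonly suggested fix, merging each restored object and committing (reusing the same Session and meta imports as above):

      from sqlalchemy.ext.serializer import loads
      from myproject.model import meta
      from myproject.model.meta import Session

      restored = loads(serialized_data, meta.metadata, Session)
      for obj in restored:
          Session.merge(obj)   # re-attach the detached instance; inserts it if the row is gone
      Session.commit()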

    Read the article

  • Retrieving XML node from a path specified in an attribute value of another node

    - by Olivier PAYEN
    From this XML source : <?xml version="1.0" encoding="utf-8" ?> <ROOT> <STRUCT> <COL order="1" nodeName="FOO/BAR" colName="Foo Bar" /> <COL order="2" nodeName="FIZZ" colName="Fizz" /> </STRUCT> <DATASET> <DATA> <FIZZ>testFizz</FIZZ> <FOO> <BAR>testBar</BAR> <LIB>testLib</LIB> </FOO> </DATA> <DATA> <FIZZ>testFizz2</FIZZ> <FOO> <BAR>testBar2</BAR> <LIB>testLib2</LIB> </FOO> </DATA> </DATASET> </ROOT> I want to generate this HTML : <html> <head> <title>Test</title> </head> <body> <table border="1"> <tr> <td>Foo Bar</td> <td>Fizz</td> </tr> <tr> <td>testBar</td> <td>testFizz</td> </tr> <tr> <td>testBar2</td> <td>testFizz2</td> </tr> </table> </body> </html> Here is the XSLT I currently have : <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl"> <xsl:output method="html" indent="yes"/> <xsl:template match="/ROOT"> <html> <head> <title>Test</title> </head> <body> <table border="1"> <tr> <!--Generate the table header--> <xsl:apply-templates select="STRUCT/COL"> <xsl:sort data-type="number" select="@order"/> </xsl:apply-templates> </tr> <xsl:apply-templates select="DATASET/DATA" /> </table> </body> </html> </xsl:template> <xsl:template match="COL"> <!--Template for generating the table header--> <td> <xsl:value-of select="@colName"/> </td> </xsl:template> <xsl:template match="DATA"> <xsl:variable name="pos" select="position()" /> <tr> <xsl:for-each select="/ROOT/STRUCT/COL"> <xsl:sort data-type="number" select="@order"/> <xsl:variable name="elementName" select="@nodeName" /> <td> <xsl:value-of select="/ROOT/DATASET/DATA[$pos]/*[name() = $elementName]" /> </td> </xsl:for-each> </tr> </xsl:template> </xsl:stylesheet> It almost works, the problem I have is to retrieve the correct DATA node from the path specified in the "nodeName" attribute value of the STRUCT block.
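
    The expression *[name() = $elementName] can only match a single element name, so a nodeName like FOO/BAR matches nothing. A sketch that splits one path level by hand, as a drop-in for the <td> inside the DATA template; it reuses that template's $pos and $elementName variables and only handles paths one level deep, which covers the sample data.

      <td>
        <xsl:variable name="head" select="substring-before(concat($elementName, '/'), '/')" />
        <xsl:variable name="tail" select="substring-after($elementName, '/')" />
        <xsl:choose>
          <xsl:when test="$tail != ''">
            <xsl:value-of select="/ROOT/DATASET/DATA[$pos]/*[name() = $head]/*[name() = $tail]" />
          </xsl:when>
          <xsl:otherwise>
            <xsl:value-of select="/ROOT/DATASET/DATA[$pos]/*[name() = $elementName]" />
          </xsl:otherwise>
        </xsl:choose>
      </td>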

    Read the article

  • Interesting interview question (.NET)

    - by rahul
    Coding Problem: NumTrans. There is an integer K. You are allowed to add to K any of its divisors not equal to 1 and K. The same operation can be applied to the resulting number, and so on. Notice that starting from the number 4, we can reach any composite number by applying several such operations. For example, the number 24 can be reached starting from 4 using 5 operations: 4 -> 6 -> 8 -> 12 -> 18 -> 24. You will solve a more general problem. Given integers n and m, return the minimal number of the described operations necessary to transform n into m. Return -1 if m can't be obtained from n. Definition Method signature: int GetLeastCount(int n, int m) Constraints: N will be between 4 and 100000, inclusive. M will be between N and 100000, inclusive. Examples 1) 4 576 Returns: 14 The shortest order of operations is: 4 -> 6 -> 8 -> 12 -> 18 -> 27 -> 36 -> 54 -> 81 -> 108 -> 162 -> 243 -> 324 -> 432 -> 576 2) 8748 83462 Returns: 10 The shortest order of operations is: 8748 -> 13122 -> 19683 -> 26244 -> 39366 -> 59049 -> 78732 -> 83106 -> 83448 -> 83460 -> 83462 3) 4 99991 Returns: -1 The number 99991 can't be obtained because it's prime!
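
    Since every allowed operation strictly increases the number and each step costs one, a plain breadth-first search over the values up to the stated 100000 bound gives the minimal count; a sketch in C#:

      using System.Collections.Generic;

      public static class NumTrans
      {
          public static int GetLeastCount(int n, int m)
          {
              const int limit = 100000;                 // from the stated constraints
              int[] dist = new int[limit + 1];
              for (int i = 0; i <= limit; i++) dist[i] = -1;

              var queue = new Queue<int>();
              dist[n] = 0;
              queue.Enqueue(n);

              while (queue.Count > 0)
              {
                  int k = queue.Dequeue();
                  if (k == m) return dist[k];
                  for (int d = 2; d * d <= k; d++)      // enumerate divisors other than 1 and k
                  {
                      if (k % d != 0) continue;
                      foreach (int next in new[] { k + d, k + k / d })
                      {
                          if (next <= limit && dist[next] == -1)
                          {
                              dist[next] = dist[k] + 1;
                              queue.Enqueue(next);
                          }
                      }
                  }
              }
              return -1;                                // m unreachable, e.g. m is prime
          }
      }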

    Read the article

  • New to programming

    - by Shaun
    I have a form (Quote) with an auto-number ID. On the form at the moment are two subforms that show different items (sub 1 shows partition modules, sub 2 shows partition abutments); both subforms use the same parts tables to build them, and both are linked to the Quote form using the ID. All works well until the form is refreshed or reloaded: subform 1 shows the module names and quantities, and blank spaces for the abutment names but still shows the quantities for the abutments; the reverse of this is shown in the abutments subform 2. When the lists for the various types and the detailed parts lists are printed, they are correct. This seems to be only a visual problem. All based on Access 2003. Subform 1 SELECT Quote_Modules.ModuleID, Quote_Modules.QuoteID, Quote_Modules.ModuleDescription, Quote_Modules.ModuleQty, Quote.Style, Quote.Trim FROM Quote INNER JOIN Quote_Modules ON Quote.QuoteID=Quote_Modules.QuoteID ORDER BY Quote_Modules.ModuleID; Subform 2 SELECT Quote_Modules.ModuleID, Quote_Modules.QuoteID, Quote_Modules.ModuleDescription, Quote_Modules.ModuleQty, Quote.Style, Quote.Trim FROM Quote INNER JOIN Quote_Modules ON Quote.QuoteID=Quote_Modules.QuoteID ORDER BY Quote_Modules.ModuleID;

    Read the article

  • In Haskell, how can you sort a list of infinite lists of strings?

    - by HaskellNoob
    So basically, if I have a (finite or infinite) list of (finite or infinite) lists of strings, is it possible to sort the list by length first and then by lexicographic order, excluding duplicates? A sample input/output would be: Input: [["a", "b",...], ["a", "aa", "aaa"], ["b", "bb", "bbb",...], ...] Output: ["a", "b", "aa", "bb", "aaa", "bbb", ...] I know that the input list is not a valid haskell expression but suppose that there is an input like that. I tried using merge algorithm but it tends to hang on the inputs that I give it. Can somebody explain and show a decent sorting function that can do this? If there isn't any function like that, can you explain why? In case somebody didn't understand what I meant by the sorting order, I meant that shortest length strings are sorted first AND if one or more strings are of same length then they are sorted using < operator. Thanks!
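
    A sketch of a lazy k-way merge that stays productive, under assumptions the sample input satisfies: every individual string is finite, each inner list is already sorted by the (length, lexicographic) key, and the inner lists appear in order of their first elements. Without some ordering guarantee like that, no function can sort a truly infinite input, which is why general-purpose sorts appear to hang.

      key :: String -> (Int, String)
      key s = (length s, s)

      -- merge two streams that are individually sorted by `key`
      merge :: [String] -> [String] -> [String]
      merge xs [] = xs
      merge [] ys = ys
      merge xxs@(x:xs) yys@(y:ys)
        | key x <= key y = x : merge xs yys
        | otherwise      = y : merge xxs ys

      -- merge a (possibly infinite) list of such streams whose heads are nondecreasing;
      -- each head can be emitted before later streams are even inspected
      mergeAll :: [[String]] -> [String]
      mergeAll []              = []
      mergeAll ([] : rest)     = mergeAll rest
      mergeAll ((x:xs) : rest) = x : merge xs (mergeAll rest)

      -- drop adjacent duplicates from an already-sorted stream
      dedup :: [String] -> [String]
      dedup (x:y:rest) | x == y = dedup (y:rest)
      dedup (x:rest)            = x : dedup rest
      dedup []                  = []

      sortAll :: [[String]] -> [String]
      sortAll = dedup . mergeAll

      -- e.g. take 6 (sortAll [["a","aa","aaa"], ["b","bb","bbb"]])
      --        == ["a","b","aa","bb","aaa","bbb"]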

    Read the article

  • Use fileupload as template field in a details view

    - by MyHeadHurts
    I have an admin page where a user will select a document path and add that path to a certain column of a database. I am using a fileupload on the page where they can find the document and copy the path and then paste it into the details view. However, I want to skip this step and I want them to select a document and automatically make the path show up in the details view. <asp:FileUpload ID="FileUpload1" runat="server" Visible="False" Width="384px" /><br /> <br /> <div> <asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional"> <ContentTemplate> <center> <asp:DetailsView ID="DetailsView1" runat="server" AllowPaging="True" AutoGenerateRows="False" DataKeyNames="ID" DataSourceID="SqlDataSource1" Height="128px" Width="544px" Visible="False" OnModeChanged="Button2_Click" CellPadding="4" ForeColor="#333333" GridLines="None" > <Fields> <asp:BoundField DataField="Order" HeaderText="Order" SortExpression="Order" /> <asp:BoundField DataField="Department" HeaderText="Department" SortExpression="Department"/> <asp:BoundField DataField="DOC_Type" HeaderText="DOC_Type" SortExpression="DOC_Type" /> <asp:BoundField DataField="Title" HeaderText="Title" SortExpression="Title" /> <asp:BoundField DataField="Revision" HeaderText="Revision" SortExpression="Revision" /> <asp:BoundField DataField="DOC" HeaderText="DOC" SortExpression="DOC" /> <asp:BoundField DataField="Active" HeaderText="Active" SortExpression="Active" /> <asp:BoundField DataField="Rev_Date" HeaderText="Rev_Date" SortExpression="Rev_Date" /> <asp:BoundField DataField="ID" HeaderText="ID" InsertVisible="False" ReadOnly="True" SortExpression="ID" Visible="False" /> <asp:CommandField ShowInsertButton="True" /> </Fields> <FooterStyle BackColor="#5D7B9D" BorderStyle="None" Font-Bold="True" ForeColor="White" /> <CommandRowStyle BackColor="#E2DED6" BorderStyle="None" Font-Bold="True" /> <RowStyle BackColor="#F7F6F3" BorderStyle="None" ForeColor="#333333" /> <FieldHeaderStyle BackColor="#E9ECF1" BorderStyle="None" Font-Bold="True" /> <EmptyDataRowStyle BorderStyle="None" /> <PagerStyle BackColor="#284775" BorderStyle="None" ForeColor="White" HorizontalAlign="Center" /> <HeaderStyle BackColor="#5D7B9D" BorderStyle="None" Font-Bold="True" ForeColor="White" /> <InsertRowStyle BorderStyle="None" /> <EditRowStyle BackColor="#999999" BorderStyle="None" /> <AlternatingRowStyle BackColor="White" BorderStyle="None" ForeColor="#284775" /> </asp:DetailsView> &nbsp; <br /> I need to get the fileupload1 into the DOC contenttemplate area so instead of showing an empty textbox it will show just a textbox it will show the fileupload

    Read the article

  • changing the htaccess file in php

    - by tibin mathew
    Hi, I want to change the maximum file upload size on my website, and for this I'm going to add some lines to my .htaccess file. I have searched Google and found the lines of code to add to the .htaccess file, but I don't know exactly where to add them. Below are the lines currently in my .htaccess file: -FrontPage- IndexIgnore .htaccess /.?? *~ *# /HEADER */README* */_vti* order deny,allow deny from all allow from all order deny,allow deny from all AuthName www.mysite.com AuthUserFile /www/htdocs/domains/s11/01712/www.mysite.com/webdocs/_vti_pvt/service.pwd AuthGroupFile /www/htdocs/domains/s11/01712/www.mysite.com/webdocs/_vti_pvt/service.grp AddHandler php5-script .php Here is the code to add: php_value post_max_size 50M php_value upload_max_filesize 50M How do I add these lines to the .htaccess file, and where? Please help me. Thanks
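
    Assuming the host runs PHP as an Apache module (mod_php) and allows overrides, the two lines can simply be appended at the end of the existing .htaccess; their position relative to the FrontPage/auth lines doesn't matter. If PHP runs as CGI/FastCGI, php_value lines in .htaccess cause a 500 error and the limits belong in php.ini instead. A sketch of the addition:

      # existing FrontPage / auth / handler lines stay untouched above this point

      # raise upload limits (mod_php only)
      php_value upload_max_filesize 50M
      php_value post_max_size 50M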

    Read the article

  • How can I load a file into a DataBag from within a Yahoo PigLatin UDF?

    - by Cervo
    I have a Pig program where I am trying to compute the minimum center between two bags. In order for it to work, I found I need to COGROUP the bags into a single dataset. The entire operation takes a long time. I want to either open one of the bags from disk within the UDF, or to be able to pass another relation into the UDF without needing to COGROUP...... Code: # **** Load files for iteration **** register myudfs.jar; wordcounts = LOAD 'input/wordcounts.txt' USING PigStorage('\t') AS (PatentNumber:chararray, word:chararray, frequency:double); centerassignments = load 'input/centerassignments/part-*' USING PigStorage('\t') AS (PatentNumber: chararray, oldCenter: chararray, newCenter: chararray); kcenters = LOAD 'input/kcenters/part-*' USING PigStorage('\t') AS (CenterID:chararray, word:chararray, frequency:double); kcentersa1 = CROSS centerassignments, kcenters; kcentersa = FOREACH kcentersa1 GENERATE centerassignments::PatentNumber as PatentNumber, kcenters::CenterID as CenterID, kcenters::word as word, kcenters::frequency as frequency; #***** Assign to nearest k-mean ******* assignpre1 = COGROUP wordcounts by PatentNumber, kcentersa by PatentNumber; assignwork2 = FOREACH assignpre1 GENERATE group as PatentNumber, myudfs.kmeans(wordcounts, kcentersa) as CenterID; basically my issue is that for each patent I need to pass the sub relations (wordcounts, kcenters). In order to do this, I do a cross and then a COGROUP by PatentNumber in order to get the set PatentNumber, {wordcounts}, {kcenters}. If I could figure a way to pass a relation or open up the centers from within the UDF, then I could just GROUP wordcounts by PatentNumber and run myudfs.kmeans(wordcount) which is hopefully much faster without the CROSS/COGROUP. This is an expensive operation. Currently this takes about 20 minutes and appears to tack the CPU/RAM. I was thinking it might be more efficient without the CROSS. I'm not sure it will be faster, so I'd like to experiment. Anyway it looks like calling the Loading functions from within Pig needs a PigContext object which I don't get from an evalfunc. And to use the hadoop file system, I need some initial objects as well, which I don't see how to get. So my question is how can I open a file from the hadoop file system from within a PIG UDF? I also run the UDF via main for debugging. So I need to load from the normal filesystem when in debug mode. Another better idea would be if there was a way to pass a relation into a UDF without needing to CROSS/COGROUP. This would be ideal, particularly if the relation resides in memory.. ie being able to do myudfs.kmeans(wordcounts, kcenters) without needing the CROSS/COGROUP with kcenters... But the basic idea is to trade IO for RAM/CPU cycles. Anyway any help will be much appreciated, the PIG UDFs aren't super well documented beyond the most simple ones, even in the UDF manual.

    Read the article

  • Back Orders for ERP: data model references ?

    - by Patrick Honorez
    I have built an ERP using SQL Server as a back-end. These are the different types of client documents (there are also supplier docs): Order -- impact: BO; Delivery Note (also used for returns, with negative quantity) -- impact: BO, Stock; Invoice -- impact: accounting only; Credit Note -- impact: accounting, BO. I use a complex system of self joins (at detail level) to find out the quantities in each OrderDetail that still have a backorder (BO). I'd like to simplify this using a [group] field that could be shared by all detail lines related to an original order. There are many difficult things to trace: a return of a product may be due to a defect and thus increase the BO, or it can be just a return, joined with a Credit Note, and then it has no impact on BO. My question is: do you know of any really good reference (book, web) for this matter?

    Read the article
