Search Results

Search found 17537 results on 702 pages for 'doctrine query'.


  • What is the best way to auto-generate INSERT statements for a SQL Server table?

    - by JosephStyons
    We are writing a new application, and while testing, we will need a bunch of dummy data. I've added that data by using MS Access to dump Excel files into the relevant tables. Every so often, we want to "refresh" the relevant tables, which means dropping them all, re-creating them, and running a saved MS Access append query. The first part (dropping & re-creating) is an easy SQL script, but the last part makes me cringe. I want a single setup script that has a bunch of INSERTs to regenerate the dummy data. I have the data in the tables now. What is the best way to automatically generate a big list of INSERT statements from that dataset? I'm thinking of something like in TOAD (for Oracle) where you can right-click on a grid and click Save As > Insert Statements, and it will just dump a big SQL script wherever you want. The only way I can think of doing it is to save the table to an Excel sheet and then write an Excel formula to create an INSERT for every row, which is surely not the best way. I'm using the 2008 Management Studio to connect to a SQL Server 2005 database.
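    One low-tech approach is sketched below: build the INSERT text with plain string concatenation and paste the result into the setup script (the table and column names here are illustrative, not from the question, and NULL handling is left out). The Generate Scripts wizard in SSMS 2008 also has an option to script data.

      -- Sketch: emit one INSERT statement per existing row.
      -- dbo.Customers, FirstName and Age are illustrative names; real data
      -- would also need NULL checks on nullable columns.
      SELECT 'INSERT INTO dbo.Customers (FirstName, Age) VALUES ('''
             + REPLACE(FirstName, '''', '''''') + ''', '
             + CAST(Age AS varchar(10)) + ');'
      FROM dbo.Customers;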

    Read the article

  • Use an Array to Cycle Through a MySQL Where Statement

    - by Ryan
    I'm trying to create a loop that will output multiple where statements for a MySQL query. Ultimately, my goal is to end up with these four separate Where statements: `fruit` = '1' AND `vegetables` = '1' `fruit` = '1' AND `vegetables` = '2' `fruit` = '2' AND `vegetables` = '1' `fruit` = '2' AND `vegetables` = '2' My theoretical code is pasted below: <?php $columnnames = array('fruit','vegetables'); $column1 = array('1','2'); $column2 = array('1','2'); $where = ''; $column1inc =0; $column2inc =0; while( $column1inc <= count($column1) ) { if( !empty( $where ) ) $where .= ' AND '; $where = "`".$columnnames[0]."` = "; $where .= "'".$column1[$column1inc]."'"; while( $column2inc <= count($column2) ) { if( !empty( $where ) ) $where .= ' AND '; $where .= "`".$columnnames[1]."` = "; $where .= "'".$column2[$column2inc]."'"; echo $where."\n"; $column2inc++; } $column1inc++; } ?> When I run this code, I get the following output: `fruit` = '1' AND `vegetables` = '1' `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2' `fruit` = '1' AND `vegetables` = '1' AND `vegetables` = '2' AND `vegetables` = '' Does anyone see what I am doing incorrectly? Thanks.

    Read the article

  • SQL Server Reset Identity Increment for all tables

    - by DanSpd
    Basically I need to reset Identity Increment for all tables to its original. Here I tried some code, but it fails. http://pastebin.com/KSyvtK5b actual code from link: USE World00_Character GO -- Create a cursor to loop through the System Ojects and get each table name DECLARE TBL_CURSOR CURSOR -- Declare the SQL Statement to cursor through FOR ( SELECT Name FROM Sysobjects WHERE Type='U' ) -- Declare the @SQL Variable which will hold our dynamic sql DECLARE @SQL NVARCHAR(MAX); SET @SQL = ''; -- Declare the @TblName Variable which will hold the name of the current table DECLARE @TblName NVARCHAR(MAX); -- Open the Cursor OPEN TBL_CURSOR -- Setup the Fetch While that will loop through our cursor and set @TblName FETCH NEXT FROM TBL_CURSOR INTO @TblName -- Do this while we are not at the end of the record set WHILE (@@FETCH_STATUS <> -1) BEGIN -- Appeand this table's select count statement to our sql variable SET @SQL = @SQL + ' ( SELECT '''+@TblName+''' AS Table_Name,COUNT(*) AS Count FROM '+@TblName+' ) UNION'; -- Delete info EXEC('DBCC CHECKIDENT ('+@TblName+',RESEED,(SELECT IDENT_SEED('+@TblName+')))'); -- Pull the next record FETCH NEXT FROM TBL_CURSOR INTO @TblName -- End the Cursor Loop END -- Close and Clean Up the Cursor CLOSE TBL_CURSOR DEALLOCATE TBL_CURSOR -- Since we were adding the UNION at the end of each part, the last query will have -- an extra UNION. Lets trim it off. SET @SQL = LEFT(@SQL,LEN(@SQL)-6); -- Lets do an Order By. You can pick between Count and Table Name by picking which -- line to execute below. SET @SQL = @SQL + ' ORDER BY Count'; --SET @SQL = @SQL + ' ORDER BY Table_Name'; -- Now that our Dynamic SQL statement is ready, lets execute it. EXEC (@SQL); GO error message: Error: Msg 102, Level 15, State 1, Line 1 Incorrect syntax near '('. How can I either fix that SQL or reset identity for all tables to its original? Thank you
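    One possible way around the "Incorrect syntax near '('" error is sketched below: DBCC CHECKIDENT will not accept a subquery as its reseed argument, so fetch the original seed with IDENT_SEED() into a variable first and build the command from a literal value. This is an untested sketch and assumes the user tables live in the default schema.

      DECLARE @TblName sysname, @Seed bigint, @Cmd nvarchar(max);

      DECLARE SEED_CURSOR CURSOR FOR
          SELECT Name FROM Sysobjects WHERE Type = 'U';

      OPEN SEED_CURSOR;
      FETCH NEXT FROM SEED_CURSOR INTO @TblName;

      WHILE @@FETCH_STATUS = 0
      BEGIN
          -- IDENT_SEED returns the seed the identity column was created with,
          -- or NULL if the table has no identity column.
          SELECT @Seed = IDENT_SEED(@TblName);

          IF @Seed IS NOT NULL
          BEGIN
              SET @Cmd = N'DBCC CHECKIDENT (''' + @TblName + N''', RESEED, '
                       + CAST(@Seed AS nvarchar(20)) + N');';
              EXEC (@Cmd);
          END

          FETCH NEXT FROM SEED_CURSOR INTO @TblName;
      END

      CLOSE SEED_CURSOR;
      DEALLOCATE SEED_CURSOR;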

    Read the article

  • PHP - "Fat Free Framework" Find Methods and Showing Results in Template

    - by user1672808
    Just started trying the "Fat Free Framework" I'm building a site using a MySQL DB with 265 fields, and 5000+ rows in the DB; I can load() a specific record easily, no problems. When using find(), afind(), and even "select()", template will show blank lines or lines with "filler" text, with the correct number of rows for the query results, but no text/data from the DB itself; Same problem whether using objects or simply arrays from result (afind() and find()). I've copied/pasted the code verbatim from examples and from documentation, with only the DB specific items changed. Still, no luck. CODE IN PHP FILE (function from CLASS): static function home() { $featured=new Axon('boats'); $F3::set('boatlist',$featured->afind('D_CustomerID=173')); F3::set('content',TEMPLATE_DIR .'/home.html'); echo Template::serve(TEMPLATE_DIR .'/layout.html'); } TEMPLATE home.html: <div class="span8"> <h3> Featured Boats </h3> <F3:repeat group="{{@boatlist}}" value="{{@boat}}"> <div style="margin-left: 2em" class="thumbnails"> <p> <a href="boat/{{@boat['D_BoatNum']}}">{{trim(@boat['D_Description'])}}</a> by {{@boat['D_CustomerID']}} </p> <p> {{@boat['D_Price']}} </p> </div> </F3:repeat> </div> The number of rows this produces coincides with the correct number of rows in the DB. However, the actual data from each field does not show. Any ideas?

    Read the article

  • Why does XPath.selectNodes(context) always use the whole document in JDOM

    - by Simeon
    Hi, I'm trying to run the same query on several different contexts, but I always get the same result. This is an example XML: <root> <p> <r> <t>text</t> </r> </p> <t>text2</t> </root> So this is what I'm doing: final XPath xpath = XPath.newInstance("//t"); List<Element> result = xpath.selectNodes(thisIsThePelement); // and I've debugged it, it really is the <p> element And I always get both <t> elements in the result list. I need just the <t> inside the <p> I'm passing to the XPath object. Any ideas would be of great help, thanks.

    Read the article

  • archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000) and there are lots of child tables where user-specific data are stored (e.g. personal, health, forum contributions ...). What would be the best practice to archive users & user-specific information? [a] Would it be wise to move the archived users & their user-specific information into their own archive tables within the same database (e.g. user_archive, user_forum_comments_archive ...), OR [b] would you just mark the entries with a flag in the original table(s) and query only non-archived entries? We have a unique constraint on User.loginid; how do you handle this requirement if the users are archived via [a] (i.e. if a user with loginid 'samuel' gets moved into the archive table, how would you prevent a new user being added with the same login in the original table)? What would be the best strategy to address the unique key constraints? We also have a requirement to selectively archive records and bring them back if necessary; would you rely on database tools, or would you handle this via the persistence APIs exposed by the JPA entity model?
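    A minimal sketch of option [b] follows (MySQL; the table and column names are illustrative, not taken from the actual schema). Because archived rows never leave the table, the unique constraint on loginid keeps doing its job; the trade-off is that an archived login can never be reused, and every live-user query has to filter on the flag.

      -- Option [b]: flag the row instead of moving it.
      ALTER TABLE `user` ADD COLUMN `archived` TINYINT(1) NOT NULL DEFAULT 0;

      -- Archiving becomes an UPDATE rather than a cross-table move.
      UPDATE `user` SET `archived` = 1 WHERE `loginid` = 'samuel';

      -- Queries that should see only live users filter on the flag.
      SELECT u.* FROM `user` u WHERE u.`archived` = 0;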

    Read the article

  • Populate an SSRS Report parameter(hidden) by a sproc or user defined function

    - by Nauman
    My SSRS report fetches data from my DATAWAREHOUSE. The ASP.NET application I have is connected to an OLTP database. I invoke the SSRS report from my ASP.NET application and provide a parameter, CustomerID (this is an application key in the datawarehouse), to my report. Since my report is connected to my datawarehouse, I do not query my report based on the OLTP's CustomerID. Instead I use my datawarehouse's surrogate key (CustomerDimKey). Now, in my report, I need to find the correct surrogate key for the CustomerID parameter, which I passed from my ASP.NET application. My report already has a parameter, @CustomerDimKey (this is used across all the report sprocs). We used it for testing, but now we'll hide it as we have integrated the report with the ASP.NET application. I have already added a new parameter to the report, @CustomerID (this will hold the OLTP's CustomerID), which will now get a value from ASP.NET. I need to know a way to re-use the @CustomerDimKey report parameter, which should now get its value from a SQL statement or a sproc once the report is requested, based on the value of the @CustomerID parameter.
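    One common way to wire this up is sketched below: leave @CustomerDimKey as a hidden/internal parameter and give it a default value from a small lookup dataset that runs when the report is requested. The DimCustomer table and its column names are assumptions for illustration, not taken from the question.

      -- Dataset backing the default value of the hidden @CustomerDimKey parameter.
      -- DimCustomer, CustomerID and CustomerDimKey are illustrative names.
      SELECT CustomerDimKey
      FROM   DimCustomer
      WHERE  CustomerID = @CustomerID;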

    Read the article

  • Getting "[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'Microsoft.'

    - by brohjoe
    Hi Experts, I'm getting an error, "[Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 'Microsoft.' Here is the code: Dim conn As ADODB.Connection Dim rst As ADODB.Recordset Dim stSQL As String Public Sub loadData() 'This was set up using Microsoft ActiveX Data Components version 6.0. 'Create ADODB connection object, open connection and construct the connection string object which is the DSN name. Set conn = New ADODB.Connection conn.ConnectionString = "sql_server" conn.Open 'conn.Execute (strSQL) On Error GoTo ErrorHandler 'Open Excel and run query to export data to SQL Server. strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE(Microsoft.ACE.OLEDB.12.0;" & _ "Data Source=C:\Workbook.xlsx;" & _ "Extended Properties=Excel 12.0; [Sales Orders])" conn.Execute (strSQL) 'Error handling. ErrorExit: 'Reclaim memory from the cntection objects Set rst = Nothing Set conn = Nothing Exit Sub ErrorHandler: MsgBox Err.Description, vbCritical Resume ErrorExit 'clean up and reclaim memory resources. conn.Close If CBool(cnt.State And adStateOpen) Then Set rst = Nothing Set conn = Nothing End If End Sub
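    Part of the problem is the SQL text itself: OPENDATASOURCE expects the provider name and the connection string as two separate quoted strings, and the worksheet name belongs in the four-part name, not in Extended Properties. A sketch of the corrected statement is below (the sheet name [Sales Orders$] is an assumption, and 'Ad Hoc Distributed Queries' must be enabled on the server). Inside the VBA string literal the single quotes can be written as-is, since VBA strings are delimited by double quotes.

      -- Corrected shape of the statement built into strSQL (sheet name assumed).
      SELECT *
      INTO   SalesOrders
      FROM   OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0',
                 'Data Source=C:\Workbook.xlsx;Extended Properties=Excel 12.0')...[Sales Orders$];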

    Read the article

  • Applying for .net jobs as a "self learner"

    - by DeanMc
    Hi All, I have recently started applying for .Net jobs. I currently work in a sales role with a large telco. I found out quite late that I like programming and as such bought my house and made commitments that mean college is not an option. What I would like to know is, is it harder to get a junior job as a self learner? I have gotten a few enquiries regarding my C.V but nothing concrete yet. I try to be involved in projects as I get the chance and tend to put up any worthwhile projects as I develop them. Some examples of my work are: A Xaml lexer and parser: http://www.xlight.mendhak.com A font obfuscation tool: http://www.silverlightforums.com/showthread.php?1516-Font-Obsfucation-Tool-ALPHA A tagger for m4a: http://projectaudiophile.codeplex.com/SourceControl/list/changesets I, of course think that these are great examples of my work but that is my opinion based on self learning. The other query is how much should I actually know? I've never used linked lists but I know that strings are immutable and I understand what that means. I am only touching on T-SQL but I understand things like how properties function in IL (as two standard methods :) ). I suppose I understand a lot of concepts but specific features need some looking up to implement as I may not know the syntax off the top of my head.

    Read the article

  • value of Identity column returning null when retrieved by value in dataGridView

    - by Raven Dreamer
    Greetings. I'm working on a windows forms application that interacts with a previously created SQL database. Specifically, I'm working on implementing an "UPDATE" query. for (int i = 0; i < dataGridView1.RowCount; i++) { string firstName = (string)dataGridView1.Rows[i].Cells[0].Value; string lastName = (string)dataGridView1.Rows[i].Cells[1].Value; string phoneNo = (string)dataGridView1.Rows[i].Cells[2].Value; short idVal = (short)dataGridView1.Rows[i].Cells[3].Value; this.contactInfoTableAdapter.UpdateQuery(firstName, lastName, phoneNo, idVal); } The dataGridView has 4 columns, First Name, Last Name, Phone Number, and ID (which was created as an identity column when I initially formed the table in SQL). When I try to run this code, the three strings are returned properly, but dataGridView1.Rows[i].Cells[3].Value is returning "null". I'm guessing this is because it was created as an identity column rather than a normal column. What's a better way to retrieve the relevant ID value for my UpdateQuery?

    Read the article

  • How can I filter a Perl DBIx recordset with 2 conditions on the same column?

    - by BrianH
    I'm getting my feet wet in DBIx::Class - loving it so far. One problem I am running into is that I want to query records, filtering out records that aren't in a certain date range. It took me a while to find out how to do a "<=" type of match instead of an equality match: my $start_criteria = ">= $start_date"; my $end_criteria = "<= $end_date"; my $result = $schema->resultset('MyTable')->search( { 'status_date' => \$start_criteria, 'status_date' => \$end_criteria, }); The obvious problem with this is that since the filters are in a hash, I am overwriting the value for "status_date", and am only searching where the status_date <= $end_date. The SQL that gets executed is: SELECT me.* from MyTable me where status_date <= '9999-12-31' I've searched CPAN, Google and SO and haven't been able to figure out how to apply 2 conditions to the same column. All documentation I've been able to find shows how to filter on more than 1 column, but not 2 conditions on the same column. I'm sure I'm missing something obvious - hoping someone here can point it out to me? Thanks in advance! Brian
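    For reference, the SQL the search is meant to produce is shown below (the dates are placeholders); the difficulty in the question is getting DBIx::Class to emit both comparisons against the one column instead of letting the second hash key overwrite the first.

      -- Target SQL: both range conditions applied to the same column.
      SELECT me.*
      FROM   MyTable me
      WHERE  me.status_date >= '2010-01-01'   -- $start_date (placeholder)
      AND    me.status_date <= '2010-12-31';  -- $end_date (placeholder)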

    Read the article

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database, where I store the following BLOB (which contains a JSON object) and ID (for this JSON object). The JSON object contains a lot of different information. Say, "city:Los Angeles" and "state:California". There are about 500k of such records for now, but they are growing. And each JSON object is quite big. My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job", which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those (IDs) which qualify. Pros/cons? I understand that one might think that I should utilize simple SQL power for that, but the thing is that the JSON object structure is pretty "heavy": if I put it as SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to be sure it is utilizing the indexes, otherwise with a full scan it literally is a pain. And with such a structure, the only way "up" is vertical scaling. But I am not sure it's the best option for me, as I see how the JSON objects will grow (the data structure), and I see that the number of them will grow too. :-) Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.

    Read the article

  • Retrieving JSON via HTTP request from JBoss server

    - by Seth Solomon
    I am running into a java.net.SocketException: Unexpected end of file from server when I am trying to query some JSON from my JBoss server. I am hoping someone can spot where I am going wrong. Or does anyone have any suggestions of a better way to pull this JSON from my Jboss server? try{ URL u = new URL("http://localhost:9990/management/subsystem/datasources/data-source/MySQLDS/statistics?read-resource&include-runtime=true&recursive=true"); HttpURLConnection c = (HttpURLConnection) u.openConnection(); String encoded = Base64.encode(("username"+":"+"password").getBytes()); c.setRequestMethod("POST"); c.setRequestProperty("Authorization", "Basic "+encoded); c.setRequestProperty("Content-Type","application/json"); c.setUseCaches(false); c.setAllowUserInteraction(false); c.setConnectTimeout(5000); c.setReadTimeout(5000); c.connect(); int status = c.getResponseCode(); // throws the exception here switch (status) { case 200: case 201: BufferedReader br = new BufferedReader(new InputStreamReader(c.getInputStream())); StringBuilder sb = new StringBuilder(); String line; while ((line = br.readLine()) != null) { sb.append(line+"\n"); } br.close(); System.out.println(sb.toString()); break; default: System.out.println(status); break; } } catch (Exception e) { e.printStackTrace(); }

    Read the article

  • Where Not In OR Except simulation of SQL in LINQ to Objects

    - by Thinking
    Suppose I have two lists that hold the source file names and destination file names respectively. The Sourcefilenamelist has the files 1.txt, 2.txt, 3.txt, 4.txt while the Destinationlist has 1.txt, 2.txt. I need to write a LINQ query to find out which files in the source list are absent from the destination list; e.g. here the output will be 3.txt and 4.txt. I have done this with a foreach statement, but now I want to do the same by using LINQ (C#). Edit: My code is List<FileList> sourceFileNames = new List<FileList>(); sourceFileNames.Add(new FileList { fileNames = "1.txt" }); sourceFileNames.Add(new FileList { fileNames = "2.txt" }); sourceFileNames.Add(new FileList { fileNames = "3.txt" }); sourceFileNames.Add(new FileList { fileNames = "4.txt" }); List<FileList> destinationFileNames = new List<FileList>(); destinationFileNames.Add(new FileList { fileNames = "1.txt" }); destinationFileNames.Add(new FileList { fileNames = "2.txt" }); IEnumerable<FileList> except = sourceFileNames.Except(destinationFileNames); And FileList is a simple class with only one property, fileNames, of type string.
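    For reference, the set operation being simulated, written as the SQL it mirrors (table and column names are illustrative). Note that Except on a List<FileList> uses the default equality comparer, so it compares object references unless FileList defines value equality.

      -- The "source minus destination" operation, as SQL:
      SELECT s.fileNames FROM SourceFiles s
      EXCEPT
      SELECT d.fileNames FROM DestinationFiles d;

      -- Equivalent NOT IN form:
      SELECT s.fileNames
      FROM   SourceFiles s
      WHERE  s.fileNames NOT IN (SELECT d.fileNames FROM DestinationFiles d);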

    Read the article

  • Using Linq to group a list of objects into a new grouped list of list of objects

    - by Simon G
    Hi, I don't know if this is possible in LINQ but here goes... I have an object: public class User { public int UserID { get; set; } public string UserName { get; set; } public int GroupID { get; set; } } I return a list that may look like the following: List<User> userList = new List<User>(); userList.Add( new User { UserID = 1, UserName = "UserOne", GroupID = 1 } ); userList.Add( new User { UserID = 2, UserName = "UserTwo", GroupID = 1 } ); userList.Add( new User { UserID = 3, UserName = "UserThree", GroupID = 2 } ); userList.Add( new User { UserID = 4, UserName = "UserFour", GroupID = 1 } ); userList.Add( new User { UserID = 5, UserName = "UserFive", GroupID = 3 } ); userList.Add( new User { UserID = 6, UserName = "UserSix", GroupID = 3 } ); I want to be able to run a LINQ query on the above list that groups all the users by GroupID. So the output will be a list of user lists that contains users (if that makes sense?). So the output would be something like: GroupedUserList UserList UserID = 1, UserName = "UserOne", GroupID = 1 UserID = 2, UserName = "UserTwo", GroupID = 1 UserID = 4, UserName = "UserFour", GroupID = 1 UserList UserID = 3, UserName = "UserThree", GroupID = 2 UserList UserID = 5, UserName = "UserFive", GroupID = 3 UserID = 6, UserName = "UserSix", GroupID = 3 I've tried using the GroupBy LINQ clause but this seems to return a list of keys and it's not grouped correctly: var groupedCustomerList = userList.GroupBy( u => u.GroupID ).ToList(); Any help would be much appreciated. Thanks

    Read the article

  • Is there a good way to QuickCheck Happstack.State methods?

    - by Paul Kuliniewicz
    I have a set of Happstack.State MACID methods that I want to test using QuickCheck, but I'm having trouble figuring out the most elegant way to accomplish that. The problems I'm running into are: The only way to evaluate an Ev monad computation is in the IO monad via query or update. There's no way to create a purely in-memory MACID store; this is by design. Therefore, running things in the IO monad means there are temporary files to clean up after each test. There's no way to initialize a new MACID store except with the initialValue for the state; it can't be generated via Arbitrary unless I expose an access method that replaces the state wholesale. Working around all of the above means writing methods that only use features of MonadReader or MonadState (and running the test inside Reader or State instead of Ev. This means forgoing the use of getRandom or getEventClockTime and the like inside the method definitions. The only options I can see are: Run the methods in a throw-away on-disk MACID store, cleaning up after each test and settling for starting from initialValue each time. Write the methods to have most of the code run in a MonadReader or MonadState (which is more easily testable), and rely on a small amount of non-QuickCheck-able glue around it that calls getRandom or getEventClockTime as necessary. Is there a better solution that I'm overlooking?

    Read the article

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to be able to wrap my head around it. Say I have a collection of objects of the form {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...} Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type in each day. Also, I'd like to make it unique by user_id, ie. if a user has more events in the same day, count it only once. I'm trying to do this with map/reduce. I do db.logs.mapReduce(function() { emit(this.type, 1); }, function(k, vals) { var total = 0; for (var i = 0; i < vals.length; i++) total += vals[i]; return total; }}) which nicely groups by type, but now, how can I group by date at the same time? Seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness? I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!
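    For reference, one reading of the aggregation being described, written as the SQL GROUP BY it is meant to mirror (the collection name logs comes from the question; COUNT(DISTINCT user_id) captures the "count each user only once" requirement per day and type):

      -- One row per (date, type), counting each user at most once.
      SELECT date, type, COUNT(DISTINCT user_id) AS events
      FROM   logs
      GROUP BY date, type;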

    Read the article

  • Network Authentication when running exe from WMI

    - by Andy
    Hi, I have a C# exe that needs to be run using WMI and access a network share. However, when I access the share I get an UnauthorizedAccessException. If I run the exe directly the share is accessible. I am using the same user account in both cases. There are two parts to my application, a GUI client that runs on a local PC and a backend process that runs on a remote PC. When the client needs to connect to the backend it first launches the remote process using WMI (code reproduced below). The remote process does a number of things including accessing a network share using Directory.GetDirectories() and reports back to the client. When the remote process is launched automatically by the client using WMI, it cannot access the network share. However, if I connect to the remote machine using Remote Desktop and manually launch the backend process, access to the network share succeeds. The user specifed in the WMI call and the user logged in for the Remote Desktop session are the same, so the permissions should be the same, shouldn't they? I see in the MSDN entry for Directory.Exists() it states "The Exists method does not perform network authentication. If you query an existing network share without being pre-authenticated, the Exists method will return false." I assume this is related? How can I ensure the user is authenticated correctly in a WMI session? ConnectionOptions opts = new ConnectionOptions(); opts.Username = username; opts.Password = password; ManagementPath path = new ManagementPath(string.Format("\\\\{0}\\root\\cimv2:Win32_Process", remoteHost)); ManagementScope scope = new ManagementScope(path, opts); scope.Connect(); ObjectGetOptions getOpts = new ObjectGetOptions(); using (ManagementClass mngClass = new ManagementClass(scope, path, getOpts)) { ManagementBaseObject inParams = mngClass.GetMethodParameters("Create"); inParams["CommandLine"] = commandLine; ManagementBaseObject outParams = mngClass.InvokeMethod("Create", inParams, null); }

    Read the article

  • Optimal two variable linear regression calculation

    - by Dave Jarvis
    Problem Am looking to apply the y = mx + b equation (where m is SLOPE, b is INTERCEPT) to a data set, which is retrieved as shown in the SQL code. The values from the (MySQL) query are: SLOPE = 0.0276653965651912 INTERCEPT = -57.2338357550468 SQL Code SELECT ((sum(t.YEAR) * sum(t.AMOUNT)) - (count(1) * sum(t.YEAR * t.AMOUNT))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as SLOPE, ((sum( t.YEAR ) * sum( t.YEAR * t.AMOUNT )) - (sum( t.AMOUNT ) * sum(power(t.YEAR, 2)))) / (power(sum(t.YEAR), 2) - count(1) * sum(power(t.YEAR, 2))) as INTERCEPT, FROM (SELECT D.AMOUNT, Y.YEAR FROM CITY C, STATION S, YEAR_REF Y, MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 8590 AND -- Find all the stations within a 15 unit radius ... -- SQRT( POW( C.LATITUDE - S.LATITUDE, 2 ) + POW( C.LONGITUDE - S.LONGITUDE, 2 ) ) < 15 AND -- Gather all known years for that station ... -- S.STATION_DISTRICT_ID = Y.STATION_DISTRICT_ID AND -- The data before 1900 is shaky; insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '001' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR ORDER BY Y.YEAR ) t Data The data is visualized here: Question The following results (to calculate the start and end points of the line) appear incorrect. Why are the results off by ~10 degrees (e.g., outliers skewing the data)? (1900 * 0.0276653965651912) + (-57.2338357550468) = -4.66958228 (2009 * 0.0276653965651912) + (-57.2338357550468) = -1.65405406 I would have expected the 1900 result to be around 10 (not -4.67) and the 2009 result to be around 11.50 (not -1.65). Related Sites Least absolute deviations Robust regression Thank you!
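    As a sanity check, the textbook least-squares formulas the query implements are (with x = YEAR, y = AMOUNT, n = row count):

      m = (n*Σxy - Σx*Σy) / (n*Σx² - (Σx)²)
      b = (Σy*Σx² - Σx*Σxy) / (n*Σx² - (Σx)²)  =  (Σy - m*Σx) / n

    The SQL's numerator and denominator are each negated relative to these, which cancels out, and plugging 1900 into y = mx + b with the quoted coefficients does give -4.67, so the arithmetic is internally consistent. The surprise is therefore more likely in the rows feeding the fit: the inner query selects D.AMOUNT but groups only by Y.YEAR, so MySQL returns an arbitrary AMOUNT for each year rather than an aggregate such as AVG(D.AMOUNT), which can easily skew the slope and intercept.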

    Read the article

  • Is there a tag in XHTML that you can put anywhere in the body - even inside TABLE elements?

    - by Iain Fraser
    I would like to be able to place an empty tag anywhere in my document as a marker that can be addressed by jQuery. However, it is important that the XHTML still validates. To give you a bit of background as to what I'm doing: I've compared the current and previous versions of a particular document and I'm placing markers in the html where the differences are. I'm then intending to use jQuery to highlight the parent block-level elements when highlightchanges=true is in the URL's query string. At the moment I'm using <span> tags but it occurred to me that this sort of thing wouldn't validate: <table> <tr> <td>Old row</td> </tr> <span class="diff"></span><tr> <td>Just added</td> </tr> </table> So is there a tag I can use anywhere? Meta tag maybe? Thanks for your help! Iain

    Read the article

  • How can I solve this NHibernate Querying in an n-tier architecture?

    - by Tyler Wright
    I've hit a wall with trying to decouple NHibernate from my services layer. My architecture looks like this: web - services - repositories - nhibernate - db I want to be able to spawn nhibernate queries from my services layer and possibly my web layer without those layers knowing what orm they are dealing with. Currently, I have a find method on all of my repositories that takes in IList<object[]> criteria. This allows me to pass in a list of criteria such as new object() {"Username", usernameVariable}; from anywhere in my architecture. NHibernate takes this in and creates a new Criteria object and adds in the passed in criteria. This works fine for basic searches from my service layer, but I would like to have the ability to pass in a query object that my repository translates into an NHibernate Criteria. Really, I would love to implement something like what is described in this question: Is there value in abstracting nhibernate criterion. I'm just not finding any good resources on how to implement something like this. Is the method described in that question a good approach? If so, could anyone provide some pointers on how to implement such a solution?

    Read the article

  • singleton factory connection pdo

    - by Scarface
    Hey guys I am having a lot of trouble trying to understand this and I was just wondering if someone could help me with some questions. I found some code that is supposed to create a connection with pdo. The problem I was having was having my connection defined within functions. Someone suggested globals but then pointed to a 'better' solution http://stackoverflow.com/questions/130878/global-or-singleton-for-database-connection. My questions with this code are. PS I cannot format this code on this page so see the link if you cannot read What is the point of the connection factory? What goes inside new ConnectionFactory(...) When the connection is defined $db = new PDO(...); why is there no try or catch (I use those for error handling)? Does this then mean I have to use try and catch for every subsequent query? class ConnectionFactory { private static $factory; public static function getFactory() { if (!self::$factory) self::$factory = new ConnectionFactory(...); return self::$factory; } private $db; public function getConnection() { if (!$db) $db = new PDO(...); return $db; } } function getSomething() { $conn = ConnectionFactory::getFactory()->getConnection(); . . . }

    Read the article

  • How to get height for NSAttributedString at a fixed width

    - by bonaldi
    I want to do some drawing of NSAttributedStrings in fixed-width boxes, but am having trouble calculating the right height they'll take up when drawn. So far, I've tried: Calling - (NSSize) size, but the results are useless (for this purpose), as they'll give whatever width the string desires. Calling - (void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options with a rect shaped to the width I want and NSStringDrawingUsesLineFragmentOrigin in the options, exactly as I'm using in my drawing. The results are ... difficult to understand; certainly not what I'm looking for. (As is pointed out in a number of places, including this Cocoa-Dev thread). Creating a temporary NSTextView and doing: [[tmpView textStorage] setAttributedString:aString]; [tmpView setHorizontallyResizable:NO]; [tmpView sizeToFit]; When I query the frame of tmpView, the width is still as desired, and the height is often correct ... until I get to longer strings, when it's often half the size that's required. (There doesn't seem to be a max size being hit: one frame will be 273.0 high (about 300 too short), the other will be 478.0 (only 60-ish too short)). I'd appreciate any pointers, if anyone else has managed this.

    Read the article

  • t-sql most efficient row to column? crosstab for xml path, pivot

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not actual schema below, but concept is similar) in both fixed width and delimited formats. The below FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, can take awhile. select orderid ,REPLACE(( SELECT ' ' + CAST(ProductId as varchar) FROM _details d WHERE d.OrderId = o.OrderId ORDER BY d.OrderId,d.DetailId FOR XML PATH('') ),'&#x20;','') as Products from _orders o I've looked at pivot but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with the column names either since the output of the child rows will either be a fixed width string or a delimited string. For example, given the following tables: OrderId CustomerId ----------- ----------- 1 1 2 2 3 3 DetailId OrderId ProductId ----------- ----------- ----------- 1 1 100 2 1 158 3 1 234 4 2 125 5 3 101 6 3 105 7 3 212 8 3 250 for an order I need to output: orderid Products ----------- ----------------------- 1 100 158 234 2 125 3 101 105 212 250 or orderid Products ----------- ----------------------- 1 100|158|234 2 125 3 101|105|212|250 Thoughts or suggestions? I am using SQL Server 2k5. Example Setup: create table _orders ( OrderId int identity(1,1) primary key nonclustered ,CustomerId int ) create table _details ( DetailId int identity(1,1) primary key nonclustered ,OrderId int ,ProductId int ) insert into _orders (CustomerId) select 1 union select 2 union select 3 insert into _details (OrderId,ProductId) select 1,100 union select 1,158 union select 1,234 union select 2,125 union select 3,105 union select 3,101 union select 3,212 union select 3,250 using FOR XML PATH: select orderid ,REPLACE(( SELECT ' ' + CAST(ProductId as varchar) FROM _details d WHERE d.OrderId = o.OrderId ORDER BY d.OrderId,d.DetailId FOR XML PATH('') ),'&#x20;','') as Products from _orders o which outputs what I want, however is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~ 4 hours. orderid Products ----------- ----------------------- 1 100 158 234 2 125 3 101 105 212 250

    Read the article

  • Escape Quote in C# for javascript consumption

    - by Jason
    I have a ASP.Net web handler that returns results of a query in JSON format public static String dt2JSON(DataTable dt) { String s = "{\"rows\":["; if (dt.Rows.Count > 0) { foreach (DataRow dr in dt.Rows) { s += "{"; for (int i = 0; i < dr.Table.Columns.Count; i++) { s += "\"" + dr.Table.Columns[i].ToString() + "\":\"" + dr[i].ToString() + "\","; } s = s.Remove(s.Length - 1, 1); s += "},"; } s = s.Remove(s.Length - 1, 1); } s += "]}"; return s; } The problem is that sometimes the data returned has quotes in it and I would need to javascript-escape these so that it can be properly created into a js object. I need a way to find quotes in my data (quotes aren't there every time) and place a "/" character in front of them. Example response text (wrong): {"rows":[{"id":"ABC123","length":"5""}, {"id":"DEF456","length":"1.35""}, {"id":"HIJ789","length":"36.25""}]} I would need to escape the " so my response should be: {"rows":[{"id":"ABC123","length":"5\""}, {"id":"DEF456","length":"1.35\""}, {"id":"HIJ789","length":"36.25\""}]} Also, I'm pretty new to C# (coding in general really) so if something else in my code looks silly let me know.

    Read the article
