Search Results

Search found 7311 results on 293 pages for 'rows'.


  • Beginner question - Loop invariants (Specifically Ch.3 of "Accelerated C++")

    - by Owen
    Hi - as I said, a complete beginner question here. I'm currently working my way through "Accelerated C++" and just came across this in chapter 3:

        // invariant:
        // we have read count grades so far, and
        // sum is the sum of the first count grades
        while (cin >> x) {
            ++count;
            sum += x;
        }

    The authors follow this by explaining that the invariant needs special attention paid to it, because when the input is read into the variable x, we will have read count+1 grades and thus the invariant will be untrue. Similarly, once we have incremented the counter, the variable sum will no longer be the sum of the first count grades (in case you hadn't guessed, it's the traditional program for calculating student marks).

    What I don't understand is why this matters. Surely for just about any other loop, a similar statement would be true? For example, here is the book's first while loop (the output is filled in later):

        // invariant: we have written r rows so far
        while (r != rows) {
            // write a row of output
            std::cout << std::endl;
            ++r;
        }

    Once we have written the appropriate row of output, surely the invariant is false until we have incremented r, just as in the other example? It's probably something really obvious, so if anyone could enlighten me as to what makes these two cases different, that'd be great - and thanks in advance for taking the time to answer such a complete novice question. Owen
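
    A minimal sketch (my illustration, not from the book or the thread) of the usual answer: an invariant only has to hold at the moments the loop condition is tested, and it may be broken and then restored inside the body. The subtlety the authors flag in the grade-reading loop is that the read happens in the condition itself, so the instant the condition succeeds the invariant is already stale; the row-writing loop only breaks it between statements of the body:

        #include <iostream>

        int main() {
            int count = 0;
            double sum = 0, x;
            // invariant: we have read `count` grades, and `sum` is their sum
            while (std::cin >> x) {  // after a successful read, count lags by one:
                                     // the invariant is false right here
                ++count;             // still false: sum lags behind count
                sum += x;            // invariant restored before the next test
            }
            // the invariant holds again here, whichever way the loop exited
            if (count > 0)
                std::cout << "mean: " << sum / count << '\n';
            return 0;
        }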

  • Export the datagrid data to text in asp.net

    - by SRIRAM
    Problem: the compiler complains that there is no assembly reference/namespace for Database.

        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
        DataSet ds = db.ExecuteDataSet(selectCommandWrapper);
        StringBuilder str = new StringBuilder();
        for (int i = 0; i <= ds.Tables[0].Rows.Count - 1; i++)
        {
            for (int j = 0; j <= ds.Tables[0].Columns.Count - 1; j++)
            {
                str.Append(ds.Tables[0].Rows[i][j].ToString());
            }
            str.Append("<BR>");
        }
        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/vnd.text";
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        Response.Write(str.ToString());
        Response.End();
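
    Database, DatabaseFactory and DBCommandWrapper come from the Enterprise Library Data Access Application Block, so the project needs a reference to that assembly and its namespace imported. If the library isn't available, a plain ADO.NET sketch does the same job - hedged, with a placeholder connection string, and using a newline rather than "<BR>" since the download is a .txt file:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Text;

        DataSet ds = new DataSet();
        using (var conn = new SqlConnection("...your connection string..."))
        using (var cmd = new SqlCommand("sp_GetLatestArticles", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            using (var da = new SqlDataAdapter(cmd))
            {
                da.Fill(ds);   // Fill opens and closes the connection itself
            }
        }

        StringBuilder str = new StringBuilder();
        foreach (DataRow row in ds.Tables[0].Rows)
        {
            foreach (object cell in row.ItemArray)
            {
                str.Append(cell.ToString());
            }
            str.Append(Environment.NewLine);   // "<BR>" is HTML, not plain text
        }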

  • scraping text from multiple html files into a single csv file

    - by Lulu
    I have just over 1500 HTML pages (1.html to 1500.html). I have written code using Beautiful Soup that extracts most of the data I need but "misses" some of the data within the table.

    My input: e.g. file 1500.html

    My code:

        #!/usr/bin/env python
        import glob
        import codecs
        from BeautifulSoup import BeautifulSoup

        with codecs.open('dump2.csv', "w", encoding="utf-8") as csvfile:
            for file in glob.glob('*html*'):
                print 'Processing', file
                soup = BeautifulSoup(open(file).read())
                rows = soup.findAll('tr')
                for tr in rows:
                    cols = tr.findAll('td')
                    #print >> csvfile,"#".join(col.string for col in cols)
                    #print >> csvfile,"#".join(td.find(text=True))
                    for col in cols:
                        print >> csvfile, col.string
                    print >> csvfile, "==="
                print >> csvfile, "***"

    Output: one CSV file, with 1500 lines of text and columns of data. For some reason my code does not pull out all the required data but "misses" some, e.g. the Address1 and Address2 data at the start of the table do not come out. I modified the code to put in *** and === separators, and I then use Perl to produce a clean CSV file. Unfortunately I'm not sure how to rework my code to get all the data I'm looking for!
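
    A likely culprit (my guess, not confirmed in the thread): in BeautifulSoup, col.string is None whenever a cell contains nested tags (e.g. <td><b>Address1</b></td>), which is exactly how individual cells go missing. Collecting every text node in the cell instead - a hedged sketch in the same BeautifulSoup 3 style:

        for tr in rows:
            cols = tr.findAll('td')
            texts = []
            for col in cols:
                # findAll(text=True) returns all text nodes, however deeply nested
                texts.append(u''.join(col.findAll(text=True)).strip())
            print >> csvfile, u'#'.join(texts)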

  • Mark duplicates in MySql with php (without deleting)

    - by Adam
    So, I'm having some problems with a MySQL query (see other question) and decided to try a different approach. I have a database table with some duplicate rows, which I might actually need for future reference, so I don't want to remove them. What I'm looking for is a way to display the data without those duplicates, but without removing them. I can't use a simple select query (as described in the other question). So what I need to do is write code that does the following:

    1. Go through my db table.
    2. Spot duplicates in the "ip" column.
    3. Mark the first instance of each duplicate with "0" (in a column named "duplicate") and the rest with "1".

    This way I can later SELECT only the rows WHERE duplicate=0. NOTE: If your solution is related to the SELECT query, please read this other question first - there's a reason I'm not just using GROUP BY / DISTINCT. Thanks in advance.
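
    A hedged sketch of steps 2 and 3 in a single statement - it assumes the table is called log and has an auto-increment primary key id, neither of which is stated in the question:

        UPDATE log t
        JOIN (SELECT ip, MIN(id) AS keep_id
              FROM log
              GROUP BY ip) m ON t.ip = m.ip
        SET t.duplicate = IF(t.id = m.keep_id, 0, 1);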

  • Cannot parse tabular information from an HTML document

    - by Harikrishna
    I am parsing many HTML documents with the HTML Agility Pack, and I want to extract tabular information from each one. A document may contain any number of tables, but I only want the one table whose column headers are NAME, PHONE NO and ADDRESS. That table can be anywhere in the document - a document might have ten tables, and the one I want may itself sit inside several levels of nested tables - so it has to be found by its column header names, and once found I want to extract the information from it.

    At the moment I can find the table with the NAME, PHONE NO, ADDRESS headers and extract the information from it. What I do is: first I find all the tables in a document:

        foreach (var table in doc.DocumentNode.Descendants("table"))

    then for each table I get its rows:

        var rows = table.Descendants("tr");

    and then for each row I check whether it has the header names NAME, ADDRESS, PHONE NO; if so, I skip that row and extract all the information after it:

        foreach (var row in rows.Skip(rowNo))
        {
            var data = new List<string>();
            foreach (var column in row.Descendants("td"))
            {
                data.Add(properText);
            }
        }

    With this I can extract the information from almost every document. But the problem is that in some documents I cannot parse the information - for example, a document with ten tables where one of them contains many nested tables, and the table with the NAME, ADDRESS, PHONE NO headers is buried somewhere inside them. I want to find that table by its column header names wherever it is, even inside nested tables, parse the information from it, and skip the tabular information that surrounds it.
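
    One hedged way to pin down the innermost matching table (the helper and its name are mine, not from the question):

        using System.Linq;
        using HtmlAgilityPack;

        static bool HasHeaders(HtmlNode table)
        {
            var headers = new[] { "NAME", "PHONE NO", "ADDRESS" };
            string text = table.InnerText.ToUpperInvariant();
            return headers.All(h => text.Contains(h));
        }

        static HtmlNode FindTargetTable(HtmlDocument doc)
        {
            // a table's InnerText includes its nested tables, so every wrapper
            // around the real table also "matches"; the innermost match is the
            // one whose own nested tables do NOT match
            return doc.DocumentNode
                      .Descendants("table")
                      .Where(HasHeaders)
                      .FirstOrDefault(t => !t.Descendants("table").Any(HasHeaders));
        }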

  • Does replace into have a where clause?

    - by Lajos Arpad
    I'm writing an application using MySQL as the DBMS. We are downloading property offers, and there were some performance issues. The old architecture looked like this:

    A property is updated. If the number of affected rows is not 1, the update is not considered successful; otherwise the update query solves our problem. If the update was not successful and the number of affected rows is more than 1, we have duplicates, and we delete all of them. After deleting duplicates if needed, if the update was not successful, an insert happens.

    This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties). Our host told me to use replace into instead of deleting properties, and all deprecated properties should be considered as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find an example anywhere of replace into with a where clause (I'd like to replace a DEAD property with the new property, instead of deleting the old property and inserting a new one, to assure optimization). My query looked like this:

        replace into table_name(column1, ..., columnn)
        values(value1, ..., valuen)
        where ID = idValue

    Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know whether I'm wrong and there is a where clause for replace into. I've found an alternative solution, which is even better than replace into (simply an update query), because deletes happen behind the curtains if I use replace into, but I would like to know if I'm wrong when I say that replace into doesn't have a where clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html

    Thank you for your answers in advance, Lajos Árpád
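
    The linked manual page settles it: REPLACE has no WHERE clause. It decides which row to replace solely through the table's PRIMARY KEY or UNIQUE indexes - a hedged sketch with invented column names:

        -- if ID is the primary key, this deletes any existing row with ID = 42
        -- and inserts the new one; no WHERE is needed or allowed
        REPLACE INTO property (ID, price, status)
        VALUES (42, 100000, 'ALIVE');

        -- the asker's alternative: an UPDATE touches the row in place and
        -- avoids REPLACE's hidden DELETE + INSERT
        UPDATE property
        SET price = 100000, status = 'ALIVE'
        WHERE ID = 42;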

  • Is there a more useful explanation for UITableViewStylePlain?

    - by mystify
    From the docs:

        In the plain style, section headers and footers float above the content if the part of a complete section is visible. A table view can have an index that appears as a bar on the right hand side of the table (for example, "a" through "z"). You can touch a particular label to jump to the target section.

    I find that very hard to grasp. First, this part: "if the part of a complete section is visible". What do they mean by this? It reads like a paradox. Which one is it?

    A) The table must be exactly the height of that section. If I have 5 rows, and each row is 50px high, I must make it 5*50 high. The full section must be visible on the screen. Otherwise, if I have 100 rows but my table view is only 400 high, this will not apply: nothing will float above my content. Sounds wrong.

    B) It doesn't matter how high my table view actually is. The header and footer float above the content and I can scroll the section. Makes more sense, but runs completely against that nonsense-sounding phrase "if the part of a complete section is visible".

    Can anyone explain it better than they did?

  • Advice on optimizing speed for a stored procedure that uses views

    - by Belliez
    Based on a previous question, and with a lot of help from Damir Sudarevic (thanks), I have the following SQL code, which works great but is very slow. Can anyone suggest how I can speed this up and optimise it? I am now using SQL Server Express 2008 (not 2005 as per my original question). What this code does is retrieve parameters and their associated values from several tables and rotate the table into a form that can easily be compared. It's great for one or two rows of data, but now I am testing with 100 rows, and running GetJobParameters takes over 7 minutes to complete. Any advice is gratefully accepted; thank you in advance.

        /***********************************************************************************************
        ** CREATE A VIEW (VIRTUAL TABLE) TO ALLOW EASIER RETRIEVAL OF PARAMETERS
        ************************************************************************************************/
        CREATE VIEW dbo.vParameters AS
        SELECT m.MachineID AS [Machine ID]
              ,j.JobID AS [Job ID]
              ,p.ParamID AS [Param ID]
              ,t.ParamTypeID AS [Param Type ID]
              ,m.Name AS [Machine Name]
              ,j.Name AS [Job Name]
              ,t.Name AS [Param Type Name]
              ,t.JobDataType AS [Job DataType]
              ,x.Value AS [Measurement Value]
              ,x.Unit AS [Unit]
              ,y.Value AS [JobDataType]
        FROM dbo.Machines AS m
        JOIN dbo.JobFiles AS j ON j.MachineID = m.MachineID
        JOIN dbo.JobParams AS p ON p.JobFileID = j.JobID
        JOIN dbo.JobParamType AS t ON t.ParamTypeID = p.ParamTypeID
        LEFT JOIN dbo.JobMeasurement AS x ON x.ParamID = p.ParamID
        LEFT JOIN dbo.JobTrait AS y ON y.ParamID = p.ParamID
        GO

        -- Step 2
        CREATE VIEW dbo.vJobValues AS
        SELECT [Job Name]
              ,[Param Type Name]
              ,COALESCE(cast([Measurement Value] AS varchar(50)), [JobDataType]) AS [Val]
        FROM dbo.vParameters
        GO

        /***********************************************************************************************
        ** GET JOB PARAMETERS FROM THE VIEW JUST CREATED
        ************************************************************************************************/
        CREATE PROCEDURE GetJobParameters
        AS
        -- Step 3
        DECLARE @Params TABLE (
            id int IDENTITY (1,1)
           ,ParamName varchar(50)
        );
        INSERT INTO @Params (ParamName)
        SELECT DISTINCT [Name] FROM dbo.JobParamType

        -- Step 4
        DECLARE @qw TABLE (
            id int IDENTITY (1,1)
           ,txt nchar(300)
        )
        INSERT INTO @qw (txt)
        SELECT 'SELECT' UNION SELECT '[Job Name]';

        INSERT INTO @qw (txt)
        SELECT ',MAX(CASE [Param Type Name] WHEN ''' + ParamName + ''' THEN Val ELSE NULL END) AS [' + ParamName + ']'
        FROM @Params
        ORDER BY id;

        INSERT INTO @qw (txt)
        SELECT 'FROM dbo.vJobValues'
        UNION SELECT 'GROUP BY [Job Name]'
        UNION SELECT 'ORDER BY [Job Name]';

        -- Step 5
        --SELECT txt FROM @qw
        DECLARE @sql_output VARCHAR (MAX)
        SET @sql_output = ''           -- NULL + '' = NULL, so we need to have a seed
        SELECT @sql_output =           -- string to avoid losing the first line.
            COALESCE (@sql_output + txt + char (10), '')
        FROM @qw
        EXEC (@sql_output)
        GO

  • Entire table is pushed to the next page when rendering an SSRS 2005 report (as .pdf) in SSRS 2008

    - by Pwninstein
    I have a SSRS 2005 report that I'm rendering in SSRS 2008 as a .pdf. The report contains (among other things) a table that's very simple: header row, details, no footer, no aggregation, no grouping, keep together = false, pageBreakAtStart = false, pageBreakAtEnd = false, repeatHeaderOnNewPage = true. I resized the table to be much narrower than the body of the report just to be sure it wasn't extending beyond the bounds of the report, pushing everything down. But, no matter what I try, if some of the detail rows in that table would need to be pushed to the next page, then the ENTIRE TABLE is pushed to the next page, not just the extra rows. So my question is: Is there a workaround for this problem, is this a known issue, or is it even possible to get this 2005 report to render properly in 2008? NOTE: this is related to a question that I previously asked here, and is based on this MSDN forum post started by a coworker. This question is not the same as my previous question, as I'd like to see things work properly in with a 2005 report. If it's not possible, that would be good to know, as it would indicate that we need to upgrade one of our servers to SQL 2008. Thanks!

  • mysql error in php

    - by fusion
    I'm trying to run this PHP code, which should display a quote from MySQL, but I can't figure out where it is going wrong. The result variable is null or empty. Can someone help me out? Thanks!

        <?php
        include 'config.php';

        // 'text' is the name of your table that contains
        // the information you want to pull from
        $rowcount = mysql_query("select count(*) as rows from quotes");

        // Gets the total number of items pulled from database.
        while ($row = mysql_fetch_assoc($rowcount)) {
            $max = $row["rows"];
            //print_r ($max);
        }

        // Selects an item's index at random
        $rand = rand(1,$max)-1;
        print_r ($rand);

        $result = mysql_query("select * from quotes limit $rand, 1") or die ('Error: '.mysql_error());

        if (!$result or mysql_num_rows($result)) {
            echo "Empty";
        } else {
            while ($row = mysql_fetch_array($result)) {
                $randomOutput = $row['cQuotes'];
                echo '<p>' . $randomOutput . '</p>';
            }
        }
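
    A hedged diagnosis: the check is inverted - mysql_num_rows($result) is truthy exactly when the query *did* return rows, so "Empty" is printed on success and the quote branch never runs. Testing for zero rows instead:

        if (!$result || mysql_num_rows($result) == 0) {
            echo "Empty";
        } else {
            while ($row = mysql_fetch_array($result)) {
                echo '<p>' . $row['cQuotes'] . '</p>';
            }
        }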

  • SQL 2005 indexed queries slower than unindexed queries

    - by uos??
    Adding a seemingly perfect index is having an unexpectedly adverse effect on query performance...

        -- [Data] has a predictable structure and a simple clustered index of the primary key:
        ALTER TABLE [dbo].[Data] ADD PRIMARY KEY CLUSTERED ( [ID] )

        -- My query joins [Data] to itself, looking for a certain kind of "overlapping" records
        SELECT DISTINCT [Data].ID AS [ID]
        FROM dbo.[Data] AS [Data]
        JOIN dbo.[Data] AS [Compared]
            ON  [Data].[A] = [Compared].[A]
            AND [Data].[B] = [Compared].[B]
            AND [Data].[C] = [Compared].[C]
            AND ([Data].[D] = [Compared].[D] OR [Data].[E] = [Compared].[E])
            AND [Data].[F] <> [Compared].[F]
        WHERE 1=1
            AND [Data].[A] = @A
            AND @CS <= [Data].[C] AND [Data].[C] < @CE  -- between a range

    [Data] has about a quarter-million records so far, and 10% to 50% of the data satisfies the where clause, depending on @A, @CS, and @CE. As is, the query takes 1 second to return about 300 rows when querying 10%, and 30 seconds to return 3000 rows when querying 50% of the data. Curiously, the estimated/actual execution plan indicates two parallel Clustered Index Scans, even though the clustered index is only on the ID, which isn't part of the conditions of the query, only the output??

    If I add this hand-crafted [IDX_A_B_C_D_E_F] index, which I fully expected to improve performance, the query slows down by a factor of 8 (8 seconds for 10% and 4 minutes for 50%). The estimated/actual execution plans show an Index Seek, which seems like the right thing to be doing - but why so slow??

        CREATE UNIQUE INDEX [IDX_A_B_C_D_E_F] ON [dbo].[Data]
            ([A], [B], [C], [D], [E], [F])
            INCLUDE ([ID], [X], [Y], [Z]);

    The Database Engine Tuning Advisor suggests a similar index, with no noticeable difference in performance from this one. Moving AND [Data].[F] <> [Compared].[F] from the join condition to the where clause makes no difference in performance. I need these and other indexes for other queries. I'm sure I could hint that the query should use the clustered index, since that's currently winning - but we all know it is not as optimized as it could be, and without a proper index I can expect performance to get much worse with additional data. What gives?
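
    One hedged observation, not a verified fix: the WHERE clause is an equality on [A] plus a range on [C], but in [IDX_A_B_C_D_E_F] the untested [B] sits between them, so the seek can use only [A] and must examine everything under it. An index ordered equality-then-range keeps the seek tight while still covering the join columns:

        CREATE INDEX [IDX_A_C] ON [dbo].[Data] ([A], [C])
            INCLUDE ([B], [D], [E], [F]);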

  • django customizing form labels

    - by Henri
    I have a problem customizing labels in a Django form. This is the form code, in the file contact_form.py:

        from django import forms

        class ContactForm(forms.Form):
            def __init__(self, subject_label="Subject", message_label="Message",
                         email_label="Your email", cc_myself_label="Cc myself",
                         *args, **kwargs):
                super(ContactForm, self).__init__(*args, **kwargs)
                self.fields['subject'].label = subject_label
                self.fields['message'].label = message_label
                self.fields['email'].label = email_label
                self.fields['cc_myself'].label = cc_myself_label

            subject = forms.CharField(widget=forms.TextInput(attrs={'size':'60'}))
            message = forms.CharField(widget=forms.Textarea(attrs={'rows':15, 'cols':80}))
            email = forms.EmailField(widget=forms.TextInput(attrs={'size':'60'}))
            cc_myself = forms.BooleanField(required=False)

    The view I am using this in looks like:

        def contact(request, product_id=None):
            ...
            if request.method == 'POST':
                form = contact_form.ContactForm(request.POST)
                if form.is_valid():
                    ...
            else:
                form = contact_form.ContactForm(
                    subject_label = "Subject",
                    message_label = "Your Message",
                    email_label = "Your email",
                    cc_myself_label = "Cc myself")

    The strings used for initializing the labels will eventually depend on the language, i.e. English, Dutch, French etc. When I test the form, the email is not sent, and instead of the redirect page the form comes back with

        <QueryDict: {u'cc_myself': [u'on'], u'message': [u'message body'], u'email': [u'[email protected]'], u'subject': [u'test message']}>:

    where the subject label was before. This is obviously a dictionary representing the form fields and their contents. When I change contact_form.py so that the whole __init__ method is commented out (wrapped in a docstring), i.e. the initialization is disabled, everything works: the form data is sent by email and the redirect page shows up. So obviously something in the __init__ code isn't right. But what? I would really appreciate some help.
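
    A diagnosis I'm confident of, though it isn't stated in the excerpt: in the POST branch, ContactForm(request.POST) binds request.POST to subject_label - the first positional parameter - so the form is never bound to any data, and the QueryDict ends up rendered as a label. Moving the label overrides out of the positional slots fixes it; a hedged sketch:

        from django import forms

        class ContactForm(forms.Form):
            subject = forms.CharField(widget=forms.TextInput(attrs={'size': '60'}))
            message = forms.CharField(widget=forms.Textarea(attrs={'rows': 15, 'cols': 80}))
            email = forms.EmailField(widget=forms.TextInput(attrs={'size': '60'}))
            cc_myself = forms.BooleanField(required=False)

            def __init__(self, *args, **kwargs):
                # pop the label overrides before they reach forms.Form
                labels = {
                    'subject': kwargs.pop('subject_label', "Subject"),
                    'message': kwargs.pop('message_label', "Message"),
                    'email': kwargs.pop('email_label', "Your email"),
                    'cc_myself': kwargs.pop('cc_myself_label', "Cc myself"),
                }
                super(ContactForm, self).__init__(*args, **kwargs)
                for name, label in labels.items():
                    self.fields[name].label = label

    ContactForm(request.POST) then binds the data as intended, and the keyword labels still work in the GET branch.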

  • How can a data ellipse be superimposed on a ggplot2 scatterplot?

    - by Radu
    Hi, I have an R function which produces 95% confidence ellipses for scatterplots. The output looks like this, with a default of 50 points for each ellipse (50 rows):

                    [,1]        [,2]
        [1,] 0.097733810 0.044957994
        [2,] 0.084433494 0.050337990
        [3,] 0.069746783 0.054891438

    I would like to superimpose a number of such ellipses, one for each level of a factor called 'site', on a ggplot2 scatterplot produced by this command:

        > plat1 <- ggplot(mapping=aes(shape=site, size=geom), shape=factor(site)); plat1 + geom_point(aes(x=PC1.1,y=PC2.1))

    This is run on a dataset called dflat, which looks like this:

            site      geom        PC1.1        PC2.1      PC3.1        PC1.2      PC2.2
        1 Buhlen 1259.5649 -0.0387975838 -0.022889782 0.01355317  0.008705276 0.02441577
        2 Buhlen  653.6607 -0.0009398704 -0.013076251 0.02898955 -0.001345149 0.03133990

    The result is fine, but when I try to add the ellipse (let's say for this one site, called "Buhlen"):

        > plat1 + geom_point(aes(x=PC1.1,y=PC2.1)) +
            geom_path(data=subset(dflat, site="Buhlen"),
                      mapping=aes(x=ELLI(PC1.1,PC2.1)[,1], y=ELLI(PC1.1,PC2.1)[,2]))

    I get an error message:

        Error in data.frame(x = c(0.0977338099339815, 0.0844334944904515, 0.0697467834016782, :
          arguments imply differing number of rows: 50, 211

    I've managed to fix this in the past, but I cannot remember how. It seems that geom_path is relying on the same points rather than plotting new ones. Any help would be appreciated.
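
    A hedged sketch of the usual fix: the aes() in geom_path evaluates ELLI() against the 211-row data frame while producing only 50 values, hence the mismatch. Computing the ellipse first and handing geom_path its own 50-row data frame keeps the lengths consistent:

        buhlen <- subset(dflat, site == "Buhlen")   # note ==, not =
        ell <- as.data.frame(ELLI(buhlen$PC1.1, buhlen$PC2.1))
        names(ell) <- c("x", "y")

        plat1 + geom_point(aes(x = PC1.1, y = PC2.1)) +
            geom_path(data = ell, mapping = aes(x = x, y = y))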

  • optimizing an sql query using inner join and order by

    - by Sergio B
    I'm trying to optimize the following query, without success. Any idea how it could be indexed to prevent the temporary table and the filesort?

        EXPLAIN SELECT SQL_NO_CACHE `groups`.*
        FROM `groups`
        INNER JOIN `memberships` ON `groups`.id = `memberships`.group_id
        WHERE ((`memberships`.user_id = 1)
           AND (`memberships`.`status_code` = 1 AND `memberships`.`manager` = 0))
        ORDER BY groups.created_at DESC
        LIMIT 5;

        id | select_type | table       | type   | possible_keys            | key     | key_len | ref                                         | rows | Extra
         1 | SIMPLE      | memberships | ref    | grp_usr,grp,usr,grp_mngr | usr     | 5       | const                                       |    5 | Using where; Using temporary; Using filesort
         1 | SIMPLE      | groups      | eq_ref | PRIMARY                  | PRIMARY | 4       | sportspool_development.memberships.group_id |    1 |
        2 rows in set (0.00 sec)

    The indexes on `groups` (from SHOW INDEX; all BTREE, collation A, cardinality 6, no sub_part/packing, empty comments):

        Key_name                          | Seq_in_index | Column_name     | Non_unique | Null
        PRIMARY                           | 1            | id              | 0          |
        index_groups_on_name              | 1            | name            | 1          | YES
        index_groups_on_privacy_setting   | 1            | privacy_setting | 1          | YES
        index_groups_on_created_at        | 1            | created_at      | 1          | YES
        index_groups_on_id_and_created_at | 1            | id              | 1          |
        index_groups_on_id_and_created_at | 2            | created_at      | 1          | YES

    The indexes on `memberships` (likewise all BTREE, collation A, cardinality 2):

        Key_name                                                  | Seq_in_index | Column_name | Non_unique | Null
        PRIMARY                                                   | 1            | id          | 0          |
        grp_usr                                                   | 1            | group_id    | 0          | YES
        grp_usr                                                   | 2            | user_id     | 0          | YES
        grp                                                       | 1            | group_id    | 1          | YES
        usr                                                       | 1            | user_id     | 1          | YES
        grp_mngr                                                  | 1            | group_id    | 1          | YES
        grp_mngr                                                  | 2            | manager     | 1          | YES
        complex_index                                             | 1            | group_id    | 1          | YES
        complex_index                                             | 2            | user_id     | 1          | YES
        complex_index                                             | 3            | status_code | 1          | YES
        complex_index                                             | 4            | manager     | 1          | YES
        index_memberships_on_user_id_and_status_code_and_manager  | 1            | user_id     | 1          | YES
        index_memberships_on_user_id_and_status_code_and_manager  | 2            | status_code | 1          | YES
        index_memberships_on_user_id_and_status_code_and_manager  | 3            | manager     | 1          | YES
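
    A hedged sketch, untested: the filesort appears because the sort column, groups.created_at, lives on the non-driving table - MySQL starts from memberships, so no index can supply the order. Forcing the join to start from groups lets index_groups_on_created_at deliver rows already sorted, at the cost of probing memberships once per group until the LIMIT is filled:

        SELECT STRAIGHT_JOIN `groups`.*
        FROM `groups`
        INNER JOIN `memberships` ON `groups`.id = `memberships`.group_id
        WHERE `memberships`.user_id = 1
          AND `memberships`.status_code = 1
          AND `memberships`.manager = 0
        ORDER BY `groups`.created_at DESC
        LIMIT 5;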

  • LINQ to SQL - Left Outer Join with multiple join conditions

    - by dan
    I have the following SQL which I am trying to translate to LINQ:

        SELECT f.value
        FROM period as p
        LEFT OUTER JOIN facts AS f ON p.id = f.periodid AND f.otherid = 17
        WHERE p.companyid = 100

    I have seen the typical implementation of the left outer join (i.e. into x from y in x.DefaultIfEmpty() etc.) but am unsure how to introduce the other join condition (AND f.otherid = 17).

    EDIT: Why is the AND f.otherid = 17 condition part of the JOIN instead of the WHERE clause? Because f may not exist for some rows and I still want those rows to be included. If the condition is applied in the WHERE clause, after the JOIN, then I don't get the behaviour I want.

    Unfortunately this:

        from p in context.Periods
        join f in context.Facts on p.id equals f.periodid into fg
        from fgi in fg.DefaultIfEmpty()
        where p.companyid == 100 && fgi.otherid == 17
        select f.value

    seems to be equivalent to this:

        SELECT f.value
        FROM period as p
        LEFT OUTER JOIN facts AS f ON p.id = f.periodid
        WHERE p.companyid = 100 AND f.otherid = 17

    which is not quite what I'm after.
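
    A hedged sketch of the standard workaround: filter the joined group *before* DefaultIfEmpty(), which keeps the extra condition inside the join so unmatched periods still survive:

        var q = from p in context.Periods
                where p.companyid == 100
                join f in context.Facts on p.id equals f.periodid into fg
                from fgi in fg.Where(f => f.otherid == 17).DefaultIfEmpty()
                select new { p.id, fgi };   // fgi is null where no fact matches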

  • Linq to SQL case sensitivity causing problems

    - by Roger Lipscombe
    I've seen this question, but that's asking for case-insensitive comparisons when the database is case-sensitive. I'm having a problem with the exact opposite. I'm using SQL Server 2005, and my database collation is set to Latin1_General_CI_AS. I've got a table, "User", like this:

        CREATE TABLE [dbo].[User] (
            [Id] [int] IDENTITY(1,1) NOT NULL,
            [Name] [nvarchar](max) NOT NULL,
            CONSTRAINT [PK_Example] PRIMARY KEY CLUSTERED ( [Id] ASC )
        )

    And I'm using the following code to populate it:

        string[] names = new[] { "Bob", "bob", "BoB" };
        using (MyDataContext dataContext = new AppCompatDataContext())
        {
            foreach (var name in names)
            {
                string s = name;
                if (dataContext.Users.SingleOrDefault(u => u.Name == s) == null)
                    dataContext.Users.InsertOnSubmit(new User { Name = name });
            }
            dataContext.SubmitChanges();
        }

    When I run this the first time, I end up with "Bob", "bob" and "BoB" in the table. When I run it again, I get an InvalidOperationException, "Sequence contains more than one element", because the query against the database returns all 3 rows, and...

        SELECT * FROM [User] WHERE Name = 'bob'

    ...is case-insensitive. That is: when I'm inserting rows, LINQ to SQL appears to use C# case-sensitive comparisons; when I query later, LINQ to SQL uses SQL Server case-insensitive comparisons. I'd like the initial insert to use case-insensitive comparisons, but when I change the code as follows...

        if (dataContext.Users.SingleOrDefault(
                u => u.Name.Equals(s, StringComparison.InvariantCultureIgnoreCase)) == null)

    ...I get a NotSupportedException: "Method 'Boolean Equals(System.String, System.StringComparison)' has no supported translation to SQL."

    Question: how do I get the initial insert to be case-insensitive or, more precisely, to match the collation of the column in the database?

    Update: This doesn't appear to be my problem. My problem appears to be that SingleOrDefault doesn't actually look at the pending inserts at all.

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List 21st Aug 09]

    Help me compile a list of all the advantages & disadvantages of building an application on the Google App Engine.

    Pros:
    1) No need to buy servers or server space (no maintenance).
    2) Makes solving the problem of scaling much easier.

    Cons:
    1) Locked into Google App Engine?
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure-Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (the JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.

    Known issues: http://code.google.com/p/googleappengine/issues/list

    Hard limits:
        Apps per developer    - 10
        Time per request      - 30 sec
        Files per app         - 3,000
        HTTP response size    - 10 MB
        Datastore item size   - 1 MB
        Application code size - 150 MB

    Pro or con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX-compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.

  • ADO.NET Batch Insert with over 2000 parameters

    - by Liming
    Hello all, I'm using Enterprise Library, but the idea is the same. I have a SqlStringCommand, and the SQL is constructed using a StringBuilder in the form "insert into table (column1, column2, column3) values (@param1-X, @param2-X, @param3-X) ", where "X" comes from a for loop of about 700 rows:

        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 700; i++)
        {
            sb.Append("insert into table (column1, column2, column3) values (@param1-" + i + ", @param2-" + i + ", @param3-" + i + ") ");
        }

    followed by constructing a command object and injecting all the parameters with their values into it. Essentially: 700 rows with 3 parameters each, so I ended up with 2100 parameters for this one SQL statement. It ran fine for a few days, and suddenly I got this error:

        A severe error occurred on the current command. The results, if any, should be discarded.
        at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
        at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
        at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
        at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString)
        at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async)
        at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result)
        at System.Data.SqlClient.SqlCommand.InternalExecuteNon

    Any pointers are greatly appreciated.
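
    A hedged pointer: SQL Server caps a single request at 2100 parameters, and 700 rows x 3 columns sits right at that ceiling, so any batch that grows even slightly will start failing. Chunking the insert keeps each command safely under the limit (a sketch only; `rows` and the command-building step are placeholders for the existing code):

        const int maxParams = 2000;            // headroom under the 2100 cap
        int rowsPerBatch = maxParams / 3;      // 3 parameters per row

        for (int start = 0; start < rows.Count; start += rowsPerBatch)
        {
            var batch = rows.Skip(start).Take(rowsPerBatch).ToList();
            // build and execute one multi-insert command for `batch`,
            // exactly as before, but with at most rowsPerBatch * 3 parameters
        }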

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

  • How to get an image's coordinates on a JPanel

    - by Jessy
    This question is related to my previous question http://stackoverflow.com/questions/2376027/how-to-generate-cartesian-coordinate-x-y-from-gridbaglayout

    I have successfully got the coordinates of each picture; however, when I compare the coordinates printed by System.out.println with the placement of the images on the screen, they seem to be wrong. E.g. if on the screen the x point of the first picture is obviously on cell 2, which is at a coordinate of 20, the program shows x=1. Here is part of the code:

        public Grid() {
            setPreferredSize(new Dimension(600, 600));
            ...
            setLayout(new GridBagLayout());
            GridBagConstraints gc = new GridBagConstraints();
            gc.weightx = 1d;
            gc.weighty = 1d;
            gc.insets = new Insets(0, 0, 0, 0); // top, left, bottom, right
            gc.fill = GridBagConstraints.BOTH;

            JLabel[][] label = new JLabel[ROWS][COLS];
            Random rand = new Random();

            // fill the panel with labels
            for (int i = 0; i < IMAGES; i++) {
                ImageIcon icon = createImageIcon("myPics.jpg");
                int r, c;
                do { // pick a random cell which is empty
                    r = (int) Math.floor(Math.random() * ROWS);
                    c = (int) Math.floor(Math.random() * COLS);
                } while (label[r][c] != null);

                // randomly scale the images
                int x = rand.nextInt(50) + 30;
                int y = rand.nextInt(50) + 30;
                Image image = icon.getImage().getScaledInstance(x, y, Image.SCALE_SMOOTH);
                icon.setImage(image);

                JLabel lbl = new JLabel(icon); // instantiate GUI components
                gc.gridx = r;
                gc.gridy = c;
                add(lbl, gc); // add(component, constraintObj);
                label[r][c] = lbl;
            }

    I checked the coordinates with this code:

        Component[] components = getComponents();
        for (Component component : components) {
            System.out.println(component.getBounds());
        }
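
    Two hedged observations, not from the thread: gridx/gridy are *cell indices* (column, row), not pixel coordinates - so comparing x=1 with a 20px screen position mixes the two systems - and the code assigns the row r to gridx, which expects the column. A sketch of the corrected assignment:

        gc.gridx = c;   // gridx is the COLUMN index
        gc.gridy = r;   // gridy is the ROW index
        add(lbl, gc);
        label[r][c] = lbl;

        // pixel positions are only meaningful after the container has been
        // laid out (e.g. after pack()/setVisible(true)); only then does
        // getBounds() on each label return its on-screen rectangle in pixels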

  • Reading, Writing, Editing BLOBs through DataTables and DataRows

    - by Soham
    Consider this piece of code:

        DataSet ds = new DataSet();
        SQLiteDataAdapter Da = new SQLiteDataAdapter(Command);
        Da.Fill(ds);
        DataTable dt = ds.Tables[0];
        bool PositionExists;
        if (dt.Rows.Count > 0) {
            PositionExists = true;
        } else {
            PositionExists = false;
        }
        if (PositionExists) {
            //dt.Rows[0].Field<>("Date")
        }

    Here the "Date" field is a BLOB. My questions:

    a. Will reading through the DataAdapter create any problems later on when I am working with BLOBs? More specifically, will it read the BLOB properly?

    b. That was the read part. Now for writing the BLOB to the DB: it is actually a queue, i.e. I am trying to store a queue in SQLite using a BLOB. Will Conn.ExecuteNonQuery() serve my purpose?

    c. When I read the BLOB back from the DB, can I edit it as the original datatype it had in the C# environment? So, in this context: the Date field was a queue, I wrote it back as a BLOB; when reading it back, can I access (and edit) it as a queue?

        { Queue - BLOB  - ?  }
        { C#    - MySQL - C# }

    Thank You. Soham
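
    A hedged sketch of the round trip: a BLOB column stores raw bytes, so the queue has to be serialized going in and deserialized coming out - the DataAdapter will happily hand the column back as a byte[], but it cannot know it was once a queue. The names and the DateTime element type below are my assumptions:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        static byte[] QueueToBlob(Queue<DateTime> q)
        {
            var bf = new BinaryFormatter();
            using (var ms = new MemoryStream())
            {
                bf.Serialize(ms, q);
                return ms.ToArray();   // bind this byte[] to the BLOB parameter
            }
        }

        static Queue<DateTime> BlobToQueue(byte[] blob)
        {
            var bf = new BinaryFormatter();
            using (var ms = new MemoryStream(blob))
            {
                // a fully editable queue again
                return (Queue<DateTime>)bf.Deserialize(ms);
            }
        }

    Writing via ExecuteNonQuery() works as long as the byte[] is passed as a command parameter rather than spliced into the SQL text.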

  • Silverlight 4 race condition with DataGrid master details control

    - by Simon_Weaver
    Basically I want a DataGrid (master) and details (a textbox), where the DataGrid is disabled while the details are being edited (forcing people to save/cancel)... Here's what I have. A DataGrid which serves as my master data:

        <data:DataGrid IsEnabled="{Binding CanLoad, ElementName=dsReminders}"
                       ItemsSource="{Binding Data, ElementName=dsReminders}" >

    Its data comes from a DomainDataSource:

        <riaControls:DomainDataSource Name="dsReminders" AutoLoad="True" ...

    I have a bound TextBox which is the 'details' (very simple right now). There are buttons (Save/Cancel) which should become enabled when the user tries to edit the text. Unfortunately Silverlight doesn't support UpdateSourceTrigger=PropertyChanged, so I have to raise an event:

        <TextBox Text="{Binding SelectedItem.AcknowledgedNote, Mode=TwoWay,
                        UpdateSourceTrigger=Explicit, ElementName=gridReminders}"
                 TextChanged="txtAcknowledgedNote_TextChanged"/>

    The handler calls BindingExpression.UpdateSource to update the source immediately:

        private void txtAcknowledgedNote_TextChanged(object sender, TextChangedEventArgs e)
        {
            BindingExpression be = txtAcknowledgedNote.GetBindingExpression(TextBox.TextProperty);
            be.UpdateSource();
        }

    In other words, typing in the textbox causes CanLoad of the DomainDataSource to become false (because we're editing). This in turn disables the DataGrid (whose IsEnabled is bound to it) and enables the Cancel and Save buttons.

    However, I'm running into a race condition if I move quickly through rows in the DataGrid (just clicking random rows). Presumably TextChanged is being raised on the textbox and confusing the DomainDataSource, which then thinks there's been a change. So how should I disable the DataGrid while editing without hitting the race condition? One obvious solution would be to use KeyDown events to trigger the call to UpdateSource, but I always hate having to do that.

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters.

    We have faced performance problems almost from the beginning. We try to keep report display times under 10 seconds, but some reports take up to 25 seconds (especially for customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Express has a limit on database size.

    We are now at a point where we have to decide whether to give up real-time reports, or perhaps cut the history, to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking for third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks

  • Identifying and Resolving Oracle ITL Deadlock

    - by Allan
    I have an Oracle DB package that is routinely causing what I believe is an ITL (Interested Transaction List) deadlock. The relevant portion of a trace file is below.

        Deadlock graph:
                              ---------Blocker(s)--------  ---------Waiter(s)---------
        Resource Name         process session holds waits  process session holds waits
        TM-0000cb52-00000000       22     131  S                23     143         SS
        TM-0000ceec-00000000       23     143  SX               32     138  SX     SSX
        TM-0000cb52-00000000       30     138  SX               22     131         S

        session 131: DID 0001-0016-00000D1C   session 143: DID 0001-0017-000055D5
        session 143: DID 0001-0017-000055D5   session 138: DID 0001-001E-000067A0
        session 138: DID 0001-001E-000067A0   session 131: DID 0001-0016-00000D1C

        Rows waited on:
        Session 143: no row
        Session 138: no row
        Session 131: no row

    There are no bitmap indexes on this table, so that's not the cause. As far as I can tell, the lack of "Rows waited on" plus the "S" in the waiter's waits column likely indicates that this is an ITL deadlock. Also, the table is written to quite often (roughly 8 concurrent inserts or updates, as often as 240 times a minute), so an ITL deadlock seems like a strong possibility.

    I've increased the INITRANS parameter of the table and its indexes to 100 and increased PCTFREE on the table from 10 to 20 (then rebuilt the indexes), but the deadlocks are still occurring. The deadlock seems to happen most often during an update, but that could just be a coincidence, as I've only traced it a couple of times. My questions are two-fold:

    1) Is this actually an ITL deadlock?
    2) If it is an ITL deadlock, what else can be done to avoid it?
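
    One hedged thing to check (the table name below is invented): ALTER TABLE ... INITRANS only applies to blocks formatted *after* the change - existing blocks keep their old ITL allocation, so the deadlocks can persist in old blocks even when the dictionary shows the new setting. Rebuilding the segment applies it everywhere:

        ALTER TABLE my_table INITRANS 100;
        ALTER TABLE my_table MOVE;            -- rewrites the blocks with the new INITRANS

        -- MOVE leaves the indexes UNUSABLE; rebuild them with the same setting
        ALTER INDEX my_table_pk REBUILD INITRANS 100;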

  • Can NSCollectionView autoresize the width of its subviews to display one column

    - by littlecharva
    Hi, I have an NSCollectionView that contains a collection of CustomViews. Initially it tiled the subviews into columns and rows like a grid. I then set the Columns property in IB to 1, so now it just displays them one after another in rows. However, even though my CustomView is 400px wide, it's set to autoresize, the NSCollectionView is 400px wide, and it's set to 1 column, the subviews are drawn about 80px wide. I know I can get around this by calling:

        CGFloat width = [collectionView bounds].size.width;
        NSSize size = NSMakeSize(width, 85);
        [collectionView setMinItemSize:size];
        [collectionView setMaxItemSize:size];

    But putting this code in the awakeFromNib method of my WindowController only sets the correct width when the program launches. When I resize the window (and the NSCollectionView autoresizes as I've specified), the CustomViews stay at their initially set width.

    I'm happy to take care of resizing the subviews myself if need be, but I'm quite new to Cocoa and can't seem to find any articles explaining how to do such a thing. Can someone point me in the right direction? Anthony
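
    A hedged sketch of one way to keep the item size in step with the view: have the collection view post frame-change notifications and re-apply the min/max size whenever they fire (the observer method name is mine):

        // in awakeFromNib, alongside the existing sizing code:
        [collectionView setPostsFrameChangedNotifications:YES];
        [[NSNotificationCenter defaultCenter]
            addObserver:self
               selector:@selector(collectionViewFrameChanged:)
                   name:NSViewFrameDidChangeNotification
                 object:collectionView];

        // elsewhere in the WindowController:
        - (void)collectionViewFrameChanged:(NSNotification *)note
        {
            CGFloat width = [collectionView bounds].size.width;
            NSSize size = NSMakeSize(width, 85);
            [collectionView setMinItemSize:size];
            [collectionView setMaxItemSize:size];
        }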
