Search Results

Search found 17867 results on 715 pages for 'delete row'.

Page 626/715 | < Previous Page | 622 623 624 625 626 627 628 629 630 631 632 633  | Next Page >

  • Drawing and filling different polygons at the same time in MATLAB

    - by Hossein
    Hi, I have the code below. It loads a CSV file into memory. This file contains the coordinates for different polygons. Each row of the file has X,Y coordinates and a string which tells which polygon the data point belongs to. For example, a polygon named "Poly1" with 100 data points has 100 rows in this file, like:

        Poly1,X1,Y1
        Poly1,X2,Y2
        ...
        Poly1,X100,Y100
        Poly2,X1,Y1
        ...

    The Index.csv file has the number of data points (number of rows) for each polygon in Polygons.csv. These details are not important. The thing is: I can successfully extract the data points for each polygon using the code below. However, when I plot, the lines of different polygons are connected to each other and the plot looks crappy. I need the polygons to be separated (they are connected and overlap in some areas, though). I thought that by using "fill" I could actually see them better, but "fill" just fills every polygon that it can find, and that is not desirable; I only want to fill inside the polygons. Can someone help me? I can also send you my data points if necessary; they are less than 200 KB. Thanks

        [coordinates,routeNames,polygonData] = xlsread('Polygons.csv');
        index = dlmread('Index.csv');
        firstPointer = 0
        lastPointer = index(1)
        for Counter = 2:size(index)
            firstPointer = firstPointer + index(Counter) + 1
            hold on
            plot(coordinates(firstPointer:lastPointer,2), coordinates(firstPointer:lastPointer,1), 'r-')
            lastPointer = lastPointer + index(Counter)
        end

    Read the article

  • QSqlQuery UPDATE/INSERT DateTime with server's time (eg CURRENT_TIMESTAMP)

    - by Skinniest Man
    I am using QSqlQuery to insert data into a MySQL database. Currently all I care about is getting this to work with MySQL, but ideally I'd like to keep this as platform-independent as possible. What I'm after, in the context of MySQL, is to end up with code that effectively executes something like the following query:

        UPDATE table SET time_field=CURRENT_TIMESTAMP() WHERE id='5'

    The following code is what I have attempted, but it fails:

        QSqlQuery query;
        query.prepare("INSERT INTO table SET time_field=? WHERE id=?");
        query.addBindValue("CURRENT_TIMESTAMP()");
        query.addBindValue(5);
        query.exec();

    The error I get is:

        Incorrect datetime value: 'CURRENT_TIMESTAMP()' for column 'time_field' at row 1
        QMYSQL3: Unable to execute statement.

    I am not surprised, as I assume Qt is doing some type checking when it binds values. I have dug through the Qt documentation as well as I know how, but I can't find anything in the API designed specifically for supporting MySQL's CURRENT_TIMESTAMP() function, or that of any other DBMS. Any suggestions?

    Read the article

  • ADO.NET DataTable/DataRow Thread Safety

    - by Allen E. Scharfenberg
    Introduction

    A user reported to me this morning that he was having an issue with inconsistent results (namely, column values sometimes coming out null when they should not be) in some parallel execution code that we provide as part of an internal framework. This code has worked fine in the past and has not been tampered with lately, but it got me thinking about the following snippet.

    Code Sample

        lock (ResultTable)
        {
            newRow = ResultTable.NewRow();
        }
        newRow["Key"] = currentKey;
        foreach (KeyValuePair<string, object> output in outputs)
        {
            object resultValue = output.Value;
            newRow[output.Name] = resultValue != null ? resultValue : DBNull.Value;
        }
        lock (ResultTable)
        {
            ResultTable.Rows.Add(newRow);
        }

    (No guarantees that that compiles; it was hand-edited to mask proprietary information.)

    Explanation

    We have this cascading type of locking code in other places in our system, and it works fine, but this is the first instance of cascading locking code that I have come across that interacts with ADO.NET. As we all know, members of framework objects are usually not thread safe (which is the case in this situation), but the cascading locking should ensure that we are not reading from and writing to ResultTable.Rows concurrently. We are safe, right?

    Hypothesis

    Well, the cascading lock code does not ensure that we are not reading from or writing to ResultTable.Rows at the same time that we are assigning values to columns in the new row. What if ADO.NET uses some kind of buffer for assigning column values that is not thread safe--even when different object types are involved (DataTable vs. DataRow)? Has anyone run into anything like this before? I thought I would ask here at StackOverflow before beating my head against this for hours on end :)

    Conclusion

    Well, the consensus appears to be that changing the cascading lock to a full lock has resolved the issue. That is not the result that I expected, but the full-lock version has not produced the issue after many, many, many tests. The lesson: be wary of cascading locks used on APIs that you do not control. Who knows what may be going on under the covers!
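
    The full-lock variant mentioned in the conclusion, as a minimal sketch (ResultTable, currentKey and outputs are the names from the snippet above; note that KeyValuePair exposes .Key rather than .Name, and the ?? form is just a compact null-to-DBNull substitution):

        // Sketch only: serialize row creation, population and insertion under one lock.
        lock (ResultTable)
        {
            DataRow newRow = ResultTable.NewRow();
            newRow["Key"] = currentKey;

            foreach (KeyValuePair<string, object> output in outputs)
            {
                // Key/Value are the actual KeyValuePair members.
                newRow[output.Key] = output.Value ?? (object)DBNull.Value;
            }

            ResultTable.Rows.Add(newRow);
        }

    This trades some concurrency for correctness; whether that cost matters depends on how long the column assignments take relative to the rest of the parallel work.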

    Read the article

  • Which templating languages output HTML *as a tree of nodes*?

    - by alamar
    HTML is a tree of nodes, before all. It's not just text. However, most templating engines handle their input and output as if it were just text; they don't care what happens around their tags, their {$foo}'s and <% bar() %>'s, and they don't care about what they are outputting. Sometimes they happen to produce correct HTML, but that's just a coincidence; they didn't aim for that, all they wanted was to replace some funny marks in the text stream with their evaluation. There are a few templating engines which do treat their output as a set of nodes; XSLT and Haml come to mind. For some tasks, this has advantages: for example, you can automatically reformat (delete all empty text nodes, auto-indent, word-wrap). The result is guaranteed to be correct XML/SGML unless you use a strict subset of operations that can break that. Also, such a templating engine would automatically quote strings, differently in text nodes and in attributes, because it knows exactly whether you're writing an attribute or a text node. Moreover, it can conditionally remove a node from the output because it knows where it begins and ends, which is useful, and it can do other non-trivial node operations. You might not like XSLT for its verbosity or its functional style, but it really helps that your template is xmllint-able XML and your output is good SGML/XML. So the question is: which template engines do you know that treat their output as a set of correct nodes, not just as unstructured text? I know XSLT, Haml and some obscure Python-based one. Moar!

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating MP3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line. I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know; I didn't know "raw" files existed until recently, so I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder with the temporary raw files inside.
    5. Convert the final file from "raw" to MP3 and enjoy the finished result.

    I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" MP3s. The only other option I could think of would be to create a script that performs a similar process on files as they are added to the system, leaving only the files with the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join the files together in sequence to create a new audio file from the "concatenated files."

    Read the article

  • Django: DatabaseError column does not exist

    - by Rosarch
    I'm having a problem with Django 1.2.4. Here is a model:

        class Foo(models.Model):
            # ...
            ftw = models.CharField(blank=True)
            bar = models.ForeignKey(Bar)

    Right after flushing the database, I use the shell:

        Python 2.6.6 (r266:84292, Sep 15 2010, 15:52:39)
        [GCC 4.4.5] on linux2
        Type "help", "copyright", "credits" or "license" for more information.
        (InteractiveConsole)
        >>> from apps.foo.models import Foo
        >>> Foo.objects.all()
        Traceback (most recent call last):
          File "<console>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 67, in __repr__
            data = list(self[:REPR_OUTPUT_SIZE + 1])
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 82, in __len__
            self._result_cache.extend(list(self._iter))
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/query.py", line 271, in iterator
            for row in compiler.results_iter():
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 677, in results_iter
            for rows in self.execute_sql(MULTI):
          File "/usr/local/lib/python2.6/dist-packages/django/db/models/sql/compiler.py", line 732, in execute_sql
            cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/util.py", line 15, in execute
            return self.cursor.execute(sql, params)
          File "/usr/local/lib/python2.6/dist-packages/django/db/backends/postgresql_psycopg2/base.py", line 44, in execute
            return self.cursor.execute(query, args)
        DatabaseError: column foo_foo.bar_id does not exist
        LINE 1: ...t_omg", "foo_foo"."ftw", "foo_foo...

    What am I doing wrong here?

    Read the article

  • Why might my PHP log file not entirely be text?

    - by Fletcher Moore
    I'm trying to debug a plugin-bloated WordPress installation, so I've added a very simple homebrew logger that records all the callbacks, which are basically listed in a single, ultimately 250+ row multidimensional array in WordPress (I can't use print_r() because I need to catch them right before they are called). My logger line is:

        $logger->log("\t" . $callback . "\n");

    The logger produces a dandy text file in normal situations, but at two points during this particular task it adds something which causes my log file to no longer be encoded properly. gedit (I'm on Ubuntu) won't open the file, claiming not to understand the encoding. In vim, the culprit corrupt callback (which I could not find in the debugger when looking at the array) sits about in the middle and is printed as ^@lambda_546, and at the end of the file there's this cute guy: ^M. The ^M and ^@ are blue in my vim, which has no color theme set for .txt files. I don't know what that means. I tried adding an is_string($callback) condition, but I get the same results. Any ideas?

    Read the article

  • error in C++, what to do?: could not find a match for ostream::write(long *, unsigned int)

    - by Shantanu Gupta
    I am trying to write data stored in a binary file using Turbo C++, but it shows me this error:

        could not find a match for ostream::write(long *, unsigned int)

    I want to write 4 bytes of long data into that file. When I try to write data using a char pointer, it runs successfully, but I want to store a large value, e.g. 2454545454, which can only be stored in a long. I don't know how to convert 1 byte into bits; I have 1 byte of data as a character. Moreover, what I am trying to do is to convert 4 chars into a long and store the data in it, and on the other side I want to reverse this so as to retrieve how many bytes of data I have written.

        long *lmem;
        lmem = new long;
        *lmem = Tsize;
        fo.write(lmem, sizeof(long)); // error occurs here
        delete lmem;

    I am implementing steganography, and I have successfully stored a txt file inside an image; now I am trying to retrieve that file's data.

    Read the article

  • What is a good approach to preloading data?

    - by Bob Horn
    Are there best practices out there for loading data into a database, to be used with a new installation of an application? For example, for application foo to run, it needs some basic data before it can even be started. I've used a couple of options in the past.

    T-SQL for every row that needs to be preloaded:

        IF NOT EXISTS (SELECT * FROM Master.Site WHERE Name = @SiteName)
            INSERT INTO [Master].[Site] ([EnterpriseID], [Name], [LastModifiedTime], [LastModifiedUser])
            VALUES (@EnterpriseId, @SiteName, GETDATE(), @LastModifiedUser)

    Another option is a spreadsheet. Each tab represents a table, and data is entered into the spreadsheet as we realize we need it. Then a program can read this spreadsheet and populate the DB.

    There are complicating factors, including the relationships between tables, so it's not as simple as loading tables by themselves. For example, if we create Security.Member rows and then want to add those members to Security.Role, we need a way of maintaining that relationship. Another factor is that not all databases will be missing this data. Some locations will already have most of the data, and others (which may be new locations around the world) will start from scratch. Any ideas are appreciated.
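
    If the spreadsheet route is taken, the "program that reads the spreadsheet" can reuse the same idempotent IF NOT EXISTS pattern per row. A hedged C# sketch (the Master.Site columns follow the T-SQL above; the method shape, parameter types and use of System.Data.SqlClient are assumptions for illustration):

        // Seeds one Master.Site row only if it is not already present.
        static void SeedSite(SqlConnection conn, int enterpriseId, string siteName, string user)
        {
            const string sql = @"
                IF NOT EXISTS (SELECT * FROM Master.Site WHERE Name = @SiteName)
                    INSERT INTO [Master].[Site] ([EnterpriseID], [Name], [LastModifiedTime], [LastModifiedUser])
                    VALUES (@EnterpriseId, @SiteName, GETDATE(), @LastModifiedUser);";

            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@EnterpriseId", enterpriseId);
                cmd.Parameters.AddWithValue("@SiteName", siteName);
                cmd.Parameters.AddWithValue("@LastModifiedUser", user);
                cmd.ExecuteNonQuery();
            }
        }

    Because each row is guarded individually, the same seed run is safe against databases that already contain most of the data and against brand-new, empty ones.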

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (average row length ~100 bytes) from an Oracle 10G database table into SQL Server 2005 (over a WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The times below were calculated using tests on smaller amounts of data and then extrapolated to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool the data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a linked server to query the Oracle data directly at run time, to avoid bringing the data in at all. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball

    Read the article

  • How do I create rows with alternating colors for a UITableView on iPhone?

    - by Mat
    Hi all, I would like to alternate 2 colors for the rows, e.g. the first black, the second white, the third black, etc. My approach is like a basic programming exercise to determine whether a number is odd or not:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            cell = ((MainCell*)[tableView dequeueReusableCellWithIdentifier:CellIdentifier]);
            if (cell == nil) {
                NSArray *topLevelObjects = [[NSBundle mainBundle] loadNibNamed:@"MainCell" owner:self options:nil];
                for (id currentObject in topLevelObjects) {
                    if ([currentObject isKindOfClass:[UITableViewCell class]]) {
                        if ((indexPath.row % 2) == 0) {
                            [cell.contentView setBackgroundColor:[UIColor purpleColor]];
                        } else {
                            [cell.contentView setBackgroundColor:[UIColor whiteColor]];
                        }
                        cell = (MainCell *) currentObject;
                        break;
                    }
                }
            } else {
                AsyncImageView* oldImage = (AsyncImageView*) [cell.contentView viewWithTag:999];
                [oldImage removeFromSuperview];
            }
            return cell;
        }

    The problem is that when I scroll quickly, the background of the cells comes out wrong (for example, the last 2 cells black and the first 2 cells white, or something like that), but if I scroll slowly it works fine. I think the problem is the reusable-cell cache. Any ideas? TIA

    Read the article

  • C++ Memory Leak, Can't find where

    - by Nicholas
    I'm using Visual Studio 2008, developing an OpenGL window. I've created several classes for building a skeleton: one for joints, one for skin, one for a Body (which is a holder for several joints and the skin), and one for reading a skel/skin file. Within each of my classes I'm using pointers for most of my data, most of which are declared using = new int[XX]. I have a destructor for each class that deletes the pointers using delete [] XX. Within my GLUT display function I declare a Body, open the files and draw them, then delete the Body at the end of the display. But there's still a memory leak somewhere in the program. As time goes on, its memory usage just keeps increasing at a consistent rate, which I'm interpreting as something not getting deleted. I'm not sure if it's something in the GLUT display function that's just not deleting the Body class, or something else. I've followed the steps for memory leak detection in Visual Studio 2008 and it doesn't report any leak, but I'm not 100% sure it's working right for me. I'm not fluent in C++, so there may be something I'm overlooking; can anyone see it?

    Read the article

  • How to use custom UITableViewCell from Interface Builder?

    - by Krumelur
    I want to be able to design my own UITableViewCell in IB, but I keep getting a null ref exception when trying to access the label I defined in IB. Here's what I'm doing.

    In Interface Builder:

    - I removed the "View" and added a UITableViewCell instead.
    - Changed the class of the UITableViewCell to "TestCellView".
    - Added a UILabel to the cell.
    - Added an outlet "oLblText" to TestCellView and connected the UILabel to it.
    - Changed the identifier of the class to "TestCellView".

    Implement TestCellView.xib.cs:

        public partial class TestCellView : UITableViewCell
        {
            public TestCellView(string sKey) : base(UITableViewCellStyle.Default, sKey)
            {
            }

            public TestCellView(IntPtr oHandle) : base(oHandle)
            {
            }

            public string TestText
            {
                get { return this.oLblText.Text; }
                set
                {
                    // HERE I get the null ref exception!
                    this.oLblText.Text = value;
                }
            }
        }

    The TestCellView.designer.cs:

        [MonoTouch.Foundation.Register("TestCellView")]
        public partial class TestCellView
        {
            private MonoTouch.UIKit.UILabel __mt_oLblText;

        #pragma warning disable 0169
            [MonoTouch.Foundation.Connect("oLblText")]
            private MonoTouch.UIKit.UILabel oLblText
            {
                get
                {
                    this.__mt_oLblText = ((MonoTouch.UIKit.UILabel)(this.GetNativeField("oLblText")));
                    return this.__mt_oLblText;
                }
                set
                {
                    this.__mt_oLblText = value;
                    this.SetNativeField("oLblText", value);
                }
            }
        }

    In my table's source:

        public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
        {
            TestCellView oCell = (TestCellView)tableView.DequeueReusableCell("myCell");
            if (oCell == null)
            {
                // I suppose this is wrong, but how to do it correctly?
                // this == my UITableViewSource.
                NSBundle.MainBundle.LoadNib("TestCellView", this, null);
                oCell = new TestCellView("myCell");
            }
            oCell.TestText = "Cell " + indexPath.Row;
            return oCell;
        }

    Please note that I do NOT want a solution that involves a UIViewController for every cell. I have seen a couple of examples on the web doing this; I just think it is total overkill. What am I doing wrong?
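
    For reference, the pattern usually shown for nib-defined cells in MonoTouch is to take the cell instance that LoadNib itself returns instead of constructing a new one, since new TestCellView("myCell") never touches the nib and so leaves the outlet unset. A hedged sketch, not a definitive fix (the "myCell" identifier and TestCellView name follow the post; it assumes the cell's reuse identifier is set in IB and that MonoTouch.Foundation and MonoTouch.ObjCRuntime are in scope):

        // Sketch only: use the object deserialized from the nib as the cell.
        public override UITableViewCell GetCell(UITableView tableView, NSIndexPath indexPath)
        {
            var oCell = (TestCellView)tableView.DequeueReusableCell("myCell");
            if (oCell == null)
            {
                // LoadNib returns the nib's top-level objects; the first one is the cell.
                NSArray topLevel = NSBundle.MainBundle.LoadNib("TestCellView", tableView, null);
                oCell = MonoTouch.ObjCRuntime.Runtime.GetNSObject(topLevel.ValueAt(0)) as TestCellView;
            }
            oCell.TestText = "Cell " + indexPath.Row;
            return oCell;
        }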

    Read the article

  • Need to find number of new unique ID numbers in a MySQL table

    - by Nicholas
    I have an iPhone app out there that "calls home" to my server every time a user uses it. On my server, I create a row in a MySQL table each time, with the unique ID (similar to a serial number, aka UDID) for the device, the IP address, and other data. Table ClientLog columns: Time, UDID, etc., etc.

    What I'd like to know is the number of new devices (new unique UDIDs) on a given date, i.e. how many UDIDs were added to the table on a given date that don't appear before that date. Put plainly, this is the number of new users I gained that day. This is close, I think, but I'm not 100% there and not sure it's what I want:

        SELECT DISTINCT UDID
        FROM ClientLog a
        WHERE NOT EXISTS (
            SELECT *
            FROM ClientLog b
            WHERE a.UDID = b.UDID
            AND b.Time <= '2010-04-05 00:00:00'
        )

    I think the number of rows returned is the number of new unique users after the given date, but I'm not sure. And I want to extend the statement to limit it to a date range (specifying an upper bound as well).

    Read the article

  • Enhancing an 'ORDER BY' clause to judge condition by more than 1 integer

    - by Yvonne
    Hi folks, I have some PHP code which allows me to sort a column into ascending or descending order (upon clicking the table row title), which is good. It works perfectly for my D.O.B column (with a date/time field type), but not for a quantity column. For example, I have quantities of 10, 50, 100, 30 and another 100. The ordering seems to consider only the first digit, so my sorting of the column ends up in this order: 10, 100, 100, 30, 50... and 50, 30, 100, 100, 10. This is obviously incorrect, as 100 is bigger than 50, so both 100 values should appear at the end. It seems to me that 100 is only being taken into account as having the value '1', and then it appears before 10 because the system recognises it has another 0. Is this normal? Is there any way I can solve this problem? Thanks for any help. P.S. I can show code if necessary, but I would like to know if this is a common issue by default.

    Read the article

  • create and write to a text file in vb.net

    - by woolardz
    I'm creating a small VB.NET application, and I'm trying to write a list of results from a ListView to a text file. I've looked online and found the code to open the save file dialog and write the text file. When I click Save in the save file dialog, I receive an IOException with the message "The process cannot access the file 'C:\thethe.txt' because it is being used by another process." The text file is created in the correct location, but is empty. The application quits at the line "Dim fs As New FileStream(saveFileDialog1.FileName, FileMode.OpenOrCreate, FileAccess.Write)". Thanks in advance for any help.

        Private Sub btnSave_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnSave.Click
            Dim myStream As Stream
            Dim saveFileDialog1 As New SaveFileDialog()

            saveFileDialog1.Filter = "txt files (*.txt)|*.txt|All files (*.*)|*.*"
            saveFileDialog1.FilterIndex = 2
            saveFileDialog1.RestoreDirectory = True

            If saveFileDialog1.ShowDialog() = DialogResult.OK Then
                myStream = saveFileDialog1.OpenFile()
                If (myStream IsNot Nothing) Then
                    Dim fs As New FileStream(saveFileDialog1.FileName, FileMode.OpenOrCreate, FileAccess.Write)
                    Dim m_streamWriter As New StreamWriter(fs)
                    m_streamWriter.Flush()
                    'Write to the file using StreamWriter class
                    m_streamWriter.BaseStream.Seek(0, SeekOrigin.Begin)
                    'write each row of the ListView out to a tab-delimited line in a file
                    For i As Integer = 0 To Me.ListView1.Items.Count - 1
                        m_streamWriter.WriteLine(((ListView1.Items(i).Text & vbTab) + ListView1.Items(i).SubItems(0).ToString() & vbTab) + ListView1.Items(i).SubItems(1).ToString())
                    Next
                    myStream.Close()
                End If
            End If
        End Sub

    Read the article

  • ASP.NET Repeater - Evaluate Item as int?

    - by WedTM
    I'm trying to style a table row based upon a value in the databound collection (from LINQ to SQL) in my item template; however, it's not working. This is what I have so far:

        <ItemTemplate>
            <%
                string style = String.Empty;
                if ((string)DataBinder.Eval(Quotes.Cu, "Status") == "Rejected")
                    style = "color:red;";
                else if ((string)Eval("Priority") == "Y")
                    style = "color:green;";

                if (style == String.Empty)
                    Response.Write("<tr>");
                else
                    Response.Write("<tr style=\"" + style + "\"");
            %>
            <td><%# Eval("QuoteID") %></td>
            <td><%# Eval("DateDue", "{0:dd/MM/yyyy}") %></td>
            <td><%# Eval("Company") %></td>
            <td><%# Eval("Estimator") %></td>
            <td><%# Eval("Attachments") %></td>
            <td><%# Eval("Employee") %></td>
            </tr>
        </ItemTemplate>
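
    One alternative to inline Response.Write in the template is to compute the style in the Repeater's ItemDataBound event and emit the opening <tr> through a control. A hedged C# sketch (the Status/Priority/"Rejected"/"Y" values follow the markup above; the asp:Literal with id "trOpen" at the top of the ItemTemplate and the handler wiring are assumptions):

        // Sketch only: requires <asp:Literal ID="trOpen" runat="server" /> as the first element
        // of the ItemTemplate and OnItemDataBound="Repeater1_ItemDataBound" on the Repeater.
        protected void Repeater1_ItemDataBound(object sender, RepeaterItemEventArgs e)
        {
            if (e.Item.ItemType != ListItemType.Item && e.Item.ItemType != ListItemType.AlternatingItem)
                return;

            string style = String.Empty;
            if ((string)DataBinder.Eval(e.Item.DataItem, "Status") == "Rejected")
                style = "color:red;";
            else if ((string)DataBinder.Eval(e.Item.DataItem, "Priority") == "Y")
                style = "color:green;";

            var trOpen = (Literal)e.Item.FindControl("trOpen");
            trOpen.Text = style.Length == 0 ? "<tr>" : "<tr style=\"" + style + "\">";
        }

    The <% ... %> block and its Response.Write calls can then be dropped from the markup.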

    Read the article

  • template files evaluation in python

    - by saminny
    I am trying to use Python to translate a set of templates into a set of configuration files, based on values taken from a main configuration file. However, I am having certain issues. Consider the following example of a template file, file1.cfg.template:

        %(CLIENT1)s %(HOST1)s %(PORT1)d C %(COMPID1)s
        %(CLIENT2)s %(HOST2)s %(PORT2)d C %(COMPID2)s

    This file contains an entry for each client. There are hundreds of config files like this, and I don't want to have logic for each type of config file; Python should do the replacements and generate the config files automatically, given a set of global values read from a main XML config file. However, in the above example, if CLIENT2 does not exist, how do I delete that line? I expect Python would generate the config file using something like this:

        open("file1.cfg.template").read() % myhash

    where myhash is a hash of configuration parameters from the main config file, which may not contain CLIENT2 at all. In the case where it does not contain CLIENT2, I want that line to disappear from the file. Is it possible to insert some 'IF' block in the file and have Python evaluate it? Thanks for your help. Any suggestions are most welcome.

    Read the article

  • Grid does not auto-size in WPF

    - by Jasim Khan Afridi
    I have a Grid inside a Grid. I want it to take up the maximum size irrespective of the window size: Main_Grid should be max width and Grid_tool_bar should be max width, but in fact it is not. I am unable to find the reason. Please help me out.

        <Window x:Class="SocialNetworkingApp.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Background="White" WindowStyle="None"
                HorizontalAlignment="Stretch" WindowState="Normal"
                AllowsTransparency="True" WindowStartupLocation="CenterScreen">
            <Border Margin="0,0,0,0" BorderBrush="Black" BorderThickness="1,1,1,1">
                <Grid x:Name="Main_Grid" Background="White" Width="Auto" Height="Auto"
                      Margin="0,0,0,0" HorizontalAlignment="Left" VerticalAlignment="Top">
                    <Grid.RowDefinitions>
                        <RowDefinition Height="30" />
                        <RowDefinition Height="25"/>
                        <RowDefinition Height="50"/>
                        <RowDefinition Height="Auto"/>
                        <RowDefinition Height="20"/>
                    </Grid.RowDefinitions>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="1*"/>
                    </Grid.ColumnDefinitions>
                    <Grid Name="Title_Bar" Grid.Row="0" VerticalAlignment="Top" ShowGridLines="True"
                          Grid.IsSharedSizeScope="True" MouseDown="Drag_Window" Background="#FF4FA2DA"
                          HorizontalAlignment="Left" Width="766" Height="31">
                        <Grid.RowDefinitions>
                            <RowDefinition Height="Auto" />
                        </Grid.RowDefinitions>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition Width="30"/>
                            <ColumnDefinition Width="Auto"/>
                            <ColumnDefinition Width="120" />
                        </Grid.ColumnDefinitions>
                    </Grid>
                </Grid>
            </Border>
        </Window>

    Read the article

  • Entity Framework and WCF

    - by Nihilist
    Hi, I am a little confused about designing WCF services with EF. When using WCF and EF, where do we draw the line on which properties to return with the entity and which not to? Here is my scenario. I have a User, with these relations: User [1 to many] Address, User [1 to many] Email, User [1 to many] Phone.

    Now, on the web form, on page 1 I can edit user information: say I can edit a few properties on the User entity and can also edit the Address, Phone and Email entities (like add, delete and update any). On page 2, I can only update the User properties and nothing related to the navigation properties (Address, Email, Phone).

    So when I return the User entity (or DTO), should I be returning the navigation properties too? Or should the client make multiple calls to get the navigation properties? Also, how does it go with Save? Should the client make multiple calls to save the User and the related entities, or just one call to save the graph?

    Let's say I just have a Save(User user) (where user has all the related entities too); both page 1 and page 2 will call Save and pass me the user, but on page 1 I will need a lot more information, while on page 2 I just need the user's primitive properties. So my question is: where do we draw this line, and how do we design these services? Are the WCF operations designed around the page and the fields it has? I hope I explained my problem well enough.
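
    One common way to draw the line is to give each page its own operation and DTO shape, so the slim page never pays for (or accidentally overwrites) the graph it does not edit. A hedged sketch of what that contract could look like (all type and member names here are illustrative, not from the post; requires System.ServiceModel):

        // Sketch only: two read shapes and two save shapes, one per page.
        [ServiceContract]
        public interface IUserService
        {
            [OperationContract]
            UserDto GetUser(int userId);               // page 2: primitive properties only

            [OperationContract]
            UserGraphDto GetUserGraph(int userId);     // page 1: user plus addresses, emails, phones

            [OperationContract]
            void SaveUser(UserDto user);               // page 2: updates primitives, leaves children alone

            [OperationContract]
            void SaveUserGraph(UserGraphDto user);     // page 1: one call that saves the whole graph
        }

    Whether the graph save is a single call or several is then a transactional question: if the address, email and phone edits must succeed or fail together with the user edit, a single graph operation is the simpler contract.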

    Read the article

  • SSRS code variable resetting on new page

    - by edmicman
    In SSRS 2008 I am trying to maintain a sum of sums on a group using custom code. The reason is that I have a table of data, grouped and returning sums of the data, and I have a filter on the group to remove lines where the group sums are zero. Everything works except that I'm running into problems with the group totals: they should be summing the visible group totals but are instead summing the entire dataset. There are tons of articles about how to work around this, usually using custom code, so I've made custom functions and variables to maintain a counter:

        Public Dim GroupMedTotal as Integer
        Public Dim GrandMedTotal as Integer

        Public Function CalcMedTotal(ThisValue as Integer) as Integer
            GroupMedTotal = GroupMedTotal + ThisValue
            GrandMedTotal = GrandMedTotal + ThisValue
            Return ThisValue
        End Function

        Public Function ReturnMedSubtotal() as Integer
            Dim ThisValue as Integer = GroupMedTotal
            GroupMedTotal = 0
            Return ThisValue
        End Function

    Basically, CalcMedTotal is fed the SUM of a group and maintains a running total of that sum. Then, in the group total line, I output ReturnMedSubtotal, which is supposed to give me the accumulated total and reset it for the next group. This actually works great, EXCEPT that it resets the GroupMedTotal value on each page break. I don't have page breaks explicitly set; it's just the natural break in the SSRS viewer. If I export the results to Excel, everything works and looks correct. If I output Code.GroupMedTotal on each group row, I see it count correctly, and then if a group spans multiple pages, on the next page GroupMedTotal is reset and begins counting from zero again. Any help with what's going on or how to work around this? Thanks!

    Read the article

  • UITableViewCell checkmarks

    - by burki
    Hi! When you select a cell in the UITableView, - (void)tableView:(UITableView *)table didSelectRowAtIndexPath:(NSIndexPath *)indexPath is called. There, some of the NSManagedObjects are updated with the right values and the row is deselected. Well, it all works, but you can't see any selection of the table view cell. I found out that the Core Data access causes the problem; that is, if I comment out the lines that update the NSManagedObjects, it all works like I want, with a smooth selection and deselection. Can anybody help? Thanks.

        - (void)tableView:(UITableView *)table didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            //[table deselectRowAtIndexPath:indexPath animated:YES];
            NSMutableSet *favoriteGroups = [NSMutableSet setWithSet:element.favoriteGroup];
            NSMutableSet *elements = [NSMutableSet setWithSet:[(FavoriteGroup *)[fetchedResultsController objectAtIndexPath:indexPath] element]];

            UITableViewCell *checkedCell = [table cellForRowAtIndexPath:indexPath];
            if (checkedCell.accessoryType == UITableViewCellAccessoryCheckmark) {
                [elements removeObject:element];
                [favoriteGroups removeObject:[fetchedResultsController objectAtIndexPath:indexPath]];
                [[table cellForRowAtIndexPath:indexPath] setAccessoryType:UITableViewCellAccessoryNone];
            } else {
                [elements addObject:element];
                [favoriteGroups addObject:[fetchedResultsController objectAtIndexPath:indexPath]];
                [[table cellForRowAtIndexPath:indexPath] setAccessoryType:UITableViewCellAccessoryCheckmark];
            }

            element.favoriteGroup = favoriteGroups;
            FavoriteGroup *favoriteGroup = [self.fetchedResultsController objectAtIndexPath:indexPath];
            favoriteGroup.element = elements;

            [self.tableView deselectRowAtIndexPath:indexPath animated:YES];
        }

    Read the article

  • JavaFX MouseEvent continues when I remove the object it happened on

    - by Kyle
    It took me a while to realize what was going on with mouse events going through my blocking dialog boxes when I closed them, but I finally figured out why; I still don't know any good way to fix it. I have a custom dialog box (that blocks the mouse) with a close button. When I click the close button, I remove the dialog box from the scene, but JavaFX is still processing the MouseEvent, and now it finds that there is nothing blocking the screen behind where the cancel button was, so that component receives a MouseEvent. How do I make the MouseEvent stop processing when I see that they pressed cancel and remove the dialog box? Or is there a way to make the removal of the dialog box not happen until after it is done processing the MouseEvent? Example code for the problem:

        import javafx.stage.Stage;
        import javafx.scene.Scene;
        import javafx.scene.shape.Rectangle;
        import javafx.scene.input.MouseEvent;
        import javafx.scene.control.Button;

        var theScene:Scene;
        var btn:Button;

        Stage {
            title: "Application title"
            scene: theScene = Scene {
                width: 500
                height: 200
                content: [
                    Rectangle {
                        width: bind theScene.width
                        height: bind theScene.height
                        onMouseClicked: function(e:MouseEvent):Void {
                            println("Rectangle");
                        }
                    },
                    Button {
                        layoutX: 20
                        layoutY: 50
                        blocksMouse: true
                        text: "JustPrint"
                        action: function():Void {
                            println("JustPrint");
                        }
                    },
                    btn = Button {
                        layoutX: 20
                        layoutY: 20
                        blocksMouse: true
                        text: "Cancel"
                        action: function():Void {
                            println("Cancel");
                            delete btn from theScene.content;
                        }
                    },
                ]
            }
        }

    When you press "JustPrint" you get:

        JustPrint

    When you press "Cancel" you get:

        Cancel
        Rectangle

    Read the article

  • Grails - Removing an item from a hasMany association List on data bind?

    - by ecrane
    Grails offers the ability to automatically create and bind domain objects to a hasMany List, as described in the Grails user guide. So, for example, if my domain object "Author" has a List of many "Book" objects, I could create and bind these using the following markup (from the user guide):

        <g:textField name="books[0].title" value="the Stand" />
        <g:textField name="books[1].title" value="the Shining" />
        <g:textField name="books[2].title" value="Red Madder" />

    In this case, if any of the books specified don't already exist, Grails will create them and set their titles appropriately. If there are already books in the specified indices, their titles will be updated and they will be saved. My question is: is there some easy way to tell Grails to remove one of those books from the 'books' association on data bind? The most obvious way to do this would be to omit the form element that corresponds to the domain instance you want to delete; unfortunately, this does not work, as per the user guide:

        Then Grails will automatically create a new instance for you at the defined position. If you "skipped" a few elements in the middle ... Then Grails will automatically create instances in between.

    I realize that a specific solution could be engineered as part of a command object or as part of a particular controller; however, the need for this functionality appears repeatedly throughout my application, across multiple domain objects and for associations of many different types of objects. A general solution, therefore, would be ideal. Does anyone know if there is something like this included in Grails?

    Read the article

  • How do I access Dictionary items?

    - by salvationishere
    I am developing a C# VS2008 / SQL Server website app and am new to the Dictionary class. Can you please advise on the best method of accomplishing this? Here is a code snippet:

        SqlConnection conn2 = new SqlConnection(connString);
        SqlCommand cmd = conn2.CreateCommand();
        cmd.CommandText = "dbo.AppendDataCT";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = conn2;
        SqlParameter p1, p2, p3;
        foreach (string s in dt.Rows[1].ItemArray)
        {
            DataRow dr = dt.Rows[1]; // second row
            p1 = cmd.Parameters.AddWithValue((string)dic[0], (string)dr[0]);
            p1.SqlDbType = SqlDbType.VarChar;
            p2 = cmd.Parameters.AddWithValue((string)dic[1], (string)dr[1]);
            p2.SqlDbType = SqlDbType.VarChar;
            p3 = cmd.Parameters.AddWithValue((string)dic[2], (string)dr[2]);
            p3.SqlDbType = SqlDbType.VarChar;
        }

    But this is giving me the compiler error:

        The best overloaded method match for 'System.Collections.Generic.Dictionary<string,string>.this[string]' has some invalid arguments

    I just want to access each value from "dic" and load it into these SQL parameters. How do I do this? Do I have to enter the key? The keys are named "col1", "col2", etc., so not the most user-friendly. Any other tips? Thanks!
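
    For reference, a Dictionary<string,string> is indexed by its string key rather than by an integer position, which is what the compiler error is pointing at. A short hedged sketch of the usual access patterns (the "col1"/"col2" key names follow the post; the sample values are made up for illustration):

        // Sketch only. Requires System.Collections.Generic (and System.Linq for ElementAt).
        var dic = new Dictionary<string, string>
        {
            { "col1", "@FirstName" },   // illustrative values, not from the post
            { "col2", "@LastName" }
        };

        string first = dic["col1"];                 // the indexer takes the key string, not 0, 1, 2

        foreach (KeyValuePair<string, string> kvp in dic)
        {
            Console.WriteLine(kvp.Key + " = " + kvp.Value);
        }

        // If numeric access is unavoidable, materialise the values first
        // (note that Dictionary does not guarantee any particular order):
        string byPosition = dic.Values.ElementAt(0);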

    Read the article
