Search Results

Search found 4593 results on 184 pages for 'operator equal'.


  • Determine asymmetric latencies in a network

    - by BeeOnRope
    Imagine you have many clustered servers, across many hosts, in a heterogeneous network environment, such that the connections between servers may have wildly varying latencies and bandwidth. You want to build a map of the connections between servers by transferring data between them. Of course, this map may become stale over time as the network topology changes - but let's ignore those complexities for now and assume the network is relatively static. Given the latencies between nodes in this host graph, calculating the bandwidth is a relatively simple timing exercise. I'm having more difficulty with the latencies, however. To get round-trip time, it is a simple matter of timing a return-trip ping from the local host to a remote host - both timing events (start, stop) occur on the local host. What if I want one-way times under the assumption that the latency is not equal in both directions? Assuming that the clocks on the various hosts are not precisely synchronized (or at least that their error is of the same magnitude as the latencies involved) - how can I calculate the one-way latency? In a related question - is this asymmetric latency (where a link is quicker in one direction than the other) common in practice? For what reasons/hardware configurations? Certainly I'm aware of asymmetric bandwidth scenarios, especially on last-mile consumer links such as DSL and Cable, but I'm not so sure about latency. Added: After considering the comment below, the second portion of the question is probably better off on Server Fault.
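
    One note, offered as a hedged sketch rather than a definitive answer: with unsynchronized clocks, a single two-host exchange only yields the round-trip delay and a clock-offset estimate that assumes symmetric paths, so the forward/reverse split cannot be recovered from the timestamps alone. The standard four-timestamp (NTP-style) bookkeeping looks roughly like this; the struct and function names are illustrative:

        // t0: request sent (local clock), t1: request received (remote clock),
        // t2: reply sent (remote clock),  t3: reply received (local clock).
        struct Exchange { double t0, t1, t2, t3; };

        // Round-trip time excluding remote processing; independent of clock offset.
        double roundTrip(const Exchange& e) { return (e.t3 - e.t0) - (e.t2 - e.t1); }

        // Clock-offset estimate -- only exact if forward and reverse latency are equal.
        double clockOffset(const Exchange& e) { return ((e.t1 - e.t0) + (e.t2 - e.t3)) / 2.0; }

        // "One-way" latency under the symmetry assumption; any real asymmetry shows up
        // as an error in the offset instead, so it cannot be separated without an
        // independently synchronized clock (GPS, PTP over symmetric paths, etc.).
        double oneWayEstimate(const Exchange& e) { return roundTrip(e) / 2.0; }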

    Read the article

  • Will this cause a problem with different runtimes with DLL?

    - by Milo
    My GUI application supports polymorphic timed events, so that means the user calls new and the GUI calls delete. This can create a problem if the runtimes are incompatible. So I was told a proposed solution would be this:

        class base;

        class Deallocator {
            void operator()(base* ptr) {
                delete ptr;
            }
        };

        class base {
        public:
            base(Deallocator dealloc) {
                m_deleteFunc = dealloc;
            }
            ~base() {
                m_deleteFunc(this);
            }
        private:
            Deallocator m_deleteFunc;
        };

        int main() {
            Deallocator deletefunc;
            base baseObj(deletefunc);
        }

    While this is a good solution, it does demand that the user create a Deallocator object, which I do not want. I was however wondering if I provided a Deallocator to each derived class, e.g.:

        class derived : public base {
            Deallocator dealloc;
        public:
            derived() : base(dealloc) {
            }
        };

    I think this still does not work, though. The constraints are: the addTimedEvent() function is part of the Widget class, which is also in the DLL, but it is instanced by the user. The other constraint is that some classes which derive from Widget call this function with their own timed event classes. Given that "he who called new must call delete", what could work given these constraints? Thanks
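
    A hedged sketch of one way to honor "he who called new must call delete" without asking the user to construct a Deallocator: record, at construction time, a deleter that is instantiated in the user's module, so the delete expression is compiled against the same runtime as the new. The names below (TimedEvent, Event, destroy, deleteIt) are illustrative, not from the question's API:

        class TimedEvent {
        public:
            virtual ~TimedEvent() {}
            void destroy() { m_deleter(this); }          // the GUI calls this instead of delete
        protected:
            typedef void (*Deleter)(TimedEvent*);
            explicit TimedEvent(Deleter d) : m_deleter(d) {}
        private:
            Deleter m_deleter;
        };

        // CRTP helper: because the user's own code instantiates Event<MyTimedEvent>, the
        // deleteIt() that performs the delete is compiled in the user's module, so the
        // new and the delete come from the same runtime.
        template <class Derived>
        class Event : public TimedEvent {
        protected:
            Event() : TimedEvent(&Event::deleteIt) {}
        private:
            static void deleteIt(TimedEvent* p) { delete static_cast<Derived*>(p); }
        };

        // User code: nothing extra to construct or pass.
        class MyTimedEvent : public Event<MyTimedEvent> {
            // ... the user's timed event ...
        };

        // In the GUI, instead of "delete event;" the library would call event->destroy();

    The design point is that the only code executing a delete is a template instantiated on the user's side of the DLL boundary, which is what the "same runtime" rule actually requires.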

    Read the article

  • Building an array out of values from another array

    - by George
    This is a follow-up to a question of mine that was just answered concerning parsing numbers in an array. I have an array, data[], with numbers that I'd like to use in a calculation and then put the resulting values into another array. So say data[0] = 100. I'd like to find a percentage using the calculation (data[0]/dataSum*100).toFixed(2), where dataSum is the sum of all the numbers in data[]. I've tried:

        dataPercentage = [];
        for (var i = 0; i < data.length; i++) {
            data[i] = parseFloat(data[i]);
            dataSum += data[i];
            // looping through data[i] and setting it equal to dataPercentage.
            dataPercentage[] = (data[i]/dataSum*100).toFixed(2);
            // thought maybe I was overriding dataPercentage every time I looped?
            dataPercentage[] += (data[i]/dataSum*100).toFixed(2);
        }

    I also tried just setting dataPercentage = [(data/dataSum*100).toFixed(2)], but I think this creates a nested array, which I don't think is what I need.

    Read the article

  • Why does the databinding fail in ListView (WPF)?

    - by Ashish Ashu
    I have a ListView whose ItemsSource is set to my custom collection. I have defined a GridView CellTemplate that contains a combo box as below:

        <ListView MaxWidth="850" Grid.Row="1"
                  SelectedItem="{Binding Path=SelectedCondition}"
                  ItemsSource="{Binding Path=Conditions}"
                  FontWeight="Normal" FontSize="11" Name="listview">
            <ListView.View>
                <GridView>
                    <GridViewColumn Width="175" Header="Type">
                        <GridViewColumn.CellTemplate>
                            <DataTemplate>
                                <ComboBox Style="{x:Null}" x:Name="TypeCmbox" Height="Auto" Width="150"
                                          SelectedValuePath="Key" DisplayMemberPath="Value"
                                          SelectedItem="{Binding Path=MyType}"
                                          ItemsSource="{Binding Path=MyTypes}"
                                          HorizontalAlignment="Center" />
                            </DataTemplate>
                        </GridViewColumn.CellTemplate>
                    </GridViewColumn>
                </GridView>
            </ListView.View>
        </ListView>

    My custom collection is an ObservableCollection. I have two buttons - Move Up and Move Down - on top of the ListView control. When the user clicks the Move Up or Move Down button, I call the MoveUp and MoveDown methods of the observable collection. But when I move the rows up and down, the selected index of the combo box becomes -1. I have ensured that SelectedItem is not equal to null when performing the Move Up and Move Down commands. Please help!!

    Read the article

  • database table design

    - by e.b.white
    I am designing the tables below for a system that works like a package delivery system. For example, after the user receives the package, the postman records it in the system: the state (history table) is "delivered", the operator is this postman, and the current state (state table) is of course "delivered".

    history table:

        +-------------+--------------------------+
        | Field       | Desc                     |
        +-------------+--------------------------+
        | id          | PRIMARY KEY              |
        | package_id  | package_tracking_id      |
        | state       | package_state            |
        | operators   | operators                |
        | create_time | create_time              |
        +-------------+--------------------------+

    state table:

        +-------------+--------------------------+
        | Field       | Desc                     |
        +-------------+--------------------------+
        | id          | PRIMARY KEY              |
        | package_id  | package_tracking_id      |
        | state       | latest_package_state     |
        +-------------+--------------------------+

    Above is just the basic information to record; some other information (like invoice, destination, ...) should be recorded as well. But there are different service types, like s1 and s2: for s1 no invoice needs to be recorded but for s2 it does, and maybe s2 needs some other information recorded (like the telephone number of the end user). On top of that, at delivery way stations there is additional information to record, and for different service types the information differs. My questions are:

    1. For different service types, shall I declare different tables (option A) or just one big table which can record all information for all types (option B)?
    2. If option A, since the basic information above is a MUST, how can I avoid declaring duplicate fields in the different tables?

    Read the article

  • When do Symfony's user attributes get written to session?

    - by Rob Wilkerson
    I have a Symfony app that populates the "widgets" of a portal application, and I'm noticing something (that seems) odd. The portal app has iframes that make calls to the Symfony app. On each of those calls, a random user key is passed on the query string. The Symfony app stores that key in its session using myUser->setAttribute(). If the incoming value is different from what it has in session, it overwrites the session value. In pseudo-code (and applying a synchronous nature for clarity even though it may not exist):

        # Widget request arrives with ?foo=bar
        if the user attribute 'foo' does not equal 'bar'
            overwrite the user attribute 'foo' with 'bar'
        end

    What I'm noticing is that, on a portal page with multiple widgets (read: multiple requests coming in more or less simultaneously) where the value needs to be overwritten, each request is trying to overwrite. Is this a timing problem? When I look at the log prints, I'd expect the first request that arrives to overwrite, and subsequent requests to see that the user attribute they received matches what was just put into cache by the initial request. In this scenario, it could be that subsequent requests begin (and are checked) even before the first one - the one that should overwrite the cached value - has completely finished. Are session values not really available to subsequent requests until one request has completed entirely, or could there be something else that I'm missing? Thanks.

    Read the article

  • Determine if a Range contains a value

    - by Brad Dwyer
    I'm trying to figure out a way to determine if a value falls within a Range in Swift. Basically what I'm trying to do is adapt one of the switch statement examples to do something like this:

        let point = (1, -1)
        switch point {
        case let (x, y) where (0..5).contains(x):
            println("(\(x), \(y)) has an x val between 0 and 5.")
        default:
            println("This point has an x val outside 0 and 5.")
        }

    As far as I can tell, there isn't any built-in way to do what my imaginary .contains method above does. So I tried to extend the Range class. I ended up running into issues with generics, though. I can't extend Range<Int>, so I had to try to extend Range itself. The closest I got was this, but it doesn't work since >= and <= aren't defined for ForwardIndex:

        extension Range {
            func contains(val: ForwardIndex) -> Bool {
                return val >= self.startIndex && val <= self.endIndex
            }
        }

    How would I go about adding a .contains method to Range? Or is there a better way to determine whether a value falls within a range? Edit 2: This seems to work to extend Range:

        extension Range {
            func contains(val: T) -> Bool {
                for x in self {
                    if (x == val) {
                        return true
                    }
                }
                return false
            }
        }

        var a = 0..5
        a.contains(3)  // true
        a.contains(6)  // false
        a.contains(-5) // false

    I am very interested in the ~= operator mentioned below, though; looking into that now.

    Read the article

  • Selective replication with CouchDB

    - by FRotthowe
    I'm currently evaluating possible solutions to the following problem: a set of data entries must be synchronized between multiple clients, where each client may only view (or even know about the existence of) a subset of the data. Each client "owns" some of the elements, and the decision about who else can read or modify those elements may only be made by the owner. To complicate this situation even more, each element (and each element revision) must have a unique identifier that is equal for all clients. While the latter sounds like a perfect task for CouchDB (and a document-based data model would fit my needs perfectly), I'm not sure if the authentication/authorization subsystem of CouchDB can handle these requirements: while it should be possible to restrict write access using validation functions, there doesn't seem to be a way to authorize read access. All solutions I've found for this problem propose to route all CouchDB requests through a proxy (or an application layer) that handles authorization. So, the question is: is it possible to implement an authorization layer that filters requests to the database so that access is granted only to documents that the requesting client has read access to, and still use the replication mechanism of CouchDB? Simplified, this would be some kind of "selective replication" where only some of the documents, and not the whole database, are replicated. I would also be thankful for directions to some detailed information about how replication works. The CouchDB wiki and even the "Definitive Guide" book are not too specific about that.

    Read the article

  • Should I *always* import my file references into the database in drupal?

    - by sprugman
    I have a CCK type with an image field and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself, is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons:

    1) Theme layer approach
       Pros:
       - makes the import process much less complex
       - don't have to worry about syncing the db with the file system as things change
       - more flexible -- I can move my images around more easily if I want
       Cons:
       - maybe not as much The Drupal Way™
       - not as pure -- I'll wind up with more logic on the theme side

    2) Import approach
       Pros:
       - the import method is required if we ever wanted to make the files private (we won't.)
       - safer? Maybe I'll know if there's a problem with the image at import time, rather than view time. Since I'll be bulk importing, that might make a difference.
       - if I delete a node through the admin interface, Drupal might be able to delete the file for me as well
       Con:
       - more complex import and maintenance

    All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open-ended question, I guess I'll make it a community wiki, whatever that means.)

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under the Solaris/Linux platforms to measure the performance of applying this code to my application. The program works like this: initially, using the sbrk(0) system call, I take the base address of the heap region. After that I allocate 1.5 GB of memory using malloc, then I use memcpy to copy 1.5 GB of content into the allocated memory area. Then I free the allocated memory. After freeing, I use the sbrk(0) system call again to view the heap size. This is where I am a little confused. On Solaris, even though I freed the memory allocated (nearly 1.5 GB), the heap size of the process remains huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process is equal to the size of the heap before the 1.5 GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to release the memory immediately after the free() call. Also, please explain why the same problem does not occur on Linux. Can anyone help me out with this? Thanks, Santhosh.
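
    For reference, a minimal sketch of the measurement described above (the sizes and the brk-based view of the heap are illustrative; note that on glibc an allocation this large is typically served by mmap rather than the program break, in which case sbrk(0) barely moves at all and the pages go back to the OS on free, which is one plausible reason the Linux numbers look "clean"):

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <unistd.h>

        int main() {
            const size_t size = 1536UL * 1024 * 1024;   // ~1.5 GB
            void* heapStart = sbrk(0);                  // program break before allocating

            char* buf = static_cast<char*>(malloc(size));
            if (buf == NULL) { perror("malloc"); return 1; }
            memset(buf, 0xAB, size);                    // touch the pages (stand-in for the memcpy)

            printf("break after malloc: +%ld bytes\n", (long)((char*)sbrk(0) - (char*)heapStart));
            free(buf);
            printf("break after free:   +%ld bytes\n", (long)((char*)sbrk(0) - (char*)heapStart));
            return 0;
        }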

    Read the article

  • How would I compare two Lists(Of <CustomClass>) in VB?

    - by Kumba
    I'm working on implementing the equality operator = for a custom class of mine. The class has one property, Value, which is itself a List(Of OtherClass), where OtherClass is yet another custom class in my project. I've already implemented the IComparer, IComparable, IEqualityComparer, and IEquatable interfaces, the operators =, <>, bool and not, and overridden Equals and GetHashCode for OtherClass. This should give me all the tools I need to compare these objects, and various tests comparing two singular instances of these objects so far check out. However, I'm not sure how to approach this when they are in a List. I don't care about the list order. Given:

        Dim x As New List(Of OtherClass) From {New OtherClass("foo"), New OtherClass("bar"), New OtherClass("baz")}
        Dim y As New List(Of OtherClass) From {New OtherClass("baz"), New OtherClass("foo"), New OtherClass("bar")}

    Then (x = y).ToString should print out True. I need to compare the same (not distinct) set of objects in this list. The list shouldn't support dupes of OtherClass, but I'll have to figure out how to add that in later as an exception. Not interested in using LINQ. It looks nice, but in the few examples I've played with, it adds a performance overhead that bugs me. Loops are ugly, but they are fast :) A straight code answer is fine, but I'd like to understand the logic needed for such a comparison as well. I'm probably going to have to implement said logic more than a few times down the road.

    Read the article

  • How to combine multiple LINQ parts into one query?

    - by user2943399
    The operator should be ‘AND’ and not ‘OR’. I am trying to refactor the following code, and I understand the following way of writing a LINQ query may not be the correct way. Can someone advise me how to combine the following into one query?

        AllCompany.Where(itm => itm != null).Distinct().ToList();
        if (AllCompany.Count > 0)
        {
            //COMPANY NAME
            if (isfldCompanyName)
            {
                AllCompany = AllCompany.Where(company => company["Company Name"].StartsWith(fldCompanyName)).ToList();
            }
            //SECTOR
            if (isfldSector)
            {
                AllCompany = AllCompany.Where(company => fldSector.Intersect(company["Sectors"].Split('|')).Any()).ToList();
            }
            //LOCATION
            if (isfldLocation)
            {
                AllCompany = AllCompany.Where(company => fldLocation.Intersect(company["Location"].Split('|')).Any()).ToList();
            }
            //CREATED DATE
            if (isfldcreatedDate)
            {
                AllCompany = AllCompany.Where(company => company.Statistics.Created >= createdDate).ToList();
            }
            //LAST UPDATED DATE
            if (isfldUpdatedDate)
            {
                AllCompany = AllCompany.Where(company => company.Statistics.Updated >= updatedDate).ToList();
            }
            //Allow Placements
            if (isfldEmployerLevel)
            {
                fldEmployerLevel = (fldEmployerLevel == "Yes") ? "1" : "";
                AllCompany = AllCompany.Where(company => company["Allow Placements"].ToString() == fldEmployerLevel).ToList();
            }

    Read the article

  • Passing functor and function pointers interchangeably using a templated method in C++

    - by metroxylon
    I currently have a templated class with a templated method. It works great with functors, but I'm having trouble compiling for functions.

    Foo.h:

        template <typename T>
        class Foo {
        public:
            // Constructor, destructor, etc...
            template <typename Func>
            void bar(T x, Func f);
        };

        template <typename T>
        template <typename Func>
        void Foo<T>::bar(T x, Func f) { /* some code here */ }

    Main.cpp:

        #include "Foo.h"

        template <typename T>
        class Functor {
        public:
            Functor() {}
            void operator()(T x) { /* ... */ }
        private:
            /* some attributes here */
        };

        void Function(T x) { /* ... */ }

        int main() {
            Foo<int> foo;
            foo.bar(2, Functor);  // No problem
            foo.bar(2, Function); // <unresolved overloaded function type>
            return 0;
        }
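
    For comparison, a hedged, self-contained sketch of the two cases compiling side by side (the bodies and output are illustrative, not from the original project): a functor has to be passed as an instance rather than as a template name, and a plain function needs a concrete parameter type, after which it decays to a function pointer and deduces cleanly:

        #include <iostream>

        template <typename T>
        class Foo {
        public:
            template <typename Func>
            void bar(T x, Func f) { f(x); }   // invoke the callable uniformly
        };

        template <typename T>
        struct Functor {
            void operator()(T x) const { std::cout << "functor: " << x << "\n"; }
        };

        void Function(int x) { std::cout << "function: " << x << "\n"; }  // concrete signature

        int main() {
            Foo<int> foo;
            foo.bar(2, Functor<int>());  // an instance of the functor, not the template name
            foo.bar(2, Function);        // decays to void(*)(int) and deduces fine
            foo.bar(2, &Function);       // passing the address explicitly also works
            return 0;
        }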

    Read the article

  • Parallelizing L2S Entity Retrieval

    - by MarkB
    Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that: In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading). Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use. So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient. I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time. Of course, the mileage will vary depending on the project but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others have done that have already been through this. What are some good approaches to parallelizing this type of thing?

    Read the article

  • Which C++ Standard Library wrapper functions do you use?

    - by Neil Butterworth
    This question, asked this morning, made me wonder which features you think are missing from the C++ Standard Library, and how you have gone about filling the gaps with wrapper functions. For example, my own utility library has this function for vector append:

        template <class T>
        std::vector<T>& operator += (std::vector<T>& v1, const std::vector<T>& v2) {
            v1.insert(v1.end(), v2.begin(), v2.end());
            return v1;
        }

    and this one for clearing (more or less) any type - particularly useful for things like std::stack:

        template <class C>
        void Clear(C& c) {
            c = C();
        }

    I have a few more, but I'm interested in which ones you use? Please limit answers to wrapper functions - i.e. no more than a couple of lines of code.
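
    For context, a minimal usage sketch of the two helpers quoted above (assuming both templates are in scope):

        #include <stack>
        #include <vector>

        int main() {
            std::vector<int> a;
            a.push_back(1);
            a.push_back(2);
            std::vector<int> b;
            b.push_back(3);

            a += b;            // a now holds 1, 2, 3

            std::stack<int> s;
            s.push(42);
            Clear(s);          // s is replaced by a freshly constructed, empty stack
            return 0;
        }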

    Read the article

  • Need help getting parent reference to child view controller

    - by Andy
    I've got the following code in one of my view controllers:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            switch (indexPath.section) {
                case 0: // "days" section tapped
                {
                    DayPicker *dayPicker = [[DayPicker alloc] initWithStyle:UITableViewStylePlain];
                    dayPicker.rowLabel = self.activeDaysLabel;
                    dayPicker.selectedDays = self.newRule.activeDays;
                    [self.navigationController pushViewController:dayPicker animated:YES];
                    [dayPicker release];
                    break;
                    ...

    Then, in the DayPicker controller, I do some stuff to the dayPicker.rowLabel property. Now, when the dayPicker is dismissed, I want the value in dayPicker.rowLabel to be used as the cell.textLabel.text property in the cell that called the controller in the first place (i.e., the cell label becomes the option that was selected within the DayPicker controller). I thought that by using the assignment operator to set dayPicker.rowLabel = self.activeDaysLabel, the two pointed to the same object in memory, and that upon dismissing the DayPicker, my first view controller, which uses self.activeDaysLabel as the cell.textLabel.text property for the cell in question, would automagically pick up the new value of the object. But no such luck. Have I missed something basic here, or am I going about this the wrong way? I originally passed a reference to the calling view controller to the child view controller, but several here told me that was likely to cause problems, being a circular reference. That setup worked, though; now I'm not sure how to accomplish the same thing "the right way." As usual, thanks in advance for your help.

    Read the article

  • Replace duplicate values in array with new randomly generated values

    - by RussellDias
    I have the function below (created by Gordon in a previous question of mine that went unanswered) that creates an array with n values whose sum is equal to $max:

        function randomDistinctPartition($n, $max) {
            $partition = array();
            for ($i = 1; $i < $n; $i++) {
                $maxSingleNumber = $max - $n;
                $partition[] = $number = rand(1, $maxSingleNumber);
                $max -= $number;
            }
            $partition[] = $max;
            return $partition;
        }

    For example: if I set $n = 4 and $max = 30, then I should get something like the following:

        array(5, 7, 10, 8);

    However, this function does not take into account duplicates and 0s. What I would like - and have been trying to accomplish - is to generate an array with unique numbers that add up to my predetermined variable $max. No duplicate numbers and no 0 and/or negative integers.

    Read the article

  • Using list() to extract a data.table inside of a function

    - by Nathan VanHoudnos
    I must admit that the data.table j syntax confuses me. I am attempting to use list() to extract a subset of a data.table as a data.table object, as described in Section 1.4 of the data.table FAQ, but I can't get this behavior to work inside of a function. An example:

        require(data.table)

        ## Setup some test data
        set.seed(1)
        test.data <- data.table( X = rnorm(10), Y = rnorm(10), Z = rnorm(10) )
        setkey(test.data, X)

        ## Notice that I can subset the data table easily with literal names
        test.data[, list(X,Y)]
        ##              X           Y
        ##  1: -0.8356286 -0.62124058
        ##  2: -0.8204684 -0.04493361
        ##  3: -0.6264538  1.51178117
        ##  4: -0.3053884  0.59390132
        ##  5:  0.1836433  0.38984324
        ##  6:  0.3295078  1.12493092
        ##  7:  0.4874291 -0.01619026
        ##  8:  0.5757814  0.82122120
        ##  9:  0.7383247  0.94383621
        ## 10:  1.5952808 -2.21469989

    I can even write a function that will return a column of the data.table as a vector when passed the name of a column as a character vector:

        get.a.vector <- function( my.dt, my.column ) {
            ## Step 1: Convert my.column to an expression
            column.exp <- parse(text=my.column)
            ## Step 2: Return the vector
            return( my.dt[, eval(column.exp)] )
        }

        get.a.vector( test.data, 'X')
        ##  [1] -0.8356286 -0.8204684 -0.6264538 -0.3053884  0.1836433  0.3295078
        ##  [7]  0.4874291  0.5757814  0.7383247  1.5952808

    But I cannot pull a similar trick for list(). The inline comments are the output from the interactive browser() session:

        get.a.dt <- function( my.dt, my.column ) {
            ## Step 1: Convert my.column to an expression
            column.exp <- parse(text=my.column)
            ## Step 2: Enter the browser to play around
            browser()
            ## Step 3: Verify that a literal X works:
            my.dt[, list(X)]
            ## << not shown >>
            ## Step 4: Attempt to evaluate the parsed expression
            my.dt[, list( eval(column.exp) )]
            ## Error in `rownames<-`(`*tmp*`, value = paste(format(rn, right = TRUE), (from data.table.example.R@1032mCJ#7) :
            ##   length of 'dimnames' [1] not equal to array extent
            return( my.dt[, list(eval(column.exp))] )
        }

        get.a.dt( test.data, "X" )

    What am I missing? Update: Due to some confusion as to why I would want to do this, I wanted to clarify. My use case is when I need to access a data.table column when I generate the name. Something like this:

        set.seed(2)
        test.data[, X.1 := rnorm(10)]

        which.column <- 'X'
        new.column <- paste(which.column, '.1', sep="")
        get.a.dt( test.data, new.column )

    Hopefully that helps.

    Read the article

  • Deleting from 2 arrays of dictionaries

    - by Matt Winters
    I have an array of dictionaries loaded from a plist (below) called arrayHistory.

        <plist version="1.0">
        <array>
            <dict>
                <key>item</key>
                <string>1</string>
                <key>result</key>
                <string>8.1</string>
                <key>date</key>
                <date>2009-12-15T19:36:59Z</date>
            </dict>
            ...
        </array>
        </plist>

    I filter this array based on 'item' so that a second array, arrayHistoryDetail, has the same structure as arrayHistory but only contains e.g. 'item's equal to '1'. These detail items are successfully displayed in a tableView. I then want to select an item from the tableView and delete it from the tableView data source, arrayHistoryDetail (line 2 in the code below) - which works; then I want to delete the item from the tableView itself (line 3 in the code below) - which also works. My problem is that I also need to delete it from the original arrayHistory, so I tried the following. I created a temporary dictionary as an ivar:

        NSMutableDictionary *tempDict;
        @property (nonatomic, retain) NSMutableDictionary *tempDict;

    Then my thinking was to make a copy in line 1 and remove it from the original array in line 4:

        1 tempDict = [arrayHistoryDetail objectAtIndex: indexPath.row];
        2 [arrayHistoryDetail removeObjectAtIndex: indexPath.row];
        3 [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
        4 [arrayHistory removeObject:tempDict];

    Didn't work. Could someone please guide me in the right direction? I'm thinking that tempDict is a pointer and that removeObject needs a copy? I don't know. Thanks.

    Read the article

  • How to handle tags inside other tags in NSXMLParser

    - by SimpleMan
    I have a file:

        <xml>
            <component>something
                <system>somethingDeeper
                    <value>somethingDeepest</value>
                </system>
            </component>
            <component>somethingDifferent
                <value>somethingDifferentDeeper</value>
            </component>
            <value>somethingNew</value>
        </xml>

    I want to distinguish what is inside another tag (e.g. <system>) and what is not. How do I do this with NSXMLParser? I currently use BOOL ivars, but there are a lot of tags and this is not as elegant as I want it to be. I know that NSXMLParser is a SAX parser, and I understand that. In the above example I will enter the didEndElement method three times with elementName equal to value. Is there a more elegant way to distinguish which entries came from a <component> tag above and which did not?

    Read the article

  • Invalid table view update with insertRowsAtIndexPaths:

    - by Crystal
    I'm having trouble with insertRowsAtIndexPaths:. I'm not quite sure how it works. I watched the WWDC 2010 video on it, but I'm still getting an error. I thought I was supposed to update the model, then wrap the insertRowsAtIndexPaths: in the tableView beginUpdates and endUpdates calls. What I have is this:

        self.customTableArray = (NSMutableArray *)sortedArray;
        [_customTableView beginUpdates];
        [tempUnsortedArray enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
            [sortedArray enumerateObjectsUsingBlock:^(id sortedObj, NSUInteger sortedIdx, BOOL *sortedStop) {
                if ([obj isEqualToString:sortedObj]) {
                    NSIndexPath *newRow = [NSIndexPath indexPathForRow:sortedIdx inSection:0];
                    [_customTableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newRow] withRowAnimation:UITableViewRowAnimationAutomatic];
                    *sortedStop = YES;
                }
            }];
        }];
        [_customTableView endUpdates];

    customTableArray is my model array. sortedArray is just the sorted version of that array. When this code runs after I hit my plus button to add a new row, I get this error:

        Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (2) must be equal to the number of rows contained in that section before the update (1), plus or minus the number of rows inserted or deleted from that section (2 inserted, 0 deleted) and plus or minus the number of rows moved into or out of that section (0 moved in, 0 moved out).'

    I'm not sure what I'm doing wrong. Thoughts? Thanks.

    Read the article

  • Flex: Scale an Image so that its width matches contentWidth?

    - by Tong Wang
    I have a dynamic layout where an image is loaded into an HBox:

        <mx:HBox ...>
            <mx:Image height="100%" .../>
        </mx:HBox>

    Only the image's height is set, so that it can take all the vertical space available, while its width is left undefined; I expect the width to scale accordingly with the height. It turns out that the image's height is equal to its contentHeight, i.e. the height scales properly; however, the image's width is still the same as measuredWidth (the image's original width) and is not scaled accordingly. For example, if the image's original size is 800x600 and the HBox is 300 in height, then the image height will scale down to 300; however, its width doesn't scale down to 400 - instead it stays at 800. I tried to add an event listener to explicitly set the image width:

        <mx:Image id="img" height="100%" updateComplete="img.width=img.contentWidth;" .../>

    It works only the first time the image is loaded; after that, if I use Image.load() to dynamically load different images, it stops working - the image width is set to 0. Any help/advice will be highly appreciated.

    Read the article

  • Help Me With This MS-Access Query

    - by yae
    I have 2 tables, "products" and "pieces":

        PRODUCTS: idProd, product, price
        PIECES:   id, idProdMain, idProdChild, quant

    idProdMain and idProdChild are related to the "products" table. Another consideration is that 1 product can have several pieces, and 1 product can itself be a piece. A product's price equals the sum of quantity * price over all of its pieces. The "products" table contains all products. Example:

        TABLE PRODUCTS (idProd - product - price)
        1 - Computer - 300€
        2 - Hard Disk - 100€
        3 - Memory - 50€
        4 - Main Board - 100€
        5 - Software - 50€
        6 - CDroms 100 un. - 30€

        TABLE PIECES (id - idProdMain - idProdChild - Quant.)
        1 - 1 - 2 - 1
        2 - 1 - 3 - 2
        3 - 1 - 4 - 1

    WHAT I NEED: I need to update the price of the main product when the price of a child product (piece) is changed. Following the previous example, if I change the price of the product "Memory" (which is also a piece) to 60€, then the product "Computer" must change its price to 320€. How can I do this using queries? I have already tried the following to obtain the price of the main product, but it does not run - the query returns no value:

        SELECT Sum(products.price*pieces.quant) AS Expr1
        FROM products LEFT JOIN pieces
            ON (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdChild)
            AND (products.idProd = pieces.idProdMain)
        WHERE (((pieces.idProdMain)=5));

    MORE INFO: The "products" table contains all the products for sale in the shop. The "pieces" table keeps track of the compound products - which products are the children of which. An example of a compound product is a computer: this product is composed of other products (motherboard, hard disk, memory, cpu, etc.).

    Read the article

  • Interpretation of int (*a)[3]

    - by kapuzineralex
    When working with arrays and pointers in C, one quickly discovers that they are by no means equivalent, although it might seem so at first glance. I know about the differences in L-values and R-values. Still, recently I tried to find out the type of a pointer that I could use in conjunction with a two-dimensional array, i.e.

        int foo[2][3];
        int (*a)[3] = foo;

    However, I just can't figure out how the compiler "understands" the type definition of a, in spite of the regular operator precedence rules for * and []. If instead I were to use a typedef, the problem becomes significantly simpler:

        int foo[2][3];
        typedef int my_t[3];
        my_t *a = foo;

    At the bottom line, can someone tell me how the term int (*a)[3] is read by the compiler? int a[3] is simple, and int *a[3] is simple as well. But then, why is it not int *(a[3])? EDIT: Of course, instead of "typecast" I meant "typedef" (it was just a typo).
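
    For what it's worth, a small illustrative sketch of how the two declarators differ (not from the original post): the parentheses bind * to the name before [3] applies, so a is "pointer to array of 3 int", whereas without them [] binds tighter and you get "array of 3 pointers to int":

        int main() {
            int foo[2][3] = { {1, 2, 3}, {4, 5, 6} };

            // Parentheses bind * to the name first: a is a pointer to "array of 3 int".
            int (*a)[3] = foo;        // foo decays to &foo[0]

            // Without parentheses, [] binds tighter: b is an array of 3 pointers to int,
            // i.e. int *b[3] parses the same as int *(b[3]).
            int *b[3] = { foo[0], foo[1], foo[0] };

            int x = a[1][2];          // same element as foo[1][2] -> 6
            int y = *(b[1] + 2);      // also foo[1][2] -> 6
            return x - y;             // 0
        }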

    Read the article

  • Optimizing C++ Tree Generation

    - by cam
    Hi, I'm generating a Tic-Tac-Toe game tree (it takes 9 seconds after the first move), and I'm told it should take only a few milliseconds. So I'm trying to optimize it. I ran it through CodeAnalyst and these are the top 5 calls being made (I used bitsets to represent the Tic-Tac-Toe board):

        std::_Iterator_base::_Orphan_me
        std::bitset<9>::test
        std::_Iterator_base::_Adopt
        std::bitset<9>::reference::operator bool
        std::_Iterator_base::~_Iterator_base

        void BuildTreeToDepth(Node &nNode, const int& nextPlayer, int depth)
        {
            if (depth > 0)
            {
                //Calculate gameboard states
                int evalBoard = nNode.m_board.CalculateBoardState();
                bool isFinished = nNode.m_board.isFinished();

                if (isFinished || (nNode.m_board.isWinner() > 0))
                {
                    nNode.m_winCount = evalBoard;
                }
                else
                {
                    Ticboard tBoard = nNode.m_board;
                    do
                    {
                        int validMove = tBoard.FirstValidMove();
                        if (validMove != -1)
                        {
                            Node f;
                            Ticboard tempBoard = nNode.m_board;
                            tempBoard.Move(validMove, nextPlayer);
                            tBoard.Move(validMove, nextPlayer);
                            f.m_board = tempBoard;
                            f.m_winCount = 0;
                            f.m_Move = validMove;

                            int currPlay = (nextPlayer == 1 ? 2 : 1);
                            BuildTreeToDepth(f, currPlay, depth - 1);

                            nNode.m_winCount += f.m_board.CalculateBoardState();
                            nNode.m_branches.push_back(f);
                        }
                        else
                        {
                            break;
                        }
                    } while (true);
                }
            }
        }

    Where should I be looking to optimize it? How should I optimize these 5 calls (I don't recognize them)?
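
    One observation, offered as a hedged note rather than a definitive diagnosis: the _Iterator_base::_Orphan_me / _Adopt entries in the profile are the bookkeeping of MSVC's checked/debug iterators, which suggests the run was profiled in a Debug build or with checked iterators enabled. A sketch of what removing that overhead looks like, assuming the MSVC toolchain of that era (VS2005/2008; later versions use _ITERATOR_DEBUG_LEVEL instead, and the macros must be consistent across every translation unit and linked library):

        // Preferably: profile an optimized Release build (/O2) instead of Debug.
        // If checked iterators still dominate, disabling them explicitly looks like this
        // (define before including any standard headers, project-wide):
        #define _HAS_ITERATOR_DEBUGGING 0   // no iterator ownership tracking (_Adopt/_Orphan_me)
        #define _SECURE_SCL 0               // no checked-iterator range checks (VS2005/2008)

        #include <bitset>
        #include <vector>

        // Beyond build settings: each iteration copies the whole board (Ticboard tempBoard),
        // and push_back(f) copies f including its already-built m_branches subtree, so
        // reserving capacity and building nodes in place would also trim the time.
        int main() { return 0; }   // placeholder TU so the defines above are demonstrable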

    Read the article
