Search Results

Search found 8982 results on 360 pages for 'delete'.

Page 303/360 | < Previous Page | 299 300 301 302 303 304 305 306 307 308 309 310  | Next Page >

  • copy rows before updating them to preserve archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows were at any point in time even if the query has JOINs. Consider a system where the primary resource is books, that is, books are queried for, and author info comes along for the ride: CREATE TABLE authors ( author_id INTEGER NOT NULL, version INTEGER NOT NULL CHECK (version > 0), author_name TEXT, is_active BOOLEAN DEFAULT '1', modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (author_id, version) ) INSERT INTO authors (author_id, version, author_name) VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest'); I would like to be able to update the above like so: UPDATE authors SET author_name = 'Jack K' WHERE author_id = 2; and end up with 2, 1, Jack, t, 2012-03-29 21:35:00 2, 2, Jack K, t, 2012-03-29 21:37:40 which I can then query with SELECT author_name, modified_on FROM authors WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00' ORDER BY version DESC LIMIT 1; to get 2, 1, Jack, t, 2012-03-29 21:35:00 Something like the following doesn't really work: CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$ BEGIN IF (TG_OP = 'UPDATE') THEN -- The following fails because author_id,version PK already exists INSERT INTO authors (author_id, version, author_name) VALUES (OLD.author_id, OLD.version, OLD.author_name); UPDATE authors SET version = OLD.version + 1 WHERE author_id = OLD.author_id AND version = OLD.version; RETURN NEW; END IF; RETURN NULL; -- result is ignored since this is an AFTER trigger END; $archive_author$ LANGUAGE plpgsql; CREATE TRIGGER archive_author AFTER UPDATE OR DELETE ON authors FOR EACH ROW EXECUTE PROCEDURE archive_authors(); How can I achieve the above? Or, is there a better way to accomplish this? Ideally, I would prefer not to create a shadow table to store the archived rows.
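    One application-level alternative, sketched in Python purely for illustration (sqlite3 is used only so the snippet runs anywhere; the post targets Postgres, and the table layout follows the post): never UPDATE a row in place, instead read the current highest version and INSERT a new row with version + 1.

        # Hypothetical sketch: keep history by inserting a new version instead of updating.
        # sqlite3 is used only so this runs anywhere; the post itself targets Postgres.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE authors (
            author_id   INTEGER NOT NULL,
            version     INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version))""")
        conn.execute("INSERT INTO authors (author_id, version, author_name) VALUES (2, 1, 'Jack')")

        def update_author(conn, author_id, new_name):
            # Never overwrite: copy the latest row forward with version + 1.
            (current,) = conn.execute("SELECT MAX(version) FROM authors WHERE author_id = ?",
                                      (author_id,)).fetchone()
            conn.execute("INSERT INTO authors (author_id, version, author_name) VALUES (?, ?, ?)",
                         (author_id, current + 1, new_name))

        update_author(conn, 2, 'Jack K')
        for row in conn.execute("SELECT * FROM authors WHERE author_id = 2 ORDER BY version"):
            print(row)   # (2, 1, 'Jack', ...) then (2, 2, 'Jack K', ...)

    In Postgres itself, one way to get the same effect without a shadow table is a BEFORE UPDATE trigger that inserts a fresh row carrying the NEW values with OLD.version + 1 and then returns NULL, which skips the in-place update and leaves the old version untouched.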


  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory leak detection tool for C++ which can be run either from the command line or as an Eclipse plug-in, on Windows and Linux? I would like it to be easy to use. Preferably one that doesn't overwrite new(), delete(), malloc() or free(). Something like GDB if it's going to be on the command line, but I don't remember that being used for detecting memory leaks. If there is a unit testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c ), however I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I feel I am looking for something a bit more specific here. Suggestions don't have to fulfill all requirements, but as many as possible would be nice. Thanks. EDIT: Since this has come up, by "overwrite" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code; if it does this at run time so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use that tool on there, although I would definitely prefer sticking to my desktop as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop. NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and it is satisfactory. EDIT2: I'm not worried about the cross-platform compatibility of my code, it's a command line application using just the C++ libraries.


  • LINQ to SQL: Insert instead of Update

    - by Christina Mayers
    I am stuck with this problems for a long time now. Everything I try to do is insert a row in my DB if it's new information - if not update the existing one. I've updated many entities in my life before - but what's wrong with this code is beyond me (probably something pretty basic) I guess I can't see the wood for the trees... private Models.databaseDataContext db = new Models.databaseDataContext(); internal void StoreInformations(IEnumerable<EntityType> iEnumerable) { foreach (EntityType item in iEnumerable) { EntityType type = db.EntityType.Where(t => t.Room == item.Room).FirstOrDefault(); if (type == null) { db.EntityType.InsertOnSubmit(item); } else { type.Date = item.Date; type.LastUpdate = DateTime.Now(); type.End = item.End; } } } internal void Save() { db.SubmitChanges(); } Edit: just checked the ChangeSet, there are no updates only inserts. For now I've settled with foreach (EntityType item in iEnumerable) { EntityType type = db.EntityType.Where(t => t.Room == item.Room).FirstOrDefault(); if (type != null) { db.Exams.DeleteOnSubmit(type); } db.EntityType.InsertOnSubmit(item); } but I'd love to do updates and lose these unnecessary delete statements.


  • Getting the name of a child class in the parent class (static context)

    - by Benoit Myard
    Hi everybody, I'm building an ORM library with reuse and simplicity in mind; everything goes fine except that I got stuck by a stupid inheritance limitation. Please consider the code below: class BaseModel { /* * Return an instance of a Model from the database. */ static public function get (/* varargs */) { // 1. Notice we want an instance of User $class = get_class(parent); // value: bool(false) $class = get_class(self); // value: bool(false) $class = get_class(); // value: string(9) "BaseModel" $class = __CLASS__; // value: string(9) "BaseModel" // 2. Query the database with id $row = get_row_from_db_as_array(func_get_args()); // 3. Return the filled instance $obj = new $class(); $obj->data = $row; return $obj; } } class User extends BaseModel { protected $table = 'users'; protected $fields = array('id', 'name'); protected $primary_keys = array('id'); } class Section extends BaseModel { // [...] } $my_user = User::get(3); $my_user->name = 'Jean'; $other_user = User::get(24); $other_user->name = 'Paul'; $my_user->save(); $other_user->save(); $my_section = Section::get('apropos'); $my_section->delete(); Obviously, this is not the behavior I was expecting (although the actual behavior also makes sense).. So my question is if you guys know of a mean to get, in the parent class, the name of child class.


  • What's the best way to resolve this scope problem?

    - by Peter Stewart
    I'm writing a program in Python that uses genetic techniques to optimize expressions. Constructing and evaluating the expression tree is the time consumer as it can happen billions of times per run. So I thought I'd learn enough C++ to write it and then incorporate it in Python using Cython or ctypes. I've done some searching on Stack Overflow and learned a lot. This code compiles, but leaves the pointers dangling. I tried 'this_node = new Node(...'. It didn't seem to work. And I'm not at all sure how I'd delete all the references as there would be hundreds. I'd like to use variables that stay in scope, but maybe that's not the C++ way. What is the C++ way? class Node { public: char *cargo; int depth; Node *left; Node *right; }; Node make_tree(int depth) { depth--; if(depth <= 0) { Node tthis_node("value",depth,NULL,NULL); return tthis_node; } else { Node this_node("operator", depth, &make_tree(depth), &make_tree(depth)); return this_node; } };


  • Prevent two users from editing the same data

    - by Industrial
    Hi everyone, I have seen a feature in different web applications, including WordPress (not sure?), that warns a user if he/she opens an article/post/page/whatever from the database while someone else is editing the same data simultaneously. I would like to implement the same feature in my own application and I have given this a bit of thought. Is the following example a good practice on how to do this? It goes a little something like this: 1) User A enters the editing page for the mysterious article X. The database table Events is queried to make sure that no one else is editing the same page at the moment, which no one is by then. A token is then randomly generated and inserted into the Events table. 2) User B also wants to make updates to article X. Now, since User A is already editing the article, the Events table is queried and looks like this:
        | timestamp  | owner  | Origin    | token      |
        ------------------------------------------------------------
        | 1273226321 | User A | article-x | uniqueid## |
    3) The timestamp is checked. If it's valid and less than, say, 100 seconds old, a message appears and the user cannot make any changes to the requested article X: Warning: User A is currently working with this article. In the meantime, editing cannot be done. Please do something else with your life. 4) If User A decides to go on and save his changes, the token is posted along with all other data to update the database, and triggers a query to delete the row with token uniqueid##. If he decides to do something else instead of committing his changes, article X will become available for editing again for User B after 100 seconds. Let me know what you think about this approach! Wish everyone a great weekend!
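    A minimal sketch of the lock-check logic described above, in Python, with an in-memory dict standing in for the Events table; the 100-second window comes from the post, while the function and variable names are invented for the example.

        import time, uuid

        LOCK_TIMEOUT = 100            # seconds, the window used in the post
        locks = {}                    # origin -> (timestamp, owner, token); stands in for the Events table

        def acquire_edit_lock(origin, owner):
            """Return a token if nobody else holds a fresh lock on this article, else None."""
            entry = locks.get(origin)
            if entry and entry[1] != owner and time.time() - entry[0] < LOCK_TIMEOUT:
                return None                       # someone else is editing: show the warning
            token = uuid.uuid4().hex
            locks[origin] = (time.time(), owner, token)
            return token

        def release_edit_lock(origin, token):
            """Called when the edit is saved (or abandoned); drops the row holding this token."""
            if origin in locks and locks[origin][2] == token:
                del locks[origin]

        token = acquire_edit_lock("article-x", "User A")
        print(acquire_edit_lock("article-x", "User B"))              # None -> User B sees the warning
        release_edit_lock("article-x", token)
        print(acquire_edit_lock("article-x", "User B") is not None)  # True once the lock is released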


  • Sequential animations in jQuery

    - by Pickels
    I see a lot of people struggling with this (me included). I guess it's mostly due to not perfectly knowing how JavaScript scopes work. An image slideshow is a good example of this. So let's say you have a series of images; the first one fades in => waits => fades out => next image. When I first created this I was already a little lost. I think the biggest problem for creating the basics is to keep it clean. I noticed working with callbacks can get ugly pretty fast. So to make it even more complicated, most slideshows have control buttons: next, previous, go to img, pause, ... I've been trying this for a while now and this is where I got: $(InitSlideshow); function InitSlideshow() { var img = $("img").hide(); var animate = { wait: undefined, start: function() { img.fadeIn(function() { animate.middle(); }); }, middle: function() { animate.wait = setTimeout(function() { animate.end(); }, 1000); }, end: function() { img.fadeOut(function() { animate.start(); }); }, stop: function() { img.stop(); clearTimeout(animate.wait); } }; $("#btStart").click(animate.start); $("#btStop").click(animate.stop); }; This code works (I think), but I wonder how other people deal with sequential animations in jQuery? Tips and tricks are most welcome. So are links to good resources dealing with this issue. If this has been asked before I will delete this question. I didn't find a lot of info about this right away. I hope making this community wiki is correct as there is not one correct answer to my question. Kind regards, Pickels


  • How can I pare down Vim's buffer list to only include active buffers

    - by nelstrom
    How can I pare down my buffer list to only include buffers that are currently open in a window/tab? When I've been running Vim for a long time, the list of buffers revealed by the :ls command is too large to work with. Ideally, I would like to delete all of the buffers which are not currently visible in a tab or window by running a custom command such as :Only. Can anybody suggest how to achieve this? It looks like the :bdelete command can accept a list of buffer numbers, but I'm not sure how to translate the output from :ls to a format that can be consumed by the :bdelete command. Any help would be appreciated. Clarification: Let's say that in my Vim session I have opened 4 files. The :ls command outputs:
        :ls
        1 a "abc.c"
        2 h "123.c"
        3 h "xyz.c"
        4 a "abc.h"
    Buffer 1 is in the current tab, and buffer 4 is in a separate tab, but buffers 2 and 3 are both hidden. I would like to run the command :Only, and it would wipe buffers 2 and 3, so the :ls command would output:
        :ls
        1 a "abc.c"
        4 a "abc.h"
    This example doesn't make the proposed :Only command look very useful, but if you have a list of 40 buffers it would be very welcome.
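    A rough sketch of what such an :Only command could do, written against Vim's built-in Python scripting interface; this assumes a Vim compiled with +python3, and the attribute names (vim.tabpages, tab.windows) should be double-checked against older Vim versions.

        import vim  # only available when run inside Vim via :py3

        def wipe_hidden_buffers():
            # Buffer numbers currently shown in any window of any tab page.
            visible = {w.buffer.number for tab in vim.tabpages for w in tab.windows}
            for buf in list(vim.buffers):
                listed   = int(vim.eval('buflisted(%d)' % buf.number))
                modified = int(vim.eval('getbufvar(%d, "&modified")' % buf.number))
                if listed and not modified and buf.number not in visible:
                    vim.command('bdelete %d' % buf.number)

        # Then wire it up with something like:  :command! Only py3 wipe_hidden_buffers()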


  • How do I test if storage-conf is being loaded in Cassandra 0.7.3?

    - by user657253
    I have installed Cassandra and gotten it working on two machines. I have followed the instructions to hook them up to each other by configuring the storage-conf.xml files. Both machines respond well to thrift and to command line cassandra. This is tutorial I used to setup the storage-conf.xml files. The tutorial says that if I run netstat, I should NOT see Cassandra bound to 127.0.0.1 on my seed node. I should see it bound to my internal IP, which I have configured in the storage-conf.xml file. I have rebooted the servers and relaunched cassandra. Still, I see the localhost address insead of the correct internal IP address. Is it that my .yaml file is overriding the storage-conf.xml file? If so, how do I delete the appropriate things in the .yaml? Or how do I tell Cassandra to look for my storage-conf.xml file? A few things I have tried: renaming the cassandra.yaml file. What happens is that cassandra will not load. If i rename the storage-conf.xml, cassandra does load. When I installed Cassandra, it did not come with a storage-conf.xml file. I had to grab it off the apache wiki.


  • SCD2 + Merge Statement + MSSQL

    - by Nev_Rahd
    I am trying work out with MERGE statment to Insert / Update Dimension Table of Type SCD2 My source is a Table var to Merge with Dimension table. My Merget statement is throwing an error as: The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on either side of a (primary key, foreign key) relationship when the FROM clause contains a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint 'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'. My Merge Statement: DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION] INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING, ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE, ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG, CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK FROM (MERGE DM.DATA_ERROR.ERROR_DIMENSION ED USING @DATAERROROBJECT OBJ ON(ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID) WHEN NOT MATCHED THEN INSERT VALUES( OBJ.ERROR_CODE ,OBJ.DATA_STREAM_ID ,OBJ.[ERROR_SEVERITY] ,OBJ.DATA_QUALITY_RATING ,OBJ.ERROR_LONG_DESCRIPTION ,OBJ.ERROR_DESCRIPTION ,OBJ.VALIDATION_RULE ,OBJ.ERROR_TYPE ,OBJ.ERROR_CLASS ,GETDATE() ,'9999-12-13' ,'Y' ,1 ,1 ) WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND ( ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY] OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING] OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION] OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION] OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE] OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE] OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS] ) THEN UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE() OUTPUT $ACTION ACTION_OUT, OBJ.ERROR_CODE ERROR_CODE, OBJ.DATA_STREAM_ID DATA_STREAM_ID, OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY], OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING, OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION, OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION, OBJ.VALIDATION_RULE VALIDATION_RULE, OBJ.ERROR_TYPE ERROR_TYPE, OBJ.ERROR_CLASS ERROR_CLASS, GETDATE() VALID_FROM, '9999-12-31' VALID_TO, 'Y' CURR_FLAG, 555 CREATED_BY_AUDIT_SK, 555 UPDATED_BY_AUDIT_SK ) AS MERGE_OUT WHERE MERGE_OUT.ACTION_OUT = 'UPDATE'; What am i doing wrong ?


  • Stored Procedure Does Not Fire Last Command

    - by jp2code
    On our SQL Server (Version 10.0.1600), I have a stored procedure that I wrote. It is not throwing any errors, and it is returning the correct values after making the insert in the database. However, the last command spSendEventNotificationEmail (which sends out email notifications) is not being run. I can run the spSendEventNotificationEmail script manually using the same data, and the notifications show up, so I know it works. Is there something wrong with how I call it in my stored procedure? [dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50)) AS BEGIN -- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; DECLARE @id int SET @id=-1 -- Insert statements for procedure here SELECT A.ID, PacketID, StatusID INTO #act FROM Action A JOIN Request R ON (R.ID=A.RequestID) WHERE (PacketID=@packetID) AND (StatusID=@statusID) IF ((SELECT COUNT(ID) FROM #act)=0) BEGIN -- this statusID has not been entered. Continue SELECT ID, MTF INTO #req FROM Request WHERE PacketID=@packetID WHILE (0 < (SELECT COUNT(ID) FROM #req)) BEGIN SELECT TOP 1 @id=ID FROM #req INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp) VALUES (@id, @statusID, @empID, GETDATE()) IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf)))) BEGIN UPDATE Request SET MTF=@mtf WHERE ID=@id END DELETE #req WHERE ID=@id END DROP TABLE #req SELECT @id=@@IDENTITY, @statusID=StatusID FROM Action SELECT TOP 1 @statusID=ID FROM Status WHERE (@statusID<ID) AND (-1 < Sequence) EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx' END ELSE BEGIN SET @statusID = -1 END DROP TABLE #act END Idea of how the data tables are connected:


  • Making a simple searchable directory of people and their skills in a day - Which technologies?

    - by gav
    Hi All, I am working with a small theatre company. Currently they have a list of people on paper with notes about their skills next to each one. I want to create a database / directory for them so that they can add, delete, update and search for people. It is a very simple and common scenario I know but the issue here is that I only have a day to build a working solution. The search has to be very simple At first I was thinking LAMP but I'd rather not have to create it all from scratch and host it myself. That lead me to Google Spreadsheet as a database, this has the advantage that they already use google docs for everything and if my front end goes tits up they can still get to the data. Presuming none of you can think of some existing software which does exactly what I want the next step is to make a front end for the database. You can create forms for Google Spreadsheets but they only let you add new entries, I can make a Google Gadget but that will only let me implement the search as the Google Visualisation API provides read only access. It's at this point I'm stuck, should I just create a Java Servlet front end for the Google Spreadsheet and use the Java API to add, search and update? I know this is a broad question but I'm just asking 'What would you do?' to implement this system with a day's development time? Gav


  • Is there such a thing as IMAP for podcasts?

    - by Gerrit
    Is there such a thing as IMAP for podcasts? I own a desktop, laptop, iPod, smartphone and a web client, all downloading StackOverflow podcasts (among others). They all tell me which episodes are available and which have already been played. Everything is a horrible mess, of course. My iPod is somewhat in sync with my desktop, but everything else is a random jungle. The same problem with e-mail is solved by IMAP. Every device gets content and meta-information from one server, and stays in sync with it. Per device, I can set preferences (do or do not download the complete archive including junkmail). Can we implement the IMAP approach for podcasts? Or is there a better metaphor/standard to solve this problem? What would the adoption strategy look like? (By the way: except for the Windows smartphone, I own a full Apple stack of products. Even then, I run into this problem.) UPDATE The RSS-to-IMAP link to SourceForge looks promising, but very alpha/experimental. UPDATE 2 The one thing RSS is missing is a command/method/parameter/attribute to delete items or mark them as read/unread. RSS can only add, not remove. If RSS(N+1) (3?) could add a value for unread="true|false", it would be solved. If I cache all my RSS feeds on my own server and add the attribute myself, I would only have to convince iTunes and every other client to respect that.


  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filters tables. I am skipping 3,500 of the latest files in the database as to sort of "Trim" the table back to 3,500 + "X" amount of records in the filters table. The filters table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My Code: $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error()); while($row = mysql_fetch_array($sql)){ $id = $row['id']; $file = $row['url']; $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error()); if(mysql_num_rows($getId) == 0){ $IdQue[] = $id; $FileQue[] = $file; } } for($i=3500; $i<$x; $i++){ mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error()); unlink($FileQue[$i]) or die("file Not deleted"); } echo ($i-3500)." files deleted."; Output: 0 files deleted. Database contents: images table: 10,000 rows filters table: 63 rows Amount of rows in filters table that contain an images table id: 63 Execution time of php script: 4 seconds +/- 0.5 second Relevant DB structure TABLE: images id url etc... TABLE: filter id img_id (CONTAINS ID FROM images table) etc...
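    The row-by-row loop above can usually be collapsed into one set-based query. A hedged sketch in Python follows (sqlite3 and the file name site.db are stand-ins so the snippet is runnable; the original stack is PHP/MySQL, and note that older MySQL versions reject LIMIT inside an IN subquery, so there the inner id list may need to be fetched first or written as a join). The 3,500-newest cutoff is kept from the post.

        import os
        import sqlite3

        KEEP_NEWEST = 3500
        conn = sqlite3.connect("site.db")   # stand-in path; the original uses MySQL

        # Images that are neither referenced from the filter table nor among the newest 3,500.
        rows = conn.execute("""
            SELECT id, url FROM images
            WHERE id NOT IN (SELECT img_id FROM filter)
              AND id NOT IN (SELECT id FROM images ORDER BY id DESC LIMIT ?)""",
            (KEEP_NEWEST,)).fetchall()

        for image_id, path in rows:
            conn.execute("DELETE FROM images WHERE id = ?", (image_id,))
            if os.path.exists(path):
                os.unlink(path)

        conn.commit()
        print(f"{len(rows)} files deleted.")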


  • Rails: generating URLs for actions in JSON response

    - by Chris Butler
    In a view I am generating an HTML canvas of figures based on model data in an app. In the view I am preloading JSON model data in the page like this (to avoid an initial request back): <script type="text/javascript" charset="utf-8"> <% ActiveRecord::Base.include_root_in_json = false -%> var objects = <%= @objects.to_json(:include => :other_objects) %>; ... Based on mouse (or touch) interaction I want to redirect to other parts of my app that are model specific (such as view, edit, delete, etc.). Rather than hard code the URLs in my JavaScript I want to generate them from Rails (which means it always adapts the latest routes). It seems like I have one of three options: Add an empty attr to the model that the controller fills in with the appropriate URL (we don't want to use routes in the model) before the JSON is generated Generate custom JSON where I add the different URLs manually Generate the URL as a template from Rails and replace the IDs in JavaScript as appropriate I am starting to lean towards #1 for ease of implementation and maintainability. Are there any other options that I am missing? Is #1 not the best? Thanks! Chris


  • Core Data using old file version on device

    - by Martin KS
    This is a follow on from my previous problems here. Resetting the simulator solved all my troubles before, and I've gone on to complete my App. I now have the exact same problem when installing the app onto my iPhone device. It picks up an old version of my database, which doesn't have the second entity in it, and crashes when I try to access the second entity: 2010-04-22 23:52:18.860 albumCloud[135:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: '+entityForName: could not locate an NSManagedObjectModel for entity name 'Image'' 2010-04-22 23:52:18.874 albumCloud[135:207] Stack: ( 843263261, 825818644, 820669213, 20277, 844154820, 16985, 14633, 844473760, 844851728, 862896011, 843011267, 843009055, 860901832, 843738160, 843731504, 11547, 11500 ) terminate called after throwing an instance of 'NSException' I have two questions: 1) How on earth do I delete my app thoroughly enough from my phone that it removes the old data? (I've so far tried regular app deletion, deleting and then holding home and power for a reboot, cursing at and threatening the app while running it... everything) 2) How do I prevent this happening when my application is in the App store, and I for some reason decide that I want to add another entity to the store, or another attribute to the existing entities? is there an "if x doesn't exist then create it" method?


  • How much memory would I need for something like this?

    - by Vdas Dorls
    If I create a social portal similar to Twitter (not exactly Twitter, but similar, in my country), how much space would I need for images? I guess 100 GB of disk space won't be enough, so could you please give me some idea of how much I would actually need? Also, are there any suggestions on how I should handle the profile images? Are there any tactics, from the programming side, that would let me save some space when uploading and hosting images for user profiles? I guess each time a user changes their image, it would be good to delete the previous image, correct? In addition, how much disk space would be needed for 1,000,000 user profiles if we have around 15 default images and part of the users won't upload their own images but use some of the default ones? So, 3 questions: How much disk space would I need to hold a good social portal? Is there a suggested way to deal with pictures in PHP to save disk space? How much disk space would be needed for 1,000,000 user profiles, if we have around 15 default images and part of the users won't upload their own, but instead use one of the 15 default images?
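    A back-of-the-envelope sizing sketch in Python follows; every figure in it (upload ratio, average avatar size, thumbnail size) is an assumption to replace with real numbers, not something taken from the post.

        # Rough sizing for profile images only; every figure here is a guess to plug your own numbers into.
        users           = 1_000_000
        upload_ratio    = 0.40          # assume 40% upload their own avatar, the rest use the defaults
        avg_image_bytes = 150 * 1024    # assume ~150 KB per avatar after server-side resizing
        thumb_bytes     = 15 * 1024     # assume one extra thumbnail per custom avatar

        custom_avatars = int(users * upload_ratio)
        total_bytes = custom_avatars * (avg_image_bytes + thumb_bytes)
        print(f"{total_bytes / 1024**3:.1f} GiB for {custom_avatars:,} custom avatars")
        # ~63 GiB with these guesses; the 15 default images cost one copy each and are negligible.

    Deleting the previous file whenever a user uploads a new one, as the post suggests, is what keeps this bounded at roughly one image (plus thumbnails) per user.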


  • Autoselect, focus and highlight a new NSOutlineView row

    - by coneybeare
    This is probably just lack of experience with the NSOutlineView but I can't see a way to do this. I have a NSOutlineView (implemented with the excellent PXSourceList) with an add button that is totally functional in the aspect that I save/write/insert/delete rows correctly. I do not use a NSTreeController, and I don't use bindings. I add the entity using the following code: - (void)addEntity:(NSNotification *)notification { // Create the core data representation, and add it as a child to the parent node UABaseNode *node = [[UAModelController defaultModelController] createBaseNode]; [sourceList reloadData]; for (int i = 0; i < [sourceList numberOfRows]; i++) { if (node == [sourceList itemAtRow:i]) { [sourceList selectRowIndexes:[NSIndexSet indexSetWithIndex:i] byExtendingSelection:NO]; [sourceList editColumn:0 row:i withEvent:nil select:NO]; break; } } } When the add button is pressed, a new row is inserted like this: If I click away, then select the row and press enter to edit it, it now looks like this: My question is: How can I programmatically get the same state (focus, selected, highlighted) the first time, to make the user experience better?


  • Crash generated during destruction of hash_map

    - by Alien01
    I am using hash_map in application as typedef hash_map<DWORD,CComPtr<IInterfaceXX>> MapDword2Interface; In main application I am using static instance of this map static MapDword2Interface m_mapDword2Interface; I have got one crash dump from one of the client machines which point to the crash in clearing this map I opened that crash dump and here is assembly during debugging > call std::list<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> >,std::allocator<std::pair<unsigned long const ,ATL::CComPtr<IInterfaceXX> > > >::clear > mov eax,dword ptr [CMainApp::m_mapDword2Interface+8 (49XXXXX)] Here is code where crash dump is pointing. Below code is from stl:list file void clear() { // erase all #if _HAS_ITERATOR_DEBUGGING this->_Orphan_ptr(*this, 0); #endif /* _HAS_ITERATOR_DEBUGGING */ _Nodeptr _Pnext; _Nodeptr _Pnode = _Nextnode(_Myhead); _Nextnode(_Myhead) = _Myhead; _Prevnode(_Myhead) = _Myhead; _Mysize = 0; for (; _Pnode != _Myhead; _Pnode = _Pnext) { // delete an element _Pnext = _Nextnode(_Pnode); this->_Alnod.destroy(_Pnode); this->_Alnod.deallocate(_Pnode, 1); } } Crash is pointing to the this->_Alnod.destroy(_Pnode); statement in above code. I am not able to guess it, what could be reason. Any ideas??? How can I make sure, even is there is something wrong with the map , it should not crash?


  • Group vs role (Any real difference?)

    - by Ondrej
    Can anyone tell me what's the real difference between a group and a role? I've been trying to figure this out for some time now, and the more information I read, the more I get the sense that this is brought up just to confuse people and that there is no proper difference. Both can do the other one's job. I've always used a group to manage users and their access rights. Recently, I've come across an administration software package with a bunch of users. Each user can have a module assigned (the whole system is split into a few parts called modules, i.e. Administration module, Survey module, Orders module, Customer module). On top of that, each module has a list of functionalities that can be allowed or denied for each user. So let's say a user John Smith can access the Orders module and can edit any order, but hasn't been given the right to delete any of them. If there were more users with the same competency, I would use a group to manage that. I would aggregate such users into the same group and assign access rights to modules and their functions to the group. All users in the same group would have the same access rights. Why call it a group and not a role? I don't know, I just feel it that way. It seems to me that it simply doesn't really matter :] But I still would like to know the real difference. What about you guys? Any suggestions why this should rather be called a role than a group, or the other way round? Thanks to everyone.


  • Convert Google Analytics cookies to Local/Session Storage

    - by David Murdoch
    Google Analytics sets 4 cookies that will be sent with all requests to that domain (and often its subdomains). From what I can tell no server actually uses them directly; they're only sent with __utm.gif as a query param. Now, obviously Google Analytics reads, writes and acts on their values, and they will need to be available to the GA tracking script. So, what I am wondering is if it is possible to: 1) rewrite the __utm* cookies to local storage after ga.js has written them, 2) delete them after ga.js has run, 3) rewrite the cookies FROM local storage back to cookie form right before ga.js reads them, 4) start over. Or, monkey patch ga.js to use local storage before it begins the cookie read/write part. Obviously, if we are going so far out of the way to remove the __utm* cookies, we'll want to also use the Async variant of Analytics. I'm guessing the down vote was because I didn't ask a question. DOH! My questions are: Can it be done as described above? If so, why hasn't it been done? I have a default HTML/CSS/JS boilerplate template that passes YSlow, PageSpeed, and Chrome's Audit with near perfect scores. I'm really looking for a way to squeeze those remaining cookie bytes out of Google Analytics in browsers that support local storage.


  • Checking for valid email addresses

    - by Roland
    I'm running a website with more than 60,000 registered users. Every week notifications are sent to these users via email. Now I've noticed some of the mail addresses no longer exist: the domain is still valid, but the mailbox name (e.g. asdas@) is not valid anymore since the person no longer works at the company, etc. Now I'm looping through the database and doing some regular expression checks and checking if the MX records exist with the following two functions function verify_email($email){ if(!preg_match('/^[_A-z0-9-]+((\.|\+)[_A-z0-9-]+)*@[A-z0-9-]+(\.[A-z0-9-]+)*(\.[A-z]{2,4})$/',$email)){ return false; } else { return true; } } // Our function to verify the MX records function verify_email_dns($email){ list($name, $domain) = split('@',$email); if(!checkdnsrr($domain,'MX')){ return false; } else { return true; } } If the email address is in an invalid format or the domain does not exist, I delete the user's account. Are there any methods I could use to check if the email address still exists or not, if the domain name is valid and the email address is in the correct format? For example [email protected] does not exist anymore but test.com is a valid domain name.
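    A rough Python sketch of the same two checks (syntax, then MX lookup) follows; it assumes the third-party dnspython package for the DNS query, and the function name is made up. Note that a valid MX record still cannot tell you whether the individual mailbox exists; only an actual bounce (e.g. an SMTP 550 reply) from the receiving server can.

        import re
        import dns.resolver   # third-party: pip install dnspython

        EMAIL_RE = re.compile(r'^[\w.+-]+@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)+$')

        def looks_deliverable(email):
            """Syntax check plus MX lookup; a True result does NOT prove the mailbox exists."""
            if not EMAIL_RE.match(email):
                return False
            domain = email.rsplit('@', 1)[1]
            try:
                dns.resolver.resolve(domain, 'MX')
                return True
            except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
                return False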


  • NSPredicate causes update editing to return NSFetchedResultsChangeDelete not NSFetchedResultsChangeUpdate

    - by Matthew Weiss
    I have a predicate inside of - (NSFetchedResultsController *)fetchedResultsController in a standard way, starting from the CoreDataBook example. NSPredicate *predicate = [NSPredicate predicateWithFormat:@"state=%@ && date >= %@ && date < %@", @"1",fromDate,toDate]; [fetchRequest setPredicate:predicate]; This works fine; however, when editing an item, it returns with NSFetchedResultsChangeDelete, not NSFetchedResultsChangeUpdate. When the main view returns, it is missing the item. If I restart the simulator the delete was not saved and the correct editing result is shown, with the predicate working correctly. case NSFetchedResultsChangeDelete: [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; break; I can confirm the behavior by commenting out the two predicate lines ONLY, and then all works as it should, correctly returning with the full set after editing and calling NSFetchedResultsChangeUpdate instead of NSFetchedResultsChangeDelete. I have read http://matteocaldari.it/2009/11/multiple-contexts-controllers-delegates-and-coredata-bug which reports similar behavior, but I have not found a workaround to my problem. I can


  • Stream PDF to another local App

    - by Nathan
    Hi, I'm currently trying to optimize a small firefox extension that will grab a pdf off the current document and send it to a port that another local application is listening on. Right now it uses a terrifying hackjob of cache viewer. The way I'm getting it is loading the cache, searching through it using the current URL and grabbing the file and saving it to a temp directory. Then I stream the file in, delete the temp, and send it through the socket. Now, my new design, ideally I'd want to build it from scratch and cut out saving it to the local machine at all, and just stream it through the socket. I've been looking at doing something like, //check page to ensure its a pdf //init in/out streams //stream through sock //flush Now, this would be vastly superior to the 400 line hacked up mess I have now, but I'm new to building FF extensions, and after reading a lot about URIs and the file streaming and such I'm probably more confused than when I started trying to fix this three hours ago. I'm okay with sending things through the sockets and whatnot, I understand that, I'm mainly confused about what multitude of interfaces I want to use. Gah! Thanks! Also, long time reader, first time poster!


  • C++ destructors causing crashes

    - by larsonator
    OK, so I have a somewhat intricate program that simulates a uni system of students, units, and students enrolling in units. Students are stored in a binary search tree; units are stored in a standard list. A Student has a list of Unit pointers, to store which units he/she is enrolled in. A Unit has a list of Student pointers, to store the students which are enrolled in that unit. The unit collection (storing units in a list) was made as a static variable where the main function is, as is the binary search tree of students. When it's finally time to close the program, I call the destructors of each, but at some stage during the destructors on the unit side: Unhandled exception at 0x002e4200 in ClassAllocation.exe: 0xC0000005: Access violation reading location 0x00000000. UnitCollection destructor: UnitCol::~UnitCol() { list<Unit>::iterator itr; for(itr = UnitCollection.begin(); itr != UnitCollection.end();) { UnitCollection.pop_front(); itr = UnitCollection.begin(); } } Unit destructor: Unit::~Unit() { } Now I get the same sort of problem on the student side of things. BST destructors: void StudentCol::Destructor(const BTreeNode * r) { if(r!= 0) { Destructor(r->getLChild()); Destructor(r->getRChild()); delete r; } } StudentCol::~StudentCol() { Destructor(root); } Student destructor: Student::~Student() { } So yeah, any help would be greatly appreciated.

