Search Results

Search found 40229 results on 1610 pages for 'deleted files'.

Page 510/1610 | < Previous Page | 506 507 508 509 510 511 512 513 514 515 516 517  | Next Page >

  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list it loads an XML document (as XDocument). The program then reads data out of it into a container class (storing as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export the container class goes out of scope; however, the garbage collector isn't clearing up the container class which, because it's storing as IEnumerable, seems to lead to the XDocument staying in memory (not sure if this is the reason, but the task manager is showing the memory from the XDocument isn't being freed). As the program is looping through multiple files, eventually the program is throwing an out of memory exception. To mitigate this I've ended up using System.GC.Collect(); to force the garbage collector to run after the container goes out of scope. This is working, but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.) Is there a better way to make sure the XDocument memory is being disposed? Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed? Thanks. Edit: Code Samples: Container Class: public IEnumerable<CustomClassOne> CustomClassOne { get; set; } public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; } public IEnumerable<CustomClassThree> CustomClassThree { get; set; } ... public IEnumerable<CustomClassNine> CustomClassNine { get; set; } Custom Class: public long VariableOne { get; set; } public int VariableTwo { get; set; } public DateTime VariableThree { get; set; } ... Anyway, that's the basic structure really. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g. public static void ExportAll(IEnumerable<FileInfo> files) { foreach (FileInfo file in files) { ExportFile(file); //Temporary to clear memory System.GC.Collect(); } } private static void ExportFile(FileInfo file) { ContainerClass containerClass = Reader.ReadXMLDocument(file); ExportContainerClass(containerClass); //Export simply dumps the data from the container class into a database //Container Class (and any passed container classes) goes out of scope at end of export } public static ContainerClass ReadXMLDocument(FileInfo fileToRead) { XDocument document = GetXDocument(fileToRead); var containerClass = new ContainerClass(); //ForEach customClass in containerClass //Read all data for customClass from XDocument return containerClass; } Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them private static XDocument GetXDocument(FileInfo fileToRead) { XDocument document; using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read)) { if (String.Compare(fileToRead.Extension, ".gz", true) == 0) { using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress)) { document = XDocument.Load(zipStream); } } else { document = XDocument.Load(fileStream); } return document; } } Hope this is enough information. Thanks. Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Anyone have any idea why this might be?
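
    One possible direction, offered here as a sketch rather than the poster's own fix: if the container's IEnumerable properties hold deferred LINQ to XML queries, each query keeps a reference back into the XDocument, so the document cannot be collected while the container is alive. Materializing the projection with ToList() before the reader returns means the container holds plain objects only. The element and class names below are illustrative, not taken from the real schema.

      using System.Collections.Generic;
      using System.IO;
      using System.Linq;
      using System.Xml.Linq;

      public class CustomClassOne
      {
          public long VariableOne { get; set; }
          public int VariableTwo { get; set; }
      }

      public class ContainerClass
      {
          public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
      }

      public static class Reader
      {
          public static ContainerClass ReadXmlDocument(FileInfo fileToRead)
          {
              XDocument document = XDocument.Load(fileToRead.FullName);
              var container = new ContainerClass();

              // ToList() copies the projected values out of the document now, so the
              // property holds plain objects instead of a live query over the XML tree.
              container.CustomClassOne = document
                  .Descendants("CustomClassOne")   // hypothetical element name
                  .Select(e => new CustomClassOne
                  {
                      VariableOne = (long)e.Element("VariableOne"),
                      VariableTwo = (int)e.Element("VariableTwo")
                  })
                  .ToList();

              // When this method returns, nothing references the XDocument any more,
              // so it can be collected without calling GC.Collect().
              return container;
          }
      }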

    Read the article

  • Loading multiple copies of a group of DLLs in the same process

    - by george
    Background I'm maintaining a plugin for an application. I'm using Visual C++ 2003. The plugin is composed of several DLLs - there's the main DLL, that's the one that the application loads using LoadLibrary, and there are several utility DLLs that are used by the main DLL and by each other. Dependencies generally look like this: plugin.dll - utilA.dll, utilB.dll; utilA.dll - utilB.dll; utilB.dll - utilA.dll, utilC.dll. You get the picture. Some of the dependencies between the DLLs are load-time and some run-time. All the DLL files are stored in the executable's directory (not a requirement, just how it works now). The problem There's a new requirement - running multiple instances of the plugin within the application. The application runs each instance of a plugin in its own thread, i.e. each thread calls the plugin's code. The plugin's code, however, is anything but thread-safe - lots of global variables etc. Unfortunately, fixing the whole thing isn't currently an option, so I need a way to load multiple (at most 3) copies of the plugin's DLLs in the same process. Option 1: The distinct names approach Creating 3 copies of each DLL file, so that each file has a distinct name. e.g. plugin1.dll, plugin2.dll, plugin3.dll, utilA1.dll, utilA2.dll, utilA3.dll, utilB1.dll, etc. The application will load plugin1.dll, plugin2.dll and plugin3.dll. The files will be in the executable's directory. For each group of DLLs to know each other by name (so the inter-dependencies work), the names need to be known at compilation time - meaning the DLLs need to be compiled multiple times, only each time with different output file names. Not very complicated, but I'd hate having 3 copies of the VS project files, and don't like having to compile the same files over and over. Option 2: The side-by-side assemblies approach Creating 3 copies of the DLL files, each group in its own directory, and defining each group as an assembly by putting an assembly manifest file in the directory, listing the plugin's DLLs. Each DLL will have an application manifest pointing to the assembly, so that the loader finds the copies of the utility DLLs that reside in the same directory. The manifest needs to be embedded for it to be found when a DLL is loaded using LoadLibrary. I'll use mt.exe from a later VS version for the job, since VS2003 has no built-in manifest embedding support. I've tried this approach with partial success - dependencies are found during load-time of the DLLs, but not when a DLL function is called that loads another DLL. This seems to be the expected behavior according to this article - A DLL's activation context is only used at the DLL's load-time, and afterwards it's deactivated and the process's activation context is used. I haven't yet tried working around this using ISOLATION_AWARE_ENABLED. Questions Got any other options? Any quick & dirty solution will do. :-) Will ISOLATION_AWARE_ENABLED even work with VS2003? Comments will be greatly appreciated. Thanks!

    Read the article

  • How can I get more than one .jpg or .txt file from any folder?

    - by Phsika
    Dear Sirs; I have two applications: Server.cs, which listens to a network stream, and on the other hand Client.cs, which sends a file. But I want to send several files over one stream from a folder. For example, I have C:\folder which has got 3 jpg files. My client must send them all, and my Server.cs must get the files from the stream. Client.cs: private void btn_send2_Click(object sender, EventArgs e) { string[] paths= null; paths= System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg", System.IO.SearchOption.AllDirectories); byte[] Dizi; TcpClient Gonder = new TcpClient("127.0.0.1", 51124); FileStream Dosya; FileInfo Dos; NetworkStream Akis; foreach (string path in paths) { Dosya = new FileStream(path , FileMode.OpenOrCreate); Dos = new FileInfo(path ); Dizi = new byte[(int)Dos.Length]; Dosya.Read(Dizi, 0, (int)Dos.Length); Akis = Gonder.GetStream(); Akis.Write(Dizi, 0, (int)Dosya.Length); Gonder.Close(); Akis.Flush(); Dosya.Close(); } } I also have Server.cs: void Dinle() { TcpListener server = null; try { Int32 port = 51124; IPAddress localAddr = IPAddress.Parse("127.0.0.1"); server = new TcpListener(localAddr, port); server.Start(); Byte[] bytes = new Byte[1024 * 250000]; // string ReceivedPath = "C:/recieved"; while (true) { MessageBox.Show("Waiting for a connection... "); TcpClient client = server.AcceptTcpClient(); MessageBox.Show("Connected!"); NetworkStream stream = client.GetStream(); if (stream.CanRead) { saveFileDialog1.ShowDialog(); // this part will change string pathfolder = saveFileDialog1.FileName; StreamWriter yaz = new StreamWriter(pathfolder); string satir; StreamReader oku = new StreamReader(stream); while ((satir = oku.ReadLine()) != null) { satir = satir + (char)13 + (char)10; yaz.WriteLine(satir); } oku.Close(); yaz.Close(); client.Close(); } } } catch (SocketException e) { Console.WriteLine("SocketException: {0}", e); } finally { // Stop listening for new clients. server.Stop(); } Console.WriteLine("\nHit enter to continue..."); Console.Read(); } Please look at Client.cs: I collected all the files from "C:\folder": paths= System.IO.Directory.GetFiles(@"C:\folder" + @"\", "*.jpg", System.IO.SearchOption.AllDirectories); How can my Server.cs get all the files from the stream?
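
    Since a single TcpClient stream has no built-in notion of where one file ends and the next begins, one common approach (shown below as a sketch, not the poster's code) is to frame each file yourself: send a file count, then for every file its name and byte length before its bytes, and have the server read the same fields back with a BinaryReader. The host, port and folder values here are placeholders.

      using System.IO;
      using System.Net;
      using System.Net.Sockets;
      using System.Text;

      static class FileTransfer
      {
          // Client side: write a count, then for each file its name and length before the bytes.
          public static void SendFolder(string folder, string host, int port)
          {
              string[] paths = Directory.GetFiles(folder, "*.jpg");
              using (var client = new TcpClient(host, port))
              using (var writer = new BinaryWriter(client.GetStream(), Encoding.UTF8))
              {
                  writer.Write(paths.Length);                    // how many files follow
                  foreach (string path in paths)
                  {
                      byte[] data = File.ReadAllBytes(path);
                      writer.Write(Path.GetFileName(path));      // length-prefixed string
                      writer.Write(data.Length);                 // byte count
                      writer.Write(data);                        // the file itself
                  }
              }
          }

          // Server side: read the same fields back and save each file.
          public static void ReceiveFolder(string targetFolder, int port)
          {
              var listener = new TcpListener(IPAddress.Loopback, port);
              listener.Start();
              using (TcpClient client = listener.AcceptTcpClient())
              using (var reader = new BinaryReader(client.GetStream(), Encoding.UTF8))
              {
                  int count = reader.ReadInt32();
                  for (int i = 0; i < count; i++)
                  {
                      string name = reader.ReadString();
                      int length = reader.ReadInt32();
                      byte[] data = reader.ReadBytes(length);
                      File.WriteAllBytes(Path.Combine(targetFolder, name), data);
                  }
              }
              listener.Stop();
          }
      }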

    Read the article

  • Managing Dependency Hell with WiX and C#

    - by Tom the Junglist
    We are on the eve of product launch, and at the last minute I am being bombarded with crash reports that appear to be related to our installer, which is a WiX3 project with separate outputs for x86 and x64 builds. These have been an ongoing problem that I always thought were fixed, only to find out that they were still lurking. The product itself is a collection of binaries that communicate with each other via .NET Remoting, including a Windows Service and a small COM component that is loaded as an addon in another app. The service runs as SYSTEM, the COM piece runs in a low-rights context, while the other pieces run in normal user contexts. Other pieces include a third-party COM object library DLL and a shared DLL with the .NET Remoting interfaces. I've observed flat-out weird behavior with MSI, particularly on version upgrades. Between MS' anal strong-name implementation (specifically, the exact version check before loading a given assembly), a documented WiX/MSI bug that sees critical files erased on upgrades (essentially, if a file in the upgrade MSI has the same version number as the existing install, that file is deleted), and having to work around Wow64 virtualization (x86 MSI can only write to registry/HD locations via Wow64, yet x64 MSIs cannot run on x86 computers...), I am about ready to trash the whole thing and port it over to a different install system. What I am looking for are tips + tricks, techniques, or suggestions on how to properly do things so that I am not fighting with Windows Installer's twisted sense of logic. I am tired of fighting with WiX/MSI/Windows Installer. All it needs to do is place files and registry keys where I tell it to, upgrade them when appropriate, and not delete anything until the user uninstalls. Instead, dependencies are deleted willy-nilly, bringing up a whole bunch of uncatchable exceptions (can't wrap a try{} block around function declarations) and GPF'ing the whole app. I am particularly interested in 'best practices' and examples regarding shared and dependency DLLs, and any tips on making sure if a file needs to go to the GAC, that it actually goes to the GAC and stays there until it is appropriate to remove it. Thanks! Tom

    Read the article

  • Entity Framework: Delete Object and its related entities

    - by Waheed
    Hi, Does anyone know how to delete an object and all of its related entities? For example I have tables Products, Category, ProductCategory and ProductDetails; ProductCategory is a joining table of both Product and Category. I have read from http://msdn.microsoft.com/en-us/library/bb738580.aspx that "Deleting the parent object also deletes all the child objects in the constrained relationship. This result is the same as enabling the CascadeDelete property on the association for the relationship." I am using this code Product productObj = this.ObjectContext.Product.Where(p => p.ProductID.Equals(productID)).First(); if (!productObj.ProductCategory.IsLoaded) productObj.ProductCategory.Load(); if (!productObj.ProductDetails.IsLoaded) productObj.ProductDetails.Load(); //my own methods. base.Delete(productObj); base.SaveAllObjectChanges(); But I am getting an error on ObjectContext.SaveChanges(), i.e. A relationship is being added or deleted from an AssociationSet 'FK_ProductCategory_Product'. With cardinality constraints, a corresponding 'ProductCategory' must also be added or deleted. Thanks in advance....
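
    The error suggests the ProductCategory join rows still reference the product when it is marked for deletion. A sketch of one way to handle it with the ObjectContext API is below; the service class name is made up and merely stands in for whatever class owns this.ObjectContext in the question, and whether this fits depends on how base.Delete is implemented. The idea is to delete the dependent rows explicitly, then the product, then save once.

      using System.Linq;

      public class ProductService   // hypothetical stand-in for the poster's class that exposes ObjectContext
      {
          public void DeleteProductWithChildren(int productID)
          {
              Product productObj = this.ObjectContext.Product
                  .Where(p => p.ProductID == productID)
                  .First();

              if (!productObj.ProductCategory.IsLoaded) productObj.ProductCategory.Load();
              if (!productObj.ProductDetails.IsLoaded) productObj.ProductDetails.Load();

              // Remove the join rows and detail rows first, so the relationship change
              // the exception complains about is itself part of the same delete.
              foreach (var pc in productObj.ProductCategory.ToList())
                  this.ObjectContext.DeleteObject(pc);
              foreach (var pd in productObj.ProductDetails.ToList())
                  this.ObjectContext.DeleteObject(pd);

              this.ObjectContext.DeleteObject(productObj);
              this.ObjectContext.SaveChanges();
          }
      }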

    Read the article

  • Add new item to UITableView and Core Data as data source?

    - by David.Chu.ca
    I have trouble adding a new item to my table view with Core Data. Here is the brief logic in my code. In my ViewController class, I have a button to toggle the edit mode: - (void) toggleEditing { UITableView *tv = (UITableView *)self.view; if (isEdit) // class level flag for editing { self.newEntity = [NSEntityDescription insertNewObjectForEntityForName:@"entity1" inManagedObjectContext:managedObjectContext]; NSArray *insertIndexPaths = [NSArray arrayWithObjects: [NSIndexPath indexPathForRow:0 inSection:0], nil]; // empty at beginning so hard code numbers here. [tv insertRowsAtIndexPaths:insertIndexPaths withRowAnimation:UITableViewRowAnimationFade]; [self.tableView setEditing:YES animated:YES]; // enable editing mode } else { ...} } In this block of code, I added a new item to my current managed object context first, and then I added a new row to my tv. I think that both the number of objects in my data source or context and the number of rows in my table view should be 1. However, I got an exception in the event of tableView:numberOfRowsInSection: Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (0) must be equal to the number of rows contained in that section before the update (0), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted). The exception was raised right after the delegate event: - (NSInteger) tableView:(UITableView *) tableView numberOfRowsInSection:(NSInteger) section { // fetchedResultsController is class member var NSFetchedResultsController id <NSFetchedResultsSectionInfo> sectionInfo = [[fetchedResultsController sections] objectAtIndex: section]; NSInteger rows = [sectionInfo numberOfObjects]; return rows; } In debug mode, I found that rows was still 0 and the event was invoked after the toggleEditing event. It looks like the sectionInfo obtained from fetchedResultsController did not include the newly inserted entity object. Not sure if I missed anything or any steps? I am not sure how it works: how does the fetchedResultsController get notified of or reflect the change when a new entity is inserted into the current managed object context?

    Read the article

  • Problem with cascade delete using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so when I delete a Person the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person I get an error: System.InvalidOperationException: The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted. I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? If I change this value to Cascade everything works properly, but I would rather not rely on this manual change because what if I refresh my model from the DB and it loses that change. Here's the relevant SQL for my tables. CREATE TABLE [SomeTable] ( [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE ) CREATE TABLE [Person] ( [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT )

    Read the article

  • Problem deleting record using Entity Framework and System.Data.SQLite

    - by jamone
    I have a SQLite DB that is set up so when I delete a Person the delete is cascaded. This works fine when I manually delete a Person (all records that reference the PersonID are deleted). But when I use Entity Framework to delete the Person I get an error: System.InvalidOperationException: The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted. I don't understand why this is occurring. My trigger is set to clean up all related objects before deleting the object it was told to delete. When I go into the model editor and check the properties of the relationship it shows no action for the OnDelete property. Why isn't this set correctly by pulling it from the DB? Here's the relevant SQL for my tables. CREATE TABLE [SomeTable] ( [SomeTableID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, [PersonID] INTEGER NOT NULL REFERENCES [Person](PersonID) ON DELETE CASCADE ) CREATE TABLE [Person] ( [PersonID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT )

    Read the article

  • Atomic UPSERT in SQL Server 2005

    - by rabidpebble
    What is the correct pattern for doing an atomic "UPSERT" (UPDATE where exists, INSERT otherwise) in SQL Server 2005? I see a lot of code on SO (e.g. see http://stackoverflow.com/questions/639854/tsql-check-if-a-row-exists-otherwise-insert) with the following two-part pattern: UPDATE ... FROM ... WHERE <condition> -- race condition risk here IF @@ROWCOUNT = 0 INSERT ... or IF (SELECT COUNT(*) FROM ... WHERE <condition>) = 0 -- race condition risk here INSERT ... ELSE UPDATE ... where <condition> will be an evaluation of natural keys. None of the above approaches seem to deal well with concurrency. If I cannot have two rows with the same natural key, it seems like all of the above risk inserting rows with the same natural keys in race condition scenarios. I have been using the following approach but I'm surprised not to see it anywhere in people's responses so I'm wondering what is wrong with it: INSERT INTO <table> SELECT <natural keys>, <other stuff...> FROM <table> WHERE NOT EXISTS -- race condition risk here? ( SELECT 1 FROM <table> WHERE <natural keys> ) UPDATE ... WHERE <natural keys> (Note: I'm assuming that rows will not be deleted from this table. Although it would be nice to discuss how to handle the case where they can be deleted -- are transactions the only option? Which level of isolation?) Is this atomic? I can't locate where this would be documented in SQL Server documentation.
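
    For reference, the pattern most often recommended for SQL Server 2005 (which predates MERGE) is to take an update lock on the key up front inside a transaction, so the existence check and the INSERT cannot interleave with another session doing the same thing. The sketch below wraps that T-SQL in ADO.NET; the table and column names are invented for illustration.

      using System.Data;
      using System.Data.SqlClient;

      static class Upsert
      {
          // Hypothetical table: Widgets(NaturalKey NVARCHAR(50) PRIMARY KEY, Payload INT).
          const string Sql = @"
      BEGIN TRAN;

      -- UPDLOCK + SERIALIZABLE holds a key-range lock, so no other session can
      -- insert the same key between this UPDATE and the INSERT below.
      UPDATE Widgets WITH (UPDLOCK, SERIALIZABLE)
      SET    Payload = @payload
      WHERE  NaturalKey = @key;

      IF @@ROWCOUNT = 0
          INSERT INTO Widgets (NaturalKey, Payload) VALUES (@key, @payload);

      COMMIT TRAN;";

          public static void Run(string connectionString, string key, int payload)
          {
              using (var conn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(Sql, conn))
              {
                  cmd.Parameters.Add("@key", SqlDbType.NVarChar, 50).Value = key;
                  cmd.Parameters.Add("@payload", SqlDbType.Int).Value = payload;
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }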

    Read the article

  • A catalogue of Cassandra log messages: What is the correct interpretation?

    - by knorv
    The following is a complete catalogue of all log messages generated by Cassandra 0.6 when stress-testing a Cassandra installation over an extended period of time: AntiEntropyService: Sending AEService tree for (,) to: [] CassandraDaemon: Binding thrift service to localhost/N.N.N.N:N CassandraDaemon: Cassandra starting up... ColumnFamilyStore: has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='.../cassandra/commitlog/CommitLog-N.log', position=N) ColumnFamilyStore: Enqueuing flush of Memtable()@N CommitLog: Discarding obsolete commit log:CommitLogSegment(.../cassandra/commitlog/CommitLog-N.log) CommitLog: Log replay complete CommitLog: Replaying .../cassandra/commitlog/CommitLog-N.log, ... CommitLogSegment: Creating new commitlog segment .../cassandra/commitlog/CommitLog-N.log CompactionManager: Compacted to .../cassandra/data//-N-Data.db. N/N bytes for N keys. Time: Nms. CompactionManager: Compacting [org.apache.cassandra.io.SSTableReader(path='.../cassandra/data//-N-Data.db'), ...] DatabaseDescriptor: Auto DiskAccessMode determined to be mmap GCInspector: GC for ConcurrentMarkSweep: N ms, N reclaimed leaving N used; max is N GCInspector: GC for ParNew: N ms, N reclaimed leaving N used; max is N Memtable: Completed flushing .../cassandra/data//-N-Data.db Memtable: Writing Memtable()@N SSTable: Deleted .../cassandra/data//-N-Data.db SSTableDeletingReference: Deleted .../cassandra/data//-N-Data.db SSTableReader: Sampling index for .../cassandra/data//-N-Data.db StorageService: Starting up server gossip SystemTable: Saved ClusterName found: Test Cluster SystemTable: Saved ClusterName not found. Using Test Cluster SystemTable: Saved Token found: N SystemTable: Saved Token not found. Using N For each of the log messages listed - what is the correct interpretation of the log message?

    Read the article

  • Enable LLVM + Clang in Xcode new project causes linking errors

    - by Ger Teunis
    I've done a complete clean uninstall of XCode, deleted the prefs, deleted the complete /Developer folder and reinstalled XCode again. I create a new Cocoa application, go over to Target, do a "Get info" on the target, set "C / C++ compiler version" to "LLVM compiler 1.0.2" and press Build. I get: ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/x86_64' following -L not found ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/x86_64' following -L not found ld: warning: directory '/usr/lib/i686-apple-darwin10/4.2.1' following -L not found ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1' following -L not found ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1' following -L not found ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/../../../i686-apple-darwin10/4.2.1' following -L not found ld: warning: directory '/usr/lib/gcc/i686-apple-darwin10/4.2.1/../../..' following -L not found ld: library not found for -lgcc Command /Developer/usr/bin/clang failed with exit code 1 Anyone able to help me here? The LLVM + GCC frontend does work, though, but I really would like to use Clang (LLVM compiler 1.0.2). A new XCode install and a new Cocoa project still have this issue.

    Read the article

  • Deleting a user > need to also delete their project, and then activities for that project? (PHP, MyS

    - by Jamie
    Hi guys, Really stuck with this... basically my system has 4 tables: users, projects, user_projects and activities. The users table has a usertype field which defines whether they are an admin or a user (by an integer)... An admin can create a project, create an activity for the project and assign a user (limited access user) an activity. Therefore, this setup means that an admin is never directly associated with an activity (instead a project). When my head admin user deletes an admin, I need all projects and activities (for their projects) to be deleted also. My delete script for a user is simple so far and works, but I'm having trouble on how to gain the project ID in order to know which activities to remove (associated with the projects which are about to be deleted): $userid = $_GET['userid']; $query = "DELETE FROM users WHERE userid=".$userid; $result = mysql_query($query, $connection) or die("Error: ".mysql_error()); $query = "DELETE FROM projects WHERE userid=".$userid; $result = mysql_query($query, $connection) or die("Error: ".mysql_error()); $query = "DELETE FROM userprojects WHERE userid=".$userid; $result = mysql_query($query, $connection) or die("Error: ".mysql_error()); $query = "DELETE FROM activities WHERE projectid=".$projectid; $result = mysql_query($query, $connection) or die("Error: ".mysql_error()); Now the first three queries execute fine, obviously because the userid is being retrieved successfully. However the 4th and final query I know is wrong, because there is no projectid to be gained from anywhere; I put it there to help understand what I am trying to get!! :D I'm guessing that I would need something like 'WHERE projectid=' and then something to gather the removed projects from the userid which can be related to the activities for that project(s)!! It's a simple concept but I'm having trouble... please excuse any bad code as I am learning also. Thanks for any help!

    Read the article

  • How do I keep Visual Studio's Windows Forms Designer from deleting controls?

    - by Sören Kuklau
    With several forms of mine, I occasionally run into the following issue: I edit the form using the designer (Visual Studio 2008, Windows Forms, .NET 2.0, VB.NET) to add components, only to find out later that some minor adjustments were made (e.g. the form's size is suddenly changed by a few pixels), and controls get deleted. This happens silently — event-handling methods automatically have their Handles suffix removed, too, so they never get called, and there's no compiler error. I only notice much later or not at all, because I'm working on a different area in the form. As an example, I have a form with a SplitContainer containing an Infragistics UltraListView to the left, and an UltraTabControl to the right. I added a new tab, and controls within, and they worked fine. I later on found out that the list view's scrollbar was suddenly invisible, due to its size being off, and at least one control was removed from a different tab that I hadn't been working on. Is this a known issue with the WinForms Designer, or with Infragistics? I use version control, of course, so I can compare the changes and merge the deleted code back in, but it's a tedious process that shouldn't be necessary. Are there ways to avoid this? Is there a good reason for this to occur? One clue is that the control that was removed may have code (such as a Load event handler) that expects to be run in run time, not design time, and may be throwing an exception. Could this cause Visual Studio to remove the control?
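
    One thing that may be worth ruling out, sketched here in C# for brevity (the project in the question is VB.NET, where the same check applies): if a control's Load handler does run-time-only work, it also runs when Visual Studio instantiates the control on the design surface, and a failure there can leave the designer unable to recreate the control, which matches the "control silently disappears" behavior described. Guarding that work behind a design-mode check keeps it from executing inside the IDE.

      using System;
      using System.ComponentModel;
      using System.Windows.Forms;

      public class SafeUserControl : UserControl
      {
          public SafeUserControl()
          {
              Load += OnLoad;
          }

          private void OnLoad(object sender, EventArgs e)
          {
              // DesignMode is true when the control is hosted inside the Forms designer.
              // LicenseManager covers the constructor path, where DesignMode is not yet set.
              bool designTime = DesignMode ||
                                LicenseManager.UsageMode == LicenseUsageMode.Designtime;
              if (designTime)
                  return;

              // Run-time-only work (database calls, file access, etc.) goes here,
              // where it cannot throw inside Visual Studio while the form is being designed.
          }
      }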

    Read the article

  • Rails: show some examples of code from controllers, models and views

    - by Totty
    Hy, my controller example: class FriendsController < ApplicationController before_filter :authorize, :except => [:friends] ############## ############## ## REQUESTS ## ############## ############## ################## # GET MY FRIENDS # ################## # Get my friends. def friends @friends = @my_profile.friends.paginate({:page => params[:page], :per_page => 3}) @profile = @my_profile end ################### # REMOVED FRIENDS # ################### # Get my deleted friends. def removed_friends @removed_friends = @my_profile.friends('removed_friends', params[:page]) end ################### # PENDING FRIENDS # ################### # Friend requests made by other profiles to me. def pending_friends @pending_friends = @my_profile.friends('pending_friends', params[:page]) end ############################ # REJECTED PENDING FRIENDS # ############################ # Rejected friend requests made by other profiles to me. def rejected_pending_friends @rejected_pending_friends = @my_profile.friends('rejected_pending_friends', params[:page]) end ##################### # REQUESTED FRIENDS # ##################### # The friend requests I've sent to others profiles. def requested_friends @requested_friends = @my_profile.friends('requested_friends', params[:page]) end ############################# # DELETED REQUESTED FRIENDS # ############################# # The requests I've sent to others # profiles and then canceled. def deleted_requested_friends @deleted_requested_friends = @my_profile.friends('deleted_requested_friends', params[:page]) end ############# ############# ## ACTIONS ## ############# ############# ########################## # ADD FRIENDSHIP REQUEST # ########################## # Add a friendship request. def add_friendship_request friendship = @my_profile.add_friendship_request(params[:profile_id]) render :json => friendship end ############################# # REMOVE FRIENDSHIP REQUEST # ############################# # Removes a friendship request I've done. def remove_friendship_request friendship = @my_profile.remove_friendship_request(params[:profile_id]) render :json => friendship end ###################### # PROCESS FRIENDSHIP # ###################### # Process friendship: accept or reject a friend. # This will make a new friend or # will make a new rejected pending friend. def process_friendship friendship = @my_profile.process_friendship(params[:profile_id].to_i, params[:accepted].to_i) render :json => friendship end ################### # REMOVE A FRIEND # ################### # Remove a friend from my friends by id. def remove_friend friendship = @my_profile.remove_friend(params[:profile_id]) render :json => friendship end end

    Read the article

  • Is it possible to add DataRelation to DataSet if child table contains rows that have no parent in pa

    - by matti
    If I fill the DataSet with DataAdapters that select all rows from Orders and Customers and call: private void CreateRelation() { // Get the DataColumn objects from two DataTable objects // in a DataSet. Code to get the DataSet not shown here. DataColumn parentColumn = DataSet1.Tables["Customers"].Columns["CustID"]; DataColumn childColumn = DataSet1.Tables["Orders"].Columns["CustID"]; // Create DataRelation. DataRelation relCustOrder; relCustOrder = new DataRelation("CustomersOrders", parentColumn, childColumn); // Add the relation to the DataSet. DataSet1.Relations.Add(relCustOrder); } (from http://msdn.microsoft.com/en-us/library/system.data.datarelation.aspx) there will be a runtime error if there are orders that do not have customers. This might happen when a buggy program has not deleted a customer's orders when the customer was deleted. What can I do except give the Orders select string an additional where-condition: CUSTID IN (SELECT DISTINCT CUSTID FROM CUSTOMERS)? OR: is it really that way (that all children have to have parents)? My code might have a bug also. The exception occurs when IN MY CODE I add the relation to the filled DataSet. The exception is: An unhandled exception of type 'System.ArgumentException' occurred in System.Data.dll Additional information: This constraint cannot be enabled as not all values have corresponding parent values. Thanks & Best Regards - Matti
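
    A direction that may help, offered as a sketch rather than a definitive answer: the DataRelation constructor has an overload that skips creating the ForeignKeyConstraint, and the DataSet itself has an EnforceConstraints switch, so a relation can be added for navigation even when some child rows have no parent.

      using System.Data;

      static class RelationDemo
      {
          public static void CreateRelation(DataSet ds)
          {
              DataColumn parentColumn = ds.Tables["Customers"].Columns["CustID"];
              DataColumn childColumn = ds.Tables["Orders"].Columns["CustID"];

              // Passing createConstraints: false adds the relation for navigation
              // (GetChildRows etc.) without adding a ForeignKeyConstraint,
              // so orphaned orders are tolerated.
              var relCustOrder = new DataRelation("CustomersOrders",
                                                  parentColumn, childColumn,
                                                  false);
              ds.Relations.Add(relCustOrder);

              // Alternative: keep the constraint but turn off checking for the whole DataSet.
              // ds.EnforceConstraints = false;
          }
      }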

    Read the article

  • Django inlineformset validation and delete

    - by Andrew Gee
    Hi, Can someone tell me if a form in an inlineformset should go through validation if the DELETE field is checked? I have a form that uses an inlineformset and when I check the DELETE box it fails because the required fields are blank. If I put data in the fields it will pass validation and then be deleted. Is that how it is supposed to work? I would have thought that if it is marked for delete it would bypass the validation for that form. Regards Andrew Follow up - but I would still appreciate some other opinions/help. What I have figured out is that for validation to work, a formset form must either be empty or complete (valid), otherwise it will have errors when it is created and will not be deleted. As I have a couple of hidden fields in my formset forms that are pre-populated via JavaScript when the page loads, the form fails validation on the other required fields, which might still be blank. The way I have gotten around this is by adding a check in add_fields that tests if the DELETE input is True, and if it is, it makes all fields on the form not required, which means it passes validation and will then delete. def add_fields(self, form, index): #add other fields that are required.... deleteValue = form.fields['DELETE'].widget.value_from_datadict(form.data, form.files, form.add_prefix('DELETE')) if bool(deleteValue) or deleteValue == '': for name, field in form.fields.items(): form.fields[name].required = False This seems to be an odd way to do things but I cannot figure out another way. Is there a simpler way that I am missing? I have also noticed that when I add the new form to my page and check the Delete box, there is no value passed back via the request; however an existing form (one loaded from the database) has a value of 'on' when the Delete box is checked. If the box is not checked then the input is not in the request at all. Thanks Andrew

    Read the article

  • NHibernate.Search - async mode

    - by Atul
    Hi, I am using NHibernate Lucene search in my project. Lucene.Net.dll - v - 2.3.1.3 NHibernate.dll - v - 2.1.0.4000 At this point I am trying to use the async option for indexing and used the following options: config.SetProperty(NHibernate.Search.Environment.WorkerExecution, "async"); config.SetProperty(NHibernate.Search.Environment.WorkerThreadPoolSize, "1"); config.SetProperty(NHibernate.Search.Environment.WorkerWorkQueueSize, "5000"); Questions 1) My initial index was not built with this option; when I used these settings for the first time, I had an error saying NHibernate.Search.dll was not found. When I deleted the existing index and then started again, it went fine. Do we need to rebuild indexes whenever we change config settings like the above? 2) How should the size of the index be interpreted? Initially my index was about 400MB (built over the last few months), which I deleted. Later, when I reindexed, the size of the index went down to 5MB! Search appears to be all right after limited testing, but such a change seemed a bit scary. Should we delete/rebuild indexes once in a while, and is it normal for the size to change this drastically? 3) Are my settings above OK? When I had WorkerThreadPoolSize=5, I once got a Dr. Watson kind of error. Please advise on best practices for using the async configuration for search. Regards, Atul

    Read the article

  • Does replace into have a where clause?

    - by Lajos Arpad
    I'm writing an application and I'm using MySQL as the DBMS; we are downloading property offers and there were some performance issues. The old architecture looked like this: A property is updated. If the number of affected rows is not 1, then the update is not considered successful; otherwise the update query solves our problem. If the update was not successful, and the number of affected rows is more than 1, we have duplicates and we delete all of them. After we deleted duplicates if needed, if the update was not successful, an insert happens. This architecture was working well, but there were some speed issues, because properties are deleted if they were not updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties). Our host told me to use replace into instead of deleting properties, and all deprecated properties should be considered as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find anywhere an example of replace into with a where clause (I'd like to replace a DEAD property with the new property instead of deleting the old property and inserting a new one to assure optimization). My query looked like this: replace into table_name(column1, ..., columnn) values(value1, ..., valuen) where ID = idValue Of course, I've calculated idValue and handled everything, but I had a syntax error. I would like to know if I'm wrong and there is a where clause for replace into. I've found an alternative solution, which is even better than replace into (simply using an update query), because deletes happen behind the curtains if I use replace into, but I would like to know if I'm wrong when I say that replace into doesn't have a where clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html Thank you for your answers in advance, Lajos Árpád
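
    For what it's worth, REPLACE in MySQL does not take a WHERE clause; it matches rows purely by the PRIMARY KEY or a UNIQUE key. A sketch of the usual alternative, INSERT ... ON DUPLICATE KEY UPDATE, which updates the existing row in place instead of deleting and re-inserting it, is below (shown through MySql.Data with made-up table and column names; the real schema is not known from the question):

      using MySql.Data.MySqlClient;

      static class PropertyUpsert
      {
          // ID is assumed to be the primary key; Price and Status are made-up columns.
          const string Sql = @"
      INSERT INTO properties (ID, Price, Status)
      VALUES (@id, @price, 'ALIVE')
      ON DUPLICATE KEY UPDATE
          Price  = VALUES(Price),
          Status = 'ALIVE';";

          public static void Upsert(string connectionString, long id, decimal price)
          {
              using (var conn = new MySqlConnection(connectionString))
              using (var cmd = new MySqlCommand(Sql, conn))
              {
                  cmd.Parameters.AddWithValue("@id", id);
                  cmd.Parameters.AddWithValue("@price", price);
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }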

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines with a strong audit trail of any changes made to each invoice. Current schema Invoices Table InvoiceId (int) // Primary key JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Comments (nvarchar(MAX)) InvoiceLines Table LineId (int) // Primary key InvoiceId (int) // related to Invoices above Quantity (decimal(9,4)) Title (nvarchar(512)) Comment (nvarchar(512)) UnitPrice (smallmoney) Revision schema InvoiceRevisions Table RevisionId (int) // Primary key InvoiceId (int) JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Total (smallmoney) Schema design considerations 1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (eg. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table? 2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts? 3. Tax How should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?
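
    On question 1, one way to frame the trade-off: a Paid/Pending status column is a cache of something derivable, since "paid" can be computed by comparing the sum of rows in the Payments table against the invoice total. A sketch with hypothetical types (not the schema above) of what the derived version looks like:

      using System.Collections.Generic;
      using System.Linq;

      enum InvoiceStatus { Pending, Paid, Deleted }

      class Payment { public int InvoiceId; public decimal Amount; }

      class Invoice
      {
          public int InvoiceId;
          public decimal Total;        // sum of line quantities * unit prices
          public bool IsDeleted;

          // Derived status: no Paid flag to keep in sync with the Payments table.
          public InvoiceStatus StatusFrom(IEnumerable<Payment> payments)
          {
              if (IsDeleted) return InvoiceStatus.Deleted;
              decimal received = payments.Where(p => p.InvoiceId == InvoiceId)
                                         .Sum(p => p.Amount);
              return received >= Total ? InvoiceStatus.Paid : InvoiceStatus.Pending;
          }
      }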

    Read the article

  • Python metaclass for enforcing immutability of custom types

    - by Mark Lehmacher
    Having searched for a way to enforce immutability of custom types and not having found a satisfactory answer I came up with my own shot at a solution in form of a metaclass: class ImmutableTypeException( Exception ): pass class Immutable( type ): ''' Enforce some aspects of the immutability contract for new-style classes: - attributes must not be created, modified or deleted after object construction - immutable types must implement __eq__ and __hash__ ''' def __new__( meta, classname, bases, classDict ): instance = type.__new__( meta, classname, bases, classDict ) # Make sure __eq__ and __hash__ have been implemented by the immutable type. # In the case of __hash__ also make sure the object default implementation has been overridden. # TODO: the check for eq and hash functions could probably be done more directly and thus more efficiently # (hasattr does not seem to traverse the type hierarchy) if not '__eq__' in dir( instance ): raise ImmutableTypeException( 'Immutable types must implement __eq__.' ) if not '__hash__' in dir( instance ): raise ImmutableTypeException( 'Immutable types must implement __hash__.' ) if _methodFromObjectType( instance.__hash__ ): raise ImmutableTypeException( 'Immutable types must override object.__hash__.' ) instance.__setattr__ = _setattr instance.__delattr__ = _delattr return instance def __call__( self, *args, **kwargs ): obj = type.__call__( self, *args, **kwargs ) obj.__immutable__ = True return obj def _setattr( self, attr, value ): if '__immutable__' in self.__dict__ and self.__immutable__: raise AttributeError( "'%s' must not be modified because '%s' is immutable" % ( attr, self ) ) object.__setattr__( self, attr, value ) def _delattr( self, attr ): raise AttributeError( "'%s' must not be deleted because '%s' is immutable" % ( attr, self ) ) def _methodFromObjectType( method ): ''' Return True if the given method has been defined by object, False otherwise. ''' try: # TODO: Are we exploiting an implementation detail here? Find better solution! return isinstance( method.__objclass__, object ) except: return False However, while the general approach seems to be working rather well there are still some iffy implementation details (also see TODO comments in code): How do I check if a particular method has been implemented anywhere in the type hierarchy? How do I check which type is the origin of a method declaration (i.e. as part of which type a method has been defined)?

    Read the article

  • nhibernate many to many deletes

    - by asi farran
    I have 2 classes that have a many to many relationship. What I'd like to happen is that whenever I delete one side, ONLY the association records will be deleted, no matter which side I delete. simplified model: classes: class Qualification { IList<ProfessionalListing> ProfessionalListings } class ProfessionalListing { IList<Qualification> Qualifications void AddQualification(Qualification qualification) { Qualifications.Add(qualification); qualification.ProfessionalListings.Add(this); } } fluent automapping with overrides: void Override(AutoMapping<Qualification> mapping) { mapping.HasManyToMany(x => x.ProfessionalListings).Inverse(); } void Override(AutoMapping<ProfessionalListing> mapping) { mapping.HasManyToMany(x => x.Qualifications).Not.LazyLoad(); } I'm trying various combinations of cascade and inverse settings but can never get there. If I have no cascades and no inverse I get duplicated entities in my collections. Setting inverse on one side makes the duplication go away, but when I try to delete a qualification I get a 'deleted object would be re-saved by cascade'. How do I do this? Should I be responsible for clearing the associations of each object I delete?
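
    A sketch of one way out (the RemoveQualification and QualificationDeleter names are invented, not from the question): the 'deleted object would be re-saved by cascade' error usually means a collection that cascades is still holding the deleted entity, so detaching the association from both sides before the delete tends to resolve it.

      using System.Collections.Generic;
      using System.Linq;

      public class Qualification
      {
          public Qualification() { ProfessionalListings = new List<ProfessionalListing>(); }
          public virtual IList<ProfessionalListing> ProfessionalListings { get; set; }
      }

      public class ProfessionalListing
      {
          public ProfessionalListing() { Qualifications = new List<Qualification>(); }
          public virtual IList<Qualification> Qualifications { get; set; }

          public virtual void AddQualification(Qualification qualification)
          {
              Qualifications.Add(qualification);
              qualification.ProfessionalListings.Add(this);
          }

          // Mirror of AddQualification: detach the pair from both collections.
          public virtual void RemoveQualification(Qualification qualification)
          {
              Qualifications.Remove(qualification);
              qualification.ProfessionalListings.Remove(this);
          }
      }

      public static class QualificationDeleter
      {
          // Detach the qualification from every listing before deleting it, so no
          // still-tracked collection cascades the deleted object back into the session.
          public static void Delete(NHibernate.ISession session, Qualification qualification)
          {
              foreach (var listing in qualification.ProfessionalListings.ToList())
                  listing.RemoveQualification(qualification);
              session.Delete(qualification);
              session.Flush();
          }
      }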

    Read the article

  • Programmatical Creation of NSMappingModel

    - by enchilada
    I want to programmatically (without Lightweight Migration) create a mapping model between two models that are exactly the same, except one of the entities (there are a bunch of entities) has different attributes. Let's call this entity "Person". And let's say the destination model has 1) added a new attribute called "address" 2) deleted an attribute called "eyeColor" 3) kept (i.e. not done anything with) an attribute called "name" How would you create an NSMappingModel between these models programmatically? I happen to have some explicit questions that might help me do this by myself: Q1) Do I have to create NSEntityMapping objects for all of the entities other than "Person", even if they remain unchanged? Q2) How do I deal with the "address" attribute in "Person", which is a new one being created? Should I create an NSPropertyMapping for that somehow, that turns nothing into something ("address")? Q3) How do I deal with the "name" attribute in "Person"? Do I have to create an NSPropertyMapping for that, even though it simply stays the same? Q4) For the NSEntityMapping corresponding to "Person", is not creating any NSPropertyMapping for "eyeColor" a proper way to get it deleted? Or should I create an NSPropertyMapping for "eyeColor"? If yes, how would this object be created, i.e. what would determine that its purpose is to get rid of "eyeColor"? Thank you in advance, and I apologize for not being able to answer these questions myself, as the documentation really has no good example of how to create NSMappingModels programmatically. Note again that I'm not allowed to use Lightweight Migration. I must do this manually.

    Read the article

  • Unable to create website error (NEW)

    - by salvationishere
    I copied my ClickOnce deployment to my C:/Inetpub/ folder on my webserver and I deleted my Virtual directory. I deleted the WpfApplication1 folder beneath wwwroot in Win Explorer. Then I turned on Web Sharing for this folder. Then I viewed my IIS Manager and this new Share name appeared under wwwroot. So now under Inetpub folder on my web server I have the following directory path: C:\Inetpub\WpfApplication1\ with contents: Application Files publish.htm setup.exe WpfApplication1.application Next, I remapped both the publishing and installation URL's for the project to http://myserver/WpfApplication1/ And I clicked Publish Now. But after I performed a Publish Now operation, I got the following error on my development server (D610-M): Error 1 Failed to connect to 'http://myserver/WpfApplication1/' with the following error: Unable to create the Web site 'http://myserver/WpfApplication1/'. The Web server does not appear to have any authentication methods enabled. It asked for user authentication, but did not send a WWW-Authenticate header. 1 1 WpfApplication1 On my webserver, when I click Browse from the IIS Manager on the WpfApplication1 directory, it shows me the Install page. But after I click the Browse button, it returns an error which says: The remote name could not be resolved: 'd610-m' (D610-M is the name of my development server). How do I fix this?

    Read the article

  • When does invoking a member function on a null instance result in undefined behavior?

    - by GMan
    This question arose in the comments of a now-deleted answer to this other question. Our question was asked in the comments by STingRaySC as: Where exactly do we invoke UB? Is it calling a member function through an invalid pointer? Or is it calling a member function that accesses member data through an invalid pointer? With the answer deleted I figured we might as well make it its own question. Consider the following code: #include <iostream> struct foo { void bar(void) { std::cout << "gman was here" << std::endl; } void baz(void) { x = 5; } int x; }; int main(void) { foo* f = 0; f->bar(); // (a) f->baz(); // (b) } We expect (b) to crash, because there is no corresponding member x for the null pointer. In practice, (a) doesn't crash because the this pointer is never used. Because (b) dereferences the this pointer (this->x = 5;), and this is null, the program enters undefined behavior. Does (a) result in undefined behavior? What about if both functions are static?

    Read the article

  • Get ID in GridView

    - by Romil
    Hi, I have one grid, say Grid 1, in which there are some columns. There is one view image button, one delete image button and one column which says whether the color column is Red or Blue. If the color column is Red the delete button is hidden, else it's shown (based on user-given rights to delete a column or not). Now a user clicks the view button for a Red color column. If this condition is satisfied, then I want the delete icon to not be present in Grid 2. Grid 2 has 2 columns. One is a delete image button and one is the file name (which is uploaded via an upload control). So if in Grid 1 the "View" image button is clicked for a "Red" column, I should be able to hide the delete button in Grid 2. I have tried writing code in the ItemCommand event but I am not able to access the control of Grid 2. Is this the correct way? Or else suggest some correct way. Please make sure that the code is compatible with VS 2003. Let me know if more inputs are needed. Thanks
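
    Since VS 2003 targets .NET 1.1, the grid is presumably a DataGrid rather than a GridView; below is a sketch of the usual per-row approach, hiding the button from ItemDataBound when Grid 2 binds. The control ID and the hideDelete flag are invented for illustration.

      using System;
      using System.Web.UI;
      using System.Web.UI.WebControls;

      public class FilesPage : Page
      {
          protected DataGrid Grid2;          // declared in the .aspx; hypothetical ID
          private bool hideDelete;           // set earlier, when the "Red" view button was clicked in Grid 1

          override protected void OnInit(EventArgs e)
          {
              Grid2.ItemDataBound += new DataGridItemEventHandler(Grid2_ItemDataBound);
              base.OnInit(e);
          }

          private void Grid2_ItemDataBound(object sender, DataGridItemEventArgs e)
          {
              if (e.Item.ItemType != ListItemType.Item &&
                  e.Item.ItemType != ListItemType.AlternatingItem)
                  return;

              // "btnDelete" is a made-up ID for the delete ImageButton in Grid 2's template column.
              ImageButton deleteButton = (ImageButton)e.Item.FindControl("btnDelete");
              if (deleteButton != null)
                  deleteButton.Visible = !hideDelete;
          }
      }

    The hideDelete flag itself would be set (for example in ViewState) by Grid 1's ItemCommand handler before Grid 2 is data-bound, so the decision made in the first grid is still available when the second grid renders.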

    Read the article
