Search Results

Search found 6441 results on 258 pages for 'schema compare'.

Page 221/258 | < Previous Page | 217 218 219 220 221 222 223 224 225 226 227 228  | Next Page >

  • Symfony 1.4 - Don't save a blank password on an executeUpdate action.

    - by Twelve47
    I have a form to edit a UserProfile which is stored in a MySQL db, and which includes the following custom configuration:

        public function configure()
        {
            $this->widgetSchema['password'] = new sfWidgetFormInputPassword();
            // You don't need to specify a new password if you are editing a user.
            $this->validatorSchema['password']->setOption('required', false);
        }

    When the user tries to save, the executeUpdate method is called to commit the changes. If the password is left blank, the password field is set to '', but I want it to retain the old password instead of overwriting it. What is the best (/most in the Symfony ethos) way of doing this? My solution was to override the setter method on the model (which I had done anyway for password encryption), and ignore blank values:

        public function setPassword($password)
        {
            if ($password == '') return false; // if password is blank, don't save it
            return $this->_set('password', UserProfile::encryptPassword($password));
        }

    It seems to work fine like this, but is there a better way? If you're wondering, I cannot use sfDoctrineGuard for this project as I am dealing with a legacy database and cannot change the schema.

    Read the article

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and/or requiring dynamic SQL, and/or procedural logic. PL/SQL is custom made for handling these situations, but as a language it does not compare to C#, and even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible. Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So, we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So, I don't think it's a bad way to code for smaller DB-focused projects. My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed. I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.

    Read the article

  • Flexible forms and supporting database structure

    - by sunwukung
    I have been tasked with creating an application that allows administrators to alter the content of the user input form (i.e. add arbitrary fields), the contents of which get stored in a database. Think Modx/WordPress/ExpressionEngine template variables. The approach I've been looking at is implementing concrete tables where the specification is consistent (i.e. user profiles, user content etc.) and some generic field data tables (i.e. text, boolean) to store non-specific values. Forms (and model fields) would be generated by first querying the tables and retrieving the relevant columns - although I've yet to think about how I would set up validation. I've taken a look at this problem, and it seems to be indicating an EAV-type approach - which, from my brief research, looks like it could be a greater burden than the blessings its flexibility would bring. I've read a couple of posts here, however, which suggest this is a dangerous route: "How to design a generic database whose layout may change over time?" and "Dynamic Database Schema". I'd appreciate some advice on this matter if anyone has some to give. Regards, SWK
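
    A minimal sketch of the hybrid layout described above - concrete tables for the fixed parts, plus one value table per data type - assuming MySQL and hypothetical table and column names:

        CREATE TABLE custom_field (
            id         INT PRIMARY KEY AUTO_INCREMENT,
            name       VARCHAR(64) NOT NULL,
            data_type  ENUM('text', 'boolean') NOT NULL
        );

        CREATE TABLE field_value_text (
            user_id   INT NOT NULL,
            field_id  INT NOT NULL,
            value     TEXT,
            PRIMARY KEY (user_id, field_id)
        );

        CREATE TABLE field_value_boolean (
            user_id   INT NOT NULL,
            field_id  INT NOT NULL,
            value     BOOLEAN,
            PRIMARY KEY (user_id, field_id)
        );

    Typed value tables keep each column's type honest and indexable, which avoids the worst of the classic single-table EAV problems, at the cost of one join per data type when assembling a form.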

    Read the article

  • Currently using a View; should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my view, mapping_uGroups_uProducts, should be replaced by a hard table. The view is defined as follows:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
        join `db`.`productsInfo` `Y` on ((`X`.`pID` = `Y`.`pID`)))
        join `db`.`mapping_uGroups_Groups` `Z` on ((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo \
        JOIN fs_uProducts USING (upID) \
        JOIN mapping_uGroups_uProducts USING (upID) -- could be faster if we used a hard table with an index \
        JOIN mapping_fs_key USING (fsKeyID) \
        WHERE fsName="OVERALL" \
        AND ugID=1 \
        ORDER BY score DESC \
        LIMIT 0,30;

    which is pretty slow (for 30 results, it requires about 10 seconds). I think the reason my query is so slow is that it relies on a VIEW, which has no index to speed things up:

        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |

        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously, this would lose the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?
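
    One way to test the hard-table idea without giving up the view is to materialize it: keep the view as the source of truth and periodically rebuild an indexed table from it. A sketch, assuming the table and column names from the question:

        CREATE TABLE mapping_uGroups_uProducts_mat (
            upID INT NOT NULL,
            ugID INT NOT NULL,
            PRIMARY KEY (upID, ugID),
            KEY idx_ugID_upID (ugID, upID)
        );

        -- Refresh whenever the underlying mappings change (e.g. from cron):
        TRUNCATE TABLE mapping_uGroups_uProducts_mat;
        INSERT INTO mapping_uGroups_uProducts_mat (upID, ugID)
        SELECT DISTINCT X.upID, Z.ugID
        FROM mapping_uProducts_Products X
        JOIN productsInfo Y ON X.pID = Y.pID
        JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID;

    With the (ugID, upID) index, the join on upID filtered by ugID = 1 can be resolved from the index instead of scanning the ~19K-row derived table, which is where the EXPLAIN output shows the time going.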

    Read the article

  • MySQL query against pseudo-key-value pair data in WordPress custom query

    - by andrevr
    I'm writing a custom WordPress query to use some of the data which the Woothemes Diarise theme creates. Diarise is an event planner theme with calendar blah, blah... and uses custom fields to store the event start and end dates in the *wp_postmeta* table, which implements a key-value store. So for each post in the "event" category, there are 2 records in *wp_postmeta* that I'm interested in, named *event_start_date* and *event_end_date*. The task is to compare a tourist's arrival and departure dates with the start and end dates of events, yielding a what's-on list of events available. We thought we'd killed it with a grand flash of logic that goes like this: disregard any event that ends before the tourist arrives, and any that begins after the departure date. I wrote this query:

        SELECT wposts.*
        FROM wp_posts wposts
        LEFT JOIN wp_postmeta wpostmeta ON wposts.ID = wpostmeta.post_id
        LEFT JOIN wp_term_relationships ON (wposts.ID = wp_term_relationships.object_id)
        LEFT JOIN wp_term_taxonomy ON (wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id)
        WHERE wp_term_taxonomy.taxonomy = 'category'
        AND wp_term_taxonomy.term_id IN(3,4)
        AND ( wpostmeta.meta_key = 'event_start_date'
              AND NOT ( concat(subst(wpostmeta.meta_value,7,4),'-',subst(wpostmeta.meta_value,4,2),'-',subst(wpostmeta.meta_value,1,2) > '2010-07-31' ) )
        AND ( wpostmeta.meta_key = 'event_end_date'
              AND NOT ( concat(subst(wpostmeta.meta_value,7,4),'-',subst(wpostmeta.meta_value,4,2),'-',subst(wpostmeta.meta_value,1,2) < '2010-05-01' ) ) )
        ORDER BY wpostmeta.meta_value ASC

    And, of course, it returns no records. The problem, I believe, is in the dual reference to wpostmeta.meta_key, but how do I get around that?
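
    One common fix is to join wp_postmeta twice, once per meta key, so each condition sees its own row; the date strings can then be converted with STR_TO_DATE rather than assembled from substrings. A sketch, assuming from the substring positions in the question that the dates are stored as dd/mm/yyyy (adjust the format string if not):

        SELECT wposts.*
        FROM wp_posts wposts
        JOIN wp_postmeta m_start ON wposts.ID = m_start.post_id
            AND m_start.meta_key = 'event_start_date'
        JOIN wp_postmeta m_end ON wposts.ID = m_end.post_id
            AND m_end.meta_key = 'event_end_date'
        JOIN wp_term_relationships tr ON wposts.ID = tr.object_id
        JOIN wp_term_taxonomy tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
        WHERE tt.taxonomy = 'category'
          AND tt.term_id IN (3, 4)
          -- event has not ended before the arrival date:
          AND STR_TO_DATE(m_end.meta_value, '%d/%m/%Y') >= '2010-05-01'
          -- event does not start after the departure date:
          AND STR_TO_DATE(m_start.meta_value, '%d/%m/%Y') <= '2010-07-31'
        ORDER BY STR_TO_DATE(m_start.meta_value, '%d/%m/%Y') ASC;

    A single wpostmeta alias can only hold one meta_key per row, which is why the two ANDed meta_key conditions in the original can never both be true and no rows come back.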

    Read the article

  • Mongoose updating a field in MongoDB not working

    - by Masiar
    I have this code:

        var UserSchema = new Schema({
            Username: {type: String, index: true},
            Password: String,
            Email: String,
            Points: {type: Number, default: 0}
        });

        [...]

        var User = db.model('User');

        /*
         * Function to save the points in the user's account
         */
        function savePoints(name, points){
            if(name != "unregistered user"){
                User.find({Username: name}, function(err, users){
                    var oldPoints = users[0].Points;
                    var newPoints = oldPoints + points;
                    User.update({name: name}, { $inc: {Points: newPoints}}, function(err){
                        if(err){
                            console.log("some error happened when update");
                        } else {
                            console.log("update successfull! with name = " + name);
                            User.find({Username: name}, function(err, users) {
                                console.log("updated : " + users[0].Points);
                            });
                        }
                    });
                });
            }
        }

        savePoints("Masiar", 666);

    I would like to update my user (found by name) by updating his/her points. I'm sure oldPoints and points contain a value, but still my user stays at zero points. The console prints "update successfull!". What am I doing wrong? Sorry for the stupid / noob question. Masiar

    Read the article

  • Cannot pass null to server using jQuery AJAX. Value received at the server is the string "null".

    - by Tom
    I am converting a javascript/php/ajax application to use jQuery to ensure compatibility with browsers other than Firefox. I am having trouble passing true, false, and null values using jQuery's ajax function. Javascript code:

        $.ajax({
            url: <server_url>,
            dataType: 'json',
            type: 'POST',
            success: receiveAjaxMessage,
            data: { valueTrue: true, valueFalse: false, valueNull: null }
        });

    PHP code:

        var_dump($_POST);

    Server output:

        array(3) { ["valueTrue"]=> string(4) "true" ["valueFalse"]=> string(5) "false" ["valueNull"]=> string(4) "null" }

    The problem is that the null, true, and false values are being converted to strings. The Javascript AJAX code currently in use passes null, true, and false correctly but only works in Firefox. Does anyone know how to solve this problem using jQuery? Here is some working code (not using jQuery) to compare with the non-working code given above. Javascript code:

        ajaxPort.send(<server_url>, { valueTrue: true, valueFalse: false, valueNull: null });

    PHP code:

        var_dump(json_decode(file_get_contents('php://input'), true));

    Server output:

        array(3) { ["valueTrue"]=> bool(true) ["valueFalse"]=> bool(false) ["valueNull"]=> NULL }

    Note that the null, true, and false values are correctly received. Note also that in the second method the $_POST array is not used in the PHP code. I think this is the key to the problem, but I cannot find a way to replicate this behavior using jQuery.

    Read the article

  • Why won't EF4 generate a method to support my Function Import?

    - by Deane
    I have a stored proc in my database which returns an integer. I added a Function Import to my model. This appears in the EDMX file:

        <Function Name="GetTotalEntityCount" Aggregate="false" BuiltIn="false" NiladicFunction="false" IsComposable="false" ParameterTypeSemantics="AllowImplicitConversion" Schema="dbo" />

    However, no method actually gets generated for this. It should be top level, right?

        using (MyContext context = new MyContext())
        {
            context.MyMethodShouldBeRightHere();
        }

    Nothing appears in Intellisense, I've gone through the designer.cs file and there's nothing in there, and I've reflected the DLL... nothing. The code generator is just not generating any code to support this stored proc. I added another table to my database and updated the model, and that came in, so the model will update; it's just specifically ignoring this stored proc. I've tried everything I can think of, and consulted every resource I can find, and as near as I can tell I'm doing everything right. I'm using EF4, database-first. (I'm pretty sure of the version, anyway; this shows up in the generated file: Runtime Version: 4.0.30319.1)

    Read the article

  • How to Map Two Tables To One Class in Fluent NHibernate

    - by Richard Nagle
    I am having a problem with Fluent NHibernate mapping two tables to one class. I have the following database schema:

        TABLE dbo.LocationName
        (
            LocationId INT PRIMARY KEY,
            LanguageId INT PRIMARY KEY,
            Name VARCHAR(200)
        )

        TABLE dbo.Language
        (
            LanguageId INT PRIMARY KEY,
            Locale CHAR(5)
        )

    And want to build the following class definition:

        public class LocationName
        {
            public virtual int LocationId { get; private set; }
            public virtual int LanguageId { get; private set; }
            public virtual string Name { get; set; }
            public virtual string Locale { get; set; }
        }

    Here is my mapping class:

        public LocalisedNameMap()
        {
            WithTable("LocationName");
            UseCompositeId()
                .WithKeyProperty(x => x.LanguageId)
                .WithKeyProperty(x => x.LocationId);
            Map(x => x.Name);
            WithTable("Language", lang =>
            {
                lang.WithKeyColumn("LanguageId");
                lang.Map(x => x.Locale);
            });
        }

    The problem is with the mapping of the Locale field being from another table, and in particular that the keys between those tables don't match. Whenever I run the application with this mapping I get the following error on startup:

        Foreign key (FK7FC009CCEEA10EEE:Language [LanguageId])) must have same number of columns as the referenced primary key (LocationName [LanguageId, LocationId])

    How do I tell NHibernate to map from LocationName to Language using only the LanguageId field?

    Read the article

  • NHibernate: null index column for collection error

    - by Quintin Par
    I am working on a Subsonic to NH migration (I can't change the schema) and while creating the mapping I came across this error:

        null index column for collection: Company.Core.CompanyUser.Addresses

    My mapping from the User side is:

        mapping.HasMany(x => x.Addresses).AsList().KeyColumn("user_id").Cascade.All().Inverse();

    XML:

        <list cascade="all" inverse="true" name="Addresses">
          <key>
            <column name="user_id" />
          </key>
          <index />
          <one-to-many class="Company.Core.CompanyAddress, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
        </list>

    On the Address side it is:

        mapping.CompositeId().KeyReference(x => x.User, "user_id").KeyProperty(x => x.Type);

    XML:

        <composite-id mapped="false" unsaved-value="undefined">
          <key-property name="Type" type="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <column name="Type" />
          </key-property>
          <key-many-to-one name="User" class="Company.Core.CompanyUser, Company.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
            <column name="user_id" />
          </key-many-to-one>
        </composite-id>

    When I try to load this collection as user.Addresses I get the null index exception. How do I fix this error?

    Read the article

  • Aggregating a list of dates into start and end dates

    - by Joe Mako
    I have a list of dates and IDs, and I would like to roll them up into periods of consecutive dates, within each ID. For a table called "data" with the columns "testid" and "pulldate":

        | A79 | 2010-06-02 |
        | A79 | 2010-06-03 |
        | A79 | 2010-06-04 |
        | B72 | 2010-04-22 |
        | B72 | 2010-06-03 |
        | B72 | 2010-06-04 |
        | C94 | 2010-04-09 |
        | C94 | 2010-04-10 |
        | C94 | 2010-04-11 |
        | C94 | 2010-04-12 |
        | C94 | 2010-04-13 |
        | C94 | 2010-04-14 |
        | C94 | 2010-06-02 |
        | C94 | 2010-06-03 |
        | C94 | 2010-06-04 |

    I want to generate a table with the columns "testid", "group", "start_date", "end_date":

        | A79 | 1 | 2010-06-02 | 2010-06-04 |
        | B72 | 2 | 2010-04-22 | 2010-04-22 |
        | B72 | 3 | 2010-06-03 | 2010-06-04 |
        | C94 | 4 | 2010-04-09 | 2010-04-14 |
        | C94 | 5 | 2010-06-02 | 2010-06-04 |

    This is the code I came up with:

        SELECT t2.testid, t2.group, MIN(t2.pulldate) AS start_date, MAX(t2.pulldate) AS end_date
        FROM (SELECT t1.pulldate, t1.testid,
                     SUM(t1.check) OVER (ORDER BY t1.testid, t1.pulldate) AS group
              FROM (SELECT data.pulldate, data.testid,
                           CASE WHEN data.testid = LAG(data.testid,1) OVER (ORDER BY data.testid, data.pulldate)
                                 AND data.pulldate = date (LAG(data.pulldate,1) OVER (PARTITION BY data.testid ORDER BY data.pulldate)) + integer '1'
                                THEN 0 ELSE 1 END AS check
                    FROM data
                    ORDER BY data.testid, data.pulldate) AS t1) AS t2
        GROUP BY t2.testid, t2.group
        ORDER BY t2.group;

    I use the LAG window function to compare each row to the previous, putting a 1 if I need to increment to start a new group; I then do a running sum of that column, and then aggregate to the combinations of "group" and "testid". Is there a better way to accomplish my goal, or does this operation have a name? I am using PostgreSQL 8.4.
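
    This operation does have a name - it is commonly called a "gaps and islands" problem. A shorter formulation, sketched against the same table: subtract a per-testid row number from each date, so that runs of consecutive dates collapse to the same constant, then group on that constant:

        SELECT testid,
               ROW_NUMBER() OVER (ORDER BY testid, MIN(pulldate)) AS group_no,
               MIN(pulldate) AS start_date,
               MAX(pulldate) AS end_date
        FROM (SELECT testid,
                     pulldate,
                     -- consecutive dates minus an increasing counter yield a constant:
                     pulldate - (ROW_NUMBER() OVER (PARTITION BY testid ORDER BY pulldate))::int AS grp
              FROM data) AS t
        GROUP BY testid, grp
        ORDER BY testid, start_date;

    This avoids the nested LAG/SUM pair and works on PostgreSQL 8.4, since it only needs ROW_NUMBER().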

    Read the article

  • Does the managed main UI thread stay on the same (unmanaged) Operating System thread?

    - by Daniel Rose
    I am creating a managed WPF UI front-end to a legacy Win32 application. The WPF front-end is the executable; as part of its startup routines I start the legacy app as a DLL in a second thread. Any UI operation (including CreateWindowEx, etc.) by the legacy app is invoked back on the main UI thread. As part of the shutdown process of the app I want to clean up properly. Among other things, I want to call DestroyWindow on all unmanaged windows so they can properly clean themselves up. Thus, during shutdown I use EnumWindows to try to find all my unmanaged windows, then call DestroyWindow on the list I generate. These run on the main UI thread. After this background knowledge, on to my actual question: in the enumeration procedure of EnumWindows, I have to check if one of the returned top-level windows is one of my unmanaged windows. I do this by calling GetWindowThreadProcessId to get the process id and thread id of the window's creator. I can compare the process id with Process.GetCurrentProcess().Id to check if my app created it. For additional security, I also want to see if my main UI thread created the window. However, the returned thread id is the OS's thread id (which is different from the managed thread id). As explained in this question, the CLR reserves the right to re-schedule the managed thread onto different OS threads. Can I rely on the CLR to be "smart enough" to never do this for the main UI thread (due to thread affinity of the UI)? Then I could call GetCurrentThreadId to get the main UI thread's unmanaged thread id for comparison.

    Read the article

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS),
                LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK and there is also an index on (Status, LeaseTimeout). What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously updating the lease time to five minutes from now and setting a new lease ticket. So the problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock, btw, though not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range?

    Deadlock graph:
    Table schema (simplified):
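
    One common way to break this deadlock is to claim the row in a single atomic statement instead of a SELECT followed by an UPDATE, and to add READPAST so competing workers skip rows that are already locked rather than queueing behind them (READPAST requires running at read committed rather than serializable). A sketch against the same table; @Status is assumed to be a parameter as in the original:

        set transaction isolation level read committed;
        begin tran;

        declare @CurrentTS datetime2 = CURRENT_TIMESTAMP;

        with next_job as
        (
            select top (1) *
            from QueuedImportJobs with (rowlock, updlock, readpast)
            where Status = @Status
              and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
            order by Id asc
        )
        update next_job
        set LeaseTimeout = DATEADD(mi, 5, @CurrentTS),
            LeaseTicket = NEWID()
        output inserted.*;

        commit tran;

    Because the select-and-claim happens in one statement, two workers can no longer both read the same row before either of them updates it, which is the window the original two-statement version leaves open even under serializable.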

    Read the article

  • Multithreaded linked list traversal

    - by Rob Bryce
    Given a (doubly) linked list of objects (C++), I have an operation that I would like to multithread, to perform on each object. The cost of the operation is not uniform for each object. The linked list is the preferred storage for this set of objects for a variety of reasons. The 1st element in each object is the pointer to the next object; the 2nd element is the previous object in the list. I have solved the problem by building an array of nodes and applying OpenMP. This gave decent performance. I then switched to my own threading routines (based on Windows primitives) and by using InterlockedIncrement() (acting on the index into the array), I can achieve higher overall CPU utilization and faster throughput. Essentially, the threads work by "leap-frogging" along the elements. My next approach to optimization is to try to eliminate creating/reusing the array of elements in my linked list. However, I'd like to continue with this "leap-frog" approach and somehow use some nonexistent routine that could be called "InterlockedCompareDereference" - to atomically compare against NULL (end of list) and conditionally dereference & store, returning the dereferenced value. I don't think InterlockedCompareExchangePointer() will work, since I cannot atomically dereference the pointer and call this Interlocked() method. I've done some reading and others are suggesting critical sections or spin-locks. Critical sections seem heavyweight here. I'm tempted to try spin-locks, but I thought I'd first pose the question here and ask what other people are doing. I'm not convinced that the InterlockedCompareExchangePointer() method itself could be used like a spin-lock. Then one also has to consider acquire/release/fence semantics... Ideas? Thanks!

    Read the article

  • Knowing which annotation is selected and accessing its properties

    - by kevin Mendoza
    So far my program can display a database of custom annotation views. Eventually I want my program to be able to display extra information after a button on the annotation bubble is clicked. Each element in the database has a unique entry number, so I thought it would be a good idea to add this entry number as a property of the custom annotation. The problem I am having is that after the button is clicked and the program switches to a new view, I am unable to retrieve the entry number of the annotation I selected. Below is the code that assigns the entry number property to the annotation:

        for (id mine in mines) {
            workingCoordinate.latitude = [[mine latitudeInitial] doubleValue];
            workingCoordinate.longitude = [[mine longitudeInitial] doubleValue];
            iProspectAnnotation *tempMine = [[iProspectAnnotation alloc] initWithCoordinate:workingCoordinate];
            [tempMine setTitle:[mine mineName]];
            [tempMine setAnnotationEntryNumber:[mine entryNumber]];
        }
        [mines dealloc];

    When the button on an annotation is selected, this is the code that initializes the new view:

        - (void)mapView:(MKMapView *)mapView annotationView:(MKAnnotationView *)view calloutAccessoryControlTapped:(UIControl *)control
        {
            mineInformationController *controller = [[mineInformationController alloc] initWithNibName:@"mineInformationController" bundle:nil];
            controller.modalTransitionStyle = UIModalTransitionStyleCrossDissolve;
            [self presentModalViewController:controller animated:YES];
            [controller release];
        }

    And lastly is my attempt at retrieving the entryNumber property from the new view, so that I can compare it to the mines database and retrieve more information on the array element:

        iProspectFresno_LiteAppDelegate *appDelegate = (iProspectFresno_LiteAppDelegate *)[[UIApplication sharedApplication] delegate];
        NSMutableArray* mines = [[NSMutableArray alloc] initWithArray:(NSMutableArray *)appDelegate.mines];
        for(id mine in mines) {
            if ([[mine entryNumber] isEqualToNumber: /* the entry number of the selected annotation */]) {
                /* display the information in the mine object */
            }
        }

    So how do I access this entry number property in this new view controller?

    Read the article

  • How to fix this simple SQL query?

    - by morpheous
    I have a database with three tables: user_table, country_table, and city_table. I want to write ANSI SQL which will allow me to fetch all the user data (i.e. user details including the name of the country of the last school and the name of the city they live in now). The problem I am having is that I have to use a self join, and I am getting slightly confused. The schema is shown below:

        CREATE TABLE user_table (id int, first_name varchar(16), last_school_country_id int, city_id int);
        CREATE TABLE country_table (id int, name varchar(32));
        CREATE TABLE city_table (id int, country_id int, name varchar(32));

    This is the query I have come up with so far, but the results are wrong, and sometimes the db engine (MySQL) asks me if I want to show all [HUGE NUMBER HERE] results - which makes me suspect that I am unintentionally creating a cartesian product somewhere. Can someone explain what is wrong with this SQL statement, and what I need to do to fix it?

        SELECT usr.id AS id,
               usr.first_name,
               ctry1.name as loc_country_name,
               ctry2.name as school_country_name,
               city.name as loc_city_name
        FROM user_table usr, country_table ctry1, country_table ctry2, city_table city
        WHERE usr.last_school_country_id = ctry2.id
        AND usr.city_id = city.id
        AND city.country_id = ctry1.id
        AND ctry1.id = ctry2.id;
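
    The last condition, ctry1.id = ctry2.id, forces the school country and the current country to be the same row, which is why the results are wrong. A sketch of the corrected query, using explicit JOIN syntax so each table is tied to exactly one condition:

        SELECT usr.id AS id,
               usr.first_name,
               ctry1.name AS loc_country_name,
               ctry2.name AS school_country_name,
               city.name  AS loc_city_name
        FROM user_table usr
        JOIN city_table city     ON usr.city_id = city.id
        JOIN country_table ctry1 ON city.country_id = ctry1.id
        JOIN country_table ctry2 ON usr.last_school_country_id = ctry2.id;

    With every table joined on its own key there is no unconstrained pairing left, so the row count can never exceed the number of users.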

    Read the article

  • Multiple "ObjectChangeTracker" instances getting created - can it be avoided?

    - by user555937
    Hi, we are working on a POC where we have the following architecture (MVVM): WPF (client) + WCF + Model (data access) + ADO.NET Entity Framework 4.0 (with SQL Server 2008 R2 as DB). All are different projects. In the data access layer we have created different entity models (edmx) based on the functionality; the tables under a particular flow are grouped into separate entity models. We are using self-tracking entities to communicate to and fro with the WPF client through the WCF service. For a single model everything works fine, but when we created multiple models a few issues came up. The multiple models share a few duplicate tables/entities. The two problems are:

    1) When we try to access entities from different models, multiple "ObjectChangeTracker" objects get created. E.g.:

        CompanyModel (edmx) - Company (entity) - ObjectChangeTracker, ObjectState
        ProductModel (edmx) - Customer (entity) - ObjectChangeTracker1, ObjectState1
        OrderModel (edmx) - Order (entity) - ObjectChangeTracker2, ObjectState2

    Is there any way to avoid this?

    2) There are a few tables which are shared across the models; e.g. Company (entity) is used in all the models above. At compile time this does not throw any error, but at run time it gives an error saying "Schema specified is not valid. Errors: The mapping of CLR type to EDM type is ambiguous because multiple CLR types match the EDM type "Company"". To resolve this, we renamed the entities with a prefix to make them unique. Is there any other way we can resolve this without changing the name of the entity in the same assembly? Thanks in advance, and it would be appreciated if anyone has an approach for these issues. Thanks, Kiran

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the systems & admin team and have been given the task of creating a quota management application to try to encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's home space to retrieve the overall amount of space they are using. From what I've seen elsewhere, there's no other way to do this in C#; the issue with it is that there's quite a high overhead while it retrieves the size of each file and then creates a total.

        try
        {
            long dirSize = 0;
            FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories);
            foreach (FileInfo F1 in FI)
            {
                dirSize += F1.Length;
            }
            return dirSize;
        }

    So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the location and size of each file, so that when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices, on average. Using single precision floats, it works out to 400 GB overall. I need to 1) generate the cell array or pieces of it without trying to place the whole thing in memory and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end; %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var); %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2,max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index could be used. Does anyone know/see a better way?

    Read the article

  • Where can I find my iPhone app's Core Data persistent store?

    - by Dr Dork
    I'm diving into iPhone development, so I apologize in advance if this is a ridiculous question, but in a new iPad app project using the Core Data framework, here's the generated code for creating the persistentStoreCoordinator:

        - (NSPersistentStoreCoordinator *)persistentStoreCoordinator {
            if (persistentStoreCoordinator != nil) {
                return persistentStoreCoordinator;
            }

            NSURL *storeUrl = [NSURL fileURLWithPath: [[self applicationDocumentsDirectory] stringByAppendingPathComponent: @"ApplicationName.sqlite"]];

            NSError *error = nil;
            persistentStoreCoordinator = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeUrl options:nil error:&error]) {
                /*
                 Replace this implementation with code to handle the error appropriately.

                 abort() causes the application to generate a crash log and terminate. You should not use this function in a shipping application, although it may be useful during development. If it is not possible to recover from the error, display an alert panel that instructs the user to quit the application by pressing the Home button.

                 Typical reasons for an error here include:
                 * The persistent store is not accessible
                 * The schema for the persistent store is incompatible with current managed object model
                 Check the error message to determine what the actual problem was.
                 */
                NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
                abort();
            }

            return persistentStoreCoordinator;
        }

    My questions are... The first time I run the app, is the ApplicationName.sqlite database created automatically if it doesn't exist? If not, when is it created? When data is added to it programmatically? Once the DB does exist, where can I locate the file? I'd like to open it with a different program so I can manually manipulate the data. Thanks so much in advance for your help! I'm going to continue researching these questions right now.

    Read the article

  • Copy only files that are newer

    - by ErocM
    I am currently using this code:

        if (!Directory.Exists(command2)) Directory.CreateDirectory(command2);
        if (Directory.Exists(vmdaydir)) Directory.Delete(vmdaydir, true);
        if (!Directory.Exists(vmdaydir)) Directory.CreateDirectory(vmdaydir);

        var dir = Path.GetDirectoryName(args[0]);
        sb.AppendLine("Backing Up VM: " + DateTime.Now.ToString(CultureInfo.InvariantCulture));
        Microsoft.VisualBasic.FileIO.FileSystem.CopyDirectory(dir, vmdaydir);
        sb.AppendLine("VM Backed Up: " + DateTime.Now.ToString(CultureInfo.InvariantCulture));

    As you can see, I am deleting the directory, then I am copying the folder back. This is taking way too long since the directory is ~80gb in size. I realized that I do not need to copy all the files, only the ones that have changed. How would I copy the files from one folder to another, but only copying the files that are newer? Anyone have any suggestions?

    ==== edit ====

    I assume I can just do a file compare of each file and then copy it to the new directory, iterating through each folder/file? Is there a simpler way to do this?

    Read the article

  • Zend XML parsing

    - by Vincent
    I have an xml file called error.xml like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <errorList>
          <error>
            <code>0</code>
            <desc>Fault</desc>
            <ufmessage>Fault</ufmessage>
          </error>
          <error>
            <code>1</code>
            <desc>Unknown</desc>
            <ufmessage>Unknown</ufmessage>
          </error>
          <error>
            <code>2</code>
            <desc>Internal Error</desc>
            <ufmessage>Internal Error</ufmessage>
          </error>
        </errorList>

    I am using Zend and have the above xml file stored in a Zend Registry variable called "apperrors" like this:

        $apperrors = new Zend_Config_Xml('error.xml');
        Zend_Registry::set('apperrors', $apperrors->errorList);

    If my code throws an error code of 2, I want a snippet of code to check the error code with the "code" key in the "apperrors" array and echo the ufmessage value corresponding to it. How can I do this? Ex:

        //Error thrown by the code. Hardcoded for now...
        $errorThrownByCode = 2;

        //Get the Zend Registry value for all errors
        $errorList = Zend_Registry::get('apperrors');

        //Write some code to compare $errorThrownByCode with $errorList->error->code
        //If found echo $errorList->error->ufmessage, else echo "An Unknown error occured";

    Thanks

    Read the article

  • Application Code Redesign to reduce no. of Database Hits from Performance Perspective

    - by Rachel
    Scenario: I want to parse a large CSV file and insert the data into the database; the CSV file has approximately 100K rows of data. Currently I am using fgetcsv to parse through the file row by row and insert data into the database, so right now I am hitting the database once for each line of data present in the CSV file. The database hit count is therefore 100K, which is not good from a performance point of view. Current code:

        public function initiateInserts()
        {
            // Open large CSV file (min 100K rows) for parsing.
            $this->fin = fopen($file, 'r') or die('Cannot open file');

            // Parse the large CSV file to get data and initiate insertion into the schema.
            while (($data = fgetcsv($this->fin, 5000, ";")) !== FALSE) {
                $query = "INSERT INTO dt_table (id, code, connectid, connectcode)
                          VALUES (:id, :code, :connectid, :connectcode)";
                $stmt = $this->prepare($query);

                // Then, for each line: bind the parameters.
                $stmt->bindValue(':id', $data[0], PDO::PARAM_INT);
                $stmt->bindValue(':code', $data[1], PDO::PARAM_INT);
                $stmt->bindValue(':connectid', $data[2], PDO::PARAM_INT);
                $stmt->bindValue(':connectcode', $data[3], PDO::PARAM_INT);

                // Execute the statement.
                $stmt->execute();
                $this->checkForErrors($stmt);
            }
        }

    I am looking for a way wherein, instead of hitting the database for every row of data, I can prepare the query, hit the database once, and populate it with all the inserts. Any suggestions!!!

    Note: This is the exact sample code that I am using, but the CSV file has more fields, not only id, code, connectid and connectcode; I just wanted to make sure that I am able to explain the logic, and so have used this sample code here. Thanks !!!
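
    Two common ways to cut the hit count, sketched in MySQL against the same table and columns (the values and file path below are placeholders): batch many rows into one multi-row INSERT, or skip the application loop entirely with LOAD DATA INFILE.

        -- One statement inserting many rows (build the VALUES list in batches of, say, 500):
        INSERT INTO dt_table (id, code, connectid, connectcode)
        VALUES (1, 100, 5, 200),
               (2, 101, 6, 201),
               (3, 102, 7, 202);

        -- Or load the file server-side in a single statement
        -- (the file must be readable by the MySQL server):
        LOAD DATA INFILE '/path/to/file.csv'
        INTO TABLE dt_table
        FIELDS TERMINATED BY ';'
        (id, code, connectid, connectcode);

    Also note that the prepare() call can be hoisted out of the loop even in the row-by-row version, since the statement text never changes.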

    Read the article

  • What changed in the DataGrid that means it won't work anymore?

    - by Jeff Yates
    I have a Silverlight app with a DataGrid containing some custom columns, and all was working well. Then I updated to the Silverlight 3 tools for VS 2008 SP1 and rebuilt it. Now it has the following problems:

    1. Rows aren't added when the collection is modified. The ItemsSource property is (and always has been) set to an ObservableCollection instance, which notifies when its contents change. This worked fine for Silverlight 2. However, in Silverlight 3, to get this working at all I now have to null and then re-set ItemsSource - this seems like I'm hiding a bigger issue, but I can't work out what that might be.

    2. I cannot select a row or a cell anymore. If I'm lucky, I can select one whole row before it stops working.

    3. I can't edit anything. I suspect this is related to the previous point.

    I'll post some source when I am able, but first I have to strip it down to the bare minimum. In the meantime, I was hoping someone might have some idea of what may be going on here. My gut feeling on the last two points is that my bindings are no longer working, but that's just a guess, and if it is the case, I have no idea which ones. Thanks for any help anyone might be able to provide.

    Update: So, I finally reduced my problem down to a simple works/doesn't-work comparison. The problem seems to occur if I override Equals in my element type. As soon as I do that, something strange happens in the ObservableCollection that contains that type, and my application breaks. To make it more interesting, there is a check to make sure that duplicate items don't even get close to being added to the collection. I don't exactly know why ObservableCollection needs to compare equality when inserting items (the stack trace indicates it is using IndexAt), but this seems to cause the issue. So, any thoughts?

    Read the article

  • Problem with comparing value with array values

    - by Java starter
    This code is what I use now, but it does not work when I try to use an array to compare values. If anybody has any idea why, please respond.

        <html>
        <head>
        <script type='text/javascript'>
        function hovedFunksjon() {
            //alert("test av funksjon fungerte");
            //alert(passordLager);
            window.open("index10.html","Window1","menubar=no,width=430,height=360,toolbar=no");
        }

        function inArray(array, value) {
            for (var i = 0; i < array.length; i++) {
                if (array[i] == value) return true;
            }
            return false;
        }

        function spørOmPassord() {
            var passordLager = ["pass0","pass1","pass2"];
            window.passordInput = prompt("password"); // By using "window." you create a global variable
            //if (passordInput == passordLager[0] || passordLager[1] || passordLager[2])
            if (inArray(passordLager,passorInput) ) {
                hovedFunksjon();
            } else {
                alert("Feil passord");
                //href="javascript:self.close()">close window
            }
        }

        function changeBackgroundColor() {
            //document.bgColor="#CC9900";
            //document.bgColor="YELLOW"
            document.bgColor="BLACK"
        }
        </script>
        </head>
        <body>
        <script type='text/javascript'>
        changeBackgroundColor();
        </script>
        <div align="center">
        <form>
        <input type = "button" value = "Logg inn" onclick="spørOmPassord()">
        </form>
        </div>
        </body>
        </html>

    Read the article
