Search Results

Search found 10189 results on 408 pages for 'db 11gr2'.


  • Business entity: private instance VS single instance

    - by taoufik
    Suppose my WinForms application has a business entity Order. The entity is used in multiple views, each of which handles a different domain or use case in the application: for example, one view manages orders while another digs into a single order and displays additional data.

    If I used NHibernate (or any other ORM) with one session/data context per view (or per DB action), I'd end up with two different instances for the same Order (say, orderId = 1). Although they are functionally the same entity, they are technically two different instances. Yes, I could implement Equals/GetHashCode to make them "seem" the same.

    Why would you go for a single instance per entity versus private instances per view or per use case? A single instance has the advantage of sharing INotifyPropertyChanged events and sharing additional (non-persistent) data. A private instance in each view gives you the flexibility of undo at the view level: in the example above, I'd allow the user to change order details, and give them the option of not saving the change. Here, synchronisation between the views/use cases happens at the data-persistence level. What would your argument be?
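    A minimal sketch of the shared-instance option: an identity map keyed by order id, so every view that asks for the same id gets the same instance (class and method names here are illustrative, not from the question):

        // Hands out one shared Order instance per id; the load delegate
        // (e.g. session.Get<Order>(id)) is only called on a cache miss.
        public class OrderRegistry
        {
            private readonly Dictionary<int, Order> _byId = new Dictionary<int, Order>();

            public Order GetOrLoad(int id, Func<int, Order> load)
            {
                Order order;
                if (!_byId.TryGetValue(id, out order))
                {
                    order = load(id);
                    _byId[id] = order;
                }
                return order;
            }
        }

    Views that share a registry get shared INotifyPropertyChanged notifications for free; per-view undo then has to be layered on top, for example with memento copies of the entity.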

    Read the article

  • Executing sequential stored procedures; works in query analyzer, doesn't in my .NET application

    - by evanmortland
    Hello, I have an audit record table that I am writing to. I am connecting to MyDb, which has a stored procedure called 'CreateAudit'. It is a passthrough to a stored procedure, also called 'CreateAudit', in another database on the same machine, MyOtherDB; in other words, MyDb's CreateAudit does the following: EXEC dbo.MyOtherDB.CreateAudit.

    I call the MyDb CreateAudit stored procedure from my application, using SubSonic as the DAL. The first time I call it, I call it with the following (pseudocode):

        Result = CreateAudit(recordId, "Opened")

    One line after that, I call:

        Result2 = CreateAudit(recordId, "Closed")

    The second call is supposed to mark the record that was created by CreateAudit(recordId, "Opened") with a status of closed. It works great if I run them independently of one another, but when they run in sequence in the application, the record is not marked as "Closed". When I run SQL Profiler I see that both queries ran, and if I copy the queries out and run them from Query Analyzer the record gets marked as closed 100% of the time! When I run it from the application, about once every 20 times the record is successfully marked closed; the other 19 times nothing happens, but I do not get an error!

    Is it possible for the .NET app to skip over the output from the first stored procedure and start executing the second stored procedure before the record from the first is created? When I add a WAITFOR DELAY '00:00:00:003' to the top of my stored procedure, the record is also closed 100% of the time. My head is spinning; any ideas why this is happening? Thanks for any responses, very interested in hearing how this can happen.
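    One way to rule out the two calls racing on different pooled connections is to run both on a single connection and transaction. A hedged sketch in plain ADO.NET (parameter names are assumptions, not taken from the question):

        using System.Data;
        using System.Data.SqlClient;

        static void WriteAuditPair(string connectionString, int recordId)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    foreach (var status in new[] { "Opened", "Closed" })
                    {
                        using (var cmd = new SqlCommand("dbo.CreateAudit", conn, tx))
                        {
                            cmd.CommandType = CommandType.StoredProcedure;
                            cmd.Parameters.AddWithValue("@RecordId", recordId);
                            cmd.Parameters.AddWithValue("@Status", status);
                            cmd.ExecuteNonQuery(); // returns only after the proc completes
                        }
                    }
                    tx.Commit();
                }
            }
        }

    If the symptom disappears under this arrangement, the original behaviour points at connection pooling or isolation rather than at the procedures themselves.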

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both the pure DataSet and pure DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of number of records).

    We observed that DataSet performs much better in terms of speed, but runs into constraints as the number of records increases: at 100K+ records we start seeing System.OutOfMemoryException on a 4 GB machine with processModel configured to run with memoryLimit set to 85. Since this is a multi-threaded app, multiple threads may be processing different queries and building different DataSets, so in that case we hit the exception even sooner.

    DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, or it leaves open connections on the DB side, and in the worst case it takes down the service completely, etc. So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
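    One hybrid that keeps memory bounded is to stream rows with a DataReader but buffer them into a small DataTable in fixed-size batches. A sketch under that assumption (the batch size and the processing callback are placeholders):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static void ProcessInBatches(string connectionString, string sql,
                                     int batchSize, Action<DataTable> processBatch)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    // clone the result schema once
                    var buffer = new DataTable();
                    for (int i = 0; i < reader.FieldCount; i++)
                        buffer.Columns.Add(reader.GetName(i), reader.GetFieldType(i));

                    while (reader.Read())
                    {
                        var row = buffer.NewRow();
                        for (int i = 0; i < reader.FieldCount; i++)
                            row[i] = reader.GetValue(i);
                        buffer.Rows.Add(row);

                        if (buffer.Rows.Count >= batchSize)
                        {
                            processBatch(buffer); // work on a bounded chunk
                            buffer.Clear();
                        }
                    }
                    if (buffer.Rows.Count > 0)
                        processBatch(buffer); // flush the remainder
                }
            }
        }

    Memory stays capped at roughly one batch per worker thread, while the per-row overhead remains close to the raw DataReader path.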

    Read the article

  • How to allow users to define financial formulas in a C# app

    - by Peter Morris
    I need to allow my users to be able to define formulas which will calculate values based on data. For example:

        // Example 1
        return GetMonetaryAmountFromDatabase("Amount due") * 1.2;

        // Example 2
        return GetMonetaryAmountFromDatabase("Amount due") * GetFactorFromDatabase("Discount");

    I will need to allow +, -, *, and / operations, and also to assign local variables and execute if statements, like so:

        var amountDue = GetMonetaryAmountFromDatabase("Amount due");
        if (amountDue > 100000)
            return amountDue * 0.75;
        if (amountDue > 50000)
            return amountDue * 0.9;
        return amountDue;

    The scenario is complicated because I have the following structure:

        Customer (a few hundred)
        Configuration (about 10 per customer)
        Item (about 10,000 per customer configuration)

    So I will perform a three-level loop. At each "Configuration" level I will start a DB transaction and compile the formulas; each "Item" will use the same transaction plus the compiled formulas (there are about 20 formulas per configuration, and each item uses all of them). This further complicates things because I can't just use the compiler services, as that would result in continued memory-usage growth. I can't use a new AppDomain per "Configuration" loop iteration because some of the references I need to pass cannot be marshalled. Any suggestions?
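    For scale, a sketch of compiling all of a configuration's formulas into one in-memory assembly and caching the MethodInfos for the item loop. This caps the assembly count at one per configuration but does not make assemblies collectible, so it only mitigates the growth mentioned above (the class name and the double-based signature are assumptions):

        using System;
        using System.Collections.Generic;
        using System.CodeDom.Compiler;
        using System.Reflection;
        using System.Text;
        using Microsoft.CSharp;

        static Dictionary<string, MethodInfo> CompileFormulas(IDictionary<string, string> bodies)
        {
            // One static method per formula, all emitted into a single source string.
            var src = new StringBuilder("namespace UserFormulas { public static class F {");
            foreach (KeyValuePair<string, string> f in bodies)
                src.AppendFormat(" public static double {0}(double amountDue) {{ {1} }}", f.Key, f.Value);
            src.Append(" } }");

            var provider = new CSharpCodeProvider();
            var options = new CompilerParameters { GenerateInMemory = true };
            CompilerResults compiled = provider.CompileAssemblyFromSource(options, src.ToString());
            if (compiled.Errors.HasErrors)
                throw new InvalidOperationException("Formula compilation failed");

            var methods = new Dictionary<string, MethodInfo>();
            Type type = compiled.CompiledAssembly.GetType("UserFormulas.F");
            foreach (string name in bodies.Keys)
                methods[name] = type.GetMethod(name); // cache for the 10,000-item loop
            return methods;
        }

    True unloading still requires an AppDomain (or, on later runtimes, collectible assemblies), so this is a mitigation rather than a cure.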

    Read the article

  • phppgadmin : How does it kick users out of postgres, so it can db_drop?

    - by egarcia
    I've got one PostgreSQL database (I'm the owner) and I'd like to drop it and re-create it from a dump. The problem is that there are a couple of applications (two websites, Rails and Perl) that access the DB regularly, so I get a "database is being accessed by other users" error.

    I've read that one possibility is getting the PIDs of the processes involved and killing them individually. I'd like to do something cleaner, if possible.

    phpPgAdmin seems to do what I want: I am able to drop schemas using its web interface, even while the websites are up, without getting errors. So I'm investigating how its code works. However, I'm no PHP expert. While trying to understand the phpPgAdmin code, I found a line (257 in Schemas.php) that says:

        $data->dropSchema(...)

    $data is a global variable and I could not find where it is defined. Any pointers would be greatly appreciated.
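    For reference, the usual "kick everyone out, then drop" sequence at the SQL level looks roughly like this (the database name is a placeholder; the pg_stat_activity column is pid on PostgreSQL 9.2+ and procpid on older servers):

        -- stop new connections, terminate existing ones, then drop
        REVOKE CONNECT ON DATABASE mydb FROM PUBLIC;
        SELECT pg_terminate_backend(pid)
          FROM pg_stat_activity
         WHERE datname = 'mydb' AND pid <> pg_backend_pid();
        DROP DATABASE mydb;

    Dropping a schema, unlike dropping the whole database, does not need exclusive access to the database, which is likely why phpPgAdmin gets away without kicking anyone out.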

    Read the article

  • Saving complex aggregates using Repository Pattern

    - by Kevin Lawrence
    We have a complex aggregate (sensitive names obfuscated for confidentiality reasons). The root, R, is composed of collections of Ms, As, Cs, and Ss. Ms have collections of other low-level details, etc. R really is an aggregate (no fair suggesting we split it!). We use lazy loading to retrieve the details; no problem there. But we are struggling a little with how to save such a complex aggregate. From the caller's point of view:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r);

    Our competing solutions are:

    Dirty flags. Each entity in the aggregate has a dirty flag. The save() method in the repository walks the tree looking for dirty objects and saves them. Deletes and adds are a little trickier, especially with lazy loading, but doable.

    Event listener accumulates changes. The repository subscribes a listener to changes and accumulates events. When save is called, the repository grabs all the change events and writes them to the DB.

    Give up on the repository pattern. Implement overloaded save methods to save the parts of the aggregate separately. The original example would become:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r.Ps);
        repository.save(r.Cs);
        repository.save(r.Ms);

    (or worse)

    Advice please! What should we do?
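    A minimal sketch of the dirty-flag option, assuming the aggregate's entities can share a base class (all names are illustrative):

        // Change tracking pushed into a shared base class; repository.save(r)
        // walks the aggregate and persists anything flagged.
        public abstract class AggregateMember
        {
            public bool IsDirty { get; private set; }
            public bool IsNew { get; internal set; }
            public bool IsRemoved { get; internal set; }

            protected void MarkDirty() { IsDirty = true; }
            public void AcceptChanges() { IsDirty = false; IsNew = false; }
        }

        public class C : AggregateMember
        {
            private int _x;
            public void UpdateX(int value) { _x = value; MarkDirty(); }
        }

    Removals are the awkward case: a removed child has to be remembered somewhere (for example, a removed-items list on the owning collection) or it silently escapes the tree walk.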

    Read the article

  • How would I structure the loop to go through inputs?

    - by dmanexe
    I am attempting to make a loop that will go through an array structured like this:

        $input[n][checked]
        $input[n][input]

    The 2nd input acts as a price multiplier, but doesn't have to exist (the field can be blank). I don't think a foreach loop is right because I don't think it handles the inputs from the form in the correct dimensional array order to keep the information together. I have inputs on a form that look like this:

        <input type="checkbox" name="measure[<?php echo $item->id; ?>][checked]" value="<?php echo $item->id; ?>">
        <input class="item_mult" type="text" name="measure[<?php echo $item->id; ?>][input]" />

    I am attempting to make the loop go through and act as a multiplier on the input relative to its sibling field (i.e. input[1][input] would be an integer that I want to retrieve later, grouped with input[1][checked]):

        <?
        $field = $this->input->post('measure', true);
        $totals = array();
        foreach ($field as $value):
            if ($value['input'] == TRUE) {
                $query = $this->db->get_where('items', array('id' => $value['input']))->row();
                $totals[] = $query->price;
        ?>
        <p><?=$query->name?> - <?=money_format('%(#10n', $query->price)?></p>
        <?php
            } else {
            }
        endforeach;
        ?>

    And finally, the last code to array_sum and print the grand total:

        <? $grand_total = array_sum($totals); ?>
        <p><?=money_format('%(#10n', $grand_total)?></p>

    Eventually, I will need to store these records in a database, so I am sending complete item IDs through, etc.
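    For what it's worth, foreach can keep the pairs together if you iterate the outer array and read both sub-fields per element. A sketch along those lines (a hypothetical rework of the loop above, keeping its CodeIgniter calls):

        <?php
        $measures = $this->input->post('measure', true);
        $totals = array();

        foreach ($measures as $id => $fields) {
            if (empty($fields['checked'])) {
                continue; // checkbox not ticked for this item
            }
            // look the item up by its id (the array key), not by the multiplier
            $row = $this->db->get_where('items', array('id' => $id))->row();
            $multiplier = ($fields['input'] !== '') ? (float) $fields['input'] : 1.0;
            $totals[] = $row->price * $multiplier;
        }

        $grand_total = array_sum($totals);
        ?>

    Keying on $id also sidesteps the lookup-by-multiplier in the original ($value['input'] was being passed as the item id).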

    Read the article

  • Using MongoDB's map/reduce to "group by" two fields

    - by ibz
    I need something slightly more complex than the examples in the MongoDB docs and I can't seem to be able to wrap my head around it. Say I have a collection of objects of the form:

        {date: "2010-10-10", type: "EVENT_TYPE_1", user_id: 123, ...}

    Now I want to get something similar to a SQL GROUP BY query, grouping over both date and type. That is, I want the number of events of each type on each day. Also, I'd like to make it unique by user_id, i.e. if a user has more events on the same day, count them only once.

    I'm trying to do this with map/reduce. I do:

        db.logs.mapReduce(
            function() { emit(this.type, 1); },
            function(k, vals) {
                var total = 0;
                for (var i = 0; i < vals.length; i++) total += vals[i];
                return total;
            })

    which nicely groups by type, but now, how can I group by date at the same time? It seems the key in emit() can't be an array (I thought about doing emit([this.date, this.type], 1)). Also, how can I ensure the per-user uniqueness?

    I'm just starting with MongoDB and I'm still having trouble grasping the basic concepts. Also, there is not much documentation available out there. Any help from more experienced users is appreciated. Thanks!
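    MongoDB does accept a document as the emit key, which gives the two-field grouping; per-user uniqueness can be handled by carrying user lists through the reduce. A sketch (it assumes a reasonably recent server for the out option, and that the per-key user lists stay small enough to pass around):

        db.logs.mapReduce(
            function () {
                // compound key: one bucket per (date, type); carry user ids for dedup
                emit({ date: this.date, type: this.type }, { users: [this.user_id] });
            },
            function (key, values) {
                var seen = {};
                values.forEach(function (v) {
                    v.users.forEach(function (u) { seen[u] = true; });
                });
                var distinct = [];
                for (var u in seen) distinct.push(u);
                return { users: distinct }; // same shape as the emitted values
            },
            {
                out: { inline: 1 },
                // after reduction, report just the distinct-user count
                finalize: function (key, value) { return value.users.length; }
            }
        );

    The reduce must return the same shape it receives, hence the user list travels through and only the finalize step collapses it to a count.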

    Read the article

  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to my remote server, which has the repo, web, app, and MySQL servers all on the same machine. I am following this walkthrough: http://www.capify.org/index.php/From_The_Beginning

    I get to the command:

        cap deploy:start

    Then I get this error:

        *** [err :: example.com] sudo: unknown user: app
        command finished
        failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com

    Am I supposed to add an 'app' user, or is there a way of changing which user the command runs as? This is my deploy.rb:

        set :application, "example"
        set :repository, "[email protected]:example.git"
        set :user, "trobrock"
        set :branch, 'master'
        set :deploy_to, "/var/www/example"
        set :scm, :git # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`

        role :web, "example.com" # Your HTTP server, Apache/etc
        role :app, "example.com" # This may be the same as your `Web` server
        role :db,  "example.com", :primary => true # This is where Rails migrations will run

    Obviously, everywhere it says example.com is my server's hostname, and everywhere it just says example is the app name.
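    If memory serves, Capistrano 2's deploy:start runs script/spin as the :runner user, which defaults to "app" (visible in the sudo -u app above). A sketch of the usual fixes, hedged accordingly:

        # in deploy.rb
        set :runner, "trobrock"   # user that deploy:start sudoes to (defaults to "app")

        # or, if the deploy user can start the app without sudo:
        set :use_sudo, false

    Creating a dedicated "app" system account on the server is the other conventional route.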

    Read the article

  • How to put large text data (~20 MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data:

        internal static string InsertStorageItem =
            "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)";

    and the code I am using to execute this query is as follows:

        string content = "very very large data";
        string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM");
        var command = new SqlCeCommand(query, _sqlConnection);
        var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
        paramData.Value = content;
        paramData.SourceColumn = "StorageData";
        command.ExecuteNonQuery();

    But at the last line I am getting the following error:

        System.Data.SqlServerCe.SqlCeException was unhandled by user code
        Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ]
        Source=SQL Server Compact ADO.NET Data Provider
        HResult=-2147467259
        NativeError=25920
        StackTrace:
             at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery()
             at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder)
             at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder)
             at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder)
             at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders()
             at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e)
             at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
             at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
        InnerException:

    Now my question is: how am I supposed to insert such large data into a SQL CE DB?

    Regards,
    Anindya Chatterjee
    http://abstractclass.org
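    One workaround commonly suggested for this kind of truncation (an assumption here, not verified against this exact schema) is to give the ntext parameter an explicit size so the provider doesn't bind it as a short string type:

        var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
        paramData.Size = content.Length;  // explicit size, in characters
        paramData.Value = content;
        command.ExecuteNonQuery();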

    Read the article

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20 to 30 seconds to commit 30 KB of data! This delay seems to get worse as the database grows: when the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 MB, I see these huge delays.

    The database has 16 tables, averaging 10 columns each. One possible problem is that I'm storing a dozen or so IList members, but they are typically only 200 elements long. This is a recent addition via Fluent NHibernate automapping, which stores each float in a single table row, so maybe that's a potential problem.

    Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler; any recommendations for profilers that work well with SQLite?

    Here's the method that saves the data. It's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging:

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException("\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }
                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException("\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }
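    One thing worth testing before reaching for a profiler: SQLite pays a disk sync on every commit, so per-measurement transactions multiply that cost. A sketch that batches several saves under one transaction (mirroring the GetSession helper above):

        public static void SaveMeasurements(IEnumerable<object> measurements)
        {
            var session = GetSession();
            using (var transaction = session.BeginTransaction())
            {
                foreach (var m in measurements)
                    session.SaveOrUpdate(m);
                transaction.Commit(); // one disk sync for the whole batch
            }
        }

    If batching collapses the delay, the per-commit sync (amplified by the one-row-per-float collections) is the likely culprit rather than NHibernate itself.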

    Read the article

  • Castle Active Record - Working with the cache

    - by David
    Hi all, I'm new to the Castle ActiveRecord pattern and I'm trying to get my head around how to use the cache effectively. What I'm trying to do (or want to do) when calling GetAll is to find out if I have called it before and check the cache, else load from the DB; but I also want to pass a bool parameter that will force the cache to clear and requery the DB. So I'm just looking for the final bits. Thanks.

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            List<Model.Resource> resources = new List<Model.Resource>();

            // Request to force reload
            if (forceReload)
            {
                // need to specify to force a reload (how?)
                XmlConfigurationSource source = new XmlConfigurationSource("appconfig.xml");
                ActiveRecordStarter.Initialize(source, typeof(Model.Resource));
                resources = Model.Resource.FindAll().ToList();
            }
            else
            {
                // Check the cache somehow and return the cache?
            }
            return resources;
        }

        public static List<Model.Resource> GetAll()
        {
            return GetAll(false);
        }
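    If plain in-memory caching is enough (as opposed to ActiveRecord/NHibernate's second-level cache), a sketch of the force-reload shape might look like this; note that ActiveRecordStarter.Initialize is meant to run once at application startup, not per call:

        private static List<Model.Resource> _cache;
        private static readonly object _cacheLock = new object();

        public static List<Model.Resource> GetAll(bool forceReload)
        {
            lock (_cacheLock)
            {
                if (_cache == null || forceReload)
                {
                    // hit the database and replace the cached list
                    _cache = new List<Model.Resource>(Model.Resource.FindAll());
                }
                return _cache;
            }
        }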

    Read the article

  • Important question about LINQ to SQL performance on high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using data readers. Now I don't have to worry about it.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it, and updates it. Primarily the updates are increments and decrements (++ and --). I used to do it like this:

        UPDATE table SET value = value + 1 WHERE ID = @Id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed, and then saved:

        Stats.RegisteredUsers++;
        Db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (which happens on my site all the time), then LINQ goes "oops, this value is already 100,001. Whatever, I'll throw an exception". You can change this behaviour so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time; the stats value was increased about twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET.

    So my question is: how can you solve this problem with LINQ?
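    One way to keep LINQ to SQL but make the increment atomic again is to send it through DataContext.ExecuteCommand, which accepts a parameterized SQL string (the table and column names below are guesses based on the question):

        // server-side increment: no read-modify-write race, no optimistic
        // concurrency conflict on the counter column
        db.ExecuteCommand(
            "UPDATE Stats SET RegisteredUsers = RegisteredUsers + 1 WHERE ID = {0}",
            statsId);

    Alternatively, marking the counter column with UpdateCheck=Never suppresses the conflict exception, though, as noted above, it still writes the stale absolute value rather than a true increment.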

    Read the article

  • How can I protect this code from SQL Injection? A bit confused.

    - by Craig Whitley
    I've read various sources but I'm unsure how to implement what they describe in my code. I was wondering if somebody could give me a quick hand with it? Once I've been shown how to do it once in my code, I'll be able to pick it up, I think!

    This is from an AJAX autocomplete I found on the net, although I saw something about it being vulnerable to SQL injection through the '%$queryString%' part. Any help really appreciated!

        if (isset($_POST['queryString'])) {
            $queryString = $_POST['queryString'];
            if (strlen($queryString) > 0) {
                $query = "SELECT game_title, game_id FROM games
                          WHERE game_title LIKE '%$queryString%' || alt LIKE '%$queryString%' LIMIT 10";
                $result = mysql_query($query, $db)
                    or die("There is an error in database please contact [email protected]");
                while ($row = mysql_fetch_array($result)) {
                    $game_id = $row['game_id'];
                    echo '<li onClick="fill(\'' . $row['game_title'] . '\',' . $game_id . ');">' . $row['game_title'] . '</li>';
                }
            }
        }
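    A sketch of the parameterized version; this means moving from the legacy mysql_* functions to mysqli (PDO works the same way), since mysql_* has no prepared statements:

        <?php
        $stmt = $mysqli->prepare(
            "SELECT game_title, game_id FROM games
             WHERE game_title LIKE CONCAT('%', ?, '%')
                OR alt LIKE CONCAT('%', ?, '%')
             LIMIT 10");
        $stmt->bind_param('ss', $queryString, $queryString);
        $stmt->execute();
        $stmt->bind_result($game_title, $game_id);
        while ($stmt->fetch()) {
            // escape the echoed markup too, not just the SQL
            printf('<li onClick="fill(\'%s\',%d);">%s</li>',
                htmlspecialchars($game_title, ENT_QUOTES), $game_id,
                htmlspecialchars($game_title));
        }
        ?>

    The user input never touches the SQL string, so the '%...%' pattern is assembled safely inside the query with CONCAT.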

    Read the article

  • Entity Framework - refresh objects from database

    - by Nebo
    I'm having trouble refreshing objects in my database. I have two PCs and two applications. On the first PC, there's an application which communicates with my database and adds some data to the Measurements table. On my other PC, there's an application which retrieves the latest Measurement on a timer, so it should also retrieve measurements added by the application on the first PC. The problem is, it doesn't. On application start, it caches all the data from the database and never sees newly added data. I use the Refresh() method, which works well when I change any of the cached data, but it doesn't refresh newly added data. Here is the method which should fetch the latest data:

        public static Entities myEntities = new Entities();

        public static Measurement GetLastMeasurement(int conditionId)
        {
            myEntities.Refresh(RefreshMode.StoreWins, myEntities.Measurements);
            return (from measurement in myEntities.Measurements
                    where measurement.ConditionId == conditionId
                    select measurement).OrderByDescending(cd => cd.Timestamp).First();
        }

    P.S. The applications have different connection strings in app.config (different accounts for the same DB).
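    Refresh(StoreWins, ...) only refreshes entities the context already tracks; it does not discover rows inserted elsewhere. A sketch of the usual alternative, a short-lived context per query:

        public static Measurement GetLastMeasurement(int conditionId)
        {
            // a fresh context has no stale cache to serve results from
            using (var ctx = new Entities())
            {
                return (from m in ctx.Measurements
                        where m.ConditionId == conditionId
                        orderby m.Timestamp descending
                        select m).First();
            }
        }

    If the long-lived context must stay, setting MergeOption.OverwriteChanges on the query's ObjectQuery is the other commonly cited route.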

    Read the article

  • Long text input from user and PDF generation

    - by Petteri Hietavirta
    I have built a web application that can be seen as an overcomplicated application form. There is a bunch of text areas, each with a given character limit. After the form submission various things happen, one of them being PDF generation: the text is queried from the DB and inserted into the PDF template created in iReport. This works fine, but the major pain is overflowing text. The maximum number of characters is set based on 'average' text, but sometimes people prefer to write in CAPS or add plenty of line feeds to format their text. These then cause the user's text to overflow the space given in the PDF. Unfortunately the PDF document must look like a real application form, so I cannot allow unlimited space.

    What kinds of approaches have you used to tackle this? Clean/restrict user input? Calculate the space requirement of the text based on font metrics? Provide a preview of the PDF? (Too bad users are not allowed to change their input after submission...)
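    The font-metrics option can be sketched on the Java side roughly as follows (the font, size and box width are placeholders; the real values would have to come from the template):

        import java.awt.Font;
        import java.awt.font.FontRenderContext;

        // Estimate how many rendered lines a submission needs in the PDF box,
        // counting explicit line feeds and width-based wrapping separately.
        static int estimateLines(String text, Font font, double boxWidthPts) {
            FontRenderContext frc = new FontRenderContext(null, true, true);
            int lines = 0;
            for (String paragraph : text.split("\n", -1)) {
                double w = font.getStringBounds(paragraph, frc).getWidth();
                lines += Math.max(1, (int) Math.ceil(w / boxWidthPts));
            }
            return lines;
        }

    Validating against an estimated line count instead of a raw character count catches both the CAPS case (wider glyphs) and the extra-line-feed case at submission time.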

    Read the article

  • Rails Heroku Migrate Unknown Error

    - by Ryan Max
    Hello. I am trying to get my app up and running on Heroku. However, once I go to migrate I get the following error:

        $ heroku rake db:migrate
        rake aborted!
        An error has occurred, this and all later migrations canceled:

        530 5.7.0 Must issue a STARTTLS command first. bv42sm676794ibb.5

        (See full trace by running task with --trace)
        (in /disk1/home/slugs/155328_f2d3c00_845e/mnt)
        == BortMigration: migrating =================================================
        -- create_table(:sessions)
           -> 0.1366s
        -- add_index(:sessions, :session_id)
           -> 0.0759s
        -- add_index(:sessions, :updated_at)
           -> 0.0393s
        -- create_table(:open_id_authentication_associations, {:force=>true})
           -> 0.0611s
        -- create_table(:open_id_authentication_nonces, {:force=>true})
           -> 0.0298s
        -- create_table(:users)
           -> 0.0222s
        -- add_index(:users, :login, {:unique=>true})
           -> 0.0068s
        -- create_table(:passwords)
           -> 0.0123s
        -- create_table(:roles)
           -> 0.0119s
        -- create_table(:roles_users, {:id=>false})
           -> 0.0029s

    I'm not sure exactly what it means. Could it have to do with my Bort installation? I did remove all the OpenID stuff from it, but I never had any problems with my migrations locally. Additionally, Bort's Restful Authentication uses my Gmail SMTP account to send confirmation emails; all the searches I do on STARTTLS have to do with SMTP. Can someone point me in the right direction?
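    The 530 is an SMTP reply: Gmail refuses to accept mail until the client issues STARTTLS, so something run during the migration is apparently sending mail without TLS. On Rails 2.3.3+ the setting looks roughly like this (credentials are placeholders; older Rails needs a plugin such as smtp_tls for the same behaviour):

        ActionMailer::Base.smtp_settings = {
          :address              => "smtp.gmail.com",
          :port                 => 587,
          :user_name            => "you@example.com",
          :password             => "secret",
          :authentication       => :plain,
          :enable_starttls_auto => true   # issue STARTTLS before authenticating
        }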

    Read the article

  • Rails - handling global site settings

    - by egarcia
    I'm developing a new Rails application which is supposed to be installed several times in order to implement several sites. There are some things, like the "Site Title" or the "Default Number of Items per Page", that clearly belong in a "global settings" table / config file. I've made a list of the things I think I'll need:

    An ActiveRecord model that is capable of:

        Storing different kinds of data. I suppose this would be accomplished by encoding the values as strings in the DB, probably with a "type" field.
        Indexing settings by name.
        Validations based on the "type" attribute (i.e. don't accept invalid dates in "date" settings).
        Validations based on an allows_nil property.

    A controller that allows me to change settings via views.

    I'm pretty sure I could implement this myself, but I'm not willing to reinvent the wheel. I've done some searching, but I could only find rails-settings, which doesn't really serve me: I need a proper model & controller so I can use declarative-authorization, and it does not provide any controller or view facilities. Is there a gem or plugin out there that implements what I want, or any library I should look at? Thanks a lot.
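    Absent a ready-made gem, a minimal sketch of such a model (Rails 2.x-style validations; the column names name, value, value_type, allows_nil are assumptions):

        class Setting < ActiveRecord::Base
          validates_uniqueness_of :name
          validates_inclusion_of :value_type, :in => %w(string integer date boolean)
          validates_presence_of :value, :unless => :allows_nil?

          # cast the stored string according to the declared type
          def typed_value
            case value_type
            when "integer" then value.to_i
            when "date"    then Date.parse(value)   # raises on invalid dates
            when "boolean" then value == "true"
            else value
            end
          end

          def self.[](name)
            find_by_name(name.to_s).try(:typed_value)
          end
        end

    Being a plain model, it slots under declarative-authorization and an ordinary RESTful SettingsController like any other resource.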

    Read the article

  • How to get JSON code into MYSQL database?

    - by the_boy_za
    I've created a form with a jQuery autocomplete function. The selected brand from the autocomplete list needs to get sent to a PHP file using the $.ajax function. My code doesn't seem to work, but I can't find the error: the data isn't getting inserted into the MySQL database. Here is my code.

    jQuery:

        $(document).ready(function() {
            $("#autocomplete").autocomplete({ minLength: 2 });
            $("#autocomplete").autocomplete({
                source: ["Adidas", "Airforce", "Alpha Industries", "Asics", "Bikkemberg", "Birkenstock",
                         "Bjorn Borg", "Brunotti", "Calvin Klein", "Cars Jeans", "Chanel", "Chasin",
                         "Diesel", "Dior", "DKNY", "Dolce & Gabbana"]
            });

            $("#add-brand").click(function() {
                var merk = $("#autocomplete").val();
                $("#selected-brands").append(" <a class=\"deletemerk\" href=\"#\">" + merk + "</a>");

                // Add your parameters here
                var param = JSON.stringify({ Brand: merk });
                $.ajax({
                    type: "POST",
                    async: true,
                    url: "scripttohandlejson.php",
                    contentType: "application/json",
                    data: param,
                    dataType: "json",
                    success: function (good) {
                        // handle success
                        alert(good);
                    },
                    failure: function (bad) {
                        // handle any errors
                        alert(bad);
                    }
                });
                return false;
            });
        });

    PHP file, scripttohandlejson.php:

        <?PHP
        $getcontent = json_decode($json, true);
        $getcontent->{'Brand'};
        $vraag = "INSERT INTO kijken (merk) VALUES ='$getcontent' ";
        $voerin = mysql_query($vraag) or die("couldnt put into db");
        <?
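    A corrected sketch of the PHP side (assumptions: the JSON arrives in the raw request body, and the legacy mysql_* extension from the question stays in use). The original never defines $json, treats the decoded array as an object, and uses VALUES = rather than VALUES (...):

        <?php
        // read the raw POST body, since the request is application/json
        $json = file_get_contents('php://input');
        $content = json_decode($json, true);          // true => associative array
        $brand = mysql_real_escape_string($content['Brand']);

        $sql = "INSERT INTO kijken (merk) VALUES ('$brand')";
        mysql_query($sql) or die("couldn't insert into db");

        echo json_encode(array('ok' => true));        // caller expects JSON back
        ?>

    Note also that jQuery's $.ajax has no failure callback; the option is named error, so failed requests in the original silently do nothing.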

    Read the article

  • mysql to excel generation using php

    - by pmms
        <?php
        // DB Connection here
        mysql_connect("localhost", "root", "");
        mysql_select_db("hitnrunf_db");

        $select = "SELECT * FROM jos_users";
        $export = mysql_query($select) or die("Sql error : " . mysql_error());

        $fields = mysql_num_fields($export);
        for ($i = 0; $i < $fields; $i++) {
            $header .= mysql_field_name($export, $i) . "\t";
        }

        while ($row = mysql_fetch_row($export)) {
            $line = '';
            foreach ($row as $value) {
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }

        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        header("Content-type: application/octet-stream");
        header("Content-Disposition: attachment; filename=your_desired_name.xls");
        header("Pragma: no-cache");
        header("Expires: 0");
        print "$header\n$data";
        ?>

    The code above is used for generating an Excel spreadsheet from a MySQL database, but we are getting the following error:

        The file you are trying to open, 'users.xls', is in a different format than specified by the file extension.
        Verify that the file is not corrupted and is from a trusted source before opening the file.
        Do you want to open the file now?

    What is the problem and how do we fix it?
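    The warning comes from Excel 2007+: the script emits tab-separated text while the filename claims native .xls, and Excel flags the mismatch. One low-effort remedy (an assumption, since the consumers of the file are unknown) is to label the download honestly:

        header("Content-Type: text/csv; charset=utf-8");
        header("Content-Disposition: attachment; filename=users.csv");

    Alternatively, generate a real workbook with a library such as PHPExcel and keep the .xls extension.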

    Read the article

  • Converting TrueClass / FalseClass to integer.

    - by Nick Gorbikoff
    Hello. I'm trying to figure out if there is an easy way to do the following, short of adding a to_i method to TrueClass/FalseClass. Here is the dilemma: I have a boolean field in my Rails app, which is obviously stored as a tinyint in MySQL. However, I need to generate XML based on the data in MySQL and send it to a customer, and their SOAP service requires the field in question to have 0 or 1 as its value. So at XML-generation time I need to convert my false to 0 and my true to 1 (which is how they are stored in the DB). Since true and false lack a to_i method, I could write an if statement that generates either 1 or 0 depending on the true/false state. However, I have about 10 of these indicators, and creating an if/else for each is not very DRY. So what do you recommend I do? I could add a to_i method to the TrueClass/FalseClass, but I'm not sure where I should scope it in my Rails app: just inside this particular model, or somewhere else?
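    A sketch of the core-extension option, scoped app-wide via an initializer (config/initializers/boolean_to_i.rb on Rails 2.x+, or required from environment.rb; the file name is just a convention):

        class TrueClass
          def to_i; 1; end
        end

        class FalseClass
          def to_i; 0; end
        end

        # ...after which every indicator serializes uniformly:
        #   xml.indicator record.flag.to_i

    If reopening core classes feels too invasive, the one-liner equivalent at XML-generation time is simply: flag ? 1 : 0.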

    Read the article

  • Entity Framework 4: overriding Equals and GetHashCode using a class property

    - by Zhok
    Hi, I'm using Visual Studio 2010 with .NET 4 and Entity Framework 4. I'm working with POCO classes, not the EF4 generator. I need to override the Equals() and GetHashCode() methods, but that doesn't really work. I thought it was something everybody does, but I can't find anything about the problem online. When I write my own classes and Equals method, I use Equals() on properties which need to be loaded by EF to be filled, like this:

        public class Item
        {
            public virtual int Id { get; set; }
            public virtual String Name { get; set; }
            public virtual List<UserItem> UserItems { get; set; }
            public virtual ItemType ItemType { get; set; }

            public override bool Equals(object obj)
            {
                Item item = obj as Item;
                if (obj == null)
                {
                    return false;
                }
                return item.Name.Equals(this.Name) && item.ItemType.Equals(this.ItemType);
            }

            public override int GetHashCode()
            {
                return this.Name.GetHashCode() ^ this.ItemType.GetHashCode();
            }
        }

    That code doesn't work; the problems are in Equals and GetHashCode, where I try to get the hash code or equality from "ItemType". Every time, I get a NullReferenceException when I try to get data via LINQ to Entities. A dirty way to fix it is to catch the NullReferenceException and return false (in Equals) and return base.GetHashCode() (in GetHashCode), but I hope there is a better way to fix this problem. I've written a little test project, with SQL scripts for the DB, the POCO domain, the EDMX file, and a console test Main method. You can download it here: Download
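    A common way around this (a sketch, not vendor guidance) is to base equality on the primary key rather than on navigation properties, which may be unloaded when LINQ to Entities materializes or compares instances:

        public override bool Equals(object obj)
        {
            if (ReferenceEquals(this, obj)) return true;
            var other = obj as Item;
            if (other == null) return false;
            // transient (unsaved, Id == 0) instances only equal themselves
            if (Id == 0 || other.Id == 0) return false;
            return Id == other.Id;
        }

        public override int GetHashCode()
        {
            // stable for persisted entities; transient ones use the default hash
            return Id == 0 ? base.GetHashCode() : Id.GetHashCode();
        }

    Key-based equality never touches ItemType, so no lazy load (and no NullReferenceException) is triggered during comparison.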

    Read the article

  • Renaming TurboGears 2's Repoze Fields with TGAdmin

    - by William Chambers
    I've been working on renaming TurboGears 2's repoze 'groups' field to 'roles', to free the namespace and DB tables for other purposes. Roles also makes much more sense to me than groups, because I have a strong Drupal background. I have found some of the docs for doing this, such as these:

        http://www.turbogears.org/2.1/docs/main/Auth/Customization.html#customizing-the-model-structure-assumed-by-the-quickstart
        http://code.gustavonarea.net/repoze.what-quickstart/#customizing-the-model-definition

    However, these only go part of the way. I have made (I'm pretty sure, at least; I've double-checked a few times) all the changes required, as you can see in this diff. This seems to work fine, but I've run into a rather major issue with the TurboGears admin system. I tried http://turbogears.org/2.0/docs/main/Extensions/Admin/index.html and it didn't seem to make any difference; however, I'm not 100% sure I did it correctly.

    The problem occurs when I attempt to go to localhost/admin/permissions/. It causes an Internal Server Error and outputs the following error: http://pastebin.com/YWMH3SiU. This error does not happen on the Roles/Users pages, and permissions /edit/1 also works.

    I'm running Kubuntu 10.04 with TG 2.1b2. (I'm running the beta mostly for easier Mako support, which is really important.) Any help would be very appreciated.

    Read the article

  • KD-Trees and missing values (vector comparison)

    - by labratmatt
    I have a system that stores vectors and allows a user to find the n most similar vectors to a query vector. That is, a user submits a vector and my system responds with "here are the n most similar vectors." I generate the similar vectors using a KD-tree and everything works well, but I want to do more. I want to present a list of the n most similar vectors even if the user doesn't submit a complete vector (a vector with missing values). That is, if a user submits a vector with three dimensions, I still want to find the n nearest vectors among my stored vectors, which have 11 dimensions.

    I have a couple of obvious solutions, but neither one seems very good:

    Create multiple KD-trees, each built using the most popular subset of dimensions a user will search for. That is, if a user submits a query vector of three dimensions x, y, z, I match that query against an already-built KD-tree which only contains vectors of the three dimensions x, y, z.

    Ignore KD-trees when a user submits a query vector with missing values, and compare the query vector to the stored vectors (kept in a table in a DB) one by one, using something like a dot product.

    This has to be a common problem. Any suggestions? Thanks for the help.
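    For the fallback path, a sketch of a "partial distance" that compares only the dimensions present in the query, rescaled by the number of compared dimensions so 3-D and 11-D queries rank on a comparable footing (the nullable-array representation of missing values is an assumption):

        static double PartialDistance(double?[] query, double[] stored)
        {
            double sum = 0;
            int used = 0;
            for (int i = 0; i < query.Length; i++)
            {
                if (!query[i].HasValue) continue; // skip missing dimensions
                double d = query[i].Value - stored[i];
                sum += d * d;
                used++;
            }
            // no shared dimensions: rank last
            return used == 0 ? double.MaxValue : Math.Sqrt(sum / used);
        }

    A linear scan with this measure is O(stored vectors) per query, which often suffices while the collection is small; the per-subset KD-trees only pay off once that scan becomes the bottleneck.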

    Read the article

  • Maintaining a pool of DAO class instances vs. using the new operator

    - by Fazal
    We have been trying to benchmark our application's performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java), but we ran a test anyway: the newInstance method versus maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object-pool code. These objects represent tables with about 50 fields, all of type String.

    Can someone share their thoughts on this issue? Now I am more confused about whether object pooling of at least some DAO instances is a better option. The pool size, as I see it right now, should be large enough to meet the size of average requests. There is a flip side: my memory footprint will go up. But I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone or whether I am missing some point here.
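    For reference, a minimal sketch of the kind of pool being discussed (a non-blocking, grow-on-demand variant; a production pool would bound growth and reset object state on release):

        import java.util.Queue;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ConcurrentLinkedQueue;

        public final class DaoPool<T> {
            private final Queue<T> idle = new ConcurrentLinkedQueue<T>();
            private final Callable<T> factory;

            public DaoPool(Callable<T> factory, int size) throws Exception {
                this.factory = factory;
                for (int i = 0; i < size; i++) {
                    idle.add(factory.call()); // pre-populate the pool
                }
            }

            public T borrow() throws Exception {
                T t = idle.poll();
                return t != null ? t : factory.call(); // grow when exhausted
            }

            public void release(T t) {
                idle.add(t); // caller must clear the DAO's fields first
            }
        }

    Worth double-checking in the benchmark: reflective Class.newInstance() is considerably slower than a plain new, so comparing the pool against direct construction may narrow the 10x gap substantially.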

    Read the article
