Search Results

Search found 67075 results on 2683 pages for 'data model'.


  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or for demos. Currently we have:

        Namespace App.Data (App.Data.dll)
            Folder NHibernate
            Folder EntityFramework
            Folder LinqToSql

    The above structure is fine while we only use SQL Server as the DB, but going forward we will be including Oracle, MySql, etc. So what would be a better structure with this in mind? I thought about:

        Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
            Folder NHibernate
            Folder EntityFramework
            Folder LinqToSql

    Or would it just be better to have separate assemblies for each database and access technology?

        Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
        Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
        Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
        Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)
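    Whichever layout is chosen, the per-database assemblies stay interchangeable only if they share a contract. A minimal sketch of that idea, assuming a shared contracts assembly (the App.Data.Core namespace, IProductRepository interface and Product type below are illustrative, not from the question):

        // App.Data.Core.dll -- shared contracts; references no provider or ORM.
        namespace App.Data.Core
        {
            public class Product
            {
                public int Id { get; set; }
                public string Name { get; set; }
            }

            public interface IProductRepository
            {
                Product GetById(int id);
                void Save(Product product);
            }
        }

        // App.Data.SqlServer.NHibernate.dll -- one assembly per database/ORM pair.
        namespace App.Data.SqlServer.NHibernate
        {
            using App.Data.Core;

            public class ProductRepository : IProductRepository
            {
                public Product GetById(int id)
                {
                    // An NHibernate ISession lookup would go here.
                    throw new System.NotImplementedException();
                }

                public void Save(Product product)
                {
                    // An NHibernate save/update would go here.
                    throw new System.NotImplementedException();
                }
            }
        }

    The application then binds to IProductRepository and picks the concrete assembly at configuration time, so adding Oracle or MySql support means adding an assembly rather than touching callers.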

    Read the article

  • Converting application to MVC architecture

    - by terence6
    I'm practicing writing MVC applications. I have a Mastermind game that I would like to rewrite as an MVC app. I have divided my code into parts, but instead of a working game I'm getting an empty Frame and an error in "public void paint( Graphics g )". The error comes from calling this method in my view with a null argument. But how do I overcome this? MVC was quite simple with Swing, but AWT and its paint methods are much more complicated. Code of the working app: http://paste.pocoo.org/show/224982/ App divided into MVC:

    Main:

        public class Main {
            public static void main(String[] args) {
                Model model = new Model();
                View view = new View("Mastermind", 400, 590, model);
                Controller controller = new Controller(model, view);
                view.setVisible(true);
            }
        }

    Controller:

        import java.awt.*;
        import java.awt.event.*;

        public class Controller implements MouseListener, ActionListener {
            private Model model;
            private View view;

            public Controller(Model m, View v) {
                model = m;
                view = v;
                view.addWindowListener(new WindowAdapter() {
                    public void windowClosing(WindowEvent e) {
                        System.exit(0);
                    }
                });
                view.addMouseListener(this);
            }

            public void actionPerformed(ActionEvent e) {
                if (e.getSource() == view.checkAnswer) {
                    if (model.isRowFull) {
                        model.check();
                    }
                }
            }

            public void mousePressed(MouseEvent e) {
                Point mouse = e.getPoint();
                if (model.isPlaying) {
                    if (mouse.x > 350) {
                        int button = 1 + (int) ((mouse.y - 32) / 50);
                        if ((button >= 1) && (button <= 5)) {
                            model.fillHole(button);
                        }
                    }
                }
            }

            public void mouseClicked(MouseEvent e) {}
            public void mouseReleased(MouseEvent e) {}
            public void mouseEntered(MouseEvent e) {}
            public void mouseExited(MouseEvent e) {}
        }

    View:

        import java.awt.*;
        import javax.swing.*;
        import java.awt.event.*;

        public class View extends Frame implements ActionListener {
            Model model;
            JButton checkAnswer;
            private JPanel button;
            static final int HIT_X[] = { 270, 290, 310, 290, 310 },
                             HIT_Y[] = { 506, 496, 496, 516, 516 };

            public View(String name, int w, int h, Model m) {
                model = m;
                setTitle(name);
                setSize(w, h);
                setResizable(false);
                this.setLayout(new BorderLayout());
                button = new JPanel();
                button.setSize(new Dimension(400, 100));
                button.setVisible(true);
                checkAnswer = new JButton("Check");
                checkAnswer.addActionListener(this);
                checkAnswer.setSize(new Dimension(200, 30));
                button.add(checkAnswer);
                this.add(button, BorderLayout.SOUTH);
                button.setVisible(true);
                for (int i = 0; i < model.SCORE; i++) {
                    for (int j = 0; j < model.LINE; j++) {
                        model.pins[i][j] = new Pin(20, 0);
                        model.pins[i][j].setPosition(j * 50 + 30, 510 - i * 50);
                        model.pins[i + model.SCORE][j] = new Pin(8, 0);
                        model.pins[i + model.SCORE][j].setPosition(HIT_X[j], HIT_Y[j] - i * 50);
                    }
                }
                for (int i = 0; i < model.LINE; i++) {
                    model.pins[model.OPTIONS][i] = new Pin(20, i + 2);
                    model.pins[model.OPTIONS][i].setPosition(370, i * 50 + 56);
                }
                model.combination();
                model.paint(null); // this is the failing call: paint() dereferences its null Graphics argument
            }

            public void actionPerformed(ActionEvent e) {
            }
        }

    Model:

        import java.awt.*;

        public class Model extends Frame {
            static final int LINE = 5, SCORE = 10, OPTIONS = 20;
            Pin pins[][] = new Pin[21][LINE];
            int combination[] = new int[LINE];
            int curPin = 0;
            int turn = 1;
            int repaintPin;
            boolean isUpdate = true, isPlaying = true, isRowFull = false;

            public Model() {
            }

            void fillHole(int color) {
                pins[turn - 1][curPin].setColor(color + 1);
                repaintPins(turn);
                curPin = (curPin + 1) % LINE;
                if (curPin == 0) {
                    isRowFull = true;
                }
            }

            public void paint(Graphics g) {
                g.setColor(new Color(238, 238, 238));
                g.fillRect(0, 0, 400, 590);
                for (int i = 0; i < pins.length; i++) {
                    pins[i][0].paint(g);
                    pins[i][1].paint(g);
                    pins[i][2].paint(g);
                    pins[i][3].paint(g);
                    pins[i][4].paint(g);
                }
            }

            public void update(Graphics g) {
                if (isUpdate) {
                    paint(g);
                } else {
                    isUpdate = true;
                    pins[repaintPin - 1][0].paint(g);
                    pins[repaintPin - 1][1].paint(g);
                    pins[repaintPin - 1][2].paint(g);
                    pins[repaintPin - 1][3].paint(g);
                    pins[repaintPin - 1][4].paint(g);
                }
            }

            void repaintPins(int pin) {
                repaintPin = pin;
                isUpdate = false;
                repaint();
            }

            void check() {
                int junkPegs[] = new int[LINE], junkCode[] = new int[LINE];
                int pegCount = 0, pico = 0;
                for (int i = 0; i < LINE; i++) {
                    junkPegs[i] = pins[turn - 1][i].getColor();
                    junkCode[i] = combination[i];
                }
                for (int i = 0; i < LINE; i++) {
                    if (junkPegs[i] == junkCode[i]) {
                        pins[turn + SCORE][pegCount].setColor(1);
                        pegCount++;
                        pico++;
                        junkPegs[i] = 98;
                        junkCode[i] = 99;
                    }
                }
                for (int i = 0; i < LINE; i++) {
                    for (int j = 0; j < LINE; j++)
                        if (junkPegs[i] == junkCode[j]) {
                            pins[turn + SCORE][pegCount].setColor(2);
                            pegCount++;
                            junkPegs[i] = 98;
                            junkCode[j] = 99;
                            j = LINE;
                        }
                }
                repaintPins(turn + SCORE);
                if (pico == LINE) {
                    isPlaying = false;
                } else if (turn >= 10) {
                    isPlaying = false;
                } else {
                    curPin = 0;
                    isRowFull = false;
                    turn++;
                }
            }

            void combination() {
                for (int i = 0; i < LINE; i++) {
                    combination[i] = 1 + (int) (Math.random() * 5);
                    System.out.print(i + ",");
                }
            }
        }

        class Pin {
            private int color, X, Y, radius;
            private static final Color COLORS[] = {
                Color.black, Color.white, Color.red, Color.yellow,
                Color.green, Color.blue, new Color(7, 254, 250)
            };

            public Pin() {
                X = 0;
                Y = 0;
                radius = 0;
                color = 0;
            }

            public Pin(int r, int c) {
                X = 0;
                Y = 0;
                radius = r;
                color = c;
            }

            public void paint(Graphics g) {
                int x = X - radius;
                int y = Y - radius;
                if (color > 0) {
                    g.setColor(COLORS[color]);
                    g.fillOval(x, y, 2 * radius, 2 * radius);
                } else {
                    g.setColor(new Color(238, 238, 238));
                    g.drawOval(x, y, 2 * radius - 1, 2 * radius - 1);
                }
                g.setColor(Color.black);
                g.drawOval(x, y, 2 * radius, 2 * radius);
            }

            public void setPosition(int x, int y) {
                this.X = x;
                this.Y = y;
            }

            public void setColor(int c) {
                color = c;
            }

            public int getColor() {
                return color;
            }
        }

    Any clues on how to overcome this would be great. Have I divided my code improperly?

    Read the article

  • App Engine: how would you... snapshotting entities

    - by Andrew B.
    Let's say you have two kinds, Message and Contact, related by a db.ListProperty of keys on Message. A user creates a message, adds some contacts as recipients, and emails the message. Later, the user deletes one of the contact entities that was a recipient of the message. Our application should delete the appropriate Contact entity, but we want to preserve the original recipient list for the message that was sent, for the user's records. In essence, we want a snapshot of the message entity at the time it was sent. If we naively delete the contact entity, though, we lose snapshot integrity; if not, we are left with an invalid key. How would you handle this situation, either in controller logic or model changes?

        class User(db.Model):
            email = db.EmailProperty(required=True)

        class Contact(db.Model):
            email = db.EmailProperty(required=True)
            user = db.ReferenceProperty(User, collection_name='contacts')

        class Message(db.Model):
            recipients = db.ListProperty(db.Key)  # contacts
            sender = db.ReferenceProperty(User, collection_name='messages')
            body = db.TextProperty()
            is_emailed = db.BooleanProperty(default=False)

    Read the article

  • How to start MSSQL Server with corrupt model db

    - by Jordan McGuigan
    After moving some databases around (restoring, deleting, etc.) we experienced an issue creating new databases. Specifically, creating a new database in MSSQL Server failed because "The database 'model' is marked RESTORING and is in a state that does not allow recovery to be run". As some online solutions suggested, we tried to stop and start the MSSQL service. The service would not restart because "Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive" (FYI: the drive has 100 GB of free space). We tried restarting the machine MSSQL Server is running on; when the server came back online, we received the same error. We have tried deleting tempdb.mdf and restoring the model db from the templates folder, but neither of these solved the issue. We are unable to connect to the database, even in single-user mode. Many of the online solutions have us running SQL commands against the server, but we are unable to connect (even in single-user mode) to run them. Specific error messages:

        Database 'model' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)

        The SQL Server (MSSQLSERVER) service is starting.
        The SQL Server (MSSQLSERVER) service could not be started.
        A service specific error occurred: 1814.

    We need the server up and running again ASAP.

    Read the article

  • PC doesn't POST with a certain model of PSU

    - by Core Xii
    I have this PC with an Asus P5N32-E SLI motherboard and an Intel Q6600 CPU. I got a new RX-5300 PSU, but it didn't work:

    1. The motherboard power LED is on fine.
    2. When I switch power on, the PC powers up briefly (0.5-1 sec) then shuts down.
    3. From there on out, when I switch power on, it stays on, and all fans and components seem to be receiving power, but the motherboard won't POST. No video output, no PC speaker beeps, nothing.
    4. If I turn the hard switch on the PSU off and then back on again, go to step 2: the PC turns on and then off immediately again, and on subsequent power-ups it stays on but won't POST.

    I disconnected every component but the motherboard, CPU and PSU. Still nothing. I tried three other models of PSU on this PC, both of higher (600) and lower (<300) wattage than the 530 on the RX-5300, and they all work fine. At this point I was convinced the PSU was faulty, so I returned it. When the replacement arrived, it behaved exactly the same. So two different units of the RX-5300, both with the same symptoms, neither working with this motherboard + CPU. Yet three other models of PSU work perfectly fine. The PC store couldn't reproduce my problem with the returned PSU. I tried resetting the CMOS with the jumper. I tried with both the 4 and 4+4 (with and without the extra +4 connected) CPU connectors (curiously, the RX-5300 comes equipped with both). Could it be a statistical probability that I got two units of the same model of PSU that are faulty in the exact same manner? Could the RX-5300 model itself be somehow incompatible with this motherboard? I was under the impression that PSUs were pretty much universal as long as you have the wattage. Could the motherboard be broken in such a way as to work with certain PSUs but not others? What's going on here?

    Read the article

  • Are there any frameworks for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view.
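    Absent a ready-made framework, the underlying pattern is a publish/subscribe channel keyed by the data subset each client watches. A minimal in-process sketch of that idea in C# (the type and member names are illustrative, not from any particular library):

        using System;
        using System.Collections.Generic;

        // Server-side hub: clients subscribe to a topic ("users"), and the server
        // publishes change notifications to every subscriber of that topic.
        public class SubscriptionHub
        {
            private readonly Dictionary<string, List<Action<string>>> subscribers =
                new Dictionary<string, List<Action<string>>>();

            public void Subscribe(string topic, Action<string> onChange)
            {
                if (!subscribers.ContainsKey(topic))
                    subscribers[topic] = new List<Action<string>>();
                subscribers[topic].Add(onChange);
            }

            public void Publish(string topic, string change)
            {
                List<Action<string>> handlers;
                if (subscribers.TryGetValue(topic, out handlers))
                    foreach (var handler in handlers)
                        handler(change); // in a real system this call would cross the network
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                var hub = new SubscriptionHub();
                hub.Subscribe("users", change => Console.WriteLine("client A sees: " + change));
                hub.Subscribe("users", change => Console.WriteLine("client B sees: " + change));
                hub.Publish("users", "added user 'carol'; changed permissions of 'bob'");
            }
        }

    Over an actual network the same shape shows up in message brokers and push libraries; the open question in the post is which framework packages the topic bookkeeping, the transport and the reconnection logic together.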

    Read the article

  • Best way to migrate servers without losing any data and with no downtime(?)

    - by ina
    This is a methodology question from a freelancer, with a corollary on MySQL. Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've lost MySQL data between the time the new server goes up (i.e., all files transferred, system up and ready) and the time I take the old server down (data is still written to the old one until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes. Is there a way for MySQL/root to easily transfer all data that was updated/inserted within a certain time frame?

    Read the article

  • Rails has_one vs belongs_to semantics

    - by Anurag
    I have a model representing a Content item that contains some images. The number of images is fixed, as these image references are very specific to the content. For example, the Content model refers to the Image model twice (profile image, and background image). I am trying to avoid a generic has_many, and sticking to multiple has_one's. The current database structure looks like:

        contents
            - integer:id
            - integer:profile_image_id
            - integer:background_image_id

        images
            - integer:id
            - string:filename
            - integer:content_id

    I just can't figure out how to set up the associations correctly here. The Content model could contain two belongs_to references to an Image, but that doesn't seem semantically right, because ideally an image belongs to the content, or in other words, the content has two images. This is the best I could think of (by breaking the semantics):

        class Content
          belongs_to :profile_image, :class_name => 'Image', :foreign_key => 'profile_image_id'
          belongs_to :background_image, :class_name => 'Image', :foreign_key => 'background_image_id'
        end

    Am I way off, and is there a better way to achieve this association?

    Read the article

  • R: How to write out a data.frame so that I can paste it into SO for others to read?

    - by John
    I have a large data.frame displaying some weird properties when plotted. I'd like to ask a question about it on Stack Overflow; to do that, I'd like to write the data.frame out in a form that I can paste into SO, so that somebody else can easily run it and turn it back into a data.frame object. Is there an easy way to accomplish this? Also, if it is really long, should I use a pastebin instead of pasting it directly here?

    Read the article

  • Instantiating and referencing models in MVC

    - by fig-gnuton
    In MVC, should each model be a global singleton accessible to any view/controller? Or should the models be singletons that are dependency-injected into any component that requires them? Or should a new model instance be created for each component that needs one, in which case events would be used to propagate changes across model instances of the same class?
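    For concreteness, a minimal sketch of the dependency-injected variant, where one model instance is created at the composition root, handed to each component, and a change event keeps every view current (all names below are illustrative, not from the question):

        using System;

        public class UserModel
        {
            private string name = "";
            public event Action Changed = delegate { };

            public string Name
            {
                get { return name; }
                set { name = value; Changed(); } // notify every interested component
            }
        }

        public class UserView
        {
            private readonly UserModel model; // injected, not a global singleton

            public UserView(UserModel model)
            {
                this.model = model;
                model.Changed += Render;
            }

            private void Render() { Console.WriteLine("view shows: " + model.Name); }
        }

        public static class Program
        {
            public static void Main()
            {
                var model = new UserModel();     // the composition root owns the instance
                var viewA = new UserView(model); // both components share it...
                var viewB = new UserView(model);
                model.Name = "alice";            // ...so one change updates both views
            }
        }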

    Read the article

  • C# or Windows equivalent of OS X's Core Data?

    - by Nektarios
    I'm late to the party and have only just now started using Core Data in OS X / Cocoa - it's incredible and is really changing the way I look at things. Is there an equivalent technology in C# or the modern Windows frameworks? I.e., having managed data types where you get saving, data management, deleting and searching all for free? Also wondering if there's anything like this on Linux.

    Read the article

  • Mass data store with SQL Server

    - by Leo
    We need to manage 10,000 GPS devices. Each GPS device uploads a GPS record every 30 seconds, and these records need to be stored in the database (MS SQL Server 2005). Each GPS device's daily record count is: 24 * 60 * 2 = 2,880. The daily record count for 10,000 GPS devices is: 10,000 * 2,880 = 28,800,000. Each GPS record is approximately 160 bytes, so the amount of data per day is: 28,800,000 * 160 = 4.29 GB. We need to hold at least 3 months of GPS data in the database. My questions are:

    1. Can SQL Server 2005 support such a large data store?
    2. How should we plan the data tables? (Store all GPS data in one table? A table per day? A table per GPS device?)

    The GPS record:

        GPSID varchar(21),
        RecvTime datetime,
        GPSTime datetime,
        IsValid bit,
        IsNavi bit,
        Lng float,
        Lat float,
        Alt float,
        Spd smallint,
        Head smallint,
        PulseValue bigint,
        Oil float,
        TSW1 bigint,
        TSW1Mask bigint,
        TSW2 bigint,
        TSW2Mask bigint,
        BSW bigint,
        StateText varchar(200),
        PosText varchar(200),
        UploadType tinyint
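    At roughly 333 rows arriving per second, per-row INSERTs are the first bottleneck; batching them with SqlBulkCopy is the usual relief in .NET against SQL Server 2005. A minimal sketch, assuming a dbo.GpsRecord staging table whose columns mirror the record above (the table name, column subset and batch size are illustrative):

        using System;
        using System.Data;
        using System.Data.SqlClient;

        public static class GpsBatchWriter
        {
            // Buffers incoming records into a DataTable and flushes them in one bulk copy.
            public static void Flush(DataTable batch, string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (var bulk = new SqlBulkCopy(connection))
                    {
                        bulk.DestinationTableName = "dbo.GpsRecord"; // illustrative name
                        bulk.BatchSize = 5000;                       // one round trip per 5,000 rows
                        bulk.WriteToServer(batch);
                    }
                }
                batch.Clear();
            }

            public static DataTable NewBatch()
            {
                var t = new DataTable();
                t.Columns.Add("GPSID", typeof(string));
                t.Columns.Add("RecvTime", typeof(DateTime));
                t.Columns.Add("GPSTime", typeof(DateTime));
                t.Columns.Add("Lng", typeof(double));
                t.Columns.Add("Lat", typeof(double));
                // ...remaining columns omitted for brevity; they mirror the record layout above.
                return t;
            }
        }

    Daily (or monthly) tables then keep any one table's row count bounded, which matters mostly for index maintenance and purging: dropping yesterday's table is far cheaper than deleting 28.8 million rows.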

    Read the article

  • What is the most efficient way to use Core Data?

    - by Eric
    I'm developing an iPad application using Core Data, and was hoping someone could clarify something about Core Data. Right now, I populate my table by making a fetch request for all of my data in viewDidLoad. I'd rather make individual fetch requests in my tableView:cellForRowAtIndexPath:. Can anyone tell me which is more efficient, and why? In other words, is it much less efficient to make lots of small requests as opposed to one big request?

    Read the article

  • Making a query from different related tables using CodeIgniter

    - by fatemeh karam
    I'm using CodeIgniter. As I mentioned, this is part of my view code:

        foreach ($projects_query as $row) // $row indicates the projects
        { ?>
            <tr><td><h3><button type="submit" class="button red-gradient glossy" name="project_click"><?php echo $row->txtTaskName; ?></button></h3></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
            <?php
            foreach ($tasks_query as $row2)
            {
                // if ($row->txtTaskName == "TestProject")
                if ($row->intTaskID == $row2->intInside) // intInside indicates which task (system, subsystem or project) the current task ($row2) belongs to
                {
                    if ($row2->intSummary == 0) // the task (the system) is executable and has no subtasks
                    {
                        // rows from tbl_userteamtask where intTaskID equals the selected row's intTaskID
                        $query_team_user_id = $this->admin_in_out_model->get_user_team_task_query($row2->intTaskID);
                        foreach ($query_team_user_id as $row_teamid)
                        {
                            $query_teamname = $this->admin_in_out_model->get_team_name($row_teamid->intTeamID);
                            $query_fn_ln = $this->admin_in_out_model->get_fn_ln_from_userid($row_teamid->intUserID);
                            foreach ($query_teamname as $row_teamname)
                            { ?>
                                <tr><td></td><td></td><td><h4><?php echo $row2->txtTaskName; ?></h4></td>
                                <td><b><font color='#F33558'><?php echo $row_teamname->txtTeamName; ?></font></b></td>
                            <?php }
                            foreach ($query_fn_ln as $row_f_l_name)
                            { ?>
                                <td><?php echo $row_f_l_name->txtFirstname." ".$row_f_l_name->txtLastname; ?></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>
                            <?php } ?>
                                </tr>
                        <?php }
                    }
                    else
                    { ?>
                        <tr><td></td><td></td><td><h4><?php echo $row2->txtTaskName; ?></h4></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
                    <?php }
                    foreach ($tasks_query as $row_subsystems)
                    {
                        if ($row_subsystems->intInside == $row2->intTaskID) // the task is the subtask of a system (i.e. the task is a subsystem)
                        {
                            if ($row_subsystems->intSummary == 0) // executable task with no subtasks
                            {
                                $query_team_user_id = $this->admin_in_out_model->get_user_team_task_query($row_subsystems->intTaskID);
                                foreach ($query_team_user_id as $row_teamid)
                                { ?>
                                    <tr><?php
                                    $query_teamname = $this->admin_in_out_model->get_team_name($row_teamid->intTeamID);
                                    $query_fn_ln = $this->admin_in_out_model->get_fn_ln_from_userid($row_teamid->intUserID);
                                    foreach ($query_teamname as $row_teamname)
                                    { ?>
                                        <td></td><td></td><td><h5><?php echo $row_subsystems->txtTaskName ?></h5><br/></td>
                                        <td><b><font color='#F33558'><?php echo $row_teamname->txtTeamName; ?></font></b></td>
                                    <?php }
                                    foreach ($query_fn_ln as $row_f_l_name)
                                    { ?>
                                        <td><?php echo $row_f_l_name->txtFirstname." ".$row_f_l_name->txtLastname; ?></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>
                                    <?php } ?>
                                    </tr>
                                <?php }
                            }
                            else
                            { ?>
                                <tr><td></td><td></td><td><h5><?php echo $row_subsystems->txtTaskName ?></h5></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
                            <?php }
                            foreach ($tasks_query as $row_tasks)
                            {
                                if ($row_tasks->intInside == $row_subsystems->intTaskID) // the task is the subtask of a subsystem
                                {
                                    if ($row_tasks->intSummary == 0) // executable task with no subtasks
                                    {
                                        $query_team_user_id = $this->admin_in_out_model->get_user_team_task_query($row_tasks->intTaskID);
                                        foreach ($query_team_user_id as $row_teamid)
                                        { ?>
                                            <tr><?php
                                            $query_teamname = $this->admin_in_out_model->get_team_name($row_teamid->intTeamID);
                                            $query_fn_ln = $this->admin_in_out_model->get_fn_ln_from_userid($row_teamid->intUserID);
                                            foreach ($query_teamname as $row_teamname)
                                            { ?>
                                                <td></td><td></td><td><b><?php echo $row_tasks->txtTaskName; ?></b></td>
                                                <td><b><font color='#F33558'><?php echo $row_teamname->txtTeamName; ?></font></b></td>
                                            <?php }
                                            foreach ($query_fn_ln as $row_f_l_name)
                                            { ?>
                                                <td><?php echo $row_f_l_name->txtFirstname." ".$row_f_l_name->txtLastname; ?></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>
                                            <?php } ?>
                                            </tr>
                                        <?php }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        } ?>

    And in the controller I have:

        $projects_query = $this->admin_in_out_model->get_projects();
        $tasks_query = $this->admin_in_out_model->get_systems();
        $userteamtask = $this->admin_in_out_model->get_user_team_task();
        $data['tasks_query'] = $tasks_query;
        $data['projects_query'] = $projects_query;
        $this->load->view('project_view', $data);

    But as you can see, I'm calling my model functions from within the view. What can I do instead, so that I'm not calling model functions in my view? I should add that my model functions take parameters. These are the model functions:

        function get_projects()
        {
            $this->db->select('*');
            $this->db->from('tbl_task');
            $this->db->where('intInside', '0');
            $query = $this->db->get();
            return $query->result();
        }

        function get_systems()
        {
            $this->db->select('*');
            $this->db->from('tbl_task');
            $this->db->where('intInside <>', '0');
            $query = $this->db->get();
            return $query->result();
        }

        function get_user_team_task_query($task_id) // rows from tbl_userteamtask where intTaskID equals $task_id
        {
            $this->db->select('*');
            $this->db->from('tbl_userteamtask');
            $this->db->where('intTaskID', $task_id);
            $query_teamid = $this->db->get();
            return $query_teamid->result();
        }

        function get_user_team_task() // all rows from tbl_userteamtask
        {
            $this->db->select('*');
            $this->db->from('tbl_userteamtask');
            $query_teamid = $this->db->get();
            return $query_teamid->result();
        }

        function get_team_name($query_teamid)
        {
            $this->db->select('*');
            $this->db->from('tbl_team');
            $this->db->where('intTeamID', $query_teamid);
            $query_teamname = $this->db->get();
            return $query_teamname->result();
        }

        function get_user_name($query_userid)
        {
            $this->db->select('*');
            $this->db->from('tbl_user');
            $this->db->where('intUserID', $query_userid);
            $query_username = $this->db->get();
            return $query_username->result();
        }

        function get_fn_ln_from_userid($selected_id)
        {
            $this->db->select('tbl_user.intUserID, tbl_user.intPersonID, tbl_person.intPersonID, tbl_person.txtFirstname, tbl_person.txtLastname');
            $this->db->from('tbl_user, tbl_person');
            $where = "tbl_user.intPersonID = tbl_person.intPersonID";
            $this->db->where($where);
            $this->db->where('tbl_user.intUserID', $selected_id);
            $query = $this->db->get(); // makes the query against the DB
            return $query->result();
        }

    Do I have to use a subquery? Is this right? I mean, can I do this?

        foreach ($data as $key => $each)
        {
            $data[$key]['team_id'] = $this->get_user_team_task_query($each['intTaskID']);
            foreach ($data[$key]['team_id'] as $key_teamname => $each)
            {
                $data[$key_teamname]['team_name'] = $this->get_team_name($each['intTeamID']);
            }
        }

    The model code:

        foreach ($data as $key => $each)
        {
            $data[$key]['intTaskID'] = $each['intTaskID'];
            $data[$key]['team_id'] = $this->get_user_team_task_query($each['intTaskID']);
            foreach ($data[$key]['team_id'] as $key => $each)
            {
                // fetch the team name and save it in the array
                $data[$key]['team_name'] = $this->get_team_name($each['intTeamID']);
                $data[$key]['user_name'] = $this->get_fn_ln_from_userid($each['intUserID']);
                foreach ($data[$key]['user_name'] as $key => $each)
                {
                    $data[$key]['first_name'] = $each['txtFirstname'];
                    $data[$key]['last_name'] = $each['txtLastname'];
                }
                $data[$key]['first_name'] = $data[$key]['first_name'];
                $data[$key]['last_name'] = $data[$key]['last_name'];
            }
        }

    Read the article

  • Accessing two sides of a user-user relationship in rails

    - by Lowgain
    Basically, I have a users model in my Rails app, and a fanship model to facilitate users becoming 'fans' of each other. In my user model, I have:

        has_many :fanships
        has_many :fanofs, :through => :fanships

    In my fanship model, I have:

        belongs_to :user
        belongs_to :fanof, :class_name => "User", :foreign_key => "fanof_id"

    My fanship table basically consists of :id, :user_id and :fanof_id. This all works fine, and I can see which users a specific user is a fan of like:

        <% @user.fanofs.each do |fan| %>
          #things
        <% end %>

    My question is, how can I get a list of the users that are a fan of this specific user? I'd like it if I could just have something like @user.fans, but if that isn't possible, what is the most efficient way of going about this? Thanks!

    Read the article

  • What happens if a user jumps over 10 versions before updating, and every version had a new data model?

    - by dontWatchMyProfile
    Example: a user installs app v1.0 and adds data. Then the dev submits 10 updates in 10 weeks. After 11 weeks, the user wants v11.0 and grabs a copy from the App Store. Assuming that the app has got 11 .xcdatamodel versions inside, where ***11.xcdatamodel is the current one, what would happen now, since the persistent store of the user is ages old? Would the migration happen 10 times, step by step through every migration iteration? Or does the actual migration of data (let's assume gigabytes of data) happen exactly once, after Core Data (or the persistent store coordinator) has figured out precisely what to do to go from v1.0 to v11.0?

    Read the article

  • Joining the same model twice in a clean way, but making the code reusable

    - by Shako
    I have a model Painting which has a Paintingtitle in each language and a Paintingdescription in each language:

        class Painting < ActiveRecord::Base
          has_many :paintingtitles, :dependent => :destroy
          has_many :paintingdescriptions, :dependent => :destroy
        end

        class Paintingtitle < ActiveRecord::Base
          belongs_to :painting
          belongs_to :language
        end

        class Paintingdescription < ActiveRecord::Base
          belongs_to :painting
          belongs_to :language
        end

        class Language < ActiveRecord::Base
          has_many :paintingtitles, :dependent => :nullify
          has_many :paintingdescriptions, :dependent => :nullify
          has_many :paintings, :through => :paintingtitles
        end

    As you might notice, I reference the Language model from my Painting model via both the Paintingtitle model and the Paintingdescription model. This works for me when getting a list of paintings with their title and description in a specific language:

        cond = {"paintingdescription_languages.code" => language_code,
                "paintingtitle_languages.code" => language_code}
        cond['paintings.publish'] = 1 unless admin
        paginate(
          :all,
          :select => ["paintings.id, paintings.publish, paintings.photo_file_name, paintingtitles.title, paintingdescriptions.description"],
          :joins => "
            INNER JOIN paintingdescriptions ON (paintings.id = paintingdescriptions.painting_id)
            INNER JOIN paintingtitles ON (paintings.id = paintingtitles.painting_id)
            INNER JOIN languages paintingdescription_languages ON (paintingdescription_languages.id = paintingdescriptions.language_id)
            INNER JOIN languages paintingtitle_languages ON (paintingtitle_languages.id = paintingtitles.language_id)
          ",
          :conditions => cond,
          :page => page,
          :per_page => APP_CONFIG['per_page'],
          :order => "id DESC"
        )

    Now I wonder if this is a correct way of doing this. I need to fetch paintings with their title and description in different functions, but I don't want to specify this long join statement each time. Is there a cleaner way, for instance making use of has_many :through? e.g.

        has_many :paintingdescription_languages, :through => :paintingdescriptions, :source => :language
        has_many :paintingtitle_languages, :through => :paintingtitles, :source => :language

    But if I implement the above 2 lines together with the following ones, then only paintingtitles are filtered by language, and not the paintingdescriptions:

        cond = {"languages.code" => language_code}
        cond['paintings.publish'] = 1 unless admin
        paginate(
          :all,
          :select => ["paintings.id, paintings.publish, paintings.photo_file_name, paintingtitles.title, paintingdescriptions.description"],
          :joins => [:paintingdescription_languages, :paintingtitle_languages],
          :conditions => cond,
          :page => page,
          :per_page => APP_CONFIG['per_page'],
          :order => "id DESC"
        )

    Read the article

  • Iterating over a large data set in a long-running Python process - memory issues?

    - by user1094786
    I am working on a long-running Python program (part of it is a Flask API, and the other part a realtime data fetcher). Both of my long-running processes iterate, quite often (the API one might even do so hundreds of times a second), over large data sets (second-by-second observations of certain economic series, for example 1-5 MB worth of data or even more). They also interpolate, compare and do calculations between series, etc. What techniques can I practice, for the sake of keeping my processes alive, when iterating over / passing as parameters / processing these large data sets? For instance, should I use the gc module and collect manually? Any advice would be appreciated. Thanks!

    Read the article

  • Many Buttons on a Page, Need to send back Unique Post Data with each

    - by CoffeeAddict
    I'm listing out a bunch of cars, each with a button next to it that, when clicked, needs to perform a GET but also send back that item's model.Name:

        @using (Html.BeginForm("GetCarUrl", "Car", FormMethod.Get, new { model = Model }))
        {
            if (Model.Cars != null && Model.Cars.Count > 0)
            {
                foreach (CarContent car in Model.Cars)
                {
                    <p>@car.Name</p>
                }
                <input type="button" value="Get Car Url" class="submit" />
            }
        }

    So the page renders a bunch of hyperlinks and buttons:

        [hyperlink1] [submit]
        [hyperlink2] [submit]
        [hyperlink3] [submit]
        [hyperlink4] [submit]
        [hyperlink5] [submit]
        ...

    When a user clicks on any of the submits, I need to pass back its corresponding @car.CarType for that specific hyperlink. Not sure how to go about this. My action method expects a @car.CarType for that specific car hyperlink to be sent to it.
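    One hedged way to get a per-car value back to the action: render one small GET form per car, carrying the value in a hidden field, so each submit sends only its own car's data. A sketch under the question's own names; the carType parameter name is an assumption, not from the question:

        @if (Model.Cars != null && Model.Cars.Count > 0)
        {
            foreach (CarContent car in Model.Cars)
            {
                <p>@car.Name</p>
                using (Html.BeginForm("GetCarUrl", "Car", FormMethod.Get))
                {
                    @Html.Hidden("carType", car.CarType)  // travels as ?carType=... on submit
                    <input type="submit" value="Get Car Url" class="submit" />
                }
            }
        }

    The matching action then binds the query-string value by name (hypothetical signature):

        public ActionResult GetCarUrl(string carType)
        {
            // look up the URL for this car type and return/redirect accordingly
            return RedirectToAction("Index");
        }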

    Read the article

  • WCF – interchangeable data-contract types

    - by nmarun
    In a WSDL-based environment, unlike in the CLR world, we pass around the 'state' of an object and not a reference to an object. Well, firstly, what does 'state' mean, and does this also mean that we can send a struct where a class is expected (or vice versa) as long as their 'state' is one and the same? Let's see. So I have an operation contract defined as below:

        [ServiceContract]
        public interface ILearnWcfServiceExtend : ILearnWcfService
        {
            [OperationContract]
            Employee SaveEmployee(Employee employee);
        }

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Quite a simplistic operation there (which translates to 'absolutely no business value'). Now, the data contract Employee mentioned above is a struct.

        public struct Employee
        {
            public int EmployeeId { get; set; }

            public string FName { get; set; }
        }

    After compilation and consumption of this service, my proxy (in the Reference.cs file) looks like below (I've ignored the rest of the details just to avoid unwanted confusion):

        public partial struct Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    I call the service with the code below:

        private static void CallWcfService()
        {
            Employee employee = new Employee { FName = "A" };
            Console.WriteLine("IsValueType: {0}", employee.GetType().IsValueType);
            Console.WriteLine("IsClass: {0}", employee.GetType().IsClass);
            Console.WriteLine("Before calling the service: {0} - {1}", employee.EmployeeId, employee.FName);
            employee = LearnWcfServiceClient.SaveEmployee(employee);
            Console.WriteLine("Return from the service: {0} - {1}", employee.EmployeeId, employee.FName);
        }

    The output is: (output screenshot in the original post)

    I now change my Employee type from a struct to a class in the proxy class and run the application:

        public partial class Employee : System.Runtime.Serialization.IExtensibleDataObject, System.ComponentModel.INotifyPropertyChanged

    The output this time is: (output screenshot in the original post)

    The state of an object refers to its composition (the properties and the values of those properties), not to whether it is a reference type (class) or a value type (struct). And as shown above, we're actually passing an object by its state and not by reference. Continuing on the same topic of 'type-interchangeability', WCF treats two data contracts as equivalent if they have the same 'wire-representation'. We can arrange that using the DataContract and DataMember attributes' Name property.

        [DataContract]
        public struct Person
        {
            [DataMember]
            public int Id { get; set; }

            [DataMember]
            public string FirstName { get; set; }
        }

        [DataContract(Name = "Person")]
        public class Employee
        {
            [DataMember(Name = "Id")]
            public int EmployeeId { get; set; }

            [DataMember(Name = "FirstName")]
            public string FName { get; set; }
        }

    I've created two data contracts with the exact same wire-representation. Just remember that the names and the types of the data members need to match to be considered equivalent. The question then arises as to what gets generated in the proxy class. Despite us declaring two data contracts (Person and Employee), only one gets emitted: Person. This is because we're saying that the Employee type has the same wire-representation as the Person type.

    Note also that the signature of the SaveEmployee operation changes on the proxy side:

        [System.CodeDom.Compiler.GeneratedCodeAttribute("System.ServiceModel", "4.0.0.0")]
        [System.ServiceModel.ServiceContractAttribute(ConfigurationName="ServiceProxy.ILearnWcfServiceExtend")]
        public interface ILearnWcfServiceExtend
        {
            [System.ServiceModel.OperationContractAttribute(Action="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployee", ReplyAction="http://tempuri.org/ILearnWcfServiceExtend/SaveEmployeeResponse")]
            ClientApplication.ServiceProxy.Person SaveEmployee(ClientApplication.ServiceProxy.Person employee);
        }

    But on the service side, SaveEmployee still accepts and returns an Employee data contract:

        [ServiceBehavior]
        public class LearnWcfService : ILearnWcfServiceExtend
        {
            public Employee SaveEmployee(Employee employee)
            {
                employee.EmployeeId = 123;
                return employee;
            }
        }

    Despite all these changes, our output remains the same as the last one. This is type-interchangeability at work!

    Here's one more thing to ponder. Our Person type is a struct and our Employee type is a class. Then how is it that the Person type got emitted as a 'class' in the proxy? It's worth mentioning that the WSDL describes a type called Employee and does not say whether it is a class or a struct (see the SOAP message below):

        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                          xmlns:tem="http://tempuri.org/"
                          xmlns:ser="http://schemas.datacontract.org/2004/07/ServiceApplication">
          <soapenv:Header/>
          <soapenv:Body>
            <tem:SaveEmployee>
              <!--Optional:-->
              <tem:employee>
                <!--Optional:-->
                <ser:EmployeeId>?</ser:EmployeeId>
                <!--Optional:-->
                <ser:FName>?</ser:FName>
              </tem:employee>
            </tem:SaveEmployee>
          </soapenv:Body>
        </soapenv:Envelope>

    There are some differences between how 'Add Service Reference' and svcutil.exe generate the proxy class, but it turns out both do some kind of reflection to determine the type of the data contract and emit the code accordingly. So since the Employee type is a class, the proxy 'Person' type gets generated as a class. In fact, reflecting on the svcutil.exe application, you'll see that there are a couple of places where a flag actually determines a type as a class or a struct. One example is in the ExportISerializableDataContract method in the System.Runtime.Serialization.CodeExporter class. It seems these flags have a say in deciding whether the type gets emitted as a struct or a class. This behavior is different if you use the WSDL tool, though. The WSDL tool does not do any kind of reflection of the data contract / serialized type; it emits the type as a class by default. You can check this using the two command lines below: (command-line screenshots in the original post)

    Note to self: remember 'state' and type-interchangeability when traversing through the WSDL planet!

    Read the article

  • How do I set up MVP for a Winforms solution?

    - by JonWillis
    Question moved from Stackoverflow - http://stackoverflow.com/questions/4971048/how-do-i-set-up-mvp-for-a-winforms-solution

    I have used MVP and MVC in the past, and I prefer MVP as it controls the flow of execution so much better in my opinion. I have created my infrastructure (datastore/repository classes) and use them without issue when hard-coding sample data, so now I am moving onto the GUI and preparing my MVP.

    Section A

    1. I have seen MVP using the view as the entry point: in the view's constructor it creates the presenter, which in turn creates the model, wiring up events as needed.
    2. I have also seen the presenter as the entry point: a view, model and presenter are created, and the presenter is given a view and model object in its constructor to wire up the events.
    3. As in 2, but the model is not passed to the presenter. Instead the model is a static class where methods are called and responses returned directly.

    Section B

    In terms of keeping the view and model in sync, I have seen:

    1. Whenever a value in the view changes (e.g. a TextChanged event in .NET/C#), the view fires a DataChangedEvent which is passed through into the model, to keep it in sync at all times. Where the model changes (e.g. a background event it listens to), the view is updated via the same idea of raising a DataChangedEvent. When a user wants to commit changes, a SaveEvent fires, passing through into the model to make the save. In this case the model mimics the view's data and processes actions.
    2. Similar to B1, but the view does not sync with the model all the time. Instead, when the user wants to commit changes, SaveEvent fires and the presenter grabs the latest details and passes them into the model. In this case the model does not know about the view's data until it is required to act upon it, at which point it is passed all the needed details.

    Section C

    Displaying of business objects in the view, i.e. an object (MyClass), not primitive data (int, double):

    1. The view has property fields for all the data it will display, typed as domain/business objects. For example, view.Animals exposes an IEnumerable<IAnimal> property, even though the view processes these into nodes in a TreeView. For the selected animal it would expose SelectedAnimal as an IAnimal property.
    2. The view has no knowledge of domain objects; it exposes properties for primitive/framework (.NET/Java) types only. In this instance the presenter passes an adapter object the domain object, and the adapter translates a given business object into the controls visible on the view. In this instance the adapter must have access to the actual controls on the view, not just any view, so it becomes more tightly coupled.

    Section D

    Multiple views used to create a single control, i.e. you have a complex view with a simple model, like saving objects of different types. You could have a menu system at the side; with each click on an item, the appropriate controls are shown.

    1. You create one huge view that contains all of the individual controls, which are exposed via the view's interface.
    2. You have several views: one view for the menu and a blank panel. This view creates the other views required but does not display them (visible = false); it also implements the interface for each view it contains (i.e. child views) so it can expose them to one presenter. The blank panel is filled with the other views (Controls.Add(myview) and myview.visible = true). The events raised in these child views are handled by the parent view, which in turn passes the event to the presenter, and vice versa for supplying events back down to child elements.
    3. Each view, be it the main parent or a smaller child view, is wired into its own presenter and model. You can literally just drop a view control into an existing form and it will have the functionality ready; it just needs wiring into a presenter behind the scenes.

    Section E

    Should everything have an interface? Based on how the MVP is done in the above examples, the answers may differ, as they might not be cross-compatible.

    1. Everything has an interface: the View, Presenter and Model. Each of these then obviously has a concrete implementation, even if you only have one concrete view, model and presenter.
    2. The View and Model have an interface. This allows the views and models to differ. The presenter creates/is given view and model objects, and it just serves to pass messages between them.
    3. Only the View has an interface. The Model has static methods and is not created, thus there is no need for an interface. If you want a different model, the presenter calls a different set of static class methods. Being static, the Model has no link to the presenter.

    Personal thoughts

    From all the different variations I have presented (most of which I have probably used in some form, and I am sure there are more), I prefer A3, as it keeps business logic reusable outside just MVP, and B2, for less data duplication and fewer events being fired. I prefer C1 for not adding another class; sure, it puts a small amount of non-unit-testable logic into a view (how a domain object is visualised), but this could be code-reviewed, or simply viewed in the application. If the logic were complex I would agree to an adapter class, but not in all cases. For Section D, I feel D1 creates a view that is too big, at least for a menu example. I have used D2 and D3 before. The problem with D2 is that you end up having to write lots of code to route events to and from the presenter to the correct child view, and it's not drag/drop compatible; each new control needs more wiring in to support the single presenter. D3 is my preferred choice, but it adds yet more classes, as presenters and models, to deal with the view, even if the view happens to be very simple or has no need to be reused. I think a mixture of D2 and D3 is best, based on circumstances. As for Section E, I think everything having an interface could be overkill. I already do it for domain/business objects and often see no advantage in the 'design' by doing so, but it does help in mocking objects in tests. Personally I would see E2 as the classic solution, although I have seen E3 used in 2 projects I have worked on previously.

    Question

    Am I implementing MVP correctly? Is there a right way of going about it? I've read Martin Fowler's work that has variations, and I remember when I first started doing MVC: I understood the concept, but could not originally work out where the entry point is. Everything has its own function, but what controls and creates the original set of MVC objects?
    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput, with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high-priority packet, then it has to be sent on its way before lower-priority packets. As a consequence of prioritizing high-priority packets, lower-priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator: the router is most likely in a remote location, and it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

    How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low-priority packet. Ah, this sounds very much like a database problem. For each packet, you have to minimally:

    · Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability, etc.
    · Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data FTP).
    · Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
    · If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
    · Update various statistics about the packet.

    In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction, so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high-performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.

    Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability: if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering; there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do: choose BDB, so I can focus on my business problem. What will you do?
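    The per-packet pattern the post describes (look up the route and SLA, forward or hold, update statistics, all under one transaction) can be sketched against a generic embedded key-value store. The IKeyValueStore/ITransaction interface below is a stand-in assumed for illustration, not Berkeley DB's actual API:

        using System;

        // Stand-in abstractions for an embedded transactional store (hypothetical, not the BDB API).
        public interface ITransaction : IDisposable
        {
            void Commit();
        }

        public interface IKeyValueStore
        {
            ITransaction BeginTransaction();
            string Get(ITransaction tx, string key);
            void Put(ITransaction tx, string key, string value);
        }

        public class PacketProcessor
        {
            private readonly IKeyValueStore store;

            public PacketProcessor(IKeyValueStore store) { this.store = store; }

            public void Process(string destination, string owner, string payload)
            {
                using (var tx = store.BeginTransaction())
                {
                    // Route and SLA lookups happen inside the same transaction,
                    // so the next hop cannot change mid-send.
                    string nextHop = store.Get(tx, "route:" + destination);
                    string priority = store.Get(tx, "sla:" + owner);

                    if (priority == "low")
                        store.Put(tx, "held:" + destination, payload); // hold back low-priority traffic
                    else
                        Send(nextHop, payload);

                    // The statistics update commits atomically with the routing decision.
                    store.Put(tx, "stats:" + owner, DateTime.UtcNow.ToString("o"));
                    tx.Commit();
                }
            }

            private static void Send(string nextHop, string payload)
            {
                Console.WriteLine("forwarding via " + nextHop);
            }
        }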

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server) or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power users soon switch to using a SQL Server database because it offers better performance and data sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   28                 19.0                8115                  8115
        MDB          114                77.6                1449                  1449

    (As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

    Only the original developers understand the data

    What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version, or just from the UI version? Each question could skew the data 10-fold either way, and the answers are known only by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data; product managers cannot interpret the data unaided.

    Most of the data is from uninterested users

    About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, and how many were JUST HAPPENING to use this MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further, asking what percentage of each percentage meet our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

    You can't find out why they used this feature

    Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature set. If we listened uncritically to metrics at RedGate, we would eliminate the most important and more complex features, the ones people actually buy the software for, leaving just big buttons on the main page and the About box.

    "Ah, that's interesting!" rather than "Ah, that's actionable!"

    People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless collecting it.

    People get obsessed with significance levels

    The first thing that lots of people do with this data is run a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

    Confirmation bias prevents objectivity

    If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

    There are always multiple plausible ways to interpret/action any data

    Let's say we segment the above data, and get this:

    Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   13                 9.0                 1115                  1115
        MDB          5                  4.2                 449                   449

    Trial users:

        Parameter    Number of users    % of total users    Number of sessions    Number of usages
        SQL Server   15                 10.0                7000                  7000
        MDB          114                77.6                1000                  1000

    How do you interpret this data? It's one of:

    1. Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford, or unwilling to buy, our software. Therefore, ditch MDB support.
    2. Our MDB support is so poor and buggy that our massive MDB user base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
    3. People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
    4. We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.

    To choose an interpretation you need to segment again. And again. And again, and again.

    Opting out is correlated with feature usage

    Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

    It's not all doom and gloom. There are some things metrics can answer well:

    1. Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
    2. Minor optimizations. Is the text box big enough for average user input?
    3. Performance data. How long does our app take to start? How many databases does the average user have on their server?

    As you can see, questions about who the user is, rather than what the user does, are easier to answer and action.

    Conclusion

    Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.

    Read the article
