Search Results

Search found 13516 results on 541 pages for 'common dialog'.


  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail: the most common memory leaks for web applications involve circular references between the JavaScript script engine and the browsers' C++ objects implementing the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM infrastructure, or between the JavaScript engine and Firefox XPCOM infrastructure).

    It lists two examples of circular reference patterns:

    - DOM element → event handler → closure scope → DOM
    - DOM element → via expando → intermediary object → DOM element

    However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it: the DOM element references the event handler, and the event handler references the DOM (either directly or indirectly). In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue.

    Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified:

    - What browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected.
    - Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak?

        // Create an expando that references its own element.
        var elem = document.getElementById('foo');
        elem.myself = elem;

        // Create an event handler that references its own element.
        var elem = document.getElementById('foo');
        elem.onclick = function() { elem.style.display = 'none'; };

    - If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?

    Read the article

  • Function Point Analysis -- a seriously overestimating technique?

    - by kizzx2
    I know questions about FPA has been asked numerous times before, but this time I'm taking a more analytical angle at it, backed up with data. 1. First, some data This question is based on a tutorial. He had a "Sample Count" section where he demonstrated it step by step. You can see some screenshots of his sample application here. In the end, he calculated the unadjusted FP to be 99. There is another article on InformIT with industry data on typical hour/FP. It ranges from 2 hours/FP to 27.4 hours/FP. Let's try to stick with 2 for the moment (since SO readers are probably the more efficient crowd :p). 2. Reality check!? Now just check out the screenshots again. Do a little math here 99 * 2 = 198 hours 198 hours / 40 hours per week = 5 weeks Seriously? That sample application is going to take 5 weeks to implement? Is it just my feeling that it wouldn't take any decent programmer longer than one week (I"m not even saying weekend) to have it completed? Now let's try estimating the cost of the project. We'll use New York's minimum wage at the moment (Wikipedia), which is $7.25 198 * 7.25 = $1435.5 From what I could see from the screenshots, this application is a small excel-improvement app. I could have bought MS Office Pro for 200 bucks which gives me greater interoperability (.xls files) and flexibility (spreadsheets). (For the record, that same Web site has another article discussing productivity. It seems like they typically use 4.2 hours/FP, which gives us even more shocking stats: 99 * 4.2 = 415 hours = 10 weeks = almost 3 whopping months! 415 hours * $7.25 = $3000 zomg (That's even assuming that all our poor coders get the minimum wage!) 3. Am I missing something here? Right now, I could come up with several possible explanation: FPA is really only suited for bigger projects (1000+ FPs) so it becomes extremely inaccurate at smaller scale. The hours/FP metric fluctuates abruptly from team to team, project to project. For a small project like this, we could have used something like 0.5 hour/FP or something. (Now this kind of makes the whole estimation thing pointless, unless my firm does the same type of projects for several years with the same team, not really common.) From my experience with several software metrics, Function Point is really not a lightweight metric. If the hour/FP thing fluctuates so much, then what's the point, maybe I could have gone with User Story Points which is a lot faster to get and arguably almost as uncertain. What would be the FP experts' answers to this?

    Read the article

  • Polymorphic :has_many, :through as module in Rails 3.1 plugin

    - by JohnMetta
    I've search everywhere for a pointer to this, but can't find one. Basically, I want to do what everyone else wants to do when they create a polymorphic relationship in a :has_many, :through way… but I want to do it in a module. I keep getting stuck and think I must be overlooking something simple. To wit: module ActsPermissive module PermissiveUser def self.included(base) base.extend ClassMethods end module ClassMethods def acts_permissive has_many :ownables has_many :owned_circles, :through => :ownables end end end class PermissiveCircle < ActiveRecord::Base belongs_to :ownable, :polymorphic => true end end With a migration that looks like this: create_table :permissive_circles do |t| t.string :ownable_type t.integer :ownable_id t.timestamps end The idea, of course, is that whatever loads acts_permissive will be able to have a list of circles that it owns. For simple tests, I have it "should have a list of circles" do user = Factory :user user.owned_circles.should be_an_instance_of Array end which fails with: Failure/Error: @user.circles.should be_an_instance_of Array NameError: uninitialized constant User::Ownable I've tried: using :class_name => 'ActsPermissive::PermissiveCircle' on the has_many :ownables line, which fails with: Failure/Error: @user.circles.should be_an_instance_of Array ActiveRecord::HasManyThroughSourceAssociationNotFoundError: Could not find the source association(s) :owned_circle or :owned_circles in model ActsPermissive::PermissiveCircle. Try 'has_many :owned_circles, :through => :ownables, :source => <name>'. Is it one of :ownable? while following the suggestion and setting :source => :ownable fails with Failure/Error: @user.circles.should be_an_instance_of Array ActiveRecord::HasManyThroughAssociationPolymorphicSourceError: Cannot have a has_many :through association 'User#owned_circles' on the polymorphic object 'Ownable#ownable' Which seems to suggest that doing things with a non-polymorphic-through is necessary. So I added a circle_owner class similar to the setup here: module ActsPermissive class CircleOwner < ActiveRecord::Base belongs_to :permissive_circle belongs_to :ownable, :polymorphic => true end module PermissiveUser def self.included(base) base.extend ClassMethods end module ClassMethods def acts_permissive has_many :circle_owners, :as => :ownable has_many :circles, :through => :circle_owners, :source => :ownable, :class_name => 'ActsPermissive::PermissiveCircle' end end class PermissiveCircle < ActiveRecord::Base has_many :circle_owners end end With a migration: create_table :permissive_circles do |t| t.string :name t.string :guid t.timestamps end create_table :circle_owner do |t| t.string :ownable_type t.string :ownable_id t.integer :permissive_circle_id end which still fails with: Failure/Error: @user.circles.should be_an_instance_of Array NameError: uninitialized constant User::CircleOwner Which brings us back to the beginning. How can I do what seems to be a rather common polymorphic :has_many, :through on a module? Alternatively, is there a good way to allow an object to be collected by arbitrary objects in a similar way that will work with a module?

    Read the article

  • How to learn proper C++?

    - by Chris
    While reading a long series of really, really interesting threads, I've come to a realization: I don't think I really know C++. I know C, I know classes, I know inheritance, I know templates (& the STL) and I know exceptions. Not C++. To clarify, I've been writing "C++" for more than 5 years now. I know C, and I know that C and C++ share a common subset. What I've begun to realize, though, is that more times than not, I wind up treating C++ something vaguely like "C with classes," although I do practice RAII. I've never used Boost, and have only read up on TR1 and C++0x - I haven't used any of these features in practice. I don't use namespaces. I see a list of #defines, and I think - "Gracious, that's horrible! Very un-C++-like," only to go and mindlessly write class wrappers for the sake of it, and I wind up with large numbers (maybe a few per class) of static methods, and for some reason, that just doesn't seem right lately. The professional in me yells "just get the job done," the academic yells "you should write proper C++ when writing C++" and I feel like the point of balance is somewhere in between. I'd like to note that I don't want to program "pure" C++ just for the sake of it. I know several languages. I have a good feel for what "Pythonic" is. I know what clean and clear PHP is. Good C code I can read and write better than English. The issue is that I learned C by example, and picked up C++ as a "series of modifications" to C. And a lot of my early C++ work was creating class wrappers for C libraries. I feel like my own personal C-heavy background while learning C++ has sort of... clouded my acceptance of C++ in it's own right, as it's own language. Do the weathered C++ lags here have any advice for me? Good examples of clean, sharp C++ to learn from? What habits of C does my inner-C++ really need to break from? My goal here is not to go forth and trumpet "good" C++ paradigm from rooftops for the sake of it. C and C++ are two different languages, and I want to start treating them that way. How? Where to start? Thanks in advance! Cheers, -Chris

    Read the article

  • What are the weaknesses of this user authentication method?

    - by byronh
    I'm developing my own PHP framework. It seems all the security articles I have read use vastly different methods for user authentication than I do so I could use some help in finding security holes. Some information that might be useful before I start. I use mod_rewrite for my MVC url's. Passwords are sha1 and md5 encrypted with 24 character salt unique to each user. mysql_real_escape_string and/or variable typecasting on everything going in, and htmlspecialchars on everything coming out. Step-by step process: Top of every page: session_start(); session_regenerate_id(); If user logs in via login form, generate new random token to put in user's MySQL row. Hash is generated based on user's salt (from when they first registered) and the new token. Store the hash and plaintext username in session variables, and duplicate in cookies if 'Remember me' is checked. On every page, check for cookies. If cookies set, copy their values into session variables. Then compare $_SESSION['name'] and $_SESSION['hash'] against MySQL database. Destroy all cookies and session variables if they don't match so they have to log in again. If login is valid, some of the user's information from the MySQL database is stored in an array for easy access. So far, I've assumed that this array is clean so when limiting user access I refer to user.rank and deny access if it's below what's required for that page. I've tried to test all the common attacks like XSS and CSRF, but maybe I'm just not good enough at hacking my own site! My system seems way too simple for it to actually be secure (the security code is only 100 lines long). What am I missing? I've also spent alot of time searching for the vulnerabilities with mysql_real_escape string but I haven't found any information that is up-to-date (everything is from several years ago at least and has apparently been fixed). All I know is that the problem was something to do with encoding. If that problem still exists today, how can I avoid it? Any help will be much appreciated.

    Read the article

  • Get rid of JFreeChart ChartPanel unnecessary space

    - by ryvantage
    I am trying to get a JFreeChart ChartPanel to remove unwanted extra space between the edge of the panel and the graph itself. To best illustrate, here's a SSCCE (with JFreeChart installed): public static void main(String[] args) { JPanel panel = new JPanel(new GridBagLayout()); GridBagConstraints gbc = new GridBagConstraints(); gbc.fill = GridBagConstraints.BOTH; gbc.gridwidth = 1; gbc.gridheight = 1; gbc.weightx = 1; gbc.weighty = 1; gbc.gridy = 1; gbc.gridx = 1; panel.add(createChart("Sales", Chart_Type.DOLLARS, 100000, 115000), gbc); gbc.gridx = 2; panel.add(createChart("Quotes", Chart_Type.DOLLARS, 250000, 240000), gbc); gbc.gridx = 3; panel.add(createChart("Profits", Chart_Type.PERCENTAGE, 40.00, 38.00), gbc); JFrame frame = new JFrame(); frame.add(panel); frame.setSize(800, 300); frame.setLocationRelativeTo(null); frame.setVisible(true); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } private static ChartPanel createChart(String title, Chart_Type type, double goal, double actual) { double maxValue = goal * 2; double yellowToGreenNum = goal; double redToYellowNum = goal * .75; DefaultValueDataset dataset = new DefaultValueDataset(actual); JFreeChart jfreechart = createChart(dataset, Math.max(actual, maxValue), redToYellowNum, yellowToGreenNum, title, type); ChartPanel chartPanel = new ChartPanel(jfreechart); chartPanel.setBorder(new LineBorder(Color.red)); return chartPanel; } private static JFreeChart createChart(ValueDataset valuedataset, Number maxValue, Number redToYellowNum, Number yellowToGreenNum, String title, Chart_Type type) { MeterPlot meterplot = new MeterPlot(valuedataset); meterplot.setRange(new Range(0.0D, maxValue.doubleValue())); meterplot.addInterval(new MeterInterval(" Goal Not Met ", new Range(0.0D, redToYellowNum.doubleValue()), Color.lightGray, new BasicStroke(2.0F), new Color(255, 0, 0, 128))); meterplot.addInterval(new MeterInterval(" Goal Almost Met ", new Range(redToYellowNum.doubleValue(), yellowToGreenNum.doubleValue()), Color.lightGray, new BasicStroke(2.0F), new Color(255, 255, 0, 64))); meterplot.addInterval(new MeterInterval(" Goal Met ", new Range(yellowToGreenNum.doubleValue(), maxValue.doubleValue()), Color.lightGray, new BasicStroke(2.0F), new Color(0, 255, 0, 64))); meterplot.setNeedlePaint(Color.darkGray); meterplot.setDialBackgroundPaint(Color.white); meterplot.setDialOutlinePaint(Color.gray); meterplot.setDialShape(DialShape.CHORD); meterplot.setMeterAngle(260); meterplot.setTickLabelsVisible(false); meterplot.setTickSize(maxValue.doubleValue() / 20); meterplot.setTickPaint(Color.lightGray); meterplot.setValuePaint(Color.black); meterplot.setValueFont(new Font("Dialog", Font.BOLD, 0)); meterplot.setUnits(""); if(type == Chart_Type.DOLLARS) meterplot.setTickLabelFormat(NumberFormat.getCurrencyInstance()); else if(type == Chart_Type.PERCENTAGE) meterplot.setTickLabelFormat(NumberFormat.getPercentInstance()); JFreeChart jfreechart = new JFreeChart(title, JFreeChart.DEFAULT_TITLE_FONT, meterplot, false); return jfreechart; } enum Chart_Type { DOLLARS, PERCENTAGE } If you resize the frame, you can see that you cannot make the edge of the graph go to the edge of the panel (the panels are outlined in red). Especially on the bottom - there is always a gap between the bottom the graph and the bottom of the panel. Is there a way to make the graph fill the entire area? Is there a way to at least guarantee that it is touching one edge of the panel (i.e., it is touching the top and bottom or the left and right) ??
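
    A hedged note on where that gap usually comes from (this is not stated in the question itself): JFreeChart keeps padding between the chart edge and the plot, the plot keeps its own insets, and ChartPanel rescales a cached image rather than redrawing outside its minimum/maximum draw sizes. Zeroing the first two and widening the third, roughly as in the sketch below (package names assume JFreeChart 1.0.x), tends to tighten the red-bordered panels:

        import org.jfree.chart.ChartPanel;
        import org.jfree.chart.JFreeChart;
        import org.jfree.ui.RectangleInsets; // JFreeChart 1.0.x package; assumption

        class ChartTightener {
            // Apply to the chart built in the question's createChart(...) and to the
            // ChartPanel after it is constructed.
            static void tighten(JFreeChart chart, ChartPanel panel) {
                // Padding the chart reserves between its own edge and the plot area.
                chart.setPadding(new RectangleInsets(0.0, 0.0, 0.0, 0.0));
                // The plot's own margin (MeterPlot inherits setInsets from Plot).
                chart.getPlot().setInsets(new RectangleInsets(0.0, 0.0, 0.0, 0.0));
                // Outside these bounds ChartPanel scales a cached image instead of
                // redrawing, which shows up as extra apparent margin on resize.
                panel.setMinimumDrawWidth(0);
                panel.setMinimumDrawHeight(0);
                panel.setMaximumDrawWidth(Integer.MAX_VALUE);
                panel.setMaximumDrawHeight(Integer.MAX_VALUE);
            }
        }

    Note that the dial is drawn as a chord (DialShape.CHORD in the code above), so some blank space around the arc is inherent to the plot shape; this mainly helps the plot reach the panel edges rather than fill every corner.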

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity Framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? I am trying for our next release to come up with a performant way to show the placed orders for the logged on customer. Of course paging is always a good technique to use when a lot of data is available I would like to see an answer without any paging techniques. Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace for statusses and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple orderstatusses stored in OrderStatusHistory. In my testscenario I am using a customer which has 100+ Orders each with about 5 records in the OrderStatusHistory-table. I would for now like to see all orders in one page not using paging where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory; the record with the highest Id for the given OrderId). There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried. Trying to do Include() when getting Orders but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all orderstatusses in the history table. So all statusses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database. Having 2 computed columns on the database: LastStatus, LastStatusInformation and a regular Linq-Query which gets those columns which are available through the Entity-model. The problem with this approach is the fact that those computed columns are determined using a scalar function which can not be changed without removing the formula from the computed column, etc... In the end I am very familiar with SQL and Stored procedures, but since the rest of the data-layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this: WITH cte (RN, OrderId, [Status], Information) AS ( SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC), OrderId, [Status], Information FROM OrderStatus ) SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.* FROM [Order] o INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1 WHERE CustomerId = @CustomerId ORDER BY 1 DESC; which returns all orders for the customer with the statusinformation provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?

    Read the article

  • Separate specific #ifdef branches

    - by detly
    In short: I want to generate two different source trees from the current one, based only on one preprocessor macro being defined and another being undefined, with no other changes to the source. If you are interested, here is my story... In the beginning, my code was clean. Then we made a new product, and yea, it was better. But the code saw only the same peripheral devices, so we could keep the same code. Well, almost. There was one little condition that needed to be changed, so I added: #if defined(PRODUCT_A) condition = checkCat(); #elif defined(PRODUCT_B) condition = checkCat() && checkHat(); #endif ...to one and only one source file. In the general all-source-files-include-this header file, I had: #if !(defined(PRODUCT_A)||defined(PRODUCT_B)) #error "Don't make me replace you with a small shell script. RTFM." #endif ...so that people couldn't compile it unless they explicitly defined a product type. All was well. Oh... except that modifications were made, components changed, and since the new hardware worked better we could significantly re-write the control systems. Now when I look upon the face of the code, there are more than 60 separate areas delineated by either: #ifdef PRODUCT_A ... #else ... #endif ...or the same, but for PRODUCT_B. Or even: #if defined(PRODUCT_A) ... #elif defined(PRODUCT_B) ... #endif And of course, sometimes sanity took a longer holiday and: #ifdef PRODUCT_A ... #endif #ifdef PRODUCT_B ... #endif These conditions wrap anywhere from one to two hundred lines (you'd think that the last one could be done by switching header files, but the function names need to be the same). This is insane. I would be better off maintaining two separate product-based branches in the source repo and porting any common changes. I realise this now. Is there something that can generate the two different source trees I need, based only on PRODUCT_A being defined and PRODUCT_B being undefined (and vice-versa), without touching anything else (ie. no header inclusion, no macro expansion, etc)?

    Read the article

  • How do I add a loading image using jQuery?

    - by user244394
    I'm working on a form, <form id="myform" class="form"> </form> that gets submitted to the server using jquery ajax. How can I refresh the form on success to show the updated form information and add a spinner until the form loads? here is my html and jquery snippet <div class="container"> <div class="page-header"> <div class="span2"> <!--Sidebar content--> <img src="img/emc_logo.png" title="EMC" > </div> <div class="span6"> <h2 class="form-wizard-heading">Configuration</h2> </div> </div> <form id="myform" class="form"> </form> </div> <!-- /container --> <!-- Modal --> <div id="myModal" class="modal hide fade" tabindex="-1" role="dialog" aria-labelledby="myModalLabel" aria-hidden="true"> <div class="modal-header"> <button type="button" class="close" data-dismiss="modal" aria-hidden="true">&times;</button> <h3 id="myModalLabel">Configuration Changes</h3> <p><span class="label label-important">Please review your changes before submitting. Submitting the changes will result in rebooting the cluster</span></p> </div> <div class="modal-body"> <table class="table table-condensed table-striped" id="display"></table> </div> <div class="modal-footer"> <button id="cancel" class="btn" data-dismiss="modal" aria-hidden="true">Close</button> <button id="save" class="btn btn-primary">Save changes</button> </div> </div> //Jquery part $(document).ready(function () { $('input').hover(function () { $(this).popover('show') }); // On mouseout destroy popout $('input').mouseout(function () { $(this).popover('destroy') }); $('#myform').on('submit', function (ev) { ev.preventDefault(); var data = $(this).serializeObject(); json_data = JSON.stringify(data); $('#myModal').modal('show'); $.each(data, function (key, val) { var tablefeed = $('<tr><td>' + key + '</td><td id="' + key + '">' + val + '</td><tr>').appendTo('#display'); }); $(".modal-body").html(tablefeed); }); $("#cancel").click(function () { $("#display").empty(); }); $(function () { $("#save").click(function () { // validate and process form here alert("button submitted" + json_data); $.ajax({ type: "POST", url: "somefile.json.", data: json_data, contentType: 'application/json', success: function (data, textStatus, xhr) { console.log(arguments); console.log(xhr.status); alert("Your form has been submitted: " + textStatus + xhr.status); }, error: function (jqXHR, textStatus, errorThrown) { alert(jqXHR.responseText + " - " + errorThrown + " : " + jqXHR.status); } }); }); }); });

    Read the article

  • Modular GWT design concerns

    - by GlGuru
    Hi, I have a couple of questions regarding a modular GWT based application framework. I have some ideas about them but being new to the field of web development I feel they are far from ideal. I'd appreciate a few comments and suggestions in this regard. Here are my questions: I am developing a framework which will allow third parties to embed GWT applications into our website and do some communication with them using simple iFrame postMessage. All these third party modules are going to use our SDK which is also GWT based. The problem arises that even though all the modules will be using the same codebase there is going to be a massive overheard in the amount of duplicate Javascript code (i.e. our common SDK code base which is quite large) being downloaded on the client's machine. This is highly redundant and problematic, not only due to the sheer size of duplicate code but, also due to the fact that subsequent updates of the SDK would require the modules to be recompiled which is going to create a DLL hell kind of scenario in the long run. What is the best way of doing this kind of thing? Is there a way where I can have some static GWT code (i.e. the SDK) and the dynamic GWT module refers to it (even if lies on a different domain) and it all work happily? The other part of the problem lies in providing robust scripting front end to the SDK. At first it appears to be trivial since Javascript itself is a scripting language. However, I do not know how to load and call a piece of pure Javascript code at runtime? I am willing to put restrictions on the target Javascript (i.e. having a function main and unique namespace or something). Furthermore the Javascript will come as a string from a database and not as a full URL. If its doable in Javascript how does one get this right in GWT i.e. forcing the compiler to emit a certain function in the generated Javascript? This I believe can be lesser of a problem by having a stub Javascript with all the right requirements which just loads up a GWT generated Javascript. Is any of this possible at all? I generally hate to be this verbose but I hope to find a quick solution to the problem as its holding up my progress. I'd highly appreciate any comments, suggestions and experiences.

    Read the article

  • No value given for one or more required parameters in connection initialisation

    - by Jean-François Côté
    I have an C# form application that use an access database. This application works perfectly in debug and release. It works on all version of Windows. But it crash on one computer with Windows 7. The message I got is: System.Data.OleDb.OleDbException: No value given for one or more required parameters. EDIT, after some debugging with messagebox on the computer that have the problem, here is the code that bug.The error is catched on the cmd.ExecuteReader(). The messagebox juste before is shown and the next one is the one in the catch with the exception below. Any ideas? public List<CoeffItem> GetModeleCoeff() { List<CoeffItem> list = new List<CoeffItem>(); try { OleDbDataReader dr; OleDbCommand cmd = new OleDbCommand("SELECT nIDModelAquacad, nIDModeleBorne, fCoefficient FROM tbl_ModelBorne ORDER BY nIDModelAquacad", m_conn); MessageBox.Show("Commande SQL créée avec succès"); dr = cmd.ExecuteReader(); MessageBox.Show("Exécution du reader sans problème!"); while (dr.Read()) { list.Add(new CoeffItem(Convert.ToInt32(dr["nIDModelAquacad"].ToString()), Convert.ToInt32(dr["nIDModeleBorne"].ToString()), Convert.ToDouble(dr["fCoefficient"].ToString()))); } MessageBox.Show("Lecture du reader"); dr.Close(); MessageBox.Show("Fermeture du reader"); } catch (OleDbException err) { MessageBox.Show("Erreur dans la lecture des modèles/coefficient: " + err.ToString()); } return list; } I think it's something related to the connection string but why only on that computer. Thanks for your help! EDIT Here is the complete error message: See the end of this message for details on invoking just-in-time (JIT) debugging instead of this dialog box. ***** Exception Text ******* System.Data.OleDb.OleDbException: No value given for one or more required parameters. at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr) at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForSingleResult(tagDBPARAMS dbParams, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult) at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method) at System.Data.OleDb.OleDbCommand.ExecuteReader(CommandBehavior behavior) at System.Data.OleDb.OleDbCommand.ExecuteReader() at DatabaseLayer.DatabaseFacade.GetModeleCoeff() at DatabaseLayer.DatabaseFacade.InitConnection(String strFile) at CalculatriceCHW.ListeMesure.OuvrirFichier(String strFichier) at CalculatriceCHW.ListeMesure.nouveauFichierMenu_Click(Object sender, EventArgs e) at System.Windows.Forms.ToolStripItem.RaiseEvent(Object key, EventArgs e) at System.Windows.Forms.ToolStripMenuItem.OnClick(EventArgs e) at System.Windows.Forms.ToolStripItem.HandleClick(EventArgs e) at System.Windows.Forms.ToolStripItem.HandleMouseUp(MouseEventArgs e) at System.Windows.Forms.ToolStripItem.FireEventInteractive(EventArgs e, ToolStripItemEventType met) at System.Windows.Forms.ToolStripItem.FireEvent(EventArgs e, ToolStripItemEventType met) at System.Windows.Forms.ToolStrip.OnMouseUp(MouseEventArgs mea) at System.Windows.Forms.ToolStripDropDown.OnMouseUp(MouseEventArgs mea) at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks) at System.Windows.Forms.Control.WndProc(Message& m) at System.Windows.Forms.ScrollableControl.WndProc(Message& m) at System.Windows.Forms.ToolStrip.WndProc(Message& m) at System.Windows.Forms.ToolStripDropDown.WndProc(Message& m) at 
System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m) at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

    Read the article

  • Using pointers, references, handles to generic datatypes, as generic and flexible as possible

    - by Patrick
    In my application I have lots of different data types, e.g. Car, Bicycle, Person, ... (they're actually other data types, but this is just for the example). Since I also have quite some 'generic' code in my application, and the application was originally written in C, pointers to Car, Bicycle, Person, ... are often passed as void-pointers to these generic modules, together with an identification of the type, like this: Car myCar; ShowNiceDialog ((void *)&myCar, DATATYPE_CAR); The 'ShowNiceDialog' method now uses meta-information (functions that map DATATYPE_CAR to interfaces to get the actual data out of Car) to get information of the car, based on the given data type. That way, the generic logic only has to be written once, and not every time again for every new data type. Of course, in C++ you could make this much easier by using a common root class, like this class RootClass { public: string getName() const = 0; }; class Car : public RootClass { ... }; void ShowNiceDialog (RootClass *root); The problem is that in some cases, we don't want to store the data type in a class, but in a totally different format to save memory. In some cases we have hundreds of millions of instances that we need to manage in the application, and we don't want to make a full class for every instance. Suppose we have a data type with 2 characteristics: A quantity (double, 8 bytes) A boolean (1 byte) Although we only need 9 bytes to store this information, putting it in a class means that we need at least 16 bytes (because of the padding), and with the v-pointer we possibly even need 24 bytes. For hundreds of millions of instances, every byte counts (I have a 64-bit variant of the application and in some cases it needs 6 GB of memory). The void-pointer approach has the advantage that we can almost encode anything in a void-pointer and decide how to use it if we want information from it (use it as a real pointer, as an index, ...), but at the cost of type-safety. Templated solutions don't help since the generic logic forms quite a big part of the application, and we don't want to templatize all this. Additionally, the data model can be extended at run time, which also means that templates won't help. Are there better (and type-safer) ways to handle this than a void-pointer? Any references to frameworks, whitepapers, research material regarding this?

    Read the article

  • XSD: xs:sequence & xs:choice combination for xs:extension elements?

    - by bguiz
    Hi, My question is about defining an XML schema that will validate the following sample XML: <rules> <other>...</other> <bool>...</bool> <other>...</other> <string>...</string> <other>...</other> </rules> The order of the child nodes does not matter. The cardinality of the child nodes is 0..unbounded. All the child elements of the rules node have a common base type, rule, like so: <xs:complexType name="booleanRule"> <xs:complexContent> <xs:extension base="rule"> ... </xs:extension> </xs:complexContent> </xs:complexType> <xs:complexType name="stringFilterRule"> <xs:complexContent> <xs:extension base="filterRule"> ... </xs:extension> </xs:complexContent> </xs:complexType> My current attempt at defining the schema for the rules node is below. However, Can I nest xs:choice within xs:sequence? If, where do I specify the maxOccurs="unbounded" attribute? Is there a better way to do this, such as an xs:sequence which specifies only the base type of its child elements? <xs:element name="rules"> <xs:complexType> <xs:sequence> <xs:choice> <xs:element name="bool" type="booleanRule" /> <xs:element name="string" type="stringRule" /> <xs:element name="other" type="someOtherRule" /> </xs:choice> </xs:sequence> </xs:complexType> </xs:element>

    Read the article

  • Is the recent trend toward widescreen (16:9) computer monitors a plus or minus for programmers?

    - by DanM
    It's almost gotten to the point where you can't buy a conventional (4:3) monitor anymore. Pretty much everything is widescreen. This is fine for watching movies or TV, but is it good or bad for programming? My initial thoughts on the issue are that widescreens are a net negative for programmers. Here are some of the disadvantages I see: Poor space utiliziation One disadvantage of widescreens you can't argue with is that they offer poor space utilization for the amount of total pixels you get. For example, my Thinkpad, which I bought just before the widescreen craze, has a 15" monitor with a native resolution of 1600 x 1200. The newer 15.4" Thinkpads run at most 1680 x 1050. So (if you do the math) you get fewer pixels in a wider (but not shorter) package. With desktop monitors, you pay a price in terms of desk space used. Two 1680 x 1050 monitors will simply take up more of your desk than two 1600 x 1200 monitors (assuming equal dot pitch). More scrolling If you compare a 1680 x 1050 monitor to a 1600 x 1200 monitor, you get 80 extra pixels of width but 150 fewer pixels of height. The height reduction means you lose approximately 11 lines of code. That's less you can see on the screen at one time and more scrolling you have to do. This harms productivity, maybe not dramatically, but insidiously. Less room for wide panels Widescreens also mean you lose space for wide but short panels common in programming environments. If you use Visual Studio, for example, your code window will be that much shorter when viewing the Find Results, Task List, or Error List (all of which I use frequently). This isn't to say the 80 pixels of extra width you get with widescreen would never be useful, but I tend to keep my lines of code short, so seeing more lines would be more valuable to me than seeing fewer, longer lines. What do you think? Do you agree/disagree? Are you now using one or more widescreen monitors for development? What resolution are you running on each? Do you ever miss the height of the traditional 4:3 monitor? Would you complain if your monitors were one inch narrower but two inches taller?

    Read the article

  • Basic shared memory program in C

    - by nicopuri
    Hi, I want to make a basic chat application in C using shared memory. I am working in Linux. The idea is that when the client writes, the server can read the message, and when the server writes, the client can read it. I tried to do this, but I can't achieve communication between the client and the server. The code is the following:

    Server.c

        int main(int argc, char **argv)
        {
            char *msg;
            static char buf[SIZE];
            int n;

            msg = getmem();
            memset(msg, 0, SIZE);
            initmutex();
            while ( true ) {
                if ( (n = read(0, buf, sizeof buf)) > 0 ) {
                    enter();
                    sprintf(msg, "%.*s", n, buf);
                    printf("Servidor escribe: %s", msg);
                    leave();
                } else {
                    enter();
                    if ( strcmp(buf, msg) ) {
                        printf("Servidor lee: %s", msg);
                        strcpy(buf, msg);
                    }
                    leave();
                    sleep(1);
                }
            }
            return 0;
        }

    Client.c

        int main(int argc, char **argv)
        {
            char *msg;
            static char buf[SIZE-1];
            int n;

            msg = getmem();
            initmutex();
            while ( true ) {
                if ( (n = read(0, buf, sizeof buf)) > 0 ) {
                    enter();
                    sprintf(msg, "%.*s", n, buf);
                    printf("Cliente escribe: %s", msg);
                    leave();
                } else {
                    enter();
                    if ( strcmp(buf, msg) ) {
                        printf("Cliente lee: %s", msg);
                        strcpy(buf, msg);
                    }
                    leave();
                    sleep(1);
                }
            }
            printf("Cliente termina\n");
            return 0;
        }

    The shared memory module is the following:

        #include "common.h"

        void fatal(char *s)
        {
            perror(s);
            exit(1);
        }

        char *getmem(void)
        {
            int fd;
            char *mem;

            if ( (fd = shm_open("/message", O_RDWR|O_CREAT, 0666)) == -1 )
                fatal("sh_open");
            ftruncate(fd, SIZE);
            if ( !(mem = mmap(NULL, SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0)) )
                fatal("mmap");
            close(fd);
            return mem;
        }

        static sem_t *sd;

        void initmutex(void)
        {
            if ( !(sd = sem_open("/mutex", O_RDWR|O_CREAT, 0666, 1)) )
                fatal("sem_open");
        }

        void enter(void)
        {
            sem_wait(sd);
        }

        void leave(void)
        {
            sem_post(sd);
        }

    Read the article

  • How do I add Objective C code to a FireBreath Project?

    - by jmort253
    I am writing a browser plugin for Mac OS that will place a status bar icon in the status bar, which users can use to interface with the browser plugin. I've successfully built a FireBreath 1.6 project in XCode 4.4.1, and can install it in the browser. However, FireBreath uses C++, whereas a large majority of the existing libraries for Mac OS are written in Objective C. In the /Mac/projectDef.make file, I added the Cocoa Framework and Foundation Framework, as suggested here and in other resources I've found on the Internet: target_link_libraries(${PROJECT_NAME} ${PLUGIN_INTERNAL_DEPS} ${Cocoa.framework} # added line ${Foundation.framework} # added line ) I reran prepmac.sh, expecting a new project to be created in XCode with my .mm files, and .m files; however, it seems that they're being ignored. I only see the .cpp and .h files. I added rules for those in the projectDef.make file, but it doesn't seem to make a difference: file (GLOB PLATFORM RELATIVE ${CMAKE_CURRENT_SOURCE_DIR} Mac/[^.]*.cpp Mac/[^.]*.h Mac/[^.]*.m #added by me Mac/[^.]*.mm #added by me Mac/[^.]*.cmake ) Even if I add the files in manually, I get a series of compilation errors. There are about 20 of them, all related to the file NSObjRuntime.h file: Parse Issue - Expected unqualified-id Parse Issue - Unknown type name 'NSString' Semantic Issue - Use of undeclared identifier 'NSString' Parse Issue - Unknown type name 'NSString' ... ... Semantic Issue - Use of undeclared identifier 'aSelectorName' ... ... Semantic Issue - Use of undeclared identifier 'aClassName' ... It continues like this for some time with similar errors... From what I've read, these errors appear because of dependencies on the Foundation Framework, which I believe I've included in the project. I also tried clicking the project in XCode I'm to the point now where I'm not sure what to try next. People say it's not hard to use Objective C in C/C++ code, but being new to XCode and Objective C might contribute to my confusion. This is only day 4 for me in this new platform. What do I need to do to get XCode to compile the Objective C code? Please remember that I'm a little new to this, so I'd appreciate it if you leave detailed answers as opposed to the vague one-liners that are common in the firebreath tag. I'm just a little in over my head, but if you can get me past this hurdle I'm certain I'll be good to go from there. UPDATE: I edited projects/MyPlugin/CMakeLists.txt and added in the .m and .mm rules there too. after running prepmac.sh, the files are included in the project, but I still get the same compile errors. I moved all the .h files and .mm files from the Obj C code to the MyPlugin root folder and reran the prepmac.sh file. Problem still exists. Same compile errors.

    Read the article

  • Google Code + SVN or GitHub + Git

    - by Nazgulled
    Let me start by telling you that I never used anything besides SVN and I'm also a Windows user. I have a couple of simple projects that are open-source, others are on there way when I'm happy enough to release their source code but either way, I was thinking of using Google Code and SVN to share the source code of my projects instead of providing a link to the source on my website. This as always been a pain cause I had to update the binaries and the code every time I released a new version. This would also help me out to have a backup of my code some where instead of just my local machine (I used to have a local Subversion server running). What I want from a service like this is very simple... I just want a place to store my source code that people can download if they want, allows me to control revisions and provide a simple and easy issue system so people can submit bugs and stuff like that. I guess both of them have this. But I don't want to host any binaries in their websites, I want this to be hosted on my website so I can control download statistics with my own scripts, I also don't have the need for wiki pages as I prefer to have all the documentation in my own website. Does anyone of this services provide a way to "disable" features like wiki and downloads and don't show them at all for my project(s)? Now, I'm sure there are lots of pros and cons about using Google Code with SVN and GitHub with Git (of course) but here's what it's important for me on each one and why I like them: Google Code: As with any Google page, the complexity is almost non-existent Everyone (or almost) as a Google account and this is nice if people want to report problems using the issues system GitHub: May (or may not) be a little more complex (not a problem for me though) than Google's pages but... ...has a much prettier interface than Google's service It needs people to be registered on GitHub to post about issues I like the fact that with Git, you have your own revisions locally (can I use TortoiseGit for this or?) Basically that's it, not much I know... What other, most common, pros and cons can you tell me about each site/software? Keep in mind that my projects are simple, I'm probably the only one who will ever develop these projects on these repositories (or maybe not, for now I will)

    Read the article

  • Newbie - what do I need to do with httpd.conf to make CakePHP work correctly?

    - by EmmyS
    (Not sure if this belongs here or on webmasters; please move if necessary.) I'm a total newbie to Cake and not much better with apache; I've done a lot of PHP but always with a server that's already been set up by someone else. So I'm going through the basic blog tutorial, and it says: A Note On mod_rewrite Occasionally a new user will run in to mod_rewrite issues, so I'll mention them marginally here. If the Cake welcome page looks a little funny (no images or css styles), it probably means mod_rewrite isn't functioning on your system. Here are some tips to help get you up and running: Make sure that an .htaccess override is allowed: in your httpd.conf, you should have a section that defines a section for each Directory on your server. Make sure the AllowOverride is set to All for the correct Directory. Make sure you are editing the system httpd.conf rather than a user- or site-specific httpd.conf. For some reason or another, you might have obtained a copy of CakePHP without the needed .htaccess files. This sometimes happens because some operating systems treat files that start with '.' as hidden, and don't copy them. Make sure your copy of CakePHP is from the downloads section of the site or our SVN repository. Make sure you are loading up mod_rewrite correctly! You should see something like LoadModule rewrite_module libexec/httpd/mod_rewrite.so and AddModule mod_rewrite.c in your httpd.conf." I'm using XAMPP on linux. I've found my httpd.conf file in opt/lampp/ etc, but am not sure what I need to do with it. I've searched for "rewrite", and there's only one line: LoadModule rewrite_module modules/mod_rewrite.so There's nothing about AddModule mod_rewrite.c. Do I just create a Directory section for the directory I've installed Cake in and set AlllowOverride to All? (I created a separate subdirectory of my wwwroot and installed in there, since I also have installs of Joomla and CodeIgniter.) Is there anything else I need to do? My download of Cake did come with two htaccess-type files (._.htaccess and .htaccess) - do I need to do anything with them? Thanks for any help you can provide to this non-server-admin. EDIT TO ADD virtual host sample: <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot /www/docs/dummy-host.example.com ServerName dummy-host.example.com ServerAlias www.dummy-host.example.com ErrorLog logs/dummy-host.example.com-error_log CustomLog logs/dummy-host.example.com-access_log common </VirtualHost>

    Read the article

  • Approximating log10[x^k0 + k1]

    - by Yale Zhang
    Greetings. I'm trying to approximate the function Log10[x^k0 + k1], where 0.21 < k0 < 21, 0 < k1 < ~2000, and x is an integer < 2^14. k0 and k1 are constant; for practical purposes, you can assume k0 = 2.12, k1 = 2660. The desired accuracy is 5*10^-4 relative error. This function is virtually identical to Log[x], except near 0, where it differs a lot.

    I have already come up with a SIMD implementation that is ~1.15x faster than a simple lookup table, but would like to improve it if possible, which I think is very hard due to the lack of efficient instructions. My SIMD implementation uses 16-bit fixed-point arithmetic to evaluate a 3rd-degree polynomial (least-squares fit). The polynomial uses different coefficients for different input ranges: there are 8 ranges, and range i spans (64)2^i to (64)2^(i + 1). The rationale behind this is that the derivatives of Log[x] drop rapidly with x, so a polynomial fits it more accurately, since polynomials are an exact fit for functions that have a derivative of 0 beyond a certain order. SIMD table lookups are done very efficiently with a single _mm_shuffle_epi8(). I use SSE's float-to-int conversion to get the exponent and significand used for the fixed-point approximation. I also software-pipelined the loop to get a ~1.25x speedup, so further code optimizations are probably unlikely.

    What I'm asking is whether there's a more efficient approximation at a higher level? For example:

    - Can this function be decomposed into functions with a limited domain, like log2((2^x) * significand) = x + log2(significand), hence eliminating the need to deal with different ranges (table lookups)? The main problem, I think, is that adding the k1 term kills all those nice log properties that we know and love, making it not possible. Or is it?
    - An iterative method? I don't think so, because the Newton method for log[x] is already a complicated expression.
    - Exploiting locality of neighboring pixels? If the range of the 8 inputs falls in the same approximation range, then I can look up a single coefficient, instead of looking up separate coefficients for each element. Thus, I can use this as a fast common case, and use a slower, general code path when it isn't. But for my data, the range needs to be ~2000 before this property holds 70% of the time, which doesn't seem to make this method competitive.

    Please give me your opinion, especially if you're an applied mathematician, even if you say it can't be done. Thanks.
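
    A side note on the first idea in the list above: the exponent-plus-significand decomposition is not killed by the k1 shift, because k1 only enters when the argument y = x^k0 + k1 is formed; after that, log10(y) reduces to an integer exponent plus a single-octave log2. A scalar Java illustration of just that identity (hedged: Math.log stands in for the per-range polynomial, and nothing here mirrors the asker's SSE fixed-point code):

        class LogApprox {
            // k0/k1 are the question's constants; a real per-range polynomial
            // would replace the Math.log call below.
            static double log10Shifted(double x, double k0, double k1) {
                double y = Math.pow(x, k0) + k1;   // the +k1 shift only affects this step

                int e = Math.getExponent(y);       // unbiased binary exponent of y
                double m = Math.scalb(y, -e);      // significand in [1, 2): one octave to fit

                double log2m = Math.log(m) / Math.log(2.0); // stand-in for the polynomial

                return (e + log2m) * 0.3010299956639812;    // log10(2)
            }
        }

    Whether this helps in practice depends on how cheaply y and its exponent can be extracted in the SIMD path, which the question's float-to-int trick already addresses.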

    Read the article

  • Can sorting Japanese kanji words be done programmatically?

    - by Mason
    I've recently discovered, to my astonishment (having never really thought about it before), that machine-sorting Japanese proper nouns is apparently not possible. I work on an application that must allow the user to select a hospital from a 3-menu interface. The first menu is Prefecture, the second is City Name, and the third is Hospital. Each menu should be sorted, as you might expect, so the user can find what they want in the menu.

    Let me outline what I have found, as preamble to my question. The expected sort order for Japanese words is based on their pronunciation. Kanji do not have an inherent order (there are tens of thousands of Kanji in use), but the Japanese phonetic syllabaries do have an order: あいうえお、かきくけこ、さしすせそ… and so on for the fifty traditional distinct sounds (a few of which are obsolete in modern Japanese). This sort order is called 五十音順 (gojuu on jun, or '50-sound order'). Therefore, Kanji words should be sorted in the same order as they would be if they were written in hiragana. (You can represent any kanji word in phonetic hiragana in Japanese.)

    The kicker: there is no canonical way to determine the pronunciation of a given word written in kanji. You never know. Some kanji have ten or more different pronunciations, depending on the word. Many common words are in the dictionary, and I could probably hack together a way to look them up from one of the free dictionary databases, but proper nouns (e.g. hospital names) are not in the dictionary.

    So, in my application, I have a list of every prefecture, city, and hospital in Japan. In order to sort these lists, which is a requirement, I need a matching list of each of these names in phonetic form (kana). I can't come up with anything other than paying somebody fluent in Japanese (I'm only so-so) to manually transcribe them. Before I do so, though:

    - Is it possible that I am totally high on fire, and there actually is some way to do this sorting without creating my own mappings of kanji words to phonetic readings, that I have somehow overlooked?
    - Is there a publicly available mapping of prefecture/city names, from the government or something? That would reduce the manual mapping I'd need to do to only hospital names.
    - Does anybody have any other advice on how to approach this problem? Any programming language is fine--I'm working with Ruby on Rails but I would be delighted if I could just write a program that would take the kanji input (say 40,000 proper nouns) and then output the phonetic representations as data that I could import into my Rails app.
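
    As an aside on the mechanical half of the problem: once readings exist, the sorting itself is routine, because a locale-aware collator orders kana essentially in gojuu-on order. The hard, manual step really is obtaining the readings. A hedged sketch using the JDK's java.text.Collator (the readings below are invented placeholders, not real hospital data):

        import java.text.Collator;
        import java.util.Arrays;
        import java.util.Locale;

        class ReadingSort {
            public static void main(String[] args) {
                // Invented placeholder readings; producing these for real hospital
                // names is the manual/dictionary step the question describes.
                String[] readings = {
                    "よこはましみなとびょういん",
                    "あおもりけんりつちゅうおうびょういん",
                    "おおさかだいがくびょういん"
                };
                // The Japanese collator orders kana essentially in gojuu-on
                // (a-i-u-e-o...) order, giving the order users expect.
                Arrays.sort(readings, Collator.getInstance(Locale.JAPANESE));
                for (String r : readings) System.out.println(r);
            }
        }

    In practice each record would carry both the kanji name and its kana reading and be sorted on the reading field; the collator only solves the ordering, not the transcription.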

    Read the article

  • Proper HTML technique to create a web form out of an image

    - by Lars
    I plan to create an interactive golf score card for my website (XHTML). (Btw. thats how such a scorecard looks like: ScoreCard). So at the end one should be able to insert a score for each hole in the appropriated input field in the virtual scorecard on the website. For me it is very important that the interactive scorecard really looks the same as the original (paper-) scorecard does and so my first approach was to scan and slice the scorecard image to reach that appearance. Here you can see the way I sliced the image: The idea was to insert HTML text input for each score field ending up with something like this: After I sliced the image I reconstructed it using the HTML . To do that I put the image slices as the cell background. <table> <tr> <td style="background: url("slice1.jpg") width="58px" height="25px"> <input type="text"></inputText> </td> </tr> ... </table> At the first moment this worked fine (as Gimp offers quite a nice feature for this). Then the problem was that I had to create a HTML table to create the exact layout. As you can see the lower part of the layout is split up into 3 columns. The middle column is split up into several (for each hole) rows. So the left and right column have to be spanned over those rows. Ok finally that worked, but it lead to some kind of scaling problem. If I zoom in or out on the table the middle column (and only that one) is not scaled the right way. Iam not able to fix this, and so I start doubting if this is the right technique for html image virtualization. Iam really no specialist in the area of creating websites, so I would really appriciate any help on this. Maybe there is a complete other and better technique to do that, as I think it is a common job in webcreation. I couldnt find any nice examples or tuts on that.

    Read the article

  • .Net Remote Log Querying

    - by jlafay
    I have a Win Service that I'm working on that consists of the service, WF Service (using WorkflowServiceHost), a Workflow (WorkflowApplication) that queries/processes data from a SQL Server DB, and a Comm Marshall class that handles data flow between the service and the WF. The WF does a lot of heavy data processing and the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine. Critical events will be committed to eventlog because I strongly believe that should be common practice because admins naturally will look there and because it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic. Such as: records queried, records returned, processed records, etc. The data is very critical and we need to log actions as they occur. The logs are currently kept as text files on disk and I think that is ok. Ideally I would like to record log events in XML so it's easier to query and because it is less costly than a DB, especially since our DB servers do a lot of heavy processing anyways. Since we are replacing essentially a VB6 application with a robust windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client side after receiving basic monitoring data of what is going on with the service. We would like to also provide a way to provide details when we need to examine what is going on further because this is a long running data processing service and issues need to be addressed immediately. What is the best way to implement some type of query from the client that is sent to the service and returned to the client? Would it be efficient to implement another method to expose on the service and then have that pass that off to some querying class/object to examine the XML files by whichever specification and then return it to the client? That's the main concern. I don't want the service to processing to bottleneck much while this occurs. It seems that WF already auto-magically threads well for the most part but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log querying framework for a remote service would be awesome.

    Read the article

  • SSL Authentication with Certificates: Should the Certificates have a hostname?

    - by sixtyfootersdude
    Summary: JBoss allows clients and servers to authenticate using certificates and SSL. One thing that seems strange is that you are not required to give your hostname on the certificate. I think this means that if Server B is in your truststore, Server B can pretend to be any server it wants. (And likewise: if Client B is in your truststore...) Am I missing something here?

    Authentication Steps (summary of the Wikipedia page):

    1) Client sends Client Hello (encryption: none) - highest TLS protocol supported, random number, list of cipher suites, compression methods
    2) Server sends Server Hello (encryption: none) - highest TLS protocol supported, random number, chosen cipher suite, chosen compression method
    3) Server sends Certificate message (encryption: none)
    4) Server sends ServerHelloDone (encryption: none)
    5) Client sends Certificate message (encryption: none)
    6) Client sends ClientKeyExchange message (encryption: server's public key => only the server can read it => if the server can read it, it must own the certificate) - may contain a PreMasterSecret, a public key, or nothing (depends on the cipher)
    7) Client sends CertificateVerify message (encryption: client's private key) - its purpose is to prove to the server that the client owns the cert
    8) Both client and server use the random numbers and the PreMasterSecret to compute a common secret
    9) Finished message - contains a hash and MAC over the previous handshake messages (to ensure that those unencrypted messages were not tampered with)
    10) Finished message - same thing

    The server knows:
    - The client has the public key for the sent certificate (step 7)
    - The client's certificate is valid, because either it has been signed by a CA (Verisign) or it has been self-signed BUT is in the server's truststore
    - It is not a replay attack, because presumably the random number (step 1 or 2) is sent with each message

    The client knows:
    - The server has the public key for the sent certificate (step 6 with step 8)
    - The server's certificate is valid, because either it has been signed by a CA (Verisign) or it has been self-signed BUT is in the client's truststore
    - It is not a replay attack, because presumably the random number (step 1 or 2) is sent with each message

    Potential problem: suppose the client's truststore has certs for Server A and Server B (malicious) in it. Server A has hostname www.A.com and Server B has hostname www.B.com. Suppose the client tries to connect to Server A, but Server B launches a man-in-the-middle attack. Since Server B has a public key for the certificate that will be sent to the client, and has a "valid certificate" (a cert in the truststore), and since certificates do not have a hostname field in them, it seems like Server B can pretend to be Server A easily. Is there something that I am missing?
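
    For what it's worth, this is exactly the gap that hostname (endpoint) verification is meant to close: the client is expected to check that the name it dialled appears in the presented certificate's CN/subjectAltName, on top of the chain/truststore check. A hedged JSSE sketch (standard javax.net.ssl classes, Java 7+; whether a particular JBoss remoting stack exposes this switch is a separate question, and www.A.com is the question's hypothetical host):

        import java.io.IOException;
        import javax.net.ssl.SSLParameters;
        import javax.net.ssl.SSLSocket;
        import javax.net.ssl.SSLSocketFactory;

        class HostCheckedClient {
            static void connectToServerA() throws IOException {
                SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
                try (SSLSocket socket = (SSLSocket) factory.createSocket("www.A.com", 443)) {
                    SSLParameters params = socket.getSSLParameters();
                    // Bind the handshake to the dialled hostname: the peer certificate's
                    // CN/subjectAltName must match, not merely sit in the truststore.
                    params.setEndpointIdentificationAlgorithm("HTTPS");
                    socket.setSSLParameters(params);
                    socket.startHandshake(); // now fails if the cert does not name www.A.com
                }
            }
        }

    HttpsURLConnection applies an equivalent check by default; for raw SSLSockets (common in container remoting) it has to be switched on or done by hand against the peer certificates.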

    Read the article

  • Design patterns and interview question

    - by user160758
    When I was learning to code, I read up on the design patterns like a good boy. Long after that, I started to actually understand them. Design discussions such as those on this site constantly try to make the rules more and more general, which is good. But there is a line beyond which the over-analysis starts to feed off itself, and as such I think it begins to obfuscate the original point - for example the "What's Alternative to Singleton" post and the links contained therein. http://stackoverflow.com/questions/1300655/whats-alternative-to-singleton

    I say this having been asked, in both interviews I've had over the last two weeks, what a singleton is and what criticisms I have of it. I have used it a few times for things such as user data (simple key-value data, e.g. the last file opened by this user) and logging (very common, I'm sure). I've never used it just to hold what is essentially global application data, as that is clearly stupid. In the first interview, I replied that I have no criticisms of it. He seemed disappointed by this, but as the job wasn't really for me, I forgot about it. In the next one, I was asked again and, as I wanted this job, I thought about it on the spot and made some objections, similar to those contained in the post linked to above (I suggested use of a factory or dependency injection instead). He seemed happy with this.

    But my problem is that I have used the singleton without ever using it in this kind of stupid way, which I had to describe on the spot. Using it for global data and the like isn't something I did and then realised was stupid, or read was stupid so didn't do; it was just something I knew was stupid from the start. Essentially I'm supposed to be able to think of ways to misuse a pattern in the interview? Which class of programmers can best answer this question? The best ones? The medium ones? I'm not sure... And these were both bright guys. I read more than enough to get better at my job, but had never actually bothered to seek out criticisms of the most simple of the design patterns like this one. Do people think such questions are valid and that I ought to know the objections off by heart? Or is it reasonable to be able to work out on the fly what other people who are missing the point would do? Or do you think I'm at least partially right that the question is too unsubtle, and that the questions ought to be better thought out in order to make sure only good candidates can answer them?

    PS. Please don't think I'm saying that I'm just so clever that I know everything automatically - I've learnt the hard way like everyone else. But avoiding global data is hardly revolutionary.
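    For what it's worth, the criticisms interviewers are usually fishing for boil down to hidden dependencies, global mutable state, and testability, and the usual suggested alternative is exactly the dependency injection mentioned above. A small, hypothetical Java sketch of the contrast (all class names are invented for illustration):

    ```java
    interface Log { void write(String msg); }

    // Classic Singleton: one global instance reached through a static accessor.
    class GlobalLogger implements Log {
        private static final GlobalLogger INSTANCE = new GlobalLogger();
        private GlobalLogger() {}
        public static GlobalLogger getInstance() { return INSTANCE; }
        public void write(String msg) { System.out.println(msg); }
    }

    // Singleton-style usage: the dependency is hidden inside the method,
    // so a unit test cannot substitute a fake Log without touching global state.
    class ReportJobHidden {
        void run() {
            GlobalLogger.getInstance().write("report started");
        }
    }

    // The commonly suggested alternative: inject the collaborator.
    class ReportJobInjected {
        private final Log log;
        ReportJobInjected(Log log) { this.log = log; }
        void run() {
            log.write("report started"); // a test can pass in a stub Log
        }
    }
    ```

    The sketch is not a claim that the poster's uses (user settings, logging) were wrong; it only makes concrete the objection the interviewers seemed to want spelled out.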

    Read the article

  • What to Expect in Rails 4

    - by mikhailov
    Rails 4 is nearly there; we should be ready before it's released. Most developers are trying hard to keep their applications on the edge.

    Must-see resources:
    1) @sikachu talk: What to Expect in Rails 4.0 - YouTube
    2) Rails Guides release notes: http://edgeguides.rubyonrails.org/4_0_release_notes.html

    A mix of all the major changes is below.

    ActionMailer changes (excerpt):
    - Asynchronously send messages via the Rails queue
    - Raise an ActionView::MissingTemplate exception when no implicit template could be found

    ActionPack changes (excerpt):
    - Added controller-level etag additions that will be part of the action etag computation
    - Add automatic template digests to all CacheHelper#cache calls (originally spiked in the cache_digests plugin)
    - Add Routing Concerns to declare common routes that can be reused inside other resources and routes
    - Added ActionController::Live. Mix it into your controller and you can stream data to the client live
    - truncate now always returns an escaped HTML-safe string. The option :escape can be set to false to not escape the result
    - Added ActionDispatch::SSL middleware that, when included, forces all requests to be under the HTTPS protocol

    ActiveModel changes (excerpt):
    - AM::Validation#validates: ability to pass a custom exception to the :strict option
    - Changed the AM::Serializers::JSON.include_root_in_json default value to false. Now AM Serializers and AR objects have the same default behaviour
    - Added ActiveModel::Model, a mixin to make Ruby objects work with AP out of the box
    - Trim down the Active Model API by removing valid? and errors.full_messages

    ActiveRecord changes (excerpt):
    - Use the native mysqldump command instead of the structure_dump method when dumping the database structure to a sql file
    - Attribute predicate methods, such as article.title?, will now raise ActiveModel::MissingAttributeError if the attribute being queried for truthiness was not read from the database, instead of just returning false
    - ActiveRecord::SessionStore has been extracted from Active Record as the activerecord-session_store gem. Please read the README.md file on the gem for usage
    - Fix reset_counters when there are multiple belongs_to associations with the same foreign key and one of them has a counter cache
    - Raise ArgumentError if the list of attributes to change is empty in update_all
    - Add Relation#load. This method explicitly loads the records and then returns self
    - Deprecated most of the 'dynamic finder' methods. All dynamic methods except for find_by_... and find_by_...! are deprecated
    - Added the ability for ActiveRecord::Relation#from to accept other ActiveRecord::Relation objects
    - Remove IdentityMap

    ActiveSupport changes (excerpt):
    - ERB::Util.html_escape now escapes single quotes
    - ActiveSupport::Callbacks: deprecate monkey patching of object callbacks
    - Replace the deprecated memcache-client gem with dalli in ActiveSupport::Cache::MemCacheStore
    - Object#try will now return nil instead of raising a NoMethodError if the receiving object does not implement the method, but you can still get the old behavior by using the new Object#try!
    - Object#try can't call private methods
    - Add ActiveSupport::Deprecations.behavior = :silence to completely ignore Rails runtime deprecations

    What are the most important changes for you?

    Read the article
