Search Results

Search found 1524 results on 61 pages for 'elegant'.

Page 58/61 | < Previous Page | 54 55 56 57 58 59 60 61  | Next Page >

  • Is there a recommended approach to handle saving data in response to within-site navigation without

    - by Carvell Fenton
    Hello all, Preamble to scope my question: I have a web app (or site, this is an internal LAN site) that uses jQuery and AJAX extensively to dynamically load the content section of the UI in the browser. A user navigates the app using a navigation menu. Clicking an item in the navigation menu makes an AJAX call to PHP, and PHP then returns the content that is used to populate the central content section. One of the pages served back by PHP has a table form, set up like a spreadsheet, that the user enters values into. This table is always kept in sync with data in the database. So, when the table is created, it is populated with the relevant database data. Then, when the user makes a change in a "cell", that change is immediately written back to the database so the table and database are always in sync. This approach was taken to reassure users that the data they entered has been saved (long story...), and to spare them from having to click a save button of some kind. This always-in-sync idea is great, except that a user can enter a value in a cell, not take focus out of the cell, and then take any number of actions that would cause that last value to be lost: e.g. navigate to another section of the site via the navigation menu, log out of the app, close the browser, etc.

    End of preamble, on to the issue: I initially thought that wasn't a problem, because I would just track what data was "dirty" or not saved, and then in the onunload event I would do a final write to the database. Herein lies the rub: because of my clever (or not so clever, not sure) use of AJAX and dynamic loading of the content section, the user never actually leaves the original URL, or page, when the above actions are taken, with the exception of closing the browser. Therefore, the onunload event does not fire, and I am back to losing the last data again.

    My question: is there a recommended way to figure out that a person is navigating away from a "section" of your app when content is dynamically loaded this way? I can come up with a solution, I think, that involves globals and tracking the currently viewed page, but I thought I would check whether there might be a more elegant solution out there, or a change I could make in my design, that would make this work. Thanks in advance as always!

    Read the article

  • Estimating the boundary of arbitrarily distributed data

    - by Dave
    I have two-dimensional discrete spatial data. I would like to make an approximation of the spatial boundaries of this data so that I can produce a plot with another dataset on top of it. Ideally, this would be an ordered set of (x,y) points that matplotlib can plot with the plt.Polygon() patch. My initial attempt is very inelegant: I place a fine grid over the data, and where data is found in a cell, a square matplotlib patch is created for that cell. The resolution of the boundary thus depends on the sampling frequency of the grid. Here is an example, where the grey regions are the cells containing data, black where no data exists. OK, problem solved - why am I still here? Well.... I'd like a more "elegant" solution, or at least one that is faster (i.e. I don't want to get on with "real" work, I'd like to have some fun with this!). The best way I can think of is a ray-tracing approach - e.g.:

    1. from xmin to xmax, at y=ymin, check if the data boundary is crossed in intervals dx
    2. y=ymin+dy, do 1
    3. do 1-2, but now sample in y

    An alternative is defining a centre and sampling in r-theta space - i.e. radial spokes in dtheta increments. Both would produce a set of (x,y) points, but then how do I order/link neighbouring points to create the boundary? A nearest-neighbour approach is not appropriate as, for example (to borrow from Geography), an isthmus (think of Panama connecting N&S America) could then close off and isolate regions. This also might not deal very well with the holes seen in the data, which I would like to represent as a different plt.Polygon. The solution perhaps comes from solving an area maximisation problem:

    1. For a set of points defining the data limits, what is the maximum contiguous area contained within those points?
    2. To form the enclosed area, what are the neighbouring points for the nth point?
    3. How will the holes be treated in this scheme - is this erring into topology now?

    Apologies, much of this is me thinking out loud. I'd be grateful for some hints, suggestions or solutions. I suspect this is an oft-studied problem with many solution techniques, but I'm looking for something simple to code and quick to run... I guess everyone is, really! Cheers, David
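
    A minimal sketch, not from the thread, of one way to get ordered boundary points (holes included) out of the gridded-occupancy idea above. It assumes scikit-image is available, and the random x/y arrays are placeholders for the real dataset:

        import numpy as np
        import matplotlib.pyplot as plt
        from skimage import measure  # marching squares returns ordered contours

        # Placeholder scattered data standing in for the real points
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, 5000)
        y = rng.normal(0.0, 1.0, 5000)

        # Rasterise onto a fine grid (same spirit as the patch-per-cell attempt)
        H, xedges, yedges = np.histogram2d(x, y, bins=100)
        occupied = (H > 0).astype(float)

        # Each closed boundary (outer edge or hole) comes back as an ordered
        # array of fractional (row, col) indices - no neighbour-linking needed
        contours = measure.find_contours(occupied, 0.5)

        fig, ax = plt.subplots()
        xcenters = 0.5 * (xedges[:-1] + xedges[1:])
        ycenters = 0.5 * (yedges[:-1] + yedges[1:])
        for contour in contours:
            # Map fractional grid indices back to data coordinates
            cx = np.interp(contour[:, 0], np.arange(len(xcenters)), xcenters)
            cy = np.interp(contour[:, 1], np.arange(len(ycenters)), ycenters)
            ax.add_patch(plt.Polygon(np.column_stack([cx, cy]), closed=True,
                                     fill=False, edgecolor="grey"))
        ax.set_xlim(x.min(), x.max())
        ax.set_ylim(y.min(), y.max())
        plt.show()

    Because marching squares traces each contour in order and reports holes as separate contours, the ordering and isthmus problems above largely disappear; the boundary resolution is still set by the grid.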

    Read the article

  • Setting up a "to-many" relationship value dependency for a transient Core Data attribute

    - by Greg Combs
    I've got a relatively complicated Core Data relationship structure and I'm trying to figure out how to set up value dependencies (or observations) across various to-many relationships. Let me start out with some basic info. I've got a classroom with students, assignments, and grades (students X assignments). For simplicity's sake, we don't really have to focus much on the assignments yet. StudentObj <--->> ScoreObj <<---> AssignmentObj Each ScoreObj has a to-one relation with the StudentObj and the AssignmentObj. ScoreObj has real attributes for the numerical grade, the turnInDate, and notes. AssignmentObj.scores is the set of Score objects for that assignment (N = all students). AssignmentObj has real attributes for name, dueDate, curveFunction, gradeWeight, and maxPoints. StudentObj.scores is the set of Score objects for that student (N = all assignments). StudentObj also has real attributes like name, studentID, email, etc. StudentObj has a transient (calculated, not stored) attribute called gradeTotal. This last item, gradeTotal, is the real pickle. it calculates the student's overall semester grade using the scores (ScoreObj) from all their assignments, their associated assignment gradeWeights, curves, and maxPoints, and various other things. This gradeTotal value is displayed in a table column, along with all the students and their individual assignment grades. Determining the value of gradeTotal is a relatively expensive operation, particularly with a large class, therefore I want to run it only when necessary. For simplicity's sake, I'm not storing that gradeTotal value in the core data model. I don't mind caching it somewhere, but I'm having a bitch of a time determining where and how to best update that cache. I need to run that calculation for each student whenever any value changes that affects their gradeTotal. If this were a simple to-one relationship, I know I could use something like keyPathsForValuesAffectingGradeTotal ... but it's more like a many-to-one-to-many relationship. Does anyone know of an elegant (and KVC correct) solution? I guess I could tear through all those score and assignment objects and tell them to register their students as observers. But this seems like a blunt force approach.

    Read the article

  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it:

        WITH DataCTE as (
            SELECT [columns] FROM table
            INNER JOIN table2 ON [...]
            INNER JOIN table3 ON [...]
            LEFT JOIN table3 ON [...]
        )
        SELECT [aggregates_columns of each subset]
        FROM DataCTE Main
        LEFT JOIN DataCTE BananasSubset ON [...]
            AND Product = 'Bananas'
            AND Quality = 100
        LEFT JOIN DataCTE DamagedBananasSubset ON [...]
            AND Product = 'Bananas'
            AND Quality < 20
        LEFT JOIN DataCTE MangosSubset ON [...]
        GROUP BY [

    I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE and made it output to a temp table instead. The version referring to the CTE takes 40 seconds. The version referring to the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have some of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos

    Read the article

  • IE CSS bug: table border showing div with visibility: hidden, position: absolute

    - by Alessandro Vernet
    The issue: I have a <div> on a page which is initially hidden with visibility: hidden; position: absolute. The issue is that if a <div> hidden this way contains a table which uses border-collapse: collapse and has a border set on its cells, that border still shows "through" the hidden <div> on IE. Try this for yourself by running the code below on IE6 or IE7. You should get a white page, but instead you will see the table borders.

    Possible workaround: Since this is happening on IE and not on other browsers, I assume that this is an IE bug. One workaround is to add the following code, which will override the border:

        .hide table tr td { border: none; }

    I am wondering: Is this a known IE bug? Is there a more elegant solution/workaround?

    The code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
            <head>
                <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
                <style type="text/css">
                    /* Style for tables */
                    .table tr td { border: 1px solid gray; }
                    .table { border-collapse: collapse; }
                    /* Class used to hide a section */
                    .hide { visibility: hidden; position: absolute; }
                </style>
            </head>
            <body>
                <div class="hide">
                    <table class="table">
                        <tr>
                            <td>Gaga</td>
                        </tr>
                    </table>
                </div>
            </body>
        </html>

    Read the article

  • Algorithm to split an article without breaking the reading flow or HTML code

    - by Victor Stanciu
    Hello, I have a very large database of articles of varying lengths. The articles have HTML elements in them. I have to insert some ads (simple <script> elements) in the body of each article when it is displayed (I know, I hate ads that interrupt my reading too). Now, the problem is that each ad must be inserted at about the same position in each article. The simplest solution is to simply split the article at a fixed number of characters (without breaking words) and insert the ad code. This, however, runs the risk of inserting the ad in the middle of an HTML tag. I could go the regex way, but I was thinking about the following solution, using JS:

    1. Establish a character count threshold. For example, "the ad should be inserted at about 200 words".
    2. Set accepted deviations in each direction, say -20, +20 characters.
    3. Loop through each text node inside the article, and while doing so, keep count of the total number of characters so far.
    4. Once the count exceeds the threshold, make the following decision:
       4.1. If the count exceeds the threshold by a value lower than the positive accepted deviation (for example, 17 characters), insert the ad code just after the current text node.
       4.2. If the count is greater than the sum of the threshold and the deviation, roll back to the previous text node and make the same decision, only this time use the previous count and check if it's lower than the difference between the threshold and the deviation, and if not, insert the ad between the current node and the previous one.
       4.3. If 4.1 and 4.2 fail (which means that the previous node's count was too low and the current node's too high), insert the ad after whatever character count is needed inside the current element.

    I know it's convoluted, but it's the first thing out of my mind, and it has the advantage that, by trying to insert the ad between text nodes, perhaps it will not break the flow of the article as badly as it would if I just stuck it in (like the final 4.3 case). Here is some pseudo-code I put together, I don't trust my english-explaining skills:

        threshold = 200
        deviation = 20
        current_count = 0
        for each node in article_nodes {
            previous_count = current_count
            current_count = current_count + node.length
            if current_count < threshold {
                continue // next iteration
            }
            if current_count > threshold + deviation {
                if previous_count < threshold - deviation {
                    // insert ad in current node
                } else {
                    // insert ad between the current and previous nodes
                }
            } else {
                // insert ad after the current node
            }
            break;
        }

    Am I over-complicating stuff, or am I missing a simpler, more elegant solution?
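
    A quick sketch of the same decision rule in Python, not from the thread; the node strings are placeholders, and a real version would walk the article's DOM text nodes in document order:

        # Decide where the ad goes, given the article's text nodes in document order.
        THRESHOLD = 200   # target character count before the ad
        DEVIATION = 20    # accepted slack on either side

        def find_ad_position(node_texts):
            """Return (node_index, offset): place the ad after node_index,
            or inside it at offset when neither node boundary is acceptable."""
            count = 0
            for i, text in enumerate(node_texts):
                previous_count = count
                count += len(text)
                if count < THRESHOLD:
                    continue
                if count > THRESHOLD + DEVIATION:
                    # Overshot: is the boundary before this node close enough? (case 4.2)
                    if previous_count >= THRESHOLD - DEVIATION:
                        return i - 1, None
                    # Neither boundary works: split the current node (case 4.3)
                    return i, THRESHOLD - previous_count
                return i, None  # boundary after this node is within tolerance (case 4.1)
            return len(node_texts) - 1, None  # article shorter than the threshold

        nodes = ["Lorem ipsum " * 10, "dolor sit amet " * 5, "consectetur " * 8]
        print(find_ad_position(nodes))  # -> (1, None): insert after the second node

    The structure maps directly onto the jQuery/DOM version; only the node iteration changes.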

    Read the article

  • c# event fires windows form incorrectly

    - by MikeW
    I'm trying to understand what's happening here. I have a CheckedListBox which contains some ticked and some un-ticked items. I'm trying to find a way of determining the delta in the selection of controls. I've tried some cumbersome like this - but only works part of the time, I'm sure there's a more elegant solution. A maybe related problem is the myCheckBox_ItemCheck event fires on form load - before I have a chance to perform an ItemCheck. Here's what I have so far: void clbProgs_ItemCheck(object sender, ItemCheckEventArgs e) { // i know its awful System.Windows.Forms.CheckedListBox cb = (System.Windows.Forms.CheckedListBox)sender; string sCurrent = e.CurrentValue.ToString(); int sIndex = e.Index; AbstractLink lk = (AbstractLink)cb.Items[sIndex]; List<ILink> _links = clbProgs.DataSource as List<ILink>; foreach (AbstractLink lkCurrent in _links) { if (!lkCurrent.IsActive) { if (!_groupValues.ContainsKey(lkCurrent.Linkid)) { _groupValues.Add(lkCurrent.Linkid, lkCurrent); } } } if (_groupValues.ContainsKey(lk.Linkid)) { AbstractLink lkDirty = (AbstractLink)lk.Clone(); CheckState newValue = (CheckState)e.NewValue; if (newValue == CheckState.Checked) { lkDirty.IsActive = true; } else if (newValue == CheckState.Unchecked) { lkDirty.IsActive = false; } if (_dirtyGroups.ContainsKey(lk.Linkid)) { _dirtyGroups[lk.Linkid] = lkDirty; } else { CheckState oldValue = (CheckState)e.NewValue; if (oldValue == CheckState.Checked) { lkDirty.IsActive = true; } else if (oldValue == CheckState.Unchecked) { lkDirty.IsActive = false; } _dirtyGroups.Add(lk.Linkid, lk); } } else { if (!lk.IsActive) { _dirtyGroups.Add(lk.Linkid, lk); } else { _groupValues.Add(lk.Linkid, lk); } } } Then onclick of a save button - I check whats changed before sending to database: private void btSave_Click(object sender, EventArgs e) { List<AbstractLink> originalList = new List<AbstractLink>(_groupValues.Values); List<AbstractLink> changedList = new List<AbstractLink>(_dirtyGroups.Values); IEnumerable<AbstractLink> dupes = originalList.ToArray<AbstractLink>().Intersect(changedList.ToArray<AbstractLink>()); foreach (ILink t in dupes) { MessageBox.Show("Changed"); } if (dupes.Count() == 0) { MessageBox.Show("No Change"); } } For further info. The definition of type AbstractLink uses: public bool Equals(ILink other) { if (Object.ReferenceEquals(other, null)) return false; if (Object.ReferenceEquals(this, other)) return true; return IsActive.Equals(other.IsActive) && Linkid.Equals(other.Linkid); }

    Read the article

  • How to prevent google chrome from caching my inputs, esp hidden ones when user click back?

    - by melaos
    hi there, i have an asp.net mvc app which have quite a few hidden inputs to keep values around and formatting their names so that i can use the Model binding later when i submit the form. i stumble into a weird bug with chrome which i don't have with IE or Firefox when the user submits the form and click on the back button, i find that chrome will keep my hidden input values as well. this whole chunk is generated via javascript hence i believe chrome is caching this. function addProductRow(productId, productName) { if (productName != "") { //use guid to ensure that the row never repeats var guid = $.Guid.New(); var temp = parseFloat($(".tboProductCount").val()); //need the span to workaround for chrome var szHTML = "<tr valign=\"top\" id=\"productRow\"><td class=\"productIdCol\"><input type=\"hidden\" id=productRegsID" + temp + "\" name=\"productRegs[" + temp + "].productId\" value=\"" + productId + "\"/>" + "<span id=\"spanProdID" + temp + "\" name=\"spanProdID" + temp + "\" >" + productId + "</span>" + "</td>" //+ "<td><input type=\"text\" id=\"productRegName\" name=\"productRegs[" + temp + "].productName\" value=\"" + productName + "\" class=\"productRegName\" size=\"50\" readonly=\"readonly\"/></td>" + "<td><span id=\"productRegName\" name=\"productRegs[" + temp + "].productName\" class=\"productRegName\">"+ productName + "<\span></td>" + "<td id=\"" + guid + "\" class=\"productrowguid\" \>" + "<input type=\"text\" size=\"20\" id=\"productSerialNo" + temp + "\" name=\"productRegs[" + temp + "].serialNo\" value=\"" + "\" class=\"productSerialNo\" maxlength=\"18\" />" + "<a class=\"fancybox\" id=\"btnImgSerialNo" + temp + "\" href=\"#divSerialNo" + temp + "\"><img class=\"btnImgSerialNo\" src=\"Images/landing_14.gif\" /></a>" + "<span id=\"snFlag" + temp + "\" class=\"redWarning\"></span></td>" + "<td><input type=\"text\" id=\"productRegDate" + temp + "\" name=\"productRegs[" + temp + "].PurchaseDate\" readonly=\"readonly\" />" + "<span id=\"snRegDate" + temp + "\" class=\"redWarning\"></span></td>" + "<td align=\"center\"><img style=\"cursor:pointer\" id=\"btnImgDelete\" src=\"Images/btn_remove.gif\" onclick=\"javascript:removeProductRow('" + guid + "')\" /><div style=\"display:none;\"><div id=\"divSerialNo" + temp + "\" style=\"font-family:verdana;font-size:11px;width:600px\">" + serialnumbergeneral + "<br /><br />" + getSNImageByCategory(productId) + "</div></div></td>" + "</tr>"; $(".ProductRegistrationTable").append(szHTML); $("a.fancybox").fancybox(); //initialization $("#productRegDate" + temp).datepicker({ minDate: new Date(1996, 1 - 1, 1), maxDate: 0 }); //sanity check //s7test alert('1 '+$("#spanProdID" + temp)); alert('2 '+$("#productRegsID" + temp)); } //end function addNewProductRow i need the id to be refreshed when the user select a new product, but putting another span tag beside it shows that the span will have the new id will the hidden input will still have the previous id. is there an elegant way to workaround this issue? thanks

    Read the article

  • What is a good generic sibling control Javascript communication strategy?

    - by James
    I'm building a webpage that is composed of several controls, and trying to come up with an effective somewhat generic client side sibling control communication model. One of the controls is the menu control. Whenever an item is clicked in here I wanted to expose a custom client side event that other controls can subscribe to, so that I can achieve a loosely coupled sibling control communication model. To that end I've created a simple Javascript event collection class (code below) that acts as like a hub for control event registration and event subscription. This code certainly gets the job done, but my question is is there a better more elegant way to do this in terms of best practices or tools, or is this just a fools errand? /// Event collection object - acts as the hub for control communication. function ClientEventCollection() { this.ClientEvents = {}; this.RegisterEvent = _RegisterEvent; this.AttachToEvent = _AttachToEvent; this.FireEvent = _FireEvent; function _RegisterEvent(eventKey) { if (!this.ClientEvents[eventKey]) this.ClientEvents[eventKey] = []; } function _AttachToEvent(eventKey, handlerFunc) { if (this.ClientEvents[eventKey]) this.ClientEvents[eventKey][this.ClientEvents[eventKey].length] = handlerFunc; } function _FireEvent(eventKey, triggerId, contextData ) { if (this.ClientEvents[eventKey]) { for (var i = 0; i < this.ClientEvents[eventKey].length; i++) { var fn = this.ClientEvents[eventKey][i]; if (fn) fn(triggerId, contextData); } } } } // load new collection instance. var myClientEvents = new bsdClientEventCollection(); // register events specific to the control that owns it, this will be emitted by each respective control. myClientEvents.RegisterEvent("menu-item-clicked"); Here is the part where this code above is consumed by source and subscriber controls. // menu control $(document).ready(function() { $(".menu > a").click( function(event) { //event.preventDefault(); myClientEvents.FireEvent("menu-item-clicked", $(this).attr("id"), null); }); }); <div style="float: left;" class="menu"> <a id="1" href="#">Menu Item1</a><br /> <a id="2" href="#">Menu Item2</a><br /> <a id="3" href="#">Menu Item3</a><br /> <a id="4" href="#">Menu Item4</a><br /> </div> // event subscriber control $(document).ready(function() { myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged); myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged2); myClientEvents.AttachToEvent("menu-item-clicked", menuItemChanged3); }); function menuItemChanged(id, contextData) { alert('menuItemChanged ' + id); } function menuItemChanged2(id, contextData) { alert('menuItemChanged2 ' + id); } function menuItemChanged3(id, contextData) { alert('menuItemChanged3 ' + id); }

    Read the article

  • php pdo connection scope

    - by Scarface
    Hey guys I have a connection class I found for pdo. I am calling the connection method on the page that the file is included on. The problem is that within functions the $conn variable is not defined even though I stated the method was public (bare with me I am very new to OOP), and I was wondering if anyone had an elegant solution other then using global in every function. Any suggestions are greatly appreciated. CONNECTION class PDOConnectionFactory{ // receives the connection public $con = null; // swich database? public $dbType = "mysql"; // connection parameters // when it will not be necessary leaves blank only with the double quotations marks "" public $host = "localhost"; public $user = "user"; public $senha = "password"; public $db = "database"; // arrow the persistence of the connection public $persistent = false; // new PDOConnectionFactory( true ) <--- persistent connection // new PDOConnectionFactory() <--- no persistent connection public function PDOConnectionFactory( $persistent=false ){ // it verifies the persistence of the connection if( $persistent != false){ $this->persistent = true; } } public function getConnection(){ try{ // it carries through the connection $this->con = new PDO($this->dbType.":host=".$this->host.";dbname=".$this->db, $this->user, $this->senha, array( PDO::ATTR_PERSISTENT => $this->persistent ) ); // carried through successfully, it returns connected return $this->con; // in case that an error occurs, it returns the error; }catch ( PDOException $ex ){ echo "We are currently experiencing technical difficulties. We have a bunch of monkies working really hard to fix the problem. Check back soon: ".$ex->getMessage(); } } // close connection public function Close(){ if( $this->con != null ) $this->con = null; } } PAGE USED ON include("includes/connection.php"); $db = new PDOConnectionFactory(); $conn = $db->getConnection(); function test(){ try{ $sql = 'SELECT * FROM topic'; $stmt = $conn->prepare($sql); $result=$stmt->execute(); } catch(PDOException $e){ echo $e->getMessage(); } } test();

    Read the article

  • Model Binding with Parent/Child Relationship

    - by user296297
    I'm sure this has been answered before, but I've spent the last three hours looking for an acceptable solution and have been unable to find anything, so I apologize for what I'm sure is a repeat. I have two domain objects, Player and Position. Player's have a Position. My domain objects are POCOs tied to my database with NHibernate. I have an Add action that takes a Player, so I'm using the built in model binding. On my view I have a drop down list that lets a user select the Position for the Player. The value of the drop down list is the Id of the position. Everything gets populated correctly except that my Position object fails validation (ModelState.IsValid) because at the point of model binding it only has an Id and none of it's other required attributes. What is the preferred solution for solving this with ASP.NET MVC 2? Solutions I've tried... Fetch the Position from the database based on the Id before ModelState.IsValid is called in the Add action of my controller. I can't get the model to run the validation again, so ModelState.IsValid always returns false. Create a custom ModelBinder that inherits from the default binder and fetch the Position from the database after the base binder is called. The ModelBinder seems to be doing the validation so if I use anything from the default binder I'm hosed. Which means I have to completely roll my own binder and grab every value from the form...this seems really wrong and inefficient for such a common use-case. Solutions I think might work, I just can't figure out how to do... Turn off the validation for the Position class when used in Player. Write a custom ModelBinder leverages the default binder for most of the property binding, but lets me get the Position from the database BEFORE the default binder runs validation. So, how do the rest of you solve this? Thanks, Dan P.S. In my opinion having a PositionId on Player just for this case is not a good solution. There has to be solvable in a more elegant fashion.

    Read the article

  • MATLAB query about for loop, reading in data and plotting

    - by mp7
    Hi there, I am a complete novice at using matlab and am trying to work out if there is a way of optimising my code. Essentially I have data from model outputs and I need to plot them using matlab. In addition I have reference data (with 95% confidence intervals) which I plot on the same graph to get a visual idea on how close the model outputs and reference data is. In terms of the model outputs I have several thousand files (number sequentially) which I open in a loop and plot. The problem/question I have is whether I can preprocess the data and then plot later - to save time. The issue I seem to be having when I try this is that I have a legend which either does not appear or is inaccurate. My code (apolgies if it not elegant): fn= xlsread(['tbobserved' '.xls']); time= fn(:,1); totalreference=fn(:,4); totalreferencelowerci=fn(:,6); totalreferenceupperci=fn(:,7); figure plot(time,totalrefrence,'-', time, totalreferencelowerci,'--', time, totalreferenceupperci,'--'); xlabel('Year'); ylabel('Reference incidence per 100,000 population'); title ('Total'); clickableLegend('Observed reference data', 'Totalreferencelowerci', 'Totalreferenceupperci','Location','BestOutside'); xlim([1910 1970]); hold on start_sim=10000; end_sim=10005; h = zeros (1,1000); for i=start_sim:end_sim %is there any way of doing this earlier to save time? a=int2str(i); incidenceFile =strcat('result_', 'Sim', '_', a, 'I_byCal_total.xls'); est_tot=importdata(incidenceFile, '\t', 1); cal_tot=est_tot.data; magnitude=1; t1=cal_tot(:,1)+1750; totalmodel=cal_tot(:,3)+cal_tot(:,5); h(a)=plot(t1,totalmodel); xlim([1910 1970]); ylim([0 500]); hold all clickableLegend(h(a),a,'Location','BestOutside') end Essentially I was hoping to have a way of reading in the data and then plot later - ie. optimise the code. I hope you might be able to help. Thanks. mp

    Read the article

  • .NET Windows Service with timer stops responding

    - by Biri
    I have a windows service written in c#. It has a timer inside, which fires some functions on a regular basis. So the skeleton of my service: public partial class ArchiveService : ServiceBase { Timer tickTack; int interval = 10; ... protected override void OnStart(string[] args) { tickTack = new Timer(1000 * interval); tickTack.Elapsed += new ElapsedEventHandler(tickTack_Elapsed); tickTack.Start(); } protected override void OnStop() { tickTack.Stop(); } private void tickTack_Elapsed(object sender, ElapsedEventArgs e) { ... } } It works for some time (like 10-15 days) then it stops. I mean the service shows as running, but it does not do anything. I make some logging and the problem can be the timer, because after the interval it does not call the tickTack_Elapsed function. I was thinking about rewrite it without a timer, using an endless loop, which stops the processing for the amount of time I set up. This is also not an elegant solution and I think it can have some side effects regarding memory. The Timer is used from the System.Timers namespace, the environment is Windows 2003. I used this approach in two different services on different servers, but both is producing this behavior (this is why I thought that it is somehow connected to my code or the framework itself). Does somebody experienced this behavior? What can be wrong? Edit: I edited both services. One got a nice try-catch everywhere and more logging. The second got a timer-recreation on a regular basis. None of them stopped since them, so if this situation remains for another week, I will close this question. Thank you for everyone so far. Edit: I close this question because nothing happened. I mean I made some changes, but those changes are not really relevant in this matter and both services are running without any problem since then. Please mark it as "Closed for not relevant anymore".

    Read the article

  • Casting to derived type problem in C++

    - by GONeale
    Hey there everyone, I am quite new to C++, but have worked with C# for years, however it is not helping me here! :) My problem: I have an Actor class which Ball and Peg both derive from on an objective-c iphone game I am working on. As I am testing for collision, I wish to set an instance of Ball and Peg appropriately depending on the actual runtime type of actorA or actorB. My code that tests this as follows: // Actors that collided Actor *actorA = (Actor*) bodyA->GetUserData(); Actor *actorB = (Actor*) bodyB->GetUserData(); Ball* ball; Peg* peg; if (static_cast<Ball*> (actorA)) { // true ball = static_cast<Ball*> (actorA); } else if (static_cast<Ball*> (actorB)) { ball = static_cast<Ball*> (actorB); } if (static_cast<Peg*> (actorA)) { // also true?! peg = static_cast<Peg*> (actorA); } else if (static_cast<Peg*> (actorB)) { peg = static_cast<Peg*> (actorB); } if (peg != NULL) { [peg hitByBall]; } Once ball and peg are set, I then proceed to run the hitByBall method (objective c). Where my problem really lies is in the casting procedurel Ball casts fine from actorA; the first if (static_cast<>) statement steps in and sets the ball pointer appropriately. The second step is to assign the appropriate type to peg. I know peg should be a Peg type and I previously know it will be actorB, however at runtime, detecting the types, I was surprised to find actually the third if (static_cast<>) statement stepped in and set this, this if statement was to check if actorA was a Peg, which we already know actorA is a Ball! Why would it have stepped here and not in the fourth if statement? The only thing I can assume is how casting works differently from c# and that is it finds that actorA which is actually of type Ball derives from Actor and then it found when static_cast<Peg*> (actorA) is performed it found Peg derives from Actor too, so this is a valid test? This could all come down to how I have misunderstood the use of static_cast. How can I achieve what I need? :) I'm really uneasy about what feels to me like a long winded brute-casting attempt here with a ton of ridiculous if statements. I'm sure there is a more elegant way to achieve a simple cast to Peg and cast to Ball dependent on actual type held in actorA and actorB. Hope someone out there can help! :) Thanks a lot.

    Read the article

  • ZF Autoloader to load ancestor and requested class

    - by Pekka
    I am integrating Zend Framework into an existing application. I want to switch the application over to Zend's autoloading mechanism to replace dozens of include() statements. I have a specific requirement for the autoloading mechanism, though. Allow me to elaborate. The existing application uses a core library (independent from ZF), for example: /Core/Library/authentication.php /Core/Library/translation.php /Core/Library/messages.php this core library is to remain untouched at all times and serves a number of applications. The library contains classes like class ancestor_authentication { ... } class ancestor_translation { ... } class ancestor_messages { ... } in the application, there is also a Library directory: /App/Library/authentication.php /App/Library/translation.php /App/Library/messages.php these includes extend the ancestor classes and are the ones that actually get instantiated in the application. class authentication extends ancestor_authentication { } class translation extends ancestor_translation { } class messages extends ancestor_messages { } usually, these class definitions are empty. They simply extend their ancestors and provide the class name to instantiate. $authentication = new authentication(); The purpose of this solution is to be able to easily customize aspects of the application without having to patch the core libraries. Now, the autoloader I need would have to be aware of this structure. When an object of the class authentication is requested, the autoloader would have to: 1. load /Core/Library/authentication.php 2. load /App/Library/authentication.php My current approach would be creating a custom function, and binding that to Zend_Loader_Autoloader for a specific namespace prefix. Is there already a way to do this in Zend that I am overlooking? The accepted answer in this question kind of implies there is, but that may be just a bad choice of wording. Are there extensions to the Zend Autoloader that do this? Can you - I am new to ZF - think of an elegant way, conforming with the spirit of the framework, of extending the Autoloader with this functionality? I'm not necessary looking for a ready-made implementation, some pointers (This should be an extension to the xyz method that you would call like this...) would already be enough.

    Read the article

  • ASP.Net / MySQL : Translating content into several languages

    - by philwilks
    I have an ASP.Net website which uses a MySQL database for the back end. The website is an English e-commerce system, and we are looking at the possibility of translating it into about five other languages (French, Spanish etc). We will be getting human translators to perform the translation - we've looked at automated services but these aren't good enough. The static text on the site (e.g. headings, buttons etc) can easily be served up in multiple languages via .Net's built in localization features (resx files etc). The thing that I'm not so sure about it how best to store and retrieve the multi-language content in the database. For example, there is a products table that includes these fields... productId (int) categoryId (int) title (varchar) summary (varchar) description (text) features (text) The title, summary, description and features text would need to be available in all the different languages. Here are the two options that I've come up with... Create additional field for each language For example we could have titleEn, titleFr, titleEs etc for all the languages, and repeat this for all text columns. We would then adapt our code to use the appropriate field depending on the language selected. This feels a bit hacky, and also would lead to some very large tables. Also, if we wanted to add additional languages in the future it would be time consuming to add even more columns. Use a lookup table We could create a new table with the following format... textId | languageId | content ------------------------------- 10 | EN | Car 10 | FR | Voiture 10 | ES | Coche 11 | EN | Bike 11 | FR | Vélo We'd then adapt our products table to reference the appropriate textId for the title, summary, description and features instead of having the text stored in the product table. This seems much more elegant, but I can't think of a simple way of getting this data out of the database and onto the page without using complex SQL statements. Of course adding new languages in the future would be very simple compared to the previous option. I'd be very grateful for any suggestions about the best way to achieve this! Is there any "best practice" guidance out there? Has anyone done this before?

    Read the article

  • current_user.user_type_id = @employer ID

    - by sscirrus
    I am building a system with a User model (authenticated using AuthLogic) and three user types in three models; one of these models is Employer. Each of these three models has_many :users, :as => :authenticable. I start by having a new visitor to the site create their own 'User' record with username, password, which user type they are, etc. Upon creation, the user is sent to the 'new' action for one of the three models. So, if they tell us they are an employer, we redirect_to :controller => "employers", :action => "new". Question: when the employer record has been submitted, I want to set current_user.user_type_id equal to the employer ID. This should be simple... but it's not working.

        # Employers Controller / new
        def new
          @employer = Employer.new
          1.times {@employer.addresses.build}
          render :layout => 'forms'
        end

        # Employers Controller / create
        def create
          @employer = Employer.new(params[:employer])
          if @employer.save
            if current_user.blank?
              redirect_to :controller => "users", :action => "new"
            else
              current_user.user_type_id = @employer.id
              current_user.user_type = "Employer"
              redirect_to :action => "home", :id => current_user.user_type_id
            end
          else
            render :action => "new"
          end
        end

    ------UPDATE------ Hi guys. In response: I am using this table structure because each of my three user type models has lots of different fields and each has different relationships to the other models, which is why I've avoided STI. By 1.times {@employer.addresses.build} I'm connecting the employer model to the polymorphic address table in one form, so I'm asking the controller to build a new address to go along with the new employer. Averell: you mentioned encapsulating... something in the model using a 'setter' method. I have no idea what you mean by this - could you please explain how this works (or direct me to an example elsewhere)? With tsdbrown's answer I have managed to create the behavior I want... if there's a more elegant way to accomplish the same thing I'd love to learn how. Thanks very much. Thanks to tsdbrown for answering the current_user.save problem!

    Read the article

  • [C++] Multiple inheritance from template class

    - by Tom P.
    Hello, I'm having issues with multiple inheritance from different instantiations of the same template class. Specifically, I'm trying to do this: template <class T> class Base { public: Base() : obj(NULL) { } virtual ~Base() { if( obj != NULL ) delete obj; } template <class T> T* createBase() { obj = new T(); return obj; } protected: T* obj; }; class Something { // ... }; class SomethingElse { // ... }; class Derived : public Base<Something>, public Base<SomethingElse> { }; int main() { Derived* d = new Derived(); Something* smth1 = d->createBase<Something>(); SomethingElse* smth2 = d->createBase<SomethingElse>(); delete d; return 0; } When I try to compile the above code, I get the following errors: 1>[...](41) : error C2440: '=' : cannot convert from 'SomethingElse *' to 'Something *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast 1> [...](71) : see reference to function template instantiation 'T *Base<Something>::createBase<SomethingElse>(void)' being compiled 1> with 1> [ 1> T=SomethingElse 1> ] 1>[...](43) : error C2440: 'return' : cannot convert from 'Something *' to 'SomethingElse *' 1> Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast The issue seems to be ambiguity due to member obj being inherited from both Base< Something and Base< SomethingElse , and I can work around it by disambiguating my calls to createBase: Something* smth1 = d->Base<Something>::createBase<Something>(); SomethingElse* smth2 = d->Base<SomethingElse>::createBase<SomethingElse>(); However, this solution is dreadfully impractical, syntactically speaking, and I'd prefer something more elegant. Moreover, I'm puzzled by the first error message. It seems to imply that there is an instantiation createBase< SomethingElse in Base< Something , but how is that even possible? Any information or advice regarding this issue would be much appreciated.

    Read the article

  • Split string in C every white space

    - by redsolja
    I want to write a program in C that displays each word of a whole sentence (taken as input) at a seperate line. This is what i have done so far: void manipulate(char *buffer); int get_words(char *buffer); int main(){ char buff[100]; printf("sizeof %d\nstrlen %d\n", sizeof(buff), strlen(buff)); // Debugging reasons bzero(buff, sizeof(buff)); printf("Give me the text:\n"); fgets(buff, sizeof(buff), stdin); manipulate(buff); return 0; } int get_words(char *buffer){ // Function that gets the word count, by counting the spaces. int count; int wordcount = 0; char ch; for (count = 0; count < strlen(buffer); count ++){ ch = buffer[count]; if((isblank(ch)) || (buffer[count] == '\0')){ // if the character is blank, or null byte add 1 to the wordcounter wordcount += 1; } } printf("%d\n\n", wordcount); return wordcount; } void manipulate(char *buffer){ int words = get_words(buffer); char *newbuff[words]; char *ptr; int count = 0; int count2 = 0; char ch = '\n'; ptr = buffer; bzero(newbuff, sizeof(newbuff)); for (count = 0; count < 100; count ++){ ch = buffer[count]; if (isblank(ch) || buffer[count] == '\0'){ buffer[count] = '\0'; if((newbuff[count2] = (char *)malloc(strlen(buffer))) == NULL) { printf("MALLOC ERROR!\n"); exit(-1); } strcpy(newbuff[count2], ptr); printf("\n%s\n",newbuff[count2]); ptr = &buffer[count + 1]; count2 ++; } } } Although the output is what i want, i have really many black spaces after the final word displayed, and the malloc() returns NULL so the MALLOC ERROR! is displayed in the end. I can understand that there is a mistake at my malloc() implementation but i do not know what it is. Is there another more elegant - generally better way to do it? Thanks in advance.

    Read the article

  • In R, how do you get the best fitting equation to a set of data?

    - by Matherion
    I'm not sure wether R can do this (I assume it can, but maybe that's just because I tend to assume that R can do anything :-)). What I need is to find the best fitting equation to describe a dataset. For example, if you have these points: df = data.frame(x = c(1, 5, 10, 25, 50, 100), y = c(100, 75, 50, 40, 30, 25)) How do you get the best fitting equation? I know that you can get the best fitting curve with: plot(loess(df$y ~ df$x)) But as I understood you can't extract the equation, see Loess Fit and Resulting Equation. When I try to build it myself (note, I'm not a mathematician, so this is probably not the ideal approach :-)), I end up with smth like: y.predicted = 12.71 + ( 95 / (( (1 + df$x) ^ .5 ) / 1.3)) Which kind of seems to approximate it - but I can't help to think that smth more elegant probably exists :-) I have the feeling that fitting a linear or polynomial model also wouldn't work, because the formula seems different from what those models generally use (i.e. this one seems to need divisions, powers, etc). For example, the approach in Fitting polynomial model to data in R gives pretty bad approximations. I remember from a long time ago that there exist languages (Matlab may be one of them?) that do this kind of stuff. Can R do this as well, or am I just at the wrong place? (Background info: basically, what we need to do is find an equation for determining numbers in the second column based on the numbers in the first column; but we decide the numbers ourselves. We have an idea of how we want the curve to look like, but we can adjust these numbers to an equation if we get a better fit. It's about the pricing for a product (a cheaper alternative to current expensive software for qualitative data analysis); the more 'project credits' you buy, the cheaper it should become. Rather than forcing people to buy a given number (i.e. 5 or 10 or 25), it would be nicer to have a formula so people can buy exactly what they need - but of course this requires a formula. We have an idea for some prices we think are ok, but now we need to translate this into an equation.
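
    Not R, and not an answer from the thread: the same idea sketched with Python/SciPy purely for illustration (R's nls() plays the identical role): choose a parametric form up front, then let nonlinear least squares estimate the constants. The form below simply mirrors the hand-tuned guess in the question.

        import numpy as np
        from scipy.optimize import curve_fit

        x = np.array([1, 5, 10, 25, 50, 100], dtype=float)
        y = np.array([100, 75, 50, 40, 30, 25], dtype=float)

        def price_curve(x, a, b, c):
            # Same shape as the manual attempt: a + b / (1 + x)**c
            return a + b / (1.0 + x) ** c

        params, _ = curve_fit(price_curve, x, y, p0=(12.0, 120.0, 0.5))
        a, b, c = params
        print("y ~ %.2f + %.2f / (1 + x)^%.2f" % (a, b, c))
        print(price_curve(x, *params))   # fitted values next to the observed y

    In R the equivalent step is a call to nls() with the same formula and a start list for a, b and c; either way the "equation" is whatever parametric family you decide to fit, with the constants estimated from the data.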

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fie

    - by plg
    First, Python Newbie; be patient/kind. Next, once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if the record is duplicated, I compress all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this:

        # Compress Full Part to a Compressed Part and Check for Duplicates
        # on Supplier Code and Compressed Part combination
        import sys
        import re
        import time

        start=time.time()
        try:
            file1 = open("C:\Accounting\May Accounting\May.txt", "r")
        except IOError:
            print sys.stderr, "Cannot Open Read File"
            sys.exit(1)
        try:
            file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
        except IOError:
            print sys.stderr, "Cannot Open Write File"
            sys.exit(1)

        hdrList="CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
        file2.write(hdrList+chr(10))
        lines_seen=set()
        affirm="YES"
        records = file1.readlines()
        for record in records:
            fields = record.split(chr(124))
            if fields[0]=="CIGSupplier":
                continue  #If incoming file has a header line, skip it
            file2.write(fields[0]+"|"),   #Supplier Code
            file2.write(fields[1]+"|"),   #Full_Part
            file2.write(fields[2]+"|"),   #Part Status
            file2.write(fields[3]+"|"),   #Alias Flag
            file2.write(re.sub("[$\r\n]", "", fields[4])+"|"),   #Acquisition Flag
            file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1])+"|"),   #Compressed_Part
            dupechk=fields[0]+"|"+re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if dupechk not in lines_seen:
                file2.write(chr(10))
                lines_seen.add(dupechk)
            else:
                file2.write(affirm+chr(10))

        print "it took", time.time() - start, "seconds."
        file2.close()
        file1.close()

    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading/querying/exporting a file this size in Access takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
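
    A rough single-pass sketch (not the poster's code) of the "go back and get the previous record" part: remember the first line seen for each supplier/compressed-part key and, when a duplicate turns up, write the stored original next to it. The file names and the pipe-delimited layout are assumptions taken from the description above.

        import re

        first_seen = {}  # (supplier, compressed part) -> first full record with that key

        with open("May.txt") as source, open("duplicate_pairs.txt", "w") as matches:
            for line in source:
                fields = line.rstrip("\r\n").split("|")
                if len(fields) < 2 or fields[0] == "CIGSupplier":
                    continue  # skip malformed lines and the header row
                compressed = re.sub(r"[^0-9a-zA-Z.#]", "", fields[1])
                key = (fields[0], compressed)
                if key in first_seen:
                    # Duplicate: emit the stored original alongside this record
                    matches.write(first_seen[key] + "|ORIGINAL\n")
                    matches.write(line.rstrip("\r\n") + "|DUPLICATE\n")
                else:
                    first_seen[key] = line.rstrip("\r\n")

    One stored line per unique key should still fit in memory for 7 million rows, which avoids the round trip through Access entirely.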

    Read the article

  • Project Euler #18 - how to brute force all possible paths in tree-like structure using Python?

    - by euler user
    Am trying to learn Python the Atlantic way and am stuck on Project Euler #18. All of the stuff I can find on the web (and there's a LOT more googling that happened beyond that) is some variation on 'well you COULD brute force it, but here's a more elegant solution'... I get it, I totally do. There are really neat solutions out there, and I look forward to the day where the phrase 'acyclic graph' conjures up something more than a hazy, 1 megapixel resolution in my head. But I need to walk before I run here, see the state, and toy around with the brute force answer. So, question: how do I generate (enumerate?) all valid paths for the triangle in Project Euler #18 and store them in an appropriate python data structure? (A list of lists is my initial inclination?). I don't want the answer - I want to know how to brute force all the paths and store them into a data structure. Here's what I've got. I'm definitely looping over the data set wrong. The desired behavior would be to go 'depth first(?)' rather than just looping over each row ineffectually.. I read ch. 3 of Norvig's book but couldn't translate the psuedo-code. Tried reading over the AIMA python library for ch. 3 but it makes too many leaps. triangle = [ [75], [95, 64], [17, 47, 82], [18, 35, 87, 10], [20, 4, 82, 47, 65], [19, 1, 23, 75, 3, 34], [88, 2, 77, 73, 7, 63, 67], [99, 65, 4, 28, 6, 16, 70, 92], [41, 41, 26, 56, 83, 40, 80, 70, 33], [41, 48, 72, 33, 47, 32, 37, 16, 94, 29], [53, 71, 44, 65, 25, 43, 91, 52, 97, 51, 14], [70, 11, 33, 28, 77, 73, 17, 78, 39, 68, 17, 57], [91, 71, 52, 38, 17, 14, 91, 43, 58, 50, 27, 29, 48], [63, 66, 4, 68, 89, 53, 67, 30, 73, 16, 69, 87, 40, 31], [04, 62, 98, 27, 23, 9, 70, 98, 73, 93, 38, 53, 60, 4, 23], ] def expand_node(r, c): return [[r+1,c+0],[r+1,c+1]] all_paths = [] my_path = [] for i in xrange(0, len(triangle)): for j in xrange(0, len(triangle[i])): print 'row ', i, ' and col ', j, ' value is ', triangle[i][j] ??my_path = somehow chain these together??? if my_path not in all_paths all_paths.append(my_path) Answers that avoid external libraries (like itertools) preferred.
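
    Since the question is just "how do I enumerate and store every path", here is a minimal sketch with a small made-up triangle (not the puzzle data) and no itertools: at every row a path either keeps the same column or moves one column to the right.

        triangle = [
            [3],
            [7, 4],
            [2, 4, 6],
            [8, 5, 9, 3],
        ]

        def all_paths(tri):
            """Return a list of lists; each inner list holds the values along one path."""
            paths = []

            def walk(row, col, path_so_far):
                path_so_far = path_so_far + [tri[row][col]]
                if row == len(tri) - 1:       # bottom row reached: store the path
                    paths.append(path_so_far)
                    return
                walk(row + 1, col, path_so_far)      # child directly below
                walk(row + 1, col + 1, path_so_far)  # child below and to the right

            walk(0, 0, [])
            return paths

        paths = all_paths(triangle)
        print(len(paths))   # 2**(rows - 1) paths: 8 here, 16384 for the 15-row triangle
        print(paths[0])     # [3, 7, 2, 8]

    With every path stored as a list of values, the brute-force answer is just a max over their sums; the same depth-first walk is also the structure the non-brute-force solutions prune.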

    Read the article

  • How can I improve my select query for storing large versioned data sets?

    - by Jason Francis
    At work, we build large multi-page web applications, consisting mostly of radio and check boxes. The primary purpose of each application is to gather data, but as users return to a page they have previously visited, we report back to them their previous responses. Worst-case scenario, we might have up to 900 distinct variables and around 1.5 million users. For several reasons, it makes sense to use an insert-only approach to storing the data (as opposed to update-in-place) so that we can capture historical data about repeated interactions with variables. The net result is that we might have several responses per user per variable. Our table to collect the responses looks something like this: CREATE TABLE [dbo].[results]( [id] [bigint] IDENTITY(1,1) NOT NULL, [userid] [int] NULL, [variable] [varchar](8) NULL, [value] [tinyint] NULL, [submitted] [smalldatetime] NULL) Where id serves as the primary key. Virtually every request results in a series of insert statements (one per variable submitted), and then we run a select to produce previous responses for the next page (something like this): SELECT t.id, t.variable, t.value FROM results t WITH (NOLOCK) WHERE t.userid = '2111846' AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete') AND t.id IN (SELECT MAX(id) AS id FROM results WITH (NOLOCK) WHERE userid = '2111846' AND (t.variable='internat' OR t.variable='veteran' OR t.variable='athlete') GROUP BY variable) Which, in this case, would return the most recent responses for the variables "internat", "veteran", and "athlete" for user 2111846. We have followed the advice of the database tuning tools in indexing the tables, and against our data, this is the best-performing version of the select query that we have been able to come up with. Even so, there seems to be significant performance degradation as the table approaches 1 million records (and we might have about 150x that). We have a fairly-elegant solution in place for sharding the data across multiple tables which has been working quite well, but I am open for any advice about how I might construct a better version of the select query. We use this structure frequently for storing lots of independent data points, and we like the benefits it provides. So the question is, how can I improve the performance of the select query? I assume the nested select statement is a bad idea, but I have yet to find an alternative that performs as well. Thanks in advance. NB: Since we emphasize creating over reading in this case, and since we never update in place, there doesn't seem to be any penalty (and some advantage) for using the NOLOCK directive in this case.

    Read the article

  • RequestFactoryEditorDriver getting edited data after flush

    - by Deanna
    Let me start with I have a solution, but I don't really think it is elegant. So, I am looking for a cleaner way to do this. I have an EntityProxy displayed in a view panel. The view panel is a RequestFactoryEditorDriver only using display mode. The user clicks on a data element and opens a popup editor to edit a data element of the EntityProxy with a few more bits of data than is displayed in the view panel. When the user saves the element I need the view panel to update the display. I ran into a problem because the RequestFactoryEditorDriver of the popup editor flow doesn't let you get to the edited data. The driver uses the passed in context and sends it to the server. The context returned out of flush only allows a Receiver even if you cast it to the type of context you stored in the editor driver in the edit() call. It doesn't appear to send and EntityProxyChanged event either, so I couldn't listen for that and update the display view. The solution I found was to change my domain object persist to return the newly saved entity. Then create the popup editor like this editor.getSaveButtonClickHandler().addClickHandler(createSaveHandler(driver, editor)); // initialize the Driver and edit the given text. driver.initialize(rf, editor); PlayerProfileCtx ctx = rf.playerProfile(); ctx.persist().using(playerProfile).with(driver.getPaths()) .to(new Receiver<PlayerProfileProxy>(){ @Override public void onSuccess(PlayerProfileProxy profile) { editor.hide(); playerProfile = profile; viewDriver.display(playerProfile); } }); driver.edit(playerProfile, ctx); editor.centerAndShow(); Then in the save handler I just fire the context I get from the flush. While this approach works, it doesn't seem right. It would seem I should subscribe to the entitychanged event in the display view and update the entity and the view from there. Also this approach saves the complete entity, not just the changed bits, which will increase bandwidth usage. What I would think should happen, is when you flush the entity it should 'optimistically' update the rf managed version of the entity and fire the entity proxy changed event. Only reverting the entity if something went wrong in the save. The actual save should only send the changed bits. In this way there isn't a need to refetch the whole entity and send that complete data over the wire twice. Is there a better solution?

    Read the article

  • I would like to run my Java program on System Startup on Mac OS/Windows. How can I do this?

    - by Misha Koshelev
    Here is what I came up with. It works but I was wondering if there is something more elegant. Thank you! Misha package com.mksoft.fbbday; import java.io.BufferedReader; import java.io.InputStreamReader; import java.io.IOException; import java.net.URI; import java.net.URISyntaxException; import java.io.File; import java.io.FileWriter; import java.io.PrintWriter; public class RunOnStartup { public static void install(String className,boolean doInstall) throws IOException,URISyntaxException { String osName=System.getProperty("os.name"); String fileSeparator=System.getProperty("file.separator"); String javaHome=System.getProperty("java.home"); String userHome=System.getProperty("user.home"); File jarFile=new File(RunOnStartup.class.getProtectionDomain().getCodeSource().getLocation().toURI()); if (osName.startsWith("Windows")) { Process process=Runtime.getRuntime().exec("reg query \"HKCU\\Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders\" /v Startup"); BufferedReader in=new BufferedReader(new InputStreamReader(process.getInputStream())); String result="",line; while ((line=in.readLine())!=null) { result+=line; } in.close(); result=result.replaceAll(".*REG_SZ[ ]*",""); File startupFile=new File(result+fileSeparator+jarFile.getName().replaceFirst(".jar",".bat")); if (doInstall) { PrintWriter out=new PrintWriter(new FileWriter(startupFile)); out.println("@echo off"); out.println("start /min \"\" \""+javaHome+fileSeparator+"bin"+fileSeparator+"java.exe -cp "+jarFile+" "+className+"\""); out.close(); } else { if (startupFile.exists()) { startupFile.delete(); } } } else if (osName.startsWith("Mac OS")) { File startupFile=new File(userHome+"/Library/LaunchAgents/com.mksoft."+jarFile.getName().replaceFirst(".jar",".plist")); if (doInstall) { PrintWriter out=new PrintWriter(new FileWriter(startupFile)); out.println(""); out.println(""); out.println(""); out.println(""); out.println(" Label"); out.println(" com.mksoft."+jarFile.getName().replaceFirst(".jar","")+""); out.println(" ProgramArguments"); out.println(" "); out.println(" "+javaHome+fileSeparator+"bin"+fileSeparator+"java"); out.println(" -cp"); out.println(" "+jarFile+""); out.println(" "+className+""); out.println(" "); out.println(" RunAtLoad"); out.println(" "); out.println(""); out.println(""); out.close(); } else { if (startupFile.exists()) { startupFile.delete(); } } } } }

    Read the article
