Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Calling a function within a class

    - by JM4
    I am having trouble calling a specific function within a class. The call is made:

      case "Mod10":
          if (!validateCreditCard($fields[$field_name]))
              $errors[] = $error_message;
          break;

    and the class code is:

      class CreditCardValidationSolution {
          var $CCVSNumber      = '';
          var $CCVSNumberLeft  = '';
          var $CCVSNumberRight = '';
          var $CCVSType        = '';
          var $CCVSError       = '';

          function validateCreditCard($Number) {
              $this->CCVSNumber      = '';
              $this->CCVSNumberLeft  = '';
              $this->CCVSNumberRight = '';
              $this->CCVSType        = '';
              $this->CCVSError       = '';

              // Catch malformed input.
              if (empty($Number) || !is_string($Number)) {
                  $this->CCVSError = $CCVSErrNumberString;
                  return FALSE;
              }

              // Ensure number doesn't overrun.
              $Number = substr($Number, 0, 20);

              // Remove non-numeric characters.
              $this->CCVSNumber = preg_replace('/[^0-9]/', '', $Number);

              // Set up variables.
              $this->CCVSNumberLeft  = substr($this->CCVSNumber, 0, 4);
              $this->CCVSNumberRight = substr($this->CCVSNumber, -4);
              $NumberLength = strlen($this->CCVSNumber);
              $DoChecksum = 'Y';

              // Mod10 checksum process...
              if ($DoChecksum == 'Y') {
                  $Checksum = 0;

                  // Add even digits in even length strings or odd digits in odd length strings.
                  for ($Location = 1 - ($NumberLength % 2); $Location < $NumberLength; $Location += 2) {
                      $Checksum += substr($this->CCVSNumber, $Location, 1);
                  }

                  // Analyze odd digits in even length strings or even digits in odd length strings.
                  for ($Location = ($NumberLength % 2); $Location < $NumberLength; $Location += 2) {
                      $Digit = substr($this->CCVSNumber, $Location, 1) * 2;
                      if ($Digit < 10) {
                          $Checksum += $Digit;
                      } else {
                          $Checksum += $Digit - 9;
                      }
                  }

                  // Checksums not divisible by 10 are bad.
                  if ($Checksum % 10 != 0) {
                      $this->CCVSError = $CCVSErrChecksum;
                      return FALSE;
                  }
              }
              return TRUE;
          }
      }

    When I run the application I get the following message:

      Fatal error: Call to undefined function validateCreditCard() in C:\xampp\htdocs\validation.php on line 339

    Any ideas?
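
    A likely cause: validateCreditCard() is a method of CreditCardValidationSolution, not a free function, so it has to be called on an instance. A minimal sketch of the call site, assuming the class file is already included:

      // Sketch: instantiate the class, then call the method on the instance.
      case "Mod10":
          $validator = new CreditCardValidationSolution();
          if (!$validator->validateCreditCard($fields[$field_name])) {
              $errors[] = $error_message;
          }
          break;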


  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

      item_names:    id | name | description | picture | common (BOOL)
      items:         id | item_name_id | picture | price | description
      item_synonyms: id | item_name_id | name | error (BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). description and picture of the item_names table are "globals" that can optionally be overridden by the "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

    Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")

    Autocompletion: Again, a simple query to the item_names table. I can avoid the usage of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The downside would be:

    Overhead: When inserting an item, I query item_names to see if this name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). Updating is the combination of both.

    Weird item names: Store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

      items: id | name | picture | price | description

    (... with item_names and item_synonyms as utility tables that I could query.)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School" and "City" entries? Is the first schema or the second better/optimal for search? Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
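
    For the search upside, a sketch of the synonym-aware lookup (table and column names taken from the schema above; MySQL-flavoured syntax assumed):

      -- Find item ids whose canonical name OR any synonym matches the query.
      SELECT i.id
      FROM items i
      JOIN item_names n ON n.id = i.item_name_id
      WHERE n.name LIKE CONCAT('%', @query, '%')
         OR EXISTS (SELECT 1
                    FROM item_synonyms s
                    WHERE s.item_name_id = n.id
                      AND s.name LIKE CONCAT('%', @query, '%'));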


  • .NET: Is there a way to finagle a default namespace in an XPath 1.0 query?

    - by Cheeso
    I'm building a tool that performs XPath 1.0 queries on XHTML documents. The requirement to use a namespace prefix in the query is killing me. The query looks like this (all on one line):

      html/body/div[@class='contents']/div[@class='body']/
      div[@class='pgdbbyauthor']/h2[a[@name][starts-with(.,'Quick')]]/
      following-sibling::ul[1]/li/a

    ...which is bad enough, except because it's XPath 1.0, I need to use an explicit namespace prefix on each QName, so it looks like this:

      ns1:html/ns1:body/ns1:div[@class='contents']/ns1:div[@class='body']/
      ns1:div[@class='pgdbbyauthor']/ns1:h2[ns1:a[@name][starts-with(.,'Quick')]]/
      following-sibling::ns1:ul[1]/ns1:li/ns1:a

    To set up the query, I do something like this:

      var xpathDoc = new XPathDocument(new StringReader(theText));
      var nav = xpathDoc.CreateNavigator();
      var xmlns = new XmlNamespaceManager(nav.NameTable);
      foreach (string prefix in xmlNamespaces.Keys)
          xmlns.AddNamespace(prefix, xmlNamespaces[prefix]);
      XPathNodeIterator selection = nav.Select(xpathExpression, xmlns);

    But what I want is for the xpathExpression to use the implicit default namespace. Is there a way for me to transform the unadorned XPath expression, after it's been written, to inject a namespace prefix for each element name in the query? I'm thinking, anything between two slashes, I could inject a prefix there. Excepting of course axis names like "parent::" and "preceding-sibling::". And wildcards. That's what I mean by "finagle a default namespace". Is this hack gonna work?

    Addendum: Here's what I mean. Suppose I have an XPath expression, and before passing it to nav.Select(), I transform it. Something like this:

      string FixupWithDefaultNamespace(string expr)
      {
          string s = expr;
          s = Regex.Replace(s, "^(?!::)([^/:]+)(?=/)", "ns1:$1");                      // beginning
          s = Regex.Replace(s, "/([^/:]+)(?=/)", "/ns1:$1");                           // stanza
          s = Regex.Replace(s, "::([A-Za-z][^/:*]*)(?=/)", "::ns1:$1");                // axis specifier
          s = Regex.Replace(s, "\\[([A-Za-z][^/:*\\(]*)(?=[\\[\\]])", "[ns1:$1");      // predicate
          s = Regex.Replace(s, "/([A-Za-z][^/:]*)(?!<::)$", "/ns1:$1");                // end
          s = Regex.Replace(s, "^([A-Za-z][^/:]*)$", "ns1:$1");                        // edge case
          s = Regex.Replace(s, "([-A-Za-z]+)\\(([^/:\\.,\\)]+)(?=[,\\)])", "$1(ns1:$2"); // xpath functions
          return s;
      }

    This actually works for the simple cases I tried. To use the example from above: if the input is the first XPath expression, the output I get is the second one, with all the ns1 prefixes. The real question is, is it hopeless to expect this Regex.Replace approach to work as the XPath expressions get more complicated?
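
    One prefix-free alternative worth noting (a sketch; verbose, and it matches the local name in any namespace, which may or may not be acceptable here):

      // XPath 1.0 can sidestep prefixes entirely with local-name().
      string expr = "*[local-name()='html']/*[local-name()='body']" +
                    "/*[local-name()='div'][@class='contents']";
      XPathNodeIterator selection = nav.Select(expr);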


  • JSF 2.0: Preserving component state across multiple views

    - by tlind
    The web application I am developing using MyFaces 2.0.3 / PrimeFaces 2.2RC2 is divided into a content and a navigation area. In the navigation area, which is included into multiple pages using templating (i.e. <ui:define>), there are some widgets (e.g. a navigation tree, collapsible panels etc.) whose component state I want to preserve across views.

    For example, let's say I am on the home page. When I navigate to a product details page by clicking on a product in the navigation tree, my Java code triggers a redirect using

      navigationHandler.handleNavigation(context, null, "/detailspage.jsf?faces-redirect=true");

    Another way of getting to that details page would be by directly clicking on a product teaser that is shown on the home page; the corresponding <h:link> would lead us to the details page. In both cases, the expansion state of my navigation tree (a PrimeFaces tree component) and my collapsible panels is lost. I understand this is because the redirect / h:link results in the creation of a new view.

    What is the best way of dealing with this? I am already using MyFaces Orchestra in my project along with its conversation scope, but I am not sure if this is of any help here (since I'd have to bind the expansion/collapsed state of the widgets to a backing bean... but as far as I know, this is not possible). Is there a way of telling JSF which component states to propagate to the next view, assuming that the same component exists in that view? I guess I could use a pointer in the right direction here. Thanks!

    Update 1: I just tried binding the panels and the tree to a session-scoped bean, but this seems to have no effect. Also, I guess I would have to bind all child components (if any) manually, so this doesn't seem like the way to go.

    Update 2: Binding UI components to non-request scoped beans is not a good idea (see the link I posted in a comment below). If there is no easier approach, I might have to proceed as follows (see the sketch after this list):

    1. When a panel is collapsed or the tree is expanded, save the current state in a session-scoped backing bean (!= the UI component itself).
    2. The components' states are stored in a map. The map key is the component's (hopefully) unique, relative ID. I cannot use the whole absolute component path here, since the IDs of the parent naming containers might change if the view changes, assuming these IDs are generated programmatically.
    3. As soon as a new view gets constructed, retrieve the components' states from the map and apply them to the components. For example, in case of the panels, I can set the collapsed attribute to a value retrieved from my session-scoped backing bean.
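
    A minimal sketch of such a state-holder bean (names are illustrative, not from the post):

      import java.util.HashMap;
      import java.util.Map;
      import javax.faces.bean.ManagedBean;
      import javax.faces.bean.SessionScoped;

      @ManagedBean
      @SessionScoped
      public class WidgetStateBean {

          // component ID -> collapsed/expanded flag
          private final Map<String, Boolean> collapsed = new HashMap<String, Boolean>();

          public boolean isCollapsed(String componentId) {
              Boolean state = collapsed.get(componentId);
              return state != null && state.booleanValue();
          }

          public void setCollapsed(String componentId, boolean state) {
              collapsed.put(componentId, state);
          }
      }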


  • How can I create a Base64-Encoded string from a GDI+ Image in C++?

    - by Schnapple
    I asked a question recently, How can I create an Image in GDI+ from a Base64-Encoded string in C++?, which got a response that led me to the answer. Now I need to do the opposite: I have an Image in GDI+ whose image data I need to turn into a Base64-encoded string. Due to its nature, it's not straightforward.

    The crux of the issue is that an Image in GDI+ can save out its data to either a file or an IStream*. I don't want to save to a file, so I need to use the resulting stream. Problem is, this is where my knowledge breaks down. This first part is what I figured out in the other question:

      // Initialize GDI+.
      GdiplusStartupInput gdiplusStartupInput;
      ULONG_PTR gdiplusToken;
      GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);

      // I have this decode function from elsewhere
      std::string decodedImage = base64_decode(Base64EncodedImage);

      // Allocate the space for the stream
      DWORD imageSize = decodedImage.length();
      HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, imageSize);
      LPVOID pImage = ::GlobalLock(hMem);
      memcpy(pImage, decodedImage.c_str(), imageSize);

      // Create the stream
      IStream* pStream = NULL;
      ::CreateStreamOnHGlobal(hMem, FALSE, &pStream);

      // Create the image from the stream
      Image image(pStream);

      // Cleanup
      pStream->Release();
      GlobalUnlock(hMem);
      GlobalFree(hMem);

    (Base64 code)

    And now I'm going to perform an operation on the resulting image, in this case rotating it, and I want the Base64-equivalent string when I'm done:

      // Perform operation (rotate)
      image.RotateFlip(Gdiplus::Rotate180FlipNone);

      IStream* oStream = NULL;
      CLSID tiffClsid;
      GetEncoderClsid(L"image/tiff", &tiffClsid); // Function defined elsewhere
      image.Save(oStream, &tiffClsid);

      // And here's where I'm stumped.

    (GetEncoderClsid)

    So what I wind up with at the end is an IStream* object. But here's where both my knowledge and Google break down for me. IStream shouldn't be an object itself; it's an interface for other types of streams. I'd go down the road from getting string-to-Image in reverse, but I don't know how to determine the size of the stream, which appears to be key to that route. How can I go from an IStream* to a string (which I will then Base64-encode)? Or is there a much better way to go from a GDI+ Image to a string?
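
    A sketch of one way through, assuming the stream is again backed by an HGLOBAL and reusing the same base64 helper family as the decode side (its exact signature is an assumption):

      // Create a stream that owns its memory, save into it, then read the bytes back.
      IStream* oStream = NULL;
      ::CreateStreamOnHGlobal(NULL, TRUE, &oStream); // TRUE: stream frees the HGLOBAL
      image.Save(oStream, &tiffClsid);

      // STATSTG reports the exact number of bytes written.
      STATSTG stat = {};
      oStream->Stat(&stat, STATFLAG_NONAME);
      ULONG size = (ULONG)stat.cbSize.QuadPart;

      // The backing HGLOBAL gives us the buffer itself.
      HGLOBAL hGlobal = NULL;
      ::GetHGlobalFromStream(oStream, &hGlobal);
      LPVOID pData = ::GlobalLock(hGlobal);

      std::string encoded =
          base64_encode(static_cast<const unsigned char*>(pData), size);

      ::GlobalUnlock(hGlobal);
      oStream->Release(); // also frees hGlobal, because fDeleteOnRelease was TRUE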


  • Implementing Skip List in C++

    - by trikker
    [SOLVED]

    So I decided to try and create a sorted, doubly linked skip list... I'm pretty sure I have a good grasp of how it works. When you insert x, the program searches the base list for the appropriate place to put x (since it is sorted), (conceptually) flips a coin, and if the "coin" lands on a then that element is added to the list above it (or a new list is created with the element in it), linked to the element below it, and the coin is flipped again, etc. If the "coin" lands on b at any time, then the insertion is over. You must also have a -infinite stored in every list as the starting point, so that it isn't possible to insert a value that is less than the starting point (meaning that it could never be found).

    To search for x, you start at the "top-left" (highest list, lowest value) and "move right" to the next element. If the value is less than x then you continue to the next element, etc., until you have "gone too far" and the value is greater than x. In this case you go back to the last element and move down a level, continuing this chain until you either find x or x is never found.

    To delete x, you simply search x and delete it every time it comes up in the lists.

    For now, I'm simply going to make a skip list that stores numbers. I don't think there is anything in the STL that can assist me, so I will need to create a class List that holds an integer value and has member functions search, delete, and insert. The problem I'm having is dealing with links. I'm pretty sure I could create a class to handle the "horizontal" links with a pointer to the previous element and the element in front, but I'm not sure how to deal with the "vertical" links (point to the corresponding element in the other list?). If any of my logic is flawed please tell me, but my main questions are:

    1. How to deal with vertical links, and whether my link idea is correct.
    2. Now that I read my class List idea, I'm thinking that a List should hold a vector of integers rather than a single integer. In fact I'm pretty positive, but would just like some validation.
    3. I'm assuming the coin flip would simply call an int function where rand() % 2 returns a value of 0 or 1, and if it's 0 then the value "levels up", and if it's 1 then the insert is over. Is this incorrect?
    4. How to store a value similar to -infinite?

    Edit: I've started writing some code and am considering how to handle the List constructor... I'm guessing that on its construction, the "-infinite" value should be stored in the vectorname[0] element, and I can just call insert on it after its creation to put the x in the appropriate place.
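
    A minimal sketch of a node layout that addresses the "vertical link" and "-infinite" questions under the usual design (not from the post itself): each node carries horizontal prev/next pointers plus a down pointer to the same value one level lower, and INT_MIN serves as the sentinel.

      #include <climits>

      struct SkipNode {
          int value;
          SkipNode* prev;  // horizontal links within one level
          SkipNode* next;
          SkipNode* down;  // same value, one level lower (null on the base list)
      };

      // One head sentinel per level plays the role of "-infinite".
      SkipNode* makeHead(SkipNode* levelBelow) {
          SkipNode* head = new SkipNode;
          head->value = INT_MIN;
          head->prev = head->next = 0;
          head->down = levelBelow;
          return head;
      }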


  • XY-Scatter Chart In SSRS Won't Display Points

    - by Dalin Seivewright
    I'm a bit confused with this one. I have a dataset with a BackupDate and a BackupTime, as well as a BackupType. The BackupDate consists of the 12 leftmost characters of a datetime string within a table; the BackupTime consists of the 8 rightmost characters of that same string. So, for example, BackupDate would be 'Dec 12 2008' and BackupTime would be '12:53PM'.

    I have added an XY-scatter chart to the report. I've added a 'series' value for the BackupType (so one can distinguish between Full/Incr/Log backups). I've added a category value of BackupDate and set the scale for the X-axis from the Min of BackupDate to the Max of BackupDate. I've then added an item to the Values with the Y variable set to BackupTime and the X variable set to BackupDate. The interval for the Y-axis is 12:00AM to 11:59PM and the formatting for the labels is 'hh:mmtt'. The BackupTime matches the format of the Y-axis; the BackupDate matches the format of the X-axis.

    10 entries are retrieved by my dataset, and the legend is properly populated by the BackupType field. But no points are being plotted on the graph, and no markers/pointers are shown if they are enabled. There should be a point on the graph for every point in time of each day there is a backup of a specific type.

    Am I missing something? Does anyone know of a good tutorial dealing specifically with XY-scatter graphs and using them in the way I intend? I am using the 2005 version of SSRS rather than the 2008 version.

    Screenshot of what my chart currently looks like: [screenshot not shown]

    In case it could be dataset related:

      SELECT TOP (10)
          backup_type,
          LTRIM(RTRIM(LEFT(backup_finish_date, 12)))  AS BackupDate,
          LTRIM(RTRIM(RIGHT(backup_finish_date, 8)))  AS BackupTime
      FROM DBARepository.Backup_History

    As requested, here are the results of this query (a WHERE clause constraining the results to a specific database on a specific server is not included in the query above):

      Log       Dec 26 2008  12:00PM
      Log       Dec 27 2008   4:00AM
      Log       Dec 27 2008   8:00AM
      Log       Dec 27 2008  12:00PM
      Log       Dec 27 2008   4:00PM
      Log       Dec 27 2008   8:00PM
      Database  Dec 27 2008  10:01PM
      Log       Dec 28 2008  12:00AM
      Log       Dec 28 2008   4:00AM
      Log       Dec 28 2008   8:00AM
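
    One hedged thing to check: both BackupDate and BackupTime come back as strings here, and a scatter chart generally needs real date/time values to plot against date/time axes. Converting in the query is one option (column names as in the post; T-SQL):

      SELECT TOP (10)
          backup_type,
          CONVERT(datetime, LEFT(backup_finish_date, 12))  AS BackupDate,
          CONVERT(datetime, RIGHT(backup_finish_date, 8))  AS BackupTime
      FROM DBARepository.Backup_History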


  • HLSL tex2d sampler seemingly using inconsistent rounding; why?

    - by RJFalconer
    Hello all,

    I have code that needs to render regions of my object differently depending on their location. I am trying to use a colour map to define these regions. The problem is that when I sample from my colour map, I get collisions, i.e. two regions with different colours in the colour map get the same value returned from the sampler.

    I've tried various formats for my colour map, setting the colours for each region to be "5" apart in each case:

    - Indexed colour
    - RGB, RGBA: region 1 will have RGB 5%,5%,5%; region 2 will have RGB 10%,10%,10%; and so on.
    - HSV greyscale: region 1 will have HSV 0,0,5%; region 2 will have HSV 0,0,10%; and so on.

    (Values selected in The GIMP.)

    The tex2D sampler returns a value in [0..1]. [I then intend to derive an int array index from the region. Code to do with that is unrelated, so it has been removed from the question.]

      float region = tex2D(gColourmapSampler, In.UV).x;

    Sampling the "5%" colour gave a "region" of 0.05098 in HLSL. From this I assume the 5% represents 5/100*255, or 12.75, which is rounded to 13 when stored in the texture OR when sampled by the sampler; I can't tell which. (Reasoning: 0.05098 * 255 ~= 13.)

    By this logic, the 50% should be stored as 127.5. Sampled, I get 0.50196, which implies it was stored as 128. The 70% should be stored as 178.5. Sampled, I get 0.698039, which implies it was stored as 178.

    What rounding is going on here? (127.5 becomes 128, 178.5 becomes 178?!)

    Edit: OK, http://en.wikipedia.org/wiki/Bankers_rounding#Round_half_to_even : apparently this is "banker's rounding". Is this really what HLSL samplers use?

    I am using Shader Model 2 and FX Composer. This is my sampler declaration:

      // Colour map texture
      texture gColourmapTexture <
          string ResourceName = "Globe_Colourmap_Regions_Greyscale.png";
          string ResourceType = "2D";
      >;

      sampler2D gColourmapSampler : register(s1) = sampler_state
      {
          Texture = <gColourmapTexture>;
      #if DIRECT3D_VERSION >= 0xa00
          Filter = MIN_MAG_MIP_LINEAR;
      #else /* DIRECT3D_VERSION < 0xa00 */
          MinFilter = Linear;
          MipFilter = Linear;
          MagFilter = Linear;
      #endif /* DIRECT3D_VERSION */
          AddressU = Clamp;
          AddressV = Clamp;
      };
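
    A hedged suggestion, separate from the rounding question: with Linear filtering the sampler blends neighbouring texels, so samples taken near region boundaries (or through averaged mip levels) return in-between values, which produces exactly these kinds of collisions. For an ID/region map, point sampling is the usual choice; a sketch:

      sampler2D gColourmapSampler : register(s1) = sampler_state
      {
          Texture   = <gColourmapTexture>;
          MinFilter = Point;   // no blending between texels
          MagFilter = Point;
          MipFilter = None;    // don't sample averaged mip levels
          AddressU  = Clamp;
          AddressV  = Clamp;
      };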


  • C# Property Access vs Interface Implementation

    - by ehdv
    I'm writing a class to represent a Pivot Collection, the root object recognized by Pivot. A Collection has several attributes, a list of facet categories (each represented by a FacetCategory object) and a list of items (each represented by a PivotItem object). Therefore, an extremely simplified Collection reads:

      public class Collection
      {
          private List<FacetCategory> categories;
          private List<PivotItem> items;
          // other attributes
      }

    What I'm unsure of is how to properly grant access to those two lists. Because the declaration order of both facet categories and items is visible to the user, I can't use sets, but the class also shouldn't allow duplicate categories or items. Furthermore, I'd like to make the Collection object as easy to use as possible. So my choices are:

    1. Have Collection implement IList<PivotItem> and have accessor methods for FacetCategory: In this case, one would add an item to Collection foo by writing foo.Add(bar). This works, but since a Collection is equally both kinds of list, making it only pass as a list for one type (category or item) seems like a subpar solution.

    2. Create nested wrapper classes for List (CategoryList and ItemList). This has the advantage of making a consistent interface, but the downside is that these properties would no longer be able to serve as lists (because I need to override the non-virtual Add method, I have to implement IList rather than subclass List; implicit casting wouldn't work because that would return the Add method to its normal behavior). Also, for reasons I can't figure out, IList is missing an AddRange method...

      public class Collection
      {
          private class CategoryList : IList<FacetCategory>
          {
              // ...
          }

          private readonly CategoryList categories = new CategoryList();
          private readonly ItemList items = new ItemList();

          public CategoryList FacetCategories
          {
              get { return categories; }
              set { categories.Clear(); categories.AddRange(value); }
          }

          public ItemList Items
          {
              get { return items; }
              set { items.Clear(); items.AddRange(value); }
          }
      }

    3. Finally, the third option is to combine options one and two, so that Collection implements IList<PivotItem> and has a property FacetCategories.

    Question: Which of these three is most appropriate, and why?
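
    One more possibility that may be worth weighing (a sketch, not from the post): System.Collections.ObjectModel.Collection<T> already implements IList<T>, preserves insertion order, and exposes protected virtual hooks for vetoing duplicates, which sidesteps the non-virtual Add problem:

      using System;
      using System.Collections.ObjectModel;

      public class UniqueList<T> : Collection<T>
      {
          protected override void InsertItem(int index, T item)
          {
              if (Contains(item))
                  throw new ArgumentException("Duplicate entry.");
              base.InsertItem(index, item);
          }
      }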


  • Need help converting Ruby code to php code

    - by newprog
    Yesterday I posted this question. Today I found the code I need, but it is written in Ruby. I have understood some parts of the code (I don't know Ruby), but there is one part that I can't. People who know both Ruby and PHP can help me understand this code:

      def do_create(image)
        # Clear any old info in case of a re-submit
        FIELDS_TO_CLEAR.each { |field| image.send(field+'=', nil) }
        image.save

        # Compose request
        vm_params = Hash.new

        # Submitting a file in ruby requires opening it and then reading the
        # contents into the post body
        file = File.open(image.filename_in, "rb")

        # Populate the parameters and compute the signature
        # Normally you would do this in a subroutine - for maximum clarity all
        # parameters are explicitly spelled out here.
        vm_params["image"] = file # Contents will be read by the multipart object created below
        vm_params["image_checksum"] = image.image_checksum
        vm_params["start_job"] = 'vectorize'
        vm_params["image_type"] = image.image_type if image.image_type != 'none'
        vm_params["image_complexity"] = image.image_complexity if image.image_complexity != 'none'
        vm_params["image_num_colors"] = image.image_num_colors if image.image_num_colors != ''
        vm_params["image_colors"] = image.image_colors if image.image_colors != ''
        vm_params["expire_at"] = image.expire_at if image.expire_at != ''
        vm_params["licensee_id"] = DEVELOPER_ID

        # in php it's like this? $vm_params["sequence_number"] = -rand(100000000); ?????
        # Use a negative value to force an error when calling the test server
        vm_params["sequence_number"] = Kernel.rand(1000000000)
        vm_params["timestamp"] = Time.new.utc.httpdate

        string_to_sign = CREATE_URL +             # Start out with the URL being called...
          # vm_params["image"].to_s +             # ...don't include the file per se - use the checksum instead
          vm_params["image_checksum"].to_s +      # ...then include all regular parameters
          vm_params["start_job"].to_s +
          vm_params["image_type"].to_s +
          vm_params["image_complexity"].to_s +    # (nil.to_s => '', so this is fine for vm_params we don't use)
          vm_params["image_num_colors"].to_s +
          vm_params["image_colors"].to_s +
          vm_params["expire_at"].to_s +
          vm_params["licensee_id"].to_s +         # ...then do all the security parameters
          vm_params["sequence_number"].to_s +
          vm_params["timestamp"].to_s
        vm_params["signature"] = sign(string_to_sign)   # no problem

        # Workaround class for handling multipart posts
        mp = Multipart::MultipartPost.new
        query, headers = mp.prepare_query(vm_params)    # Handles the file parameter in a special way (see /lib/multipart.rb)
        file.close                                      # mp has read the contents, we can close the file now

        response = post_form(URI.parse(CREATE_URL), query, headers)
        logger.info(response.body)
        response_hash = ActiveSupport::JSON.decode(response.body)  # Decode the JSON response string
      end

    I have understood the part below:

      def sign(string_to_sign)
        # logger.info("String to sign: '#{string_to_sign}'")
        Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign))
      end

    Within the Multipart module I have this:

      class MultipartPost
        BOUNDARY = 'tarsiers-rule0000'
        HEADER = {"Content-type" => "multipart/form-data, boundary=" + BOUNDARY + " "}

        def prepare_query(params)
          fp = []
          params.each { |k,v|
            if v.respond_to?(:read)
              fp.push(FileParam.new(k, v.path, v.read))
            else
              fp.push(Param.new(k,v))
            end
          }
          query = fp.collect { |p| "--" + BOUNDARY + "\r\n" + p.to_multipart }.join("") +
                  "--" + BOUNDARY + "--"
          return query, HEADER
        end
      end

    Thanks for your help.
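
    A hedged PHP rendering of the two pieces that usually cause trouble here (the PHP functions are standard library; constants like DEVELOPER_KEY are assumed to be defined elsewhere):

      // Ruby: Base64.encode64(HMAC::SHA1.digest(DEVELOPER_KEY, string_to_sign))
      function sign($string_to_sign) {
          return base64_encode(hash_hmac('sha1', $string_to_sign, DEVELOPER_KEY, true));
      }

      // Ruby: Kernel.rand(1000000000) and Time.new.utc.httpdate
      $vm_params["sequence_number"] = mt_rand(0, 999999999);
      $vm_params["timestamp"] = gmdate('D, d M Y H:i:s') . ' GMT';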


  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of some (desktop) application, and each version has its own database structure. There are about 200 offices and each office will have its own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest version, which will be version 8.

    The problem is that they don't have a separate database for each version, nor a separate database for each office. They have one single database which is handled by a dedicated server, thus keeping things like management and backups easier. Every office has its own database schema, and within the schema there's the whole database structure for their specific application version. As a result, I'm dealing with 200 different schemas which need to be upgraded, each with 7 possible versions. Fortunately, every schema knows the proper version, so checking the version isn't difficult.

    But my problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3 to etc... Basically, all schemas need to be bumped up one version until they're all version 8. Writing the code that will do this is no problem; the challenge is how to create the upgrade script from one version to the other, preferably with some automated tool.

    I've examined RedGate's SQL Compare and Altova's DatabaseSpy, but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure, and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction-handling, while I need everything within a single transaction.)

    I have been considering using another SQL comparison tool, but it seems to me that my case is just too different from what standard tools can deliver. So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL statements needed to upgrade these schemas to their next version. (Comparing 1 with 2, 2 with 3, 3 with 4, etc.) I'm not unfamiliar with writing SQL-script generators, so I don't expect too many problems. And I'll only be upgrading the table structures, not any of the other database objects.

    So, does anyone have some good tips and tricks to apply when doing this kind of comparison? Things to be aware of? Practical tips to increase speed?
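
    As a hedged starting point for the comparison itself: on SQL Server, each schema's table structure is also visible through INFORMATION_SCHEMA, which can be diffed in plain SQL before any tooling is involved (schema names below are hypothetical):

      SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, ORDINAL_POSITION,
             DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
      FROM INFORMATION_SCHEMA.COLUMNS
      WHERE TABLE_SCHEMA IN ('Office001', 'Office002')
      ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;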


  • How to update NSMutableDictionary. My code doesn't work.

    - by dawatson833
    I've populated an array using:

      arrSettings = [[NSMutableArray alloc] initWithContentsOfFile:[self settingsPath]];

    The file is a plist with the root as an array, and then a dictionary with three keys defined as number. I've also tried setting the keys to string. I display the values in the plist file on a view using:

      diaper = [[arrSettings objectAtIndex:0] objectForKey:@"Diaper Expenses"];
      oil    = [[arrSettings objectAtIndex:0] objectForKey:@"Oil Used"];
      tree   = [[arrSettings objectAtIndex:0] objectForKey:@"Wood Used"];

    This code works fine: the values in the dictionary are assigned to the variables and they are displayed. The user can make changes and then press a save button. I use this code to extract the dictionary part of the array so I can update it. The assignment to editDictionary works, and I've double-checked that the key names (including case) are correct.

      editDictionary = [[NSMutableDictionary alloc] init];
      editDictionary = [arrSettings objectAtIndex:0];

      NSNumber *myNumber = [NSNumber numberWithFloat:diaperAmount];
      [editDictionary setValue:myNumber forKey:@"Diaper Expenses"];

      myNumber = [NSNumber numberWithFloat:oilAmount];
      [editDictionary setValue:myNumber forKey:@"Oil Used"];

      myNumber = [NSNumber numberWithFloat:treeAmount];
      [editDictionary setValue:myNumber forKey:@"Wood Used"];

    In this example I've used an NSNumber, but I've also tried the xxxAmount field as part of setValue instead of creating an NSNumber. Neither implementation works. Several strange things happen: sometimes the first two setValue statements work, but the last setValue fails with an EXC_BAD_ACCESS failure. Other times the first setValue fails with the same error. I have no idea why the first two sometimes work.

    I'm at a loss about what to do next. I've tried several implementations and none of them work. Also, in the debugger, how can I display the editDictionary elements? I can see editDictionary, but I don't know how to display the individual elements.
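
    A hedged guess at the cause: collections loaded with initWithContentsOfFile: come back with immutable contents, so the inner dictionary is effectively an NSDictionary no matter what pointer type it is assigned to, and mutating it is invalid. Taking a mutable copy before editing avoids that (a sketch; pre-ARC memory management assumed):

      NSMutableDictionary *editDictionary =
          [[arrSettings objectAtIndex:0] mutableCopy];

      [editDictionary setValue:[NSNumber numberWithFloat:diaperAmount]
                        forKey:@"Diaper Expenses"];

      // Put the mutable copy back so the change is visible in the array.
      [arrSettings replaceObjectAtIndex:0 withObject:editDictionary];
      [editDictionary release];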


  • How do I update with a newly-created detached entity using NHibernate?

    - by Daniel T.
    Explanation: Let's say I have an object graph that's nested several levels deep and each entity has a bi-directional relationship with the next:

      A -> B -> C -> D -> E

    Or in other words, A has a collection of B and B has a reference back to A, B has a collection of C and C has a reference back to B, etc... Now let's say I want to edit some data for an instance of C. In WinForms, I would use something like this:

      var instanceOfC;

      using (var session = SessionFactory.OpenSession())
      {
          // get the instance of C with Id = 3
          instanceOfC = session.Linq<C>().Where(x => x.Id == 3);
      }

      SendToUIAndLetUserUpdateData(instanceOfC);

      using (var session = SessionFactory.OpenSession())
      {
          // re-attach the detached entity and update it
          session.Update(instanceOfC);
      }

    In plain English, we grab a persistent instance out of the database, detach it, give it to the UI layer for editing, then re-attach it and save it back to the database.

    Problem: This works fine for WinForms applications because we're using the same entity all throughout, the only difference being that it goes from persistent to detached to persistent again. The problem occurs when I'm using a web service and a browser, sending over JSON data. In this case, the data that comes back is no longer a detached entity, but rather a transient one that just happens to have the same ID as the persistent one. If I use this entity to update, it will wipe out the relationship to B and D unless I send the entire object graph over to the UI and get it back in one piece.

    Question: My question is, how do I serialize detached entities over the web, receive them back, and save them, while preserving any relationships that I didn't explicitly change? I know about ISession.SaveOrUpdateCopy and ISession.Merge() (they seem to do the same thing?), but these will still wipe out the relationships if I don't explicitly set them. I could copy the fields from the transient entity to the persistent entity one by one, but this doesn't work too well when it comes to relationships, and I'd have to handle version comparisons manually.


  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi,

    I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use the glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom.

    My problem is: this setup eats 25% CPU time on a 3 GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%.

    Here are some code excerpts (sorry for the printf's ;) ):

      /* bind */
      void UDPInterface::bindToPort(unsigned short port)
      {
          struct sockaddr_in target;
          WSADATA wsaData;

          target.sin_family = AF_INET;
          target.sin_port = htons(port);
          target.sin_addr.s_addr = 0;

          if (WSAStartup(0x0202, &wsaData))
          {
              printf("WSAStartup failed!\n");
              exit(0); // :)
              WSACleanup();
          }

          sock = socket(AF_INET, SOCK_DGRAM, 0);
          if (sock == INVALID_SOCKET)
          {
              printf("invalid socket!\n");
              exit(0);
          }

          if (bind(sock, (struct sockaddr*)&target, sizeof(struct sockaddr_in)) == SOCKET_ERROR)
          {
              printf("failed to bind to port!\n");
              exit(0);
          }

          printf("[UDPInterface::bindToPort] listening on port %i\n", port);
      }

      /* read */
      bool UDPInterface::UDPEvent(Glib::IOCondition io_condition)
      {
          recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL);
          /* process packet... */
      }

      /* glibmm connect */
      Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock);
      Glib::signal_io().connect(sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN);

    I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode, and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU effect? It's not clear to me. Whether or not I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's OK to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()).

    I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm DLL.


  • A script that compiles to JavaScript with an automatic callback-hell avoider :-)

    - by m1uan
    I'm looking for a "translator" to JavaScript, like CoffeeScript already is, where a forEach (inspired by Groovy) could be written as:

      myArray.forEach() -> val, idx {
          // do something with value and idx
      }

    and translated to JavaScript:

      myArray.forEach(function(val, idx) {
          // do something with value and idx
      });

    or something more useful:

      function event(cb) {
          foo() -> err, data1;
          bar(data1) -> err, data2;
          cb(data2);
      }

    where the methods are encapsulated:

      function event(cb) {
          foo(function(err, data1) {
              bar(data1, function(err, data2) {
                  cb(data2);
              });
          });
      }

    I want to ask whether a similar "compiler" to JavaScript like this (or even better) already exists. What would be super cool... my code in Node.js mostly looks like this :-)

      function dealer(cb) {
          async.parallel([
              function(pcb) {
                  async.waterfall([
                      function(wcb) {
                          first(function(a) { wcb(a); });
                      },
                      function(a, wcb) {
                          third(a, function(c) { wcb(c); });
                          fourth(a, function(d) {
                              // dealing with "a" as well, and nobody cares about my result
                          });
                      }
                  ], function(err, array_with_ac) {
                      pcb(array_with_ac);
                  });
              },
              function(pcb) {
                  second(function(b) { pcb(b); });
              }
          ], function(err, data) {
              cb(data[0][0] + data[1] + data[0][1]); // dealing with "a", "b" and "c", not with "d"
          });
      }

    but look how beautiful and readable the code could be:

      function dealer(cb) {
          first() -> a;
          second() -> b;
          third(a) -> c;   // dealing with "a"
          fourth(a) -> d;  // dealing with "a" as well, and nobody cares about my result
          cb(a+b+c);       // dealing with "a", "b" and "c", not with "d"
      }

    Yes, this is the ideal case, where the translator auto-decides which methods can run in parallel and which need to be called after another method finishes. That would be super cool... Please, do you know about something similar? Thank you for any advice ;-)


  • linq to sql with nservicebus table lock issue

    - by IGoor
    I am building a system using NServiceBus, and my data layer is using LINQ to SQL. The system is made up of 2 services:

    1. Service1 receives messages from NSB. It queries Table1 in my database and inserts a record into Table1. If a certain condition is met, a new NSB message is sent to the 2nd service.
    2. Service2 updates records in Table1 when it receives messages from Service1, and it also does some other non-database-related work. Service2 is a long-running process.

    The problem I am having is that the moment Service2 updates a record in Table1, the table is locked. The lock seems to be in place until Service2 has completed all its processing, i.e. the lock is not released after my DataContext is disposed. This causes the query in Service1 to time out. Once Service2 completes processing, Service1 resumes processing again without a problem.

    So, for example, Service1's code may look like this:

      int x = 0;
      using (DataContext db = new DataContext())
      {
          // this line will time out while Service2 is processing
          x = (from dp in db.Table1 select dp).Count();

          Table1 t = new Table1();
          t.Data = "test";
          db.Table1.InsertOnSubmit(t);
          db.SubmitChanges();
      }
      if (x % 50 == 0)
          CallService2();

    The code in Service2 may look like this:

      using (DataContext db = new DataContext())
      {
          Table1 t = db.Table1.Single(x => x.id == myId);
          t.Data = "updated";
          db.SubmitChanges();
      }
      // I would have expected the lock to have been released at this point,
      // but this is not the case.

      DoSomeLongRunningTasks(); // lock will be released once Service2 exits

    I don't understand why the lock is not released when the DataContext is disposed in Service2. To get around the problem I have been calling:

      db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");

    and this works, but I am not happy using it. I want to solve this problem properly. Has anyone experienced this sort of problem before, and does anyone know how to solve it? Why is the lock not released after the DataContext has been disposed?

    Thanks in advance. P.S. Sorry for the extremely long post.
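
    A hedged explanation of the symptom: NServiceBus runs each message handler inside an ambient (distributed) transaction, so the DataContext enlists in that transaction and the update locks are held until the whole handler completes, not until the DataContext is disposed. If the long-running work really must not hold locks, one sketch is to fence the update off from the ambient transaction (with the caveat that it then won't roll back with the message):

      using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
      using (var db = new DataContext())
      {
          Table1 t = db.Table1.Single(x => x.id == myId);
          t.Data = "updated";
          db.SubmitChanges();
          scope.Complete();
      }
      // locks are released here, before DoSomeLongRunningTasks() runs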


  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail:

      The most common memory leaks for web applications involve circular references
      between the JavaScript script engine and the browsers' C++ objects' implementing
      the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM
      infrastructure, or between the JavaScript engine and Firefox XPCOM infrastructure).

    It lists two examples of circular reference patterns:

    1. DOM element → event handler → closure scope → DOM
    2. DOM element → via expando → intermediary object → DOM element

    However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it:

    1. The DOM element references the event handler.
    2. The event handler references the DOM (either directly or indirectly).

    In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue.

    Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem? Things I'd like clarified:

    1. What browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected.

    2. Are expando properties and event handlers somehow different concerning memory leaks? Or are both of these code snippets susceptible to the same kind of memory leak?

      // Create an expando that references its own element.
      var elem = document.getElementById('foo');
      elem.myself = elem;

      // Create an event handler that references its own element.
      var elem = document.getElementById('foo');
      elem.onclick = function() {
          elem.style.display = 'none';
      };

    3. If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?
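
    For reference, the classic mitigation sketch for the leaky browsers is to break the cycle by hand when the page goes away:

      // Null out both edges of the cycle on unload so neither object
      // keeps the other alive in engines with naive cycle handling.
      window.onunload = function () {
          var elem = document.getElementById('foo');
          if (elem) {
              elem.onclick = null; // handler -> closure -> DOM edge
              elem.myself = null;  // expando -> DOM edge
          }
      };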


  • for loop with count from array, limit output? PHP

    - by Philip
      print '<div id="wrap">';
      print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">";
      for ($i = 0; $i < count($news_comments); $i++) {
          print '
          <tr>
              <td width="30%"><strong>' . $news_comments[$i]['comment_by'] . '</strong></td>
              <td width="70%">' . $news_comments[$i]['comment_date'] . '</td>
          </tr>
          <tr>
              <td></td>
              <td>' . $news_comments[$i]['comment'] . '</td>
          </tr>
          ';
      }
      print '</table></div>';

    $news_comments is a multi-dimensional array of mysqli_fetch_assoc rows returned from a function elsewhere. For some reason my for loop returns the total of the array sets, such as [0][2] etc., up to the max amount from the counted $news_comments var, which is a return function with LIMIT 10. My problem is that if I add any text/html/icons inside the for loop, it prints them (in this case 11 times), even though only array sets 1 and 2 have data inside them. How do I get around this? My function query is as follows:

      function news_comments()
      {
          require_once '../data/queries.php';

          // get newsID from the url
          $urlID = $_GET['news_id'];

          // run our query for newsID information; selectQuery requires 6 params
          $news_comments = selectQuery('*', 'news_comments', 'WHERE news_id='.$urlID.'', 'ORDER BY comment_date', 'DESC', '10');

          // check query for results
          if (!$news_comments) {
              // loop error session and initiate var
              foreach ($_SESSION['errors'] as $error => $err) {
                  print htmlentities($err) . 'for News Comments, be the first to leave a comment!';
              }
          } else {
              print '<div id="wrap">';
              print "<table width=\"100%\" border=\"0\" align=\"center\" cellpadding=\"3\" cellspacing=\"3\">";
              for ($i = 0; $i < count($news_comments); $i++) {
                  print '
                  <tr>
                      <td width="30%"><strong>' . $news_comments[$i]['comment_by'] . '</strong></td>
                      <td width="70%">' . $news_comments[$i]['comment_date'] . '</td>
                  </tr>
                  <tr>
                      <td></td>
                      <td>' . $news_comments[$i]['comment'] . '</td>
                  </tr>
                  ';
              }
              print '</table></div>';
          }
      } // End function

    Any help is greatly appreciated.
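
    A hedged sketch of one way around the count mismatch: iterate over the rows themselves and skip empty entries, instead of trusting count():

      foreach ($news_comments as $comment) {
          if (empty($comment['comment'])) {
              continue; // skip placeholder/empty rows
          }
          print '<tr>
              <td width="30%"><strong>' . $comment['comment_by'] . '</strong></td>
              <td width="70%">' . $comment['comment_date'] . '</td>
          </tr>
          <tr>
              <td></td>
              <td>' . $comment['comment'] . '</td>
          </tr>';
      }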


  • JPA entity design / cannot delete entity

    - by timaschew
    I thought what I want would be simple, but I cannot find any solution for my problem. I'm using the Play framework 1.2.3, and it's using Hibernate as the JPA provider, so I think Play has nothing to do with the problem.

    I have some classes (I omit the non-relevant fields):

      public class User { ... }

      public class Task {
          public DataContainer dataContainer;
      }

      public class DataContainer {
          public Session session;
          public User user;
      }

      public class Session { ... }

    So I have an association from Task to DataContainer and from DataContainer to Session, and the DataContainer belongs to a User. The DataContainers can always have the same User, but the Sessions have to be different for each instance, and the DataContainer of a Task also has to be different in each instance. A DataContainer can have a Session or not (it's optional). I use only unidirectional associations; that should be sufficient.

    In other words: every Task must have one DataContainer, every DataContainer must have one (possibly shared) User, and it can have one Session.

    To create a DB schema I use JPA annotations:

      @Entity
      public class User extends Model { ... }

      @Entity
      public class Task extends Model {
          @OneToOne(optional = false, cascade = CascadeType.ALL)
          public DataContainer dataContainer;
      }

      @Entity
      public class DataContainer extends Model {
          @OneToOne(optional = true, cascade = CascadeType.ALL)
          public Session session;

          @ManyToOne(optional = false, cascade = CascadeType.ALL)
          public User user;
      }

      @Entity
      public class Session extends Model { ... }

    BTW: Model is a Play class and provides the primary id as a long type.

    When I create an object for each entity and 'connect' them (I mean the associations), it works fine. But when I try to delete a Session, I get a constraint violation exception, because a DataContainer still refers to the Session I want to delete. I want the Session field of the DataContainer to be set to null, i.e. the foreign key (session_id) to be unset in the database. That would be okay, because it's optional.

    I don't know... I think I have multiple problems. Am I using the right annotation, @OneToOne? I found some additional annotations and attributes on the internet: @JoinColumn, and a mappedBy attribute for the inverse relationship. But I don't have them, because it's not bidirectional. Or is a bidirectional association essential?

    Another try was to use @OnDelete(action = OnDeleteAction.CASCADE); then the constraint changed from NO ACTION on update or delete to:

      ADD CONSTRAINT fk4745c17e6a46a56 FOREIGN KEY (session_id)
      REFERENCES annotation_session (id) MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE CASCADE;

    But in this case, when I delete a Session, the DataContainer and User are deleted too. That's wrong for me.

    EDIT: I'm using PostgreSQL 9, the JDBC stuff is included in Play, and my only DB config is db=postgres://app:app@localhost:5432/app
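
    A hedged sketch of one way to get the "null out the reference" behaviour: don't cascade from DataContainer to Session at all, and detach the reference before deleting (Play 1.x Model.save()/delete() assumed):

      @Entity
      public class DataContainer extends Model {
          @OneToOne(optional = true)  // no CascadeType.ALL: deletes shouldn't propagate
          public Session session;

          @ManyToOne(optional = false)
          public User user;
      }

      // Before deleting a session, unlink it from its container:
      dataContainer.session = null;
      dataContainer.save();
      session.delete();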


  • Nested Transaction issues within custom Windows Service

    - by pdwetz
    I have a custom Windows service I recently upgraded to use TransactionScope for nested transactions. It worked fine locally on my old dev machine (XP SP3) and on a test server (Server 2003). However, it fails on my new Windows 7 machine as well as on Server 2008. It was targeting the 2.0 framework; I tried targeting 3.5 instead, but it still fails.

    The strange part is really in how it fails: no exception is thrown; the service itself merely times out. I added tracing code, and it fails when opening the connection for database lookup #2 below. I also enabled tracing for System.Transactions; it literally cuts out partway while writing the block for the failed connection. We ran a SQL trace, but only the first lookup shows up. I put in code traces, and it gets to the trace on the line before the second lookup, but nothing after.

    I've had the same experience hitting two different SQL Servers (both are SQL 2005 running on Server 2003). The connection string uses a SQL account (not Windows integration). All connections are against the same database in this case, but given the nature of the code, it is being escalated to MSDTC. Here's the basic code structure:

      TransactionOptions options = new TransactionOptions();
      options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
      using (TransactionScope scope = new TransactionScope(TransactionScopeOption.RequiresNew, options))
      {
          // Database lookup #1

          TransactionOptions innerOptions = new TransactionOptions();
          innerOptions.IsolationLevel = Transaction.Current != null
              ? Transaction.Current.IsolationLevel
              : System.Transactions.IsolationLevel.ReadCommitted;
          using (TransactionScope innerScope = new TransactionScope(TransactionScopeOption.Required, innerOptions))
          {
              // Database lookup #2; fails on connection.Open()
              // Database save (never reached)
              innerScope.Complete();
          }
          scope.Complete();
      }

    My local firewall is disabled. The service normally runs as Network Service, but I also tried my user account (same results). The short of it is that I use the same general technique widely in my web applications and haven't had any issues. I pulled out the code and ran it fine within a local Windows Forms application. If anyone has any additional debugging ideas (or, even better, solutions), I'd love to hear them.


  • Android Bluetooth Fails to Pair

    - by CaseyB
    I am having a problem getting my devices to pair in Android. If I pair them manually in the settings, I can get them to connect using the following code.

    Server:

      // Make sure the device is discoverable
      mServerSocket = mAdapter.listenUsingRfcommWithServiceRecord("Moo Productions Bluetooth Server", mUUID);
      mState = State.ACCEPTING;
      BluetoothSocket socket = mServerSocket.accept();
      mServerSocket.close();
      connected(socket);

    Client:

      Set<BluetoothDevice> pairedDevices = mAdapter.getBondedDevices();
      BluetoothSocket socket = null;

      // Search the list of paired devices for the right one
      for (BluetoothDevice device : pairedDevices) {
          try {
              mState = State.SEARCHING;
              socket = device.createRfcommSocketToServiceRecord(mUUID);
              mState = State.CONNECTING;
              socket.connect();
              connected(socket);
              break;
          } catch (IOException e) {
              socket = null;
              continue;
          }
      }

    But if the devices haven't already been paired, the loop exits without connecting to a valid socket. In that case I start discovering:

      // If that didn't work, discover
      if (socket == null) {
          mState = State.SEARCHING;
          mReceiver = new SocketReceiver();
          mContext.registerReceiver(mReceiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));
          mAdapter.startDiscovery();
      }

      // ... Later ...
      private class SocketReceiver extends BroadcastReceiver {
          @Override
          public void onReceive(Context context, Intent intent) {
              if (BluetoothDevice.ACTION_FOUND.equals(intent.getAction())) {
                  try {
                      // Get the device and try to open a socket
                      BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
                      BluetoothSocket socket = device.createRfcommSocketToServiceRecord(mUUID);
                      mState = State.CONNECTING;
                      socket.connect();

                      // This is our boy, so stop looking
                      mAdapter.cancelDiscovery();
                      mContext.unregisterReceiver(mReceiver);
                      connected(socket);
                  } catch (IOException ioe) {
                      ioe.printStackTrace();
                  }
              }
          }
      }

    But it will never find the other device. I never get a pairing dialog, and when I step through I see that it discovers the correct device but fails to connect, with this exception:

      java.io.IOException: Service discovery failed.

    Any ideas as to what I'm missing?
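
    One hedged thing to try: the Android docs require cancelling discovery before calling connect(), because an active scan starves the RFCOMM connection attempt, and in the receiver above the connect happens while discovery is still running. A sketch of the reordered receiver body:

      // Stop scanning *before* connecting, not after.
      BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
      mAdapter.cancelDiscovery();
      BluetoothSocket socket = device.createRfcommSocketToServiceRecord(mUUID);
      mState = State.CONNECTING;
      socket.connect(); // should now be able to trigger pairing if needed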


  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello,

    I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a WHERE clause that returns only the rows within a date range. I'm then referencing this CTE in 4 self left-joins, in order to build subtotals using different criteria.

    The query is quite complex, but here is a simplified pseudo-version of it:

      WITH DataCTE AS
      (
          SELECT [columns]
          FROM table
          INNER JOIN table2 ON [...]
          INNER JOIN table3 ON [...]
          LEFT JOIN table3 ON [...]
      )
      SELECT [aggregates_columns of each subset]
      FROM DataCTE Main
      LEFT JOIN DataCTE BananasSubset
          ON [...]
          AND Product = 'Bananas'
          AND Quality = 100
      LEFT JOIN DataCTE DamagedBananasSubset
          ON [...]
          AND Product = 'Bananas'
          AND Quality < 20
      LEFT JOIN DataCTE MangosSubset
          ON [...]
      GROUP BY [...]

    I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess to not being an expert at reading those. I would have assumed SQL Server to be smart enough to only perform the data retrieval from the CTE once, rather than do it several times.

    I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same SELECT query as in the CTE, but made it output to a temp table instead. The version referencing the CTE takes 40 seconds; the version referencing the temp table takes between 1 and 2 seconds.

    Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case, as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution.

    Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted?

    Thanks,
    Kharlos
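
    For what it's worth, a hedged sketch of the multi-statement shape (table and column names are illustrative): inside a function, a table variable can stand in for the temp table, so the subset is materialised once and then self-joined as often as needed:

      CREATE FUNCTION dbo.GetFruitTotals (@from datetime, @to datetime)
      RETURNS @result TABLE (Product varchar(50), Total int)
      AS
      BEGIN
          -- Materialise the CTE's rows exactly once.
          DECLARE @data TABLE (Product varchar(50), Quality int, Qty int);

          INSERT INTO @data (Product, Quality, Qty)
          SELECT Product, Quality, Qty
          FROM dbo.BigTable
          WHERE SaleDate BETWEEN @from AND @to;

          -- Now join/aggregate against @data as often as needed.
          INSERT INTO @result (Product, Total)
          SELECT d.Product, SUM(d.Qty)
          FROM @data d
          GROUP BY d.Product;

          RETURN;
      END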


  • Reading column header and column values of a data table using LAMBDA(C#3.0)

    - by Newbie
    Consider the following, where I am reading the data table values and writing them to a text file:

      using (StreamWriter sw = new StreamWriter(@"C:\testwrite.txt", true))
      {
          DataPreparation().AsEnumerable().ToList().ForEach(i =>
          {
              string col1 = i[0].ToString();
              string col2 = i[1].ToString();
              string col3 = i[2].ToString();
              string col4 = i[3].ToString();
              sw.WriteLine(col1 + "\t" + col2 + "\t" + col3 + "\t" + col4 + Environment.NewLine);
          });
      }

    The data preparation function is as under:

      private static DataTable DataPreparation()
      {
          DataTable dt = new DataTable();
          dt.Columns.Add("Col1", typeof(string));
          dt.Columns.Add("Col2", typeof(int));
          dt.Columns.Add("Col3", typeof(DateTime));
          dt.Columns.Add("Col4", typeof(bool));

          for (int i = 0; i < 10; i++)
          {
              dt.Rows.Add("String" + i.ToString(), i, DateTime.Now.Date, (i % 2 == 0) ? true : false);
          }
          return dt;
      }

    It is working fine. In the program described above, the number of columns and the column headers are known to me. How do I achieve the same when the column headers and the number of columns are not known at compile time, using a lambda expression?

    I have already done it without lambdas, as under:

      public static void WriteToTxt(string directory, string FileName, DataTable outData, string delimiter)
      {
          FileStream fs = null;
          StreamWriter streamWriter = null;

          using (fs = new FileStream(directory + "\\" + FileName + ".txt", FileMode.Append, FileAccess.Write))
          {
              try
              {
                  streamWriter = new StreamWriter(fs);
                  streamWriter.BaseStream.Seek(0, SeekOrigin.End);
                  streamWriter.WriteLine();

                  DataTableReader datatableReader = outData.CreateDataReader();
                  for (int header = 0; header < datatableReader.FieldCount; header++)
                  {
                      streamWriter.Write(outData.Columns[header].ToString() + delimiter);
                  }
                  streamWriter.WriteLine();

                  int row = 0;
                  while (datatableReader.Read())
                  {
                      for (int field = 0; field < datatableReader.FieldCount; field++)
                      {
                          streamWriter.Write(outData.Rows[row][field].ToString() + delimiter);
                      }
                      streamWriter.WriteLine();
                      row++;
                  }
              }
              catch (Exception ex)
              {
                  throw ex;
              }
          }
      }

    I am using C# 3.0 and framework 3.5. Thanks in advance.
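
    A hedged lambda-based sketch of the same column-agnostic dump (C# 3.0 / .NET 3.5; AsEnumerable needs a reference to System.Data.DataSetExtensions, plus usings for System.Data and System.Linq):

      using (StreamWriter sw = new StreamWriter(@"C:\testwrite.txt", true))
      {
          DataTable dt = DataPreparation();

          // Header row: enumerate whatever columns exist at run time.
          sw.WriteLine(string.Join("\t",
              dt.Columns.Cast<DataColumn>().Select(c => c.ColumnName).ToArray()));

          // Data rows: ItemArray exposes every field without naming it.
          dt.AsEnumerable().ToList().ForEach(row =>
              sw.WriteLine(string.Join("\t",
                  row.ItemArray.Select(f => f.ToString()).ToArray())));
      }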


  • Removing elements from heap

    - by user193138
    I made a heap. I am curious whether there's something subtly wrong with my remove function:

      int Heap::remove() {
          if (n == 0)
              exit(1);
          int temp = arr[0];
          arr[0] = arr[--n];
          heapDown(0);
          arr[n] = 0;
          return temp;
      }

      void Heap::heapDown(int i) {
          int l = left(i);
          int r = right(i);

          // comparing parent to left/right child
          // each has an inner if to handle if the first swap causes a second swap
          // ie   1        3        5
          //     3 5      1 5      1 3
          if (l < n && arr[i] < arr[l]) {
              swap(arr[i], arr[l]);
              heapDown(l);
              if (r < n && arr[i] < arr[r]) {
                  swap(arr[i], arr[r]);
                  heapDown(r);
              }
          }
          else if (r < n && arr[i] < arr[r]) {
              swap(arr[i], arr[r]);
              heapDown(r);
              if (l < n && arr[i] < arr[l]) {
                  swap(arr[i], arr[l]);
                  heapDown(l);
              }
          }
      }

    Here's my output:

      p    Active heap: 7 4 6 1 3 2 5
      r    Removed 7
      r    Removed 6
      p    Active heap: 5 3 4 1 2

    Here's my teacher's sample output:

      p    Active heap : 7 4 6 1 3 2 5
      r    Removed 7
      r    Removed 6
      p    Active heap : 5 4 2 1 3
      s    Heapsorted : 1 2 3 4 5

    While our outputs are completely different, I do seem to hold the max-heap principle of having everything left-oriented and, for all nodes, parent >= child (in every case I tried). I try to do algorithms like this from scratch, so maybe I'm just doing something really weird and wrong (I would only consider it "wrong" if it's not O(lg n), as removes are intended to be for heaps). Is there anything in particular "wrong" with my remove?

    Thanks, http://ideone.com/PPh4eQ
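
    For comparison, a sketch of the textbook sift-down in the same style as the code above: pick the larger of the two children, swap once, and recurse down a single path, which keeps the walk strictly O(lg n) and avoids the double descent in the nested ifs:

      void Heap::heapDown(int i) {
          int l = left(i);
          int r = right(i);
          int largest = i;

          if (l < n && arr[largest] < arr[l]) largest = l;
          if (r < n && arr[largest] < arr[r]) largest = r;

          if (largest != i) {
              swap(arr[i], arr[largest]);
              heapDown(largest);  // only one subtree is ever followed
          }
      }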


  • How do you create a MANIFEST.MF that's available when you're testing and running from a jar in production?

    - by warvair
    I've spent far too much time trying to figure this out. This should be the simplest thing, and everyone who distributes Java applications in jars must have to deal with it. I just want to know the proper way to add versioning to my Java app so that I can access the version information when I'm testing, e.g. debugging in Eclipse, and when running from a jar.

    Here's what I have in my build.xml:

      <target name="jar" depends="compile">
          <property name="version.num" value="1.0.0"/>
          <buildnumber file="build.num"/>
          <tstamp>
              <format property="TODAY" pattern="yyyy-MM-dd HH:mm:ss" />
          </tstamp>
          <manifest file="${build}/META-INF/MANIFEST.MF">
              <attribute name="Built-By" value="${user.name}" />
              <attribute name="Built-Date" value="${TODAY}" />
              <attribute name="Implementation-Title" value="MyApp" />
              <attribute name="Implementation-Vendor" value="MyCompany" />
              <attribute name="Implementation-Version" value="${version.num}-b${build.number}"/>
          </manifest>
          <jar destfile="${build}/myapp.jar" basedir="${build}" excludes="*.jar" />
      </target>

    This creates /META-INF/MANIFEST.MF, and I can read the values when I'm debugging in Eclipse thusly:

      public MyClass() {
          try {
              InputStream stream = getClass().getResourceAsStream("/META-INF/MANIFEST.MF");
              Manifest manifest = new Manifest(stream);
              Attributes attributes = manifest.getMainAttributes();
              String implementationTitle = attributes.getValue("Implementation-Title");
              String implementationVersion = attributes.getValue("Implementation-Version");
              String builtDate = attributes.getValue("Built-Date");
              String builtBy = attributes.getValue("Built-By");
          } catch (IOException e) {
              logger.error("Couldn't read manifest.");
          }
      }

    But when I create the jar file, it loads the manifest of another jar (presumably the first jar loaded by the application; in my case, activation.jar). The following code doesn't work either, although all the proper values are in the manifest file:

      Package thisPackage = getClass().getPackage();
      String implementationVersion = thisPackage.getImplementationVersion();

    Any ideas?
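
    A hedged sketch of one common workaround: enumerate every MANIFEST.MF on the classpath and keep the one whose Implementation-Title matches, so another jar's manifest can no longer shadow yours (imports for java.util.Enumeration, java.net.URL and java.util.jar.* plus IOException handling omitted for brevity):

      Enumeration<URL> resources =
          getClass().getClassLoader().getResources("META-INF/MANIFEST.MF");
      while (resources.hasMoreElements()) {
          Manifest manifest = new Manifest(resources.nextElement().openStream());
          Attributes attributes = manifest.getMainAttributes();
          if ("MyApp".equals(attributes.getValue("Implementation-Title"))) {
              String version = attributes.getValue("Implementation-Version");
              // this is our own manifest
              break;
          }
      }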

