Search Results

Search found 92 results on 4 pages for 'heuristics'.

  • Tokenizer for full-text

    - by user72185
    This should be an ideal case of not re-inventing the wheel, but so far my search has been in vain. Instead of writing one myself, I would like to use an existing C++ tokenizer. The tokens are to be used in an index for full-text searching. Performance is very important, as I will parse many gigabytes of text. Edit: Please note that the tokens are to be used in a search index. Creating such tokens is not an exact science (afaik) and requires some heuristics. This has been done a thousand times before, and probably in a thousand different ways, but I can't even find one of them :) Any good pointers? Thanks!
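
    For illustration only, a minimal sketch of the kind of baseline heuristic tokenizer being described: lowercase and split on non-alphanumeric characters. Real search-grade tokenizers layer Unicode segmentation, stemming and stop-word handling on top of something like this; the function name is illustrative.

        #include <cctype>
        #include <string>
        #include <vector>

        // Baseline tokenizer: lowercase everything and split on any character
        // that is not alphanumeric. Full-text tokenizers add Unicode
        // segmentation, stemming and stop-word heuristics on top of this.
        std::vector<std::string> tokenize(const std::string& text) {
            std::vector<std::string> tokens;
            std::string current;
            for (unsigned char c : text) {
                if (std::isalnum(c)) {
                    current += static_cast<char>(std::tolower(c));
                } else if (!current.empty()) {
                    tokens.push_back(current);
                    current.clear();
                }
            }
            if (!current.empty()) tokens.push_back(current);
            return tokens;
        }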

    Read the article

  • Beginner's resources/introductions to classification algorithms.

    - by Dirk
    Hi, everybody. I am entirely new to the topic of classification algorithms, and need a few good pointers about where to start some "serious reading". I am right now in the process of finding out whether machine learning and automated classification algorithms could be a worthwhile thing to add to some application of mine. I already scanned through "How to Solve It: Modern Heuristics" by Z. Michalewicz and D. Fogel (in particular, the chapters about linear classifiers using neural networks), and on the practical side, I am currently looking through the WEKA toolkit source code. My next (planned) step would be to dive into the realm of Bayesian classification algorithms. Unfortunately, I am lacking a serious theoretical foundation in this area (let alone having used it in any way as of yet), so any hints at where to look next would be appreciated; in particular, a good introduction to the available classification algorithms would be helpful. Being more a craftsman and less a theoretician, the more practical, the better... Hints, anyone?

    Read the article

  • Genetic programming in C++, library suggestions?

    - by shuttle87
    I'm looking to add some genetic algorithms to an operations research project I have been involved in. Currently we have a program that aids in optimizing some scheduling, and we want to add in some heuristics in the form of genetic algorithms. Are there any good libraries for generic genetic programming/algorithms in C++? Or would you recommend I just code my own? I should add that while I am not new to C++, I am fairly new to doing this sort of mathematical optimization work in C++, as the group I worked with previously tended to use a proprietary optimization package. We have a fitness function that is fairly computationally intensive to evaluate, and we have a cluster to run this on, so parallelized code is highly desirable. So is C++ a good language for this? If not, please recommend some other ones, as I am willing to learn another language if it makes life easier. Thanks!
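
    For orientation, a bare-bones sketch of the generational loop that most GA libraries implement; the bit-string encoding, operators and fitness function here are placeholders, and in the scheduling problem the expensive fitness evaluations are what a cluster-aware library would parallelize.

        #include <algorithm>
        #include <random>
        #include <vector>

        // Bare-bones generational GA over bit-string genomes. The fitness
        // function (count of 1-bits) is a placeholder; in the scheduling
        // problem it would be the expensive evaluation farmed out to a cluster.
        using Genome = std::vector<int>;

        double fitness(const Genome& g) {
            return static_cast<double>(std::count(g.begin(), g.end(), 1));
        }

        int main() {
            std::mt19937 rng(42);
            std::uniform_int_distribution<int> bit(0, 1);
            std::uniform_real_distribution<double> unit(0.0, 1.0);
            const int popSize = 50, genomeLen = 64, generations = 100;

            std::vector<Genome> pop(popSize, Genome(genomeLen));
            for (auto& g : pop)
                for (auto& b : g) b = bit(rng);

            std::uniform_int_distribution<int> pick(0, popSize - 1);
            std::uniform_int_distribution<int> locus(0, genomeLen - 1);
            for (int gen = 0; gen < generations; ++gen) {
                std::vector<Genome> next;
                while (static_cast<int>(next.size()) < popSize) {
                    // Tournament selection of two parents.
                    auto tourney = [&]() -> const Genome& {
                        const Genome& a = pop[pick(rng)];
                        const Genome& b = pop[pick(rng)];
                        return fitness(a) >= fitness(b) ? a : b;
                    };
                    const Genome& p1 = tourney();
                    const Genome& p2 = tourney();
                    // One-point crossover plus a low mutation rate.
                    Genome child(genomeLen);
                    int cut = locus(rng);
                    for (int i = 0; i < genomeLen; ++i)
                        child[i] = (i < cut ? p1[i] : p2[i]);
                    for (auto& b : child)
                        if (unit(rng) < 0.01) b ^= 1;
                    next.push_back(std::move(child));
                }
                pop = std::move(next);
            }
        }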

    Read the article

  • 0/1 Knapsack with irrational weights

    - by user356106
    Consider the 0/1 knapsack problem. The standard dynamic programming algorithm applies only when the capacity, as well as the weights to fill the knapsack with, are integers/rational numbers. What do you do when the capacity/weights are irrational? The issue is that we can't memoize like we do for integer weights, because we may need potentially infinite decimal places for irrational weights, leading to an infinitely large number of columns for the dynamic programming table. Is there any standard method for solving this? Any comments on the complexity of this problem? Any heuristics? What about associated recurrences like (for example): f(x) = 1 for x < sqrt(2), f(x) = f(x - sqrt(2)) + sqrt(3) otherwise?
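
    As one crude heuristic (not an exact method), a sketch of the classic greedy-by-density approach, which copes with arbitrary real weights because nothing is ever tabulated by capacity; the item data below is made up for illustration.

        #include <algorithm>
        #include <cmath>
        #include <iostream>
        #include <vector>

        // Greedy 0/1 knapsack heuristic: sort by value/weight density and take
        // items while they fit. Works with irrational weights because nothing
        // is discretized, but it is only an approximation, not the optimum.
        struct Item { double value, weight; };

        double greedyKnapsack(std::vector<Item> items, double capacity) {
            std::sort(items.begin(), items.end(),
                      [](const Item& a, const Item& b) {
                          return a.value / a.weight > b.value / b.weight;
                      });
            double total = 0.0, remaining = capacity;
            for (const Item& it : items) {
                if (it.weight <= remaining) {
                    total += it.value;
                    remaining -= it.weight;
                }
            }
            return total;
        }

        int main() {
            // Made-up items with irrational weights.
            std::vector<Item> items = {
                {10.0, std::sqrt(2.0)}, {7.0, std::sqrt(3.0)}, {3.0, std::sqrt(5.0)}
            };
            std::cout << greedyKnapsack(items, 3.0) << '\n';
        }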

    Read the article

  • Forcing a mixed ISO-8859-1 and UTF-8 multi-line string into UTF-8 in Perl

    - by knorv
    Consider the following problem: A multi-line string $junk contains some lines which are encoded in UTF-8 and some in ISO-8859-1. I don't know a priori which lines are in which encoding, so heuristics will be needed. I want to turn $junk into pure UTF-8 with proper re-encoding of the ISO-8859-1 lines. Also, in the event of errors in the processing I want to provide a "best effort result" rather than throwing an error. My current attempt looks like this: $junk = &force_utf8($junk); sub force_utf8 { my $input = shift; my $output = ''; foreach my $line (split(/\n/, $input)) { if (utf8::valid($line)) { utf8::decode($line); } $output .= "$line\n"; } return $output; } While this appears to work I'm certain this is not the optimal solution. How would you improve the force_utf8(...) sub?

    Read the article

  • Fast, cross-platform timer?

    - by dsimcha
    I'm looking to improve the D garbage collector by adding some heuristics to avoid garbage collection runs that are unlikely to result in significant freeing. One heuristic I'd like to add is that GC should not be run more than once per X amount of time (maybe once per second or so). To do this I need a timer with the following properties: It must be able to grab the correct time with minimal overhead (calling core.stdc.time takes an amount of time roughly equivalent to a small memory allocation, so it's not a good option). Ideally, it should be cross-platform (both OS and CPU), for maintenance simplicity. Super high resolution isn't terribly important; if the times are accurate to maybe 1/4 of a second, that's good enough. It must work in a multithreaded/multi-CPU context, so the x86 rdtsc instruction won't work.
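
    The question is about D, but as a language-neutral sketch of the throttling idea, here is what the check could look like in C++ using a monotonic clock (D's runtime has comparable monotonic-time facilities). The class and its names are illustrative, and the actual call overhead would still need measuring on the target platforms.

        #include <chrono>

        // Rate limiter for an expensive operation (e.g. a GC pass): allow it
        // only if at least `minInterval` has elapsed since the last run.
        // steady_clock is monotonic; this sketch is not synchronized, so wrap
        // it with a mutex or atomic exchange in a multithreaded collector.
        class RunThrottle {
        public:
            explicit RunThrottle(std::chrono::steady_clock::duration minInterval)
                : minInterval_(minInterval) {}

            // Returns true (and records the time) if the caller may run now.
            bool tryAcquire() {
                auto now = std::chrono::steady_clock::now();
                if (haveLast_ && now - lastRun_ < minInterval_) return false;
                haveLast_ = true;
                lastRun_ = now;
                return true;
            }

        private:
            std::chrono::steady_clock::duration minInterval_;
            std::chrono::steady_clock::time_point lastRun_{};
            bool haveLast_ = false;
        };

        // Usage sketch:
        //   RunThrottle throttle(std::chrono::seconds(1));
        //   if (throttle.tryAcquire()) runCollection();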

    Read the article

  • The Politics of Junk Filtering

    - by mikef
    If the national postal service, such as the Royal Mail in the UK, were to go through your letters and throw away all the stuff it considered to be junk instead of delivering it to you, you might be rather pleased until you discovered that it took too liberal a view of what was junk. Catalogs you'd asked for? Junk! Requests from charities? Who needs them! Parcels from competing carriers? Toss them away! The possibility of abuse by an agency in a monopolistic position is just too scary to tolerate. After all, the postal service could charge 'consultancy fees' to any sender who wanted to guarantee that his stuff got delivered, or they could even farm this out to other companies. Because Microsoft Outlook is just about the only email client used by the international business community in the west, its spam filter is the final arbiter as to what gets read. My Outlook 2007, set to the default settings, junks all the perfectly innocent email newsletters that I subscribe to. Whereas Google Mail, Yahoo, and LIVE are all pretty accurate in detecting spam, Outlook makes all sorts of silly mistakes. The documentation speaks techno-babble about 'advanced heuristics', but the result boils down to an inaccurate mess. The more that Microsoft fiddles with it, the stickier the mess. To make matters worse, it still lets through obvious spam. The filter is occasionally updated along with other automatic 'security' updates, if you opt for automatic updates. As the editor of a popular online publication that provides a newsletter service, I find this an obvious source of frustration. We follow all the best practices we know about. We ensure that it is a trivial task to opt out of receiving it. We format the newsletter to the requirements of the service providers. We follow up on, and resolve, every complaint. As a result, it gets delivered. It is galling to discover that, after all that effort, Outlook then often judges the contents to be junk on a whim, so you don't get to see it. A few days ago, Microsoft published the PST file format specification, under pressure from a European Union interoperability investigation by ECIS (the European Committee for Interoperable Systems). The objective was that other applications could then access existing PST files so as to migrate from existing Outlook installations to other solutions. Joaquín Almunia, the current competition commissioner, should now turn his attention to the more subtle problems of Microsoft Outlook. The Junk problem seems to have come from a clumsy implementation of client-side spam filtering rather than from deliberate exploitation of a monopoly on the desktop email client for businesses, but it is a growing problem nonetheless. Cheers, Michael Francis

    Read the article

  • Rapid Planning: Next Generation MRP

    - by john.bermudez
    MRP has been a mainstay of manufacturing systems for 40 years. MRP evolved from simple inventory planning systems to become the heart of the MRPII systems which eventually became ERP. While the applications surrounding it have become broader, more sophisticated and web-based, MRP continues to operate in the loneliness of the Saturday night batch window, quietly exploding bills of materials and logging exceptions for hours. During this same 40 years, manufacturing business processes have seen countless changes and improvements, including JIT, TQM, Six Sigma, Flow Manufacturing, Lean Manufacturing and Supply Chain Management. Although much logic has been added to MRP to deal with new manufacturing processes, it has not been able to keep up with the real-time pace of today's supply chain. As a result, planners have devised ingenious ways to trick MRP into handling new processes, but often need to dump the output into spreadsheets of their own design in the hope of wrestling thousands of exceptions to ground. Oracle's new Rapid Planning application is just what companies still running MRP have been waiting for! The newest member of the Value Chain Planning product line, Rapid Planning is designed to empower planners with comprehensive supply planning that runs online in minutes, not hours. It enables a planner to simulate the incremental impact of a new order or re-run an entire plan in a separate sandbox. Rapid Planning does a complete multi-level bill of material explosion like MRP, but plans orders considering material and capacity constraints. Considering material and capacity constraints in planning can help you quickly reduce inventory and improve on-time shipments. Rapid Planning is an APS application that leverages years of Oracle development experience and customer feedback. Rather than rely exclusively on black-box heuristics, Rapid Planning is designed to give planners the computing power to use their industry experience and business knowledge to improve MRP. For example, Rapid Planning has a powerful worksheet user interface with built-in query capability that allows the planner to locate the orders she is interested in and use a mass update function to make quick work of large changes. The planner can save these queries and this customized user interface to personalize their planning environment. Most importantly, Rapid Planning is designed to do supply planning in today's dynamic supply chain environment. It can be used to supplement MRP or replace MRP entirely. It generates plans that provide order-by-order details with aggregate key performance indicators that enable planners to quickly assess the overall business impact of a plan. To find out more about how Rapid Planning can help improve your MRP, please contact me at [email protected] or your Oracle Account Manager.

    Read the article

  • Review: Backbone.js Testing

    - by george_v_reilly
    Title: Backbone.js Testing Author: Ryan Roemer Rating: 4.5/5 Publisher: Packt Copyright: 2013 ISBN: 178216524X Pages: 168 Keywords: programming, testing, javascript, backbone, mocha, chai, sinon Reading period: October 2013 Backbone.js Testing is a short, dense introduction to testing JavaScript applications with three testing libraries: Mocha, Chai, and Sinon.JS. Although the author uses a sample application of a personal note manager written with Backbone.js throughout the book, much of the material would apply to any JavaScript client or server framework. Mocha is a test framework that runs your tests, either in the browser or under Node.js. Chai is a framework-agnostic TDD/BDD assertion library. Sinon.JS provides standalone test spies, stubs and mocks for JavaScript. They complement each other, and the author does a good job of explaining when and how to use each. I've written a lot of tests in Python (unittest and mock, primarily) and C# (NUnit), but my experience with JavaScript unit testing was both limited and years out of date. The JavaScript ecosystem continues to evolve rapidly, with new browser frameworks and Node packages springing up everywhere. JavaScript has some particular challenges in testing—notably, asynchrony and callbacks. Mocha, Chai, and Sinon meet those challenges, though they can't take away all the pain. The author describes how to test Backbone models, views, and collections; how to deal with asynchrony; useful testing heuristics, including isolating components to reduce dependencies; when to use stubs, mocks, and fake servers; and test automation with PhantomJS. He does not, however, teach you Backbone.js itself; for that, you'll need another book. There are a few areas which I thought were dealt with too lightly. There's no real discussion of test-driven development or behavior-driven development, which provide the intellectual foundations of much of the book. Nor does he have much to say about testability and how to make legacy code more testable. The sample Notes app has plenty of testing seams (much of this falls naturally out of the architecture of Backbone); other apps are not so lucky. The chapter on automation is extremely terse—it could be expanded into a very large book!—but it does provide useful pointers to many areas for exploration. I learned a lot from this book and I have no hesitation in recommending it. Disclosure: Thanks to Ryan Roemer and Packt for a review copy of this book.

    Read the article

  • Minimizing Dependencies For GUIs

    - by tuba09
    I've been working on a project, and have been charged with designing the project's GUI front-end. I'm coding in Java and using the Swing toolkit. Usability-wise, the GUI front-end follows all of Nielsen's heuristics. Users can easily get to where they want to go through the click of a button / JComboBox. Essentially, in Swing terms, what happens is that their actions drive the creation/deletion of custom panels. The GUI is coming along fine for the most part. However, I have to admit to being utterly dismayed at the tight web of dependencies my code is being smothered in. The main problem that I've encountered, and haven't been able to fix as of yet, is how to keep a reference to the panels/buttons being changed. I'll give an example:
    Say there's a button A.
    Say there's a panel B displaying picture C.
    Say there's another picture D (not currently being displayed by panel B).
    When the user clicks A, panel B should remove picture C and display picture D.
    My question is, what's the best way of keeping track of panel B? Since I need a global point of access to panel B, my solution has so far been to just shoehorn it into a static variable, and access it through a series of static getters and setters. And this static variable is usually stored in the reference's original class, i.e. UserPanel has a static variable that stores a reference to itself. Is there an easy, tried-and-true way of dealing with these kinds of situations? My GUI works fine, but it is not modular and/or robust at all. To add to this, the dreaded 'cyclical dependencies' issue that's shunned by so many programmers is out here in full effect. I'm fairly new to development and just want to make sure that my code will be fairly extensible and won't cause much of a headache to the next person that decides to have a go at it. I know there are loads of books out there that probably have a nice, elegant solution to this, but unfortunately I just don't have the time for leisure reading right now. I need something that's quick and dirty. Thanks in advance
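
    The project is Swing/Java, but as a language-agnostic sketch of one common alternative to static getters, here is a minimal listener (observer) arrangement in C++: a model owns "which picture is current" and panels subscribe to changes, so button handlers never need a global reference to panel B. All names are illustrative.

        #include <functional>
        #include <iostream>
        #include <string>
        #include <vector>

        // Minimal observer arrangement: a model owns "which picture is shown"
        // and notifies subscribed views when it changes. Button handlers talk
        // to the model, never directly to other panels, so no static panel
        // references are needed. All names here are illustrative.
        class PictureModel {
        public:
            using Listener = std::function<void(const std::string&)>;

            void subscribe(Listener l) { listeners_.push_back(std::move(l)); }

            void setPicture(const std::string& name) {
                current_ = name;
                for (const auto& l : listeners_) l(current_);
            }

        private:
            std::string current_;
            std::vector<Listener> listeners_;
        };

        int main() {
            PictureModel model;
            // "Panel B" subscribes; in Swing it would repaint itself instead.
            model.subscribe([](const std::string& pic) {
                std::cout << "Panel B now displays: " << pic << '\n';
            });
            // "Button A" handler just updates the model.
            model.setPicture("picture D");
        }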

    Read the article

  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only. Create a new web project: I created an empty MVC 2 application, but MVC is not required for OData. Add a new WCF Data Service to the project: I named mine NastyWords.svc since I'm serving up a list of nasty words. Add a class to expose via the service, NastyWord:

        [DataServiceKey("Word")]
        public class NastyWord
        {
            public string Word { get; set; }
        }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property which would have uniquely identified them and would then not need the "DataServiceKey" attribute, because the DataService would apply some reflection and heuristics to guess at which property would be the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive. Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

        public class NastyWordsDataSource
        {
            private static IList<NastyWord> words = new List<NastyWord>
            {
                new NastyWord{ Word="crap"},
                new NastyWord{ Word="darn"},
                new NastyWord{ Word="hell"},
                new NastyWord{ Word="shucks"}
            };

            public NastyWordsDataSource()
            {
                NastyWords = words.AsQueryable();
            }

            public IQueryable<NastyWord> NastyWords { get; private set; }
        }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

        public class NastyWords : DataService<NastyWordsDataSource>
        {
            // This method is called only once to initialize service-wide policies.
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Compile, browse to my NastyWords.svc, and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent so I can build up my list of nasty words. Enjoy!

    Read the article

  • Wireframing: A Day In the Life of UX Workshop at Oracle

    - by ultan o'broin
    The Oracle Applications User Experience team's Day in the Life (DITL) of User Experience (UX) event was run in Oracle's Redwood Shores HQ for Oracle Usability Advisory Board (OUAB) members. I was charged with putting together a wireframing session, together with the Director of Financial Applications User Experience, Scott Robinson (@scottrobinson). (Example of the stunning new wireframing visuals we used at the DITL events.) We put on a lively show, explaining the basics of wireframing, the concepts, what it is and isn't, considerations on wireframing tool choice, and then imparting some tips and best practices. But the real energy came when the OUAB customers and partners in the room were challenged to do some wireframing of their own. Wireframing is about bringing your business and product use cases to life in real UX visual terms, by creating a low-fidelity drawing to iterate and agree on in advance of prototyping and coding what is to be finally built and rolled out for users. All the best people wireframe. Leonardo da Vinci used "cartoons" on some great works, tracing outlines first and using red ochre or charcoal dropped through holes in the tracing parchment onto the canvas to outline the subject. (Image distributed under Wikimedia Commons license.) Wireframing an application's user experience design enables you to:
      - Obtain stakeholder buy-in.
      - Enable faster iteration of different designs.
      - Determine the task flow navigation paths (in Oracle Fusion Applications, navigation is linked with user roles).
      - Develop a content strategy (readability, search engine optimization (SEO) of content, and so on).
      - Lay out the pages, widgets, groups of features, and so on.
      - Apply usability heuristics early (no replacement for usability testing, but a great way to do some heavy lifting up front).
      - Decide upstream which functional user experience design patterns to apply (out-of-the-box solutions that expedite productivity).
      - Assess which Oracle Application Development Framework (ADF) or equivalent technology components can be used (again, developer productivity is enhanced downstream).
    We ran a lively hands-on exercise where teams wireframed a choice of application scenarios using the time-honored tools of pen and paper. Scott worked the floor like a pro, pointing out great use of features, best practices, innovations, and making sure that the whole concept of wireframing, the gestalt, transferred. "We need more buttons!" The cry of the energized. Not quite. (The winning wireframe session (online shopping scenario) from the Applications UX DITL event is shown.) Great fun, great energy, and great teamwork were evident in the room. Naturally, there were prizes for the best wireframe. Well, actually, prizes were handed out to the other attendees too! An exciting, slightly different aspect to the delivery of this session made the wireframing event one of the highlights of the day. And definitely something we will repeat again when we get the chance. Thanks to everyone who attended, contributed, and helped organize.

    Read the article

  • Should EICAR be updated to test the revision of the antivirus system?

    - by makerofthings7
    I'm posting this here since programmers write viruses, and AV software. They also have the best knowledge of heuristics and how AV systems work (cloaking etc). The EICAR test file was used to functionally test an antivirus system. As it stands today, almost every AV system will flag EICAR as being a "test" virus. For more information on this historic test virus please click here. Currently the EICAR test file is only good for testing the presence of an AV solution, but it doesn't check whether the engine file or DAT file is up to date. In other words, why do a functional test of a system that could have definition files that are more than 10 years old? With the increase of zero-day threats it doesn't make much sense to functionally test your system using EICAR. That being said, I think EICAR needs to be updated/modified to be an effective test that works in conjunction with an AV management solution. This question is about real-world testing, without using live viruses... which is the intent of the original EICAR. With that in mind, I'm proposing a new EICAR file format with the appendage of an XML blob that will conditionally cause the antivirus engine to respond.

        X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H*
        <?xml version="1.0"?>
        <engine-valid-from>2010-1-1Z</engine-valid-from>
        <signature-valid-from>2010-1-1Z</signature-valid-from>
        <authkey>MyTestKeyHere</authkey>

    In this sample, the antivirus engine would only alert on the EICAR file if both the engine and signature files are equal to or newer than their valid-from dates. Also, there is a passcode that restricts the usage of EICAR to the system administrator. If you have a background in test-driven development (TDD) for software, you may see that all I'm doing is applying the principles of TDD to my infrastructure. Based on your experience and contacts, how can I make this idea happen?

    Read the article

  • Accounting for waves when doing planar reflections

    - by CloseReflector
    I've been studying Nvidia's examples from the SDK, in particular the Island11 project, and I've found something curious about a piece of HLSL code which corrects the reflections up and down depending on the state of the wave's height. Naturally, after examining the brief paragraph of code:

        // calculating correction that shifts reflection up/down according to water wave Y position
        float4 projected_waveheight = mul(float4(input.positionWS.x,input.positionWS.y,input.positionWS.z,1),g_ModelViewProjectionMatrix);
        float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w;
        projected_waveheight = mul(float4(input.positionWS.x,-0.8,input.positionWS.z,1),g_ModelViewProjectionMatrix);
        waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w;
        reflection_disturbance.y=max(-0.15,waveheight_correction+reflection_disturbance.y);

    My first guess was that it compensates for the planar reflection when it is subjected to vertical perturbation (the waves), shifting the reflected geometry to a point where there is nothing, so the water is just rendered as if there is nothing there, or just the sky. Now, that's the sky reflecting where we should see the terrain's green/grey/yellowish reflection lerped with the water's baseline. My problem now is that I cannot really pinpoint the logic behind it: projecting the actual world space position of a point of the wave/water geometry and then multiplying by -0.5f, only to take another projection of the same point, this time with its y coordinate changed to -0.8 (why -0.8?). Clues in the code seem to indicate it was derived by trial and error, because there is redundancy. For example, the author takes the negative half of the projected y coordinate (after the w divide): float waveheight_correction=-0.5*projected_waveheight.y/projected_waveheight.w; and then does the same for the second point (only positive, to get a difference of some sort, I presume) and combines them: waveheight_correction+=0.5*projected_waveheight.y/projected_waveheight.w; By removing the divide by 2, I see no difference in quality improvement (if someone cares to correct me, please do). The crux of it seems to be the difference in the projected y; why is that? This redundancy and the seemingly arbitrary selection of -0.8f and -0.15f lead me to conclude that this might be a combination of heuristics/guesswork. Is there a logical underpinning to this, or is it just a desperate hack? Here is an exaggeration of the initial problem which the code fragment fixes; observe it at the lowest tessellation level. Hopefully, it might spark an idea I'm missing. The -0.8f might be a reference height from which to deduce how much to disturb the texture coordinate sampling the planarly reflected geometry render, and the -0.15f might be the lower bound, a security measure.

    Read the article

  • Antivirus Configuration for dedicated SQL and dedicated IIS Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately, this is non-negotiable. On two types of servers I'm responsible for, SQL & Web, we have noticed major performance issues with the corporate standard setup:
      - Max scan time: 45 sec
      - One policy for all processes
      - Scan ALL files on write, read and open for backup
      - Heuristics: find unknown programs, trojans and macros
      - Detect unwanted programs
      - Exclusions: EVT, LDF, LOG, MDF, VMD, Windows file protection
    This of course still causes major slowdowns. IIS .NET recompiles are slow, especially with SharePoint, as are SQL backups and restores, SQL Analysis Services, Integration Services and the temp data from them. I have looked from time to time for some best practices on setting up McAfee for SQL & SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as minimally as possible? Has anyone run across white papers on these setups? Obviously some are case by case, but there must be some best practices out there somewhere.

    Read the article

  • What's the state of the art in image upscaling?

    - by monov
    I like to collect cool pics and use them as wallpapers or for other things. Often, artists publish only low-res versions, probably for fear of theft. Example: Gabriel Pulecio's BIRDS Now, if I want to use that as a wallpaper, I'd have to upscale it, and obviously that'd make it look blurry because of the bicubic interpolation. I realize there's no real way to get a high-res version from a low-res pic, because the information is not simply there. That said, I'm wondering if heuristics have been developed for upscaling with less apparent loss of quality. Those would probably be optimized for specific image types. For photorealistic pictures, for cartoons with large flat areas, for pixel art... One algorithm I'm aware of is Seam Carving. It works for some kinds of pics, especially ones with a plain, undetailed or uninteresting background, and a subject that strongly stands out. But it's far from being general-purpose. Applying it to the above pic produces this. It looks quite sharp, but the proportions are horribly distorted because the algorithm is not designed for this kind of pic. Another is Pixel art scaling algorithms. Those are completely unfit for anything other than actual pixel art that's pixelized to begin with. For example, I tried the scale2x windows binary on my pic, but its output was nearly indistinguishable from nearest-neighbour scaling because the algorithm didn't detect any isolated pixely fragments to work from. Something else I tried was: I enlarged the image in Photoshop with bicubic interpolation, then I applied unsharp mask. The result looks pretty bad. The red blotch is actually resized reasonably well, but the dove is far from it. What I'm looking for is some app that makes a best-effort attempt at upscaling any input image while minimizing blurriness. If you know of any, I'll be thankful. Note that the subjective prettiness and sharpness of the result is what matters... the result doesn't need to be completely faithful to the original small image.

    Read the article

  • Is there a reliable way to troubleshoot waking from sleep in Windows?

    - by Borek
    Is there a reliable way to troubleshoot laptop wakes? I've seen "heuristics" posted here and there, but isn't there really a simple and deterministic way to tell what's causing the problem? Specifically, my laptop wakes up about every hour for about 2 minutes. Exported event log entries are here: http://www.mediafire.com/?abcqb00v5wyo6pj. I've tried: powercfg -devicequery wake_armed (empty result set). Scheduled tasks: the main ones are not scheduled to run every hour. When I go through a long list of all possible tasks, there are some that are set to be triggered every hour (e.g., MS "RacTask", whatever it is). But when I go to Power Options, Advanced settings, Sleep, Allow wake timers, it is set to "Disable". Also, the specific task is not set to wake the computer if necessary. Power options for my Ethernet card don't enable it to wake the computer, and the cable is disconnected anyway. There are no other HW devices attached: no USB disks, no keyboards / mice, etc. I am really clueless and quite unhappy that it's so hard to troubleshoot this situation.

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's; both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            'Return hCache
            'End If
            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer
            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas
                                 Where result.systemId = startSystem
                                 Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas
                               Where result.systemId = endSystem
                               Select result.x, result.y, result.z
            'LINQ execute
            'Fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = (solarSystem.x)
                y1 = (solarSystem.y)
                z1 = (solarSystem.z)
            Next
            For Each solarSystem In systemToData
                x2 = (solarSystem.x)
                y2 = (solarSystem.y)
                z2 = (solarSystem.z)
            Next
            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)
            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
            'hCache = distance
            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() != 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)
            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next
            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.
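
    One possible hot spot, independent of the heuristic itself: if lowestF() scans the whole open list on every iteration (a common first implementation), each pop costs O(n), whereas a binary heap makes it O(log n). A rough sketch of the idea in C++; the types and names are placeholders, not the questioner's actual code.

        #include <functional>
        #include <queue>
        #include <vector>

        // Open list as a binary heap instead of a linear scan: pop-lowest-f
        // becomes O(log n) per iteration. Stale entries (nodes later reached
        // with a better g) are simply skipped when they are popped.
        struct OpenEntry {
            double f;
            int nodeId;
            bool operator>(const OpenEntry& other) const { return f > other.f; }
        };

        using OpenList = std::priority_queue<OpenEntry,
                                             std::vector<OpenEntry>,
                                             std::greater<OpenEntry>>;

        // Usage sketch inside the A* loop:
        //   open.push({g + h, neighborId});
        //   ...
        //   OpenEntry best = open.top(); open.pop();
        //   if (best.f > bestKnownF[best.nodeId]) continue;  // stale, skip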

    Read the article

  • SOA Community Newsletter May 2014

    - by JuergenKress
    Registration for the Fusion Middleware Summer Camps 2014 is open – register ASAP for one of our bootcamps, August 4th – 8th, 2014, in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, like in the past, the conference will be booked out soon! If you can’t make it to Lisbon, attend our free on-demand SOA Suite 11c Bootcamp or the Managing the Complexity of IoT online training. With more than 5000 customers, SOA Suite Achieves Significant Customer Adoption and Industry Recognition. Thanks to all our SOA Specialized partners for making our joint SOA customers successful! As a summary of the Industrial SOA series we published the Podcast Show Notes: SOA and Cloud - Where's This Relationship Going? Make sure you use the Oracle Demo Systems for your customer presentations. The demo systems are hosted by Oracle and include complete scenarios based on the latest Middleware version, like the new B2B SOA Suite Demo System! For local presentations without fast internet, use the SOA/BPM 11.1.1.7.1 Virtual Machine and Case Management Sample. At our SOA Community Workspace (SOA Community membership required) you can get the new IoT presentations for Location Based Offers for Banking & Whitepaper and online Webcast & Utility presentation. In this newsletter you will find many articles about OSB: OSB 11g – A Hands-on Tutorial & Using Split-Joins in OSB Services for parallel processing of messages & OSB, Service Callouts and OQL & Working with Oracle Security Token Service. Thanks for sharing all the additional SOA articles within the community: How to configure Oracle SOA/BPM task auto release & Controlling BPEL process flow at runtime & Upgrading to Oracle SOA Suite 11g PS6 (11.1.1.7)? Do this. & BPEL and BPM's performance monitoring using DMS & SOA 11g - Create RESTful Service In Oracle SOA & Wrong timezone causes TopLink warning in SOA suite. The highlight of the BPM and ACM section is the IDC BPM vendor report. The new bundle patch including the ACM UI is now available. If you want to learn more about ACM, get the ACM training material at our SOA Community Workspace (SOA Community membership required). A great demo for your next BPM presentation is the BPM iPad app. It’s simple: Mobile BPM is Not An Option. It’s a Necessity. Thanks for sharing all the additional BPM articles within the community: BPM update adds Case Management Web Interface and REST APIs & Implementing deadline functionality with Oracle Adaptive Case Management & BPM 11g Timeout Heuristics & Humantask Assignment: Names and Expressions Assignment via Rules. In our last section, Architecture, it is all about design. Usability is a key factor for customer satisfaction, so it is worth spending some time to read the Simplified User Experience Design Patterns eBook. Great blueprint for your project! See you in Lisbon! To read the newsletter please visit www.tinyurl.com/soaNewsMay2014 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I’d share my favorites, and would love to hear yours too! Interleaving on drum memory Back in ye olde days before I’d been born (we’re talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies. The Hashlife algorithm Conway’s Game of Life has attracted numerous implementations over the years, but Bill Gosper’s Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it’s used in a clever way, so it makes the list. Elite’s procedural generation Ok, so this isn’t exactly a memory optimization – more a storage optimization – but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K memory was something of a limitation (for comparison that’s about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did that all in assembly language. Other games of the time used similar techniques too – The Sentinel’s landscape generation algorithm being another example. Modern Garbage Collectors Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; that there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you) but they’re much, much rarer. A cautionary note In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there’s actually a problem is doing it wrong. So what have I missed?
Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben
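
    As a toy illustration of the lookup-table name stitching described above (the syllable table and constants here are made up, not Elite's actual data), a minimal C++ sketch:

        #include <cctype>
        #include <cstdint>
        #include <iostream>
        #include <string>

        // Procedural name generator in the spirit of Elite's technique: a
        // deterministic PRNG seeded per planet index picks syllables from a
        // small lookup table, so thousands of names cost almost no storage.
        static const char* kSyllables[] = {
            "la", "ve", "ti", "so", "qu", "en", "bi", "or",
            "ar", "ce", "di", "na", "ri", "us", "xe", "za"
        };

        std::string planetName(std::uint32_t planetIndex) {
            std::uint32_t state = planetIndex * 2654435761u + 1u;  // cheap hash seed
            int syllableCount = 2 + (state % 3);                   // 2..4 syllables
            std::string name;
            for (int i = 0; i < syllableCount; ++i) {
                state = state * 1664525u + 1013904223u;            // LCG step
                name += kSyllables[(state >> 16) & 15];
            }
            name[0] = static_cast<char>(
                std::toupper(static_cast<unsigned char>(name[0])));
            return name;
        }

        int main() {
            for (std::uint32_t i = 0; i < 5; ++i)
                std::cout << i << ": " << planetName(i) << '\n';
        }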

    Read the article

  • Is my implementation of A* wrong?

    - by Bloodyaugust
    I've implemented the A* algorithm in my program. However, it would seem to be functioning incorrectly at times. Below is a screenshot of one such time. The obviously shorter line is to go immediately right at the second to last row. Instead, they move down, around the tower, and continue to their destination (bottom right from top left). Below is my actual code implementation: nodeMap.prototype.findPath = function(p1, p2) { var openList = []; var closedList = []; var nodes = this.nodes; for (var i = 0; i < nodes.length; i++) { //reset heuristics and parents for nodes var curNode = nodes[i]; curNode.f = 0; curNode.g = 0; curNode.h = 0; curNode.parent = null; if (curNode.pathable === false) { closedList.push(curNode); } } openList.push(this.getNode(p1)); while(openList.length > 0) { // Grab the lowest f(x) to process next var lowInd = 0; for(i=0; i<openList.length; i++) { if(openList[i].f < openList[lowInd].f) { lowInd = i; } } var currentNode = openList[lowInd]; if (currentNode === this.getNode(p2)) { var curr = currentNode; var ret = []; while(curr.parent) { ret.push(curr); curr = curr.parent; } return ret.reverse(); } closedList.push(currentNode); for (i = 0; i < openList.length; i++) { //remove currentNode from openList if (openList[i] === currentNode) { openList.splice(i, 1); break; } } for (i = 0; i < currentNode.neighbors.length; i++) { if(closedList.indexOf(currentNode.neighbors[i]) !== -1 ) { continue; } if (currentNode.neighbors[i].isPathable === false) { closedList.push(currentNode.neighbors[i]); continue; } var gScore = currentNode.g + 1; // 1 is the distance from a node to it's neighbor var gScoreIsBest = false; if (openList.indexOf(currentNode.neighbors[i]) === -1) { //save g, h, and f then save the current parent gScoreIsBest = true; currentNode.neighbors[i].h = currentNode.neighbors[i].heuristic(this.getNode(p2)); openList.push(currentNode.neighbors[i]); } else if (gScore < currentNode.neighbors[i].g) { //current g better than previous g gScoreIsBest = true; } if (gScoreIsBest) { currentNode.neighbors[i].parent = currentNode; currentNode.neighbors[i].g = gScore; currentNode.neighbors[i].f = currentNode.neighbors[i].g + currentNode.neighbors[i].h; } } } return false; } Towers block pathability. Is there perhaps something I am missing here, or does A* not always find the shortest path in a situation such as this? Thanks in advance for any help.
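
    Not a diagnosis of this particular implementation, but one thing worth checking whenever an otherwise correct A* returns longer paths is the heuristic: A* only guarantees the shortest path if heuristic() never overestimates the true remaining cost. A short sketch of admissible grid heuristics for unit step costs, in C++; coordinates are assumed to be grid indices.

        #include <algorithm>
        #include <cstdlib>

        // Admissible grid heuristics for unit step cost. A* only guarantees
        // the shortest path when h never overestimates the remaining cost.

        // 4-connected movement: Manhattan distance is admissible.
        int manhattan(int x0, int y0, int x1, int y1) {
            return std::abs(x1 - x0) + std::abs(y1 - y0);
        }

        // 8-connected movement where diagonals also cost 1: Manhattan can
        // overestimate, so use Chebyshev distance instead.
        int chebyshev(int x0, int y0, int x1, int y1) {
            return std::max(std::abs(x1 - x0), std::abs(y1 - y0));
        }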

    Read the article

  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
    My assignment is to use local search heuristics to solve the multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem along with what I tried so far.
    Problem:
                R1 R2 R3
    RESOURCES:   8  8  8
    GROUPS:
    G1:  11.0    3  2  2
         12.0    1  1  3
    G2:  20.0    1  1  3
          5.0    2  3  2
    G3:  10.0    2  2  3
         30.0    1  1  3
    Sorting strategies: To find a starting feasible solution for my local search I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the origin of the multidimensional space, thus calculating SQRT(R1^2 + R2^2 + ... + RN^2). I felt like this was a keen solution, as it somehow privileged those choices with resource usages closer to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even if the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many (30) different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above, which sorts like this:
                R1 R2 R3
    RESOURCES:   8  8  8
    GROUPS:
    G1:  12.0    1  1  3  < select this
         11.0    3  2  2
    G2:  20.0    1  1  3  < select this
          5.0    2  3  2
    G3:  30.0    1  1  3  < select this
         10.0    2  2  3
    And it is not feasible, because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second-best choices in group 1 or 2, so I'll need some kind of iteration (local search?) to find the starting feasible solution for my local search solution. Here are the options I came up with.
    Option 1: iterate choices. I tried to find a way to iterate all the choices in a specific order, something like
    G1 G2 G3
     1  1  1
     2  1  1
     1  2  1
     1  1  2
     2  2  1
     ...
    believing that feasible solutions won't be that far away from the unfeasible one I start with, and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group while keeping "as near as possible" to the previous iteration?
    Option 2: change the comparison term. I tried to think how to find a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand for a more precious resource will push you down the list, but this didn't help at all. Also, I thought there probably isn't going to be such a comparison variable which assures me a feasible solution on the first try. Is there such a variable? If not, is there a better sorting criterion anyway?
    Option 3: implement any known sub-optimal fast solving algorithm. Unfortunately I could not find any such algorithms online. Any suggestions?
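
    For what it's worth, a sketch of another simple way to get a starting point: begin from any selection and greedily swap, one group at a time, to whichever alternative most reduces the total capacity violation, with an iteration cap since feasibility is not guaranteed for every instance. The data structures and names below are illustrative.

        #include <cstddef>
        #include <vector>

        // Greedy repair for the multidimensional multiple-choice knapsack:
        // pick any choice per group, then repeatedly swap in whichever
        // alternative most reduces the total capacity violation.
        struct Choice { std::vector<double> use; };   // resource usage per dimension
        using Group = std::vector<Choice>;

        double violation(const std::vector<Group>& groups, const std::vector<int>& pick,
                         const std::vector<double>& cap) {
            double v = 0.0;
            for (std::size_t r = 0; r < cap.size(); ++r) {
                double used = 0.0;
                for (std::size_t g = 0; g < groups.size(); ++g)
                    used += groups[g][pick[g]].use[r];
                if (used > cap[r]) v += used - cap[r];
            }
            return v;
        }

        bool repair(const std::vector<Group>& groups, const std::vector<double>& cap,
                    std::vector<int>& pick, int maxIters = 1000) {
            for (int it = 0; it < maxIters && violation(groups, pick, cap) > 0.0; ++it) {
                double bestV = violation(groups, pick, cap);
                int bestG = -1, bestC = -1;
                for (std::size_t g = 0; g < groups.size(); ++g) {
                    int saved = pick[g];
                    for (std::size_t c = 0; c < groups[g].size(); ++c) {
                        pick[g] = static_cast<int>(c);
                        double v = violation(groups, pick, cap);
                        if (v < bestV) {
                            bestV = v;
                            bestG = static_cast<int>(g);
                            bestC = static_cast<int>(c);
                        }
                    }
                    pick[g] = saved;
                }
                if (bestG < 0) return false;   // stuck: no single swap helps
                pick[bestG] = bestC;
            }
            return violation(groups, pick, cap) == 0.0;
        }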

    Read the article

  • Why are floating point values so prolific?

    - by Kibbee
    So, title says it all. Why are floating point values so prolific in computer programming. Due to problems like rounding errors, and not being able to even accurately represent numbers such as 0.1, I really can't see how they got as far as they did. I understand that the computation is faster with floating point numbers, however, I can think of only a few cases that they actually the right data type would be using. If you sat back and think about every time you used a floating point value, how many times did you say, well, some error would be ok, as long as the result was a few microseconds faster. It really makes me think because Jeff was talking about NP completeness, and how heuristics give an answer that is kind of right. And well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they are completely not valid. What really bugs me, isn't that floating point exists, but that in many languages, there isn't even a viable alternative, non-floating point, decimal value. A lot of programmers when doing financial applications have to fall back to storing the number of cents in an integer field. Which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate? [EDIT] Just to clarify, I was talking about Base 2 floating points, and not base 10 floating points. .Net offers the Decimal data type, which is a base 10 floating point value which offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base 10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.

    Read the article

  • Interface Casting vs. Class Casting

    - by Legatou
    I've been led to believe that casting can, in certain circumstances, become a measurable hindrance on performance. This may be more so the case when we start dealing with incoherent webs of nasty exception throwing/catching. Given that I wish to create more correct heuristics when it comes to programming, I've been prompted to ask this question of the .NET gurus out there: is interface casting faster than class casting? To give a code example, let's say this exists:

        public interface IEntity { IParent DaddyMommy { get; } }
        public interface IParent : IEntity { }
        public class Parent : Entity, IParent { }

        public class Entity : IEntity
        {
            public IParent DaddyMommy { get; protected set; }

            public IParent AdamEve_Interfaces
            {
                get
                {
                    IEntity e = this;
                    while (this.DaddyMommy != null)
                        e = e.DaddyMommy as IEntity;
                    return e as IParent;
                }
            }

            public Parent AdamEve_Classes
            {
                get
                {
                    Entity e = this;
                    while (this.DaddyMommy != null)
                        e = e.DaddyMommy as Entity;
                    return e as Parent;
                }
            }
        }

    So, is AdamEve_Interfaces faster than AdamEve_Classes? If so, by how much? And, if you know the answer, why?

    Read the article
