Search Results

Search found 1484 results on 60 pages for 'practical'.

Page 48/60 | < Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >

  • The Minimalist Approach to Content Governance - Request Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. For each project, regardless of size, it is critical to understand the required ownership, business purpose, prerequisite education / resources needed to execute, and success criteria around it. Without doing this, there is no way to get a handle on the content life-cycle, resulting in a mass of orphaned material. This lowers the quality of end user experiences. The good news is that by using a simple process in this request phase, we will not have to revisit this phase unless something drastic changes in the project. For each of the elements mentioned above in this stage, the why, how (technically focused) and impact are outlined with the intent of providing the most value to a small team.
    1. Ownership. Why - Without ownership information it will not be possible to track and manage any of the content or take advantage of many features of enterprise content management technology. To hedge against this, we need to ensure that both an individual and their group or department within the organization are associated with the content. How - Apply metadata that indicates the owner and the department or group that has responsibility for the content. Impact - It is possible to keep the content system optimized by running native reports against the metadata and acting on them based on what has been outlined for success criteria. This will maximize end user experience, as content will be faster to locate and more relevant to the user by virtue of working through a smaller collection.
    2. Business Purpose. Why - This simple step will weed out requests that have tepid justification, as users will most likely not spend the effort to request resources if they do not have a real need. How - Use a simple online form to collect the request and route it through workflow to management, native to the content system. Impact - Minimizes the amount of user-generated content that is of low value to the organization.
    3. Prerequisite Education / Resources Needed. Why - If a project cannot be properly staffed, the probability of its success is going to be low. By outlining the resources needed - in both skill set and duration - it will cause the requesting party to think critically about the commitment needed to complete their project and what gap must be closed with regard to education of those resources. How - In the simple request form outlined above, resources and a commitment to fulfilling any needed education should be included, with a brief acceptance clause that outlines the requesting party's commitment. Impact - This stage acts as a formal commitment to ensuring that resources are able to execute on the vision for the project.
    4. Success Criteria. Why - Similar to the business purpose, this is a key element in helping to determine whether the project and its respective content should continue to exist if it does not meet its intended goal. How - Set a review point for the project content that will check the progress against the originally outlined success criteria and then determine the fate of the content. This can even include logic that will tell the content system to remove items that have not been opened by any users in X amount of time. Impact - This ensures that projects and their contents do not live past their useful lifespans. Just as with orphaned content, non-relevant information will slow users' access to relevant materials for their jobs.
Request Phase Summary With a simple form that outlines the ownership of a project and its content, business purpose, education and resources, along with success criteria, we can ensure that an enterprise content management system will stay clean and relevant to end users - allowing it to deliver the most value possible. The key here is to make it straightforward to make the request and let the content management technology manage as much as possible through metadata, retention policies and workflow. Doing these basic steps will allow project content to get off to a great start in the enterprise! Stay tuned for the next installment - the "Create Phase" - covering security access and workflow involved in content creation, enabling a practical layer of governance over our enterprise content repository.

    Read the article

  • How to handle multi-processing of libraries which already spawn sub-processes?

    - by exhuma
    I am having some trouble coming up with a good solution to limit sub-processes in a script which uses a multi-processed library, where the script itself is also multi-processed. Both the library and the script are modifiable by us. I believe the question is more about design than actual code, but for what it's worth, it's written in Python. The goal of the library is to hide implementation details of various internet routers. For that reason, the library has a "Proxy" factory method which takes the IP of a router as a parameter. The factory then probes the device using a set of possible proxies. Usually, there is one proxy which immediately knows that it is able to send commands to this device. All others usually take some time to return (given a timeout). One thought was to simply query the device for an identifier, and then select the proper proxy using that, but in order to do so, you would already need to know how to query the device. Abstracting this knowledge is one of the main purposes of the library, so that becomes a little bit of a "circular-requirement"/deadlock: to connect to a device, you need to know what proxy to use, and to know what proxy to create, you need to connect to a device. So probing the device is - as we can see - the best solution so far, apart from keeping a lookup-table somewhere. The library currently kills all remaining processes once a valid proxy has been found. And yes, there is always only one good proxy per device. Currently there are about 12 proxies. So if one creates a proxy instance using the factory, 12 sub-processes are spawned. So far, this has been really useful and worked very well. But recently someone else wanted to use this library to "broadcast" a command to all devices. So he took the library and wrote his own multi-processed script. This obviously spawned 12 * n processes where n is the number of IPs to which he broadcasted. This has given us two problems: The host on which the command was executed slowed down to a near halt. Aborting the script with CTRL+C ground the system to a total halt. Not even the hardware console responded anymore! This may be due to some Python strangeness which still needs to be investigated. Maybe related to http://bugs.python.org/issue8296 The big underlying question is how to design a library which does multi-processing, so other applications which use this library and want to be multi-processed themselves do not run into system limitations. My first thought was to require a pool to be passed to the library, and execute all tasks in that pool. In that way, the person using the library has control over the usage of system resources. But my gut tells me that there must be a better solution. Disclaimer: My experience with multiprocessing is fairly limited. I have implemented a few straightforward cases which did not require access control to resources. So I do not yet have any practical experience with semaphores or mutexes. p.s.: In the future, we may have enough information to do this without the probing. But the database which would contain the proper information is not yet operational. Also, the design question of multiprocessing a multiprocessed library intrigues me :)

    Read the article

  • EPPM Is a Must-Have Capability as Global Energy and Power Industries Eye US$38 Trillion in New Investments

    - by Melissa Centurio Lopes
    “The process manufacturing industry is facing an unprecedented challenge: from now until 2035, cumulative worldwide investments of US$38 trillion will be required for drilling, power generation, and other energy projects,” Iain Graham, director of energy and process manufacturing for Oracle’s Primavera, said in a recent webcast. He adds that process manufacturing organizations such as oil and gas, utilities, and chemicals must manage this level of investment in an environment of constrained capital markets, erratic supply and demand, aging infrastructure, heightened regulations, and declining global skills. In the following interview, Graham explains how the right enterprise project portfolio management (EPPM) technology can help the industry meet these imperatives. Q: Why is EPPM so important for today’s process manufacturers? A: If the industry invests US$38 trillion without proper cost controls in place, a huge amount of resources will be put at risk, especially when it comes to cost overruns that may occur in large capital projects. Process manufacturing companies must not only control costs, but also monitor all the various contractors that will be involved in each project. If you’re not managing your own workers and all the interdependencies among the different contractors, then you’ve got problems. Q: What else should process manufacturers look for? A: It’s also important that an EPPM solution has the ability to manage more than just capital projects. For example, it’s best to manage maintenance and capital projects in the same system. Say you’re due to install a new transformer in a power station as part of a capital project, but routine maintenance in that area of the facility is scheduled for that morning. The lack of coordination could lead to unforeseen delays. There are also IT considerations that impact capital projects, such as adding servers and network cable for a control system in a power station. What organizations need is a true EPPM system that’s not just for capital projects, maintenance, or IT activities, but instead an enterprisewide solution that provides visibility into all types of projects. Read the complete Q&A here and discover the practical framework for successfully managing this massive capital spending.

    Read the article

  • How to programmatically add x509 certificate to local machine store using c#

    - by David
    I understand the question title may be a duplicate but I have not found an answer for my situation yet, so here goes; I have this simple piece of code: // Convert the Filename to an X509 Certificate X509Certificate2 cert = new X509Certificate2(certificateFilePath); // Get the server certificate store X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine); store.Open(OpenFlags.MaxAllowed); store.Add(cert); // x509 certificate created from a user supplied filename but I keep being presented with an "Access Denied" exception. I have read some information that suggests using StorePermissions would solve my issue, but I don't think this is relevant in my code. Having said that, I did test it to be sure and I couldn't get it to work. I also found suggestions that changing folder permissions within Windows was the way to go, and while this may work (not tested), it doesn't seem practical for what will become distributed code. I also have to add that, as the code will be running as a service on a server, adding the certificates to the current user store also seems wrong. Is there any way to programmatically add a certificate into the local machine store?
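
    A minimal sketch of the usual direction this takes (assuming the process runs elevated, or the service account has been granted write access to the machine store): open the store explicitly for writing rather than with MaxAllowed, and close it when done.

        using System.Security.Cryptography.X509Certificates;

        public static class CertificateInstaller
        {
            // Adds a certificate file to LocalMachine/TrustedPeople.
            // Writing to a machine-level store still requires elevation or
            // equivalent ACLs on the store; MaxAllowed alone does not grant it.
            public static void Install(string certificateFilePath)
            {
                X509Certificate2 cert = new X509Certificate2(certificateFilePath);

                X509Store store = new X509Store(StoreName.TrustedPeople, StoreLocation.LocalMachine);
                try
                {
                    store.Open(OpenFlags.ReadWrite | OpenFlags.OpenExistingOnly);
                    store.Add(cert);
                }
                finally
                {
                    store.Close();
                }
            }
        }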

    Read the article

  • Applying Test Driven Development to a tightly coupled architecture

    - by Chris D
    Hi all, I've recently been studying TDD, attended a conference and have dabbled in a few tests, and already I'm 100% sold - I absolutely love TDD. As a result I've raised this with my seniors and they are prepared to give it a chance, so they have tasked me with coming up with a way to implement TDD in the development of our enterprise product. The problem is our system has evolved since the days of VB6 to .NET and implements a lot of legacy technology and some far-from-best-practice development techniques, i.e. a lot of business logic in the ASP.NET code-behind and client script. The largest problem, however, is how our classes are tightly coupled with database access; properties, methods, constructors - they usually have some database access in some form or another. We use an in-house data access code generator tool that creates sqlDataAdapters and gives us all the database access we could ever want, which helps us develop extremely quickly; however, classes in our business layer are very tightly coupled to this data layer - we aren't even close to implementing some form of repository design. This and the issues above have created all sorts of problems for me. I have tried to develop some unit tests for existing classes I've already written, but the tests take a LOT longer to run since db access is required; not to mention that, since we use the MS Enterprise Caching framework, I am forced to fake an HttpContext for my tests to run successfully, which isn't practical. Also, I can't see how to use TDD to drive the design of any new classes I write since they have to be so tightly coupled to the database ... help! Because of the architecture of the system it appears I can't implement TDD without some real hack, which in my eyes just defeats the aim of TDD and the huge benefits that come with it. Does anyone have any suggestions how I could implement TDD with the constraints I'm bound to? Or do I need to push the repository design pattern down my seniors' throats and tell them we either change our architecture/development methodology or forget about TDD altogether? :) Thanks
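
    As a sketch of the direction this usually takes (all type names below are assumptions, not from the question): put an interface between the business logic and the generated data layer and inject it, so a unit test can substitute an in-memory fake and never touch the database, the caching framework, or HttpContext.

        // Hypothetical example: a business rule that can be tested without any database access.
        public class Customer
        {
            public int Id { get; set; }
            public decimal CreditLimit { get; set; }
            public decimal OutstandingBalance { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Save(Customer customer);
        }

        public class CreditLimitService
        {
            private readonly ICustomerRepository _customers;

            public CreditLimitService(ICustomerRepository customers)
            {
                _customers = customers;
            }

            // The rule under test: an order is allowed only while it stays inside the credit limit.
            public bool CanPlaceOrder(int customerId, decimal orderTotal)
            {
                Customer customer = _customers.GetById(customerId);
                return customer.OutstandingBalance + orderTotal <= customer.CreditLimit;
            }
        }

        // In a test, ICustomerRepository is a hand-rolled fake or a mock; the sqlDataAdapter-based
        // implementation only lives behind the interface in production code.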

    Read the article

  • How to break WinDbg in an anonymous method?

    - by Richard Berg
    Title kinda says it all. The usual SOS command !bpmd doesn't do a lot of good without a name. Some ideas I had: dump every method, then use !bpmd -md when you find the corresponding MethodDesc - not practical in real-world usage, from what I can tell. Even if I wrote a macro to limit the dump to anonymous types/methods, there's no obvious way to tell them apart. Use Reflector to dump the MSIL name - doesn't help when dealing with dynamic assemblies and/or Reflection.Emit. Visual Studio's inability to read local vars inside such scenarios is the whole reason I turned to WinDbg in the first place... Set the breakpoint in VS, wait for it to hit, then change to WinDbg using the noninvasive trick - attempting to detach from VS causes it to hang (along with the app). I think this is due to the fact that the managed debugger is a "soft" debugger via thread injection instead of a standard "hard" debugger. Or maybe it's just a VS bug specific to Silverlight (would hardly be the first I've encountered). Set a breakpoint on some other location known to call into the anonymous method, then single-step your way in - my backup plan, though I'd rather not resort to it if this Q&A reveals a better way.

    Read the article

  • MVVM Madness: Commands

    - by JP
    I like MVVM. I don't love it, but I like it. Most of it makes sense. But I keep reading articles that encourage you to write a lot of code so that you can write XAML and don't have to write any code in the code-behind. Let me give you an example. Recently I wanted to hook up a command in my ViewModel to a ListView MouseDoubleClick event. I wasn't quite sure how to do this. Fortunately, Google has answers for everything. I found the following articles: http://blog.functionalfun.net/2008/09/hooking-up-commands-to-events-in-wpf.html http://joyfulwpf.blogspot.com/2009/05/mvvm-invoking-command-on-attached-event.html http://sachabarber.net/?p=514 http://geekswithblogs.net/HouseOfBilz/archive/2009/08/27/adventures-in-mvvm-ndash-binding-commands-to-any-event.aspx http://marlongrech.wordpress.com/2008/12/13/attachedcommandbehavior-v2-aka-acb/ While the solutions were helpful in my understanding of commands, there were problems. Some of the aforementioned solutions rendered the WPF designer unusable because of a common hack of appending "Internal" after a dependency property; the WPF designer can't find it, but the CLR can. Some of the solutions didn't allow multiple commands on the same control. Some of the solutions didn't allow parameters. After experimenting for a few hours I just decided to do this: private void ListView_MouseDoubleClick(object sender, MouseButtonEventArgs e) { ListView lv = sender as ListView; MyViewModel vm = this.DataContext as MyViewModel; vm.DoSomethingCommand.Execute(lv.SelectedItem); } So, MVVM purists, please tell me what's wrong with this? I can still unit test my command. This seems very practical, but seems to violate the guideline of "ZOMG... you have code in your code-behind!!!!" Please share your thoughts. Thanks in advance.
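
    For context, one common ICommand implementation (often called RelayCommand or DelegateCommand; shown here as a sketch, not as the poster's code) that lets DoSomethingCommand live in the ViewModel and stay unit-testable regardless of how the event is routed to it:

        using System;
        using System.Windows.Input;

        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute = null)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                _execute = execute;
                _canExecute = canExecute;
            }

            public bool CanExecute(object parameter)
            {
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            // Re-query CanExecute whenever WPF suspects command state may have changed.
            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }

        // In the ViewModel: DoSomethingCommand = new RelayCommand(item => DoSomething(item));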

    Read the article

  • How to make Python Extensions for Windows for absolute beginners

    - by JR
    Hello: I've been looking around the internet trying to find a good step-by-step guide to extending Python on Windows, and I haven't been able to find something for my skill level. Let's say you have some C code that looks like this:

        #include <stdio.h>
        #include <math.h>

        double valuex(float value, double rate, double timex)
        {
            return value / pow(1 + rate, timex);
        }

    and you want to turn that into a Python 3 module for use on a Windows (64-bit, if that makes a difference) system. How would you go about doing that? I've looked up SWIG and Pyrex and in both cases they seem geared towards the Unix user. With Pyrex I am not sure if it works with Python 3. I'm just trying to learn the basics of programming, using some practical examples. Lastly, if there is a good book that someone can recommend for learning to extend Python, I would greatly appreciate it. Thank you.

    Read the article

  • HTML 4, HTML 5, XHTML, MIME types - the definitive resource

    - by deceze
    The topics of HTML vs. XHTML and XHTML as text/html vs. XHTML as XHTML are quite complex. Unfortunately it's hard to get a complete picture, since information is spread mostly in bits and pieces around the web or is buried deep in W3C tech jargon. In addition, there's some misinformation being circulated. I propose to make this the definitive SO resource about the topic, describing the most important aspects of: HTML 4, HTML 5, XHTML 1.0/1.1 as text/html, and XHTML 1.0/1.1 as XHTML. What are the practical implications of each? What are common pitfalls? What is the importance of proper MIME types for each? How do different browsers handle them? I'd like to see one answer per technology. I'm making this a community wiki, so rather than contributing redundant answers, please edit answers to complete the picture. Feel free to start with stubs. Also feel free to edit this question.

    Read the article

  • [OffTopic] PhD is it worth it?

    - by Zenzen
    So I'll be graduating next year (hopefully) and lately I started thinking about getting a PhD in computer science (to be more precise, I was thinking about something related to "distributed systems"; I don't have any specific topics in mind yet), but is it really worth it? A PhD course here lasts 4 years and is free IF you're doing the "normal one" (which means you have class from Monday to Friday), OR you have to pay if you want to study on the weekends. So I've been thinking about getting a normal full-time job (as a JEE programmer) and doing my PhD during the weekends, OR doing the normal PhD course (Mon-Fri) and getting a part-time job as a software dev (the main reason I want a job is simple - I need to eat). That would give me practical working experience and theoretical knowledge plus a PhD, but it would also mean working 7 days a week from 9 to 5 for 4 years straight (sounds like overkill). Is it, job wise, really worth getting a PhD? As an employer, do you prefer people with PhDs or MScs? This might be kinda important - I'm not from the US or Western Europe; how are PhDs from other countries seen (I am graduating from one of the best, if not the best, schools in my country, but it's still a Central European country...)? Are they useless? I won't hide it - after graduation I am thinking about moving abroad.

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&A's here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, which aren't in the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up from a ton of methods for the various things I need to filter my query on. Having a bunch of: IEnumerable<Product> GetProductsSinceDate(DateTime date); IEnumerable<Product> GetProductsByName(string name); IEnumerable<Product> GetProductsByID(int ID); If I was allowing IQueryable to be passed around I could easily have a generic repository that looked like: public interface IRepository<T> where T : class { T GetById(int id); IQueryable<T> GetAll(); void InsertOnSubmit(T entity); void DeleteOnSubmit(T entity); void SubmitChanges(); } However, if you aren't using IQueryable then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer, which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
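
    One middle ground, sketched below with assumed names: keep the IQueryable inside the repository and expose a single Find method that accepts an expression, so callers can still compose arbitrary filters (translated to SQL before execution) without ever holding a live query.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

        public abstract class RepositoryBase<T> : IRepository<T> where T : class
        {
            // The underlying IQueryable (e.g. a LINQ to SQL Table<T>) never leaves the repository.
            protected abstract IQueryable<T> Query { get; }

            public IEnumerable<T> Find(Expression<Func<T, bool>> predicate)
            {
                // The predicate is composed into the query, so only matching rows are materialized.
                return Query.Where(predicate).ToList();
            }

            public abstract T GetById(int id);
            public abstract void InsertOnSubmit(T entity);
            public abstract void DeleteOnSubmit(T entity);
            public abstract void SubmitChanges();
        }

        // usage: var recent = products.Find(p => p.CreatedOn >= cutoff && p.Name == name);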

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like Amazon.com and Pandora have and maintain huge data sets to run their core business. For example, Amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. etc. etc. Ignoring the data coming in from 3rd-party sellers and the user-generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • Implementing Naïve Bayes algorithm in Java - Need some guidance

    - by techventure
    Hello Stack Overflow people. As a school assignment I'm required to implement the Naïve Bayes algorithm, which I am intending to do in Java. In trying to understand how it's done, I've read the book "Data Mining - Practical Machine Learning Tools and Techniques", which has a section on this topic, but I am still unsure on some primary points that are blocking my progress. Since I'm seeking guidance, not a solution, I'll tell you what I'm thinking and what I believe is the correct approach, and in return ask for correction/guidance, which will very much be appreciated. Please note that I am an absolute beginner on the Naïve Bayes algorithm, data mining and programming in general, so you might see stupid comments/calculations below. The training data set I'm given has 4 attributes/features that are numeric and normalized (in range [0 1]) using Weka (no missing values) and one nominal class (yes/no). 1) The data coming from a csv file is numeric, hence: Given the attributes are numeric, I use the PDF (probability density function) formula. To calculate the PDF in Java I first separate the attributes based on whether they're in class yes or class no and hold them in different arrays (array class yes and array class no). Then I calculate the mean (sum of the values in the column / number of values in that column) and standard deviation for each of the 4 attributes (columns) of each class. Now, to find the PDF of a given value (n), I do (n-mean)^2/(2*SD^2). Then to find P(yes | E) and P(no | E) I multiply the PDF values of all 4 given attributes and compare which is larger, which indicates the class the instance belongs to. In terms of Java, I'm using ArrayList of ArrayList and Double to store the attribute values. Lastly, I'm unsure how to get new data: should I ask for an input file (like csv), or use the command prompt and ask for 4 values? I'll stop here for now (I do have more questions), but I'm worried this won't get any responses given how long it's gotten. I will really appreciate those who give their time to read my problems and comment.
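
    For reference, the expression above is only the argument of the exponential, without the sign and the normalizing factor; the full Gaussian (normal) density used for numeric attributes, with the per-class, per-attribute mean μ and standard deviation σ, is:

        f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)

    The per-class score is then the product of these densities (one per attribute) times the class prior P(yes) or P(no); in practice the logarithms are summed to avoid numeric underflow.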

    Read the article

  • Axis2 webservice (aar archive) properties file

    - by XpiritO
    Hi there, guys. I'm currently developing a set of SOAP webservices over Axis2, deployed over a clustered WebLogic 10.3.2 environment. My webservices use some user settings that I want to be editable without the need for recompiling and regenerating the AAR archive. With this in mind, I chose to put them into a properties file that is loaded and consumed at runtime. Unfortunately, I'm having some questions about this: As far as I know, to achieve what I want, the only option is to put the properties file into the ../axis2/WEB-INF/classes directory of each one of the deployments (on each WebLogic instance) I currently have in my clustered configuration, and then load the file as follows (or equivalent; this has not been verified for optimization): InputStreamReader fMainProp = new InputStreamReader(this.getClass().getResourceAsStream("myfile.properties")); Properties mainProp = new Properties(); mainProp.load(fMainProp); This is not as practical as I wanted it to be, because each time I want to alter some setting in the properties file, I have to edit each one of the files (deployed over different WebLogic instances), and there is a high probability of modifying one of these files without modifying the others. What I would like to know is if there is any (better) alternative to accomplish what I want, minimizing the potential configuration conflict that is created by distributing and replicating the properties file through multiple WebLogic instances.

    Read the article

  • Any way for a class to prevent outside code from declaring variables of its type?

    - by supercat
    Is it possible for a class to expose a type for function returns without allowing users of that class to create variables of that type? A couple of usage scenarios: A fluent interface on a large class; a statement like "foo=bar.WithX(5).WithY(9).WithZ(19);" would be inefficient if it had to create three new instances of the class, but could be much more efficient if WithX could create one instance and the other calls could simply use it. A class may wish to support a statement like "foo[19].x = 9;" even when foo itself isn't an array and does not hold the data in class instances that can be exposed to the public; one way to do that is to have foo[19] return a struct which holds a reference to 'foo' and the value '19', and has a member property 'x' which could call "foo.SetXValue(19, 9);" Such a struct could have a conversion operator to convert itself to the "apparent" type of foo[19]. In both of these scenarios, storing the value returned by a method or property into a variable and then using it more than once would cause strange behavior. It would be desirable if the designer of the class exposing such methods or properties could ensure that callers wouldn't be able to use them more than once. Is there any practical way to accomplish that?

    Read the article

  • x86 opcode alignment references and guidelines

    - by mrjoltcola
    I'm generating some opcodes dynamically in a JIT compiler and I'm looking for guidelines for opcode alignment. 1) I've read comments that briefly "recommend" alignment by adding nops after calls 2) I've also read about using nop for optimizing sequences for parallelism. 3) I've read that alignment of ops is good for "cache" performance Usually these comments don't give any supporting references. It's one thing to read a blog or a comment that says, "it's a good idea to do such and such", but it's another to actually write a compiler that implements specific op sequences and realize most material online, especially blogs, is not useful for practical application. So I'm a believer in finding things out myself (disassembly, etc. to see what real-world apps do). This is one case where I need some outside info. I notice compilers will usually start an instruction at an odd byte address immediately after whatever previous instruction sequence there was. So the compiler is not taking any special care in most cases. I see "nop" here or there, but usually it seems nop is used sparingly, if at all. How critical is opcode alignment? Can you provide references for cases that I can actually use for implementation? Thanks.

    Read the article

  • Programmatically creating vector arrows in KML

    - by mettadore
    Does anyone have any practical examples of programmatically drawing icons as vectors in KML? Specifically, I have data with a magnitude and an azimuth at given coordinates, and I would like to have icons (or another graphical element) generated based on these values. Some thoughts on how I might approach it: Image directory (a brute-force way): Make an image directory of 360 different image files (probably batch rotate a single image), each pointing in a corresponding azimuth. I've seen things like "Excel to KML," but am looking for code that I can use within a program, rather than a web utility. Issue: The arrow does not convey magnitude, so that would have to be a label. I'd rather dynamically lengthen the arrow. Line creation in KML: Perhaps create a formula that creates a line with the origin at the coordinate points, with the length of the line proportional to the magnitude, and angled according to azimuth. There would then be two more lines, perhaps 30 degrees or so, extending from the end of the previous line to make the arrow head. Issues: Not a separate image icon, so not sure how it would work in KML. Also not sure how easy it would be to generate this algorithm. Separate image generation: Perhaps create a PHP file that uses ImageMagick or something similar to dynamically generate a .png file in a manner similar to the above, and then link to the icon using the URI "domain.tld/imagegen.php?magnitude=magvalue&azimuth=azmvalue". Issue: Still have the problem of actually writing the algorithm for image generation. So, the question: has anyone else come up with solutions for programmatic vector (rather than merely arrow) generation?
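
    As a sketch of the "line creation" idea (the scale factor and the flat-earth approximation below are assumptions for illustration): compute the arrow tip from the origin, azimuth and magnitude, then emit the shaft as a KML LineString; the two arrow-head segments can be generated the same way from the tip, at the reverse azimuth plus or minus roughly 30 degrees.

        using System;
        using System.Globalization;

        public static class ArrowKml
        {
            // Builds a KML LineString from (lat, lon) toward azimuthDeg (degrees clockwise from north),
            // with a length in degrees proportional to magnitude. The scale factor is arbitrary here.
            public static string Shaft(double lat, double lon, double azimuthDeg, double magnitude,
                                       double degreesPerUnit = 0.01)
            {
                double length = magnitude * degreesPerUnit;
                double azRad = azimuthDeg * Math.PI / 180.0;

                double tipLat = lat + length * Math.Cos(azRad);
                double tipLon = lon + length * Math.Sin(azRad) / Math.Cos(lat * Math.PI / 180.0);

                // KML coordinates are longitude,latitude[,altitude]
                return string.Format(CultureInfo.InvariantCulture,
                    "<Placemark><LineString><coordinates>{0},{1},0 {2},{3},0</coordinates></LineString></Placemark>",
                    lon, lat, tipLon, tipLat);
            }
        }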

    Read the article

  • Worse is better. Is there an example?

    - by J.F. Sebastian
    Is there a widely-used algorithm that has time complexity worse than that of another known algorithm but is a better choice in all practical situations (worse complexity but better otherwise)? An acceptable answer might be in the form: There are algorithms A and B that have O(N**2) and O(N) time complexity correspondingly, but B has such a big constant that it has no advantages over A for inputs smaller than the number of atoms in the Universe. Example highlights from the answers: Simplex algorithm -- worst-case is exponential time -- vs. known polynomial-time algorithms for convex optimization problems. A naive median-of-medians algorithm -- worst-case O(N**2) -- vs. a known O(N) algorithm. Backtracking regex engines -- worst-case exponential -- vs. O(N) Thompson NFA-based engines. All these examples exploit worst-case vs. average scenarios. Are there examples that do not rely on the difference between the worst-case and the average-case scenario? Related: The Rise of "Worse is Better". (For the purpose of this question the "Worse is Better" phrase is used in a narrower (namely, algorithmic time-complexity) sense than in the article.) Python's Design Philosophy: The ABC group strived for perfection. For example, they used tree-based data structure algorithms that were proven to be optimal for asymptotically large collections (but were not so great for small collections). This example would be the answer if there were no computers capable of storing these large collections (in other words, large is not large enough in this case). The Coppersmith–Winograd algorithm for square matrix multiplication is a good example (it is the fastest (2008) but it is inferior to worse algorithms). Any others? From the Wikipedia article: "It is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware (Robinson 2005)."

    Read the article

  • method for specialized pathfinding?

    - by rlbond
    I am working on a roguelike in my (very little) free time. Each level will basically be a few rectangular rooms connected together by paths. I want the paths between rooms to be natural-looking and windy, however. For example, I would not consider the following natural-looking: B X X X XX XX XX AXX I really want something more like this: B X XXXX X X X X AXXXXXXXX These paths must satisfy a few properties: I must be able to specify an area inside which they are bounded, I must be able to parameterize how windy and lengthy they are, The lines should not look like they started at one path and ended at the other. For example, the first example above looks as if it started at A and ended at B, because it basically changed directions repeatedly until it lined up with B and then just went straight there. I was hoping to use A*, but honestly I have no idea what my heuristic would be. I have also considered using a genetic algorithm, but I don't know how practical that method might end up. My question is, what is a good way to get the results I want? Please do not just specify a method like "A*" or "Dijkstra's algorithm," because I also need help with a good heuristic.
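
    One alternative to a heuristic search worth sketching (this is a goal-biased random walk, not from the question): take a random sidestep with probability 'windiness', otherwise step toward the goal, clamping every step to the allowed rectangle. Tuning 'windiness' (kept below 1 so the walk terminates) controls how wandering and how long the corridor gets.

        using System;
        using System.Collections.Generic;

        public static class CorridorDigger
        {
            // Returns grid cells from start to goal, wandering according to 'windiness' in [0, 1).
            public static List<Tuple<int, int>> WindyPath(
                int startX, int startY, int goalX, int goalY,
                int minX, int minY, int maxX, int maxY,
                double windiness, Random rng)
            {
                var path = new List<Tuple<int, int>> { Tuple.Create(startX, startY) };
                int x = startX, y = startY;

                while (x != goalX || y != goalY)
                {
                    int dx = 0, dy = 0;
                    if (rng.NextDouble() < windiness)
                    {
                        // Wander: take a random cardinal step.
                        if (rng.Next(2) == 0) dx = rng.Next(2) == 0 ? -1 : 1;
                        else dy = rng.Next(2) == 0 ? -1 : 1;
                    }
                    else
                    {
                        // Make progress: step along one axis toward the goal.
                        if (x != goalX && (y == goalY || rng.Next(2) == 0)) dx = Math.Sign(goalX - x);
                        else dy = Math.Sign(goalY - y);
                    }

                    x = Math.Max(minX, Math.Min(maxX, x + dx));
                    y = Math.Max(minY, Math.Min(maxY, y + dy));
                    path.Add(Tuple.Create(x, y));
                }
                return path;
            }
        }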

    Read the article

  • Using UpdatePanels inside of a ListView

    - by Jim B
    Hey everyone, I'm wondering if anybody has run across something similar to this before. Some quick pseudo-code to get started: <UpdatePanel> <ContentTemplate> <ListView> <LayoutTemplate> <UpdatePanel> <ContentTemplate> <ItemPlaceholder /> </ContentTemplate> </UpdatePanel> </LayoutTemplate> <ItemTemplate> Some stuff goes here </ItemTemplate> </ListView> </ContentTemplate> </UpdatePanel> The main thing to take away from the above is that I have an update panel which contains a ListView, and then each of the ListView items is contained in its own update panel. What I'm trying to do is, when one of the ListView update panels triggers a postback, also update one of the other ListView item update panels. A practical implementation would be a quick survey that has 3 questions. We'd only ask Question #3 if the user answered "Yes" to Question #1. When the page loads, it hides Q3 because it doesn't see "Yes" for Q1. When the user clicks "Yes" to Q1, I want to refresh the Q3 update panel so it now displays. I've got it working now by refreshing the outer UpdatePanel on postback, but this seems inefficient because I don't need to re-evaluate every item, just the ones that would be affected by the prerequisite I detailed above. I've been grappling with setting up triggers, but I keep coming up empty, mainly because I can't figure out a way to set up a trigger for the Q3 update panel based on the postback triggered by Q1. Any suggestions out there? Am I barking up the wrong tree?
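
    A sketch of the kind of code-behind this usually comes down to (the control IDs, the RadioButtonList, and UpdateMode="Conditional" on the inner panels are all assumptions): from the Question 1 handler, find the panel that wraps Question 3 inside the same ListView item and refresh only it.

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class Survey : Page
        {
            // Wired to the Question 1 control inside the ListView ItemTemplate (AutoPostBack="true").
            protected void Question1_SelectedIndexChanged(object sender, EventArgs e)
            {
                var question1 = (RadioButtonList)sender;
                var item = (ListViewDataItem)question1.NamingContainer;

                // Show or hide the Question 3 content, then refresh just its UpdatePanel.
                var question3Content = (Panel)item.FindControl("pnlQuestion3");
                question3Content.Visible = question1.SelectedValue == "Yes";

                var question3Panel = (UpdatePanel)item.FindControl("upQuestion3");
                question3Panel.Update();   // requires UpdateMode="Conditional" on upQuestion3
            }
        }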

    Read the article

  • Efficiently Determine if EF 4 POCO Already in ObjectSet

    - by Eric J.
    I'm trying EF 4 with POCO's on a small project for the first time. In my Repository implementation, I want to provide a method AddOrUpdate that will add a passed-in POCO to the repository if it's new, else do nothing (as the updated POCO will be saved when SaveChanges is called). My first thought was to do this: public void AddOrUpdate(Poco p) { if (!Ctx.Pocos.Contains<Poco>(p)) { Ctx.Pocos.AddObject(p); } } However that results in a NotSupportedException as documented under Referencing Non-Scalar Variables Not Supported (bonus question: why would that not be supported?) Just removing the Contains part and always calling AddObject results in an InvalidStateException: An object with the same key already exists in the ObjectStateManager. The existing object is in the Unchanged state. An object can only be added to the ObjectStateManager again if it is in the added state. So clearly EF 4 knows somewhere that this is a duplicate based on the key. What's a clean, efficient way for the Repository to update Pocos for either a new or pre-existing object when AddOrUpdate is called so that the subsequent call to SaveChanges() will do the right thing? I did consider carrying an isNew flag on the object itself, but I'm trying to take persistence ignorance as far as practical.
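
    One sketch of making AddOrUpdate idempotent without referencing the entity inside a query: ask the ObjectStateManager whether this context is already tracking the instance (Ctx and Pocos below are the context and ObjectSet from the question). This covers the "loaded from this context or brand new" case; an entity that exists in the database but is detached would still need a key lookup first.

        using System.Data.Objects;

        public void AddOrUpdate(Poco p)
        {
            ObjectStateEntry entry;
            bool tracked = Ctx.ObjectStateManager.TryGetObjectStateEntry(p, out entry);

            if (!tracked)
            {
                // Not known to this context yet: schedule an INSERT.
                Ctx.Pocos.AddObject(p);
            }
            // If it is already tracked, SaveChanges() will pick up any property changes.
        }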

    Read the article

  • Textbox autofill not in correct position in safari

    - by jerjer
    Hello All, Has anyone experienced this weird issue in Safari? The textbox autofill is not at its correct position; please see the screenshot below. I have been searching for answers on Google for almost a day, still no luck. This is the built-in autofill feature of Safari. Here is the markup: index.php <html> <body> <div id="nav"></div> <div id="content"> <iframe style="width:100%;height:100%" src="add_user.php" frameborder="0"></iframe> </div> .... </body> </html> add_user.php <html> .... <body> <form method="post"> <h3>Add User (Admin only) <div id="msg">Please enter First and Last Name.</div> <ul> <li><label>* Email</label> <input type="text" id="email" /></li> <li><label>* First Name</label> <input type="text" id="fname" /></li> <li><label>* Last Name</label> <input type="text" id="lname" /></li> .... </ul> </form> </body> </html> I suspect that this is caused by the iframe, but it works just fine in other browsers. Also, I could not change the page design (using an iframe) right away, for practical reasons. Thanks

    Read the article

  • .NET WF4: Should it be in the middle of everything?

    - by stimpy77
    I am aware that WF4 (Windows Workflow 4.0, part of .NET 4.0) is a significant rework and redesign of WF3, where much of what made WF3 a poor technology choice has been cleaned up in WF4. For example, as far as I can tell, WF4 (Windows Workflow 4.0) activities are more or less testable with [TestMethod] and mocking. This among other things, like improved performance, has grabbed my attention about the technology again, whereas I had previously pooh-poohed WF3. I'm working on a new architecture for essentially an n-tier collaborative application (not enterprise-class, just a smallish project with potential to grow significantly) where I'm already trying to discipline myself to use IoC and, to some extent, TDD, and I'm wondering, in general terms, whether it is wiser to just hand-code workflow logic or if I should delve into learning and integrating WF4 so that WF becomes literally the controller of the entire application, i.e. the practical C in "MVC" (not ASP.NET MVC but rather the pattern). So should workflow activities in WF4 be the primary controller for a highly expandable/growable web-based collaborative application? Or am I asking entirely the wrong question? This is a vague question, I'm sure, so abstract answers are as welcome as specific ones.

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 where PKs from SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:) SELECT t.Id, t.Name, t2.Company FROM SQLDB1.table t INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId This query results in an eager spool that's 84% of the cost of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face, to no avail.) Thanks! valkyrie Edit: also can't create an indexed view, because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

< Previous Page | 44 45 46 47 48 49 50 51 52 53 54 55  | Next Page >