Search Results

Search found 20442 results on 818 pages for 'software evaluation'.


  • How to branch with TortoiseHG

    - by Michael Tiller
    I downloaded TortoiseHg 1.0 for evaluation. For the life of me, I can't figure out how to make a branch. It seems to understand branches (e.g. in its repository browser), but I just can't find a way to create one. This seems like such a fundamental capability, since one of the often-touted benefits of DVCS is lightweight branching. I Googled around and couldn't find much discussion of this topic (at least for recent versions), so I have to assume I'm missing something, right?
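    In case a command-line sketch helps frame the question: with plain Mercurial (which TortoiseHg wraps), a named branch is created implicitly, by naming it and then committing; there is no explicit "make branch" step, which may be part of why the GUI is hard to search for. This illustrates Mercurial's model, not any specific TortoiseHg menu:

      hg branch my-feature          # mark the working directory as on 'my-feature'
      hg commit -m "open branch"    # the branch comes into existence with this commit
      hg update default             # switch back to the default branch

    In TortoiseHg itself, the equivalent reportedly lives in the commit dialog (the branch-name button), though the exact location varies by version.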

    Read the article

  • Trouble with copying dictionaries and using deepcopy on an SQLAlchemy ORM object

    - by Az
    Hi there, I'm implementing a simulated annealing algorithm to optimise a given allocation of students and projects. This is the language-agnostic pseudocode from Wikipedia:

      s ← s0; e ← E(s)                              // Initial state, energy.
      sbest ← s; ebest ← e                          // Initial "best" solution.
      k ← 0                                         // Energy evaluation count.
      while k < kmax and e > emax                   // While time left & not good enough:
        snew ← neighbour(s)                         //   Pick some neighbour.
        enew ← E(snew)                              //   Compute its energy.
        if enew < ebest then                        //   Is this a new best?
          sbest ← snew; ebest ← enew                //     Save 'new neighbour' to 'best found'.
        if P(e, enew, temp(k/kmax)) > random() then //   Should we move to it?
          s ← snew; e ← enew                        //     Yes, change state.
        k ← k + 1                                   //   One more evaluation done.
      return sbest                                  // Return the best solution found.

    The following is an adaptation of the technique; my supervisor says the idea is fine in theory. First I pick an allocation (i.e. an entire dictionary of students and their allocated projects, including the ranks of the projects) from the set of randomised allocations, copy it, and pass it to my function. Call this allocation aOld (it is a dictionary); it has an associated weight, wOld, described below. The function does the following:

      1. Let this allocation, aOld, be the best_node.
      2. From all the students, pick a random number of students and put them in a list.
      3. Strip (deallocate) them of their projects, and reflect the changes for the projects (the allocated flag is now False) and the lecturers (free up slots if one or more of their projects are no longer allocated).
      4. Shuffle that list.
      5. Try assigning (reallocating) projects to everyone in the list again.
      6. Calculate the weight (add up the ranks: rank 1 = 1, rank 2 = 2, ... and no project = 101).

    For this new allocation aNew, if its weight wNew is smaller than wOld, then aNew is the best_node (as defined by the simulated annealing algorithm above); apply the algorithm to aNew and continue. If wOld < wNew, apply the algorithm to aOld again and continue. The allocations/data points are expressed as "nodes", such that a node = (weight, allocation_dict, projects_dict, lecturers_dict).

    Right now I can only run this once, but I'll need to iterate N times (kmax in the Wikipedia snippet) and always keep both the previous node and the best_node. So that I don't modify my original dictionaries (which I might want to reset to), I made a shallow copy of them. From what I've read in the docs, that only copies the references, and since my dictionaries contain objects, changing the copied dictionary ends up changing the objects anyway. So I tried copy.deepcopy(). These dictionaries refer to objects that have been mapped with SQLAlchemy.

    Questions: I've been given some solutions to these problems, but due to my utter greenness with Python they all sound rather cryptic to me. deepcopy isn't playing nicely with SQLAlchemy; I've been told that deepcopy on ORM objects probably has issues that prevent it from working as you'd expect, and that I'd be better off "building copy constructors, i.e. def copy(self): return FooBar(....)". Can someone please explain what that means? I checked and found that deepcopy has issues because SQLAlchemy places extra information on your objects (an _sa_instance_state attribute) that I don't want in the copy but that the object needs.
    I've also been told: "There are ways to manually blow away the old _sa_instance_state and put a new one on the object, but the most straightforward is to make a new object with __init__() and set up the attributes that are significant, instead of doing a full deep copy." What exactly does that mean? Do I create a new, unmapped class similar to the old, mapped one? An alternative suggestion was to "implement __deepcopy__() on your objects and ensure that a new _sa_instance_state is set up; there are functions in sqlalchemy.orm.attributes which can help with that." Once again, this is beyond me, so could someone kindly explain what it means? A more general question: given the above, are there any suggestions on how I can maintain the state of the best_node (which must persist through my while loop) and the previous_node, if my actual objects (referenced by the dictionaries, and therefore the nodes) are changing due to the deallocation/reallocation taking place? That is, without using copy?
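    For what it's worth, here is a minimal sketch of what the "copy constructor" advice usually means. The class and column names (Student, name, rank) are hypothetical stand-ins for the asker's mapped classes; the point is that calling __init__() gives the new object its own fresh _sa_instance_state, so no SQLAlchemy bookkeeping is shared with the original:

      from sqlalchemy import Column, Integer, String
      from sqlalchemy.ext.declarative import declarative_base

      Base = declarative_base()

      class Student(Base):
          __tablename__ = 'students'
          id = Column(Integer, primary_key=True)
          name = Column(String)
          rank = Column(Integer)

          def copy(self):
              # A brand-new, unsaved object: only the domain attributes are
              # carried over, never the ORM state (and not the primary key).
              return Student(name=self.name, rank=self.rank)

    A whole allocation could then be duplicated with something like new_alloc = {key: student.copy() for key, student in aOld.items()}, which keeps best_node stable while the working copies mutate.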

    Read the article

  • Block All Keyboard Input in a Linux Application (Using Qt or Mono)

    - by Evans
    Hi, I'm working on an online quiz client. We use a dedicated, custom-made Linux distro which contains only the quiz client software, along with text editors and other utilities. Once the user has started the quiz, I want to prevent him/her from minimizing the window, closing it, or switching to the desktop or other windows. The quizzes can be attempted using only the mouse, so I need the keyboard to be completely disabled for the duration of the quiz. How could I do this using Qt or Mono? I'm ready to use any low-level libraries/drivers if required. Thanks, Evans
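    One in-application approach, sketched here with PyQt (PyQt5 assumed; the C++ Qt API is analogous): install an application-wide event filter that swallows every key event while the quiz runs. Note this only blocks keys delivered to the application; window-manager shortcuts such as Alt+Tab have to be disabled at the WM level separately:

      import sys
      from PyQt5.QtCore import QObject, QEvent
      from PyQt5.QtWidgets import QApplication, QLabel

      class KeyBlocker(QObject):
          def eventFilter(self, obj, event):
              # Returning True marks the event as handled, so it is never
              # delivered to any widget in the application.
              if event.type() in (QEvent.KeyPress, QEvent.KeyRelease):
                  return True
              return super().eventFilter(obj, event)

      app = QApplication(sys.argv)
      blocker = KeyBlocker()
      app.installEventFilter(blocker)   # filters events for the whole app

      w = QLabel("Quiz running - keyboard disabled")
      w.showFullScreen()                # full screen also hides the desktop
      sys.exit(app.exec_())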

    Read the article

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious about solutions for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible inventory item types are as follows:

      Hardware
        PC
          Desktop
          Server
          Laptop
        Accessory
          Input (keyboards, scanners, etc.)
          Output (monitors, printers, etc.)
          Storage (USB sticks, tape drives, etc.)
          Communication (network cards, routers, etc.)
      Software

    What recommendations are there for handling enums in a situation like this? Are enums even the right solution? I don't really want a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype, etc.). Nor do I want to over-complicate my data model with each subtype being inherited, even though no additional properties or methods are involved (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!
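    One simple pattern is a single flat enum whose values carry a parent pointer, sketched below. The question is about C#/EF, but the idea translates directly (a C# enum plus a static parent lookup, or a self-referencing table with one foreign key); Python is used here only for brevity, and all names are illustrative:

      from enum import Enum

      class ItemType(Enum):
          # value = (id, parent name); parent None marks a root
          HARDWARE      = (1, None)
          PC            = (2, 'HARDWARE')
          DESKTOP       = (3, 'PC')
          SERVER        = (4, 'PC')
          LAPTOP        = (5, 'PC')
          ACCESSORY     = (6, 'HARDWARE')
          INPUT         = (7, 'ACCESSORY')
          OUTPUT        = (8, 'ACCESSORY')
          STORAGE       = (9, 'ACCESSORY')
          COMMUNICATION = (10, 'ACCESSORY')
          SOFTWARE      = (11, None)

          @property
          def parent(self):
              name = self.value[1]
              return ItemType[name] if name else None

          def is_a(self, other):
              # Walk up the parent chain: DESKTOP.is_a(HARDWARE) -> True
              node = self
              while node is not None:
                  if node is other:
                      return True
                  node = node.parent
              return False

    The database then stores one integer per item and the hierarchy lives in code; that is reasonable for a fixed taxonomy, less so if users must edit the tree at runtime.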

    Read the article

  • Storing Templates and Object-Oriented vs Relational Databases

    - by syrion
    I'm designing some custom blog software, and have run into a conundrum regarding database design. The software requires that there be multiple content types, each of which will require different entry forms and presentation templates. My initial instinct is to create these content types as objects, then serialize them and store them in the database as JSON or YAML, with the entry forms and templates as simple strings attached to the "contentTypes" table. This seems cumbersome, however. Are there established best practices for dealing with this design? Is this a use case where I should consider an object database? If I should be using an object database, which should I consider? I am currently working in Python and would prefer a capable Python library, but can move to Java if need be.
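    As a concrete illustration of the serialize-and-store instinct described above (standard library only; all field names are hypothetical):

      import json, sqlite3

      content_type = {
          "name": "photo_post",
          "fields": ["title", "image_url", "caption"],
          "entry_form": "<form>...</form>",            # stored as plain strings
          "template": "<article>{caption}</article>",
      }

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE content_types (name TEXT PRIMARY KEY, spec TEXT)")
      conn.execute("INSERT INTO content_types VALUES (?, ?)",
                   (content_type["name"], json.dumps(content_type)))

      # Round-trip: the blob deserializes back into a usable structure.
      (spec,) = conn.execute("SELECT spec FROM content_types WHERE name = ?",
                             ("photo_post",)).fetchone()
      restored = json.loads(spec)

    The open question is exactly the one asked: this works, but every query into the blob's contents has to happen in application code, which is where a document/object database starts to look attractive.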

    Read the article

  • Evaluating a function at a particular value in parallel

    - by Gaurav Kalra
    Hi. The question may seem vague, but let me explain. Suppose we have a function f(x, y, z, ...) and we need to find its value at the point (x1, y1, z1, ...). The most trivial approach is to just substitute (x1, y1, z1, ...) for (x, y, z, ...). Now suppose the function takes a long time to evaluate and I want to parallelize that evaluation. Obviously it will depend on the nature of the function, too. So my question is: what are the constraints I have to look for while "thinking about" parallelizing f(x, y, z, ...)? If possible, please share links to study.
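    A toy sketch of the central constraint: the function must decompose into independent sub-expressions with no data dependencies between them (g and h below are hypothetical expensive terms):

      from multiprocessing import Pool

      def g(x):
          return x ** 2            # imagine this is expensive

      def h(y, z):
          return y * z             # ...and this too

      def f_parallel(x, y, z):
          # f(x, y, z) = g(x) + h(y, z): neither term needs the other's
          # result, so they can run in separate processes.
          with Pool(processes=2) as pool:
              rg = pool.apply_async(g, (x,))
              rh = pool.apply_async(h, (y, z))
              return rg.get() + rh.get()

      if __name__ == "__main__":
          print(f_parallel(3, 4, 5))   # 9 + 20 = 29

    If instead the evaluation is a chain where each step needs the previous result, there is nothing to parallelize at this level, and the parallelism has to come from inside the individual steps.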

    Read the article

  • Best compiled language for Mac OS X and Linux compatibility

    - by corydoras
    We need to write some software that will compile and run on both a Mac OS X server and Ubuntu. We would love to use Objective-C with all of its Cocoa goodness; however, the GNUstep implementations of the parts we are using are broken (in the latest Ubuntu package, anyway). In light of this, should we use C++ (I would really rather not), C, or something else we have not thought of? It is a server/back-end process that is very resource-intensive; Java and other interpreted versions of this software perform much worse than the Objective-C proof of concept we have written, hence we now wish to rewrite it in a "compiled" [1] language. (NB: Some people might consider this subjective; however, at the end of the day we do need to get a job done, and there has to be a reasonably appropriate correct answer here.)
    [1] Compiled to native CPU instructions, not compiled into "byte codes" that then have to be run by an interpreter.

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a 1997 Gartner study: that there were around 200 billion lines of Cobol in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand whether this statement has any truth to it or whether it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes the "200 billion lines" figure is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times the 80% refers merely to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems, or anything else where Cobol is practically non-existent). In the following I assume that the count does not include double-counting of multiple installations of the same software (since that is cheating!). In the run-up to the Y2K problem it was noted that a lot of Cobol code was already 20 to 30 years old, which would mean it was written in the late 1960s and the 1970s. At that time the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to that sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes were actually sold? I have not found any numbers for that period; the only numbers I have found are for the year 2000, again from Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand machines per year still only gives a few tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs, in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture, but since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). Counting in the operating system and the program's data, how many lines of code would have fit into half a megabyte of main memory?

    Question 3: Was there no standard software? Did every single machine sold run a unique, hand-coded system without any packaged software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... wouldn't that violate one of the claims we started from?), we get roughly O(50,000 l.o.c. per machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers would it take to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need roughly 55 million man-years to achieve this!
    Over a time frame of 20 to 30 years, that would mean two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as China has today, wouldn't it?

    Question 5: What about the competition? So far I have come up with two points here. 1) IBM had its own programming language, PL/I. Above I assumed that the majority of the code was written exclusively in Cobol; however, all other things being equal, I wonder whether IBM marketing would really have let Cobol push its own language off the market on IBM machines. Was there really no relevant PL/I code base? 2) Sometimes (also on this board, in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD funded its own language in order to increase cost effectiveness and reduce the proliferation of programming languages, which led to the adoption of Ada. Would they really have worried about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it.
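    The two back-of-the-envelope calculations above, spelled out (all inputs are the question's own guesses, not established figures):

      machines = 20_000          # optimistic IBM/370 install base
      loc_per_machine = 50_000   # generous, given ~0.5 MB of memory
      print(f"{machines * loc_per_machine:,}")   # 1,000,000,000

      claimed = 200_000_000_000  # the Gartner figure
      loc_per_day = 10
      man_years = claimed / loc_per_day / 365
      print(f"{man_years:,.0f} man-years")       # ~54,794,521

    So the claimed figure exceeds the machine-capacity estimate by a factor of about 200, and the labour estimate lands on the ~55 million man-years quoted above.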

    Read the article

  • Help with Event-Based Components

    - by Joel in Gö
    I have started to look at Event-Based Components (EBCs), a programming approach currently being explored in particular by Ralf Westphal in Germany. This is a really interesting and promising way to architect a software solution, and it comes close to the age-old idea of being able to stick software components together like Lego :) A good starting point is the Channel 9 video here, and there is a fair bit of discussion (in German) in the Google Group on EBCs. I am, however, looking for more concrete examples: while the ideas look great, I am finding it hard to translate them into real code for anything more than a trivial project. Does anyone know of any good code examples (preferably in C#), or any more good sites where EBCs are discussed?
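    Not a C# resource, but a minimal sketch of the core EBC wiring idea may help make the concept concrete (Python used for brevity; the component and pin names are invented). Each component exposes an input method and an output callback, and only a separate "board" connects them, so the components never reference each other:

      class Tokenizer:
          def __init__(self):
              self.result = lambda tokens: None   # output "pin", unwired by default

          def process(self, text):                # input "pin"
              self.result(text.split())

      class Counter:
          def process(self, tokens):              # input "pin"
              print(len(tokens), "tokens")

      # The "board": wiring outputs to inputs, like clicking Lego bricks together.
      tokenizer, counter = Tokenizer(), Counter()
      tokenizer.result = counter.process
      tokenizer.process("event based components in action")   # prints "5 tokens"

    In C# the output pin would typically be an event or an Action<T> delegate, with the board doing the equivalent of tokenizer.Result += counter.Process.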

    Read the article

  • How to study programming with C language

    - by gurugio
    I have been using only C for 5 years, so I am confident that I know the C grammar, but I have no idea how to advance my programming skills. There are many books for modern languages (such as C++ and Java) that teach programming skills like refactoring, design patterns and software architecture, but none of them is written using C. The authors say their books are not language-dependent, but I don't think that's true. How can I advance my programming skills? Do I have to study a modern language and read those books? Are there books about software design or programming technique written using C?

    Read the article

  • How to evaluate a custom math expression in Python

    - by taynaron
    I'm writing a custom dice-rolling parser (snicker if you must) in Python. Basically, I want to use standard math evaluation but add the 'd' operator, where x d y means "roll x dice with y sides each and sum the results":

      # x d y
      total = 0
      for _ in range(x):
          total += randint(1, y)
      return total

    So that, for example, 1d6+2d6+2d6-72+4d100 = (5)+(1+1)+(6+2)-72+(5+39+38+59) = 84. I was using a regex to replace each 'd' term with its sum and then calling eval, but my regex fell apart when dealing with parentheses on either side. Is there a faster way to go about this than implementing my own recursive parser? Perhaps adding an operator to eval?
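    For reference, one hedged sketch of the replace-then-eval approach that survives parentheses: use re.sub with a callback so each NdM term is rolled and replaced by a plain number before eval ever sees the expression; eval then handles the parentheses and arithmetic itself. Not safe for untrusted input, since eval executes arbitrary expressions:

      import random
      import re

      def roll(match):
          x, y = int(match.group(1)), int(match.group(2))
          return str(sum(random.randint(1, y) for _ in range(x)))

      def evaluate(expr):
          expanded = re.sub(r'(\d+)d(\d+)', roll, expr)   # "2d6" -> e.g. "7"
          return eval(expanded)

      print(evaluate("1d6+2d6+2d6-72+4d100"))
      print(evaluate("(1d6+2)*3"))   # parentheses are left for eval to handle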

    Read the article

  • What's the reason both Image and Bitmap classes don't implement a custom equality/hashcode logic?

    - by devoured elysium
    From the MSDN documentation, it seems that neither GetHashCode() nor Equals() has been overridden in Bitmap, nor in Image. So both classes use Object's versions, which just compare references. I wasn't entirely convinced, so I fired up Reflector to check, and MSDN is correct on this. So, is there any special reason why the MS guys wouldn't implement comparison logic, at least for the Bitmap class? I find it somewhat acceptable for Image, as it is an abstract class, but not so much for Bitmap. I can see that in a lot of situations calculating the hash code would be an expensive operation, but it would be all right if it used some kind of lazy evaluation (storing the computed hash code in a variable, so it wouldn't have to be calculated again later). When I want to compare two bitmaps, will I have to resort to running over the whole picture, comparing each of its pixels? Thanks
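    The lazy-evaluation idea from the question, sketched generically (Python for brevity; the same pattern works in C#). It also shows the catch: because the pixel data is mutable, every mutation has to invalidate the cache, which is one commonly cited reason why mutable types such as Bitmap avoid content-based Equals/GetHashCode:

      import hashlib

      class ImageData:
          def __init__(self, pixels: bytes):
              self._pixels = bytearray(pixels)
              self._hash = None              # cached; None means "not computed yet"

          def set_pixel(self, i, value):
              self._pixels[i] = value
              self._hash = None              # mutation invalidates the cache

          def content_hash(self):
              if self._hash is None:         # compute lazily, at most once per change
                  self._hash = hashlib.sha1(bytes(self._pixels)).hexdigest()
              return self._hash

      a = ImageData(b"\x00\x01\x02")
      b = ImageData(b"\x00\x01\x02")
      print(a.content_hash() == b.content_hash())   # True: equal content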

    Read the article

  • What is the best euphemism for a non-developer?

    - by Edward Tanguay
    I'm writing a description for a piece of software that targets the user who is "not technically minded", i.e. a person who uses browser/office/email and has a low tolerance for anything technical; he just "wants it to work" without being involved in any of the technical details. What is the best non-disparaging term you have seen to describe this kind of user?

      - non-technical user
      - low-tech user
      - office user
      - normal user
      - technically challenged user
      - non-developer
      - computer joe

    Surely there is some official, politically correct retronym for this kind of user that the press and software marketing use.

    Read the article

  • How to schedule daily backup in MSSQL Server 2008 Web Edition

    - by Xenon
    In MSSQL Management Studio I created a maintenance plan, but it won't work. The error is:

      "Message Executed as user: LITESPELL-19C34\Administrator. Microsoft (R) SQL Server Execute Package Utility Version 10.0.1600.22 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. The SQL Server Execute Package Utility requires Integration Services to be installed by one of these editions of SQL Server 2008: Standard, Enterprise, Developer, or Evaluation. To install Integration Services, run SQL Server Setup and select Integration Services. The package execution failed. The step failed."

    But on the Microsoft page http://www.microsoft.com/sqlserver/2008/en/us/web.aspx, in the "Automate tasks and policies" section, it is written that backups can be scheduled in this edition. How?
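    One workaround often used when Integration Services is unavailable (an assumption about your setup, not something stated in the error text): skip maintenance plans entirely and schedule a plain T-SQL backup with sqlcmd from Windows Task Scheduler. Server, database and path names below are placeholders:

      rem nightly_backup.cmd
      sqlcmd -S . -E -Q "BACKUP DATABASE [MyDb] TO DISK = 'C:\Backups\MyDb.bak' WITH INIT"

      rem register it to run daily at 02:00
      schtasks /create /sc daily /st 02:00 /tn "MyDb nightly backup" /tr "C:\Scripts\nightly_backup.cmd"

    The BACKUP DATABASE statement itself is available in Web Edition; it is only the SSIS-based maintenance-plan machinery that the error message is complaining about.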

    Read the article

  • Startup in Windows 7

    - by iira
    Hi, I am trying to add my program to the Windows 7 startup, but it doesn't work. My program has an embedded UAC manifest. My current approach is to add a string value at HKCU\...\Run. I found a manual solution for Vista at http://social.technet.microsoft.com/Forums/en/w7itprosecurity/thread/81c3c1f2-0169-493a-8f87-d300ea708ecf:

      1. Click Start, right-click on Computer and choose "Manage".
      2. Click "Task Scheduler" on the left panel.
      3. Click "Create Task" on the right panel.
      4. Type a name for the task.
      5. Check "Run with highest privileges".
      6. Click the Actions tab.
      7. Click "New…".
      8. Browse to the program in the "Program/script" box. Click OK.
      9. On the desktop, right-click, choose New and click "Shortcut".
      10. In the box type: schtasks.exe /run /tn TaskName, where TaskName is the name of the task you entered on the Basics tab, and click Next.
      11. Type a name for the shortcut and click Finish.

    Additionally, you need to run the saved scheduled-task shortcut instead of the application shortcut to avoid the UAC prompt. At startup the system will run the program via the original shortcut, so you need to change the startup entry to run the saved task instead:

      1. Open Regedit.
      2. Find the entry for the startup item in the Registry. It will be stored in one of the following branches:
         HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run
         HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Run
         HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
      3. Double-click the correct key and change the path to the saved scheduled task you created.

    Is there any free code to add an item to the Task Scheduler with the privileges option set? I haven't found a free one on torry.net. Thanks a lot.
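    The whole GUI procedure above can also be done with one schtasks.exe call, which is easy to script from any language (shown here via Python's subprocess purely for illustration; /sc onlogon and /rl highest are standard schtasks flags on Windows 7, and the task and program names are placeholders):

      import subprocess

      subprocess.run([
          "schtasks.exe", "/create",
          "/tn", "MyApp",                               # task name
          "/tr", r'"C:\Program Files\MyApp\MyApp.exe"', # program to start
          "/sc", "onlogon",                             # trigger: at user logon
          "/rl", "highest",                             # "Run with highest privileges"
          "/f",                                         # overwrite an existing task
      ], check=True)

    After that, the registry Run entry can simply be schtasks.exe /run /tn MyApp, exactly as in step 10 above.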

    Read the article

  • Taking Responsibilities too early - danger or boon?

    - by narresh
    I have just six months' experience in the software industry. Because of the recession, a large number of employees at our company have been asked to leave, and the impact falls on us inexperienced newcomers. The problem is:

      1. We have to interact with the client directly to gather requirements.
      2. We have to design all the HLDs and LLDs, use cases and DB diagrams, to a standard that satisfies customers who expect the industry's best.
      3. In this do-or-die situation (the client gives us narrow deadlines), we spend most of our time interacting with the client; only a little time is left for development and testing, and even working 24x7 won't let us finish our tasks.
      4. If a task is not finished within the expected time frame, that puts a poor performance rating on our profile.

    There are two options flashing in our minds: (1) quit the job and do something else, or (2) face the challenge. If the option is to face the challenge, what tools should we adopt to automate requirements gathering, testing, DB diagramming, code review and the rest? (We are working with ASP.NET 3.5 and SQL Server 2005.)

    Read the article

  • What are good ways to guarantee business continuity with a SaaS product?

    - by tommyvdz
    For my Bachelor Thesis I am researching how SaaS providers can arrange some sort of business continuity guarantee. You probably know of source code escrow arrangements for 'shrink-wrapped' software: they give customers access to the source code and all applicable documentation whenever the software supplier gets into (financial) trouble. This clearly does not work for SaaS, because customers have no use for just the source code, and they probably cannot afford to be locked out of their CRM system for a couple of weeks because the SaaS provider went bankrupt. I am currently researching different methods of solving this problem. Do you know of good, practical solutions, or of companies that already offer one? Thanks!

    Read the article

  • gdb: SIGTRAP on std::string::c_str() call

    - by sheepsimulator
    So I've been trying to use gdb to print the value of a string member by calling:

      (gdb) print <member variable name>.c_str()

    But every time I do so, I get this:

      Program received signal SIGTRAP, Trace/breakpoint trap.
      <some address> in std::string::c_str() from /usr/lib/libstdc++.so.6
      GDB remains in the frame where the signal was received.
      To change this behavior use "set unwindonsignal on"
      Evaluation of the expression containing the function (std::string::c_str() const) will be abandoned.

    Two questions: 1. Why/how is the standard library raising SIGTRAP? I checked basic_string.h, and c_str() is defined as:

      const _CharT* c_str() const { return _M_data(); }

    I don't see any SIGTRAP being raised here. 2. Is there a way to get around this SIGTRAP? How can I read the text value of the std::string (without some crazy extension library) in gdb?
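    Two things commonly suggested in this situation (hedged: the second relies on the internal layout of GNU libstdc++'s string, so it is toolchain-specific): follow the hint in the message itself, and/or print the string's internal character pointer directly instead of calling a function in the inferior:

      (gdb) set unwindonsignal on             # per the message: unwind on signal
      (gdb) print myvar.c_str()               # retry the call
      (gdb) print myvar._M_dataplus._M_p      # or peek at libstdc++'s buffer directly

    The last line reads the data without executing any code in the debugged process, which sidesteps the SIGTRAP entirely.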

    Read the article

  • Code equivalence between Javascript and PHP

    - by xdevel2000
    I'm trying to learn PHP, and I wish to know whether there are equivalent function constructs. In JS I can do:

      var t = function() {};
      t();

      myObj.DoStuff = function() {};   // here I add a method at run-time
      myObj.DoStuff();
      myObj["DoStuff"]();

      var j = myObj.DoStuff;
      j();

    and other such things with function evaluation. In JS, objects are associative arrays, so every property is accessible with the subscript notation. Can I add a method at run-time to a class template in PHP, so that subsequent object instances have it? In JS I can do that via the function's prototype property.

    Read the article

  • Developer oriented hardware benchmarks?

    - by Promit
    Perhaps I'm looking in the wrong places, but every hardware benchmark I've found, for nearly any component, is oriented towards gamers and/or workstations (video editing, etc.). Is anyone doing benchmarks that are relevant to software developers? For example, take SSDs: I don't care how fast Crysis loads off an SSD; that is completely worthless information. What I want to know is which drive yields the quickest build times. What about IntelliSense and refactoring operations? Which RAID configuration has the biggest benefit? I could probably come up with more examples, but you get the point. Long story short: where are the benchmarks that tell me which hardware will be most effective in helping me be a productive software developer?

    Read the article

  • A scheme for expiring downloaded content?

    - by Chad Johnson
    I am going to offer a web API service that allows users to download and "rent" content for a monthly subscription fee. The API will either be open to everyone or possibly just to select parties (not sure yet). Each developer must agree to a license and receives a personal developer key; each software application has its own key as well. End-users then download the software, which interacts with my service's API, and each user has a key for each application too (probably using OAuth). Content is cached on first download and accessible offline via just the third-party application that cached it. If a user cancels their subscription, I plan on doing the following:

      1. Deactivate the user's OAuth key for all applications.
      2. Block the user's account from downloading new content via the API (and consequently via any software that uses the API).

    Now, the big question: how do I make content expire when a user cancels their subscription? If they cancel, they should no longer have access to the content. Here are ideas I've thought of (some are half-solutions, not yet fully fleshed out):

      1. Require applications to encrypt downloaded content using the user's OAuth key, making it available only to the application. This will prevent most users from going to the cache directory and just copying and keeping files.
      2. Update the user's key once a month, forcing content to be re-cached monthly. Users could then access content for up to a month after cancelling their subscription.
      3. Require applications to "phone home" to the service periodically and check whether the user's subscription has terminated; if so, the API developer license requires the application to expire its cache. If an application is found not to comply, its keys (and possibly all of that developer's keys) are permanently deactivated.

    One major worry is that some applications may blatantly ignore the constraints of the license. Is it generally acceptable to rely on applications abiding by licensing constraints, or is that a bad idea? Any other ideas? Maybe a way to make content auto-expire after x days? Something else? I'm open to out-of-the-box ideas.
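    A hedged sketch of idea 1 above, assuming the cryptography package's Fernet (any authenticated symmetric cipher would do; the key derivation here is deliberately simplified):

      import base64, hashlib
      from cryptography.fernet import Fernet

      def key_for(oauth_secret: str) -> bytes:
          # Fernet wants a 32-byte url-safe base64 key; derive one from the
          # user's OAuth secret so rotating the secret invalidates the cache.
          digest = hashlib.sha256(oauth_secret.encode()).digest()
          return base64.urlsafe_b64encode(digest)

      cipher = Fernet(key_for("users-oauth-secret"))
      blob = cipher.encrypt(b"cached media bytes")   # what lands in the cache dir
      assert cipher.decrypt(blob) == b"cached media bytes"

    Copying the blob out of the cache directory is then useless without the key, and rotating the key (idea 2) makes old blobs unreadable even inside the application.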

    Read the article

  • Good memory profiling, leak and error detection for Windows

    - by Fernando
    I'm currently looking for a good memory-profiling / leak-detection tool for Windows. A few years ago I used NuMega's BoundsChecker, which was VERY good. It has since been sold to Compuware, which apparently sold it again to some other company. Trying to evaluate a demo of the current version has so far been very frustrating, in the best "enterprisey" tradition: (a) no advertised prices on their website (Great Red Flashing Lights of Warning); (b) the contact form asked for the number of employees and other private information; (c) no response to my emails asking for an evaluation copy and a price. I had to conclude that BoundsChecker is now one of "those" products. Y'know, the type where you innocently call, and tomorrow three men in black suits turn up at your building wanting to talk about "partnerships" while not-so-secretly gauging the size of your company, and therefore how much they can get away with charging you. SO, rant aside: can anyone recommend an excellent memory-checking/leak-detection tool, how much it costs, and where to buy it?

    Read the article
