Search Results

Search found 18214 results on 729 pages for 'computer industry'.

Page 592 of 729

  • How do I compute the approximate entropy of a bit string?

    - by dreeves
    Is there a standard way to do this? Googling -- "approximate entropy" bits -- uncovers multiple academic papers, but I'd like to just find a chunk of pseudocode defining the approximate entropy for a given bit string of arbitrary length. (In case this is easier said than done and it depends on the application: my application involves 16,320 bits of encrypted data (ciphertext), but encrypted as a puzzle and not meant to be impossible to crack. I thought I'd first check the entropy, but couldn't easily find a good definition of it. So it seemed like a question that ought to be on StackOverflow! Ideas for where to begin deciphering 16k random-seeming bits are also welcome; a minimal sketch of one simple estimate appears below this item.) See also this related question: http://stackoverflow.com/questions/510412/what-is-the-computer-science-definition-of-entropy

    Read the article
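
    A minimal sketch of one way to estimate this in C# (assuming the goal is a rough randomness check rather than the formal ApEn statistic from the literature; all names here are illustrative): the zero-order estimate only looks at 0/1 frequencies, while the block-based estimate also catches repeated byte-sized patterns.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class BitEntropy
        {
            // Zero-order estimate: Shannon entropy of the 0/1 frequencies, in bits per bit.
            public static double ShannonEntropyPerBit(bool[] bits)
            {
                if (bits.Length == 0) return 0.0;
                double p1 = bits.Count(b => b) / (double)bits.Length;
                return -(PLogP(p1) + PLogP(1.0 - p1));
            }

            // Stronger estimate: entropy of the distribution of k-bit blocks, normalised
            // to bits per bit (k = 8 treats the string as bytes). Values near 1.0 mean
            // the data looks close to random at that block size.
            public static double BlockEntropyPerBit(bool[] bits, int k)
            {
                var counts = new Dictionary<string, int>();
                for (int i = 0; i + k <= bits.Length; i += k)
                {
                    string block = new string(bits.Skip(i).Take(k).Select(b => b ? '1' : '0').ToArray());
                    counts[block] = counts.ContainsKey(block) ? counts[block] + 1 : 1;
                }
                double total = counts.Values.Sum();
                return -counts.Values.Sum(c => PLogP(c / total)) / k;
            }

            private static double PLogP(double p)
            {
                return p > 0 ? p * Math.Log(p, 2) : 0.0;
            }
        }

    For 16,320 bits, k = 8 gives 2,040 byte-sized samples, enough for a rough first look but too few to pin down the full 256-symbol distribution precisely.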

  • connect to mssql from batch script

    - by JPro
    Hi, I am supplying a batch script to create a DSN connection on the user's computer before they start to use my application. I am using this in a .bat file: ODBCConf ConfigSysDSN "SQL Server" "DSN=CONNAME|SERVER=PCNAME\INSTANCENAME But I want to make sure they will be able to connect to the database, considering that the proper drivers may not be installed on their systems. So, is there any way to check the connection from the same batch file and inform the user if it is unable to connect to the database? Thanks.

    Read the article

  • Can someone explain this "endian-ness" function for me?

    - by Mike
    Write a program to determine whether a computer is big-endian or little-endian. bool endianness() { int i = 1; char *ptr; ptr = (char*) &i; return (*ptr); } So I have the above function. I don't really get it. ptr = (char*) &i, which I think means a pointer to a character at the address where i is sitting, so if an int is 4 bytes, say ABCD, are we talking about A or D when you cast to char*? And why? Would someone please explain this in more detail? Thanks. So specifically, for ptr = (char*) &i; when you cast it to char*, what part of &i do I get? (A short C# analogue of the same check appears below this item.)

    Read the article
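
    For comparison only (a sketch, not part of the original question): the char* cast makes ptr point at the lowest-addressed byte of i, so *ptr is 1 on a little-endian machine and 0 on a big-endian one. In C#, BitConverter exposes the same in-memory byte order directly.

        using System;

        class EndianCheck
        {
            static void Main()
            {
                // GetBytes returns the int 1 laid out in the machine's native byte order.
                byte[] bytes = BitConverter.GetBytes(1);
                bool littleEndian = bytes[0] == 1;   // does the first (lowest) byte hold the least-significant byte?
                Console.WriteLine("Little-endian: {0}", littleEndian);
                Console.WriteLine("BitConverter.IsLittleEndian: {0}", BitConverter.IsLittleEndian);
            }
        }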

  • Java: Netbeans debugging session works faster than normal run

    - by Martijn Courteaux
    Hello, I'm making Braid in NetBeans 6.7.1. Computer spec: Windows 7; running processes: 46; running threads: +/- 650; NVidia GeForce 9200M GS; Intel Core 2 Duo CPU P8400 @ 2.26GHz. Game spec with a normal run: memory between 80 MB and 110 MB; CPU between 9% and 20%; CPU when time rewinding: 90%. The same values hold for the debugging session, except when I rewind time: CPU: 20%. Is there any reason for this? Is there a way to reach the same performance with a normal run? This is my repaint code: @Override public void repaint() { BufferStrategy bs = getBufferStrategy(); // numBuffers: 4 Graphics g = bs.getDrawGraphics(); g.setColor(Color.BLACK); g.fillRect(-1, -1, 2000, 2000); gamePanel.paint(g.create(x, y, gameDim.width, gameDim.height)); bs.show(); g.dispose(); Toolkit.getDefaultToolkit().sync(); update(g); } The game runs in fullscreen (undecorated + frame.size = screensize). Martijn

    Read the article

  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management, by Joe Golemba, Vice President, Product Management, Oracle WebCenter. As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues can complicate this; information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive. Unstructured data matters: often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they are typically finding diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, and unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21 CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, requiring different skills, service level agreements, and vendor support costs to manage them. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee’s typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant efforts, ensuring one source of truth, and maintaining content standards in the organization. Putting it in context: consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search.
    Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee’s day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing. Security and compliance: beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, the systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies. Making the move: creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools that can dramatically speed and reduce the cost of the data migration process through automation. Additional content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
    The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and reduce time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, divining knowledge from the vast pool of information becomes increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable. More information: Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • What is your longest-held programming assumption that turned out to be incorrect?

    - by Demi
    I am doing some research into common errors and poor assumptions made by junior (and perhaps senior) software engineers. What was your longest-held poor assumption that was eventually corrected? For example: I at one point failed to understand that the size of an integer was not standardized (it depends on the language and target). A bit embarrassing to state, but there it is. Be frank: what firmly held belief did you have, and roughly how long did you maintain the assumption? It can be about an algorithm, a language, a programming concept, testing, anything under the computer science domain.

    Read the article

  • java/swing: Shape questions: serializing and combining

    - by Jason S
    I have two questions about java.awt.Shape. Suppose I have two Shapes, shape1 and shape2. How can I serialize them in some way so I can save the information to a file and later recreate it on another computer? (Shape is not Serializable, but it does have the getPathIterator() method, which seems like it could get the information out; it would be kind of a drag, though, and I'm not sure how to reconstruct a Shape object afterwards.) How can I combine them into a new Shape so that they form a joint boundary? (E.g. if shape1 is a large square and shape2 is a small circle inside the square, I want the combined shape to be a large square with a small circular hole.)

    Read the article

  • Secure login on your domain with Google App Engine

    - by mhost
    Hi, We are starting a very large web-based service project. We are trying to decide what hosting environment to use. We would really like to use Google App Engine for scalability reasons and to eliminate the need to deal with servers ourselves. Secure logins/registrations are very important to us, as well as using our own domain. Our target audience is not very computer savvy. For this reason, we don't want to make users sign up with OpenID, as this can't be done within our site. We also do not want to force our customers to sign up with Google. As far as I can see, I am out of luck. I am hoping to get a definite answer to this question: can I have an encrypted login to our site accessed via our domain, without having to send the customers to another site for the login (OpenID/Google)? Thanks.

    Read the article

  • Can Java ServerSocket and Sockets using ObjectIOStreams lose packets?

    - by Joel Garboden
    I'm using a ServerSocket on my server and Sockets that use ObjectIOStreams to send serializable objects over the network connection. I'm developing what is essentially a more financial version of Monopoly, so packets need to be sent and confirmed as sent/received. Do I need to implement my own packet-loss watcher, or is that already taken care of by (Server)Sockets? I'm primarily asking about losing packets during network blips or whatnot, not a full connection error (e.g. siblings move a lead plate between my router and my computer's wi-fi adapter). http://code.google.com/p/inequity/source/browse/#svn/trunk/src/network Code can be found under network-ClientController and network-Server

    Read the article

  • Activator.CreateInstance fails in Windows Service

    - by kbe
    I have an issue with a Windows Service which throws a NullReferenceException whenever I try to use var myType = Activator.CreateInstance(typeof(MyType)) There is no problem when I run the exact same code in a console window, and after debugging the obvious "which user is running this code" question, I doubt it is simply a matter of changing the user running the service, as I've tried running the service under the computer's Administrator account as well as LocalSystem. I suspect a Windows Update fiddling with default user rights, but that's a bit of a desperate guess, I feel. Remember: the type and assembly exist and work fine; otherwise I wouldn't be able to run the code in a console window. It is only when running in the context of a Windows Service that I get an error. The question is: can I in any way impersonate, e.g., LocalSystem in a unit test by creating a GenericPrincipal (and how would that GenericPrincipal look)? (A sketch of how such a GenericPrincipal might look appears below this item.)

    Read the article
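
    A rough sketch of the last part of that question only (not a fix for the NullReferenceException itself): a GenericPrincipal built around a SYSTEM-like name can be swapped into Thread.CurrentPrincipal for the duration of a unit test. Note that this only changes what managed code sees via Thread.CurrentPrincipal; it does not grant the test process any real LocalSystem rights, and the account name and role below are illustrative.

        using System;
        using System.Security.Principal;
        using System.Threading;

        class PrincipalSwapExample
        {
            static void Main()
            {
                // Illustrative identity; no actual Windows token is involved.
                var identity = new GenericIdentity(@"NT AUTHORITY\SYSTEM");
                var principal = new GenericPrincipal(identity, new[] { "Administrators" });

                IPrincipal original = Thread.CurrentPrincipal;
                try
                {
                    Thread.CurrentPrincipal = principal;
                    Console.WriteLine(Thread.CurrentPrincipal.Identity.Name);
                    // ... run the code under test here, e.g. Activator.CreateInstance(typeof(MyType))
                }
                finally
                {
                    Thread.CurrentPrincipal = original;   // always restore the real principal
                }
            }
        }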

  • What are the prerequisites for learning embedded systems programming?

    - by WarDoGG
    I have completed my graduation in computer engineering. We had some basic electronics courses in digital signal processing, information theory, etc., but my primary field is programming. However, I was looking to get into embedded systems programming with NO knowledge of how it is done, and I am very keen on going into this field. My questions: What languages are used to program embedded systems? Will I be able to learn without having any basics in electronics? Are there any other prerequisites I should know?

    Read the article

  • Windows XP Home Edition SP3 can't recognise PCMCIA SD Card

    - by Pozo
    System specifications: Laptop: Dell Inspiron 6000; OS: Windows XP Home Edition SP3; SD adapter: Hagiwara Smart Media Adapter. I inserted the card into the slot; Windows XP recognises the device, lists the PCMCIA controller in Device Manager, and an entry appears under the IDE ATA/ATAPI category in Device Manager as well. However, the device does not show under My Computer and the drive does not get assigned a drive letter. I checked the system logs from Device Manager and there were no logged errors. Checking the Hagiwara support website, the manufacturer indicates that the adapter driver is the same as the Windows XP PCMCIA controller. Checking Dell's website, no specific drivers were listed for it either. A general search on the web indicates that multiple people face similar problems with their SD cards, yet none actually spell out the root issue that causes this. Please let me know if you have any suggestions for further debugging. Thanks in advance.

    Read the article

  • Saving data to server with user accounts.

    - by AKRamkumar
    Ok, so for an app I am making, I want the user to be able to save data online. On my website, I will provide a web server with tables of UserName/Password/SaveData. How can I do this without overloading the server? How can I guarantee security? Is there a design pattern for this? Is there a better way of doing this? This is going to be a free application, available to the public, and I would like their settings to be available no matter which computer they are using. I am using MEF for plugins, so is there a way I can save plugin data as well? (A rough client-side sketch appears below this item.)

    Read the article
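
    A rough sketch of the client side only (every URL and field name here is hypothetical, and this is one possible shape rather than a recommended design): send the credentials and a serialized settings blob to the site over HTTPS, and let the server do the real work of hashing passwords, rate-limiting, and storing the data.

        using System.Collections.Specialized;
        using System.Net;

        public static class RemoteSettingsStore
        {
            // Uploads a serialized settings blob (e.g. XML or JSON) for one user.
            public static void Save(string userName, string password, string settingsBlob)
            {
                using (var client = new WebClient())
                {
                    var form = new NameValueCollection
                    {
                        { "user", userName },
                        { "password", password },   // protected in transit by HTTPS only
                        { "data", settingsBlob }
                    };
                    // Hypothetical endpoint; the server should verify the password
                    // against a salted hash and never store it in plain text.
                    client.UploadValues("https://example.com/api/save-settings", form);
                }
            }
        }

    The same endpoint could take an extra field naming the plugin if plugin settings need to travel alongside user settings, though whether that is sensible depends on how much data each plugin stores.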

  • Cannot connect to local database.

    - by Smickie
    Hi, I have Apache and MySQL set up on my local machine (Mac). Whenever I go to any of my local sites, I get "Error establishing a database connection" as soon as anything MySQL-related happens. It was working perfectly yesterday, but since I started my computer today it is not. I can log in to MySQL in the terminal and that all works fine: I can view databases, run queries, look at all my tables, etc. I've tried restarting Apache. Anyone know what's up? This is worrying.

    Read the article

  • Is this the right way to get the grand total of processors with WMI on a multi-cpu system?

    - by John Sheares
    I don't have access to a multi-socket computer, so I am unsure whether the following will get the grand total of processors and logical processors. I assume ManagementObjectSearcher will return an instance for each socketed CPU and I just keep a running total? (A cross-check sketch using Win32_Processor appears below this item.) int totalCPUs = 0; int totalLogicalCPUs = 0; ManagementObjectSearcher mos = new ManagementObjectSearcher("Select * from Win32_ComputerSystem"); foreach (var mo in mos.Get()) { string num = mo.Properties["NumberOfProcessors"].Value.ToString(); totalCPUs += Convert.ToInt32(num); num = mo.Properties["NumberOfLogicalProcessors"].Value.ToString(); totalLogicalCPUs += Convert.ToInt32(num); }

    Read the article
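
    For what it's worth, a cross-check sketch rather than a definitive answer: Win32_ComputerSystem normally yields a single instance whose NumberOfProcessors and NumberOfLogicalProcessors already describe the whole machine, while Win32_Processor yields one instance per physical socket, so summing over it gives another view of the same totals (property availability can vary by Windows version and service pack).

        using System;
        using System.Management;   // reference System.Management.dll

        class ProcessorCounts
        {
            static void Main()
            {
                int sockets = 0, cores = 0, logical = 0;
                var searcher = new ManagementObjectSearcher(
                    "SELECT NumberOfCores, NumberOfLogicalProcessors FROM Win32_Processor");
                foreach (ManagementObject cpu in searcher.Get())
                {
                    sockets++;   // one Win32_Processor instance per physical socket
                    cores += Convert.ToInt32(cpu["NumberOfCores"]);
                    logical += Convert.ToInt32(cpu["NumberOfLogicalProcessors"]);
                }
                Console.WriteLine("Sockets: {0}, cores: {1}, logical: {2}", sockets, cores, logical);
                // Environment.ProcessorCount should normally agree with the logical total.
                Console.WriteLine("Environment.ProcessorCount: {0}", Environment.ProcessorCount);
            }
        }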

  • allowDefinition='MachineToApplication' error when publishing from VS2010 (but only after a previous

    - by burnt_hand
    I can run my ASP.NET MVC 2 application without an issue on my local computer, just Run / Debug. But if I have already built it, I can't publish it! I have to clean the solution and then publish it. I know this is not system critical, but it's really annoying: "One Click Publish" is not "Clean solution and then one click publish". The exact error is as follows: Error 11 It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. I suspect it's something to do with the Web.config in the Views folder, but then why does it only happen after a previous build? And just to note, the app works fine once published.

    Read the article

  • Include Problem with Objective-C++ and OpenGL

    - by Stephen Furlani
    Hello, I feel silly asking this but I've searched for 'include problems' and have only come up with basic stuff. I'm working with an API that includes/imports in its header files (ARGH! HATE ANGER DESTRUCTION). One of these Obj-C files #imports "OpenGL/CGLMacros.h", which #define's things like glMatrixMode(...); In my code I need the glMatrixMode(...); from #include "OpenGL/gl.h", but it can't access it! I can't edit the headers from the (poorly) coded API to put the includes in their definition files. What can I do? If the CGLMacros.h file starts out like /* Copyright: (c) 1999 by Apple Computer, Inc., all rights reserved. */ #ifndef _CGLMACRO_H #define _CGLMACRO_H can I put a #define _CGLMACRO_H before I include the offending API header file? -Stephen

    Read the article

  • Possible apache/mysql/php wrapper?

    - by MrStatic
    I was tasked with writing a smallish program for data input and manipulation. My language of choice happens to be PHP. I have written the program/site; it all works fine, is easily portable, and so on, but I have one question. Is there any way to wrap up Apache/MySQL/PHP into a single exe bundle? I know of MoWes Portable and other options, but I am looking to basically wrap everything up into a single exe without much fuss for the end user. My target users are very low on the computer-savvy scale. I am trying to avoid a batch file for launching, and I don't really want them able to shut down one part by accident and not another (i.e. shut off MySQL but leave Apache on).

    Read the article

  • Kind of stumped with some basic C# constructors.

    - by Sergio Tapia
    public class Parser { Downloader download = new Downloader(); HtmlDocument Page; public Parser(string MovieTitle) { Page = download.FindMovie(MovieTitle); } public Parser(string ActorName) { Page = download.FindActor(ActorName); } } I want to create a constructor that will allow other developers who use this library to easily create a Parser object with the relevant HtmlDocument already loaded as soon as it's done being created. The problem is that a constructor cannot exist twice with the same types of parameters. Sure, I can tell the logical difference between the two parameters, but the compiler can't. Any suggestions on how to handle this? Thank you! (A sketch of one common workaround appears below this item.)

    Read the article
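
    One common workaround, sketched here with the question's own Downloader and HtmlDocument types (the factory method names are just illustrative): keep a single private constructor and expose intention-revealing static factory methods, since overloads cannot be distinguished by parameter meaning alone.

        public class Parser
        {
            private readonly HtmlDocument page;

            private Parser(HtmlDocument page)
            {
                this.page = page;
            }

            // The caller's intent is carried by the method name instead of the parameter type.
            public static Parser FromMovieTitle(string movieTitle)
            {
                return new Parser(new Downloader().FindMovie(movieTitle));
            }

            public static Parser FromActorName(string actorName)
            {
                return new Parser(new Downloader().FindActor(actorName));
            }
        }

        // Usage: var parser = Parser.FromMovieTitle("Braid");

    An alternative is a single constructor that takes a small enum (for example, a hypothetical SearchKind.Movie or SearchKind.Actor) alongside the string, which keeps construction in one place at the cost of a less discoverable API.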

  • How can I create two constructors that act differently but receive the same data type?

    - by Sergio Tapia
    public class Parser { Downloader download = new Downloader(); HtmlDocument Page; public Parser(string MovieTitle) { Page = download.FindMovie(MovieTitle); } public Parser(string ActorName) { Page = download.FindActor(ActorName); } } I want to create a constructor that will allow other developers who use this library to easily create a Parser object with the relevant HtmlDocument already loaded as soon as it's done being created. The problem is that a constructor cannot exist twice with the same types of parameters. Sure, I can tell the logical difference between the two parameters, but the compiler can't. Any suggestions on how to handle this? Thank you!

    Read the article

  • Is it worth it to learn an esoteric programming language?

    - by Thomas Owens
    Wikipedia: An esoteric programming language (sometimes shortened to esolang) is a programming language designed as a test of the boundaries of computer programming language design, as a proof of concept, or as a joke. There is usually no intention of the language being adopted for real-world programming. Such languages are often popular among hackers and hobbyists. This use of esoteric is meant to distinguish these languages from more popular programming languages. Some more popular languages may appear esoteric (in the usual sense of the word) to some, and though these could arguably be called "esoteric programming languages" too, this is not what is meant. I think it might be worth it, just to learn a new language and go through the process, although only if you don't have anything else to do (like a real project or learning a new real language). But what does the community think? Is there some value in these languages?

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I’d thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren’t for a deranged laptop, and my distraction, the code wouldn’t have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again. The new code I hastily tapped in was much better: I’d held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch rather than tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing one’s code and having to rewrite it from scratch. If you’ve never accidentally lost your code, then it is worth doing it deliberately once for the experience. Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo’s obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn’t quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of ‘The Seven Pillars of Wisdom’ by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically in the bin or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal. Once, many years ago whilst running a software house that supplied commercial software to local businesses, I’d been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough as it was for a government organisation, and success would guarantee more contracts. As you’ve probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn’t quite so elegant, but it didn’t really need to be as we knew what forms it needed to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
The developers rose to the occasion after I’d collapsed, and tidied up what I’d done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they’d seen and the user-interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (we switched to Helvetica font to look more ‘Bauhaus’ ). The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic. In IT, we have had mixed experiences from complete re-writes. Lotus 123 never really recovered from a complete rewrite from assembler into C, Borland made the mistake with Arago and Quattro Pro  and Netscape’s complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible.   The rewrite didn’t come out of the blue. I prefer to remember the rewrite of Minix by young Linus Torvalds, or the rewrite of Bitkeeper by a slightly older Linus.  The rewrite of CP/M didn’t do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision. I’ll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite. They are buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or the occasional squaddies that if they charge the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer.  I’m referring to quite different circumstances here. Where a team knows the requirements perfectly, are of one mind on methodology and coding standards, and they already have a solution, then what is wrong with considering  a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge. The trouble is that source-control systems, and disaster recovery systems, are just too good nowadays.   If I were to lose this draft of this very blog post, I know I’d rewrite it much better. However, if you read this, you’ll know I didn’t have the nerve to delete it and start again.  There was a time that one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the ‘source-control wet-work’,  where one hires a malicious hacker in some wild eastern country to hack into one’s own  source control system to destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source control system that, on doing all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test? 
Alas, I can’t see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn’t the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the ‘source-control wet-job’? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • Script to download file and rename according to date on Windows Vista machine?

    - by Dave
    Hi, I need to run a script daily that will download a file from a fixed location and save it on my computer with an appropriate filename-YYYYMMDD-HHSS.ext timestamp. I need a historical record of what that file was at that particular time. I can manually check and see what the changes were, so comparison is not needed. (I was looking for an online service that would do this for me, but I think a locally running script on my machine would be good enough.) Although I do have PHP on my machine, I would prefer a pure Windows built-in solution, just in case I have to (likely) adapt it to someone else's system (non-techies). If someone has something like this and can contribute the code, help would be most appreciated!! D

    Read the article

  • rubygem "Argument list too long"

    - by mehmermaid
    My problem is that during or after running a process which uses Ruby intensively, when I use any gem command, including gem --version or gem install rake, it hangs for just over a minute and then gives me this error: $ gem list /Users/username/.rvm/bin/gem: line 5: /Users/username/.rvm/bin/gem: Argument list too long /Users/username/.rvm/bin/gem: line 5: /Users/username/.rvm/bin/gem: Unknown error: 0 This is the file at /Users/username/.rvm/bin/gem (line 5 marked): #!/usr/bin/env bash if [[ -s "/Users/username/.rvm/environments/ruby-1.8.7-p334" ]] ; then source "/Users/username/.rvm/environments/ruby-1.8.7-p334" exec gem "$@" # this is line 5 else echo "ERROR: Missing RVM environment file: '/Users/username/.rvm/environments/ruby- 1.8.7-p334'" >&2 exit 1 fi The only way that I have found to get this working again is to restart my computer, which is obviously undesirable. I am using OS X 10.6.5. I have spent quite a while trying to find anyone else who has had this problem, and been unsuccessful. Do you have any idea why this might be happening?

    Read the article

  • LockWorkStation - Compilation error - identifier not found

    - by Microkernel
    Hi all, I am writing an application in which I need to lock the computer screen (the OS is Windows). My application is in C++. For this purpose I used the LockWorkStation() API documented on MSDN: http://msdn.microsoft.com/en-us/library/aa376875%28VS.85%29.aspx I have included windows.h as instructed, but I am still getting a compilation error: .\source.cpp(5) : error C3861: 'LockWorkStation': identifier not found Here is a sample of the code that gives the error: #include <Windows.h> int main() { LockWorkStation(); return 0; } Please tell me what I am missing here :( I am using MS Visual Studio 2005. Regards.

    Read the article
