Search Results

Search found 10548 results on 422 pages for 'standard deviation'.

Page 211/422

  • How should I take being told that I was wrong?

    - by Chris
    On a fairly important project with short timelines I decided to use SubSonic for straightforward data access. I wired up a handful of forms, created matching database tables and POCOs for each, and used SubSonic's simple repository mode for the data access. Everything worked well, I was able to bang these forms out pretty quickly, and I moved on to other things. Since then I have heard that using SubSonic was a 'cowboy move', that it was implemented 'incorrectly', and that 'the person who used it didn't even know how to use SubSonic'. What I would like to know is: how should I take this? There were and still are no standards for data access at this company, so there is no violation of a standard. The forms worked exactly as requested and saved the data to the database correctly. And spending only a few days on the forms instead of weeks saved a lot of time, which went into other functionality in the project. So in light of all of this, I am confused as to what was 'incorrect'. Am I missing something here? Thanks for your answers.

    Read the article

  • Install Adobe AIR on Ubuntu/Linux

    Some time ago Adobe released the Linux version of Adobe AIR, which brings web applications and widgets to your desktop. Installing new applications on a Linux system is not always as easy as switching the computer on, so the following instructions might be helpful to install Adobe AIR on any Linux system. First of all, get the latest installer of Adobe AIR from http://get.adobe.com/air/ - as of writing this article the file name is AdobeAIRInstaller.bin. Save the download in your preferred folder. Now, there are two ways to run the installer - visual style or console style. Visual installation: launch your favorite or standard file manager like thunar or nautilus and browse to the folder where AdobeAIRInstaller.bin has been saved. Right click on the file and choose 'Properties' in the context menu, set 'Execute' permissions and confirm the modification with OK, rename the file to AdobeAIRInstaller, then double click it and follow the instructions. Using the console: open a terminal like xterm, change into the directory where you stored the download and run this command:[code]chmod +x AdobeAIRInstaller.bin[/code] Then run this command:[code]sudo ./AdobeAIRInstaller.bin[/code] The normal installer will open; install it. From now on, whenever you download a .air file, just double click it and it will be installed. Troubleshooting: in case the installation does not start properly, try to install via the console, which gives you more details about the reasons. Should you run into something like this: [code]AdobeAIRInstaller.bin: 1: Syntax error: "(" unexpected[/code] double check the execute permission of the installer file and try again.

    Read the article

  • Introducing the Industry's First Analytics Machine, Oracle Exalytics

    - by Manan Goel
    Analytics is all about gaining insights from data for better decision making. The business press is abuzz with examples of leading organizations across the world using data-driven insights for strategic, financial and operational excellence. A recent study on “data-driven decision making” conducted by researchers at MIT and Wharton provides empirical evidence that “firms that adopt data-driven decision making have output and productivity that is 5-6% higher than the competition”. The potential payoff for firms can range from higher shareholder value to a market leadership position. However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution. Oracle Exalytics In-Memory Machine is the world’s first engineered system specifically designed to deliver high performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability. Oracle Exalytics' unmatched speed, visualizations and scalability deliver extreme performance for existing analytical and enterprise performance management applications and enable a new class of intelligent applications like Yield Management, Revenue Management, Demand Forecasting, Inventory Management, Pricing Optimization, Profitability Management, Rolling Forecasts, Virtual Close, and so on. Requiring no application redesign, Oracle Exalytics can be deployed in existing IT environments by itself or in conjunction with Oracle Exadata and/or Oracle Exalogic to enable extreme performance and best in class user experience. Based on proven hardware, software and in-memory technology, Oracle Exalytics lowers the total cost of ownership, reduces operational risk and provides unprecedented analytical capability for workgroup, departmental and enterprise wide deployments. Click here to learn more about Oracle Exalytics.

    Read the article

  • Conditional attribute in XML - most concise solution?

    - by Lech Rzedzicki
    I am tasked with setting up conditional profiling - a method of tagging chunks of XML with an attribute, which will then be used as a conditional value to extract a subset of that XML. Have a look at another definition/example: DITA profiling. The XML is documents that are equivalent to printed books - i.e. documents that are often looked at by a human, even if indirectly. Therefore I am looking at a few requirements here: 1. keep the value list brief, so it doesn't affect the readability of the document 2. be able to process it with standard XML tools - a space-separated list inside an attribute is still probably fine, but I'd rather not use too much regexp for this 3. make it obvious to various users, including 3rd parties, which content goes where 4. be easy to maintain going forward. One easy solution is an attribute that explicitly lists every output the element applies to (in DITA, for example, something like audience="print web kindle"). The problem with this: 1. as the list grows, the value of the attribute can get a bit verbose 2. one needs to explicitly state every value, even in a scenario of "this vs. everything else". Therefore I am also looking at other approaches such as: 1. using + and - modifiers, Apache htaccess style, to override the default cascading of profiling - by default all content goes everywhere, and if we want to exclude a bit we just say "-kindle". It does require parsing the whole tree, is not supported by editing tools, and one needs to regexp the attribute value a bit deeper... 2. using an intermediate file to define groups of values such as "other" or "non-print" (there is an example of this in DITA). It allows concise XML as well as different grouping and values for each document, but it does create a certain level of abstraction which may make it a little less obvious for a 3rd party. Altogether, if you received such XML and were tasked to process it, which option would you rather receive? If you have any experience like that, even in unrelated areas such as builds, don't hesitate to comment!

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of emacs if X is running? For example, could I set the GUI emacs version as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal? EDIT: I briefly tested these two versions: GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20) and GNU Emacs 23.3.1 (x86_64-pc-linux-gnu), the latter installed by default in Kubuntu. This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version. Package: emacs23-nox Status: install ok installed Version: 23.3+1-1ubuntu9 Replaces: emacs23, emacs23-gtk, emacs23-lucid The package with version 24 opens in GUI mode by default. That's what I prefer. Some of the version 24 changes that interest me are listed in the references below. But there appear to be a multitude of different packages and versions I could install. References: What’s New In Emacs 24 (part 1) | Mastering Emacs http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/ "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There’s a lot of cool, new functionality hidden away in this gem of a change." EmacsWiki: Recent Changes http://www.emacswiki.org/emacs/?action=rc;showedit=0

    Read the article

  • Extreme Optimization – Mathematical Constants and Basic Functions

    - by JoshReuben
    Machine constants: the MachineConstants class contains constants for floating-point arithmetic, because the Epsilon fields on the CLS System.Single and System.Double types do not follow the standard conventions (they hold the smallest positive denormalized value, not the machine precision) and are therefore not very useful. Machine constants for the Double type - machine precision: Epsilon, SqrtEpsilon, CubeRootEpsilon; largest possible value: MaxDouble, SqrtMaxDouble, LogMaxDouble; smallest double-precision floating point number that is greater than zero: MinDouble, SqrtMinDouble, LogMinDouble. A similar set of constants is available for the Single data type. Mathematical constants: the Constants class contains static fields for many mathematical constants and common expressions involving small integers – if you are doing thousands of iterations, you wouldn't want to recalculate OneOverSqrtTwoPi, Sqrt17 or Log17! Fundamental constants: E - the base for the natural logarithm, e (2.718...). EulersConstant - (0.577...). GoldenRatio - (1.618...). Pi - the ratio between the circumference and the diameter of a circle (3.1415...). Expressions involving fundamental constants: TwoPi, PiOverTwo, PiOverFour, LogTwoPi, PiSquared, SqrPi, SqrtTwoPi, OneOverSqrtPi, OneOverSqrtTwoPi. Square roots of small integers: Sqrt2, Sqrt3, Sqrt5, Sqrt7, Sqrt17. Logarithms of small integers: Log2, Log3, Log10, Log17, InvLog10. Elementary functions: the IterativeAlgorithm<T> class in the Extreme.Mathematics namespace defines many elementary functions that are missing from System.Math. Hyperbolic trig functions: Cosh, Coth, Csch, Sinh, Sech, Tanh. Inverse hyperbolic trig functions: Acosh, Acoth, Acsch, Asinh, Asech, Atanh. Exponential, logarithmic and miscellaneous functions: ExpMinus1 - the exponential function minus one, e^x - 1. Hypot - the hypotenuse of a right-angled triangle with specified sides. LambertW - Lambert's W function, the (real) solution W of x = W·e^W. Log1PlusX - the natural logarithm of 1+x. Pow - a number raised to an integer power.
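    A minimal sketch of how a couple of these constants might be used in place of recomputed values, assuming the Constants class and the field names listed above (taken from the post - verify them against your version of the Extreme Optimization library):
    [code]
    using System;
    using Extreme.Mathematics;

    static class ConstantsDemo
    {
        static void Main()
        {
            double sigma = 2.5, x = 0.7;

            // Normal-style density: exp(-x^2 / (2*sigma^2)) / (sigma * sqrt(2*pi)).
            // Constants.OneOverSqrtTwoPi is precomputed, so no Math.Sqrt call per iteration.
            double pdf = Constants.OneOverSqrtTwoPi / sigma * Math.Exp(-x * x / (2.0 * sigma * sigma));

            // Same idea for simple geometry: circumference = 2*pi*r.
            double circumference = Constants.TwoPi * 10.0;

            Console.WriteLine("pdf = {0}, circumference = {1}", pdf, circumference);
        }
    }
    [/code]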

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within the object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask the data in an appropriate fashion. The inbuilt data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements: An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, and the application service, security type and authorization levels applicable to the mask. A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier). The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not. For example, say there is a field called CCNBR in the product which holds the credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits unmasked (as the standard in most systems dictates). On the Field Mask I would specify the following: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and appropriate authorization level for each group. Whenever a user accesses the CCNBR field, any of the maintenance screens, searches and other screens that use the CCNBR metadata definition would then be masked according to the user group the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

    Read the article

  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need to implement pseudo-ETL applications that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:
    static void Main()
    {
        // Load parameters from app.config.
        // Get documents from queue.
        var files = someInterface.GetFiles(someFilterOrRegexPattern);
        foreach (var file in files)
        {
            // Extract metadata from the file.
            // Validate some attributes of the file; add any validation errors to an in-memory
            // structure (e.g. List<ValidationErrors>).
            if (isValid)
            {
                var fileData = File.ReadAllBytes(file);
                // Upload using some wrapper for an ORM or DAL
                someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
            }
            else
            {
                // Bounce the file
            }
        }
        // Report any validation errors (via message bus or SMTP or some such).
    }
    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!
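    For what it's worth, a hedged sketch of the "Worker" wrapper mentioned above - the interface names (IFileQueue, IContentUploader) and the metadata shape are hypothetical placeholders, not part of DbAmp or the SFDC API:
    [code]
    using System.Collections.Generic;
    using System.IO;

    // Hypothetical abstractions over the queue folder and the SFDC upload path.
    public interface IFileQueue { IEnumerable<string> GetFiles(string pattern); }
    public interface IContentUploader { void Upload(byte[] data, IDictionary<string, string> metadata); }

    public class EtlWorker
    {
        private readonly IFileQueue _queue;
        private readonly IContentUploader _uploader;

        // Dependencies arrive through the constructor, so the worker can be unit-tested
        // with fakes instead of a live queue folder or a live SFDC org.
        public EtlWorker(IFileQueue queue, IContentUploader uploader)
        {
            _queue = queue;
            _uploader = uploader;
        }

        // Returns the validation errors so the caller can report them (SMTP, message bus, ...).
        public IList<string> Run(string pattern)
        {
            var errors = new List<string>();
            foreach (var file in _queue.GetFiles(pattern))
            {
                var metadata = ExtractMetadata(file);
                if (metadata == null)
                {
                    errors.Add("Invalid metadata: " + file);   // "bounce" the file
                    continue;
                }
                _uploader.Upload(File.ReadAllBytes(file), metadata);
            }
            return errors;
        }

        private static IDictionary<string, string> ExtractMetadata(string file)
        {
            // Placeholder: real extraction depends on the document format (PDF, XML, Office).
            return new Dictionary<string, string> { { "FileName", Path.GetFileName(file) } };
        }
    }
    [/code]
    With this split, the console Main() shrinks to wiring: construct the concrete queue and uploader, call new EtlWorker(queue, uploader).Run(pattern), and report whatever comes back.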

    Read the article

  • File saving disabled 'Saving has been disabled by system admin'

    - by Gubuntu
    I have coded my own HTML website recently, and today wished to add a Google Calendar object to it. I have not put this website on the web because it is for my own personal use and I can't buy a domain, so I just have a folder on my PC that I load the index.html from now and then. As I was saying, today I got an error while trying to save the Google Calendar object in. I am the system admin on my PC - in fact no one else uses it but me, except when I have friends round - but for once my PC seems to think I'm some standard account user, because I couldn't save. I thought of clicking close and seeing if it came up with 'Save as', but it didn't; it said 'Are you sure you want to close without saving?' or something along those lines, and 'Saving has been disabled by your system admin.' I couldn't do anything. I tried looking at the settings of the file, and it had me as read-only in one of the selections, so I changed that to read & write, but to no avail. I did not save as root when I last edited the file, so I don't get what's going on. Help! P.S. This is on Ask Ubuntu not Super User because it is on my Ubuntu PC and it appears to be a problem with Ubuntu, not root or hardware.

    Read the article

  • Working with data and meta data that are separated on different servers

    - by afuzzyllama
    While developing a product, I've come across a situation where my group wants to store metadata for data entry forms (questions, layout, etc.) in a different database than the database where the collected data is stored. This is mostly for security: we want to be able to have our metadata public facing while keeping collected data as secure as possible. I was thinking about writing a web service that provides the meta information that the data collection program could access. The only issue I see with this approach is that the front end is going to have to match the metadata with the collected data, which would be more efficient as a join on the back end. Currently, this system is slated to run on .NET and MSSQL. I haven't played around with .NET libraries running in SQL, but I'm considering trying to create logic that would pull from the web service, convert the metadata into a table that SQL can join on, and return the combined data and metadata that way. Is this solution the wrong way to approach the problem? Is there a pattern or "industry standard" way of bringing together two datasets that don't live in the same database?
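    As a rough illustration of the front-end matching described above (all type and property names here are hypothetical), the metadata pulled from the web service can be joined to the collected answers in memory with LINQ once both sets are loaded:
    [code]
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Question { public int QuestionId { get; set; } public string Text { get; set; } }   // from the public metadata service
    class Answer   { public int QuestionId { get; set; } public string Value { get; set; } }  // from the secured data database

    static class MetadataJoinDemo
    {
        static void Main()
        {
            var questions = new List<Question>
            {
                new Question { QuestionId = 1, Text = "Date of birth" },
                new Question { QuestionId = 2, Text = "Blood pressure" }
            };
            var answers = new List<Answer>
            {
                new Answer { QuestionId = 1, Value = "1970-01-01" },
                new Answer { QuestionId = 2, Value = "120/80" }
            };

            // Join on the shared key - the in-memory equivalent of the back-end SQL join.
            var combined = from q in questions
                           join a in answers on q.QuestionId equals a.QuestionId
                           select new { q.Text, a.Value };

            foreach (var row in combined)
                Console.WriteLine("{0}: {1}", row.Text, row.Value);
        }
    }
    [/code]
    The same shape of join is what a SQL-side table built from the web service data would produce, which is essentially the CLR-in-SQL approach being considered in the question.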

    Read the article

  • Missed OpenWorld 2011 or JavaOne? See the Key Announcements Today

    - by Dain C. Hansen
    Learn more about the Oracle OpenWorld and JavaOne key announcements through our six on-demand webcasts or podcasts. Your time is precious and you can't make time to watch all keynotes and sessions on demand. Want to get a concise overview of the Oracle OpenWorld and JavaOne key announcements? Presented by Oracle experts in EMEA, these six webcasts will help you decide which keynotes, general or solution sessions from Oracle OpenWorld and JavaOne could be of more interest to you. Six informative, on-demand sessions are available as podcasts and webcasts, on Oracle hardware and software, each taking just 15-20 minutes. Be updated in an hour on Oracle OpenWorld on… Oracle Exadata and Exalogic Engineered Systems with Oracle Applications Oracle Exalytics Business Intelligence Machine, the industry's first in-memory hardware and software system Oracle Big Data Appliance, the end-to-end solution for Big Data Oracle Enterprise Manager 12c, the industry's first solution to combine management of the full Oracle stack with complete enterprise cloud lifecycle management Oracle Fusion Applications, a complete suite with 100+ modules Oracle Public Cloud with subscription-based, self-service access to Oracle Fusion Applications, Oracle Fusion Middleware and Oracle Database Watch the six JavaOne key announcement webcasts anywhere you can access the Internet and learn more about: Plans for advancing the Java Platform, Standard Edition (Java SE) and an update on Java SE 8 Plans announced for the evolution of the Java Platform, Micro Edition Availability of JavaFX 2.0 Availability of the NetBeans IDE for Windows, Mac, Linux and Oracle Solaris Latest developments in the evolution of the Java Platform, Enterprise Edition (Java EE) Oracle Java Cloud Service. Follow other informative, on-demand sessions on Oracle hardware and software on topics like Cloud, Exadata, Exalogic, Exalytics, Big Data Appliance, Enterprise Manager 12c, Hardware - SuperCluster, Server and Storage, and Oracle Fusion Applications. Register now!

    Read the article

  • WebLogic Server Provisioning and Patching with Enterprise Manager Cloud Control 12c Now Available

    - by JuergenKress
    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. SOA Suite and BPM Suite run on WebLogic! We are pleased to announce the availability of a WebLogic Server management demo that showcases some of the key provisioning and patching capabilities of WebLogic Server Management Pack Enterprise Edition (EE). To learn more about these features - as well as other features of the pack - please visit the pack's saleskit page. Demo highlights: the demo showcases the following capabilities: Patching Oracle WebLogic Servers, Standardizing WebLogic Server Patch Rollouts, Creating a WebLogic Domain Provisioning Profile, Cloning a WebLogic Domain from a Provisioning Profile, Deploying a Java EE Application, and Scaling Out an Oracle WebLogic Cluster. Demo instructions: go to the DSS website for Oracle Partners. On the Standard Demo Launchpad page, under the “Software Lifecycle Automation” section, click on the link “EM Cloud Control 12c WLS Provisioning and Patching” (tagged as “NEW”). The specific demo launchpad page contains a link to the detailed demo script with instructions on how to show the demo. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: WebLogic, Enterprise Manager, EM12c, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article

  • ZenGallery: a minimalist image gallery for Orchard

    - by Bertrand Le Roy
    There are quite a few image gallery modules for Orchard, but they were not invented here: I wanted something a lot less sophisticated that would be as barebones and minimalist as possible out of the box, to make customization extremely easy. So I made this, in less than two days (during which I got distracted a lot). Nwazet.ZenGallery uses existing Orchard features as much as it can: Galleries are just a content part that can be added to any type. The set of photos in a gallery is simply defined by a folder in Media. Managing the images in a gallery is done using the standard media management from Orchard. Ordering of photos is simply the alphabetical order of the filenames (use 1_, 2_, etc. prefixes if you have to). The path to the gallery folder is mapped from the content item using a token-based pattern. The pattern can be set per content type. You can edit the generated gallery path for each item. The default template is just a list of links over images that open in a new tab. No lightbox script comes with the module; just customize the template to use your favorite script. Light, light, light. Rather than explaining this very simple module in more detail, here is a video that shows how I used the module to add photo galleries to a product catalog: Adding a gallery to a product catalog. You can find the module on the Orchard Gallery: https://gallery.orchardproject.net/List/Modules/Orchard.Module.Nwazet.ZenGallery/ The source code is available from BitBucket: https://bitbucket.org/bleroy/nwazet.zengallery

    Read the article

  • Oracle Developer Days 2013

    - by Anne Manke
    The Oracle Database in practice: what is in each edition? Use cases, plus tips and tricks to take away, including an outlook on new features. The use cases for the Oracle Database are diverse, so Oracle offers its market-leading database in different editions. Over 30 years of ongoing development have produced a wealth of useful features, which are sensibly distributed across the various editions. An outlook on the features of the new database version planned for 2013 rounds off the workshop. In this event, put together specifically by the Database business unit, we will bring you up to date on the following topics, along with many tips and tricks: the differences between the editions and their secrets; an extensive base feature set even without extra options; performance and scalability in the individual editions; cost and resource savings made easy; security in the database; increasing availability with simple means; handling large data volumes; and cloud technologies in the Oracle Database.
    Dates: 23.01.2013: Oracle Niederlassung Stuttgart, Liebknechtstr. 35, D-70565 Stuttgart [registration by email]; 30.01.2013: Oracle Niederlassung Potsdam, Schiffbauergasse 14, D-14467 Potsdam [registration by email]; 05.02.2013: Oracle Niederlassung Düsseldorf, Hamborner Str. 51, D-40472 Düsseldorf [registration by email].
    Registration: register for the event today - participation is free of charge! By email to Barbara Frank, ORACLE Deutschland B.V. & Co KG, or by phone: +49 (0)711 72840-211.
    Agenda: 10:00 start of the event. The Oracle Database editions at a glance - Oracle XE, SE1, SE, EE: who needs what, and what are the differences? The Standard Edition - an extensive base feature set: SQL and PL/SQL (more than SELECT), Application Express, Oracle TEXT and more. Lunch break. More performance - the sports package in the Enterprise Edition: fast statement execution, guaranteed resource usage, saving storage space. More security - the security package in the Enterprise Edition: multi-tenancy out of the box, auditing options. More availability - the mobility package in the Enterprise Edition: Flashback Database, options with Data Guard, ... 17:00: end of the event. We look forward to seeing you!

    Read the article

  • Copying files to Truecrypt file container hangs

    - by Wagner Maestrelli
    I have a dual boot installation with Windows 7 Ultimate (32-bit, NTFS file system) and Ubuntu 10.10 (32-bit, ext4 file system). I have installed version 7.0a of TrueCrypt in both operating systems. Located on the Windows 7 HDD I have a 150 GB encrypted file container. It is a standard and dynamic file container, which means it's not hidden and uses a sparse file. This file was created using the Windows version of the TrueCrypt program. When I log on in Windows the container is mounted as drive E: and everything works fine! In Ubuntu the Windows NTFS file system is automatically mounted after I log on. I've configured that using the ntfs-config package. In my ~/.profile I have this line to mount the TrueCrypt file container: truecrypt /media/7EDEBCFADEBCABB1/Users/Wagner/hd/hd.tc /media/truecrypt1 The file container is mounted after logon without any problem. I can access it, copy files to/from it, etc. But when I try to copy relatively large amounts of data (~50 MB) to it via nautilus or cp -R, it starts the copy, copies some data until a certain point and then it just hangs! The progress bar does not move anymore and nothing happens. There is no error; it just hangs and that's it. I have to kill the process myself. This problem does not happen in Windows: I can copy very large amounts of data to the container and it works great. But in Ubuntu the problem always happens! I mean, whenever I try to copy a bunch of files together the copy process hangs. Has anyone ever faced this problem? Can anyone help? Thanks!

    Read the article

  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short Question: does a cost-effective tool / workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews. Background: our team currently consists of 3 full-time and 1 part-time software engineers, with plans to hire more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that major tools (such as Review Board and Code Collaborator) use is not obtainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and contain a mixture of SVN and GIT repositories. Current Thoughts: since I cannot find a tool that fits our needs right now, I am turning to Trac, which is built into our repository's site. At the moment we use Trac to file tickets and track milestones, so to me this seems like a natural fit for code review results as well. The direction I am heading in right now is to use a spreadsheet (or several) to log all of the bugs and comments, do some macro magic to get it into a format that I can feed to Trac's ticket import method, and use Trac's ticketing system to create the action items / bug reports automatically, as in the sketch below. The automatic ticket generation is darn near a must-have; adding in bugs and comments one at a time from a web GUI is really painful. Secondary Question: if this workflow makes sense, is there a good / standard template to use as a code review log?
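    A rough sketch of the "macro magic" step, assuming the review findings end up in a flat list and that a Trac ticket-import plugin that accepts CSV is available - the column names below are hypothetical, so check what your import plugin actually expects:
    [code]
    using System;
    using System.IO;
    using System.Linq;

    class ReviewFinding
    {
        public string Summary { get; set; }
        public string Description { get; set; }
        public string Component { get; set; }
        public string Owner { get; set; }
    }

    static class TracCsvExport
    {
        static void Main()
        {
            var findings = new[]
            {
                new ReviewFinding { Summary = "Null check missing in Parser.Load",
                                    Description = "Crashes on an empty config file (rev 1234).",
                                    Component = "firmware", Owner = "adam" }
            };

            // One header row plus one row per finding; quote every field so commas survive.
            var lines = new[] { "summary,description,component,owner" }
                .Concat(findings.Select(f => string.Join(",",
                    new[] { f.Summary, f.Description, f.Component, f.Owner }
                        .Select(v => "\"" + v.Replace("\"", "\"\"") + "\""))));

            File.WriteAllLines("review-findings.csv", lines);
            Console.WriteLine("Wrote {0} ticket row(s).", findings.Length);
        }
    }
    [/code]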

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes. If they press "Send Error Report", I get a full report - the stack trace plus the local variables at each frame - and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
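    A small sketch of the coding style described in that last paragraph - rather than quietly tolerating a state you believe is impossible, throw, and let the automated report (with its captured locals) tell you if it ever actually happens. The class and method below are made up for illustration:
    [code]
    using System;

    public class AssemblyBrowser
    {
        // Guard clauses for "can't happen" cases: if one ever fires in the field,
        // the automated error report arrives with the offending values already captured.
        public string ResolveTypeName(string assemblyPath, int typeIndex)
        {
            if (string.IsNullOrEmpty(assemblyPath))
                throw new ArgumentException("assemblyPath must not be empty", "assemblyPath");
            if (typeIndex < 0)
                throw new ArgumentOutOfRangeException("typeIndex", typeIndex, "negative type index");

            string name = LookUpType(assemblyPath, typeIndex);   // placeholder lookup
            if (name == null)
                // Don't guess or return ""; surface the edge case and let the report find you.
                throw new InvalidOperationException(
                    string.Format("Type #{0} not found in {1}", typeIndex, assemblyPath));
            return name;
        }

        private static string LookUpType(string assemblyPath, int typeIndex)
        {
            return null;   // stand-in for the real metadata lookup
        }
    }
    [/code]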

    Read the article

  • Old programmer disappeared. About to hire another programmer. How do I approach this?

    - by pocto
    After spending over one year working on a social network project for me using WordPress and BuddyPress, my programmer has disappeared, even though he got paid every single week for the whole period. Yes, he's not dead - I used an email tracker and can see he opens my emails - but he doesn't respond. It seems he got another job. I wonder why he just couldn't say so. And I even paid him an advance salary for work he hasn't done. The problem is that I never asked for full documentation for most of the functions he coded. And there were MANY functions over this 1+ year period, and some of them have bugs that he still didn't fix. Now it all seems confusing. What's the first thing I should do now? How do I proceed? I guess the first thing will be to get another programmer, but I want to start on the right foot by having all the current code documented so that any programmer can work on all the functions without issues. Is that the first thing I should do? If yes, how do I go about it? What's the standard type of documentation required for something like this? Can I get a programmer that will just do the documentation for all the code and fix the bugs, or is documentation not really important? Also, do you think getting another "individual" programmer is better, or should I get a company that has programmers working for them, so that if the programmer assigned to my project disappears, another can replace him without my involvement? I feel this is the approach I should have taken in the beginning.

    Read the article

  • How do I boot into console mode (redux)

    - by Leo Simon
    I'm running Ubuntu 12.04. This question was asked some time ago: How do I disable the boot splash screen? But the answers didn't work for me. The standard way to boot into console mode used to be to edit /etc/default/grub and set GRUB_CMDLINE_LINUX_DEFAULT="text" This worked fine until I ran the fix proposed in https://help.ubuntu.com/community/SoundTroubleshootingProcedure in order to get sound to work. Since then, I have disabled the boot splash screen, but I can't avoid what I presume is the lightdm login prompt screen. All I want to do is disable this GUI and be prompted with a console login prompt. (Shouldn't be so hard, should it?) I read in thread 33416 mentioned above that there was a bug in lightdm (it wasn't recognizing "text" properly as an option for GRUB_CMDLINE_LINUX_DEFAULT). But that discussion happened more than a year ago, and it's surely been fixed; yet my lightdm is up to date (so I'm told when I try to update it with apt-get). As suggested in one of the answers above, I tried sudo update-rc.d -f lightdm remove which resulted in a hung machine. I managed to recover using recovery mode, but now I still get the GUI again. Another suggestion was to edit /etc/init/lightdm.override. I've done this and set it to "manual" as suggested, but lightdm simply ignores it. Could somebody suggest how to proceed please? Thanks very much, Leo

    Read the article

  • A Trip to the Moon (Le Voyage dans la Lune) [Super Retro Classic Sci-Fi Video]

    - by Asian Angel
    If you are into retro sci-fi movies, then you will definitely want to have a look at this French classic from 1902. This silent movie is only 10.5 minutes long, but is well worth watching and makes for a fun romp through the early days of sci-fi. From YouTube: A Trip to the Moon (French: Le Voyage dans la lune) is a 1902 French black and white silent science fiction film. It is loosely based on two popular novels of the time: From the Earth to the Moon by Jules Verne and The First Men in the Moon by H. G. Wells. The film was written and directed by Georges Méliès, assisted by his brother Gaston. The film runs 14 minutes if projected at 16 frames per second, which was the standard frame rate at the time the film was produced. It was extremely popular at the time of its release and is the best-known of the hundreds of fantasy films made by Méliès. A Trip to the Moon is the first science fiction film, and utilizes innovative animation and special effects, including the iconic shot of the rocketship landing in the Moon’s eye. A Trip to the Moon / Le Voyage dans la lune – 1902 [via 20 best designs in sci-fi movies - Page 3 (Creative Bloq)]

    Read the article

  • Java ME SDK 3.0.5 is released!

    - by SungmoonCho
    Java ME SDK 3.0.5 went live! For many months, we have been working hard to fix bugs from the previous version and add a lot of new features demanded by the Java ME community. You can download the new version from this link. Please see below for more information. NetBeans Integration: all Java ME tools are implemented as NetBeans plugins. Device Manager: Java ME SDK now supports multiple device managers; you can switch between different versions of device managers. LWUIT 1.5 Support: the Resource Editor is available from the Java ME menu to help you design and organize resources for LWUIT applications. For a description of LWUIT 1.5 features, visit the LWUIT download page. Network Monitor: integrated with NetBeans profiling tools, the Network Monitor now supports WMA, SIP, Bluetooth and OBEX, SATSA APDU and JCRMI, and server sockets. CPU Profiler: now uses standard NetBeans profiling facilities to view snapshots; profiling of VM classes can also be toggled on or off. WURFL Device Database: the database has been updated with more than 1000 new devices. Tracing: new tracing functionality now includes CLDC VM events, and monitors events such as exceptions, class loading, garbage collection, and method invocation. New or updated JSR support: includes support for JSR 234 (Advanced Multimedia Supplements), JSR 253 (Mobile Telephony API), JSR 257 (Contactless Communication API), JSR 258 (Mobile User Interface Customization API), and JSR 293 (XML API for Java ME).

    Read the article

  • Shared Folders in VirtualBox on Windows 7

    In my adventures with VirtualBox, my latest victory was in figuring out how to share folders between my host OS (Windows 7) and my virtual OS (Windows Server 2008). I'm familiar with VirtualPC and other such products, which allow you to share local folders with the VM. When you do, they just show up in Windows Explorer and all is good. However, after configuring shared folders in VirtualBox like so, I couldn't see them anywhere within the machine. Where are shared folders in a VirtualBox VM? Fortunately a bit of searching yielded this article, which describes the problem nicely. It turns out that there is a magic word you have to know, and that is the share name for the host OS: \\vboxsrv Once you know this, mapping shared folders is straightforward. From Windows Explorer, click on the Map network drive option, and then map a drive to \\vboxsrv\YOURSHAREDFOLDER With that, it's easy to share folders between the client and host OS using VirtualBox. The reason I didn't simply use a standard network share to my host OS machine name is that both guest and host are in a VPN, and the VPN is over the Internet and in a different country, so when I went that route my files were (apparently) traveling from host to guest by way of the remote VPN network rather than locally. Using the Shared Folders feature dramatically sped up my ability to transfer files between host and guest machines.

    Read the article

  • Developing an internet-enabled application as a Kiosk on Windows 7

    - by maple_shaft
    I am finalizing development of a desktop Java application that communicates with an outside web server, and now I need to start seriously considering deployment. This application will run on a large touchscreen all-in-one workstation running Windows 7. It will be located in a public area and thus must be LOCKED DOWN, Hannibal Lecter style. Early in the project nobody really concerned themselves with this fact, just assuming that we could buy some magical software for Windows 7 that would automatically take care of all this; however, I am finding now that this looks to be a LOT more complicated than my manager ever thought. I need to: lock down the standard hot-keys (ALT+TAB, CTRL+ALT+DEL, etc...); prevent the user from opening ANY programs other than the kiosk application and its spawned executables; prevent the user from closing the application; start the kiosk application on startup (this can be done without kiosk software); auto-login to Windows on reboot (Windows Updates, power failure, bratty kid pressing the power button, etc...); and provide an administrator passcode escape sequence for routine maintenance by desktop support professionals. To my dismay I am having a really hard time finding software that contains the whole package, and am finding numerous swaths of competing information on the best way to do this. I am not necessarily looking for free or open source software and am willing to pay for software that can help me achieve this. Have any of you ever written kiosk software before, and if so what approaches have you taken to do this?

    Read the article

  • How should I share variables between instances/classes?

    - by tesselode
    I'm making a game using LOVE, so everything is programmed in Lua. I've been experimenting with using classes and object orientation recently. I've found out that a nice system to use is having most of the game's code in different classes, and having a table of instances with all of the instances of any class in it. This way, I can go through every instance of every class and update and draw it by calling the same function. There is a problem, though. Let's say I have an instance of a player with variables for health and recharge time of a weapon. I also have a master instance which is responsible for drawing the HUD. How can I tell the master instance what the player's health is? Bad solutions: Assuming that the player instance will always have the same position in the table - that can be easily changed. Using global variables. Global variables are evil. Have the master instance outside of the instances table, and have the player set variables inside the master instance, which it then uses for HUD drawing. This is really bad because now I have to make a duplicate of every variable the master instance needs. What is the proper, standard way of sharing variables between instances? Do I need to change the way I keep track of instances?

    Read the article

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction: the Oracle Workflow Notification System can be extended to perform extra validation or processing via PL/SQL procedures when the notification is being responded to. These PL/SQL procedures are called post-notification functions since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for the post-notification function is
        procedure <procedure_name> (itemtype  in varchar2,
                                    itemkey   in varchar2,
                                    actid     in varchar2,
                                    funcmode  in varchar2,
                                    resultout in out nocopy varchar2);
    Modes: the post-notification function receives the parameter 'funcmode', which will have the following values: 'RESPOND', 'VALIDATE' and 'RUN' for a notification being responded to (Approve, Reject, etc.); 'FORWARD' for a notification being forwarded to another user; 'TRANSFER' for a notification being transferred to another user; 'QUESTION' for a request for more information from one user to another; 'ANSWER' for a response to a request for more information; 'TIMEOUT' for a timed-out notification; and 'CANCEL' when the notification is being re-executed in a loop.
    Context Variables: Oracle Workflow provides the post-notification function with context information corresponding to the current notification being acted upon. WF_ENGINE.context_nid - the notification ID. WF_ENGINE.context_new_role - the new role to which the action on the notification is directed. WF_ENGINE.context_user_comment - comments appended to the notification. WF_ENGINE.context_user - the user who is responsible for taking the action that updated the notification's state. WF_ENGINE.context_recipient_role - the role currently designated as the recipient of the notification; this value may be the same as the value of WF_ENGINE.context_user, or it may be a group role of which the context user is a member. WF_ENGINE.context_original_recipient - the role that has ownership of and responsibility for the notification; this value may differ from the value of WF_ENGINE.context_recipient_role if the notification has previously been reassigned.
    Example: let us assume there is an EBS transaction that can only be approved by certain people, so any attempt to transfer or delegate such a notification should be allowed only for users SPIERSON or CBAKER. The way to implement this functionality would be as follows: edit the corresponding workflow definition in Workflow Builder and open the notification.
    In the Function Name property, enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification. In PL/SQL, create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows:
        procedure Post_Notification (itemtype  in varchar2,
                                     itemkey   in varchar2,
                                     actid     in varchar2,
                                     funcmode  in varchar2,
                                     resultout in out nocopy varchar2) is
          l_count number;
        begin
          if funcmode in ('TRANSFER','FORWARD') then
            select count(1) into l_count
            from WF_ROLES
            where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER');
                  --and/or any other conditions
            if l_count < 1 then
              WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role);
              WF_CORE.RAISE('WFNTF_TRANSFER_FAIL');
            end if;
          end if;
        end Post_Notification;
    Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to user CBROWN, the transfer is rejected with the WFNTF_TRANSFER_FAIL error raised above. Check the Workflow API Reference Guide, section Post-Notification Functions, to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

    Read the article
