Search Results

Search found 2423 results on 97 pages for 'human readable'.


  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable, and while they might need an upgrade in some cases, they are all there and handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly.

    The importance of this struck me recently when I went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. The truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about every company around the world, involving a wide variety of important business processes? Probably so.

    Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes and many more like them rely on access to forms and documents, whether they are paper or digital. Now consider that it is estimated that employees spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for or re-create what already exists.

    Back in the doctor's office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months earlier. After filling out the form, I was introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won't solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could significantly improve the process.

    As you evaluate how content flows through the processes in your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information, or worse, slightly different versions of the same information.

    This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process and across departmental systems, which has helped many organizations realize significant cost savings.

    Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them.

    There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards Enterprise One, Siebel CRM and many others.

    What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Snooker Android Application [closed]

    - by Rzarect
    I am currently working on my final year project / dissertation for university, and I have a "crazy" idea for it. I was thinking of designing an Android app for snooker players, bars or tournaments: an app that will use the mobile camera to detect every movement and change on the table and, at the same time, keep the score for the players without any human input. I want to know if this is an impossible thing. If it is plausible, I really need some ideas and advice on where to start. I should say that I have some experience in Android development, and I have already started to read a lot of articles and projects about shape detection, color detection and edge detection.
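    A rough idea of where such detection work often starts is shown below. This is only a hedged sketch using OpenCV's Python bindings rather than Android code, and the camera index and colour ranges are made-up placeholders that would need calibrating for a real table:

        # Minimal colour-based ball detection sketch using OpenCV (cv2).
        # Assumes a fixed overhead camera; the HSV range below is illustrative only.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)              # camera index is a placeholder
        ret, frame = cap.read()
        if ret:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # Example range for a red ball -- values must be tuned for real lighting.
            mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
            # [-2] keeps this working with both OpenCV 3.x and 4.x return conventions.
            contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
            for c in contours:
                (x, y), radius = cv2.minEnclosingCircle(c)
                if radius > 5:                 # ignore small noise blobs
                    print("ball candidate at", int(x), int(y))
        cap.release()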

    Read the article

  • What is Pseudocode?

    - by Jae
    I've seen a lot of mentions of pseudocode lately, on this site and others, but I don't get it: what is pseudocode? For example, the Wikipedia article on the topic says "It uses the structural conventions of a programming language, but is intended for human reading rather than machine reading." Does this mean that it isn't actually used to make programs? Why is it used? How is it used? Is it considered a programming language (see the Wikipedia quote above)? Is it commonly known and used? Anything else you can tell me about it? I honestly don't know where to start with this. I have Googled it and read the Wikipedia article on the topic, but I still don't fully understand what it is.
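    To make the distinction concrete, here is a small, hedged illustration (not taken from the Wikipedia article): the comments are pseudocode, written purely for a human reader and never executed, while the Python function below them is one possible translation into real, runnable code.

        # Pseudocode (for humans, never compiled or run):
        #   for each number in the list
        #       if the number is even, add it to the total
        #   return the total
        def sum_of_evens(numbers):
            total = 0
            for n in numbers:
                if n % 2 == 0:
                    total += n
            return total

        print(sum_of_evens([1, 2, 3, 4]))  # prints 6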

    Read the article

  • Using implode, explode, etc. on one line vs. separating them into multiple lines with meaningful variable names

    - by zhenka
    I see a lot of people coding in PHP being rather proud if they manage to write a complicated one-line statement that does clever things. But what is the advantage? It is not only harder to keep in one's head while writing, but it also makes the code much less readable. In my opinion, reading short statements, if well written, can be like reading an essay, while complicated one-liners can make me pause and think for much longer than it would take the coder to simply separate them into meaningful units. Am I wrong in thinking this? How would you go about proving your point to another programmer regarding this?
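    The question is about PHP's implode/explode, but the trade-off shows up in any language. The following is a hedged Python sketch (the data and variable names are invented) contrasting a clever one-liner with the same logic split into named steps:

        raw = "alice:30, bob:25, carol:41"

        # One-liner: compact, but the reader has to unpack it mentally.
        ages = dict((p.split(":")[0].strip(), int(p.split(":")[1])) for p in raw.split(","))

        # Same logic, separated into meaningful units.
        ages_readable = {}
        for entry in raw.split(","):
            name, age = entry.split(":")
            ages_readable[name.strip()] = int(age)

        assert ages == ages_readable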

    Read the article

  • ATI graphics card, with gnome shell, screen flickers

    - by bioShark
    After installing gnome shell without any problem, once I log in the fonts are missing and it looks like crap... nothing is readable. I don't want to make this a double post, because my issue is similar to the one from this question, but for me the problems have not been solved properly. After running the commands from that post and installing the latest AMD 11.10 driver, the Gnome shell display issues were solved. But each time I move the mouse into the upper-left corner to bring up the applications, my entire screen flickers. When the applications are not being displayed, everything looks fine. Hardware: ATI HD4870, Intel Q6600.

    Read the article

  • What is testable code?

    - by Michael Freidgeim
    We are improving the quality of our code and trying to develop more unit tests. The question that developers asked was "How do we make code testable?" From http://openmymind.net/2010/8/17/Write-testable-code-even-if-you-dont-write-tests/: First and foremost, it's loosely coupled, taking advantage of dependency injection (and auto-wiring), composition and interface-based programming. Testable code is also readable - meaning it leverages the single responsibility principle and the Liskov substitution principle. A few practical suggestions are listed in http://misko.hevery.com/code-reviewers-guide/ and more recommendations are in http://googletesting.blogspot.com/2008/08/by-miko-hevery-so-you-decided-to.html. It is slightly too theoretical - "the trick is translating these abstract concepts into concrete decisions in your code."
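    As a minimal, hedged illustration of the "loosely coupled, dependency-injected" point (a generic Python sketch, not taken from the linked articles; SmtpClient is a hypothetical class), compare a class that constructs its own collaborator with one that receives it, which a unit test can then replace with a fake:

        # Hard to test: the dependency is created inside the class.
        class ReportMailerHardWired:
            def send(self, report):
                smtp = SmtpClient("mail.example.com")   # hypothetical class, real network call
                smtp.send("boss@example.com", report)

        # Testable: the collaborator is injected through the constructor.
        class ReportMailer:
            def __init__(self, mailer):
                self._mailer = mailer

            def send(self, report):
                self._mailer.send("boss@example.com", report)

        # In a unit test, inject a fake that just records calls.
        class FakeMailer:
            def __init__(self):
                self.sent = []
            def send(self, to, body):
                self.sent.append((to, body))

        fake = FakeMailer()
        ReportMailer(fake).send("Q3 numbers")
        assert fake.sent == [("boss@example.com", "Q3 numbers")]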

    Read the article

  • How can I label and/or group a specific set of vertices in a 3D file container like OBJ? - in Blender

    - by user827992
    I would like to export a 3D model with each part having a name or a label, if you will. For example, I would like to export a model of a human body and name each part in specific vertex groups, like: left hand, right hand, right foot, head, ears... and you get the idea; so I can have a single 3D model that I can explode into various parts if needed. If there is a better technique for marking vertex groups in a 3D file, please share your solution. As a 3D editor I use Blender.

    Read the article

  • What are the benefits vs costs of comment annotation in PHP?

    - by Patrick
    I have just started working with Symfony2 and have run across comment annotations. Although comment annotation is not an inherent part of PHP, Symfony2 adds support for this feature. My understanding of commenting is that it should make the code more intelligible to the human; the computer shouldn't care what is in comments. What benefits come from doing this type of annotation versus just putting a command in the normal PHP code? i.e.:

        /**
         * @Route("/{id}")
         * @Method("GET")
         * @ParamConverter("post", class="SensioBlogBundle:Post")
         * @Template("SensioBlogBundle:Annot:post.html.twig", vars={"post"})
         * @Cache(smaxage="15")
         */
        public function showAction(Post $post)
        {
        }
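    Symfony reads those annotations via reflection, so the comparison is really "declare metadata next to the method" versus "call the framework imperatively". As a rough, language-neutral analogy (a hedged Python sketch, not Symfony's actual API), a decorator plays the same role: the routing table is built from metadata attached to the handler instead of from a separate block of wiring code.

        routes = {}

        def route(path):
            """Attach routing metadata to a handler, much like an @Route annotation."""
            def register(func):
                routes[path] = func
                return func
            return register

        @route("/{id}")
        def show_action(post_id):
            return "showing post %s" % post_id

        # Imperative alternative: the same wiring done as an explicit call.
        def show_action_plain(post_id):
            return "showing post %s" % post_id
        routes["/plain/{id}"] = show_action_plain

        print(sorted(routes))   # ['/plain/{id}', '/{id}']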

    Read the article

  • JSIL - a Dot Net to JavaScript translator

    - by TATWORTH
    JSIL is described at http://jsil.org/ as: "JSIL is a compiler that transforms .NET applications and libraries from their native executable format - CIL bytecode - into standards-compliant, cross-browser JavaScript. You can take this JavaScript and run it in a web browser or any other modern JavaScript runtime. Unlike other cross-compiler tools targeting JavaScript, JSIL produces readable, easy-to-debug JavaScript that resembles the code a developer might write by hand, while still maintaining the behavior and structure of the original .NET code. Because JSIL transforms bytecode, it can support most .NET-based languages - C# to JavaScript and VB.NET to JavaScript work right out of the box."

    Read the article

  • When should an API favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is to use as much of the native domain language as possible in the API. While doing so, I've noticed that there are some cases in the API outline where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about favouring ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use cases, such as when performance is required. I would like to know whether there are more general reasons that argue for an API design preferring (marginally) more efficient implementations.
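    One way to picture the trade-off being described, as a hedged sketch (the Schedule and Every classes and their numbers are invented for illustration): the fluent, domain-language form adds a small helper object per call compared with passing a raw number, which is exactly the kind of functionally unnecessary encapsulation that usually costs little and reads much better.

        class Every:
            """Tiny helper object that exists only to make call sites read naturally."""
            def __init__(self, count):
                self.count = count
            def days(self):
                return self.count * 86400      # seconds
            def hours(self):
                return self.count * 3600

        class Schedule:
            def __init__(self, interval_seconds):
                self.interval_seconds = interval_seconds

        # Most efficient form: no intermediate object, but the unit is implicit.
        fast = Schedule(259200)

        # Domain-language form: one extra allocation, far easier to read.
        readable = Schedule(Every(3).days())

        assert fast.interval_seconds == readable.interval_seconds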

    Read the article

  • Coherence Data Guarantees for Data Reads - Basic Terminology

    - by jpurdy
    When integrating Coherence into applications, each application has its own set of requirements with respect to data integrity guarantees. Developers often describe these requirements using expressions like "avoiding dirty reads" or "making sure that updates are transactional", but we often find that even in a small group of people, there may be a wide range of opinions as to what these terms mean. This may simply be due to a lack of familiarity, but given that Coherence sits at an intersection of several (mostly) unrelated fields, it may be a matter of conflicting vocabularies (e.g. "consistency" is similar but different in transaction processing versus multi-threaded programming).

    Since almost all data read consistency issues are related to the concept of concurrency, it is helpful to start with a definition of that, or rather what it means for two operations to be concurrent. Rather than implying that they occur "at the same time", concurrency is a slightly weaker statement -- it simply means that it can't be proven that one event precedes (or follows) the other. As an example, in a Coherence application, if two client members mutate two different cache entries sitting on two different cache servers at roughly the same time, it is likely that one update will precede the other by a significant amount of time (say 0.1 ms). However, since there is no guarantee that all four members have their clocks perfectly synchronized, and there is no way to precisely measure the time it takes to send a given message between any two members (that have differing clocks), we consider these to be concurrent operations since we cannot (easily) prove otherwise.

    This leads to a question that we hear quite frequently: "Are the contents of the near cache always synchronized with the underlying distributed cache?". It's easy to see that if an update on a cache server results in a message being sent to each near cache, and then that near cache being updated, there is a window where the contents are different. However, this is irrelevant, since even if the application reads directly from the distributed cache, another thread could update the cache before the read is returned to the application. Even if no other member modifies a cache entry prior to the local near cache entry being updated (and subsequently read), the purpose of reading a cache entry is to do something with the result, usually either displaying it for consumption by a human, or updating the entry based on the current state of the entry. In the former case, it's clear that if the data is updated faster than a human can perceive, then there is no problem (and in many cases this can be relaxed even further). For the latter case, the application must assume that the value might potentially be updated before it has a chance to update it. This is almost always the case with read-only caches, and the solution is the traditional optimistic transaction pattern, which requires the application to explicitly state what assumptions it made about the old value of the cache entry. If the application doesn't want to bother stating those assumptions, it is free to lock the cache entry prior to reading it, ensuring that no other threads will mutate the entry -- a pessimistic approach.

    The optimistic approach relies on what is sometimes called a "fuzzy read". In other words, the application assumes that the read should be correct, but it also acknowledges that it might not be. (I use the qualifier "sometimes" because in some writings, "fuzzy read" indicates the situation where the application actually sees an original value and then later sees an updated value within the same transaction -- however, both definitions are roughly equivalent from an application design perspective.) If the read is not correct it is called a "stale read". Going back to the definition of concurrency, it may seem difficult to precisely define a stale read, but the practical way of detecting a stale read is that it will cause the encompassing transaction to roll back if it tries to update that value.

    The pessimistic approach relies on a "coherent read", a guarantee that the value returned is not only the same as the primary copy of that value, but also that it will remain that way. In most cases this can be used interchangeably with "repeatable read" (though that term has additional implications when used in the context of a database system).

    In none of the cases above is it possible for the application to perform a "dirty read". A dirty read occurs when the application reads a piece of data that was never committed. In practice the only way this can occur is with multi-phase updates such as transactions, where a value may be temporarily updated but then withdrawn when a transaction is rolled back. If another thread sees that value prior to the rollback, it is a dirty read. If an application uses optimistic transactions, dirty reads will merely result in a lack of forward progress (this is actually one of the main risks of dirty reads -- they can be chained and potentially cause cascading rollbacks).

    The concepts of dirty reads, fuzzy reads, stale reads and coherent reads are able to describe the vast majority of requirements that we see in the field. However, the important thing is to agree on the terms used to define requirements. A quick web search for each of the terms in this article will show multiple meanings, so I've selected what are generally the most common variations, but it never hurts to state each definition explicitly if they are critical to the success of a project (many applications have sufficiently loose requirements that precise terminology can be avoided).
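    To pin down what the optimistic pattern asks of the application, here is a hedged sketch that is deliberately not written against the Coherence API (so as not to misquote it); it uses a plain compare-and-set loop over a dictionary to show how the caller states its assumption about the old value and retries when that assumption turns out to be a stale read:

        import threading

        cache = {"balance": 100}
        lock = threading.Lock()

        def compare_and_set(key, expected, new_value):
            """Atomically replace key's value only if it still equals 'expected'."""
            with lock:
                if cache.get(key) != expected:
                    return False          # someone else changed it: our read was stale
                cache[key] = new_value
                return True

        def optimistic_add(key, amount):
            while True:
                observed = cache[key]                     # the possibly-fuzzy read
                if compare_and_set(key, observed, observed + amount):
                    return                                # assumption held, update applied
                # assumption failed -> re-read and retry instead of rolling back

        optimistic_add("balance", 25)
        print(cache["balance"])   # 125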

    Read the article

  • Practicing SEO

    As with any other city on the planet these days, SEO companies in Toronto are conscious that businesses and companies serve two markets, and we are not talking about demographics. We mean the walk-in customers and the virtual customers. Online retail income is not a line of revenue any business owner can afford to dismiss ever again. With so much of the world's population connected to the internet, people eat, sleep, listen to music, watch TV, stream sitcoms and movies, chat with their friends, and turn to Google and Wikipedia for almost everything under the sun they randomly encounter.

    Read the article

  • When should code favour optimization over readability and ease-of-use?

    - by jmlane
    I am in the process of designing a small library, where one of my design goals is that the API should be as close to the domain language as possible. While working on the design, I've noticed that there are some cases in the code where a more intuitive, readable attribute/method call requires some functionally unnecessary encapsulation. Since the final product will not necessarily require high performance, I am unconcerned about favouring ease-of-use in my current project over the most efficient implementation of the code in question. I know not to assume readability and ease-of-use are paramount in all expected use cases, such as when performance is required. I would like to know whether there are more general reasons that argue for a design preferring more efficient implementations, even if only marginally so.

    Read the article

  • Who should respond to collision: Unit or projectile?

    - by aleguna
    In an RTS, if a projectile hits a unit, who should handle the collision? If the projectile handles the collision, it must be aware of all possible types of units to know what damage to inflict. For example, a bullet will likely kill a human, but it will do nothing to a tank. The same goes if the unit handles the collision. So either way, one of them has to be aware of all possible types of the other. Of course the 'true' way would be to do a full physics simulation, but that's not an option for an RTS with 1000s of units and projectiles... So what are the common practices in this regard?
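    A common answer is to let neither side hard-code knowledge of every concrete type of the other: the projectile knows what kind of damage it deals, the unit knows how it reacts to each damage kind, and a small shared vocabulary (the damage type) sits between them. A hedged sketch of that idea, with names invented for illustration:

        # The shared vocabulary both sides agree on.
        BULLET, ARMOR_PIERCING = "bullet", "armor_piercing"

        class Projectile:
            def __init__(self, damage_type, amount):
                self.damage_type = damage_type
                self.amount = amount

        class Unit:
            # Each unit type only declares how much of each damage kind gets through.
            damage_multipliers = {BULLET: 1.0, ARMOR_PIERCING: 1.0}

            def __init__(self, hit_points):
                self.hit_points = hit_points

            def on_hit(self, projectile):
                factor = self.damage_multipliers.get(projectile.damage_type, 1.0)
                self.hit_points -= projectile.amount * factor

        class Infantry(Unit):
            damage_multipliers = {BULLET: 1.0, ARMOR_PIERCING: 0.8}

        class Tank(Unit):
            damage_multipliers = {BULLET: 0.0, ARMOR_PIERCING: 1.0}

        soldier, tank = Infantry(100), Tank(500)
        bullet = Projectile(BULLET, 40)
        soldier.on_hit(bullet)   # soldier.hit_points == 60
        tank.on_hit(bullet)      # tank.hit_points stays 500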

    Read the article

  • Programming to ANSI standards (for engineering)

    - by Jake
    I am currently tasked with writing software to help engineers produce standard-compliant designs. If there is a bad design, the software will report an error or warning. Maybe it's just me, but anyone who has done this should be familiar with the massive amounts of ANSI standards tables like this one: http://en.wikipedia.org/wiki/Nominal_Pipe_Size Computers are, as the name suggests, computing machines, not lookup machines. I feel that feeding formulas into computers and churning out standard-compliant designs is much more efficient than doing memory-intensive data lookups that are prone to human input errors and susceptible to "data updates". I actually think that there are formulas to calculate all those numbers, but nobody so far could give me that information. Has anyone been through this before? What is THE best approach to this? Thanks for sharing.
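    As a hedged sketch of the trade-off (the values below are a tiny illustrative excerpt, not a substitute for the published ANSI tables, and some standard values really are tabulated rather than derived from any formula), the two approaches can also be combined: use a formula where one exists, such as the simplified Barlow thin-wall thickness formula, and a lookup where it does not:

        # Illustrative only: a couple of values shaped like a standards table.
        # Real designs must use the published ANSI data, not this excerpt.
        NOMINAL_OD_INCHES = {"1/2": 0.840, "3/4": 1.050, "1": 1.315}

        def outside_diameter(nominal_size):
            """Pure lookup: safe, but every value must be keyed in by hand."""
            return NOMINAL_OD_INCHES[nominal_size]

        def wall_thickness(outside_dia, pressure, allowable_stress, corrosion_allowance=0.0):
            """Formula-style calculation: simplified thin-wall form, for illustration."""
            return pressure * outside_dia / (2.0 * allowable_stress) + corrosion_allowance

        od = outside_diameter("3/4")
        print(round(wall_thickness(od, pressure=300.0, allowable_stress=15000.0), 4))  # 0.0105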

    Read the article

  • New customer references for Exadata-based projects

    - by Javier Puerta
    Milletech Systems, Inc. shows a large state university how to improve query response times 15-fold using its grant management solution built on Oracle’s Extreme Performance Infrastructure. Read More. Ação Informática helped Valdecard realize a 15-Fold Improvement in Fleet-Management and Benefit Card Data Processing using Oracle Exadata Machine. Read More. Neusoft deployed Benxi Municipal Human Resources and Social Security Bureau’s cloud-based database platform to process social insurance payments 50-times faster using Oracle Exadata Database Machine. Read More.

    Read the article

  • When failure is a feature

    Warning: this post is going to be slightly off-topic and non-technical. Well, not computer-science technical at least. I was reading an article in SciAm this morning about the possibility of a robot uprising. Don't laugh yet; this is a very real, if still quite remote, possibility. The main idea was that AI could one day rise to self-awareness and to an ability to improve itself through self-replication, beyond human abilities to control it. Sure, that's one possibility, and some...

    Read the article

  • Generating Wrappers for REST APIs

    - by Kyle
    Would it be feasible to generate wrappers for REST APIs? An earlier question about machine-readable descriptions of RESTful services addressed how we could write (and then read) API specifications in a standardized way, which would lend itself well to generated wrappers. Could a first-pass parser generate a decent wrapper that human intervention could fix up? Perhaps the first pass wouldn't be consistent, but it would remove a lot of the grunt work and make it easy to flesh out the rest of the API and types. What would need to be considered? What's stopping people from doing this? Has it already been done and my google fu is weak for the day?
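    As a hedged sketch of that "first pass" (the spec format here is invented; a real tool would read something like a published service description instead), a generator only needs a machine-readable list of operations to stamp out thin wrapper methods, leaving naming and typing to be cleaned up by hand:

        import urllib.request

        # Hypothetical machine-readable description of a tiny API.
        SPEC = {
            "base_url": "https://api.example.com",
            "operations": [
                {"name": "get_user", "method": "GET", "path": "/users/{id}"},
                {"name": "list_repos", "method": "GET", "path": "/users/{id}/repos"},
            ],
        }

        def build_client(spec):
            class Client:
                def __init__(self):
                    self.base_url = spec["base_url"]
            for op in spec["operations"]:
                def call(self, _op=op, **params):
                    url = self.base_url + _op["path"].format(**params)
                    req = urllib.request.Request(url, method=_op["method"])
                    return urllib.request.urlopen(req).read()
                setattr(Client, op["name"], call)
            return Client

        client = build_client(SPEC)()
        # client.get_user(id=42) would request https://api.example.com/users/42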

    Read the article

  • Where can I find resources for RPG Character Sprites? [closed]

    - by IcySnow
    I'm developing a turn-based 2D RPG game. Everything's going fine except for the lack of character sprites, such as moving and attacking animations, etc. By characters I mean both human-like and monster-like creatures. Is there a website providing sprites for free? Or a program (free or paid, whichever is fine) which will let me create sprites from scratch and automatically generate the images? I tried my best to search for one, but the best I've managed to find so far is http://spriters-resource.com/. Is there something else similar and better out there?

    Read the article

  • How to safely back up the "Private" folder?

    - by ImaginaryRobots
    I have an ecryptfs "Private" folder in my home directory, and it is set up to automatically mount whenever I log in. I want to set up automatic backups to a network drive, but I don't want the contents of Private to be readable on the remote server. My understanding is that the Ubuntu "Backup" utility would run while I'm logged in, so it would see the folder contents without encryption. I'm backing up from a laptop, so it is essentially only on when I am logged in. I know that the Private folder is essentially a mounted filesystem, so it seems like I should be able to back up the encrypted image rather than the cleartext contents. What steps are needed to safely back it up while maintaining the encryption? Note that I'm already familiar with the backup tools available; this question is about dealing with the ecryptfs folder safely.
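    One commonly suggested approach, offered here only as a hedged sketch (verify the paths on your own system before relying on it), is to back up the encrypted lower directory ~/.Private together with the ~/.ecryptfs metadata instead of the mounted cleartext ~/Private; the remote copy then only ever holds ciphertext. For example, archiving them from Python:

        import os
        import tarfile

        home = os.path.expanduser("~")
        # ~/.Private holds the encrypted files; ~/.ecryptfs holds the wrapped keys/config.
        # These are the usual defaults -- confirm them before trusting a backup.
        sources = [os.path.join(home, ".Private"), os.path.join(home, ".ecryptfs")]

        with tarfile.open(os.path.join(home, "private-backup.tar.gz"), "w:gz") as tar:
            for path in sources:
                if os.path.exists(path):
                    tar.add(path, arcname=os.path.basename(path))
        # The resulting archive can be copied to the network drive; its contents
        # remain encrypted and unreadable without the passphrase.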

    Read the article

  • Help Improve Oracle Products Usability at OOW

    - by Shay Shmeltzer
    We already wrote about all the great ADF related activities at OOW. But we wanted to also let you know about an additional activity you can participate in at OpenWorld: The Oracle Middleware User Experience team will be conducting focus groups and customer feedback activities at Oracle OpenWorld 2012 (Oct. 1st - Oct. 3rd). Customer participation helps Oracle develop outstanding products and solutions. Professionals of all types are invited to participate: Directors, Project & Product Managers, Finance, Sales, Human Resources, Marketing, Recruiters, Budget Managers,  and more. **To participate in these sessions you do not have to be registered for Oracle OpenWorld.** If you or someone you know is interested in participating, please email [email protected] with the following information: Name: Company Name:  Job Title: Email: Phone Number (work, mobile, include country code):

    Read the article

  • Given two sets of DNA, what does it take to computationally "grow" that person from a fertilised egg and see what they become? [closed]

    - by Nicholas Hill
    My question is essentially entirely in the title, but let me add some points to prevent the "why on earth would you want to do that" sort of answers: This is more of a thought experiment than an attempt to implement real software. For fun. Don't worry about computational speed or the number of available memory bytes. Computers get faster and better all the time. Imagine we have two data files: Mother.dna and Father.dna. What else would be required? (Bonus point for someone who tells me approximately how many GB each file would be, and whether the size of the files is exactly the same number of bytes for everyone alive on Earth!) There would ideally need to be a way to see what the egg becomes as it grows into a human adult. If you fancy, feel free to outline the design. I am initially thinking that there'd need to be some sort of volumetric, voxel-based 3D environment for simulation purposes.
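    On the bonus question about file size, a hedged back-of-the-envelope estimate (assuming roughly 3 billion base pairs per haploid genome and 2 bits per base, and ignoring any file-format overhead):

        base_pairs = 3_000_000_000        # approx. haploid human genome length
        bits_per_base = 2                 # A, C, G, T -> 2 bits each
        bytes_per_haploid = base_pairs * bits_per_base / 8
        print(bytes_per_haploid / 1e9)    # ~0.75 GB per haploid copy, roughly double for a diploid file
        # The size would be nearly identical for everyone, but real genomes differ
        # slightly in length, so byte-for-byte equality is not guaranteed.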

    Read the article

  • Typescript - A free add-on for Visual Studio 2012

    - by TATWORTH
    At http://www.microsoft.com/en-us/download/details.aspx?id=34790, Microsoft are providing a free add-on for Visual Studio. If you have any version of Visual Studio 2012, it provides an editor for TypeScript. "TypeScript is a language for application-scale JavaScript. TypeScript adds optional types, classes, and modules to JavaScript. TypeScript supports tools for large-scale JavaScript applications for any browser, for any host, on any OS. TypeScript compiles to clean, readable, standards-based JavaScript. Try it out at http://www.typescriptlang.org/playground." I look forward to type-safe JavaScript! There is a tutorial for it at http://www.typescriptlang.org/tutorial/

    Read the article

  • Verb+Noun Parsers and Old School Visual Novels [duplicate]

    - by user38943
    This question already has an answer here: How should I parse user input in a text adventure game? (6 answers)

    Hi, I'm working on a simple old-school visual novel engine in Lua. Basically I have most of the code set up besides one important feature: the text parser. Let's get into how words are generally structured. Suppose I input the command "my wish is for you to die". How would a human understand this?

        my   = noun/object
        wish = verb
        is   = connective_equator, similar to =
        for  = connective_object (for all objects of ..)
        you  = noun/object
        to   = connective_action, similar to do
        die  = verb

    The computer can then parse this and understand it like this (pseudo example):

        my  = user
        you = get_current_label()
        you = "Lost Coatl"
        wish = user_command
        user_command = for all_objects of "Lost Coatl" do die() end
        execute user_command()

    What other ways do video games use text parsers, and what would be the simplest way for a newbie coder such as myself?
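    A hedged sketch of the simplest starting point (shown in Python rather than Lua so it stays self-contained; the word lists are placeholders): split the input, drop filler words, and map whatever remains onto verb and noun tables.

        VERBS = {"wish": "wish", "die": "die", "look": "look"}
        NOUNS = {"my": "player", "you": "current_label", "door": "door"}
        FILLER = {"is", "for", "to", "the", "a"}

        def parse(command):
            verbs, nouns = [], []
            for word in command.lower().split():
                if word in FILLER:
                    continue
                if word in VERBS:
                    verbs.append(VERBS[word])
                elif word in NOUNS:
                    nouns.append(NOUNS[word])
            return verbs, nouns

        print(parse("my wish is for you to die"))
        # (['wish', 'die'], ['player', 'current_label'])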

    Read the article

  • Is there any guarantee about the graphical output of different GPUs in DirectX?

    - by cloudraven
    Let's say that I run the same game on two different computers with different GPUs, and that, for example, they are both certified for DirectX 10. Is there a guarantee that the output for a given program (game) is going to be the same regardless of the manufacturer or model of the GPU? I am assuming the configurable settings are exactly the same in both cases. I heard that this is not the case for DirectX 9 and older, but that it is true for DirectX 10. If someone could provide a source confirming or denying it, that would be great. Also, what is the guarantee offered? Will the output be exactly the same, or just perceptually the same to the human eye?

    Read the article
