Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 383/880 | < Previous Page | 379 380 381 382 383 384 385 386 387 388 389 390  | Next Page >

  • Webinar: Riding the Fence or Planning the Upgrade to 11gR2?

    - by Greg Jensen
    Is your organization riding the Identity and Access fence, unable to decide whether you are ready to upgrade? Are you unsure what the technical and business value gains of upgrading to Oracle's 11gR2 are? Or are you planning the upgrade and just unsure of what to expect? In this webinar, experts from Oracle and AmerIndia will discuss the new features of 11gR2, the latest market trends, and how IAM transforms organizations. They will also cover the planning and implementation strategy for the upgrade process, share success stories, and highlight challenges faced by organizations in different verticals and how Oracle's solutions and AmerIndia's services addressed those challenges. Topics include: market trends and 11gR2, planning an upgrade, approach and implementation strategy, and success stories. Registration is now open for this webinar, to be held December 5th from 2pm to 3pm EST.

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source/target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data/logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs for setting up an instance of the external system.

    The stubs I like to use are generic stubs that can accept/return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and to select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, ensuring that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations).

    Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson (source: http://biztalkbddsample.codeplex.com – Video 1) delivers the following benefits:

    - Requirements can be easily defined using Given/When/Then (a hypothetical feature-file example is sketched below, after this list).
    - Requirements are close to the code, so they are easier to manage as features and scenarios.
    - Requirements are defined in domain language.
    - The feature files can be used as part of the documentation.
    - The documentation is accurate to the build of code and can be published with a release.
    - The scenarios document the behaviour effectively without being excessive.
    - The scenarios are maintained with the code.
    - There's an abstraction between the intention and implementation of tests, making them easier to understand.
    - The requirements drive the testing.
    - These same tests can also be used to drive load testing, as described here.

    If you don't do this...

    If you don't follow the above "stubbed integration tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, it carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.
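
    To give a concrete flavour of the Given/When/Then style referred to above, a hypothetical feature-file scenario for one stubbed integration test might look like the following (the feature and step names are illustrative only, not from the original post):

        Feature: Order message routing
          Scenario: A valid order from the source stub reaches the target stub
            Given the target system stub is listening for order messages
            When a valid order message is submitted to the source stub
            Then the target stub receives exactly one order message
            And no messages are suspended in BizTalk

    Each step would then be bound to BizUnit test steps that drive the generic stubs and inspect the BizTalk infrastructure.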

    Read the article

  • How can Java be improved so that it no longer needs to perform type erasure? [closed]

    - by user63904
    The official Java tutorial on generics explains type erasure and why it was added to the compiler: "When a generic type is instantiated, the compiler translates those types by a technique called type erasure — a process where the compiler removes all information related to type parameters and type arguments within a class or method. Type erasure enables Java applications that use generics to maintain binary compatibility with Java libraries and applications that were created before generics." This was most likely a pragmatic approach, or perhaps the least painful one. However, now that generics are widely supported across the industry, what could be done so that type erasure is no longer needed? Is it feasible without breaking backwards compatibility, and if it is feasible, is it practical? Has the last statement in the quote above become self-referential? That is: "type erasure enables Java applications that use generics to maintain binary compatibility with Java libraries and applications that were created with Java versions that perform type erasure."
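
    For readers unfamiliar with what erasure actually removes, a minimal illustration (not part of the original question; the class and variable names are arbitrary) of how type parameters disappear at runtime:

        import java.util.ArrayList;
        import java.util.List;

        public class ErasureDemo {
            public static void main(String[] args) {
                List<String> strings = new ArrayList<>();
                List<Integer> integers = new ArrayList<>();

                // After erasure both declarations share the same runtime class (ArrayList),
                // so the type arguments String and Integer are not recoverable here.
                System.out.println(strings.getClass() == integers.getClass()); // prints: true

                // Because the parameter is erased, an unchecked cast compiles with only a
                // warning; the failure only surfaces later, when an element is read back.
                @SuppressWarnings("unchecked")
                List<String> smuggled = (List<String>) (List<?>) integers;
                integers.add(42);
                // String s = smuggled.get(0); // would throw ClassCastException at runtime
            }
        }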

    Read the article

  • Podcast with AJI about iOS development coming from a .NET background

    - by Tim Hibbard
    I talked with Jeff and John from AJI Software the other day about developing for the iOS platform. We chatted about learning Xcode and Objective-C, provisioning devices and the app publishing process. We all have a .NET background and made lots of comparisons between the two platforms/ecosystems/fanbois. They even let me throw in a plug for Christian Radio Locator. Jeff was my first contact with the Kansas City .NET community. It was probably about 10 years ago. He pushed me to talk more (and rescued me from my first talk that bombed) and blog more. One time a group of us took a 16 hour car trip to South Carolina for a code camp and live podcasted the whole thing. Good times. Listen to the show. Click here to subscribe to more AJI Reports in the future.

    Read the article

  • A bacon- (and module-) saving PowerShell incident

    - by AaronBertrand
    Earlier today I made a big goof. I opened a module in Notepad, intending to use it as the basis for a new module. I was in the process of using "File > Save As" when my phone rang and, at that precise instant, for some reason, I clicked on "File > Save" by mistake. After hitting Ctrl+Z 30 times to try to get the old version of the module back, I remembered that Notepad has never had more than one level of Undo. Back when I was coding ASP by hand, I was very well aware of this, but I...(read more)

    Read the article

  • Where should the database and mail parameters be stored in a Symfony2 app?

    - by Songo
    In the default folder structure for a Symfony2 project, the database and mail server credentials are stored in the parameters.yml file at ProjectRoot/app/config/parameters.yml with these default values:

        parameters:
            database_driver: pdo_mysql
            database_host: 127.0.0.1
            database_port: null
            database_name: symfony
            database_user: root
            database_password: null
            mailer_transport: smtp
            mailer_host: 127.0.0.1
            mailer_user: null
            mailer_password: null
            locale: en
            secret: ThisTokenIsNotSoSecretChangeIt

    During development we change these parameters to point at the development database and mail servers, and the file is checked into the source code repository. The problem is when we want to deploy to the production server. We are thinking about automating the deployment process by checking the project out from git and deploying it to the production server, but our project manager has to manually update these parameters after each update. The production database and mail server parameters are confidential and only our project manager knows them. I need a way to automate this step, and suggestions on where to store the production parameters until they are applied.
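
    One commonly used convention (a sketch of one possible approach, not part of the original question; the file contents below are illustrative) is to commit only a template of the file and keep the real credentials out of version control, so the production parameters.yml lives only on the production server and survives each checkout:

        # app/config/parameters.yml.dist -- committed template with placeholder values
        parameters:
            database_driver:   pdo_mysql
            database_host:     127.0.0.1
            database_name:     symfony
            database_user:     root
            database_password: ~
            mailer_transport:  smtp
            mailer_host:       127.0.0.1
            secret:            ChangeThisPerEnvironment

        # .gitignore -- the real parameters.yml is never committed
        /app/config/parameters.yml

    The deployment script then copies the template into place only if no parameters.yml exists yet, leaving the confidential production values (entered once by the project manager) untouched on subsequent deployments.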

    Read the article

  • Game engine like Unity 3D that allows me to use .NET code

    - by Pking
    I've been looking at Unity 3D for developing a 3D PC game and I really like the scene editor and how it simplifies the process of constructing 3D scenes, managing assets, animations, transitions etc. However, I don't want to restrict myself to using the Unity 3D scripts for handling every bit of game logic in the game. For example, if I want to construct an RPG dialogue system, I don't want to do it with Unity 3D scripts - I'd like to use C#/.NET. Also, I might want to use, say, Windows Azure and SQL Azure as a backend, and use third-party .NET libraries such as Reactive Extensions. Is there a .NET engine out there that helps me with asset loading, animations, physics, transitions, etc., with a scene editor, but allows me to plug it into a Visual Studio .NET project? Thanks

    Read the article

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit with a "Cascading Bug" as described in this blog post: A “cascading bug” is a bug with an effect that isn’t confined to a particular software element, but rather its effect “cascades” into other elements as well. I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even within a test driven development environment. My questions are... What are some strategies you use, beyond the basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment? Does the presence of such problems indicate a breakdown in the software development process or simply a by-product of complex software systems?

    Read the article

  • VS2010 crashes when opening a vsp generated using VS 2012

    - by Tarun Arora
    I recently profiled some web applications using Visual Studio 2012, and a vsp (Visual Studio Profile) file was generated as a result of the profiling session. I could successfully open the vsp file in Visual Studio 2012 as expected, but when I tried to open the vsp file in Visual Studio 2010, the VS2010 IDE crashed. As a responsible citizen I raised bug # 762202 on the Microsoft Connect site using the Microsoft Visual Studio 2012 Feedback Client. Note – in case you didn't already know, a VSP generated in Visual Studio 2012 is not backward compatible. Please refer below for the steps to reproduce the issue and the resolution of the Connect bug.

    1. Behaviour and Steps to Reproduce the Issue

    Description: I have generated a vsp file by using the Visual Studio 2012 standalone profiler. When I try to open the vsp file in Visual Studio 2010, the IDE crashes. I understand that a vsp generated by VS 2012 cannot be opened in VS 2010, but the IDE crashing is not the behaviour I would expect to see.

    Steps to reproduce the issue:

    1. Pick up the standalone profiler from the VS 2012 installation media. The folder has both x64 and x86 installers; since the machine I am using is x64, I installed the x64 version of the standalone profiler.

    2. I configured the system path by setting the 'environment variable' path to where the profiler is installed. In my case this is C:\Program Files (x86)\Microsoft Visual Studio 11.0\Team Tools\Performance Tools

    3. I created a new environment variable _NT_SYMBOL_PATH and set its value to CACHE*C:\SYMBOLSCACHE;SRV*C:\SYMBOLSCACHE*HTTP://MSDL.MICROSOFT.COM/DOWNLOAD/SYMBOLS;\\FOO\BUILD1234

    4. Open up CMD as an administrator and run 'VSPerfASPNETCmd /tip http://localhost:56180/ /o:C:\Temp\SampleEISK.vsp'

    5. This generates the following message on the cmd:

        Microsoft (R) VSPerf ASP.NET Command, Version 11.0.0.0
        Copyright (C) Microsoft Corporation. All rights reserved.

        Configuring and attaching to ASP.NET process. Please wait.
        Setting up profiling environment.
        Starting monitor.
        Launching ASP.NET service.
        Attaching Monitor to process.
        Launching Internet Explorer.
        The profiler is attached to ASP.net. Please run your application scenario now.
        Press Enter to stop data collection...

    6. I perform certain actions and then come back to the cmd and hit Enter to shut down the profiling. Once I do this, the following message is written to the cmd:

        Press Enter to stop data collection...
        Profiling now shut down.
        Report file "C:\Temp\SampleEISK.vsp" was generated.
        Running VsPerfReport, packing symbols into the .VSP.
        Shutting down profiling and restarting ASP.NET. Please wait.
        Restarting w3wp.exe.

    7. I look in the C:\Temp folder and I can see the SampleEISK.vsp file generated. I can successfully open this file in Visual Studio 2012.

    8. When I try to open the vsp file in VS 2010, the VS 2010 IDE crashes. Kaboooom!

    What I would expect to happen: I expect to receive a message "VS 2010 does not support the vsp file generated by VS 2012".

    What actually happened: The VS 2010 IDE crashed.

    2. Resolution

    This is a valid bug! However, there isn't much value in releasing a hotfix for this issue. Refer below to the resolution provided by the Visual Studio Profiler Team:

    "Thank you for taking the time to report this issue. We completely agree that Visual Studio 2010 should not crash. However in this particular case this is not a bug we are going to retroactively release a fix to 2010 for at this point. Given that a fix would not unblock the scenario of opening a 2012 created file on Visual Studio 2010, and there is not an active update channel for Visual Studio 2010 other than manually locating and installing hot fixes, we will not be fixing this particular issue. Best Regards, Visual Studio Profiler Team"

    Though it would be great to improve the behaviour, this is not a defect that will stop you from progressing in any way. It is important to note, however, that VSP files generated by Visual Studio 2012 are not backward compatible, so you should refrain from opening these files in Visual Studio 2010.

    Read the article

  • What could be a reason for cross-platform server applications developer to make his app work in multiple processes?

    - by Kabumbus
    We are considering developing a server application that is heavily loaded with processing big data streams. The app will run on one powerful server, and it will be developed as a cross-platform application so that it works on Windows, Mac OS X and Linux: the same code on many platforms, for a standalone server architecture. We wonder what benefits distributing the application not only over threads but also over processes would bring to programmers and to server end users, and why. Some people have told me that even with 48 cores, 4 threads of a process would be scheduled by the OS across all cores... is that true, by the way?

    Read the article

  • Scheme of work contract

    - by Tommy
    I'm in the process of setting up a (one-man) company and got to an item on my list: "contracts and insurance". I will primarily be offering custom software development, but may also offer some "open source solutions": install, configure, manage, etc. I'm not sure how to approach a contract with a potential customer. I want to be flexible and offer the customer rights over the product (when not using open source, of course), but I obviously want and need to be able to reuse code already written, as well as any future work. Is this possible, or is it just something that people do but strictly shouldn't? Is there a standard freelancing / contracting developer agreement? Going to a lawyer is surely an answer, but a very expensive one! If not, do you end up with a fresh contract for each job / client and lots of trips to solicitors?

    Read the article

  • Python service using Upstart on Ubuntu

    - by Soumya Simanta
    I want to deploy a heartbeat service (a Python script) as a service using Upstart. My understanding is that I have to add an /etc/init/myheartbeatservice.conf with the following contents:

        # my heartbeat service
        description "Heartbeat monitor"

        start on startup
        stop on shutdown

        script
            exec /path/to/my/python/script.py
        end script

    My script starts another service process, then monitors the processes and sends a heartbeat to an outside server regularly. Are startup and shutdown the correct events? Also, my script creates a new thread. I'm assuming I also need to add 'expect fork' or 'expect daemon' to my conf file? Thanks.
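
    A minimal sketch of how such a job file is often written (illustrative only; the exact events and whether an expect stanza is needed depend on how the script behaves, and the paths are placeholders):

        # /etc/init/myheartbeatservice.conf
        description "Heartbeat monitor"

        # start once the normal multi-user runlevels are reached, stop on shutdown/reboot
        start on runlevel [2345]
        stop on runlevel [016]

        # restart the monitor automatically if it dies
        respawn

        # only needed if script.py forks into the background once;
        # use "expect daemon" if it double-forks, and omit both if it stays in the foreground
        # (threads created inside the process do not require an expect stanza)
        expect fork

        exec /usr/bin/python /path/to/my/script.py

    Running the script in the foreground and letting Upstart track the single process is usually the simplest option, since getting the expect stanza wrong leaves Upstart following the wrong PID.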

    Read the article

  • java framework for a web wizard

    - by snippl
    I'd like to create a web wizard, i.e. a wizard on/within a website. I know there are several Java frameworks for wizards, but none for the web in particular. If I search for a framework, I always get wizards that create a website, applet, servlet and whatnot, but never how to create such a wizard itself. So the question(s): What framework should I use to create a web wizard, and what frameworks are there? The wizard will be the visualisation for a BPEL process deployment on Apache ODE. I thought a webapp would be an easy way because Apache is running anyway, but I'm not bound to that; perhaps someone has a better idea.

    Read the article

  • What issues carry the highest risk in a software project?

    - by Mehrdad
    Clearly, software projects are different from those in other industries in many respects, for instance quality assurance and project progress measurement. The unique characteristics of software projects also make the risk management process unique. Many issues in a project might lead to unacceptable delays or failure to deliver business value; they might even turn the project into a complete disaster. What are the deadliest risk factors in a software project, and how do you analyze, prevent and handle them? In particular, I'm interested in the issues that you can detect from the beginning and should keep an eye on (for example, you might be told about a third-party API that the current application uses and that lacks documentation). Please share your experiences if they are relevant.

    Read the article

  • open source database project

    - by Jeff V
    What is the best way to build an open source database? I would like to build a database of all vehicles and the related maintenance information (i.e. oil weight and quantity, tire pressure, windshield wipers, etc.). Currently this information is fragmented or simply not put online in an open way. Once collection begins, I would want to import it into a DB and then be able to distribute it freely. Is there a process (site or group) through which I can start gathering this information in a reliable and verifiable way? Are there any issues that I should watch out for?

    Read the article

  • Profiling NetBeans 7.0 Beta 2 and Reporting Problems

    - by christopher.jones
    With NetBeans 7.0 recently going into the Beta 2 phase, now is the time to test it out properly and report issues. The development team has been squashing bugs, including memory issues with the PHP bundle. There are some great new PHP-related features in NetBeans 7.0, so you know you want to try it out.

    If you identify something wrong with NetBeans, please report it following the guidelines at http://wiki.netbeans.org/IssueReportingGuidelines

    Depending on the issue, the data to attach to the report is described at http://wiki.netbeans.org/FaqLogMessagesFile and http://wiki.netbeans.org/FaqProfileMeNow

    If you have a memory issue then a memory dump would also be useful. Run the jmap tool for this. There is some background information at http://wiki.netbeans.org/FaqMemoryDump. Here's how I used it.

    First I set my environment to match the JDK used by NetBeans. In my case I am using a nightly build, so the JDK is in the configuration file under $HOME/netbeans-dev-201102210501:

        $ egrep netbeans_jdkhome $HOME/netbeans-dev-201102210501/etc/netbeans.conf
        netbeans_jdkhome="/home/cjones/src/jdk1.6.0_24"
        $ export JAVA_HOME=/home/cjones/src/jdk1.6.0_24
        $ export PATH=$JAVA_HOME/bin:$PATH

    Next, I found the correct process number to examine:

        $ ps -ef | egrep 'netbeans|jdk'
        cjones   23230     1  0 16:07 ?        00:00:00 /bin/bash /home/cjones/netbeans-
        cjones   23438 23230  2 16:07 ?        00:00:09 /home/cjones/src/jdk1.6.0_24/bin

    Finally I used the parent JDK process as the jmap argument:

        $ jmap -histo:live 23438
         num     #instances         #bytes  class name
        ----------------------------------------------
           1:         12075        9028656  [I
           2:         49535        6581920  <constMethodKlass>
           3:         49535        3964128  <methodKlass>
           4:         80256        3840776  <symbolKlass>
           5:         36093        3635336  [C
           6:          5095        3341312  <constantPoolKlass>
           7:          5095        2486016  <instanceKlassKlass>
           8:          4325        1961432  <constantPoolCacheKlass>
           9:         18729        1763976  [B
          10:         59952        1438848  java.util.HashMap$Entry
          . . .

    This histogram memory report will help identify the kind of memory issue you are seeing. It is not as complete as a full heap dump (jmap -dump:live,file=/tmp/nbheap.log 23438, often tens of megabytes), but it is much more easily attached to a bug report.

    If you want to keep up to date with NetBeans, nightly builds are at: http://bits.netbeans.org/download/trunk/nightly/latest/zip/

    Read the article

  • Creating my own kill cam

    - by DalexL
    I plan on creating my own kill cam system for a sandbox tool set. After thinking about the mechanics of the kill cam itself, however, I'm quite lost. I'm trying to recreate the ones commonly seen in Call of Duty games that show, from the view of the killer, the actual killing scene.

    My thoughts:

    - I can't just start keeping things in memory when people kill others, because I wouldn't know when to begin the 'recording process'; there is no way for me to accurately determine when somebody is 'about' to kill someone.
    - My only real idea so far is to have a complete duplicate of everything loaded off to the side, copying all the movement from the original world but with a 10 second delay. That way, all the kill cams would be 10 seconds long and the victim's camera would just be moved to their killer in the second world.

    My questions: Is there already an accepted way to do this? Does anybody have any good ideas for something like this? Thanks if you can!
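
    One common alternative to running a delayed duplicate world is to record a rolling window of entity snapshots every tick and, when a kill happens, replay the buffered frames from the killer's point of view. A minimal sketch of that idea follows (the types and fields are hypothetical placeholders, not an established engine API):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.List;

        class KillCamRecorder {
            // Hypothetical per-entity state captured once per tick.
            record EntityState(int entityId, float x, float y, float z, float yaw, float pitch) {}
            record Snapshot(long timeMillis, List<EntityState> entities) {}

            private final Deque<Snapshot> buffer = new ArrayDeque<>();
            private final long windowMillis;

            KillCamRecorder(long windowMillis) {
                this.windowMillis = windowMillis;   // e.g. 10_000 for a 10 second kill cam
            }

            // Called every tick with the current world state.
            void capture(Snapshot snapshot) {
                buffer.addLast(snapshot);
                // Drop frames that have fallen outside the replay window.
                while (!buffer.isEmpty()
                        && snapshot.timeMillis() - buffer.peekFirst().timeMillis() > windowMillis) {
                    buffer.removeFirst();
                }
            }

            // Called when a kill occurs: the replay camera plays back these frames
            // while tracking the killer's entityId.
            List<Snapshot> framesForReplay() {
                return List.copyOf(buffer);
            }
        }

    Recording continuously (rather than trying to predict a kill) sidesteps the "when do I start recording" problem, at the cost of holding the last N seconds of state in memory.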

    Read the article

  • Silverlight XAP Signing Certificate promotion from Thawte

    And the offers keep coming in! Another one of our key partners for testing XAP signing for trusted applications was Thawte. Their group helped provide us with valid certificates to verify that their process and signing worked as expected (and verified) for Silverlight 4. Today I got an email from their marketing department saying they would like to offer Silverlight developers a discount on Thawte code-signing certificates: $89 for 1 year, about 70% off their current rate. That's pretty amazing of...

    Read the article

  • Which specific practices could be called "software craftsmanship" rather than "software engineering"?

    - by FinnNk
    Although not a new idea, there seems to have been a big increase in interest in software craftsmanship over the last couple of years (notably, the often-recommended book Clean Code's full title is Clean Code: A Handbook of Agile Software Craftsmanship). Personally I see software craftsmanship as good software engineering with an added interest in ensuring that the end result is a joy to work with (both as an end user and as someone maintaining that software), and with a focus that sits more at the coding level than at the higher-level process level. To draw an analogy: lots of buildings were constructed in the 50s and 60s in a very modern style that took very little account of the people who would be living in them or of how those buildings would age over time. Many of those buildings rapidly developed into slums or have been demolished long before their expected lifespans. I'm sure most developers with a few years under their belts will have experienced similar codebases. What are the specific things that a software craftsman might do that a software engineer (possibly a bad one) might not?

    Read the article

  • ffmpeg: cut multiple input files with seeking to one output file

    - by Josef Kufner
    I have a list of video files (loaded from a database), each with the start and end time of the requested interval:

        # file    begin  end
        v1.mp4    1:01   2:01
        v2.mp4    3:02   3:32
        v3.mp4    2:03   5:23

    And I need to create a single video file containing these intervals:

        [0:00]---v1---[2:00]---v2---[2:30]---v3---[5:50]

    I prefer using ffmpeg, since it is installed on the server. The caller program is written in PHP. It is easy to cut one input to one output (argument escaping removed for clarity):

        exec("ffmpeg -ss $begin -i $input_file -ss $begin -c copy $output_file");

    Is there any easier way than executing ffmpeg for each interval and then executing it once more to concatenate the prepared clips together? I really do not want to have a lot of temporary files or deal with complex process handling.
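
    One possible single-pass approach (a sketch, not from the original question) is ffmpeg's concat demuxer, which accepts per-file inpoint/outpoint directives in a list file that the PHP caller could generate. Note that with -c copy the cut points snap to keyframes and all clips must share codecs and parameters, so re-encoding may be needed for frame-accurate cuts:

        # cuts.txt -- generated from the database rows, one block per interval
        file 'v1.mp4'
        inpoint 00:01:01
        outpoint 00:02:01
        file 'v2.mp4'
        inpoint 00:03:02
        outpoint 00:03:32
        file 'v3.mp4'
        inpoint 00:02:03
        outpoint 00:05:23

        # single invocation, no intermediate clip files
        ffmpeg -f concat -safe 0 -i cuts.txt -c copy output.mp4

    This keeps the process handling on the PHP side down to writing one text file and running one command.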

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The list below highlights two things: firstly, where you can add your own custom logic via Groovy code (the "Groovy" flag on each entry), and secondly, when you might use each particular feature. Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.

    Field Triggers (Groovy: Y) - React to run-time data changes. Only fired when the field is changed and upon submit.

    Object Triggers (Groovy: Y) - To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.
    UI trigger points:
    - After Create - fires when a new object record is created. Commonly used to set default values for fields.
    - Before Modify - fires when the end-user tries to modify a field value. Could be used for generic warnings or extra security logic.
    - Before Invalidate - fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
    - Before Remove - fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
    Database trigger points:
    - Before Insert in Database - fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
    - After Insert in Database - fires after a new object is inserted into the database. Could be used to create a complementary record.
    - Before Update in Database - fires before an existing object is modified in the database. Could be used to check dependent record values.
    - After Update in Database - fires after an existing object is modified in the database. Could be used to update a complementary record.
    - Before Delete in Database - fires before an existing object is deleted from the database. Could be used to check dependent record values.
    - After Delete in Database - fires after an existing object is deleted from the database. Could be used to remove dependent records.
    - After Commit in Database - fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
    - After Changes Posted to Database - fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.

    Field Validation (Groovy: Y) - Displays a user-entered error message based on Groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.

    Object Validation (Groovy: Y) - Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.

    Object Workflows (Groovy: Y) - All object workflows are fired upon either record creation or update, along with the option of adding a custom Groovy firing condition. The associated actions are:
    - Field Updates (Groovy: Y) - change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs); the value field permits Groovy logic entry.
    - E-Mail Notification (Groovy: N) - sends an email notification to specified users/roles. Templates support run-time value tokens and rich text.
    - Task Creation (Groovy: N) - for adding standard tasks for use in the worklist functionality.
    - Outbound Message (Groovy: N) - will create and send an XML payload of the related object SDO to a specified endpoint.
    - Business Process Flow (Groovy: N) - intended for approval using the seeded process, however it can also trigger custom BPMN flows.

    Global Functions (Groovy: Y) - Utility functions that can be called from any Groovy code in Application Composer (across applications).

    Object Functions (Groovy: Y) - Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).

    Add Custom Fields (Groovy: Y) - When adding custom fields there are a few places you can include Groovy logic:
    - Default Value (Groovy: Y) - to add logic for setting the default value when new records are entered.
    - Conditionally Updateable (Groovy: Y) - to add logic to set the field to read-only or not.
    - Conditionally Required (Groovy: Y) - to add logic to set the field to required or not.
    - Formula Field (Groovy: Y) - used to provide a new aggregate field that is entirely based on Groovy logic and other field values.

    Simplified UI Layouts - Advanced Expressions (Groovy: Y) - Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.

    Related references: this blog's Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide.

    Read the article

  • Ubuntu 12.04 boot freezes

    - by Lajos Soukup
    I installed Win7 on one of my machines and it works without any problem. Now I cannot install Ubuntu 12.04 (or 11.10, 11.04) because the installation process stops at a very early stage: I can see that ISOLINUX is loaded, then some very simple icons, then text messages from the kernel. The last message I can see is:

        [6.177983] sr 5:0:0:0: attached scsi general ss1 type 5

    At that point the system stops. I tried different versions (11.04), I disabled and then disconnected the HDD, but the same problem occurred regardless of my actions. I can boot a GParted live CD, but that system cannot see my HDD. On the other hand, I can boot the installation CD of the latest stable Debian distribution without any problem, but I would prefer to use Ubuntu, because I use that distribution on all of our machines. Configuration: Asus P8H61-MLE motherboard, Gigabyte GTS 450, Intel i5, 4 GB RAM, WD 1 TB SATA HDD. Any idea? Best regards, Lajos

    Read the article

  • Is the copy/paste approach professionally viable when working with the Google Maps API?

    - by Ian Campbell
    I find that I understand many of the JavaScript concepts used in the Google Maps API code, but then again there is quite a bit of syntax that is way over my head. For example, the geocoder syntax seems to be Ajax-like, though I don't understand what is happening under the hood (especially with lines like results[0].geometry.location). I am able to modify the body of if (status == google.maps.GeocoderStatus.OK) for different purposes, though. So, given that I am able to take various code from the Developer's Guide and rework it to an extent for my own purposes, all the while not fully understanding what Google Maps is actually doing, does this make me a copy-paste programmer? Is this a bad practice, or is it professionally viable? I am, of course, interested in learning as much as I can, but what if time constraints outweigh the learning process?

    Read the article

  • Cloudcel: Excel Meets the Cloud

    - by kaleidoscope
    Cloudscale is launching Cloudcel. Cloudcel is the first product that demonstrates the full power of integrated "Client-plus-Cloud" computing. You use desktop Excel in the normal way, but can also now seamlessly tap into the scalability and massive parallelism of the cloud, entirely from within Excel, to handle your Big Data. Building an app in Cloudcel is really easy – no databases, no programming. Simply drop building blocks onto the spreadsheet (in any order, in any location) and launch the app to the cloud with a single click. Parallelism, scalability and fault tolerance are automatic. With Cloudcel, you can process real-time data streams continuously, and get alerts pushed to you as soon as important events or patterns are detected ("set it and forget it"). Cloudcel is offered as a pay-per-use cloud service – so no hardware, no software licenses, and no IT department required to set it up. Private cloud deployments are also available. Please find the links below for more detail: http://billmccoll.sys-con.com/node/1326645 and http://cloudcel.com/

    Read the article

  • How to Disable or Uninstall Android Bloatware

    - by Chris Hoffman
    Manufacturers and carriers often load Android phones with their own apps. If you don't use them, they just clutter your system and sometimes run in the background, draining resources. Take control of your device and stop the bloatware. We'll be focusing on disabling – also known as "freezing" – bloatware here. It's a safer process than uninstalling the bloatware completely, and is also easier to accomplish with free apps.

    Read the article
