Search Results

Search found 8429 results on 338 pages for 'batch processing'.

Page 148/338 | < Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >

  • Pro Google App Engine developer interview questions (with answers)

    - by WooYek
    What are good questions to determine whether an applicant is a pro Google App Engine developer? Questions that can distinguish someone who is not an ad-hoc GAE programmer but is really doing professional GAE development, with all areas covered (e.g. performance, transactions, async/batch data processing). Please provide answers, so that an intermediate developer (such as myself :) can interview someone more experienced. Please avoid open questions. If possible, please provide a link to the part of the documentation that covers the topic in question. Please keep one interview question/answer per response for a better reading experience and easier interview preparation.

    Read the article

  • Tuxedo 12c

    - by JuergenKress
    The Tuxedo 12c (12.1.1) release is now generally available. This major release includes a significant number of new features. In case you missed the launch webcast, you can watch it on demand. Key new features include: Cloud Ready Infrastructure Optimized for Exalogic with 8X throughput; Management/Monitoring Integrated with Enterprise Manager 12c; For Mainframe COBOL Applications running on CICS, IMS, Batch; New Messaging Solution: Tuxedo Message Queue 12c; Ease of Application Development; Solaris Studio IDE for Developing Tuxedo Applications; Extend C, C++, COBOL Applications with Java POJOs; Accelerated Migration of Large-scale Mainframe Applications. At our WebLogic Community Workspace you can get the latest ppt presentations for your customer meetings: Tux ART 12c Launch Webcast Hasan Ajay v18.pptx, Tux12claunch-techwebcast_v11.pptx, Tuxedo_on_exalogic_external_v3.pptx. For more Tuxedo information, please visit the WebLogic Community Workspace (WebLogic Community membership required). For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • What's Your Supply Chain+Manufacturing Strategy for Success

    - by [email protected]
    Forward-thinking enterprises look to eliminate their dependence on legacy applications that manage information in batch, replacing them with real-time, integrated, modern information management. With rapid manufacturing and global supply chains much more complex today, and with the pace of change ever increasing, leading organizations need better ways to orchestrate their supply chain synchronization with their partner and customer base. EM magazine's Mar/Apr '10 edition covers this topic in the article "Strategising for Success" (pgs 26-27), which discusses the options available to organizations as they drive improvements in the levels of collaboration with their partners, suppliers, shippers, distributors and ultimately their end-users, the customer! I'll post the link to the article here as soon as I validate/confirm it.

    Read the article

  • Best practices for managing deployment of code from dev to production servers?

    - by crosenblum
    I am hoping to find an easy tool or method that allows managing our code deployment. Here are the features I hope this solution has: either web-based or a batch file that, given a list of files, will connect to our production server, back up those files into separate folders, zip them and put them in a backup code folder. Then it records the name, date/time, and purpose of the deployment. Then it sends the files to their proper spot on the production server. I don't want too complex an interface for doing the deployments, because then it might never get used. Or is what I am asking for too unrealistic? I just know that my self-discipline isn't perfect, and I'd rather have a tool I can rely on to do what needs to be done than my own memory of what exact steps I have to take every time. How do you make sure everything gets deployed correctly, and have an easy rollback in case of any mistakes?
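
    For illustration only, here is a minimal Python sketch of the workflow described above (back up, zip, record, then copy); every path and name is a placeholder, and it assumes the production share is reachable as a plain network path rather than via any particular deployment tool:

      import datetime
      import os
      import shutil
      import zipfile

      # Placeholder locations - adjust for your environment.
      PROD_ROOT = r"\\prodserver\webroot"
      BACKUP_ROOT = r"\\prodserver\code_backups"
      LOG_FILE = os.path.join(BACKUP_ROOT, "deployments.log")

      def deploy(files, purpose):
          """Back up the current production copies, record the deployment, then push the new files."""
          stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
          backup_dir = os.path.join(BACKUP_ROOT, stamp)
          os.makedirs(backup_dir, exist_ok=True)

          # 1. Zip up whatever is currently in production for easy rollback.
          with zipfile.ZipFile(os.path.join(backup_dir, "backup.zip"), "w") as zf:
              for rel in files:
                  prod_path = os.path.join(PROD_ROOT, rel)
                  if os.path.exists(prod_path):
                      zf.write(prod_path, arcname=rel)

          # 2. Record the name, date/time and purpose of the deployment.
          with open(LOG_FILE, "a") as log:
              log.write("%s\t%s\t%s\n" % (stamp, purpose, ", ".join(files)))

          # 3. Copy the new files to their proper spot on the production server.
          for rel in files:
              dest = os.path.join(PROD_ROOT, rel)
              os.makedirs(os.path.dirname(dest), exist_ok=True)
              shutil.copy2(rel, dest)

      # Example: deploy(["index.cfm", "js/app.js"], "login page bug fix")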

    Read the article

  • Handbrake-powered VidCoder gets a native 64-bit version

    A while back, VidCoder -- the Windows video disc ripping program -- added support for Blu-ray discs. With Handbrake's engine under the hood, VidCoder offers an easy-to-use interface and simple batch processing of your video files. With the release of version 0.8, there's also now a native 64-bit version for those of you running Windows x64. A number of stability tweaks have also been introduced. As Baz pointed out in our comments last time, VidCoder is particularly useful on netbooks. If you've got a 1024x600 screen, Handbrake may not even launch for you -- but VidCoder will fire up just fine. Take the new 64-bit version for a spin, and share your thoughts in the comments. Download VidCoder

    Read the article

  • Week in Geek: Rogue Antivirus Caught Using AVG Logo Edition

    - by Asian Angel
    This week we learned how to quickly cut a clip from a video file with Avidemux, tile windows, remote control a desktop using an iOS device, take advantage of Windows 7 libraries, turn a home Ubuntu PC into a LAMP web server, and enable desktop notifications for Gmail in Chrome; we also looked at what image channels are and what they mean, and more.

    Read the article

  • Weekend Project: Make Your Own Ferromagnetic Fluid

    - by Jason Fitzpatrick
    Experiments this simple and fun give you no reason to leave all science-based goofing off to the professionals: whip up a beaker of ferromagnetic fluid to capture magnetic waves in motion. The premise is simple: by combining a viscous liquid (in this case vegetable oil) with a magnetic powder (in this case MICR copy toner) and introducing a strong magnetic source (such as neodymium rare earth magnets), you can actually see the magnetic waves in physical space. It’s like the old magnetic-filings-on-the-tabletop trick, but in 3D. Check out the video above to see how you can mix up a batch of your own. How to Make Magnetic Fluid [YouTube]

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is. Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load. From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues. The Default WorkManager handles thread management and performs self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the Default WorkManager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them. The Default WorkManager, as its name says, is the work manager used by default; thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The Default WorkManager belongs to a thread pool, and the initial thread pool comes with only five threads; that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy: you have two options to do so.

    1) Modify config.xml. Just add the following line(s) to your server definition:

        <server>
            <name>AdminServer</name>
            <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
            <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
            [...]
        </server>

    2) Add some JVM parameters. Add the following system properties to setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

        -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100

    Reboot WLS and see that the option has been taken into account.

    Disadvantage: so far so good, but there is a disadvantage to tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to it being undersized, then important work that WLS might need to do could be starved. So, while limiting the self-tuning pool would limit the Default WorkManager, it also limits all the other internal WorkManagers that WLS uses. The best alternative is therefore to override the Default WorkManager: create a WorkManager for the application and assign that WorkManager to the application instead of tuning the Default WorkManager.

    Read the article

  • libgdx - removing the circle outline rendered on Box2d CircleShape

    - by Brett
    How can I remove the outline on the CircleShape below?

        CircleShape circle = new CircleShape();
        circle.setRadius(1f);
        ...

    using

        batch.draw(textureRegion, position.x - 1, position.y - 1, 1f, 1f, 2, 2, 1, 1, angle);

    I use this to set the body for a Box2D collision, but I get a silly circle shape around my texture in libGDX, i.e. my textured sprite (ball) has a circle over the top of it with a line running from the center along the radius. Any ideas on how to remove the overlying circle lines?

    Read the article

  • Updated Whitepapers for OUFW 4.0.1

    - by Anthony Shorten
    The whitepapers are progressively being updated for the new facilities in Oracle Utilities Application Framework V4.0.1. The documents will now cover other versions of the Oracle Utilities Application Framework as well (V2.1, V2.2, V4.0.1). The first round of updates is now available on My Oracle Support: 942074.1 - XAI Best Practices; 836362.1 - Batch Best Practices; 773473.1 - Oracle Utilities Application Framework Security Overview. These have been updated for the new whitepaper format as well as in content. Content that relates to specific versions of the Framework is marked accordingly, and new content since the last update is also indicated.

    Read the article

  • Opening a new Window from ASP.NET code behind

    - by TATWORTH
    At http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx there is an excellent post on how to open a new window from code behind. The purists may not like it, but it helped solve a problem for a client's client. Here is an update for VS2010 users:

        using System;
        using System.Web;
        using System.Web.UI;

        /// <summary>
        /// Response helper for opening a popup window from code behind.
        /// </summary>
        public static class ResponseHelper
        {
            /// <summary>
            /// Redirect to popup window
            /// </summary>
            /// <param name="response">The response.</param>
            /// <param name="url">URL to open to</param>
            /// <param name="target">Target of window _self or _blank</param>
            /// <param name="windowFeatures">Features such as window bar</param>
            /// <remarks>
            ///   <list type="bullet">
            ///     <item>
            ///     From http://weblogs.asp.net/infinitiesloop/archive/2007/09/25/response-redirect-into-a-new-window-with-extension-methods.aspx
            ///     </item>
            ///     <item>
            ///     Note: If you use it outside the context of a Page request, you can't redirect to a new window. The reason is the need to call the ResolveClientUrl method on Page, which I can't do if there is no Page. I could have just built my own version of that method, but it's more involved than you might think to do it right. So if you need to use this from an HttpHandler other than a Page, you are on your own.
            ///     </item>
            ///     <item>
            ///     Beware of popup blockers.
            ///     </item>
            ///     <item>
            ///     Note: Obviously when you are redirecting to a new window, the current window will still be hanging around. Normally redirects abort the current request -- no further processing occurs. But for these redirects, processing continues, since we still have to serve the response for the current window (which also happens to contain the script to open the new window, so it is important that it completes).
            ///     </item>
            ///     <item>
            ///     Sample call: Response.Redirect("popup.aspx", "_blank", "menubar=0,width=100,height=100");
            ///     </item>
            ///   </list>
            /// </remarks>
            public static void Redirect(this HttpResponse response, string url, string target, string windowFeatures)
            {
                if ((String.IsNullOrEmpty(target) || target.Equals("_self", StringComparison.OrdinalIgnoreCase)) && String.IsNullOrEmpty(windowFeatures))
                {
                    response.Redirect(url);
                }
                else
                {
                    // Safe cast so a non-Page handler falls through to the explicit check below.
                    Page page = HttpContext.Current.Handler as Page;
                    if (page == null)
                    {
                        throw new InvalidOperationException("Cannot redirect to new window outside Page context.");
                    }
                    url = page.ResolveClientUrl(url);
                    string script;
                    if (!String.IsNullOrEmpty(windowFeatures))
                    {
                        script = @"window.open(""{0}"", ""{1}"", ""{2}"");";
                    }
                    else
                    {
                        script = @"window.open(""{0}"", ""{1}"");";
                    }
                    script = String.Format(script, url, target, windowFeatures);
                    ScriptManager.RegisterStartupScript(page, typeof(Page), "Redirect", script, true);
                }
            }
        }

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), then does some processing, and then stores the results back in the database. The complete process described above can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks. For example, I can show a "Pending" status for tasks in the queue, "In Progress" for running tasks and "Done" for tasks that are completed. Main issue: the first thing I need to find is an algorithm to track how much of its execution the controller method has completed. For instance, this PHP script tracks the progress of an array being counted. There the current state and the state after total execution are known, so it is possible to track progress. But I am not able to devise anything analogous for my case. Maybe what I am trying to achieve is not programmatically possible. If it's not possible, then please suggest a workaround or a completely new approach. If some details are missing you can mention them. Sorry for my ignorance; this is my first post here. I welcome you to point out my mistakes. EDIT: Database outline: the URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in this table, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Sub-links are then extracted from the domain links and stored in a table called sub_link_master. Again the keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can be different. Only the cardinality of the link_result table can be known in advance, which will be equal to the number of keywords multiplied by the number of URLs. I insert multiple records at a time using this resource. Controller outline: the controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords along with generating results and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records; the more records there are, the more time it takes to execute. Consider this scenario: Number of Domain Links = 1; Number of Keywords = 3; Number of Domain Link Results generated = 3 (3 x 1, as described in the question); Number of Sub Links generated = 41; Number of Sub Link Results = 117 (41 x 3 = 123, but some links are not valid or searchable); Approximate time taken for the above process to complete = 55 seconds. The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; if results are still being stored, the task is In Progress.
    I am not clear how I can track this progress.
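
    Sketched in Python for brevity (the question itself is CodeIgniter/PHP): a minimal status check assuming progress can be approximated by comparing rows already stored against the expected row count, and assuming a task_id column exists to group a task's rows (the schema above does not mention one, so treat it as hypothetical):

      # Hypothetical sketch: map stored-vs-expected result counts to a coarse status.
      # Assumes a DB-API style connection `conn` and a task_id column on link_result.

      def task_status(conn, task_id, expected_results):
          cur = conn.cursor()
          # Count how many result rows this task has stored so far.
          cur.execute("SELECT COUNT(*) FROM link_result WHERE task_id = %s", (task_id,))
          stored = cur.fetchone()[0]

          if stored == 0:
              return "Pending"
          if stored < expected_results:
              return "In Progress (%d%%)" % (100 * stored // expected_results)
          return "Done"

      # expected_results for link_result = number of keywords x number of URLs, as described above.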

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System in C++; it is almost complete (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is loosely based on the Artemis framework; however, it is different. I'm not sure if I'll be able to type this out the way my head is processing it. I'm basically going to ask whether I should do something one way over another. Okay, now I'll give a little detail on my Entity System itself. Here are the basic classes that my Entity System uses to actually work:

    Entity - an Id (and some methods to add/remove/get/etc. Components)
    Component - an empty abstract class
    ComponentManager - manages ALL components for ALL entities within a Scene
    EntitySystem - processes entities with specific components
    Aspect - the class that is used to help determine which Components an Entity must contain so a specific EntitySystem can process it
    EntitySystemManager - manages all EntitySystems within a Scene
    EntityManager - manages entities (i.e. holds all Entities, used to determine whether an Entity has been changed, enables/disables them, etc.)
    EntityFactory - creates (and destroys) entities and assigns an ID to them
    Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager; has functions to update and initialise the scene

    Now, in order for an EntitySystem to efficiently know when to check if an Entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call of activate(Entity& e)). Similarly, the EntityManager must know when an Entity is destroyed by the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which in this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. Teams, Tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out a notification when an Entity has been activated, deleted, etc. For example, taken from my EntityFactory:

        void EntityFactory::killEntity(Entity& e)
        {
            // if the entity doesn't exist in the entity manager within the scene
            if(!getScene()->getEntityManager().doesExsist(e))
            {
                return; // go back to the caller! (should throw an exception or something..)
            }

            // tell the ComponentManager and the EntityManager that we killed an Entity
            getScene()->getComponentManager().doOnEntityWillDie(e);
            getScene()->getEntityManager().doOnEntityWillDie(e);

            // notify the listeners
            for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
            {
                (*i)->onEntityWillDie(*this, e);
            }

            _idPool.addId(e.getId()); // add the ID to the pool
            delete &e; // delete the entity
        }

    As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method to make sure each handles it appropriately. Now I realise I could do this without calling it explicitly, with the help of that for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event); however, is this a good idea (good design, or what)? I've gone over the PROS and CONS, I just can't decide what I want to do.

    Calling explicitly:
    PROS: Faster? Since these functions are explicitly called, they can't be "removed".
    CONS: Not flexible; bad design? (friend functions)

    Calling through listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener):
    PROS: More flexible? Better design?
    CONS: Slower? (virtual functions); listeners can be removed, i.e. they may be removed and not get called again during the program, which could cause a crash.

    P.S. If you wish to view my current source code, I am hosting it on BitBucket.

    Read the article

  • Tool to convert Textures to power of two?

    - by 3nixios
    I'm currently porting a game to a new platform. The problem is that the old platform accepted non-power-of-two textures and this new platform doesn't. To add to the headache, the new platform has much less memory, so we want to use the tools provided by the vendor to compress the textures, which of course only accept power-of-two textures. The current workflow is to convert the non-power-of-two textures to DDS with 'texconv', then use the vendor's compression tools in a batch. So, does anyone know of a tool to convert textures to their nearest power-of-two counterparts? Thanks
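
    If no ready-made tool turns up, the resizing step itself is small enough to script. A minimal Python sketch assuming the Pillow imaging library (file names are placeholders, and the vendor compression tools would still run afterwards):

      from PIL import Image  # assumes the Pillow imaging library is installed

      def next_power_of_two(n):
          # Smallest power of two >= n, e.g. 600 -> 1024.
          p = 1
          while p < n:
              p <<= 1
          return p

      def resize_to_power_of_two(src_path, dst_path):
          img = Image.open(src_path)
          width, height = img.size
          new_size = (next_power_of_two(width), next_power_of_two(height))
          # Stretch to the next power of two; padding instead of stretching is an
          # alternative if UV coordinates must be preserved exactly.
          img.resize(new_size, Image.LANCZOS).save(dst_path)

      # Example: resize_to_power_of_two("hud_600x480.png", "hud_1024x512.png")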

    Read the article

  • How to remove java.sql.BatchUpdateException in Grails? [closed]

    - by aman.nepid
    I have a domain class like this:

        class BusinessOrganization {
            static hasMany = [organizationBusinessTypes:OrganizationBusinessType]
            String name
            String icon
            static constraints = {
                name(blank:false,unique:true)
                icon(unique:true)
            }
            String toString() {
                return "${name}"
            }
        }

    When I save some data for the first time it works fine, but the next time it shows this error:

        Error 500: Internal Server Error
        URI: /nLocatePortal/businessOrganization/save
        Class: java.sql.BatchUpdateException
        Message: Batch entry 0 insert into business_organization (version, icon, name, id) values ('0', '', 'dddd', '2') was aborted. Call getNextException to see the cause.

        Around line 24 of grails-app/controllers/com/nlocate/portal/BusinessOrganizationController.groovy
        21:
        22: def save() {
        23:     def businessOrganizationInstance = new BusinessOrganization(params)
        24:     if (!businessOrganizationInstance.save(flush: true)) {
        25:         render(view: "create", model: [businessOrganizationInstance: businessOrganizationInstance])
        26:         return
        27:     }

    Could someone please help me understand why this is happening? I am new to Grails. I have not modified the controllers but I still get this error.

    Read the article

  • Find the latest file by modified date

    - by Rich
    If I want to find the latest file (by mtime) in a (big) directory containing subdirectories, how would I do it? Lots of posts I've found suggest some variation of ls -lt | head (amusingly, many suggest ls -ltr | tail, which is the same but less efficient), which is fine unless you have subdirectories (I do). Then again, you could use find . -type f -exec ls -lt \{\} \+ | head, which will definitely do the trick for as many files as can be specified by one command; i.e. if you have a big directory, -exec...\+ will issue separate commands, therefore each group will be sorted by ls within itself but not over the total set, and the head will therefore pick up the latest entry of the first batch only. Any answers?
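
    A sketch of one way around the per-batch sorting problem (not taken from the posts mentioned above): compare modification times directly instead of relying on ls ordering, here in Python:

      import os

      def latest_file(root="."):
          # Walk the whole tree and keep the path with the greatest mtime,
          # so the comparison covers the total set rather than one batch at a time.
          candidates = (os.path.join(dirpath, name)
                        for dirpath, _, filenames in os.walk(root)
                        for name in filenames)
          return max(candidates, key=os.path.getmtime)

      # Example: print(latest_file("/var/log"))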

    Read the article

  • Login crash loop

    - by dawmail333
    After about the second/third batch of updates on a fresh install of Ubuntu 12.04, I end up getting stuck in an infinite login loop: I enter my password, screen goes black and shows me the end of the initialisation messages (including things like CUPS initialising), and then the greeter reappears. Kicker is, as I'm using gnome-shell, I just decided to uninstall LightDM, ubuntu-desktop and unity-greeter, using GDM as my manager, and the problem still happens in exactly the same manner. I'm lost even as to where to start looking - Xorg logs, LightDM logs (before I removed it), syslog and dmesg logs have held no unusual information at all. I have a TeX assignment due next week, and reinstalling Ubuntu every time I want to work on it is not going to work (nor is using TeX on Windows ;). Anything else I should try?

    Read the article

  • How to completely remove a package that didn't install properly?

    - by chtfn
    I recently installed a package from a PPA on Ubuntu 12.04 beta, but apparently it didn't work out, and now it is giving me errors at every update or install I make, even after deactivating the PPA in my sources. This is what I get when I try uninstalling it from the Ubuntu Software Center:

        installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 295120 files and directories currently installed.)
        Removing oracle-java7-installer ...
        update-alternatives: error: unknown argument `cdrom'
        dpkg: error processing oracle-java7-installer (--remove): subprocess installed pre-removal script returned error exit status 2
        No apport report written because MaxReports is reached already
        Downloading...
        --2012-04-12 13:13:21-- http://download.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
        Resolving download.oracle.com (download.oracle.com)... 203.13.161.233, 203.13.161.234
        Connecting to download.oracle.com (download.oracle.com)|203.13.161.233|:80... connected.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz [following]
        --2012-04-12 13:13:21-- https://edelivery.oracle.com/otn-pub/java/jdk/7u3-b04/jdk-7u3-linux-i586.tar.gz
        Resolving edelivery.oracle.com (edelivery.oracle.com)... 173.223.150.174
        Connecting to edelivery.oracle.com (edelivery.oracle.com)|173.223.150.174|:443... connected.
        HTTP request sent, awaiting response... 302 Moved Temporarily
        Location: http://download.oracle.com/errors/download-fail-1505220.html [following]
        --2012-04-12 13:13:22-- http://download.oracle.com/errors/download-fail-1505220.html
        Connecting to download.oracle.com (download.oracle.com)|203.13.161.233|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 5307 (5.2K) [text/html]
        Saving to: ./jdk-7u3-linux-i586.tar.gz
        0K ..... 100% 4.94M=0.001s
        2012-04-12 13:13:22 (4.94 MB/s) - ./jdk-7u3-linux-i586.tar.gz saved [5307/5307]
        Download done.
        sha256sum mismatch jdk-7u3-linux-i586.tar.gz
        Oracle JDK 7 is NOT installed.
        dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing: oracle-java7-installer
        Error in function:

    I also tried "remove completely" from Synaptic, but it doesn't work either. Thank you in advance for your help!

    Read the article

  • ClearTrace Shows Execution History

    - by Bill Graziano
    The latest release of ClearTrace (Build 38) now shows the execution history of a particular statement. You’ll need to save the trace files to a trace group instead of just using the default. That’s as easy as typing something into the trace group name when you upload the trace; I usually put the server name in this field. Build 38 also re-enables support for statement-level events. If your trace includes RPC:StmtCompleted or SQL:StmtCompleted events, those will be processed and saved. In the results tab you can choose to view statement-level or batch-level events. Please note that saving statement-level events in a trace can generate HUGE trace files very quickly.

    Read the article

  • Common mistakes made by new programmers without CS backgrounds [on hold]

    - by mblinn
    I've noticed that there seems to be a class of mistakes that new programmers without CS backgrounds tend to make that programmers with CS backgrounds tend not to. I'm not talking about not understanding source control, or how to design large programs, or a whole host of other things that both freshly minted CS graduates and non-CS graduates tend not to understand; I'm talking about basic mistakes that having a CS background will prevent a programmer from making. One obvious and well-trod example is that folks who don't have a basic understanding of formal languages will often try to parse arbitrary HTML or XML using regular expressions, and possibly summon Cthulhu in the process. Another fairly common one that I've seen is using common data structures in suboptimal ways, like using a vector and a search function as if it were a hash map. What sorts of other things along these lines would you look out for when on-boarding a batch of newly minted, non-CS programmers?
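
    As a tiny illustration of the data-structure point above (sketched in Python rather than C++; the names are made up):

      # Membership test on a list is a linear scan; on a set it is a hash lookup.
      names = ["user%d" % i for i in range(100000)]

      "user99999" in names       # O(n): walks the list element by element
      name_set = set(names)
      "user99999" in name_set    # O(1) on average: a single hash probe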

    Read the article

  • Java Developers: Is Ant still in the "main stream" for builds? Do we push new developers to learn it?

    - by Sam Goldberg
    We have been slowly replacing batch command files (Windows .bat), which simply jarred up the classes compiled in the developer's IDE, with more comprehensive Ant builds (i.e. get from CVS, clean compile, jar, archive, email, etc.). I've spent a lot of time learning Ant (and debugging issues with it), so I'm most comfortable using it for these tasks. But I wonder if Ant is still in as wide usage as it was when I first started learning it, or whether "the world has moved on" to something newer (and maybe slicker). (I've started to see more Maven build stuff distributed, which I've never used, for example.) The practical import of this question is whether I push new developers to learn Ant, or whether they should be learning something else for builds. I'm never too on top of the trends, so it would be great to hear from other Java developers what they think is the best build tool, and what they think new developers should be learning.

    Read the article

  • Dropbox indicator icon disappears right after login

    - by garvamel
    Even though Dropbox's app indicator disappearing from the tray area seems like a recurrent enough problem, my issue is a little different. When I log in, I can see the app panel populating, and the Dropbox icon does indeed appear (it is configured as a startup application), but after some other icons show up (Bluetooth, battery, etc.) it's gone. It's still running, though. I'm guessing it's having issues with staying pinned, and I don't know how to start addressing this problem. I have tried many if not all of the solutions provided here in the forums for the "icon missing" questions. So far I have: whitelisted everything regarding the panel; uninstalled and reinstalled (with and without rebooting in between); overwritten the current installation; purged the installation from the terminal; installed from the Software Center and from the .deb file; batch-deleted every "dropbox" occurrence from the terminal (files and folders) and reinstalled; and run sudo apt-get install libappindicator1 (it installed, but didn't solve anything). I'm on Ubuntu 12.04 LTS, 64-bit. Any insight would be much appreciated!

    Read the article

  • Cloud to On-Premise Connectivity Patterns

    - by Rajesh Raheja
    Do you have a requirement to convert an Opportunity in Salesforce.com to an Order/Quote in Oracle E-Business Suite? Or maybe you want the creation of an Oracle RightNow Incident to trigger an on-premise Oracle E-Business Suite Service Request creation for RMA and Field Scheduling? If so, read on. In a previous blog post, I discussed integrating TO cloud applications; however, the use cases above are the reverse, i.e. receiving data FROM cloud applications (SaaS) TO on-premise applications/databases that sit behind a firewall. Oracle SOA Suite is assumed to be on-premise, with Oracle Service Bus as the mediation and virtualization layer. The main considerations for the patterns are security, i.e. shielding enterprise resources, and scalability, i.e. minimizing firewall latency. Let me use an analogy to help visualize the patterns: the on-premise system is your home - with your most valuable possessions - and the SaaS app is your favorite on-line store which regularly ships (inbound calls) various types of parcels/items (message types/service operations). You need the items at home (on-premise) but want to safeguard against misguided elements of society (internet threats) who may masquerade as postal workers and vandalize property (denial of service?). Let's look at the patterns.

    Pattern: Pull from Cloud. The on-premise system polls the SaaS apps and picks up the message instead of having it delivered. This may be done using Oracle RightNow Object Query Language or SOAP APIs. This is particularly suited for certain integration approaches wherein messages are trickling in and can be centralized and batched, e.g. retrieving event notifications on an hourly schedule from the Oracle Messaging Service. To compare this pattern with the home analogy, you are avoiding any deliveries to your home and instead go to the post office/UPS/FedEx store to pick up your parcel. Every time. Pros: on-premise assets not exposed to the Internet; firewall issues avoided by only initiating outbound connections. Cons: polling mechanisms may affect performance; may not satisfy near real-time requirements.

    Pattern: Open Firewall Ports. The on-premise system exposes the web services that need to be invoked by the cloud application. This requires opening up firewall ports and routing calls to the appropriate internal services behind the firewall. Fusion Applications uses this pattern, and auto-provisions the services on the various virtual hosts to secure the topology. This works well for service integration, but may not suffice for large volume data integration. Using the home analogy, you have now decided to receive parcels instead of going to the post office every time. A door mail slot cut-out allows the postman to drop small parcels, but there is still concern about cutting new holes for larger packages. Pros: optimal pattern for near real-time needs; simpler administration once the service is provisioned. Cons: needs firewall ports to be opened up for new services; may not suffice for batch integration requiring direct database access.

    Pattern: Virtual Private Networking. The on-premise network is "extended" to the cloud (or an intermediary on-demand / managed service offering) using Virtual Private Networking (VPN) so that messages are delivered to the on-premise system in a trusted channel. Using the home analogy, you entrust a set of keys with a neighbor or property manager who receives the packages, and then drops them inside your home. Pros: individual firewall ports don't need to be opened; more suited for high scalability needs; can support large volume data integration; easier management of one connection vs a multitude of open ports. Cons: VPN setup; specific hardware support; requires the cloud provider to support virtual private computing.

    Pattern: Reverse Proxy / API Gateway. The on-premise system uses reverse proxy "API gateway" software on the DMZ to receive messages. The reverse proxy can be implemented using various mechanisms, e.g. Oracle API Gateway provides firewall and proxy services along with comprehensive security, auditing and throttling benefits. If a firewall already exists, then Oracle Service Bus or Oracle HTTP Server virtual hosts can provide reverse proxy implementations on the DMZ. Custom-built implementations are also possible if specific functionality (such as message store-and-forward) is needed. In the home analogy, this pattern sits in between cutting mail slots and handing over keys. Instead, you install (and maintain) a mailbox on your home premises outside your door. The post office delivers the parcels into your mailbox, from where you can securely retrieve them. Pros: very secure; very flexible. Cons: introduces a new software component; needs DMZ deployment and management.

    Pattern: On-Premise Agent (Tunneling). A lightweight "agent" software sits behind the firewall and initiates the communication with the cloud, thereby avoiding firewall issues. It then maintains a bi-directional connection with either pull- or push-based approaches using (or abusing, depending on your viewpoint) the HTTP protocol. Programming protocols such as Comet, WebSockets, HTTP CONNECT, HTTP SSH Tunneling, etc. are possible implementation options. In the home analogy, a resident receives the parcel from the postal worker by opening the door; however, you still take precautions with chain locks and package inspections. Pros: lightweight software; IT doesn't need to set up anything. Cons: may bypass critical firewall checks, e.g. virus scans; separate software download; proliferation of non-IT managed software.

    Conclusion: The patterns above are some of the most commonly encountered ones for cloud to on-premise integration. Selecting the right pattern for your project involves looking at your scalability needs, security restrictions, sync vs asynchronous implementation, near real-time vs batch expectations, cloud provider capabilities, budget, and more. In some cases the basic "Pull from Cloud" may be acceptable, whereas in others an extensive VPN topology may be well justified. For more details on the Oracle cloud integration strategy, download this white paper.

    Read the article

< Previous Page | 144 145 146 147 148 149 150 151 152 153 154 155  | Next Page >