
  • Oracle University New Courses (Week 10)

    - by swalker
    Oracle University has recently released the following new courses in English:

    Database
      - RAC & Grid Infrastructure for Oracle Solaris System Administration (1 day)
      - Oracle Database 11g: Performance Tuning (Training On Demand)

    Development Tools
      - Oracle Database: Program with PL/SQL (Training On Demand)

    MySQL
      - MySQL for Database Administrators (Training On Demand)

    Fusion Middleware
      - Oracle WebCenter Portal 11g: Build Portals With Spaces (3 days)
      - Oracle WebCenter Content 11g: Site Studio Essentials (5 days)
      - Oracle BPM 11g Modeling (3 days)

    Business Intelligence & Datawarehousing
      - Oracle BI Applications 7.9.6: Implementation for Oracle EBS (4 days)
      - Oracle BI Applications 7.9.6: Implementation for Siebel CRM (4 days)
      - Oracle BI 11g R1: Build Repositories (Training On Demand)

    Fusion Applications
      - Fusion Applications: Extend Applications with ADF (5 days)

    E-Business Suite
      - R12.x Extend Oracle Applications: Building OA Framework Applications (Training On Demand)

    PeopleSoft
      - PeopleSoft Integration Tools Rel 8.50 (Training On Demand)

    For more information and course dates, contact your local Oracle University team.

  • Great Java EE Concurrency Write-up!

    - by reza_rahman
    As you are aware, JSR-236, Concurrency Utilities for the Java EE platform, is now a candidate for addition to Java EE 7. While it is a critical enabling API, it is not necessarily obvious why it is so important, especially alongside existing features like EJB 3 @Asynchronous, Servlet 3 async, and JAX-RS 2 async. On his blog, DZone MVB Sander Mak does an excellent job of explaining the motivation and importance of JSR-236. Perhaps even more importantly, he discusses potential issues with the API, such as alignment with CDI and Java SE Fork/Join. Read the excellent write-up here!

  • ANTLRWorks 2: Early Access Preview 10

    - by Geertjan
    I took a quick look at how the ANTLRWorks 2 project is getting on... and discovered that today, March 23, the new early access preview 10 has been released: http://www.antlr.org/wiki/display/ANTLR4/1.+Overview I downloaded it immediately and was impressed when browsing through the Java.g file that I also found on the ANTLR site. On the page above, the following enhancements are listed:
      - Add tooltips for rule references
      - Finally fixed the navigator update bug
      - Major improvements to code completion
      - Fix legacy mode
      - Many performance and stability updates
    I've blogged before about how the developers on the above project consider their code completion to be "scary fast". Some discussions have taken place about how code developed by the ANTLRWorks team could be contributed to the NetBeans project, since NetBeans IDE and ANTLRWorks 2 are both based on the NetBeans Platform.

  • Having a generic data type for a database table column, is it "good" practice?

    - by Yanick Rochon
    I'm working on a PHP project where some object (class member) may contain different data types. For example:

        class Property {
            private $_id;     // (PK)
            private $_ref_id; // the object reference id (FK)
            private $_name;   // the name of the property
            private $_type;   // 'string', 'int', 'float(n,m)', 'datetime', etc.
            private $_data;   // ...
            // ..snip.. public getters/setters
        }

    Now, I need to perform some persistence on these objects. Some properties may be a text data type, but nothing bigger than what a varchar may hold. Also, later on, I need to be able to perform searches and sorting. Is it a good practice to use a single database table for this (i.e. is there a non-negligible performance impact)? If it's "acceptable", then what could be the data type for the data column?
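    A common way to keep such a single-table design searchable and sortable is to give the table one nullable column per supported type, so every value stays in a native database type. Below is a minimal sketch of that idea (one possible approach, not taken from the post), using PDO with an in-memory SQLite database; the table and column names are hypothetical, and it assumes the pdo_sqlite extension is available.

        <?php
        // Sketch of a typed EAV table: one nullable value_* column per
        // supported type, so values keep an indexable native type for
        // searching and sorting. All names here are hypothetical.
        $db = new PDO('sqlite::memory:');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $db->exec("
            CREATE TABLE property (
                id             INTEGER PRIMARY KEY,
                ref_id         INTEGER NOT NULL,      -- owning object (FK)
                name           VARCHAR(64) NOT NULL,
                type           VARCHAR(16) NOT NULL,  -- 'string', 'int', 'float', 'datetime'
                value_string   VARCHAR(255),          -- exactly one value_* column is set
                value_int      INTEGER,
                value_float    REAL,
                value_datetime DATETIME
            )");

        // Store each value in the column that matches its declared type.
        $insert = $db->prepare(
            "INSERT INTO property (ref_id, name, type, value_int)
             VALUES (:ref, :name, 'int', :val)");
        $insert->execute([':ref' => 1, ':name' => 'weight', ':val' => 42]);

        // Searches and sorts can then use the typed column directly.
        $rows = $db->query(
            "SELECT ref_id, name, value_int
               FROM property
              WHERE type = 'int' AND value_int > 10
              ORDER BY value_int")->fetchAll(PDO::FETCH_ASSOC);
        print_r($rows);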

  • Will a programming professional certificate from a university enhance my resume/prospects of being hired?

    - by Cliff
    Hi. I am a junior in Software Engineering. I'm planning to take up an online certification in web programming or .NET programming from the University of Illinois through O'Reilly (see link) to keep busy this summer. Will that help me get a great job? I've heard some say that certifications give you a cutting edge; I've heard others say that they don't really matter. What do you think? Thanks so much for your advice and point of view.

  • Desktop does not show after I installed NVIDIA drivers!

    - by Levan
    The desktop does not show after I installed the NVIDIA experimental drivers. I tried the simple NVIDIA proprietary drivers, and they did not work either. Here is how it looks; this is not cropped or anything. After the installation of the drivers, the desktop resolution decreased from 1440×900 to 1024×768. The desktop only shows the Dash and panels when I use the open source drivers. Is there any way to fix this so I can get better performance? Thank you in advance. Update: thanks to rft183, who provided the solution. Here is a link to the post where he describes it: http://ubuntuforums.org/showthread.php?p=12303179#post12303179

  • HD Video Performance Unacceptable

    - by Mike Hasselbeck
    Was wondering if anyone could help me boost HD 1080p video performance on my machine? I've got an AMD Athlon X2 dual-core processor, 2 GB RAM, and an ATI Radeon 5450 video card. I've installed the latest ATI Catalyst drivers, and I installed the hardware acceleration packages and linked them (I believe) to VLC. Still, it's not running as well as I would like. Any thoughts or suggestions? Any help would be much appreciated. Thanks!

  • Database for survey

    - by zfm
    One of my jobs right now is to design a database for a survey. Let's say we have a series of web-based questions, where one page contains one question. Not every person will be given the same questions; those are based on their previous answers and also on randomness. I would like to know whether it is better to have a database like this:

        user   question   answer
        -----  ---------  --------
        userX  question1  answer1A
        userX  question2  answer2C
        userX  question5  answer5F
        userY  question1  answer1B
        userY  question3  answer3B
        userY  question6  answer6D
        ...

    or like this:

        user   q1    q2    q3    q4    q5    q6
        -----  ----  ----  ----  ----  ----  ----
        userX  1A    2C    null  null  5F    null
        userY  1B    null  3B    null  null  6D
        ...

    My idea here is that the second approach seems better; however, I would like to know whether updating a row is (much) slower than inserting a new one. Also, with the first approach, I can omit having some null answers. The total number of questions is fixed; the client won't add any more questions later on. So my question is, what would you do if you were me?
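    For illustration, here is a minimal sketch of the first (one row per answer) layout, using PDO with an in-memory SQLite database. It is not from the post and all names are hypothetical; the point is that recording an answer is a single INSERT rather than an UPDATE of a wide row, and a wide per-user view can still be pivoted out with a query when needed.

        <?php
        // Sketch of the normalized layout: one row per (user, question) pair.
        // Assumes pdo_sqlite; table and column names are hypothetical.
        $db = new PDO('sqlite::memory:');
        $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        $db->exec("
            CREATE TABLE survey_answer (
                user     VARCHAR(32) NOT NULL,
                question VARCHAR(32) NOT NULL,
                answer   VARCHAR(32) NOT NULL,
                PRIMARY KEY (user, question)  -- one answer per user per question
            )");

        // Recording an answer is one INSERT: no wide row to find and update,
        // and unanswered questions simply have no row (no NULL padding).
        $stmt = $db->prepare(
            "INSERT INTO survey_answer (user, question, answer) VALUES (?, ?, ?)");
        foreach ([['userX', 'question1', 'answer1A'],
                  ['userX', 'question2', 'answer2C'],
                  ['userY', 'question1', 'answer1B']] as $row) {
            $stmt->execute($row);
        }

        // A wide, one-row-per-user view can still be pivoted out on demand.
        $wide = $db->query("
            SELECT user,
                   MAX(CASE WHEN question = 'question1' THEN answer END) AS q1,
                   MAX(CASE WHEN question = 'question2' THEN answer END) AS q2
              FROM survey_answer
             GROUP BY user")->fetchAll(PDO::FETCH_ASSOC);
        print_r($wide);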

  • Virtual Developer Day: MySQL - July 31st

    - by Cassandra Clark - OTN
    Virtual Developer Day: MySQL is a one-stop shop for you to learn all the essential MySQL skills. With a combination of presentations and hands-on lab experience, you'll have the opportunity to practice in your own environment and gain more in-depth knowledge to successfully design, develop, and manage your MySQL databases. This FREE virtual event has two tracks tailored for both fresh and experienced MySQL users. Attend the sessions on July 31st and sharpen your skills to:
      - Develop your new applications cost-effectively using MySQL
      - Improve performance of your existing MySQL databases
      - Manage your MySQL environment more efficiently

    When? Wednesday, July 31, 2013
      - Mumbai: 10:30 a.m. (GMT +5:30) - 2:30 p.m.
      - Singapore: 1:00 p.m. (GMT +8:00) - 5:00 p.m.
      - Sydney: 3:00 p.m. (GMT +10:00) - 7:00 p.m.

    Register TODAY!

  • Which one scales better, ASP.NET or PHP?

    - by Marin
    Let's say the website is doing fine (forums, pictures, AJAX), and it needs scaling up/scaling out. I feel more comfortable with PHP, but I have worked with ASP.NET as well. Would you say ASP.NET is much more powerful and robust, and thus easier to scale out? What would be the pros and cons of converting the website to ASP.NET, in regards to scalability and performance, versus keeping it written in PHP? Examples of personal experience in making such a conversion would be a plus. Thank you.

  • Getting Started with Oracle Fusion Human Capital Management

    Designed from the ground up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Fusion Applications are 100 percent open standards-based business applications that set a new standard for the way we innovate, work, and adopt technology. Delivered as a complete suite of modular applications, Fusion Applications work with your existing portfolio to evolve your business to a new level of performance. In this AppCast, part of a special series on Fusion Applications, you hear about the unique advantages of Fusion Human Capital Management, learn about the scope of the first release, and discover how Fusion HCM modules can be used to complement and enhance your existing HCM solutions.

  • Is there a point to writing in C or C++ instead of C# without knowing specifically what would make a program faster?

    - by user828584
    I wrote a small library in Python for handling the Xbox 360's STFS files, to be used in my web applications. I would like to rewrite it for use in the many desktop programs people are writing for 360 game modding, but I'm not quite sure if I should continue using C# or delve into C++ or even C. STFS is an in-file file system used by the Xbox 360, and the job of the library would be extracting/injecting files, which could take noticeable amounts of time. What I know in C# comes from internet tutorials and resources, as would anything I learn about C++. So what I'm asking is: is it better to move to a slightly lower-level language without knowing beforehand which features of the language increase performance, or should I continue assuming that compiler optimizations and my lack of experience mean that the language I choose won't matter?

  • Ping works, but unable to do ssh

    - by gpuguy
    I disabled the firewall with sudo ufw disable. I can ping the server and the server can ping me, but I can't ssh to it:

        root@ubuntu:/home/acme# ssh 192.168.1.6
        ssh: connect to host 192.168.1.6 port 22: Connection refused

    I removed ssh and reinstalled:

        sudo apt-get remove openssh-client openssh-server
        sudo apt-get install openssh-client openssh-server

    But ssh is still not working and I get the error "Connection refused". How do I tackle this issue? Here are some other things I have tried so far:

        root@ubuntu:/home/acme# sudo service ssh start
        start: Job is already running: ssh
        root@ubuntu:/home/acme# ps aux | grep ssh
        acme   6548  0.0  0.0  12576  320 ?     Ss  04:09  0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session --session=ubuntu
        root  22219  0.0  0.1  50040 2852 ?     Ss  05:10  0:00 /usr/sbin/sshd -D
        root  22277  0.0  0.0   8116  896 pts/0 S+  05:17  0:00 grep --color=auto ssh

    Update for future visitors: removing and reinstalling ssh on the server worked for me:

        sudo apt-get remove openssh-client openssh-server
        sudo apt-get install openssh-client openssh-server

  • OpenGL behaviour depending on the graphics card?

    - by Dan
    This is something that never happened to me before. I have an OpenGL code base that uses GLSL shaders to texture a 3D model. The code involves a lot of GPU texture processing, blending, etc. I wanted to check how the performance of my code improves using a faster graphics card (both the new and the old one are NVIDIA, always using the NVIDIA development drivers). But now I have found that once I run the code using the new graphics card, it behaves completely differently (the final render looks wrong), probably because some blending effect is not performed correctly. I haven't really looked into what has changed, but I am guessing that some OpenGL states are, by default, set differently. Is this possible? Have you ever found different OpenGL/GLSL behaviour using different graphics cards? Any "fast" solution? (So far I've thought of plugging the old one back in, pushing all the OpenGL default states, and comparing them with the ones I initially get using the new card.)

  • How to manage two video cards on a laptop (ATI and Intel)?

    - by Marc-François Cochaux-Laberge
    I have a laptop with two video cards: one ATI and one integrated Intel. On Windows, I can choose which video card I want to use; for example, I use the Intel card for normal use and switch to the ATI card for gaming, for better performance but a shorter battery life. In Ubuntu 10.10, only the Intel driver is installed, the ATI driver for my card doesn't work at all, and there's heat coming out of my computer all the time, like when I'm playing video games on Windows. I think both cards are active, but only the Intel one is useful. How can I solve this by making sure Ubuntu is aware of the two video cards and by disabling the ATI? Or maybe I am all wrong about this?

  • Should tests be in the same Ruby file or in separate Ruby files?

    - by Junior Mayhé
    While using Selenium and Ruby to do some functional tests, I am worried about performance. So is it better to add all test methods to the same Ruby file, or should I put each one in a separate file? Below is a sample with all tests in the same file:

        # encoding: utf-8
        require "selenium-webdriver"
        require "test/unit"

        class Tests < Test::Unit::TestCase
          def setup
            @driver = Selenium::WebDriver.for :firefox
            @base_url = "http://mysite"
            @driver.manage.timeouts.implicit_wait = 30
            @verification_errors = []
            @wait = Selenium::WebDriver::Wait.new :timeout => 10
          end

          def teardown
            @driver.quit
            assert_equal [], @verification_errors
          end

          def element_present?(how, what)
            @driver.find_element(how, what)
            true
          rescue Selenium::WebDriver::Error::NoSuchElementError
            false
          end

          def verify(&blk)
            yield
          rescue Test::Unit::AssertionFailedError => ex
            @verification_errors << ex
          end

          def test_1
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_2
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_3
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_4
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_5
            @driver.get(@base_url + "/")
            # a huge test here
          end
        end

  • In IE8, When File Download Completes, No Save Dialog Appears

    For quite some time now, when I download a file using IE8, the file finishes downloading and there is no pop-up that allows me to open the containing folder, or open the file itself. I've had to...

  • Free (Or Cheap) Alternatives For An SSL Certificate For Facebook Apps

    - by mickburkejnr
    In October (from what I remember), Facebook will require HTTPS connections to pages and apps that are hosted away from Facebook. At the moment, it comes up with a popup saying "do you want to turn secure browsing off". I think (as far as I know) that once October comes, people won't be able to access these pages any more. Now, I know you have to pay for good SSL certificates. However, for a lot of clients this is just going to be a Facebook page, and not mission-critical to their businesses. With this in mind, they may not want to pay for an SSL certificate. I was wondering if there are any free SSL certificates that could do the job? Even if there are no free ones, are there any cheap alternatives? Also, if you do use a free certificate, will it still work in the same way as a paid-for certificate?

  • AppFabric – where are all the monitoring events?

    - by Shawn Cicoria
    When you've just gone through a setup of AppFabric and you've got some WF/WCF things happening, if you start looking at the Dashboard and you see nothing, it might be as simple as restarting SQL Agent. I generally don't reboot my system for several days, and after installing AppFabric the SQL Agent jobs didn't start firing right away. Yes, even running a boot to VHD, you can still put the machine to sleep (just log off and click on Sleep)... So, after spending time looking through the SQL monitoring DB that AppFabric was configured to use, I saw a bunch of records in the [AppFabric_Monitoring].[dbo].[ASStagingTable] table. This table is the stopping point before the SQL Agent job (or Service Broker in SQL Express) pushes the items to their final resting place. This post goes through a few things to check on AppFabric monitoring: http://social.technet.microsoft.com/wiki/contents/articles/appfabric-items-to-check-when-configuring-appfabric-monitoring.aspx Of course, during development you might want to clean up regularly. For that, there's the PowerShell command:

        Clear-AsMonitoringSqlDatabase -Database AppFabric_Monitoring

  • Recommended solutions for integrating iOS with .NET, at the service tier

    - by George
    I'm developing an application, in iOS, that is required to connect to my Windows Server to poll for new data, update, etc. As a seasoned C# developer, my first instinct is to start a new project in Visual Studio and select Web Service, letting my bias (and comfort level) dictate the service layer of my application. However, I don't want to be biased, and I don't want to base my decision on a service I am very familiar with at the cost of performance. I would like to know what other developers have had success using, and whether there is a default standard for iOS service layer development. Are some protocols easier to consume than others within iOS? Are some better for the size and/or compression of the data? Is there anything wrong with using SOAP? I know it's "big" in comparison to protocols like JSON.
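    As a rough illustration of the size point at the end of the question, the toy script below encodes the same record as JSON and as a minimal SOAP-style XML envelope and compares byte counts. The record and envelope are made up for illustration; real SOAP envelopes typically carry even more namespace and header overhead.

        <?php
        // Toy payload-size comparison: the same record as JSON vs. a minimal
        // SOAP-style envelope. Purely illustrative; names are hypothetical.
        $record = ['id' => 42, 'name' => 'widget', 'updated' => '2012-06-01T12:00:00Z'];

        $json = json_encode($record);

        $xml = '<?xml version="1.0"?>'
             . '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
             . '<soap:Body><GetItemResponse>'
             . '<id>42</id><name>widget</name><updated>2012-06-01T12:00:00Z</updated>'
             . '</GetItemResponse></soap:Body></soap:Envelope>';

        printf("JSON: %d bytes\n", strlen($json));  // noticeably smaller
        printf("XML : %d bytes\n", strlen($xml));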

  • Do you use third-party companies to review your company's code?

    - by CodeToGlory
    I am looking to get the following:
      - A basic code review, to make sure the vendors follow the guidelines imposed.
      - Security code analysis, to make sure there are no loopholes.
      - Confirmation that there are no performance bottlenecks, by doing a load test, etc.
    We have a lot of code coming in from third parties, and it is becoming laborious to manage code reviews, hence I am looking to see if others employ such practices. I understand that it may be a concern for some and would raise the question "Well, who is going to make sure the agency is doing their job right?" But basically I am just looking for a third party who can hold all vendor code to the same standards.

  • Penny auction concept and how the timer works

    - by madi
    I am creating a penny auction site using the PHP Yii framework. The main consideration of the system is to update the database records of all active auctions (max 15 auctions) with the current ticker timer. I am seeking advice on how I should design the system, where every auction item has its own countdown timer stored in the database, and when someone bids on an auction item, its counter resets to 2 minutes. Every user connected to the system should see the same countdown timer for that particular auction. I am a little confused about how I should design this. Will there be a performance issue with frequent updates to the database (MySQL), where 15 active auctions are updated every second and each auction's countdown timer decreases by a second in the table?

    Sample schema for auction_lots:

        auction_id, startdatetime, counter_timer, status

    Please help. Thank you!
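    One commonly suggested alternative design (a sketch, not something from the post): rather than decrementing a counter_timer column every second, store an absolute end time per auction and reset it when a bid arrives. Every connected user then derives the same remaining time from the shared clock, and the database is written only on bids instead of fifteen times per second. A minimal sketch with hypothetical names follows (typed properties assume PHP 7.4+):

        <?php
        // Sketch: keep an absolute end timestamp per auction instead of a
        // ticking counter. Writes happen only on bids; the remaining time is
        // computed on read, so all clients see the same countdown.
        class Auction
        {
            private int $endsAt;  // Unix timestamp when the auction closes

            public function __construct(int $durationSeconds)
            {
                $this->endsAt = time() + $durationSeconds;
            }

            // A bid resets the countdown to 2 minutes. In a real system this
            // would be one UPDATE, e.g.:
            //   UPDATE auction_lots SET ends_at = :t WHERE auction_id = :id
            public function placeBid(): void
            {
                $this->endsAt = time() + 120;
            }

            // Derived per request from the shared clock; no per-second writes
            // are needed to keep every viewer in sync.
            public function secondsRemaining(): int
            {
                return max(0, $this->endsAt - time());
            }

            public function isClosed(): bool
            {
                return $this->secondsRemaining() === 0;
            }
        }

        $auction = new Auction(120);
        $auction->placeBid();
        echo $auction->secondsRemaining(), " seconds left\n";  // ~120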

  • How to Implement a Parallel Workflow

    - by Paul
    I'm trying to implement a parallel split task using a workflow system. I'm using .NET, but my process is very simple and I don't want to use WF or anything heavy like that. I've tried using Stateless. So far it was easy to set up and run, but I may be using the wrong tool for the job, because I'm not sure how you're supposed to model parallel split workflows, where multiple sub-tasks are required before you can advance to the next state, but the steps don't have to be performed in any particular order. I can easily use the dynamic configuration options to check my data model manually to see if it is in the correct state (all sub-tasks completed) and can transition to the next state, but this seems to completely break the workflow paradigm. What is the proper, orthodox way to implement a parallel split process? Thanks
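    For what it's worth, a parallel split followed by a synchronizing join (an AND-join) is itself a standard workflow pattern, so guarding the transition on "all sub-tasks completed" is more orthodox than it may feel. Below is a minimal, library-free sketch of that pattern. It is illustrative only: the names are hypothetical, it is written in PHP rather than .NET purely to stay self-contained, and it is not a Stateless port.

        <?php
        // Sketch of a parallel split with an AND-join: the flow fans out into
        // sub-tasks that may complete in any order, and the transition to the
        // next state fires only once every branch has finished.
        class ParallelSplit
        {
            private array $pending;               // sub-tasks not yet completed
            private string $state = 'in_progress';

            public function __construct(array $subTasks)
            {
                $this->pending = array_fill_keys($subTasks, true);
            }

            public function complete(string $subTask): void
            {
                unset($this->pending[$subTask]);
                // AND-join guard: advance only when all branches are done.
                if (empty($this->pending) && $this->state === 'in_progress') {
                    $this->state = 'done';
                }
            }

            public function state(): string
            {
                return $this->state;
            }
        }

        $wf = new ParallelSplit(['review', 'sign_off', 'archive']);
        $wf->complete('sign_off');   // order doesn't matter
        $wf->complete('archive');
        echo $wf->state(), "\n";     // in_progress
        $wf->complete('review');
        echo $wf->state(), "\n";     // done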

  • OpenGL ES 2.0: Mixing 2D with 3D

    - by Bunkai.Satori
    Is it possible to mix 2D and 3D graphics in a single OpenGL ES 2.0 game? I have plenty of 2D graphics in my game; the 2D graphics is represented by two triangular polygons (making up a rectangle) with a texture on them, and I use an orthographic matrix to render the whole scene. However, I need to add some 3D effects into my game; therefore, I wish to use a perspective camera to render the meshes. Is it possible to mix orthographic and perspective cameras in one scene? If yes, is there going to be a large performance cost for this? Is there any recommended approach to do this effectively? I will have 90% 2D graphics and only 10% 3D. Target platform is OpenGL ES 2.0 (iOS, Android). I use C++ to develop. Thank you.
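    For what it's worth, mixing the two usually comes down to switching the projection matrix between draw batches within one frame: render the 3D meshes with a perspective projection, then draw the 2D quads with an orthographic one (often last, with depth testing disabled for the overlay); the cost of the extra uniform upload is negligible. The sketch below only builds the two standard matrices as plain arrays to show the math involved; it is illustrative, written in PHP for consistency with the other examples here rather than in the asker's C++.

        <?php
        // The standard perspective and orthographic projection matrices,
        // written row-major (transpose before glUniformMatrix4fv, which
        // expects column-major). Parameter values below are hypothetical.
        function perspective(float $fovyDeg, float $aspect,
                             float $near, float $far): array
        {
            $f = 1.0 / tan(deg2rad($fovyDeg) / 2.0);
            return [
                [$f / $aspect, 0.0,  0.0,                              0.0],
                [0.0,          $f,   0.0,                              0.0],
                [0.0,          0.0,  ($far + $near) / ($near - $far),
                                     (2.0 * $far * $near) / ($near - $far)],
                [0.0,          0.0, -1.0,                              0.0],
            ];
        }

        function orthographic(float $l, float $r, float $b, float $t,
                              float $n, float $f): array
        {
            return [
                [2.0 / ($r - $l), 0.0, 0.0, -($r + $l) / ($r - $l)],
                [0.0, 2.0 / ($t - $b), 0.0, -($t + $b) / ($t - $b)],
                [0.0, 0.0, -2.0 / ($f - $n), -($f + $n) / ($f - $n)],
                [0.0, 0.0, 0.0, 1.0],
            ];
        }

        // Per frame (GL side): upload perspective() and draw the 3D meshes,
        // then disable the depth test, upload orthographic(), draw 2D quads.
        $p = perspective(60.0, 16.0 / 9.0, 0.1, 100.0);
        $o = orthographic(0.0, 1440.0, 900.0, 0.0, -1.0, 1.0);
        printf("perspective[0][0]=%.3f ortho[0][0]=%.5f\n", $p[0][0], $o[0][0]);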

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread cannot accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.
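    As a toy illustration of the linear extrapolation step described above (with made-up numbers, not from the article):

        <?php
        // Linear sizing model: measure resource usage at a known load, then
        // scale proportionally to the target load. Numbers are invented; as
        // the article stresses, real sizing must be validated at full scale.
        function extrapolateCpu(float $measuredLoad, float $measuredCpuPct,
                                float $targetLoad): float
        {
            return $measuredCpuPct * ($targetLoad / $measuredLoad);
        }

        // Measured: 30% CPU at 1,000 requests/s. Projected for 2,500 requests/s:
        printf("Projected CPU: %.1f%%\n", extrapolateCpu(1000.0, 30.0, 2500.0));
        // => Projected CPU: 75.0%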
