Search Results

Search found 59278 results on 2372 pages for 'time estimation'.

Page 217/2372 | < Previous Page | 213 214 215 216 217 218 219 220 221 222 223 224  | Next Page >

  • Solutions for software using many calls to a server

    - by Val
    I am developing software that makes many calls to a server. On the client side it's a Silverlight application. Almost every time a user clicks a button in it, it sends 1-5 WCF calls to the server. There can be up to a dozen or so users at a time. The server is a database server that serves data to the client. I am an intermediate-level developer and am thinking about caching some data and syncing my changes from time to time. Are there any established solutions or technologies for this, such as patterns?

    Read the article

  • Override SOA expiry on a bind9 slave name server

    - by Joachim Breitner
    I run a slave name server for a domain that I do not have full control over (i.e. changing the SOA is not possible). The SOA specifies an expiry time of one week. For various reasons, I’d like to override that value on my specific slave server with something larger. Is there a way to do that? N.B.: I know that for the refresh and retry fields, bind9 provides the options min-refresh-time, max-refresh-time, min-retry-time and max-retry-time to overrule the SOA, as mentioned in the documentation. For some reason this list just does not include expiry.
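    For reference (this fragment is not from the original question), the refresh/retry clamps mentioned above live in the slave zone statement; the zone name and master address below are placeholders:

        // Illustrative named.conf fragment - zone name and master IP are made up
        zone "example.org" {
            type slave;
            masters { 192.0.2.1; };
            file "slaves/example.org";
            min-refresh-time 3600;    // clamp the SOA refresh to at least 1 hour
            max-refresh-time 86400;   // ...and at most 1 day
            min-retry-time 600;
            max-retry-time 7200;
            // BIND offers no matching min/max-expire-time option,
            // which is exactly the gap the question is about.
        };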

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee
    When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at the time, we had no recourse but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"
    Sadly I've heard this line more than I would have liked to. You need to understand that a cluster relies on shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent device. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?
    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO.
    Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore
    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner rather than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?
    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
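    By way of illustration (this snippet is not from the original post), the frequent log backup plus offloaded consistency check described above might look roughly like this in T-SQL; the database names, logical file names, paths and restore target are made up:

        -- On the production server: frequent log backups plus a lightweight physical-only check.
        BACKUP LOG SalesDB TO DISK = N'G:\SQLBackups\SalesDB_log.trn' WITH COMPRESSION;
        DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY;  -- pair with trace flags 2562/2549 on 2008 R2 SP1 CU4+

        -- On a secondary box: restore the latest full backup (proving it is good) and run the full check there.
        RESTORE DATABASE SalesDB_Verify FROM DISK = N'\\backupshare\SalesDB_full.bak'
            WITH MOVE N'SalesDB_data' TO N'D:\Data\SalesDB_Verify.mdf',
                 MOVE N'SalesDB_log'  TO N'D:\Data\SalesDB_Verify.ldf',
                 RECOVERY;
        DBCC CHECKDB (SalesDB_Verify) WITH NO_INFOMSGS, ALL_ERRORMSGS;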

    Read the article

  • Computer logout after closing the lid

    - by ejp1087
    I would like my computer to logout every time I close the lid. And by that I mean log out completely. Basically I want the startup applications to run every time I open the lid to my computer. Right now, it just locks the screen and comes right back when I type in my password. It would be really great if I can have it such that the startup applications run every time I open the lid. However, would logging out lose any unsaved files? Alternatively, do you know if there is a way to run a command every time I open the lid? Thanks!

    Read the article

  • Choosing a (browser) game environment [closed]

    - by Iain
    I apologise in advance if this post is something you've heard a million times already or seems like a trolling attempt. I just want some advice and I'm coming up short with my own Google searches. Basically, I would like to start learning some game development in my own free time (nothing serious, just purely as a hobbyist for fun). I'd like to know what the community's opinions are on the old HTML5/Javascript v Flash argument, but purely from a game development perspective. I know people say Flash is dying because of issues like SEO, memory/bandwidth usage and Apple dropping it on tablet and mobile devices, so is it worth me dedicating my free time to learning to use Flash/AS3 for game development, or should I focus on HTML5/Javascript? At the moment, I'm not sure HTML5/Javascript is mature enough or has the support tools that Flash does (framework, IDE, etc), and there seem to be a lot more resources online for beginner Flash/AS3 programming. When I'm reading tutorials online for Flash/AS3 I always have it in the back of my head that I'm wasting my time because it won't be around in a few years and I should be investing that time learning HTML5/Javascript. Thoughts? Disclaimer: I'm not trying to spark a flame war or troll anyone - I believe in the right tools for the job and I don't want to waste my time learning something that won't be around in a few years.

    Read the article

  • Why do apt-get and wget fail on my server when ping is working?

    - by klox
    Yesterday my server was still OK, but today, after trying sudo apt-get update, I got an error during the update process. I tried: sudo rm /var/lib/apt/lists/* -vf and then ran the update again, but that did not solve my problem; the same error still appears. I checked my internet connection by trying ping google.com, and got this result:
    PING google.com (74.125.235.40) 56(84) bytes of data.
    From 136.198.117.254: icmp_seq=1 Redirect Network(New nexthop: fw1.jvc-jein.co.id (136.198.117.6))
    64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=1 ttl=53 time=20.6 ms
    64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=2 ttl=53 time=18.2 ms
    64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=3 ttl=53 time=33.0 ms
    64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=4 ttl=53 time=30.0 ms
    64 bytes from sin01s05-in-f8.1e100.net (74.125.235.40): icmp_req=5 ttl=53 time=28.1 ms
    Some sites say it may be caused by the getdeb server being down. Trying to install a package:
    jeinqa@SVRQAR:~$ sudo apt-get install pastebinit
    Reading package lists... Error!
    E: Encountered a section with no Package: header
    E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_restricted_binary-amd64_Packages
    E: The package lists or status file could not be parsed or opened.
    I also tried sudo ufw status verbose; the result is: Status: inactive
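    For what it's worth (these commands are not part of the original question), the MergeList error usually points at a corrupted package list file, and the commonly suggested recovery is to clear the lists completely, including the partial directory, before updating again:

        sudo rm -rf /var/lib/apt/lists/*
        sudo mkdir -p /var/lib/apt/lists/partial   # some apt versions expect this directory to exist
        sudo apt-get clean
        sudo apt-get update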

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing. Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.

    Determining the caller using the stack
    One of the ways of doing this is to examine the call stack. The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace. Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation. For our simple example let's say we are going to reinvent the wheel and construct our own logging framework. Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller. We could easily do this as follows: 1: static void Log(object message) 2: { 3: // frame 1, true for source info 4: StackFrame frame = new StackFrame(1, true); 5: var method = frame.GetMethod(); 6: var fileName = frame.GetFileName(); 7: var lineNumber = frame.GetFileLineNumber(); 8: 9: // we'll just use a simple Console write for now 10: Console.WriteLine("{0}({1}):{2} - {3}", 11: fileName, lineNumber, method.Name, message); 12: } So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame. This works fine, and if we tested it out by calling from a file such as this: 1: // File c:\projects\test\CallerInfo\CallerInfo.cs 2:  3: public class CallerInfo 4: { 5: static void Main() { Log("Hello Logger!"); } 6: } We'd see this: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! This works well, and in fact StackTrace and StackFrame are still the best ways to examine deeper into the call stack. But if you only want to get information on the caller of your method, there is another option…

    Determining the caller at compile-time
    In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information. These attributes can only be applied to methods with optional parameters with explicit defaults. Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time.
    These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace:
    CallerFilePathAttribute – The path and name of the file that is calling your method.
    CallerLineNumberAttribute – The line number in the file where your method is being called.
    CallerMemberNameAttribute – The member that is calling your method.
    So let’s take a look at how our Log method would look using these attributes instead: 1: static void Log(object message, 2: [CallerMemberName] string memberName = "", 3: [CallerFilePath] string fileName = "", 4: [CallerLineNumber] int lineNumber = 0) 5: { 6: // we'll just use a simple Console write for now 7: Console.WriteLine("{0}({1}):{2} - {3}", 8: fileName, lineNumber, memberName, message); 9: } Again, calling this from our sample Main would give us the same result: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member. Thus, it does not give you as rich a level of detail as a StackFrame (which can also give you the calling type and deeper frames, for example). Also, these are supported through optional parameters, which means we could call our new Log method like this: 1: // They're defaults, why not fill 'em in 2: Log("My message.", "Some member", "Some file", -13); In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed! How much greater? Well, let's crank through 1,000,000 iterations of each. Instead of logging to the console, I’ll return the formatted string length of each. Doing this, we get: 1: Time for 1,000,000 iterations with StackTrace: 5096 ms 2: Time for 1,000,000 iterations with Attributes: 196 ms So you see, using the attributes is much, much faster! About 26x faster, in fact.

    Summary
    There are a few ways to get caller information for a method. The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost. On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.
    Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName

    Read the article

  • Looking at EMEA and Telecommunications

    - by Brian Dayton
    With summer holidays starting up, we've been spending a lot of time speaking with our counterparts in EMEA. Often we talk about recent customer successes. One of my recent discoveries is this great video covering BT's move towards SOA and how this initiative not only accelerated order delivery time from 6 days to 6 minutes but also created new revenue streams and reduced time to implementation.

    Read the article

  • Looking for new language and new technology [closed]

    - by Basim
    Back when Microsoft released .NET in 2002 or so: when I look at that time, I tell myself, what if I had picked one of Microsoft's languages back then and kept working with it? Of course I would be a professional at it by now. I am looking for a new language that is on the rise and will be a big thing in the next 5-10 years, so that when that time comes I can see the big picture and be one of the few people who started with that programming language or technology from the beginning. My interest is web development.

    Read the article

  • Why profile applications using AOP?

    - by Vance
    When tuning performance in a web application, I am looking for good, lightweight profiling tools to measure the execution time of each method. I know that the easiest profiling method is to log the start time and end time of each method, but I see more and more people using AOP to profile (adding @Profiled to each method). What's the benefit of AOP profiling compared to the common "log" way? Thanks in advance, Vance
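    For context (this sketch is not part of the question), an AOP profiling aspect in the Spring/AspectJ style typically looks something like the following; the annotation name and logging are illustrative:

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import org.aspectj.lang.ProceedingJoinPoint;
        import org.aspectj.lang.annotation.Around;
        import org.aspectj.lang.annotation.Aspect;
        import org.springframework.stereotype.Component;

        // Marker annotation placed on the methods to be timed.
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.METHOD)
        @interface Profiled {}

        // One aspect holds the timing logic for every @Profiled method,
        // instead of start/end logging being repeated in each method body.
        @Aspect
        @Component
        class ProfilingAspect {
            @Around("@annotation(profiled)")
            public Object profile(ProceedingJoinPoint pjp, Profiled profiled) throws Throwable {
                long start = System.nanoTime();
                try {
                    return pjp.proceed();   // run the intercepted method
                } finally {
                    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    System.out.println(pjp.getSignature() + " took " + elapsedMs + " ms");
                }
            }
        }

    The benefit the question asks about is visible here: the measurement concern lives in one place and is applied declaratively, so the business methods stay free of timing code.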

    Read the article

  • The Benefits of Using Online SEO Tools

    Search Engine Optimization (SEO) can be a labour intensive process. Why not save some time and effort and use a few online tools to help accomplish the task in less time? In this article, I will look at some common tools, and show how they can improve your optimization and save you time.

    Read the article

  • Location Services are always disabled in Mac OS X Lion

    - by rplusg
    A simple location services program was working fine on my machine and suddenly stopped working. Upon further exploring the problem, I realized that some process has disabled location services in System Preferences » Security & Privacy » Privacy. I checked Enable Location Services, but again it got disabled automatically. After some research I found that it's not just my program, even built-in system functions are also failing because of this problem, for example System Preferences » Date & Time » Time Zone failed to get the current location. Every time I check Enable Location Services, I see the following error in the console logs:
    16/10/12 11:23:15.636 AM [0x0-0x42042].com.apple.systempreferences: ERROR,Time,372059595.636,Function,"CLInternalSetLocationServicesEnabled",CLInternalSetLocationServicesEnabled failed
    16/10/12 11:23:15.638 AM [0x0-0x42042].com.apple.systempreferences: STACK,Time,372059595.636,1 CoreLocation 0x00007fff8f9957be CLInternalSetLocationServicesEnabled + 110
    Notes:
    - WiFi is on
    - I didn't install iOS Simulator
    - I use Xcode Version 4.5 (4G182)
    - I use Boot Camp and made my MacBook Pro dual boot (Mac OS X Lion and Windows 7)
    - I do only Mac development but not iOS

    Read the article

  • Harnessing Business Events for Predictive Decision Making - part 1 / 3

    - by Sanjeev Sharma
    Businesses have long relied on data mining to elicit patterns and forecast future demand and supply trends. Improvements in computing hardware, specifically storage and compute capacity, have significantly enhanced the ability to store and analyze mountains of data in ever shrinking time-frames. Nevertheless, the reality is that data growth is outpacing storage capacity by a factor of two, and computing power is still very much bounded by Moore's Law, doubling only every 18 months.

    Faced with this data explosion, businesses are exploring means to develop human brain-like capabilities in their decision systems (including BI and Analytics) to make sense of the data storm, in other words business events, in real time and respond pro-actively rather than re-actively. It is more like having a little bit of the right information just a little bit beforehand than having all of the right information after the fact. To appreciate this thought better, let's first understand the workings of the human brain.

    Neuroscience research has revealed that the human brain is predictive in nature and that talent is nothing more than exceptional predictive ability. The cerebral cortex, the part of the human brain responsible for cognition, thought, language etc., comprises five layers. The lowest layer in the hierarchy is responsible for sensory perception, i.e. discrete, detail-oriented tasks, whereas each of the layers above is increasingly focused on assembling higher-order conceptual models. Information flows both up and down the layered memory hierarchy. This allows the conceptual mental models to be refined over time through experience and repetition. Secondly, and more importantly, the top layers are able to prime the lower layers to anticipate certain events based on the existing mental models, thereby giving the brain a predictive ability. In a way the human brain develops a "memory of the future", some sort of anticipatory thinking which lets it predict based on the occurrence of events in real time. A higher order of predictive ability stems from being able to recognize the lack of certain events. For instance, it is one thing to recognize the beats in a music track and another to detect beats that were missed, which involves a higher order predictive ability.

    Existing decision systems analyze historical data to identify patterns and use statistical forecasting techniques to drive planning. They are similar to the human brain in that they employ business rules, very much like mental models, to chunk and classify information. However, unlike the human brain, existing decision systems are unable to evolve these rules automatically (AI is still best suited for highly specific tasks) or to predict the future based on real-time business events. Make no mistake, existing decision systems remain vital to driving long-term and broader business planning. For instance, a telco will still rely on BI and Analytics software to plan promotions and optimize inventory, but will tap into business-event-enabled predictive insight to identify specifically which customers are likely to churn and engage with them pro-actively. In the next post, I will describe the technology components that enable businesses to harness real-time events and drive predictive decision making.

    Read the article

  • What impact does struggling on a project have for a young developer in a consultancy?

    - by blade3
    I am a youngish developer (working for 3 years). I took a job 3 months ago as an IT consultant (it's the first time I've been a consultant). In my first project, all went well until the later stages, where I ran into problems with Windows/WMI (lack of documentation etc.). As important as it is not to leave surprises for the client, this did happen. I was supposed to go back to finish the project about a month and a half ago, after getting a date scheduled, but that did not happen either. The project (code) was slightly rushed too and went through QA (no idea what the results are). My probation review is in a few weeks' time, and I was wondering what sort of impact this would have. My manager hasn't mentioned this project to me, and apart from this everything's been OK; he even said, at the beginning, "if you are tight on time, just ask for more", so he has been accommodating (at that time I was doing well; the problems came later).

    Read the article

  • Problems installing clamcour

    - by user10778
    Hello people, when I try to install clamcour from the terminal, it gives me the error below; can somebody help me?
    calmcourdir# ./configure
    checking for libraries containing socket functions... -lc checking for socket... yes checking for bind... yes checking for listen... yes checking for accept... yes checking for shutdown... yes checking for socklen_t... yes checking for struct sockaddr_un.sun_len... no
    System log functions checking syslog.h usability... yes checking syslog.h presence... yes checking for syslog.h... yes checking for openlog... yes checking for syslog... yes checking for closelog... yes
    Time functions checking whether time.h and sys/time.h may both be included... yes checking whether struct tm is in sys/time.h or time.h... time.h checking for localtime_r... yes checking for strftime... yes checking for unistd.h... (cached) yes checking limits.h usability... yes checking limits.h presence... yes checking for limits.h... yes checking for sysconf... yes
    BZip2 support checking bzlib.h usability... no checking bzlib.h presence... no checking for bzlib.h... no checking for BZ2_bzWriteOpen in -lbz2... no
    GZip support checking zlib.h usability... no checking zlib.h presence... no checking for zlib.h... no checking for gzopen in -lz... no
    LibClamAV support checking for /usr/bin/clamav-config... no checking for /usr/local/bin/courier-config... no checking for /usr/clamav/bin/clamav-config... no checking for /usr/local/clamav/bin/clamav-config... no
    ./configure: line 25234: : command not found
    configure: error: Cannot find clamav-config

    Read the article

  • Strange date relationships with #PowerPivot

    - by Marco Russo (SQLBI)
    A reader of my PowerPivot book highlighted a strange behavior of the relationship between a datetime column and a Calendar table. Long story short: it seems that PowerPivot automatically rounds the date to the “nearest day”, but instead of simply removing the time (truncating the decimal part of the number internally used to represent a datetime value), a rounding function seems to be used, moving the date to the next day if the time part contains a PM time. As you can imagine, this becomes particularly...(read more)
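    A side note (my suggestion, not from the post): one way to sidestep the rounding is to build the relationship on a calculated column that explicitly truncates the time portion, so no rounding can occur; the table and column names below are made up:

        = TRUNC ( Orders[OrderDateTime] )   -- keeps only the whole-day part of the underlying number instead of rounding it

    Formatting that column as a date and relating it to the Calendar table avoids the shift to the next day for PM times.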

    Read the article

  • The Best Data Integration for Exadata Comes from Oracle

    - by maria costanzo
    Oracle Data Integrator and Oracle GoldenGate offer unique and optimized data integration solutions for Oracle Exadata. For example, customers that choose to feed their data warehouse or reporting database with near real-time data throughout the day can do so without decreasing the performance or availability of source and target systems. And if you ask why real-time, the short answer is: in today’s fast-paced, always-on world, business decisions need to use more relevant, timely data to be able to act fast and seize opportunities. A longer response to the "why real-time" question can be found in a related blog post.

    If we look at the solution architecture, as shown in the diagram below, Oracle Data Integrator and Oracle GoldenGate are both uniquely designed to take full advantage of the power of the database and to eliminate unnecessary middle-tier components. Oracle Data Integrator (ODI) is the best bulk data loading solution for Exadata. ODI is the only ETL platform that can leverage the full power of Exadata, integrate directly on the Exadata machine without any additional hardware, and by far provides the simplest setup and fastest overall performance on an Exadata system. We regularly see customers achieving a 5-10 times boost when they move their ETL to ODI on Exadata. For some companies the performance gain is even higher. For example, a large insurance company did a proof of concept comparing ODI vs a traditional ETL tool (one of the market leaders) on Exadata. The same process that was taking 5 hrs and 11 minutes to complete using the competing ETL product took 7 minutes and 20 seconds with ODI. Oracle Data Integrator was 42 times faster than the conventional ETL when running on Exadata. This shows that Oracle's own data integration offering helps you to gain the most out of your Exadata investment with a truly optimized solution.

    GoldenGate is the best solution for streaming data from heterogeneous sources into Exadata in real time. Oracle GoldenGate can also be used together with Data Integrator for hybrid use cases that demand non-invasive capture and high-speed real-time replication. Oracle GoldenGate enables real-time data feeds from heterogeneous sources non-invasively, and delivers to the staging area on the target Exadata system. ODI runs directly on Exadata to use the database engine power to perform in-database transformations. Enterprise Data Quality is integrated with Oracle Data Integrator and enables ODI to load trusted data into the data warehouse tables. Only Oracle can offer all these technical benefits wrapped into a single intelligent data warehouse solution that runs on Exadata. Compared to traditional ETL with add-on CDC, this solution offers:
    - Non-invasive data capture from heterogeneous sources, avoiding any performance impact on the source
    - No mid-tier; set-based transformations use database power
    - Mini-batches throughout the day, or bulk processing nightly, which means maximum availability for the DW
    - An integrated solution with Enterprise Data Quality that enables leveraging trusted data in the data warehouse

    In addition to Starwood Hotels and Resorts, Morrison Supermarkets, the United Kingdom’s fourth-largest food retailer, has seen the power of this solution for their new BI platform and shared their story with us. Morrisons needed to analyze data across a large number of manufacturing, warehousing, retail, and financial applications with the goal of achieving a single view into operations for improved customer service.
The retailer deployed Oracle GoldenGate and Oracle Data Integrator to bring new data into Oracle Exadata in near real-time and replicate the data into reporting structures within the data warehouse—extending visibility into operations. Using Oracle's data integration offering for Exadata, Morrisons produced financial reports in seconds, rather than minutes, and improved staff productivity and agility. You can read more about Morrison’s success story here and hear from Starwood here. From an Irem Radzik article.

    Read the article

  • Upcoming EBS Webcasts for June, July, August 2012

    - by user793553
    See the following upcoming webcasts for June, July and August 2012. Flag Doc ID 740966.1 as a favourite, to keep up to date with latest advisor schedule. Additionally, see Doc ID 740964.1 for access to all archived advisor webcasts.
    Oracle E-Business Suite: none at this time.
    EBS Agile: none at this time.
    EBS Applications Technologies Group (ATG):
    - EBS – OAM Tuning and Monitoring EMEA | July 10, 2012 | Abstract
    - EBS – OAM Tuning and Monitoring US | July 11, 2012 | Abstract
    - Workflow Analyzer Followup EMEA | July 24, 2012 | Abstract
    - Workflow Analyzer Followup US | July 25, 2012 | Abstract
    EBS CRM & Industries: none at this time.
    EBS Financials:
    - EBS Fixed Assets: Achieve Success Using Proactive Tools For Fixed Assets Support | July 10, 2012 | Abstract
    - Overview and Flow of Oracle Project Resource Management | July 17, 2012 | Abstract
    - Leveraging My Oracle Support To Increase Knowledge | July 30, 2012 | Abstract
    EBS HCM (HRMS):
    - Oracle Time and Labor (OTL) Rollback Functionality Session 1 | July 25, 2012 | Abstract
    - Oracle Time and Labor (OTL) Rollback Functionality Session 2 | July 25, 2012 | Abstract
    EBS Manufacturing:
    - Using Personalization in Oracle eAM | June 21, 2012 | Abstract
    - OM Guided Resolutions - Finding Known Resolutions Easily | July 17, 2012 | Abstract
    - Material Move Orders Flow | July 25, 2012 | Abstract
    - Diagnosing Signal 11 Issues In ASCP Planning | August 9, 2012 | Abstract
    - Interface Trip Stop - Best Practices and Debugging | August 21, 2012 | Abstract
    EBS Procurement:
    - Punchout in iProcurement | June 26, 2012 | Abstract

    Read the article

  • Profiling Startup Of VS2012 – YourKit Profiler

    - by Alois Kraus
    The YourKit (v7.0.5) profiler is interesting in terms of price (79€ single place license, 409€ + 1 year support and upgrades) and feature set. You get a performance and memory profiler in one package, something you normally have to pay extra for with the other vendors. As an interesting side note, the profiler UI is written in Java because they also sell Java profilers with the same feature set. To get all methods of a VS startup you first need to configure it to include System* in the profiled methods, and you need to configure * to measure wall clock time. By default it records only CPU time, which allows you to optimize CPU-hungry operations. But in this mode you will never see a Thread.Sleep(10000) blocking the UI in the profiler. Like the others, it can profile processes started from within the profiler, but it can also profile the next started process or all started processes. As usual it can profile in sampling and tracing mode. But since it is a memory profiler as well, it does by default also record all object allocations > 1MB. With allocation recording enabled VS2012 did crash, but without allocation recording there were no problems. The CPU tab contains the timeline of the application, and when you click in the graph you get the call stacks of all threads at that point in time. This is really a nice feature. When you select a time region, you get the CPU usage estimation for that time window. I have seen many applications consuming 100% CPU only because they created garbage like crazy. For this, the Garbage Collection tab is interesting in conjunction with a time range. This view is like the CPU tab, only the CPU graph (green) is missing. All relevant information except for GCs/s is already visible in the CPU tab. Very handy to pinpoint excessive GC or CPU-bound issues. The Threads tab shows the thread names and their lifetime. This is useful to see thread interactions or which thread is hottest in terms of CPU consumption. On the CPU tab the call tree exists in a merged and a thread-specific view. When you click on a method, you get a list of all called methods below it. There you can sort for methods with a high own time, which are worth optimizing. In the Method List you can select which scope you want to see. Back Traces are the methods which called you. Callees is the list of methods called directly or indirectly by your method, as a flat list. This is not a call stack, but it is still very useful to see which methods were slow, so you can find the "root" cause quite quickly without the need to click through long call stacks. The last view, Merged Callees, is a call-stack view of the previous one. This helps a lot to understand who called each method at run time. You would get the same view with a debugger for one call invocation, but here you get the full statistics (invocation count) as well. Since YourKit is also a memory profiler, you can directly see which objects you have on your managed heap and which objects hold most of your precious memory. In the Object Explorer view you can also examine the contents of your objects (strings and so on) to get a better understanding of which objects were potentially allocating this stuff. YourKit is a very easy to use combined memory and performance profiler in one product. The unbeatable single license price makes it very attractive to buy outright. Although it has a Java UI, it is very responsive and the memory consumption is considerably lower compared to dotTrace and ANTS profiler.
    What I really like is to start the YourKit UI and then start the processes I want to profile as usual. There is no need to alter your own application code to be able to inject a profiler into your newly started processes. For performance and memory profiling you can simply select the process you want to investigate from the list of started processes. That's the way I like to use profilers: just get out of the way and let the application run without any special preparations.   Next: Telerik JustTrace

    Read the article

  • Clock drift even though NTPD running

    - by droffo
    I'm having a problem with the clock drifting on my PC. I'm running Ubuntu 10.10 on a somewhat crusty IBM eServer (1.5GB RAM, 2.4GHz CPU). ntpd is running (started at run level 2) and servers are defined:
    server 1.us.pool.ntp.org
    server 2.us.pool.ntp.org
    server 3.us.pool.ntp.org
    server time.nrc.ca
    server ntp1.cmc.ec.gc.ca
    server ntp2.cmc.ec.gc.ca
    server wuarchive.wustl.edu
    server clock.psu.edu
    Looking at the log file, it would seem that the ntp daemon is running, but the system clock never seems to be set. If I manually set the time from a Casio "atomic" watch, the date/time displayed by the Clock applet drifts out of sync over time. Looking at the log file (below), it would seem the ntp daemon started OK and is running. So I am totally flummoxed right now :-( Here's a copy of my ntp.log file.
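    For what it's worth (these commands are not part of the question), a quick way to check whether ntpd is actually synchronizing:

        ntpq -p      # lists configured peers; a '*' in the first column marks the peer ntpd is synced to
        ntpq -c rv   # shows stratum, offset and jitter for the local daemon

    If no peer ever earns the '*', drift would be expected, since ntpd is then free-running.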

    Read the article

  • An Oracle Intern's Story by Samarth Varshney

    - by user769227
    I have written a short write-up about my experience at Oracle and am attaching some pics: I joined Oracle on 5th January 2011 as part of my internship program at BITS Pilani Goa Campus. In the short period of six months, I had the most beautiful and interesting time of my life. It was fun to work at Oracle, thanks to the whole team. I had an excellent manager, simple and sophisticated, who gave me the utmost enthusiasm to work. I gained a lot of knowledge during my internship, thanks to my colleagues. They were very helpful and motivated us (interns) in every possible way. In the initial stages of work, when you know almost nothing, they helped me gain knowledge at a rapid pace. Thanks to the vast database of study material on the Oracle site, I could get started on my project in a very short time. For me, the time flew by and made the 6 months look like a few days. It was probably due to the team that the work was so much fun. We had our deadlines but had full freedom as to how and when to work. I don't remember a single instance in which I was working and not listening to songs. It will always be a time to remember. I hope to join this company and make this time last forever. Samarth

    Read the article

  • Why do we keep using CSV?

    - by Stephen
    Why do we keep using CSV? I recently made a shift to working in the health domain, and despite the wonderful work on data transfer standards, all data transfer is in CSV, both for reporting to external organisations and for data migrations when implementing new systems. Unfortunately, the use of CSV is the cause of endless repetition of the same stupid errors (bad escaping, failing to handle null fields, etc.), with the same waste of developer time. I know we can do better, and anything between JSON and XML (depending on the instance) would be fine. (Most of the time this is data going from one MS SQL Server 2005 instance to another!) I feel as if each time I see this happening I am literally watching one developer waste another's time. So why do we keep shafting each other? When will we stop?

    Read the article

  • Unity3D - Android pause screen - double click issue

    - by user3666251
    I made a pause script for the game I'm developing for Android. I added the script to a GUITexture I created and placed in the top right corner of the screen. The issue is this: the player clicks the pause button, then clicks Resume, and then wants to pause the game again. When he clicks pause the second time, the buttons don't show up unless he clicks yet again. This is the script:

    #pragma strict
    var paused = false;
    var isButtonVisible : boolean = true;

    function OnMouseDown(){
        this.paused = !this.paused;
        Time.timeScale = 0;
        isButtonVisible = true;
    }

    function OnGUI(){
        if ( isButtonVisible ) {
            if(this.paused){
                if (GUI.Button(Rect(Screen.width/2-100,Screen.height/2+3,200,50),"Restart")){
                    Application.LoadLevel(Application.loadedLevel);
                    Time.timeScale = 1;
                    isButtonVisible = false;
                }
                if (GUI.Button(Rect(Screen.width/2-100,Screen.height/2-50,200,50),"Resume")){
                    Time.timeScale = 1;
                    isButtonVisible = false;
                }
                // Insert the rest of the pause menu logic
                if (GUI.Button(Rect(Screen.width/2-100,Screen.height/2+56,200,50),"Main Menu")){
                    Application.LoadLevel ("MainMenu");
                    isButtonVisible = false;
                    Time.timeScale = 1;
                }
            }
        }
    }

    Thank you.
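    A likely explanation (my reading, not part of the question): paused is never reset when Resume, Restart or Main Menu is clicked, so the next tap on the GUITexture toggles it back to false and the menu stays hidden until a second tap. A minimal sketch of the adjustment:

        function OnMouseDown(){
            this.paused = !this.paused;
            Time.timeScale = this.paused ? 0 : 1;   // only freeze time while actually paused
            isButtonVisible = this.paused;
        }
        // ...and inside the Resume, Restart and Main Menu branches, also set this.paused = false;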

    Read the article

