Search Results

Search found 25563 results on 1023 pages for 'negative number'.


  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background
    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32 CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets. For example, you can throttle an application to 0.125 of a CPU. (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management
    Useful as that is (and tragic that some other operating systems have little resource management and isolation, and frighten people into running only 1 app per OS instance - and wastefully size every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources, and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if that didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much, and we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art
    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management
    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided, and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words - don't hold the number of CPU shares constant and watch the achievement of service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications
    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds it received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives. I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary
    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
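
    The control loop described above can be caricatured in a few lines of code. The sketch below is purely illustrative - it is one reading of the idea, not code from the patent or from Solaris, and every type and delegate in it is a hypothetical placeholder:

        using System;
        using System.Threading;

        // Illustrative sketch only: hold the service level objective constant and vary the
        // resource allocation until the externally visible metric meets it. The two delegates
        // are hypothetical hooks; in a real system they would be fed by DTrace probes and by
        // the OS resource manager.
        class SloWorkloadManager
        {
            public Func<double> MeasureResponseTimeMs;   // e.g. time between "receive a web hit" and "respond"
            public Action<double> AdjustCpuShares;       // positive = grant more shares, negative = give some back

            public void Run(double objectiveMs, double step, TimeSpan samplingInterval)
            {
                while (true)
                {
                    double observed = MeasureResponseTimeMs();
                    if (observed > objectiveMs)
                        AdjustCpuShares(+step);          // missing the SLO: relieve the bottleneck
                    else if (observed < objectiveMs * 0.5)
                        AdjustCpuShares(-step);          // comfortably ahead: release resources for other workloads
                    Thread.Sleep(samplingInterval);      // re-sample and repeat
                }
            }
        }

    The point is only that the quantity being measured is the external metric (response time), while the knob being turned is the internal resource allocation.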

    Read the article

  • A Knights Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used some software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the number of 'child' orders and stopped the process once the total of child orders matched the 'parent', and so the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed. In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused and someone made the fateful decision to reuse it, and replace the old 'Power Peg' code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote:

    "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

    "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek Tragedy. All the way along one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • xrandr problem with 1900x1080p

    - by Eslam
    I have a problem with xrandr. I successfully executed the following:

        # cvt 1900 1080
        # xrandr 1900x1080 170.75 1904 2024 2224 2544 1080 1083 1093 1120 -hsync +vsync

    but unfortunately the following command:

        # xrandr --addmode VGA-0 1900x1080

    returned the following error:

        X Error of failed request: BadMatch (invalid parameter attributes)
        Major opcode of failed request: 153 (RANDR)
        Minor opcode of failed request: 18 (RRAddOutputMode)
        Serial number of failed request: 29
        Current serial number in output stream: 30

    The following command output might help in identifying the problem:

        # glxinfo | grep -i opengl
        OpenGL vendor string: NVIDIA Corporation
        OpenGL renderer string: GeForce 310M/PCIe/SSE2
        OpenGL version string: 3.3.0 NVIDIA 310.14
        OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
        OpenGL extensions:

        # lspci | grep -i vga
        01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [GeForce 310M] (rev a2)

    Any ideas what might be wrong?

    Read the article

  • Is there any difference between processor and core?

    - by Salvador
    The following two commands seem to give me different information about the same hardware:

        srs@ubuntu:~$ cat /proc/cpuinfo | grep -e processor -e cores
        processor : 0
        cpu cores : 4
        processor : 1
        cpu cores : 4
        processor : 2
        cpu cores : 4
        processor : 3
        cpu cores : 4

        srs@ubuntu:~$ sudo dmidecode -t processor
        # dmidecode 2.9
        SMBIOS 2.6 present.
        Handle 0x0004, DMI type 4, 42 bytes
        Processor Information
        Socket Designation: LGA1155
        Type: Central Processor
        Family: <OUT OF SPEC>
        Manufacturer: Intel
        ID: A7 06 02 00 FF FB EB BF
        Version: Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz
        Voltage: 1.0 V
        External Clock: 100 MHz
        Max Speed: 3800 MHz
        Current Speed: 3300 MHz
        Status: Populated, Enabled
        Upgrade: Other
        L1 Cache Handle: 0x0005
        L2 Cache Handle: 0x0006
        L3 Cache Handle: 0x0007
        Serial Number: To Be Filled By O.E.M.
        Asset Tag: To Be Filled By O.E.M.
        Part Number: To Be Filled By O.E.M.
        Core Count: 4
        Core Enabled: 1
        Characteristics: 64-bit capable

    Until today I thought I had a single processor with 4 independent cores. I also thought that different threads could be used within each core.

    Read the article

  • How to get the correct battery status?

    - by GUI Junkie
    At this moment, ever since I installed Ubuntu on this machine, the battery status says: not present. Looking at this answer, however, I find that /proc/acpi/battery/BAT1/info (sometimes it's /proc/acpi/battery/BAT0/info, use tab completion to help) has the following info:

        present: yes
        design capacity: 4400 mAh
        last full capacity: 4400 mAh
        battery technology: rechargeable
        design voltage: 11100 mV
        design capacity warning: 300 mAh
        design capacity low: 132 mAh
        cycle count: 0
        capacity granularity 1: 32 mAh
        capacity granularity 2: 32 mAh
        model number: BAT1
        serial number: 11
        battery type: 11
        OEM info: 11

    In accordance with this answer, I've checked the /proc/acpi/battery/BAT1/state file:

        present: yes
        capacity state: ok
        charging state: charged
        present rate: unknown
        remaining capacity: unknown
        present voltage: 10000 mV

    The acpi -b command returns:

        Battery 0: Unknown, 0%, rate information unavailable

    Any suggestions on getting the battery info updated?

    Read the article

  • How to reboot into Windows from Ubuntu?

    - by andrewsomething
    I'm looking for a way to reboot into Windows from Ubuntu on a 10.10/Vista dual boot system. The specific use case is that I would like to be able to ssh into my running Ubuntu instance and issue a command that will initiate a reboot directly into Windows. I found a promising blog post, but the script that it suggests isn't working:

        #!/bin/bash
        WINDOWS_ENTRY=`grep menuentry /boot/grub/grub.cfg | grep --line-number Windows`
        MENU_NUMBER=$(( `echo $WINDOWS_ENTRY | sed -e "s/:.*//"` - 1 ))
        sudo grub-reboot $MENU_NUMBER
        sudo reboot

    man grub-reboot isn't much help, but it seems to be leading me in the right direction: "set the default boot entry for GRUB, for the next boot only".

        WINDOWS_ENTRY=`grep menuentry /boot/grub/grub.cfg | grep --line-number Windows`
        MENU_NUMBER=$(( `echo $WINDOWS_ENTRY | sed -e "s/:.*//"` - 1 ))
        echo $MENU_NUMBER

    This returns the expected value, but on reboot the first menu entry is still highlighted. Any ideas why this isn't working or suggestions for other solutions?
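
    One thing worth checking (an assumption on my part - the blog post above may already cover it) is that GRUB is actually configured to honour a saved one-shot default at all; grub-reboot has no effect otherwise:

        # /etc/default/grub - grub-reboot only takes effect when GRUB uses the saved default
        GRUB_DEFAULT=saved

        # regenerate grub.cfg so the change is picked up
        sudo update-grub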

    Read the article

  • What is the replacement for the Web Intents HTML standard?

    - by Tom
    "Web Intents" were deprecated in Chrome 24 (November/2011) and are no longer supported in any browser: We also gathered a lot of valuable data and feedback from our experimental support for Web Intents and decided to disable the feature in today's Beta release. Is there an HTML5 standard that I can look into as an alternative to what Web Intents intended? I'm interested in how web services can be stitched together. For example, imagine a website that can import a image from any number of web-services, modify the image in some way, then push the file back to any number of other web-services, all via HTML5 standards.

    Read the article

  • Is it a good idea to use a formula to balance a game's complexity, in order to keep players in constant flow?

    - by user1107412
    I read a lot about Flow theory and its applications to video games, and an idea has stuck in my mind. If a number of weight values are applied to different parameters of a certain game level (i.e. the size of the level, the number of enemies, their overall strength, the variance in their behavior, etc.), then it should be technically possible to find an overall score mechanism for each level in the game. If a constant ratio of complexity increase were empirically defined, for instance 1.3333 or, say, the Golden Ratio, would it be a good idea to arrange the levels in such an order that overall complexity tends to increase by roughly that ratio from one level to the next? Has somebody tried it?
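
    As a rough illustration of what such a scoring mechanism could look like - the weights and parameter names below are invented for the example and would have to be tuned empirically:

        using System.Linq;

        static class LevelComplexity
        {
            // Hypothetical weights for the level parameters mentioned above.
            const double SizeWeight = 0.2;
            const double EnemyCountWeight = 1.0;
            const double EnemyStrengthWeight = 1.5;
            const double BehaviourVarianceWeight = 0.8;

            // Weighted sum giving one overall complexity score per level.
            public static double Score(double levelSize, int enemyCount, double enemyStrength, double behaviourVariance) =>
                SizeWeight * levelSize +
                EnemyCountWeight * enemyCount +
                EnemyStrengthWeight * enemyStrength +
                BehaviourVarianceWeight * behaviourVariance;

            // Ratios between consecutive levels, to compare against a target such as 1.3333.
            public static double[] GrowthRatios(double[] orderedScores) =>
                orderedScores.Skip(1).Select((score, i) => score / orderedScores[i]).ToArray();
        }

    Ordering the levels so that GrowthRatios stays near the chosen constant then becomes a sorting problem rather than a design guess.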

    Read the article

  • How to spot a good social media marketer

    - by fiftyeight
    It's a bit subjective but helpful and on-topic IMO; maybe it should be made community wiki. I've been in contact with some guest bloggers lately who are interested in publishing posts on my website's blog. So far I've done a regular weekly blog and had a guy marketing it, but he's too busy to do more work right now. Now I need someone to market these blog posts on social media. The last guy I got by recommendation from a friend, but now I need to find one myself. What criteria should I check when it comes to social media marketers? Do the number of followers and fans on their accounts and/or the number of votes they get for articles they market mean anything, or is it impossible to know if it's spam? That's about the only criterion I could think of so far... Thanks to anyone who helps.

    Read the article

  • Design leaderboard ratings for quiz games

    - by PeterK
    Back in March 2011 I started the following post: How to design a leaderboard? Now my quiz game has been out for approximately a year and sold pretty decently. I am working on updating the game design and am again looking into the leaderboard design to make it better, as I am not happy with it. Currently I rate players on the number of correct answers, which is not good as it does not consider things like the number of games, difficulty levels etc. I also have "extended" stats behind the UITableView (Leaderboard).

    A player can play based on three levels of difficulty: hard, medium or easy
    Difficulty levels can be mixed between players in a game
    Each game can be one to six players, so there can be single games or duels
    Between 2 and 30 questions per game

    As I am considering integrating the Game Center Leaderboard I need to design a better rating system, so I would like to ask for some ideas on how to do the rating based on the above. I am thinking about how much a point would be worth and what it includes.
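
    For what it's worth, here is one possible shape for such a rating. Every weight below is an invented example, not a recommendation - the point is only that difficulty, game size and accuracy can all feed a single number:

        // Hypothetical rating for one finished game; a player's leaderboard score
        // could be the sum (or a decaying average) of these per-game ratings.
        static double RateGame(int correctAnswers, int questions, string difficulty, int playersInGame)
        {
            double difficultyFactor;
            if (difficulty == "hard")        difficultyFactor = 1.5;
            else if (difficulty == "medium") difficultyFactor = 1.2;
            else                             difficultyFactor = 1.0;   // easy

            double accuracy = questions > 0 ? (double)correctAnswers / questions : 0.0;
            double gameSizeBonus = 1.0 + 0.05 * (playersInGame - 1);   // small bonus from duels up to 6-player games

            return correctAnswers * difficultyFactor * gameSizeBonus + 10.0 * accuracy;
        }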

    Read the article

  • Debugging Minimum Translation Vector

    - by SyntheCypher
    I implemented the minimum translation vector from codezealot's tutorial on SAT (Separating Axis Theorem), but I'm having an issue I can't quite figure out. Here's the example I have: As you can see in the top and bottom left images, regardless of which side of the green car the red car is penetrating, the MTV for the red car still remains a negative number. Also, here is the same example when the front of the red car is facing the opposite direction: the number will always be positive. When the red car is past halfway through the green car it should switch polarity. I thought I'd compensated for this in my code, but apparently not - either that or it's a bug I can't find. Here is my function for finding and returning the MTV; any help would be much appreciated: Code
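
    Without seeing the linked function it is hard to say where the sign is lost, but the usual fix for this symptom (sketched below with System.Numerics types, not code from the codezealot tutorial) is to orient the MTV using the vector between the two shapes' centres once the axis of smallest overlap is known:

        using System.Numerics;

        // After SAT has found the axis with the smallest overlap, make the MTV point
        // from the other shape (B, the green car) towards this shape (A, the red car),
        // so its sign no longer depends on which side A happens to be penetrating.
        static Vector2 OrientMtv(Vector2 smallestAxis, float overlap, Vector2 centerA, Vector2 centerB)
        {
            Vector2 aToB = centerB - centerA;
            if (Vector2.Dot(aToB, smallestAxis) > 0)
                smallestAxis = -smallestAxis;        // flip so the MTV pushes A away from B
            return smallestAxis * overlap;           // translation that separates the shapes
        }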

    Read the article

  • AS3 Calculating Delta Time In Seconds

    - by user1133079
    Here is how I've been trying to implement delta time, based on different internet resources:

        var startTime:Number = getTimer();
        game.Update(deltaTime);
        deltaTime = Number(getTimer() - startTime) * 0.001;

    My issue with this is it doesn't seem to be giving me accurate timing. The main update shows the frame time at 0.001, and when reinitializing the level it goes to 0.002. I'm using dt elsewhere for a timer and later for time-based physics, so I would like it to work as expected. I must be missing something silly.
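
    The snippet above seems to time how long game.Update itself takes, not the gap since the previous frame, which would explain the constant 0.001-0.002 readings. A minimal sketch of the usual pattern (written here in C# with a hypothetical update callback, since the idea is the same in AS3) keeps the previous frame's timestamp around:

        using System;
        using System.Diagnostics;

        class FrameTimer
        {
            readonly Stopwatch clock = Stopwatch.StartNew();
            double lastTime;                              // seconds at the previous frame

            // Call once per frame; 'update' stands in for game.Update(deltaTime).
            public void Tick(Action<double> update)
            {
                double now = clock.Elapsed.TotalSeconds;
                double deltaTime = now - lastTime;        // time since the *previous* frame
                lastTime = now;
                update(deltaTime);
            }
        }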

    Read the article

  • Unknown C# keywords: params

    - by Chris Skardon
    Often overlooked, and (some may say) unloved, is the params keyword, but it's an awesome keyword, and one you should definitely check out. What does it do? Well, it lets you specify a single parameter that can have a variable number of arguments. You what? Best shown with an example. Let's say we write an add method:

        public int Add(int first, int second)
        {
            return first + second;
        }

    Meh, it's alright, does what it says on the tin, but it's not exactly awe-inspiring… Oh noes! You need to add 3 things together???

        public int Add(int first, int second, int third)
        {
            return first + second + third;
        }

    Oh yes, you code master you! Overloading a-plenty! Now a fourth… Ok, this is starting to get a bit ridiculous, if only there was some way…

        public int Add(int first, int second, params int[] others)
        {
            return first + second + others.Sum();
        }

    So now I can call this with any number of int arguments? – well, any number > 2..? Yes!

        int ret = Add(1, 2, 3);

    is as valid as:

        int ret = Add(1, 2, 3, 4);

    Of course you probably won't really need to ever do that method, so what could you use it for? How about adding strings together? What about a logging method? We all know ToString can be an expensive method to call, it might traverse every node on a class hierarchy, or may just be the name of the type… either way, we don't really want to call it if we can avoid it, so how about a logging method like so:

        public string Log(LogLevel level, params object[] objs)
        {
            if (LoggingLevel < level)
                return string.Empty;

            StringBuilder output = new StringBuilder();
            foreach (var obj in objs)
                output.Append((obj == null) ? "null" : obj.ToString());
            return output.ToString();
        }

    Now we only call 'ToString' when we want to, and because we're passing in just objects, we don't have to call ToString if the Logging Level isn't what we want… Of course, there are a multitude of things to use params for…
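
    A couple of extra calls that are worth knowing about (standard C# behaviour, not from the original post): an existing array can be handed straight to a params parameter, and the params arguments can be left out entirely:

        int[] rest = { 3, 4, 5 };
        int a = Add(1, 2, rest);   // an int[] is accepted directly by params int[]
        int b = Add(1, 2);         // no extras at all: 'others' arrives as an empty array, so this returns 3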

    Read the article

  • Creating a shared library that might be used with desktop applications and web projects

    - by dreza
    I have been involved in a number of MVC.NET and C# desktop projects in our company over the last year or so, while also managing to keep my nose poked into other projects (in a read-only learning capacity, of course). From this I've noticed that across the various projects and teams there is a lot of functionality that has been well designed against good interfaces and abstractions. Because we tend to like our own work at times, I noticed a couple of projects had the exact same class and method copied into them, as it had obviously worked on one and so was easily moved to a new project (probably by the same developer who originally wrote it). I mentioned this fact in one of our programmer meetings we have occasionally and suggested we pull some of this functionality into a core company library that we can build up over time and use across multiple projects. Everyone agreed and I started looking into this possibility. However, I've come across a stumbling block pretty early on. Our team primarily focuses on MVC at the moment and we have projects mainly in 2.0 but are starting to branch to 3.0. We also have a number of desktop applications that might benefit from some shared classes and basic helper methods. Initially when creating this DLL I included some shared classes that could be used across any project type (Web, Client etc.), but then I started looking at adding some shared modules that would be useful in our MVC applications only. However, this meant I had to include a reference to some Microsoft Web DLLs in order to leverage some of the classes I was creating (at this stage MVC 2.0). Now my issue is that we have a shared DLL that has references to web-specific libraries that could also possibly be used in a client application. Not only that, our DLL initially referenced MVC 2.0 and we will eventually move onto MVC 3.0 for all projects, but a lot of the classes in this library I expect to still be relevant to MVC 3 etc. Our code within this DLL is separated into its own namespaces such as:

        CompanyDLL.Primitives
        CompanyDLL.Web.Mvc
        CompanyDLL.Helpers
        etc.

    So, my questions are: Is it OK to do a shared library like this, or if we have web-specific features in it, should we create a separate web DLL only targeted at a specific framework or MVC version? If it's OK, what kind of issues might we face when using the library that references MVC 2 in an MVC 3 project, for example? I would be thinking that we might run into some sort of compatibility issue, or even issues where the developers using the library don't realize they need the MVC 2.0 libraries. They might only want to use some of the generic classes. The concept seemed like a good idea at the time, but I'm starting to think maybe it's not really a practical solution. But the number of times I've seen copied classes and methods across projects because they are proven, tested code is a bit unnerving, to be perfectly honest!

    Read the article

  • ATG Live Webcast Nov. 29th: Endeca "Evolutionizes" E-Business Suite

    - by Bill Sawyer
    If you have ever wanted any of the following within Oracle E-Business Suite:

    Complete Data View
    Advanced Searching Across Organizations and Flexfields
    Advanced Visualization including Charts, Metrics, and Cross Tabs
    Guided Navigation

    then you might want to attend this webcast to learn more about Oracle Endeca's integration with Oracle E-Business Suite. Oracle Endeca includes an unstructured data correlation and analytics engine, together with catalog search and guided navigation capabilities. This webcast focuses on the details behind Oracle Endeca's integration with Oracle E-Business Suite. It demonstrates how you can extend the use of Oracle Endeca into other areas of Oracle E-Business Suite.

    Date: Thursday, November 29, 2012
    Time: 8:00 AM - 9:00 AM Pacific Standard Time
    Presenter: Osama Elkady, Senior Director
    Webcast Registration Link (Preregistration is optional but encouraged)

    To hear the audio feed:
    Domestic Participant Dial-In Number: 877-697-8128
    International Participant Dial-In Number: 706-634-9568
    Additional International Dial-In Numbers Link
    Dial-In Passcode: 103192

    To see the presentation, the Direct Access Web Conference details are:
    Website URL: https://ouweb.webex.com
    Meeting Number: 595335921

    If you miss the webcast, or you have missed any webcast, don't worry -- we'll post links to the recording as soon as it's available from Oracle University. You can monitor this blog for pointers to the replay. And you can find our archive of our past webcasts and training here. If you have any questions or comments, feel free to email Bill Sawyer (Senior Manager, Applications Technology Curriculum) at BilldotSawyer-AT-Oracle-DOT-com.

    Read the article

  • Best Practices for High Volume CPA Import Operations with ebXML in B2B 11g

    - by Shub Lahiri, A-Team
    Background
    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities. This note highlights one aspect of the best practices for import of CPAs when large numbers of CPAs, in excess of several hundred, are required to be maintained within the B2B repository.

    Symptoms
    The import of a CPA is usually a 2-step process, namely creating a soa.zip file using the b2bcpaimport utility based on a CPA properties file and then using b2bimport to import it into the B2B repository. The commands are provided below:

        ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
        ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners within the repository goes up, the time to complete the second command could go up to ~30 secs per operation. So, this could add up to a significant amount if there is a need to import hundreds of CPAs into a production system within a limited downtime/maintenance window.

    Remedy
    In situations where there is a large number of entries to be imported, it is best to set up a staging environment and go through the import operation of each individual CPA in an empty repository. Since this will be done in an empty repository, the time taken for completion should be reasonable. After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. If this single file with all the partner entries is imported into a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.

    Results
    Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the individual import time for each entry takes ~30 secs. So, if we had to import another 100 partners, the individual entries would take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners from a staging environment earlier, the import takes about ~5 mins. The total processing time for the loading of metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary
    The following diagram summarizes the entire approach and process.

    Acknowledgements
    The material posted here has been compiled with help from the B2B Engineering and Product Management teams.

    Read the article

  • Enable Configurator for Return Orders

    - by ChristineS-Oracle
    With release 12.2.4, for non-referenced RMAs, Order Management allows you to configure the model from the Sales Order / Quick Sales Order windows. This is only allowable when the profile OM: Enable Configuration UI for RMA is set to Yes. All selected options must be returnable, as well as all included items. Order Management explodes included items and creates options and option classes in a way similar to outbound orders. The application creates all selected components with the same line number but different option/component numbers. Additionally, the application does not allow re-configuration and/or deletion of any line if any line in the same configuration is received, fulfilled, closed, cancelled, or split. For additional information refer to the Oracle Order Management Release Notes for Release 12.2.4 (Doc ID 1906521.1).

    Read the article

  • How to solve linear recurrences involving two functions?

    - by Aditya Bahuguna
    Actually I came across a question in Dynamic Programming where we need to find the number of ways to tile a 2 x N area with tiles of given dimensions. Here is the problem statement. Now, after a bit of recurrence solving, I came out with these:

        F(n) = F(n-1) + F(n-2) + 2G(n-1), and
        G(n) = G(n-1) + F(n-1)

    I know how to solve the linear recurrence model where only one function is involved. For large N, as is the case in the above problem, we can do matrix exponentiation and achieve O(k^3 log N) time, where k is the smallest number such that F(n) depends only on F(n-1), ..., F(n-k). The method of solving a linear recurrence with matrix exponentiation is given in that blog. Now, for a linear recurrence involving two functions, can anyone suggest an approach feasible enough for large N?
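
    For what it's worth, the same matrix-exponentiation trick still applies if the state vector carries both functions. From the two recurrences above, one possible formulation (my own working, so worth re-checking against the base cases) is:

        \begin{pmatrix} F(n) \\ F(n-1) \\ G(n) \end{pmatrix}
        =
        \begin{pmatrix} 1 & 1 & 2 \\ 1 & 0 & 0 \\ 1 & 0 & 1 \end{pmatrix}
        \begin{pmatrix} F(n-1) \\ F(n-2) \\ G(n-1) \end{pmatrix}

    Raising the 3x3 matrix to the required power with fast exponentiation then gives the answer in O(3^3 log N) time: the same approach as for a single-function recurrence, just with a slightly larger state vector.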

    Read the article

  • Google Keyword Competition rating

    - by Eric
    Google offers a Keyword application that allows me to see the number of times a particular query has been made in Google. There is a column in the results named "Competition" (actually it's "Concurrence" in French; I'm just translating). It's a rating from 0 to 1, as in a percentage. What indicator is that? EDIT: Is this something useful I should rely on? I'm not sure how to interpret this data. Should I go for less competitive keywords with a lower number of searches, or not worry about it and go for the highly searched keywords anyway? Is 50% considered high? What about 75%? I have a very niche market that sells expensive offline services, so the very long tail is my goal (I assume). If you haven't already figured it out, I'm very new to SEO =)

    Read the article

  • How You Helped Shape Java EE 7...

    - by reza_rahman
    For many of us working with the JCP for years, the commitment to transparency and openness is very clear. For others, perhaps the most visible sign to date of this high regard for grassroots-level input is a survey on Java EE 7 conducted a few months ago. The survey was designed to get open feedback on a number of critical issues central to the Java EE 7 umbrella specification, including which APIs to include in the standard. The survey was highly successful, with a large number of high-quality responses. With Java EE 7 under our belt and the horizons for Java EE 8 emerging, this is a good time to thank everyone who took the survey once again for their thoughts and to let you know what the impact of your voice actually was. I've posted the details on my personal blog. I hope you are encouraged by how your input to the survey helped shape Java EE 7 and continues to shape Java EE 8. Maybe now is the time for you to get more involved :-)?

    Read the article

  • ipod not mounting

    - by rls
    Tried to connect my iPod, but got this message:

        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb2,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try
        dmesg | tail or so

    Have seen links to this here, but being rather green, I don't understand much. https://bugs.launchpad.net/ubuntu/+source/util-linux/+bug/734883 What do I do now? The dmesg | tail output says:

        [ 2819.709437] sd 6:0:0:0: [sdb] 3901376 4096-byte logical blocks: (15.9 GB/14.8 GiB)
        [ 2819.710161] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2819.735294] sdb: [mac] sdb1 sdb2
        [ 2819.738060] sd 6:0:0:0: [sdb] 3901376 4096-byte logical blocks: (15.9 GB/14.8 GiB)
        [ 2819.738671] sd 6:0:0:0: [sdb] Assuming drive cache: write through
        [ 2819.738688] sd 6:0:0:0: [sdb] Attached SCSI removable disk
        [ 2820.420130] sd 6:0:0:0: [sdb] Bad block number requested
        [ 2820.420167] hfs: unable to find HFS+ superblock
        [ 2820.612140] sd 6:0:0:0: [sdb] Bad block number requested
        [ 2820.612191] hfs: unable to find HFS+ superblock

    Read the article

  • Using The Windows Server AppFabric Cache with ASP.NET

    Did you know that you can use the AppFabric Cache with ASP.NET? AppFabric Cache provides an ASP.NET session state provider. There are a number of reasons that you would want to consider using the AppFabric Cache instead of other caching technologies, including the built-in ASP.NET caching. The AppFabric Cache provides a number of benefits to ASP.NET programmers. When web applications need to maintain state, especially across a Web Farm, or need to maintain objects across restarts...

    Read the article
