Search Results

Search found 28744 results on 1150 pages for 'higher order functions'.


  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it.

    Currently, the UI layer creates only an Outer instance; the parameters for the Outer constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this?

    I could keep the current approach, where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. The Outer class would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check whether the user requested an override to any of the constructor arguments.

    Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user.

    Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
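    For illustration, here is a minimal Python sketch of the second approach – construction moved into a controller layer that consults an overrides mapping. All class names, parameters, and the override-key scheme are hypothetical, not taken from the question:

        # Hedged sketch: constructors receive fully-built dependencies,
        # so a controller layer can apply user overrides at any depth.
        class Inner:
            def __init__(self, depth):
                self.depth = depth

        class Middle:
            def __init__(self, inner, scale):
                self.inner = inner
                self.scale = scale

        class Outer:
            def __init__(self, middle):
                self.middle = middle

        def build_outer(user_input, overrides=None):
            """Controller layer: walks the dependency graph bottom-up,
            letting an override replace any default constructor argument."""
            overrides = overrides or {}
            inner = Inner(depth=overrides.get("inner.depth", user_input["depth"]))
            middle = Middle(inner, scale=overrides.get("middle.scale", 1.0))
            return Outer(middle)

        # The UI still only asks for an Outer; overrides reach deep into the graph.
        outer = build_outer({"depth": 3}, overrides={"middle.scale": 2.5})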


  • Workflow 4.5 is Awesome, can't wait for 5.0!

    - by JoshReuben
    About two years ago I wrote a blog post describing what I would like to see in Workflow vNext: http://geekswithblogs.net/JoshReuben/archive/2010/08/25/workflow-4.0---not-there-yet.aspx

    At the time WF 4.0 was a little rough around the edges – the State Machine was on CodePlex and people were simulating state machines with Flowcharts. Last year I built a near-realtime machine management system using WF 4.0.1 – it's managing the internal operations of this device: http://landanano.com/products/commercial

    Well, WF 4.5 has come a long way – many of my gripes have been addressed:
    - C# expressions – no more VB 'AndAlso' clauses
    - State machine awesomeness – can query current state
    - Many designer improvements – Document Outline is so much more succinct than the Designer!
    - Separate WCF service contract interfaces and the ability to generate activities from contract operations
    - Ability to rehydrate to updated flow definitions via DynamicUpdateMap and WorkflowIdentity

    You can read about the new features here: http://msdn.microsoft.com/en-us/library/hh305677(VS.110).aspx

    2013 could be the year of Workflow evangelism for .NET, as it comes together as the DSL language. E.g., on Azure it could be used to graphically orchestrate between WebRoles, WorkerRoles, AppFabric Queues and the ServiceBus – that would be grand.

    Here's a list of things I'd like to see in Workflow 5.0:
    - Stronger parallelism support for true multithreaded workflows. A workflow executes on a single thread – wouldn't it be great if we had the ability to model TPL Dataflow? Parallel is not really parallel, it just allows AsyncCodeActivity.
    - Support for recursion
    - An ExpressionTree activity with an editor design surface
    - A math activity pack
    - Return of application-level protocol (3.51 WF services) – automatically expose a state machine as a WCF service, with bookmark Receive activities generated from OperationContract automatically placed in state transition triggers
    - A new HTML5 ActivityDesigner control – with different CSS3 skinnable hooks and remote connectivity (had to roll my own)
    - A data flow view – crucial to understanding the big picture
    - Ability to refactor a Sequence to a custom activity in a separate .xaml file – like Expression Blend does for UserControl
    - State machine global error handling – if all states go to an error state, you quickly get visual spaghetti. You could nest a state machine, but what if you want an application-level protocol whereby each state exposes certain WCF ops?
    - DSL RAD editing – make the Document Outline into a DSL editor for adding activities. For WF to really succeed as a higher level of abstraction, it needs to be more productive than raw coding – drag & drop on the designer is currently too slow compared to just typing code.
    - Extensible wizard API – for a pluggable WF editor experience
    - Other execution models beyond Sequence, Flowchart & StateMachine: SSIS, Behavior Trees, Wolfram Model tool – surprise us!
    - Improvements to the Designer debugging API – SourceLocation is tied to XAML file line number and char position, and ModelService access seems convoluted – why not leverage WPF LogicalTreeHelper / VisualTreeHelper?

    Workflow Team, keep on rocking!


  • How can I fix broken i915 drivers for Intel GPUs?

    - by Alen Mujezinovic
    I've got trouble getting the i915 drivers to work correctly on my laptop (HP Pavilion DM4 2101ea). Specifically, the laptop screen goes black and stays black after the splash graphic when booting, both from USB key and from hard drive. To get anything onto the display after the splash screen I have to boot with one of:

        acpi=off
        nomodeset
        i915.modeset=0

    I'd rather not turn ACPI off because I like my fans spinning, and nomodeset is a bit overkill, so for now I'm booting with i915.modeset=0. Unfortunately, this turns off KMS, and my current maximum resolution on the laptop screen is fixed at 1024x768 instead of its real capability.

    When I don't set any of the above boot flags and plug in an external monitor, the external monitor works fine. When booting with the flags, the external monitor works fine too, but can only do 1024x768 and can't do anything other than mirroring the laptop display.

    I did upgrade the i915 drivers from 2.17, which ships with Precise, to 2.19, which is the most recent version, but without any luck getting anything to display. Here's my lspci output:

        00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
        00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
        00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
        00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
        00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5)
        00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
        00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
        00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05)
        00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05)
        00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
        01:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01)
        02:00.0 Unassigned class [ff00]: Realtek Semiconductor Co., Ltd. RTS5116 PCI Express Card Reader (rev 01)
        08:00.0 Ethernet controller: Atheros Communications Inc. AR8151 v2.0 Gigabit Ethernet (rev c0)

    Here's lshw -C video:

        *-display UNCLAIMED
             description: VGA compatible controller
             product: 2nd Generation Core Processor Family Integrated Graphics Controller
             vendor: Intel Corporation
             physical id: 2
             bus info: pci@0000:00:02.0
             version: 09
             width: 64 bits
             clock: 33MHz
             capabilities: msi pm vga_controller bus_master cap_list
             configuration: latency=0
             resources: memory:c0000000-c03fffff memory:b0000000-bfffffff ioport:4000(size=64)

    Both outputs were generated after booting with i915.modeset=0. Here's a complete Xorg.log file from a boot into a black screen: https://gist.github.com/479ce06454e47d6123e1

    The graphics card is an Intel HD 3000 integrated GPU. I've never had problems with Intel hardware on Ubuntu before, so this is very surprising.

    If you could provide a method to make i915 work, suggest alternative drivers, provide a way to boot with i915.modeset=0 but with higher resolutions and KMS on, or explain what is happening and how to fix it, I'll give you an answer badge. :) Thanks


  • Emaroo 1.4.0 Released

    - by WeigeltRo
    Emaroo is a free utility for browsing most recently used (MRU) lists of various applications. Quickly open files, jump to their folder in Windows Explorer, copy their path – all with just a few keystrokes or mouse clicks.

    tl;dr: Emaroo 1.4.0 is out, go download it on www.roland-weigelt.de/emaroo

    Why Emaroo? Let me give you a few examples. Let's assume you have pinned Emaroo to the first spot on the task bar so you can start it by hitting Win+1.

    - To start one of the most recently used Visual Studio solutions, you type Win+1, [maybe arrow key down a few times], Enter. This means that you can start the most recent solution simply by Win+1, Enter.
    - What else? If you want to open an Explorer window at the file location of the solution, you type Ctrl+E instead of Enter.
    - If you know that the solution contains "foo" in its name, you can type "foo" to filter the list. Because this is not a general-purpose search like e.g. the Search charm, but instead operates only on the MRU list of a single application, you usually have to type only a few characters until you can press Enter or Ctrl+E.
    - Ctrl+C copies the file path of the selected MRU item, Ctrl+Shift+C copies the directory.
    - If you have several versions of Visual Studio installed, the context menu lets you open a solution in a higher version.
    - Using the context menu, you can open a Visual Studio solution in Blend.

    So far I have only mentioned Visual Studio, but Emaroo knows about other applications, too. It remembers the last application you used, and you can change between applications with the left/right arrow or accelerator keys. Press F1 or click the Emaroo icon (the tab to the right) for a quick reference.

    Which applications does Emaroo know about? Emaroo knows the MRU lists of:
    - Visual Studio 2008/2010/2012/2013
    - Expression Blend 4, Blend for Visual Studio 2012, Blend for Visual Studio 2013
    - Microsoft Word 2007/2010/2013
    - Microsoft Excel 2007/2010/2013
    - Microsoft PowerPoint 2007/2010/2013
    - Photoshop CS6
    - IrfanView (most recently used directories)
    - Windows Explorer (directories most recently typed into the address bar)

    Applications that are not installed aren't shown, of course.

    Where can I download it? On the Emaroo website: www.roland-weigelt.de/emaroo

    Have fun!


  • Calculated Fields - Idiosyncrasies

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    Calculated Fields and some of their Idiosyncrasies

    Did you try to write a calculated field formula directly into the screen? Good luck – you'll need it! Calculated fields are a sophisticated OOB feature of SharePoint, so you could think that they are best left to the end users – at least to the power users. But they reach their limits before the "professionals" do, and the tough ones come back to us anyway.

    Back to business; the simpler the formula, the easier it is. Still, use your favorite editor to write it, then cut it and paste it into the ridiculously small window. What about complex formulae? Write them in steps! Here is a case in point and an idiosyncrasy or two.

    Our welders need to be certified and recertified every two years. Some of them are certifiable…, but I digress. To be certified you need to pass an eye exam and two more tests – test A and test B. For each of those you have an expiry date. When renewed, each expiry date is advanced by two years from the date of renewal. My users wanted a visual clue so that when the supervisor looks at the list, she'll have a KPI symbol telling her if anything expired (red), is going to expire within the next 90 days (yellow), or is not to be worried about (green). Not all the dates are filled, and any blank date implies a complete lack of certification in the particular requirement.

    Obviously, I needed to figure the minimum of these 3 dates – a simple enough formula: =MIN([Date_EyeExam], [Date_TestA], [Date_TestB]). Aha! Here is idiosyncrasy #1. When one of the dates is null, MIN(Date1, Date2) returns the non-null date. Null is construed as "far, far away". The funny thing is that when you compare it to Today, the null is the lesser one. So a null is less than Today in a comparison, but not when MIN is calculated.

    Now, to me the fact that the welder does not have an exam date is synonymous with his exam being prehistoric, or at least past due. So here is what I did.

    Solution: Let's set a blank date to 1/1/1800. How will we do that? Use the IF: IF([Field] rel relValue, TrueValue, FalseValue). rel is any relationship operator <, >, <=, >=, =, <>. If the field is related to the relValue as prescribed, the IF returns the TrueValue, otherwise it returns the FalseValue. Thus: =IF([SomeDate]="",1/1/1800,[SomeDate]) will return 1/1/1800 if the date is blank, and the date itself if not. So, using this formula, if the welder missed an exam, the returned exam date will be far in the past.

    It would be nice if we could take such a formula and make it into a reusable function. Alas, here is a serious calculated-field shortcoming: you cannot write subs and functions!! Aha, but we can use interim calculated fields! So let's create 3 calculated fields as follows:

    1: c_DateTestA as a calculated field of the date type, with the formula: IF([Date_TestA]="",1/1/1800,[Date_TestA])
    2: c_DateTestB as a calculated field of the date type, with the formula: IF([Date_TestB]="",1/1/1800,[Date_TestB])
    3: c_DateEyeExam as a calculated field of the date type, with the formula: IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam])

    And now use these to get c_MinDate. This is again a calculated field of type date, with the formula: MIN(c_DateTestA, c_DateTestB, c_DateEyeExam)

    Note that I omitted the square brackets. In properly named fields – where there are no embedded spaces – we don't need the square brackets. I actually strongly recommend using underscores in place of spaces in all the field names in your lists. Among other things, it makes using CAML much simpler.

    Now, we still need to apply the KPI to this minimal date. I am going to use the available KPI graphics that come with SharePoint and are always available in your 12 hive:

    "/_layouts/images/kpidefault-2.gif" is the red KPI
    "/_layouts/images/kpidefault-1.gif" is the yellow KPI
    "/_layouts/images/kpidefault-0.gif" is the green KPI

    And here is the nested IF formula that will do the trick:
    =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif"))

    Nice! BUT when I tested, it did not work! This is idiosyncrasy #2: a calculated field based on a calculated field based on a calculated field does not work. You have to stop at two levels!

    Back to the drawing board: we have to reduce by one level. How? We'll eliminate the c_DateX items in the formula and replace them with the proper IF formulae. Notice that this needs to be done with precision. You are much better off doing it in your favorite line editor than inside the cramped space that SharePoint gives you. So here is the result:

    MIN(IF([Date_TestA]="",1/1/1800,[Date_TestA]), IF([Date_TestB]="",1/1/1800,[Date_TestB]), IF([Date_EyeExam]="",1/1/1800,[Date_EyeExam]))

    The parentheses have to match for this formula to work. Now we can leave the KPI formula as is and test again. This time with SUCCESS!

    Conclusion: build the inner functions first, and then embed them inside the outer formulae. Do this as long as necessary. Use your favorite line editor. Limit yourself to 2 levels.

    That's all folks! Almost! As soon as I finished doing all of the above, my users added yet another level of complexity. They added another test – a test that must be passed but never expires – and asked for yet another KPI, this time in black, to denote that any test is not just past due, but altogether missing. I just finished this. Let's hope it ends here!

    And oh, the formula
    =IF(c_MinDate<=Today,"/_layouts/images/kpidefault-2.gif",IF(c_MinDate<Today+90,"/_layouts/images/kpidefault-1.gif","/_layouts/images/kpidefault-0.gif"))
    deals with "Today", and this is a subject deserving a discussion of its own!

    That's all folks?! (and this time I mean it)
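    As a sanity check outside SharePoint, here is a hedged Python sketch of the same coalesce-then-compare logic (the field names, 1/1/1800 sentinel, and 90-day window are taken from the post; everything else is illustrative):

        from datetime import date, timedelta

        FAR_PAST = date(1800, 1, 1)  # stand-in for a blank date, as in the post

        def kpi_image(date_test_a, date_test_b, date_eye_exam, today=None):
            """Mimic the nested IF/MIN calculated-field logic: blank dates
            become 1/1/1800, then the minimum date drives the KPI choice."""
            today = today or date.today()
            coalesced = [d if d is not None else FAR_PAST
                         for d in (date_test_a, date_test_b, date_eye_exam)]
            min_date = min(coalesced)
            if min_date <= today:
                return "/_layouts/images/kpidefault-2.gif"   # red: expired
            if min_date < today + timedelta(days=90):
                return "/_layouts/images/kpidefault-1.gif"   # yellow: expiring soon
            return "/_layouts/images/kpidefault-0.gif"       # green: fine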


  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmarks today. One is an updated TPC-C from a few months ago, where we had just over 4.8M tpmC at $0.98, and we just updated it to 5.05M and $0.89. The other one is related to Java middleware performance. You can find the press release here.

    Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not on the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers.

    The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup with tons of storage, many disks, lots of RAM, lots of CPUs, cores, threads, and large database setups. Getting the whole setup going to start tuning is, by itself, no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high RPM, it's actually hitting the circuit. The tests require long runs, and require surviving availability tests, such as surviving crashes -and- recovery under load.

    In the TPC-C example, the x4800 system had 4TB RAM, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, and tons of LUNs visible to the OS. Flash storage, non-flash storage... many things at high scale that all have to be perfectly synchronized.

    During this process we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases, and all this goes back into the products. As more and more customers, for Oracle Linux, are running larger and larger, faster and faster, more mission-critical, higher-availability databases..., these things are just absolutely critical. Unrelated to what anyone's specific opinion is about TPC-C or TPC-H or SPECjEnterprise etc., there is a ton of effort that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms. Whether it's faster, more stable, more scalable, or more resilient, it helps.

    Another point that I always like to re-iterate around UEK and UEK2: we have our kernel source git repository online, with the complete changelog of the mainline kernel and our changes – easy to pull, easy to dissect, easy to know what went in when, why and where. No need to log into a website and manually click through pages to hopefully discover changes or patches. No need to untar 2 tarballs and run a diff.


  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This enabled organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes.

    Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the "pooling" letter of credit draw method to requesting on a "grant-by-grant" basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015. (The NIH had planned to fully transition to this new method by the end of fiscal 2014; however, the impact to institutions was deemed significant enough that a reprieve was recently granted.)

    In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:

    1. Federal Award Identification Number on the Proposal and Award Profile
    2. Letter of credit fields on contract lines to support award-basis draws and comply with Federal close-out mandates
    3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format
    4. Subaccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report
    5. Added Subaccount field and contract info to be displayed on the LOC Summary page
    6. Ability to generate pro forma and invoiced draw listings by a variety of dimensions
    7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching
    8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close

    The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government.

    For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can log in to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.


  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support).

    I've read a lot on the subject from Udi Dahan, who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me:

        As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above.
        -- Udi Dahan, Clarified CQRS

    From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) is supposed to deal with coarse-grained messages so as to minimize network chatter – among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely.

    Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes, as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out.

    Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point – sort of like 3NF in databases – and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
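    To make the granularity contrast concrete, here is a small Python sketch of an "Excel-like" coarse update next to intent-capturing commands. All class and field names are hypothetical, not taken from Udi Dahan's article:

        from dataclasses import dataclass, field

        @dataclass
        class UpdateCustomer:
            """Coarse-grained, Excel-like: which user intent produced these changes?"""
            customer_id: int
            changed_fields: dict = field(default_factory=dict)  # e.g. {"preferred": True}

        @dataclass
        class MakeCustomerPreferred:
            """Fine-grained: one unit of user intent, trivially conflict-free."""
            customer_id: int

        @dataclass
        class RecordCustomerMove:
            """Another distinct intent, rather than another row in a field dict."""
            customer_id: int
            new_address: str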


  • Big-name School for Undergrad Students

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York) and still getting a pretty decent education while saving myself from over a hundred thousand dollars' worth of debt?

    Yes, I know questions like this have been asked before (namely here and here), but please bear with me, because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know:

    Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd easily get rid of over a hundred thousand dollars of debt?

    And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth?

    And if I go to SUNY Binghamton, which is far lesser-known than the others I've considered (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss?

    The answers to these questions all tie into my final college decision (again, assuming I make it into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing).

    Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but I will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.


  • MatheMagics - Guess My Age Method 1

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    MatheMagic – Guess My Age – Method 1

    The Mathemagician stands on the stage and asks an adult to do the following:

    · Do the next few steps on your calculator, or the calculator in your phone, or even on a piece of paper.
    · Do it silently! Don't tell me the results until I ask for them directly.
    · Compute a single-digit multiple of 9 – any one of 9, 18, 27, … all the way to 81 will do.
    · Now multiply your age by 10.
    · Subtract the 9 multiple from this number.
    · Tell me the result.

    Notice that I don't know which multiple of 9 you subtracted from 10 times your age. I will nonetheless immediately tell you what your age is. How do I do this? Let's do the algebra:

    10X – 9Y = 10X – 10Y + Y = 10(X – Y) + Y

    Now remember, you asked an adult, so his/her age is a two-digit number (maybe even 3 digits), thus reducing it by the single digit multiplied by nine is still positive – the lowest it can be is 100 – 81, which yields 19. Now make two numbers out of the result: the last digit, and the number before it. That number is X – Y, the age minus the single digit you selected. The last digit is this very single digit. This is always so, regardless of the digit you selected. So… add this digit to the other number and you get back the age! Q.E.D.

    Example: I am 76 years old and here is what happens when I do the steps:

    76 x 10 = 760
    760 – 18 = 742, made of 74 and 2. My age is 74 + 2.
    760 – 81 = 679, made of 67 and 9. My age is 67 + 9.

    A note to the socially aware mathemagician – it is safer to do it with a man. The chances of a veracious answer are much, much higher!

    The trick may be performed on any 2- or 3-digit number, not just one's age, but if you want to know your date's age, it's a good way to elicit it.

    That's All Folks

    PS: for more ageless "age" mathemagics go to www.mgsltns.com/games.htm and also here: http://geekswithblogs.net/PointsToShare/archive/2011/11/15/mathemagics---guess-my-age---method-2.aspx
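    For the skeptical, a short Python sketch (mine, not the author's) that verifies the algebra for every adult age and every single-digit multiple of 9:

        def guess_age(result):
            """Split off the last digit and add it back to the rest."""
            rest, last_digit = divmod(result, 10)
            return rest + last_digit

        # 10*age - 9*y == 10*(age - y) + y, so the split always recovers the age.
        for age in range(10, 130):
            for y in range(1, 10):          # the subtracted multiple is 9*y
                assert guess_age(age * 10 - 9 * y) == age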


  • Behaviour Trees with irregular updates

    - by Robominister
    I'm interested in behaviour trees that aren't iterated every game tick, but every so often. (Edit: the tree could specify how many frames within the main game loop to wait before running its tick function again.)

    Every theoretical implementation I have seen of behaviour trees talks of the tree search being carried out every game update – which seems necessary, because a leaf node (e.g. a behaviour, like 'return to base') needs to be constantly checked to see if it is still running, failed or completed. Can anyone suggest how I might start implementing a tree that isn't run every tick, or point me in the direction of good material specific to this case (I am struggling to find anything)?

    My thoughts so far:

    - Action leaf nodes (when they start) must only push some kind of action object onto a list for an entity, rather than directly calling any code that makes the entity do something. The list of actions for the entity would be run every frame (update any that need to run, pop any that have completed from the list).
    - The return state from a given action must be fed back into the tree, so that when we run the tree iteration again (and reach the same action leaf node – so the tree has so far determined that we ought to still be trying this action) we know whether the action has completed or is still running.
    - If my actual action code is running from an action list on an entity, then I possibly need to cancel previously running actions in the list – I am thinking that I can just delete the entire stack of queued-up actions. I've seen the idea of ActionLists which block lower-priority actions when a higher-priority one is added, but this seems like very close logic to behaviour trees, and I don't want to be duplicating behaviour.

    This leaves me with some questions:

    1) How would I feed the action return state back into the tree? It's obvious I need to store some information relating to 'currently executing actions' on the entity, and check that in the tree tick, but I can't imagine how.

    2) Does having a separate behaviour tree (for deciding behaviour) and action list (for carrying out actual queued-up actions) sound like a reasonable approach? (See the sketch below.)

    3) Is the approach of updating a behaviour tree irregularly actually used by anyone? It seems like a nice idea for budgeting AI search time when you have a lot of AI entities to process.

    (Edit) – I am also thinking about storing a single instance of a given behaviour tree in memory and providing it by reference to any entity that uses it. So any information about what action was last selected for execution on an entity must be stored in a data context relative to the entity (which the tree can check). (I am probably answering my own questions as I go!)

    I hope I have expressed my questions adequately! Thanks in advance for any help :)
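    A minimal Python sketch of the split described above – the tree only queues action objects and reads back their status, while the entity runs its action list every frame. All names and the status protocol are my own assumptions, not an established API:

        RUNNING, SUCCESS, FAILURE = "running", "success", "failure"

        class ReturnToBase:
            name = "return_to_base"
            def tick(self, entity):
                # ... move the entity toward its base a little each frame, then:
                return RUNNING          # or SUCCESS / FAILURE when finished

        class Entity:
            def __init__(self):
                self.actions = []       # queued action objects, run every frame
                self.last_status = {}   # action name -> last reported status

            def update(self):           # called every game tick
                if self.actions:
                    action = self.actions[0]
                    status = action.tick(self)
                    self.last_status[action.name] = status
                    if status != RUNNING:
                        self.actions.pop(0)

        def tree_tick(entity):
            """Run only every N frames: consult last_status instead of
            executing behaviour directly; cancel stale actions on re-plan."""
            if entity.last_status.get(ReturnToBase.name) == RUNNING:
                return RUNNING          # still busy; keep the current plan
            entity.actions.clear()      # drop the queued-up stack of actions
            entity.actions.append(ReturnToBase())
            return RUNNING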


  • algorithm for project euler problem no 18

    - by Valentino Ru
    Problem number 18 from Project Euler's site is as follows:

        By starting at the top of the triangle below and moving to adjacent numbers on the row below, the maximum total from top to bottom is 23.

               3
              7 4
             2 4 6
            8 5 9 3

        That is, 3 + 7 + 4 + 9 = 23. Find the maximum total from top to bottom of the triangle below:

            75
            95 64
            17 47 82
            18 35 87 10
            20 04 82 47 65
            19 01 23 75 03 34
            88 02 77 73 07 63 67
            99 65 04 28 06 16 70 92
            41 41 26 56 83 40 80 70 33
            41 48 72 33 47 32 37 16 94 29
            53 71 44 65 25 43 91 52 97 51 14
            70 11 33 28 77 73 17 78 39 68 17 57
            91 71 52 38 17 14 91 43 58 50 27 29 48
            63 66 04 68 89 53 67 30 73 16 69 87 40 31
            04 62 98 27 23 09 70 98 73 93 38 53 60 04 23

        NOTE: As there are only 16384 routes, it is possible to solve this problem by trying every route. However, Problem 67 is the same challenge with a triangle containing one hundred rows; it cannot be solved by brute force, and requires a clever method! ;o)

    The formulation of this problem does not make clear whether the "traversor" is greedy, meaning that it always chooses the child with the higher value, or whether the maximum over every single walkthrough is asked for.

    The NOTE says that it is possible to solve this problem by trying every route. This means to me that it is also possible without! This leads to my actual question: assuming that the greedy walk is not the max, is there any algorithm that finds the max walkthrough value without trying every route and that doesn't act like the greedy algorithm?

    I implemented an algorithm in Java, putting the values first in a node structure, then applying the greedy algorithm. The result, however, is considered wrong by Project Euler:

        int sum = 0;

        void findWay(Node node) {
            sum += node.value;
            if (node.nodeLeft != null && node.nodeRight != null) {
                if (node.nodeLeft.value > node.nodeRight.value) {
                    findWay(node.nodeLeft);
                } else {
                    findWay(node.nodeRight);
                }
            }
        }
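    For what it's worth, the standard non-greedy answer is bottom-up dynamic programming: collapse the triangle from the last row upward, keeping the best achievable total at each position. A hedged Python sketch (not the poster's Java):

        def max_path_total(triangle):
            """triangle is a list of rows, e.g. [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]."""
            rows = [row[:] for row in triangle]        # don't mutate the input
            for i in range(len(rows) - 2, -1, -1):     # second-to-last row upward
                for j in range(len(rows[i])):
                    rows[i][j] += max(rows[i + 1][j], rows[i + 1][j + 1])
            return rows[0][0]                          # best total from the apex

        small = [[3], [7, 4], [2, 4, 6], [8, 5, 9, 3]]
        assert max_path_total(small) == 23             # matches the worked example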


  • Common SOA Problems by C2B2

    - by JuergenKress
    SOA stands for Service Oriented Architecture and has only really come together as a concrete approach in the last 15 years or so, although the concepts involved have been around for longer. Oracle SOA Suite is based around the Service Component Architecture (SCA) devised by the Open SOA collaboration of companies, including Oracle and IBM. SCA, as used in SOA Suite, is designed as a way to crystallise the concepts of SOA into a standard which ensures that SOA principles like the separation of application and business logic are maintained.

    Orchestration or Integration?

    A common thing to see with many people who are beginning either to build a new SOA-based infrastructure or to move an old system to be service oriented is confusion about the purpose of SOA technologies like BPEL and enterprise service buses. For a lot of problems, orchestration tools like BPEL or integration tools like an ESB will both do the job and achieve the right objectives; however it's important to remember that, although a hammer can be used to drive a screw into wood, that doesn't mean it's the best way to do it.

    Service integration is the act of connecting components together at a low level, which usually results in a single external endpoint for you to expose to your customers or other teams within your organisation – a simple product ordering system, for example, might integrate a stock checking service and a payment processing service. Process orchestration, however, is generally a higher-level approach whereby the (often externally exposed) service endpoints are brought together to track an end-to-end business process. This might include the earlier example of a product ordering service and couple it with a business rules service and a human task to handle edge cases. A good (but not exhaustive) rule of thumb is that integrations performed by an ESB will usually be real-time, whereas process orchestration in a SOA composite might comprise processes which take a certain amount of time to complete, or have to wait pending manual intervention.

    BPEL vs BPMN

    For some, with pre-existing SOA or business process projects, this decision is effectively already made. For those embarking on new projects it's certainly an important consideration for those using Oracle SOA software since, due to the components included in SOA Suite and BPM Suite, the choice of which to buy is determined by what they offer. Oracle SOA Suite has no BPMN engine, whereas BPM Suite has both a BPMN and a BPEL engine. SOA Suite has the ESB component "Mediator", whereas BPM Suite has none. Decisions must be made, therefore, on whether just one or both process modelling languages are to be used. The wrong decision could be costly further down the line.

    Design for performance: read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.


  • Jitter during wall collisions with Bullet Physics: contact/penetration tolerance?

    - by Niriel
    I use the Bullet physics engine through Panda3D. My scene is still very simple, think 'Wolfenstein 3D': tile-based, walls are solid cubes. I expect walls to block the player, and I expect the player to slide along the walls in case of non-normal incidence. What I get is what I expect, with one difference: there is some jitter. If I try to force myself into the wall, then I see the frames blinking quickly between two positions. These differ by about 0.04 units of distance, which corresponds to 4 cm in my game.

    I noticed a 4 cm elsewhere: the bottom of my player capsule is 4 cm below ground when at rest. Does that mean that there is somewhere in the Bullet engine a default 0.04-unit tolerance to differentiate contact from collision? If so, what should I do? Should I change the scale of my game so that these 0.04 units correspond to 0.4 cm, making the jitter ten times smaller? Or can I ask Bullet to change its tolerance to a smaller value?

    Edit: This is the jitter I get (6.155 - 6.118 = 0.036):

        LPoint3f(0, 6.11694, 0.835)
        LPoint3f(0, 6.15499, 0.835)
        LPoint3f(0, 6.11802, 0.835)
        LPoint3f(0, 6.15545, 0.835)
        LPoint3f(0, 6.11817, 0.835)
        LPoint3f(0, 6.15726, 0.835)
        LPoint3f(0, 6.11876, 0.835)
        LPoint3f(0, 6.15911, 0.835)
        LPoint3f(0, 6.11937, 0.835)

    I found a setMargin method. I set it to 5 mm both on the BoxShape for the walls and on the Capsule shape for the player. It still jitters by about 35 mm, as illustrated by this log (11.117 - 11.082 = 0.035):

        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1169, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1175, 0.905)
        LPoint3f(0, 11.0822, 0.905)
        LPoint3f(0, 11.1178, 0.905)
        LPoint3f(0, 11.0823, 0.905)
        LPoint3f(0, 11.1183, 0.905)

    The margin on the capsule did change my penetration with the floor, though; I'm a bit higher (0.905 instead of 0.835). However, it did not change anything when colliding with the walls. How can I make the collisions against the walls less jittery?

    Edit, the day after: After more investigation, it appears that dynamic objects behave well. My problem comes from the btKinematicCharacterController that I use for moving my character; that thing is totally bugged, according to the whole Internet :/.
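    For reference, this is roughly how the margin experiment above looks in Panda3D's Bullet bindings (a hedged sketch: the shape dimensions are invented, and I'm assuming the setMargin method the post mentions behaves as described):

        from panda3d.core import Vec3
        from panda3d.bullet import BulletBoxShape, BulletCapsuleShape, ZUp

        # Wall tile: a unit cube, so half extents of 0.5; margin set to 5 mm
        wall_shape = BulletBoxShape(Vec3(0.5, 0.5, 0.5))
        wall_shape.setMargin(0.005)

        # Player capsule (radius, cylinder height, up axis), same margin
        player_shape = BulletCapsuleShape(0.3, 1.0, ZUp)
        player_shape.setMargin(0.005)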


  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    How can I get my ATI/AMD drivers to work with any kernel above 3.2.0.x?

    WHAT DID WORK
    Installed the original AMD64 version of the Ubuntu 12.04 ISO image. Burned a DVD and installed, which showed kernel 3.2.0-23 to begin with. Got 5.1 surround sound working. Got ATI (now AMD) video drivers installed for my Radeon HD R6870 video card from AMD's website. fglrxinfo came up and reported as normal.

    THE PROBLEM
    Kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI/AMD drivers do not install on any kernel above 3.2.0.x.

    WHAT I HAVE TRIED
    I have gone over this tutorial many times (https://help.ubuntu.com/community/BinaryDriverHowto/ATI) and it doesn't work on ANY kernel except 3.2.0.x. The problem I am having is that the ATI/AMD drivers work for 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. So I looked all over the web for 3 days trying to find an answer, and the advice for the lock-up issue was simply to update the kernel. So I did. I have tried many kernels. All of them... no lock-ups. BUT the restricted AMD drivers from the AMD website will not install. And none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried.

    EXAMPLE OUTPUT OF 3D TYPE OF ERRORS

        javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350)
            at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174)
            at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520)
            at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131)
            at haven.HavenPanel.<init>(HavenPanel.java:68)
            at haven.HavenPanel.<init>(HavenPanel.java:78)
            at haven.MainFrame.<init>(MainFrame.java:182)
            at haven.MainFrame.main2(MainFrame.java:306)
            at haven.MainFrame.access$100(MainFrame.java:34)
            at haven.MainFrame$7.run(MainFrame.java:360)
            at java.lang.Thread.run(Thread.java:722)

    And of course this is what fglrxinfo shows:

        X Error of failed request: BadRequest (invalid request code or no such operation)
          Major opcode of failed request: 139 (ATIFGLEXTENSION)
          Minor opcode of failed request: 66 ()
          Serial number of failed request: 13
          Current serial number in output stream: 13

    EDIT: I forgot to mention that I DID look at this post over the last few days and it did not help.


  • Partner BI Applications 4-Day Hands-on Training Workshop

    - by Mike.Hallett(at)Oracle-BI&EPM
    12th - 15th February 2012, Oracle Reading (UK) - REGISTER NOW

    This training will provide attendees with an in-depth working understanding of the architecture and the technical and functional content of the Oracle Business Intelligence Applications, whilst also providing an understanding of their installation, configuration and extension.

    The course will cover the following topics:
    - Overview of Oracle Business Intelligence Applications
    - Oracle BI Applications Fundamentals and Features
    - Configuring BI Applications for Oracle E-Business Suite
    - Understanding BI Applications Architecture
    - Fundamentals of BI Applications Security

    Prerequisites - This training is only for OPN member Partners.
    - Good understanding of basic data warehousing concepts
    - Hands-on experience in Oracle Business Intelligence Enterprise Edition
    - Hands-on experience in Informatica
    - Good understanding of any of the following Oracle EBS modules: General Ledger, Accounts Receivables, Accounts Payables
    - Some understanding of Oracle BI Applications is required (see Sales & Technical Tutorials for OBI, BI-Apps and Hyperion EPM)

    Please note that attendees are required to bring a laptop:
    - 4GB RAM, recognized by Windows 64-bit
    - 80GB free space on the hard drive or an external device
    - CPU: Core 2 Duo or higher
    - Operating system requirements: Windows 7, Windows XP or Windows 2003; NOT ALLOWED with Windows Vista
    - An Administrator user


  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn't deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But, Sales doesn't contact them quickly enough, while the lead is still interested."

    According to SiriusDecisions:
    - Up to 90% of leads never make it to closure
    - Sales works on only 11% of the leads supplied by Marketing
    - Only 18% of the leads Sales accepts convert to opportunities
    - Yet, 45% of prospects typically buy a product from someone within 12 months

    The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late.

    Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It's time to connect the silos of marketing and sales pipelines and analytics. It's time for integrated sales and marketing automation.

    Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect – self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process.

    Integrated lead-to-revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It's a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue.

    Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.


  • Proven Approach to Financial Progress Using Modern Best Practice

    - by Oracle Accelerate for Midsize Companies
    by Larry Simcox, Sr. Director, Oracle Midsize Programs

    Top performing organizations generate 25 percent higher profit margins and grow at twice the rate of their competitors. How do they do it? Recently, Dr. Stephen G. Timme, President of FinListics Solutions and Adjunct Professor at the Georgia Institute of Technology, joined me on a webcast to answer that question. I've known Dr. Timme since my days at G-Log, when we worked together to help customers determine the ROI of transportation management solutions. We were also joined by Steve Cox, Vice President of Oracle Midsize Programs, who recently published an Oracle e-book, "Modern Best Practice Explained". In this webcast, Cox provides his perspective on how the best performing companies are moving from best practice to modern best practice.

    Watch the webcast replay and you'll learn about the easy-to-follow, top-down approach to:
    - Identify processes that should be targeted for improvement
    - Leverage a modern best practice maturity model to start a path to progress
    - Link financial performance gaps to operational KPIs
    - Improve cash flow by benchmarking key financial metrics
    - Develop intelligent estimates of achievable cash flow benefits

    Click HERE to watch a replay of the webcast.

    You might also be interested in the following:
    - Video: Modern Best Practices Defined
    - AppCast: Modern Best Practices for Growing Companies

    Looking for more news and information about Oracle Solutions for Midsize Companies?
    - Read the latest Oracle for Midsize Companies Newsletter
    - Sign up to receive the latest communications from Oracle's industry leaders and experts

    Larry Simcox, Senior Director, Oracle Midsize Programs, is responsible for supporting and creating marketing content, communications, and sales and partner program support for Oracle's go-to-market activities for midsize companies. I have over 17 years of experience helping customers identify the value and ROI from their IT investment. I live in Charlotte, NC with my family and my dog Dingo. The views expressed here are my own, and not necessarily those of Oracle.


  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth: 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color") compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows etc.).

    The following can be assumed to be in place:
    1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created.
    2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort).
    3: Drivers for the graphics card that support 10-bit output.

    If applications that support 10-bit output and color profiles were used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an AdobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor.

    For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. (A small sketch of this arithmetic follows below.)

    My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc.? Does Ubuntu differ from other operating systems (Linux and others) on this point?
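    Here is a small Python sketch of the arithmetic in that example (the 987 and 246 cutoffs are the post's illustrative numbers, not measured values):

        def to_corrected(value8, channel_max):
            """Scale an 8-bit sRGB level into a gamut-corrected channel range."""
            return round(value8 / 255 * channel_max)

        # 10-bit path: 256 source levels map into 0..987 with no collisions,
        # because the step size (987/255) is greater than 1.
        assert len({to_corrected(v, 987) for v in range(256)}) == 256

        # 8-bit path: the same correction squeezed into 0..246 must collide,
        # so some tonal steps collapse together (visible as banding).
        assert len({to_corrected(v, 246) for v in range(256)}) < 256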


  • Critical Patch Updates During EBS 11i Exception to Sustaining Support Period

    - by Elke Phelps (Oracle Development)
    As previously blogged in the EBS 11i and 12.1 Support Timeline Changes entry, two important changes to the Oracle Lifetime Support policies were announced at Oracle OpenWorld 2012 - San Francisco. These changes affect E-Business Suite Releases 11i and 12.1.

    Critical Patch Updates for EBS 11i during the Exception to Sustaining Support Period

    You may be wondering about the availability of Critical Patch Updates (CPU) for EBS 11i during the Exception to Sustaining Support period. The following details the E-Business Suite Critical Patch Update support policy for EBS 11i during that period:

    - Oracle will continue to provide CPUs containing critical security fixes for E-Business Suite 11i. CPUs will be packaged and released as cumulative patches for both ATG RUP 6 and ATG RUP 7.
    - As always, we try to minimize the number of patches and dependencies required for uptake of a CPU; however, there have been quite a few changes to the 11i baseline since its release. For dependency reasons the 11i CPUs may require a higher number of files in order to bring them up to a consistent, stable, and well-tested level.
    - EBS 11i customers will continue to receive CPUs up to and including the October 2014 CPU.

    Where can I learn more?

    There are two interlocking policies that affect the E-Business Suite: Oracle's Lifetime Support policies for each EBS release (timelines which were updated by this announcement), and the Error Correction Support policies (which state the minimum baselines for new patches). For more information about how these policies interact, see: Understanding Support Windows for E-Business Suite Releases.

    What about E-Business Suite technology stack components?

    Things get more complicated when one considers individual techstack components such as Oracle Forms or the Oracle Database. To learn more about the interlocking EBS+techstack component support windows, see these two articles:
    - On Apps Tier Patching and Support: A Primer for E-Business Suite Users
    - On Database Patching and Support: A Primer for E-Business Suite Users

    Where can I learn more about Critical Patch Updates?

    The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents.

    Related Articles
    - EBS 11i and 12.1 Support Timeline Changes
    - Frequently Asked Questions about Latest EBS Support Changes
    - Extended Support Fees Waived for E-Business Suite 11i and 12.0

    Read the article

  • How Do I Do Alpha Transparency Properly In XNA 4.0?

    - by Soshimo
    Okay, I've read several articles, tutorials, and questions regarding this. Most point to the same technique, which doesn't solve my problem. I need the ability to create semi-transparent sprites (Texture2D's, really) and have them overlay another sprite. I can achieve that somewhat with the code samples I've found, but I'm not satisfied with the results and I know there is a way to do this. In mobile programming (BREW) we did it old school and actually checked each pixel for transparency before rendering. In this case it seems to render the sprite below it blended with the alpha above it. This may be an artifact of how I'm rendering the texture but, as I said before, all examples point to this one technique. Before I go any further, here is my example code:

        public void Draw(SpriteBatch batch, Camera camera, float alpha)
        {
            int tileMapWidth = Width;
            int tileMapHeight = Height;

            batch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend,
                        SamplerState.PointWrap, DepthStencilState.Default,
                        RasterizerState.CullNone, null, camera.TransformMatrix);

            for (int x = 0; x < tileMapWidth; x++)
            {
                for (int y = 0; y < tileMapHeight; y++)
                {
                    int tileIndex = _map[y, x];
                    if (tileIndex != -1)
                    {
                        Texture2D texture = _tileTextures[tileIndex];
                        batch.Draw(
                            texture,
                            new Rectangle(
                                x * Engine.TileWidth,
                                y * Engine.TileHeight,
                                Engine.TileWidth,
                                Engine.TileHeight),
                            // Only the alpha component varies; R, G, B stay at 1.
                            new Color(new Vector4(1f, 1f, 1f, alpha)));
                    }
                }
            }

            batch.End();
        }

    As you can see, in this code I'm using the overloaded SpriteBatch.Begin method which takes, among other things, a blend state. I'm almost positive that's my problem. I don't want to BLEND the sprites, I want them to be transparent when alpha is 0. In this example I can set alpha to 0 but it still renders both tiles, with the lower z-ordered sprite showing through, discolored because of the blending. This is not a desired effect; I want the higher z-ordered sprite to fade out and not affect the color beneath it in such a manner. I might be way off here, as I'm fairly new to XNA development, so feel free to steer me in the correct direction in the event I'm going down the wrong rabbit hole. TIA
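    One likely factor behind the discoloration described above, sketched here under the assumption that the textures go through the standard XNA 4.0 content pipeline: BlendState.AlphaBlend in XNA 4.0 expects premultiplied alpha, so varying only the alpha component of the tint color (as in the Draw call above) blends incorrectly. A minimal alternative is to scale the whole tint color by the fade factor:

        // Premultiplied-alpha fade: Color's * operator scales R, G, B, and A
        // together, which matches what BlendState.AlphaBlend expects in XNA 4.0.
        // 'destination' stands in for the same Rectangle used in the Draw call above.
        batch.Draw(texture, destination, Color.White * alpha);

        // Or keep straight (non-premultiplied) alpha and change the blend state:
        // batch.Begin(SpriteSortMode.Texture, BlendState.NonPremultiplied, ...);

    This is a sketch, not a verified fix for the exact scene above.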

    Read the article

  • Maximizing the Value of Software

    - by David Dorf
    A few years ago we decided to increase our investments in documenting retail processes and architectures. There were several goals, but the main two were to help retailers maximize the value they derive from our software and to help system integrators implement our software faster. The sale is only part of our success metric -- it's actually more important that the customer realize the benefits of the software. That's when we actually celebrate.

    This week many of our customers are gathered in Chicago to discuss their successes during our annual Crosstalk conference. That provides the perfect forum to announce the release of the Oracle Retail Reference Library. The RRL is available for free to Oracle Retail customers and partners. It contains thousands of hours of work and represents years of experience in the retail industry. The Retail Reference Library is composed of three offerings:

    Retail Reference Model
    We've been sharing the RRM for several years now, with lots of accolades. The RRM is a set of business process diagrams at varying levels of granularity. This release marks the debut of Visio documents, which should make it easier for retailers to adopt and edit the diagrams. The processes represent an approximation of the Oracle Retail software, but at higher levels they are pretty generic and therefore usable with other software as well. Using these processes, the business and IT are better able to communicate the expectations of the software. They can be used to guide customization when necessary, and help identify areas for optimization in the organization.

    Retail Reference Architecture
    When embarking on a software implementation project, it can be daunting to start from a blank sheet of paper. So we offer the RRA, a comprehensive set of documents that describe the retail enterprise in terms of logical architecture, physical deployments, and systems integration. These documents and diagrams describe how all the systems typically found in a retail enterprise work together. They serve as a way to jump-start implementations using best practices we've captured over the years.

    Retail Semantic Glossary
    Have you ever seen two people argue over something because they're using misaligned terminology? It's a huge waste, and it happens all the time. The Retail Semantic Glossary is a simple application that allows retailers to define terms and metrics in a centralized database. This initial version comes with limited content, with the goal of adding more over subsequent releases. This is the single source for defining key performance indicators, metrics, algorithms, and terms so that the retail organization speaks in a consistent language.

    These three offerings are downloaded from My Oracle Support separately and linked together using the start page above. Everything is navigated using a Web browser. See the Oracle Retail Documentation blog for more details.

    Read the article

  • Do I need to go to a big-name university?

    - by itaiferber
    As a soon-to-be graduating high school senior in the U.S., I'm going to be facing a tough decision in a few months: which college should I go to? Will it be worth it to go to Cornell or Stanford or Carnegie Mellon (assuming I get in, of course) to get a big-name computer science degree, internships, and connections with professors, while taking on massive debt; or am I better off going to SUNY Binghamton (probably the best state school in New York), still getting a pretty decent education while saving myself over a hundred thousand dollars' worth of debt?

    Yes, I know questions like this have been asked before (namely here and here), but please bear with me, because I haven't found an answer that fits my particular situation. I've read the two linked questions above in depth, but they haven't answered what I want to know:

    Yes, I understand that going to a big-name college can potentially get me connected with some wonderful professors and leaders in the field, but on average, how does that translate financially? I mean, will good connections pay off so well that I'd easily get rid of over a hundred thousand dollars of debt? And how does the fact that I can get a fifth-year master's degree at Carnegie Mellon play into the equation? Will the higher degree right off the bat help me get a better-paying job just out of college, or will the extra year only put me further into debt? Not having to go to graduate school to get a comparable degree will, of course, be a great financial relief, but will getting it so early give it any greater worth?

    And if I go to SUNY Binghamton, which is far less well-known than the schools I've mentioned (although if there are any alumni out there who want to share their experience, I would greatly appreciate it), would I be closing off doors that would potentially offset my short-term economic gain with long-term benefits? Essentially, is the short-term benefit outweighed by a potential long-term loss? The answers to these questions all tie into my final college decision (again, assuming I get into these schools), so I hope that asking the skilled and knowledgeable people of the field will help me make the right choice (if there is such a thing).

    Also, please note: I'm in a rather peculiar situation where I can't pay for college without taking out a bunch of loans, but I will be getting little to no financial aid (federal or otherwise). I don't want to elaborate on this too much (so take it at face value), but this is mainly the reason I'm asking the question. Thanks a lot! It means a lot to me.

    Read the article

  • Update in Certification Exam Score Report Access Process!

    - by Richard Lefebvre
    Please note that exam results for all Oracle Certification exams will be accessed through CertView, starting October 30th, 2012. Exam results will no longer be available at the test center or on the Pearson VUE website. Candidates will receive an email from Oracle within 30 minutes of completing the exam to let them know that their exam results are available on CertView. Candidates must have an Oracle Web Account to access CertView. This new process applies to exam results for all Oracle Certification exams - proctored and non-proctored, as well as beta exams.

    CertView, Oracle's self-service certification portal, will be the partners' one-stop source for all their certification and exam history! Other benefits of this change include: driving all candidates to have an Oracle Web Account, which will lead to tighter integration with Oracle University records in the future; increased security around data privacy; and a higher validity rate for candidate email addresses. Existing benefits of CertView include self-service access to exam and certification records and logos, and access to Oracle's self-service certification verification.

    Accessing Exam Results

    Returning CertView Users
    · Click the link in the email sent by Oracle or go to certview.oracle.com
    · Select the See My New Exam Results Now link to view exam results
    · Select the Print My New Exam Results Now link to print exam results

    New CertView Users - Who Have An Oracle Web Account
    · First-time users must authenticate their CertView account
    · Account authentication requires the Oracle Testing ID and email address from your Pearson VUE profile
    · Click the link in the email sent by Oracle, or go to certview.oracle.com and follow the Authenticate My CertView Account link

    New CertView Users - Who Do Not Have An Oracle Web Account
    · CertView users are required to have an Oracle Web Account
    · To create an Oracle Web Account, go to certview.oracle.com and select the Create My Oracle Web Account Now link, then follow the remaining instructions under "I do not have an Oracle Web Account" on that page

    Read the article

  • Career advice: stay with PHP or start a new career in something else ( .Net?)

    - by Christian P
    I'm planning on moving to NY in 6-12 months tops, so I'm forced to find a new job. Since I'm planning to start my life in another city, it's probably also a good time to think about career changes. I've found a lot of different opinions about PHP vs .NET vs Java, but that is not the topic here. I don't want to start a new fight about which language is better. Knowing a programming language is not the most important thing for being a software developer. To be a really good developer you need to know OOP, design patterns, testing... and the language is just a tool to make things happen.

    So back to my question. I have mixed experience in IT - 1 year as an IT support guy (Windows administration and support), around 2 years of experience in embedded programming (VB.Net 2005), and for the last 2 years I've been working with PHP/MySQL. I have worked with the Magento web shop, assisted in some projects in Symfony, and modified a few Drupal sites.

    My main concerns are the following: do I continue to improve my skills in PHP, e.g. by learning a major PHP framework like Zend or Symfony, and maybe getting a PHP certification? Or do I start learning .NET or Java? I'm more familiar with .NET, so I'll probably choose it if the choice falls between .NET and Java (or you could convince me to choose Java :). Career-wise, I don't know what the best choice is. Learning a new framework and language is more time-consuming than improving my existing skills in PHP. But with .NET you have a lot of possibilities (Windows Phone 7 development, Silverlight, WPF) and possibly bigger chances of finding better jobs. PHP jobs are paid less than .NET ones, at least according to my research (correct me if I'm wrong). But if I start now with .NET, I'm just a beginner and my salary will be low. I need at least 2+ years of experience in some language to even try to find a job paying more than $50-60k in NY. My main goal in the next 2-3 years is to try to find a job in the $60-80k category. Don't get me wrong, I'm not just chasing money, but money is an important factor when you're trying to start a family. I'm 27 years old and I feel that there isn't a lot of room for wrong decisions regarding my career, so any advice will be very welcome.

    Update: Thank you all for spending time to help me with my problem. All of the answers and comments have been very helpful. I have decided to stick with PHP, but also to learn C# and Silverlight 4. We'll see where life takes me.

    Read the article

< Previous Page | 432 433 434 435 436 437 438 439 440 441 442 443  | Next Page >