Search Results

Search found 11759 results on 471 pages for 'isolation level'.


  • How to avoid the exception “Substitution controls cannot be used in cached User Controls or cached Master Pages”

    - by DigiMortal
    Recently I wrote an example about using user controls with donut caching. Because cache substitutions are not allowed inside partially cached controls, you may get the error “Substitution controls cannot be used in cached User Controls or cached Master Pages” when breaking this rule. In this posting I will introduce some strategies that help to avoid this error.

    How does the Substitution control check its location? The Substitution control uses the following check in its OnPreRender method:

        protected internal override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            for (Control control = this.Parent; control != null;
                 control = control.Parent)
            {
                if (control is BasePartialCachingControl)
                {
                    throw new HttpException(SR.GetString("Substitution_CannotBeInCachedControl"));
                }
            }
        }

    It traverses the control tree upward from its parent, looking for at least one control that is partially cached. If such a control is found, the exception is thrown.

    Reusing the functionality. If you want to handle the situation yourself when your control may cause the exception mentioned above, you can use the same code. I modified the previously shown code into a method that can easily be moved to a user controls base class if you have one. If you don’t, you can use it in the controls where you need this check:

        protected bool IsInsidePartialCachingControl()
        {
            for (Control control = Parent; control != null;
                control = control.Parent)
                if (control is BasePartialCachingControl)
                    return true;

            return false;
        }

    Now it is up to you how to handle the situation where your control with substitutions is a child of some partially cached control. You can also add some debug-level output here so you can see exactly which controls in the control hierarchy are cached and cause problems.
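    If the same control has to work both with and without donut caching, one option is to branch on that check. This is a minimal sketch only: it assumes the IsInsidePartialCachingControl helper above is defined on the same control, and it uses a timestamp as stand-in content for whatever fragment the control normally substitutes.

        private static string RenderTimestamp(HttpContext context)
        {
            // Placeholder for the dynamic fragment the control would normally substitute.
            return DateTime.Now.ToString("u");
        }

        protected override void Render(HtmlTextWriter writer)
        {
            if (IsInsidePartialCachingControl())
            {
                // A parent is partially cached, so registering a substitution would throw.
                // Emit the value directly; it will simply be cached together with the parent.
                writer.Write(RenderTimestamp(Context));
            }
            else
            {
                // Safe to use donut caching: the callback runs on every request,
                // even when the surrounding page output is cached.
                Page.Response.WriteSubstitution(RenderTimestamp);
            }
        }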

    Read the article

  • 1000baseT/Full Supported and Advertised but not working!

    - by user11973
    Hello, I'm using an AT3IONT-I motherboard with an integrated card. If I use ethtool to set it to 1000 full duplex it won't work! Here is sudo ethtool eth0:

        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Advertised pause frame use: Symmetric Receive-only
        Advertised auto-negotiation: Yes
        Speed: 100Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
        Link detected: yes

    Here is sudo lshw -C network:

        *-network
           description: Ethernet interface
           product: RTL8111/8168B PCI Express Gigabit Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:04:00.0
           logical name: eth0
           version: 03
           serial: bc:ae:c5:8b:7d:33
           size: 100MB/s
           capacity: 1GB/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8168 driverversion=8.021.00-NAPI duplex=full ip=192.168.0.2 latency=0 link=yes multicast=yes port=twisted pair speed=100MB/s
           resources: irq:42 ioport:e800(size=256) memory:f8fff000-f8ffffff memory:f8ff8000-f8ffbfff memory:fbff0000-fbffffff

    And lspci -nn:

        00:00.0 Host bridge [0600]: nVidia Corporation MCP79 Host Bridge [10de:0a82] (rev b1)
        00:00.1 RAM memory [0500]: nVidia Corporation MCP79 Memory Controller [10de:0a88] (rev b1)
        00:03.0 ISA bridge [0601]: nVidia Corporation MCP79 LPC Bridge [10de:0aad] (rev b3)
        00:03.1 RAM memory [0500]: nVidia Corporation MCP79 Memory Controller [10de:0aa4] (rev b1)
        00:03.2 SMBus [0c05]: nVidia Corporation MCP79 SMBus [10de:0aa2] (rev b1)
        00:03.3 RAM memory [0500]: nVidia Corporation MCP79 Memory Controller [10de:0a89] (rev b1)
        00:03.5 Co-processor [0b40]: nVidia Corporation MCP79 Co-processor [10de:0aa3] (rev b1)
        00:04.0 USB Controller [0c03]: nVidia Corporation MCP79 OHCI USB 1.1 Controller [10de:0aa5] (rev b1)
        00:04.1 USB Controller [0c03]: nVidia Corporation MCP79 EHCI USB 2.0 Controller [10de:0aa6] (rev b1)
        00:06.0 USB Controller [0c03]: nVidia Corporation MCP79 OHCI USB 1.1 Controller [10de:0aa7] (rev b1)
        00:06.1 USB Controller [0c03]: nVidia Corporation MCP79 EHCI USB 2.0 Controller [10de:0aa9] (rev b1)
        00:08.0 Audio device [0403]: nVidia Corporation MCP79 High Definition Audio [10de:0ac0] (rev b1)
        00:09.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Bridge [10de:0aab] (rev b1)
        00:0b.0 RAID bus controller [0104]: nVidia Corporation MCP79 RAID Controller [10de:0abc] (rev b1)
        00:0c.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Express Bridge [10de:0ac4] (rev b1)
        00:10.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Express Bridge [10de:0aa0] (rev b1)
        00:15.0 PCI bridge [0604]: nVidia Corporation MCP79 PCI Express Bridge [10de:0ac6] (rev b1)
        03:00.0 VGA compatible controller [0300]: nVidia Corporation ION VGA [10de:087d] (rev b1)
        04:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller [10ec:8168] (rev 03)

    If I use

        sudo ethtool -s eth0 speed 1000 duplex full autoneg off

    then in ethtool the speed is Unknown and it doesn't work; if I set it via pre-up it won't work either... Please help!! Thanks!

    Read the article

  • How employable am I as a programmer?

    - by dsimcha
    I'm currently a Ph.D. student in Biomedical Engineering with a concentration in computational biology and am starting to think about what I want to do after graduate school. I feel like I've accumulated a lot of programming skills while in grad school, but taken a very non-traditional path to learning all this stuff. I'm wondering whether I would have an easy time getting hired as a programmer and could fall back on that if I can't find a good job directly in my field, and if so whether I would qualify for a more prestigious position than "code monkey". Things I Have Going For Me Approximately 4 years of experience programming as part of my research. I believe I have a solid enough grasp of the fundamentals that I could pick up new languages and technologies pretty fast, and could demonstrate this in an interview. Good math and statistics skills. An extensive portfolio of open source work (and the knowledge that working on these projects implies): I wrote a statistics library in D, mostly from scratch. I wrote a parallelism library (parallel map, reduce, foreach, task parallelism, pipelining, etc.) that is currently in review for adoption by the D standard library. I wrote a 2D plotting library for D against the GTK Cairo backend. I currently use it for most of the figures I make for my research. I've contributed several major performance optimizations to the D garbage collector. (Most of these were low-hanging fruit, but it still shows my knowledge of low-level issues like memory management, pointers and bit twiddling.) I've contributed lots of miscellaneous bug fixes to the D standard library and could show the change logs to prove it. (This demonstrates my ability read other people's code.) Things I Have Going Against Me Most of my programming experience is in D and Python. I have very little to virtually no experience in the more established, "enterprise-y" languages like Java, C# and C++, though I have learned a decent amount about these languages from small, one-off projects and discussions about language design in the D community. In general I have absolutely no knowledge of "enterprise-y" technlogies. I've never used a framework before, possibly because most reusable code for scientific work and for D tends to call itself a "library" instead. I have virtually no formal computer science/software engineering training. Almost all of my knowledge comes from talking to programming geek friends, reading blogs, forums, StackOverflow, etc. I have zero professional experience with the official title of "developer", "software engineer", or something similar.

    Read the article

  • Finding furthermost point in game world

    - by user13414
    I am attempting to find the furthermost point in my game world given the player's current location and a normalized direction vector in screen space. My current algorithm is:

      • convert the player's world location to screen space
      • multiply the direction vector by a large number (2000) and add it to the player's screen location to get the distant screen location
      • convert the distant screen location to world space
      • create a line running from the player's world location to the distant world location
      • loop over the bounding "walls" (of which there are always 4) of my game world
      • check whether the wall and the line intersect
      • if so, where they intersect is the furthermost point of my game world in the direction of the vector

    Here it is, more or less, in code:

        public Vector2 GetFurthermostWorldPoint(Vector2 directionVector)
        {
            var screenLocation = entity.WorldPointToScreen(entity.Location);
            var distantScreenLocation = screenLocation + (directionVector * 2000);
            var distantWorldLocation = entity.ScreenPointToWorld(distantScreenLocation);
            var line = new Line(entity.Center, distantWorldLocation);

            float intersectionDistance;
            Vector2 intersectionPoint;

            foreach (var boundingWall in entity.Level.BoundingWalls)
            {
                if (boundingWall.Intersects(line, out intersectionDistance, out intersectionPoint))
                {
                    return intersectionPoint;
                }
            }

            Debug.Assert(false, "No intersection found!");
            return Vector2.Zero;
        }

    Now this works, for some definition of "works". I've found that the further out my distant screen location is, the less chance it has of working. When digging into the reasons why, I noticed that calls to Viewport.Unproject could result in wildly varying return values for points that are "far away". I wrote this stupid little "test" to try and understand what was going on:

        [Fact]
        public void wtf()
        {
            var screenPositions = new Vector2[]
            {
                new Vector2(400, 240),
                new Vector2(400, -2000),
            };

            var viewport = new Viewport(0, 0, 800, 480);
            var projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, viewport.Width / viewport.Height, 1, 200000);
            var viewMatrix = Matrix.CreateLookAt(new Vector3(400, 630, 600), new Vector3(400, 345, 0), new Vector3(0, 0, 1));
            var worldMatrix = Matrix.Identity;

            foreach (var screenPosition in screenPositions)
            {
                var nearPoint = viewport.Unproject(new Vector3(screenPosition, 0), projectionMatrix, viewMatrix, worldMatrix);
                var farPoint = viewport.Unproject(new Vector3(screenPosition, 1), projectionMatrix, viewMatrix, worldMatrix);

                Console.WriteLine("For screen position {0}:", screenPosition);
                Console.WriteLine("  Projected Near Point = {0}", nearPoint.TruncateZ());
                Console.WriteLine("  Projected Far Point = {0}", farPoint.TruncateZ());
                Console.WriteLine();
            }
        }

    The output I get on the console is:

        For screen position {X:400 Y:240}:
          Projected Near Point = {X:400 Y:629.571 Z:599.0967}
          Projected Far Point = {X:392.9302 Y:-83074.98 Z:-175627.9}

        For screen position {X:400 Y:-2000}:
          Projected Near Point = {X:400 Y:626.079 Z:600.7554}
          Projected Far Point = {X:390.2068 Y:-767438.6 Z:148564.2}

    My question is really twofold:

      • what am I doing wrong with the unprojection such that it varies so wildly and, thus, does not allow me to determine the corresponding world point for my distant screen point?
      • is there a better way altogether to determine the furthermost point in world space given a current world space location, and a directional vector in screen space?
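    One hedged sketch of an alternative, reusing the names from the code above: keep the unprojection close to the player, where it is numerically stable, and apply the large multiplier only after the direction has been converted into world space. The 10-pixel offset and the 100000 world-unit length are arbitrary values for the example, and ScreenPointToWorld is assumed to project onto the gameplay plane.

        public Vector2 GetFurthermostWorldPoint(Vector2 directionVector)
        {
            var screenLocation = entity.WorldPointToScreen(entity.Location);
            var nearbyScreenLocation = screenLocation + (directionVector * 10);

            // Both points are well inside the viewport, so Unproject stays stable.
            var worldHere = entity.ScreenPointToWorld(screenLocation);
            var worldThere = entity.ScreenPointToWorld(nearbyScreenLocation);
            var worldDirection = Vector2.Normalize(worldThere - worldHere);

            // Do the "go far" step in world space instead of screen space.
            var line = new Line(entity.Center, entity.Center + worldDirection * 100000);

            float intersectionDistance;
            Vector2 intersectionPoint;

            foreach (var boundingWall in entity.Level.BoundingWalls)
            {
                if (boundingWall.Intersects(line, out intersectionDistance, out intersectionPoint))
                {
                    return intersectionPoint;
                }
            }

            Debug.Assert(false, "No intersection found!");
            return Vector2.Zero;
        }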

    Read the article

  • How granular should a command be in a CQ[R]S model?

    - by Aaronaught
    I'm considering a project to migrate part of our WCF-based SOA over to a service bus model (probably nServiceBus) and using some basic pub-sub to achieve Command-Query Separation. I'm not new to SOA, or even to service bus models, but I confess that until recently my concept of "separation" was limited to run-of-the-mill database mirroring and replication. Still, I'm attracted to the idea because it seems to provide all the benefits of an eventually-consistent system while sidestepping many of the obvious drawbacks (most notably the lack of proper transactional support). I've read a lot on the subject from Udi Dahan who is basically the guru on ESB architectures (at least in the Microsoft world), but one thing he says really puzzles me: As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts. [...] A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above. -- Udi Dahan, Clarified CQRS From the perspective described in the quotation, it's hard to argue with that logic. But it seems to go against the grain with respect to SOAs. An SOA (and really services in general) are supposed to deal with coarse-grained messages so as to minimize network chatter - among many other benefits. I realize that network chatter is less of an issue when you've got highly-distributed systems with good message queuing and none of the baggage of RPC, but it doesn't seem wise to dismiss the issue entirely. Udi almost seems to be saying that every attribute change (i.e. field update) ought to be its own command, which is hard to imagine in the context of one user potentially updating hundreds or thousands of combined entities and attributes as it often is with a traditional web service. One batch update in SQL Server may take a fraction of a second given a good highly-parameterized query, table-valued parameter or bulk insert to a staging table; processing all of these updates one at a time is slow, slow, slow, and OLTP database hardware is the most expensive of all to scale up/out. Is there some way to reconcile these competing concerns? Am I thinking about it the wrong way? Does this problem have a well-known solution in the CQS/ESB world? If not, then how does one decide what the "right level" of granularity in a Command should be? Is there some "standard" one can use as a starting point - sort of like 3NF in databases - and only deviate when careful profiling suggests a potentially significant performance benefit? Or is this possibly one of those things that, despite several strong opinions being expressed by various experts, is really just a matter of opinion?
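    To make the granularity trade-off concrete, here is a small, hedged illustration (the class and property names are invented for this example, not taken from the post or from NServiceBus): a pair of task-based commands that each capture one intent, next to the coarse, Excel-like update command the quotation argues against.

        // Task-based commands: one explicit business intention per message.
        public class MakeCustomerPreferredCommand
        {
            public Guid CustomerId { get; set; }
        }

        public class CustomerHasMovedCommand
        {
            public Guid CustomerId { get; set; }
            public Address NewAddress { get; set; }   // Address is a placeholder value type
        }

        // CRUD-style command: coarse and intent-free -- the "Excel-like UI" shape.
        // Every field another actor might also be editing is a potential concurrency conflict.
        public class UpdateCustomerCommand
        {
            public Guid CustomerId { get; set; }
            public string Name { get; set; }
            public Address Address { get; set; }
            public bool IsPreferred { get; set; }
            public string MaritalStatus { get; set; }
            // ...and dozens more attributes
        }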

    Read the article

  • WebCenter Customer Spotlight: Azul Brazilian Airlines

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) is the third-largest airline in Brazil, serving 42 destinations with a fleet of 49 aircraft, and employs 4,500 crew members. The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets and for the company’s marketing team to more effectively conduct its campaigns. To this end, Azul implemented Oracle WebCenter Sites, succeeding in gathering all of the site’s key information onto a single platform. Azul can now complete the Web site content updating process—which used to take approximately 48 hours—in less than five minutes.

    Company Overview
    Azul Linhas Aéreas Brasileiras (Azul Brazilian Airlines) has established itself as the third-largest airline in Brazil, based on a business model that combines low prices with a high level of service. Azul serves 42 destinations with a fleet of 49 aircraft. It operates 350 daily flights with a team of 4,500 crew members. Last year, the company transported 15 million passengers, achieving a 10% share of the Brazilian market, according to the Agência Nacional de Aviação Civil (ANAC, or the National Civil Aviation Agency).

    Business Challenges
    The company wanted to offer an innovative site with a simple purchasing process for customers to search for and buy tickets and for the company’s marketing team to more effectively conduct its campaigns.
      • Provide customers with an innovative Web site with a simple process for purchasing flight tickets
      • Bring dynamism to the Web site’s content updating process to provide autonomy to the airline’s strategic departments, such as marketing and product development
      • Facilitate integration among the site’s different application providers, such as ticket availability and payment process, on which ticket sales depend

    Solution Deployed
    Azul worked with the Oracle partner TQI to implement Oracle WebCenter Sites, succeeding in gathering all of the site’s key information onto a single platform. Previously, at least three servers and corporate information environments had directed data to the portal. The single Oracle-based platform now facilitates site updates, which are daily and constant.

    Business Results
      • Gained development freedom in all processes—from implementation to content editing
      • Gathered all of the Web site’s key information onto a single platform, facilitating its daily and constant updating, whereas the information was previously spread among at least three IT environments and had to go through a complex process to be made available online to customers
      • Reduced time needed to update banners and other Web site content from an average of 48 hours to less than five minutes
      • Simplified the flight ticket sales process thanks to tool flexibility that enabled the company to improve Web site usability

    “Oracle WebCenter Sites provides an easy-to-use platform that enables our marketing department to spend less time updating content and more time on innovative activities. Previously, it would take 48 hours to update content on our Web site; now it takes less than five minutes. We have shown the market that we are innovators, enabling customer convenience through an improved flight ticket purchase process.”
    Kleber Linhares, Information Technology and E-Commerce Director, Azul Linhas Aéreas Brasileiras

    Additional Information
      • Azul Brazilian Airlines Case Study
      • Oracle WebCenter Sites
      • Oracle WebCenter Sites Satellite Server

    Read the article

  • Parallel Port Problem in 12.04

    - by Frank Oberle
    I have a “dumb” printer attached to a parallel port in my machine which works fine under the “other” resident operating system (from Redmond) on the same machine. I recently added Ubuntu 12.04 as a dual boot on the machine, but Ubuntu doesn't seem to recognize the parallel port at all. All I need to set up a printer is a really plain-vanilla, fixed-pitch, text-only generic driver, which is present, but no parallel ports show up. (The other printers, all on USB ports, seem to work just fine.) Following what appeared to me to be the most reasonable of the many conflicting pieces of advice on the web, here's what I did.

    I added the following lines to /etc/modules:

        parport_pc
        ppdev
        parport

    Then, after rebooting, I checked to see that the lines were still present, and they were. I ran dmesg | grep par and got the following references in the output that seemed like they might have to do with the parallel port:

        [   14.169511] parport_pc 0000:03:07.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
        [   14.169516] PCI parallel port detected: 9710:9805, I/O at 0xce00(0xcd00), IRQ 21
        [   14.169577] parport0: PC-style at 0xce00 (0xcd00), irq 21, using FIFO [PCSPP,TRISTATE,COMPAT,ECP]
        [   14.354254] lp0: using parport0 (interrupt-driven).
        [   14.571358] ppdev: user-space parallel port driver
        [   16.588304] type=1400 audit(1347226670.386:5): apparmor="STATUS" operation="profile_load" name="/usr/lib/cups/backend/cups-pdf" pid=964 comm="apparmor_parser"
        [   16.588756] type=1400 audit(1347226670.386:6): apparmor="STATUS" operation="profile_load" name="/usr/sbin/cupsd" pid=964 comm="apparmor_parser"
        [   16.673679] type=1400 audit(1347226670.470:7): apparmor="STATUS" operation="profile_load" name="/usr/lib/lightdm/lightdm/lightdm-guest-session-wrapper" pid=1010 comm="apparmor_parser"
        [   16.675252] type=1400 audit(1347226670.470:8): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/mission-control-5" pid=1014 comm="apparmor_parser"
        [   16.675716] type=1400 audit(1347226670.470:9): apparmor="STATUS" operation="profile_load" name="/usr/lib/telepathy/telepathy-*" pid=1014 comm="apparmor_parser"
        [   16.676636] type=1400 audit(1347226670.474:10): apparmor="STATUS" operation="profile_replace" name="/usr/lib/cups/backend/cups-pdf" pid=1015 comm="apparmor_parser"
        [   16.677124] type=1400 audit(1347226670.474:11): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/cupsd" pid=1015 comm="apparmor_parser"
        [ 1545.725328] parport0: ppdev0 forgot to release port

    I have no idea what any of that means, but the line “parport0: ppdev0 forgot to release port” seems unusual. I was still unable to add a printer for my old clunker, so I tried the direct approach, typing

        echo “Hello” > /dev/lp0

    and received a Permission denied message. I then tried

        echo “Hello” > /dev/parport0

    which didn't give me any message at all, but still didn't print anything. Running the command sudo /usr/lib/cups/backend/parallel gives the following:

        direct parallel:/dev/lp0 "unknown" "LPT #1" "" ""

    Checking the permissions for /dev/parport0, Owner, Group, and Other are all set to read and write:

        crw-rw---- 1 root lp 6, 0 Sep 9 16:37 /dev/lp0
        crw-rw-rw- 1 root lp 99, 0 Sep 9 16:37 /dev/parport0

    The output of the command lpinfo -v includes the following line:

        direct parallel:/dev/lp0

    I've read several web postings that seem to suggest this has been a problem for several years, but the bug reports were closed because there wasn't enough information to address the issue (shades of Microsoft!). Any suggestions as to what I might be missing here?

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes? I'm making an ASP.NET MVC4 application. On every request the security details about the user will need to be checked with the area/controller/action that they are accessing to see if they are allowed to view it. The security information is stored in the database. For example: User Permission UserPermission Action ActionPermission A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action. I've been looking in to how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists, using a caching provider but I either run in to problems or performance doesn't improve. One problem that I constantly run in to is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store them somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify). One thing I know is that an EntityFramework DbContext is a unit of work that acts with 1st-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read about permission data. Upon testing this it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache. Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?
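    For comparison, below is a minimal sketch of the static read-only context described above (the SecurityContext name, the 15-minute refresh window, and the locking are assumptions made for the example; the entity names follow the tables listed earlier). The main thing worth load-testing in this design is concurrent access, since a single DbContext instance is not thread-safe, even for reads.

        // requires: using System.Data.Entity;  (for Include/Load)
        public static class PermissionCache
        {
            private static readonly object Sync = new object();
            private static readonly TimeSpan MaxAge = TimeSpan.FromMinutes(15);

            private static SecurityContext context;   // long-lived, read-only DbContext
            private static DateTime loadedAt;

            public static SecurityContext Current
            {
                get
                {
                    lock (Sync)   // guards creation/refresh only; callers still share one instance
                    {
                        if (context == null || DateTime.UtcNow - loadedAt > MaxAge)
                        {
                            if (context != null)
                                context.Dispose();

                            context = new SecurityContext();
                            context.Configuration.LazyLoadingEnabled = false;
                            context.Configuration.ProxyCreationEnabled = false;

                            // Pre-load the permission graph once; later reads never hit the database.
                            context.Users.Include("Permissions").Load();
                            context.Actions.Include("Permissions").Load();

                            loadedAt = DateTime.UtcNow;
                        }
                        return context;
                    }
                }
            }
        }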

    Read the article

  • SQL Azure Security: DoS Part II

    - by Herve Roggero
    Ah!  When you shoot yourself in the foot... a few times... it hurts! That's what I did on Sunday, to learn more about the behavior of the SQL Azure Denial Of Service prevention feature. This article is a short follow up to my last post on this feature. In this post, I will outline some of the lessons learned that were the result of testing the behavior of SQL Azure from two machines. From the standpoint of SQL Azure, they look like one machine since they are behind a NAT. All logins affected The first thing to note is that all the logins are affected. If you lock yourself out to a specific database, none of the logins will work on that database. In fact the database size becomes "--" in the SQL Azure Portal.   Less than 100 sessions I was able to see 50+ sessions being made in SQL Azure (by looking at sys.dm_exec_sessions) before being locked out. The the DoS feature appears to be triggered in part by the number of open sessions. I could not determine if the lockout is triggered by the speed at which connection requests are made however.   Other Databases Unaffected This was interesting... the DoS feature works at the database level. Other databases were available for me to use.   Just Wait Initially I thought that going through SQL Azure and connecting from there would reset the database and allow me to connect again. Unfortunately this doesn't seem to be the case. You will have to wait. And the more you lock yourself out, the more you will have to wait... The first time the database became available again within 30 seconds or so; the second time within 2-3 minutes and the third time... within 2-3 hours...   Successful Logins The DoS feature appears to engage only for valid logins. If you have a login failure, it doesn't seem to count. I ran a test with over 100 login failures without being locked.

    Read the article

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience In PeopleSoft we strive to improve User Experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes to waste time waiting for pages to load and watching a spinning glass going on and on. Those performance-affecting server trips, page-load waits and just-too-many clicks were complained about for a long time. Something had to be done. A few new designs came in PeopleSoft 9.2 helping users to access their everyday work areas easier and faster. For example, Dashboard and Work Center aggregate most accessed information sections on a single page; Related Information allows users to complete transaction-related-research without interrupting a transaction and Secure Search gets users to a specific page directly. Today we’ll talk about the Actions menu. Most PeopleSoft pages are shared between individual products and product lines. It means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline the navigation and cut down on accessing PeopleSoft pages one-page-at-a-time, we introduced a new menu design. The new menu allows accessing shared pages without the Oracle development team making any local changes, and it works as an additional one-click-path to specific high-traffic actionable pages. Let’s look at how many steps it took to Change Salary for an employee in HCM 9.1 before: Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1 In PeopleSoft 9.1 it took 5 steps + page loading time + additional verification time for making sure a correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Action menu within a table, choose a menu option, and access a correct employee’s details page to take an action. Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2 The new menu is placed on a row level which ensures the user accesses the correct employee’s details page. The Actions menu separates menu options into hierarchical sections which help to scan and access the correct option quickly. The new menu’s small size and its structure enabled users to access high-traffic pages from any page and from any part of the page. No more spinning hourglass, no more multiple pages upload. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product. Now users can: Access any target page no matter how far it is buried from the starting point; Reduce navigation and page-load time; Improve productivity and reduce errors. The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

    Read the article

  • Addressing threats introduced by the BYOD trend

    - by kyap
    With the growth of the mobile technology segment, enterprises are facing a new type of threats introduced by the BYOD (Bring Your Own Device) trend, where employees use their own devices (laptops, tablets or smartphones) not necessarily secured to access corporate network and information.In the past - actually even right now, enterprises used to provide laptops to their employees for their daily work, with specific operating systems including anti-virus and desktop management tools, in order to make sure that the pools of laptop allocated are spyware or trojan-horse free to access the internal network and sensitive information. But the BYOD reality is breaking this paradigm and open new security breaches for enterprises as most of the username/password based systems, especially the internal web applications, can be accessed by less or none protected device.To address this reality we can adopt 3 approaches:1. Coué's approach: Close your eyes and assume that your employees are mature enough to know what he/she should or should not do.2. Consensus approach: Provide a list of restricted and 'certified' devices to the internal network. 3. Military approach: Access internal systems with certified laptop ONLYIf you choose option 1: Thanks for visiting my blog and I hope you find the others entries more useful :)If you choose option 2: The proliferation of new hardware and software updates every quarter makes this approach very costly and difficult to maintain.If you choose option 3: You need to find a way to allow the access into your sensitive application from the corporate authorized machines only, managed by the IT administrators... but how? The challenge with option 3 is to find out how end-users can restrict access to certain sensitive applications only from authorized machines, or from another angle end-users can not access the sensitive applications if they are not using the authorized machine... So what if we find a way to store the applications credential secretly from the end-users, and then automatically submit them when the end-users access the application? With this model, end-users do not know the username/password to access the applications so even if the end-users use their own devices they will not able to login. Also, there's no need to reconfigure existing applications to adapt to the new authenticate scheme given that we are still leverage the same username/password authenticate model at the application level. To adopt this model, you can leverage Oracle Enterprise Single Sign On. In short, Oracle ESSO is a desktop based solution, capable to store credentials of Web and Native based applications. At the application startup and if it is configured as an esso-enabled application - check out my previous post on how to make Skype essso-enabled, Oracle ESSO takes over automatically the sign-in sequence with the store credential on behalf of the end-users. 
Combined with Oracle ESSO Provisioning Gateway, the credentials can be 'pushed' in advance from an actual provisioning server, like Oracle Identity Manager or Tivoli Identity Manager, so the end-users can login into sensitive application without even knowing the actual username and password, so they can not login with other machines rather than those secured by Oracle ESSO.Below is a graphical illustration of this approach:With this model, not only you can protect the access to sensitive applications only from authorized machine, you can also implement much stronger Password Policies in terms of Password Complexity as well as Password Reset Frequency but end-users will not need to remember the passwords anymore.If you are interested, do not hesitate to check out the Oracle Enterprise Single Sign-on products from OTN !

    Read the article

  • Dadaism and Agility

    - by alexhildyard
    We all have our little bugbears, and something that has given me particular pause over the years is the place of Agility in the software development life cycle. While I have seen it used successfully on both small and Enterprise-level projects, I have also seen many instances in which long-standing technical debt has also originated under its watch. Ironically the problem in such cases seems to me not that the practitioners in question have failed to follow due process (Test, Develop, Refactor -- a common "what" of Agile), but basically that they have missed the point (the "why" of Agile). It's probably a sign of my age that I'm much more interested in the "why" than the "what", since I feel that the latter falls out naturally from the former, but that this is not a reciprocal relationship.Consider Dadaism, precursor to the Surrealist movement in the early part of the twentieth century. Anyone could stand up and proclaim he or she was Dada; anyone could write cut-ups, or pull words out a hat, or produce gibberish on duelling typewriters under the inspiration of Dada. And all that took place at such performances was a manifestation of Dada, and all the artefacts that resulted were also Dada. Hence one commentator's engimatic observation that 'when one speaks of Dada, then one speaks of Dada. But when one does not speak of Dada, one still speaks of Dada.'What is Dada? Literally, Dada is what you say it is. But that's also missing the point. Dada is about erecting a framework within which utterances like this are valid; Dada is about preparing a stage for itself. Dadaism exemplifies the purity of a process-driven ideology -- in fact an ideology that is almost pure process, with nothing extraneous in the way of formal method, and while perhaps Agile delivery should not embrace the liberties of Dadaism too literally, some of the similarities nevertheless are salutary.Agile -- like Dada -- is an attitude; it is about *being* agile; it is not really about doing a specific set of things that are somehow *part* of being Agile. It is an abstract base rather than an implementation, a characteristic rather than a factor. It is the pragmatic response to the need for change in the face of partial information, ephemeral requirements and a healthy dose of systematic uncertainty. In practice this will usually mean repeatedly making the smallest useful changes to a system, recognising that systems evolve, and that all change carries risk. It will usually mean that instead of investing effort in future-proofing a system against a known technology roadmap, one instead invests one's energies in the daily repetition and incremental development of processes best designed to accommodate change quickly. But though it may mean these things in practice, it isn't actually *about* either of these things; it's about the mindset, the attitude that conceives of such responses as sensible solutions given the larger and ultimately unclassifiable thing that constitutes the development lifecycle of a specific project.

    Read the article

  • Techniques to re-factor garbage and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of C# spaghetti, and need to add something or remove something... but I have challenges everywhere: functions passing arguments that don't make sense, someone who doesn't understand data structures abusing strings, redundant variables, comments that are red herrings, internationalization done at every single output, SQL that doesn't use any kind of DBAL, database connections left open everywhere...

    Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold.

    Here are some samples from the same 500-line function:

        protected void DoSave(bool cIsPostBack)
        {
            //ALWAYS a cPostBack
            cIsPostBack = true;
            SetPostBack("1");
            string inCreate = "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~";
            parseValues = new string[] { "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "" };

            if (!cIsPostBack)
            {
                //.......
                //....
                //....
                if (!cIsPostBack)
                {
                }
                else
                {
                }
                //....
                //....
                strHPhone = StringFormat(s1.Trim());

                s1 = parseValues[18].Replace(encStr, " ");
                strWPhone = StringFormat(s1.Trim());

                s1 = parseValues[11].Replace(encStr, " ");
                strWExt = StringFormat(s1.Trim());

                s1 = parseValues[21].Replace(encStr, " ");
                strMPhone = StringFormat(s1.Trim());

                s1 = parseValues[19].Replace(encStr, " ");
                //(hundreds of lines of this)
                //....
                //....

                SQL = "...... lots of SQL .... ";

                SqlCommand curCommand;
                curCommand = new SqlCommand();
                curCommand.Connection = conn1;
                curCommand.CommandText = SQL;
                try
                {
                    curCommand.ExecuteNonQuery();
                }
                catch {}
                //....
            }

    I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledge base on how to do this sort of thing, finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,
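    As one hedged, behavior-preserving example of a first cleanup step (the helper name, connection string, and parameter dictionary are invented here, since the real SQL and schema are not shown), the ADO.NET tail of that sample could be pulled into a method that disposes its objects, parameterizes the query, and stops swallowing exceptions:

        // requires: using System.Data.SqlClient; using System.Collections.Generic;
        private static void ExecuteSave(string connectionString, string sql,
                                        IDictionary<string, object> parameters)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                // Parameterize instead of concatenating values into the SQL string.
                foreach (var p in parameters)
                    command.Parameters.AddWithValue(p.Key, p.Value ?? DBNull.Value);

                connection.Open();
                command.ExecuteNonQuery();   // let failures surface instead of `catch {}`
            }
        }

    Pinning the current behavior down first with a few characterization tests (asserting on whatever the function writes to the database for known inputs) is what keeps a change like this honest.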

    Read the article

  • SOA Summit - Oracle Session Replay

    - by Bruce Tierney
    If you think you missed the most recent Integration Developer News (IDN) "SOA Summit" 2013...good news, you didn't.  At least not the replay of the Oracle session titled: Three Solutionsfor Simplifying Cloud/On-Premises Integration As you will see in the reply below, this session introduces Three common reasons for integration complexity: Disparate Toolkits Lack of API Management Rigid, Brittle Infrastructure and then the Three solutions to these challenges: Unify Cloud On-premises Integration Enable Multi-channel Development with API Management Plan for the Unexpected - Future Readiness The last solution on future readiness describes how you can transition from being reactive to new trends, such as the Internet of Things (IoT), by modifying your integration strategy to enable business agility and how to recognize trends through Fast Data event processing ahead of your competition. Oracle SOA Suite customer SFpark's (San Francisco Metropolitan Transit Authority) implementation with API Management is covered as shown in the screenshot to the right This case study covers the core areas of API Management for partners to build their own applications by leveraging parking availability and real-time pricing as well as mobile enablement of data integrated by SOA Suite underneath.  Download the free SFpark app from the Apple and Android app stores to check it out. When looking into the future, the discussion starts with a historical look to better prepare for what comes next.   As shown in the image below, one of the next frontiers after mobile and cloud integration is a deeper level of direct "enterprise to customer" interaction.  Much of this relates to the Internet of Things.  Examples of IoT from the perspective of SOA and integration is also covered in the session. For example, early adopter Turkcell and their tracking of mobile phone users as they move from point A to B to C is shown in the image the right.   As you look into more "smart services" such as Location-Based Services, how "future ready" is your application infrastructure?  . . . Check out the replay by clicking the video image below to learn about these three challenges and solution including how to "future ready" your application infrastructure:

    Read the article

  • Help with Strategy-game AI

    - by f20k
    Hi, I am developing a strategy-game AI (think: Final Fantasy Tactics), and I am having trouble coming up with the design of the AI. My main problem is determining the optimal thing for it to do.

    First let me describe the priority of actions I would like the AI to take:

      1. Kill nearest player unit
      2. Fulfill primary directive (kill all player units, kill target unit, survive for x turns)
      3. Heal ally unit / cast buffer

    Now the AI can do the following in its turn:

      • Move - {Attack / Ability / Item} (either attack or ability or item)
      • {Attack / Ability / Item} - Move
      • Move closer (if targets not in range)
      • {Attack / Ability / Item} (if move not available)

    Notes:

      • Abilities have various ranges / effects / costs. Each AI unit has maybe 5-10 abilities to choose from.
      • The AI will prioritize killing over safety unless its directive is to survive for x turns. It also doesn't care about ability cost much. While a player may want to save a big spell for later, the AI will most likely use it asap.
      • Movement is on a (hex) grid.
      • Number of player units: 3-6.
      • Number of AI units: 3-7 or more. Probably max 10.
      • AI and player take turns controlling ONE unit, instead of all at the same time.
      • Platform is Android (if the program doesn't respond after some time, there will be a popup saying to Force Quit or Wait - which looks really bad!).

    Now come the questions. The best ability to use would obviously be the one that hits the most targets for the most damage. But since each ability has different ranges, I won't know if they are in range without exploring each possible place I can move to. One solution would be to go through each possible place to move to and determine the optimal attack at that location - which gives me a list of optimal moves for each location. Then choose the optimal out of the list and execute it. But this will take a lot of CPU time. Is there a better solution?

    My current idea is to move as close as possible towards the closest, largest group of people, and determine the optimal attack/ability from there. I think this would be a lot less work for the CPU and still allow for wide-range attacks. It's sub-optimal but the AI will still seem 'smart'.

    Other notes/questions:

      • Am I over-thinking/over-complicating it? Better solution? I am open to all sorts of suggestions.
      • I have taken a look at the spell-casting question, but it doesn't take into account the movement - so perhaps use that algo for each possible move location? The top answer mentioned it wasn't great for area-of-effect and group fights - so maybe it requires more tweaking?
      • Please, if you mention a graph/tree, let me know basically how to use it. E.g. Node means ability, level corresponds to damage, then search for the deepest node.
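    For scale, here is a hedged sketch (written in C# to match the other code on this page) of the exhaustive option weighed above. Every type and helper in it - Unit, Hex, Ability, GameState, ReachableHexes, TargetsHit, ExpectedDamage - is a placeholder for whatever the game already has; only the shape of the loop is the point. With 5-10 abilities and a movement range of a few hexes this is usually a few hundred evaluations per turn, and it can run on a background thread so the Force Quit / Wait popup never appears.

        public class PlannedAction
        {
            public Hex Destination;
            public Ability Ability;
            public double Score;
        }

        public static PlannedAction ChooseAction(Unit self, GameState state)
        {
            PlannedAction best = null;

            foreach (Hex destination in state.ReachableHexes(self))
            {
                foreach (Ability ability in self.Abilities)
                {
                    double score = ScoreAction(self, state, destination, ability);
                    if (best == null || score > best.Score)
                        best = new PlannedAction { Destination = destination, Ability = ability, Score = score };
                }
            }

            // A null result means nothing useful is in range: fall back to "move closer".
            return best;
        }

        private static double ScoreAction(Unit self, GameState state, Hex from, Ability ability)
        {
            double total = 0;
            foreach (Unit target in state.TargetsHit(self, from, ability))
            {
                // Reward raw damage, with a big bonus for kills (priority 1: kill nearest unit).
                double damage = ability.ExpectedDamage(target);
                total += damage;
                if (damage >= target.CurrentHealth)
                    total += 50;
            }
            return total;
        }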

    Read the article

  • It's possible to fulfill the social necessity of a human being through a social game in 3D like IMVU?

    - by Totty
    (I'm not advertising nor promoting this game, as it's just an example of my experience and I would like to have your opinion about the matter if possible) I've been started researching "things" about games and I've decided to begin to play IMVU as a friend of mine said it's cool. At first it seemed just another 3d social game, not so cool.. But I've "tried to like" and after 1 day I can say I'm addicted to it! Yes; I will explain better: About the game: You can go in chat-rooms, move to positions. Some positions are like sitting in a sofa, floor, dancing alone or with a partner, kissing and more in this way. In the free version of the game there is no nudity. You can even listen to music, view youtube... The 3d graphics are quite low end, so it's not as real as the paid PC games of today. About my experience: At first I was going with my friend in chat-rooms, they seemed very nice. There were people talking about general stuff, quite like in a real life. Well, I begin to know some girls (yes, virtual girls commanded by a real girl, I hope!). Things happened: Some girls are just crazy, not like in real life, they make out in before even talking; Other girls you can speak a little bit, then they add you to their friend-list. Sometimes they invite to their virtual places. Some girls have really IMVU boyfriends only (but not in reality) and most of them don't even make up in the game, so it's really a level of commitment involved here! But from what my friend told they last for him, at least, about 3 days... Some others have real and IMVU boyfriends that are the same. Until now I haven't find a girl with different boyfriend in the IMVU and reality. Nor multiple boyfriends. There are rooms where the same people find each selves every day and speak about general stuff, relationships and so on... They are nice with you, they "feel" you and show careness. This is what amazes me, they treat you like a real human being and as being their friend in the real world. (of course it's not always like this) There are jealous girls too and competitiveness between females lol, I know you loled! This is kind of social. So today I closed my door in my room and I've played it all day long and guess what, I didn't feel a need to stay with a real person at all. Normally, If I would stay a full day alone I would get quite crazy... So the question is: It's just me that seemed to be able to fulfill my social needs or there is something more? thanks for your precious time for reading my full question,

    Read the article

  • Ubuntu 10.10 not recognizing external hard drive

    - by sr71
    Installed Ubuntu 10.10 on a hard drive by itself (no Windows). All updates are up to date. Everything works fine except when I plug in a 320 gig Toshiba external hard drive... it is not recognized. When I plug in an 8 gig flash drive it is recognized no problem.

    What do I mean by "not recognized"? I mean: how do you know it was not recognized? You can check the command "cat /proc/partitions" in a terminal with and without the hdd attached. If you see some difference, it's ok. If the difference is something like "sda1" (sd+letter+number) then you have a partition on it; maybe it can't be handled in Ubuntu (no filesystem on it or so). If there is only "sda" in the difference for example (sd+letter, but no number after it) then the drive itself is detected, just no partition is created on it. Also you can check out the messages of the kernel, with the "dmesg" command in the terminal. If there is no disk/partition/anything in /proc/partitions, there can be a USB level problem; you can issue the command "lsusb" as it was suggested before my answer. It's really a matter of what you mean by "not recognized".

    @Marco Ceppi @Oli @Pitto By "not recognized" I meant that when I plug in an 8 gig flash drive an icon immediately appears on the desktop indicating that there is a flash drive plugged in and I can click on it and view the files on the flash drive. It also shows up in the "Nautilus" default file manager. When I plug in the 320 gig Toshiba external hard drive, I get no indication on the desktop or the file manager (thus "not recognized"). When I run the command "cat/proc/partitions/" I get an error message Could not open location 'file:///home/bob/cat/proc/partitions' with or without the external hard drive installed. With the "dmeg" command I get about 10 pages of info with no mention of disk/partition/anything in /proc/partitions. When I run the "lsub" command I get the following:

        bob@bob-desktop:~$ lsusb
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 003: ID 04b3:3003 IBM Corp. Rapid Access III Keyboard
        Bus 003 Device 002: ID 04b3:3004 IBM Corp. Media Access Pro Keyboard
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        bob@bob-desktop:~$

    Read the article

  • Which programming language to get into?

    - by user602479
    I'm ending my third term in a few weeks so I have some spare time coming up. I'd like to spend it seriously digging into programming. My problem: I'm not sure which language to begin with. Just to be clear, I don't want to start a language-y-compared-to-language-z discussion. There are a some other issues that play a major role. In my 5th term I'm going to be participating in a major practical course which will include either Java or C programming. It will take a lot of time and energy, as I found out while talking to a few students who passed the final exams (only 15% pass on their first try). Which practical course I will take is randomly decided. My skills so far are the absolute basics of Java and C programming. I know the different data types and how to handle them, objects, pointers, thread programming, etc. All of that is on a very low level, though. My question now is, what language should I start seriously practicing? Java: I did my first GUIs with this language. I'm familiar with Eclipse but I need a project to work on (which I don't have) to really keep me pushing. Besides that, I don't think it would help me if I have to do C in a year. C: As with Java, I can't think of a personal project to keep me working and keep me interested in programming. If I get assigned to Java in a year, this wouldn't give me any advantages either, would it? (No objects, etc.) Objective-C: I recently came up with this idea. I have a Mac; I'm not really familiar with Xcode but I have one or two personal projects I'd like to work on. Further, I would be working with objects (as in Java) and C language constructs which would both be great for this practical course in a year. What do you think I should begin with? Should I just stick to Java and hope for the best, force myself through C or start (nearly) completely from the beginning with Objective C? Maybe you folks could give me some good advice that would stop me from switching from one language to the next?

    Read the article

  • Best Frameworks/libraries/engines for 2D multiplayer C# Webbased RPG

    - by Thirlan
    Title is a mouthful but important because I'm looking to meet a specific criteria and it's complex enough that I need a lot of help in finding what I'm looking for. I really want people's suggestions because I trust it a lot more than anything else, so I just need to clearly define what it is I want heh : P Game is a 2D RPG. Think of Secret of Mana. Game is online multiplayer, but not MMO sized. Game must be webbased! I'm looking to the future and want to hit as many platforms as possible. I'm leaning to Webgl because of this, but still looking around. Since the users are seeing the game through the webbrowser the front-end should be mainly responsible for drawing, taking input and some basic checking such as preliminary collision detection. This is important because it means the game engine is NOT on the client's machine. The server should be responsible for the game engine and all the calculations. This means the server is doing all the work and the client is mostly a dumb terminal. Server language is c# I'm looking for fast project execution so I want to use as many pre-existing tools as possible. This would make sense because I'm making a game here, not an engine. I'm not creating some new revolutionary graphics or pushing the physics engines to the next level. Preference for commercially supported tools. For game mechanics reasons and for reasons 4 and 5, don't think I can use existing 2D rpg engines. I've seen them out there and I fear that if I try and use them they will have too many restrictions, but will be happy to hear out suggestions. So all this means I need a game engine on the server, or maybe just a physics engine, and then I need another engine/library to draw everything that the server is sending to the client on the webbrowser. Maybe this is how 50% of games work on the web and there are plenty of frameworks that support this! I wouldn't actually don't know : ( but my gut is telling me that most webgames are single player and 90% of the game is running on the client. So... any suggestions?

    Read the article

  • How do we provide valid time estimates during Sprint Planning without doing "too much" design?

    - by Michael Edenfield
    My team is getting up to speed with Scrum, but most of us are more familiar with non-agile or "pseudo-"agile methodologies. The part that is the biggest hurdle for us is running an efficient Sprint Planning meeting where we break our backlog items into tasks, and estimate hours. (I'm using the terminology from the VS2010 Scrum Template; apologies if I use the wrong word somewhere.) When we try to figure out how long a task is going to take, we often fall into the trap of designing the feature at the code level -- table layout, interfaces, etc -- in order to figure out how long that's going to take. I'm pretty sure this is not the appropriate place to be doing that kind of design. We should be scheduling tasks for these design meetings during the sprint. However, we are having trouble figuring out how else to come up with meaningful estimates for the tasks. Are there any practical habits/techniques/etc. for making a judgement call about how long a feature is going to take, without knowing how you plan to implement it? If our time estimates are going to change significantly once the design has been completed, how can we properly budget our Sprint backlog ahead of time? EDIT: Just to clarify, since some of the comments/answers are very valid but I think addressing the wrong question. We know that what we're doing is not right, and that we should be building time into the sprint for this design. Conceptually all of the developers understand that. We also also bringing in a team member with Scrum experience to keep us on track if we start going off into the weeds. The problem is that, without going through this design process, we are finding it difficult to provide concrete time estimates for anything. We are constantly saying things like "well if we design it this way it might take 8 hours but if we end up having to do this other way instead that will take about 32 but it might not be as bad once we start trying to write it...". I also assume that this process will get better once we have some historical velocity to work from, but many of the technologies and architectural patterns we are using are new to us. But if potentially-wildly-wrong estimates are just a natural part of adapting this process then we will just need to recondition ourselves to accept that :)

    Read the article

  • Move Data into the Grid for Scalable, Predictable Response Times

    - by JuergenKress
    CloudTran is pleased to introduce the availability of the CloudTran Transaction and Persistence Manager for creating scalable, reliable data services on the Oracle Coherence In-Memory Data Grid (IMDG). Use of IMDG architectures has been key to handling today’s web-scale loads because it eliminates database latency by storing important and frequently access data in memory instead of on disk. The CloudTran product lets developers easily use an IMDG for full ACID-compliant transactions without having to be concerned about the location or spread of data. The system has its own implementation of fast, scalable distributed transactions that does NOT depend on XA protocols but still guarantees all ACID properties. Plus, CloudTran asynchronously replicates data going into the IMDG to back-end datastores and back-up data centers, again ensuring ACID properties. CloudTran can be accessed through Java Persistence API (JPA via TopLink Grid) and now, through a new Low-Level API, or LLAPI. This is ideal for use in SOA applications that need data reliability, high availability, performance, and scalability. Still in limited beta release, the LLAPI gives developers the ability to use standard put/remove logic available in Coherence and then wrap logic with simple Spring annotations or XML+AspectJ to start transactions. An important feature of LLAPI is the ability to join transactions. This is a common outcome for SOA applications that need to reduce network traffic by aggregating data into single cache entries and then doing SOA service processing in the node holding the data. This results in the need to orchestrate transaction processing across multiple service calls. CloudTran has the capability to handle these “multi-client” transactions at speed with no loss in ACID properties. Developing software around an IMDG like Oracle Coherence is an important choice for today’s web-scale applications and services. But this introduces new architectural considerations to maintain scalability in light of increased network loads and data movement. Without using CloudTran, developers are faced with an incredibly difficult task to ensure data reliability, availability, performance, and scalability when working with an IMDG. Working with highly distributed data that is entirely volatile while stored in memory presents numerous edge cases where failures can result in data loss. The CloudTran product takes care of all of this, leaving developers with the confidence and peace of mind that all data is processed correctly. For those interested in evaluating the CloudTran product and IMDGs, take a look at this link for more information: http://www.CloudTran.com/downloadAPI.php, or, send your questions to [email protected]. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. BlogTwitterLinkedInMixForumWiki Technorati Tags: Coherence,cloudtran,cache,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Does it make the game more fun when the user is forced to progress thru the levels sequentially rather than letting them pick and play?

    - by BeachRunnerJoe
    Hello. For the first time in my game, I'm stuck with a real design dilemma. I guess that's a good thing ;) I'm building a word puzzle game that has five levels, each with 30 puzzles. Currently, the user has to solve one puzzle at a time before moving to the next. However, I'm finding the user occasionally gets stuck on a puzzle, at which point they can no longer play until they solve it. This is obviously bad because many people will just quit playing the game and delete the app since they get frustrated and can't play any other puzzles until the current puzzle is solved. The only elegant solution I can find to helping the player get unstuck is changing the design of the game to allow the users to pick any puzzle to play at any time. This way, if they get stuck, they can come back to it later and at least they have other puzzles to play in the meantime. It's my opinion, however, that this new flow design doesn't make the game as fun as the original flow design where the player has to complete a puzzle before moving to the next. To me, it's like anything else: when you only have one of something, it's more enjoyable, but when you have 30 of something, it's far less enjoyable. In fact, when I present the user with 30 puzzles to choose from that they need to solve before unlocking the next level, it almost seems as though I'm making them feel like it's work they have to do. I even had a tester voluntarily tell me that being forced to complete a puzzle before moving to the next is more motivating. My questions are... Do you agree/disagree? Do you have any suggestions for how I can help the player get unstuck? Thanks so much in advance for your thoughts! EDIT: I should mention that I've already considered a few other solutions to helping the user get unstuck, but none of them seem like good ideas. They are... Add more hints: Currently, the user gets two hints per puzzle. If I increase the hint count, it only makes the game easier and still leaves the possibility of the user getting stuck. Add a "Show Solution" button: This seems like a bad idea because, in my opinion, it takes the fun out of the game for many people who would probably otherwise solve the puzzle if they didn't have the quick option to see the solution.
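
    One middle ground the question does not mention, offered here purely as an illustrative sketch, is a rolling unlock window: the player may have at most a handful of unsolved puzzles open at once, so a single stuck puzzle never blocks all progress, yet the sense of forward progression is preserved. A minimal, hypothetical sketch of the bookkeeping:

        import java.util.Set;
        import java.util.TreeSet;

        // Illustrative sketch of a "rolling unlock window": the player can skip ahead,
        // but only up to maxOpen puzzles past their earliest unsolved puzzle.
        public class PuzzleProgress {
            private final int totalPuzzles;
            private final int maxOpen; // e.g. 3: at most 3 puzzles beyond the earliest unsolved one
            private final Set<Integer> solved = new TreeSet<>();

            public PuzzleProgress(int totalPuzzles, int maxOpen) {
                this.totalPuzzles = totalPuzzles;
                this.maxOpen = maxOpen;
            }

            public void markSolved(int index) {
                solved.add(index);
            }

            // A puzzle is playable if it is unsolved and lies within the unlock window.
            public boolean isPlayable(int index) {
                if (index < 0 || index >= totalPuzzles || solved.contains(index)) {
                    return false;
                }
                int earliestUnsolved = 0;
                while (solved.contains(earliestUnsolved)) {
                    earliestUnsolved++;
                }
                return index < earliestUnsolved + maxOpen;
            }
        }

    With maxOpen set to 1 this reduces to the current strictly sequential flow, and with maxOpen equal to the puzzle count it becomes the open pick-and-play flow, so the same bookkeeping lets both designs (and anything in between) be play-tested.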

    Read the article

  • Critical Patch Updates During EBS 11i Exception to Sustaining Support Period

    - by Elke Phelps (Oracle Development)
    As previously blogged in the EBS 11i and 12.1 Support Timeline Changes entry, two important changes to the Oracle Lifetime Support policies were announced at Oracle OpenWorld 2012 - San Francisco.  These changes affect E-Business Suite Releases 11i and 12.1. Critical Patch Updates for EBS 11i during the Exception to Sustaining Support Period You may be wondering about the availability of Critical Patch Updates (CPU) for EBS 11i during the Exception to Sustaining Support period.  The following details the E-Business Suite Critical Patch Update support policy for EBS 11i during the Exception to Sustaining Support period: Oracle will continue to provide CPUs containing critical security fixes for E-Business Suite 11i.  CPUs will be packaged and released as cumulative patches for both ATG RUP 6 and ATG RUP 7. As always, we try to minimize the number of patches and dependencies required for uptake of a CPU; however, there have been quite a few changes to the 11i baseline since its release.  For dependency reasons the 11i CPUs may require a higher number of files in order to bring them up to a consistent, stable, and well-tested level. EBS 11i customers will continue to receive CPUs up to and including the October 2014 CPU. Where can I learn more? There are two interlocking policies that affect the E-Business Suite:  Oracle's Lifetime Support policies for each EBS release (timelines which were updated by this announcement), and the Error Correction Support policies (which state the minimum baselines for new patches). For more information about how these policies interact, see: Understanding Support Windows for E-Business Suite Releases What about E-Business Suite technology stack components? Things get more complicated when one considers individual techstack components such as Oracle Forms or the Oracle Database.  To learn more about the interlocking EBS+techstack component support windows, see these two articles: On Apps Tier Patching and Support: A Primer for E-Business Suite Users On Database Patching and Support: A Primer for E-Business Suite Users Where can I learn more about Critical Patch Updates? The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents.  Related Articles EBS 11i and 12.1 Support Timeline Changes Frequently Asked Questions about Latest EBS Support Changes Extended Support Fees Waived for E-Business Suite 11i and 12.0

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase it is time to set up and begin the production of content.  For this to be done correctly it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process - as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details. 1. Determine Access Why - Even if content is not something that needs to be restricted due to security reasons, it is helpful to apply access rights so that the content ends up being visible only to users that it relates to. This will greatly improve user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project. How - Make use of native content features that allow propagation of security and meta data from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security, as well as meta data policies, for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project. Impact - Users can find information with less effort, as they will only be exposed to what they need for their work and can leverage advanced search features to take advantage of meta data assigned to content. The combination of default security and meta data will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next 2 posts. 2. Assign Workflow (optional depending on nature of content) Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements. How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on meta data or explicitly assigned to content with a Basic workflow. Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached. By using inheritance from parent folders within the content management system, content can automatically be given the right security, meta data and workflow information for a particular project's content. This relieves the burden of doing this for every piece of content from management teams and content contributors. We will cover more about the management phase within the content lifecycle in our next installment.
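
    To make the inheritance idea concrete, here is a generic sketch of the concept (deliberately not Oracle UCM's real object model or API): defaults defined once on a project folder are stamped onto every item created beneath it, so contributors never set security or meta data by hand.

        import java.util.HashMap;
        import java.util.Map;

        // Generic illustration of default security/meta data inheritance from a parent folder.
        // Conceptual only; this does not use Oracle UCM's actual object model.
        class ProjectFolder {
            final String securityGroup;                // e.g. "ProjectX-Team"
            final Map<String, String> defaultMetadata; // e.g. department, project code

            ProjectFolder(String securityGroup, Map<String, String> defaultMetadata) {
                this.securityGroup = securityGroup;
                this.defaultMetadata = defaultMetadata;
            }

            // New content automatically picks up the folder's security and meta data defaults.
            ContentItem newItem(String title) {
                return new ContentItem(title, securityGroup, new HashMap<>(defaultMetadata));
            }
        }

        class ContentItem {
            final String title;
            final String securityGroup;
            final Map<String, String> metadata;

            ContentItem(String title, String securityGroup, Map<String, String> metadata) {
                this.title = title;
                this.securityGroup = securityGroup;
                this.metadata = metadata;
            }
        }

    Searches, and the reports used in the later Manage and Retire phases, can then rely on those fields being present and consistent.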

    Read the article

  • How to enable and connect to RDP on a Windows Azure Web Role Instance?

    - by Enrique Lima
    We all know there have been some updates to Windows Azure, and one of the biggest, I would say, is the capability to remote into the “OS level” of the image running a role.  And I am not talking about the VM Role; I am talking about a Web Role, for example. As developers we use Visual Studio, and when we are getting ready to deploy a project, we have the option of enabling this. Here is how: 1. We publish our project. 2. On the Deployment dialog, provide all the details for your account, and before clicking OK, click on Configure Remote Desktop connections. 3. Enable connections and the rest of the configuration.  Now, here is where there is an extra set of steps.  The first thing to know: the certificate used here is different from the other certs you have in place.  I created a new one, then went into certmgr.msc, then to Personal, and selected the cert I just created.  Did a right-click, then All Tasks > Export.  Because what is needed is a pfx package, make sure when exporting you select to export the private key. 4. Click OK on the Remote Desktop Configuration screen. Now, before you click OK on the Deployment dialog, you will need to visit the Azure Portal and perform the following: Go to your hosted services. Then, with the service available, select the Certificates folder location. Then select Add Certificate from the toolbar (more like the Azure Portal Ribbon). Provide the details to upload the recently created pfx file. That will create the Certificate. Click OK on the deployment dialog; this kicks off the deployment process. 5. Now we need to go to the Windows Azure Portal.  Here we will select the deployed Web Role and Configure RDP. 6. Time to test.  Click on the Instance (not the role); this will make the Remote Access Connect button available.  A connection file will also be downloaded to start the session. 7. You will then be prompted for the credentials you configured. 8. Validate connectivity … 9. Open IIS Manager … From here on, this is a way to manage and work with your Instance.

    Read the article
