Search Results

Search found 18677 results on 748 pages for 'current'.


  • Sprinkle Some Magik on that Java Virtual Machine

    - by Jim Connors
    GE Energy, through its Smallworld subsidiary, has been providing geospatial software solutions to the utility and telco markets for over 20 years. One of the fundamental building blocks of their technology is a dynamically-typed, object-oriented programming language called Magik. Like Java, Magik source code is compiled down to bytecodes that run on a virtual machine -- in this case the Magik Virtual Machine.

    Throughout the years, GE has invested considerable engineering talent in the support and maintenance of this virtual machine. At the same time, vast energy and resources have been invested in the Java Virtual Machine. The question for GE has been whether to continue to make that investment on its own or to leverage the massive effort provided by the Java community. Utilizing the Java Virtual Machine instead of maintaining its own virtual machine would give GE more opportunity to focus on application solutions.

    At last count, there are dozens, perhaps hundreds, of examples of programming languages that have been hosted atop the Java Virtual Machine. Prior to the release of Java 7, that effort, although certainly possible, was generally less than optimal for languages like Magik because of their dynamic nature. Java, as a statically typed language, had little use for this capability. In the quest to be a more universal virtual machine, Java 7, via JSR-292, introduced a new bytecode called invokedynamic. In short, invokedynamic affords the more flexible method call mechanism needed by dynamic languages like Magik.

    With this new capability, GE Energy has succeeded in hosting their Magik environment on top of the Java Virtual Machine. So you may ask, why would GE wish to do such a thing? The benefits are many:

    • Competitors to GE Energy claimed that the Magik environment was proprietary. By utilizing the Java Virtual Machine, that argument gets put to bed. JVM development is done in open source, where contributions are made world-wide by all types of organizations and individuals.
    • The unprecedented wealth of class libraries and applications written for the Java platform is now opened up to the Magik/JVM platform as first-class citizens. In addition, the Magik/JVM solution vastly increases the developer pool to include the 9 million Java developers -- the largest developer community on the planet.
    • Applications running on the JVM showed substantial performance gains, in some cases as much as a 5x speed-up over the original Magik platform.
    • Legacy Magik applications can still run on the original platform. They can be seamlessly migrated to run on the JVM by simply recompiling the source code.
    • GE can now leverage the huge Java community. Hundreds if not thousands of world-class developers continually improve, poke, prod and scrutinize all aspects of the Java platform. As enhancements are made, GE automatically gains access to them.
    • As Magik has little in the way of support for multi-threading, GE will benefit from current and future Java offerings (e.g. lambda expressions) that aim to further facilitate multi-core/multi-threaded application development.
    • As the JVM is available for many more platforms, it broadens the reach of Magik, including the potential to run on a class of devices never envisioned just a few short years ago. For example, Java SE compatible runtime environments are available for popular embedded ARM/Intel/PowerPC configurations that could theoretically host this software too.
    As compared to other JVM language projects, the Magik integration differs in that it represents a serious commercial entity betting a sizable part of its business on the success of this effort. Expect to see announcements not only from General Electric, but also from other organizations as they realize the benefits of utilizing the Java Virtual Machine.
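    To make the invokedynamic discussion concrete, here is a minimal sketch -- not GE's actual implementation -- of the JSR-292 plumbing that invokedynamic call sites are built on. A dynamic-language runtime like Magik's resolves a method handle at run time rather than binding the call statically:

        import java.lang.invoke.MethodHandle;
        import java.lang.invoke.MethodHandles;
        import java.lang.invoke.MethodType;

        public class DynamicCallSketch {
            public static void main(String[] args) throws Throwable {
                // Look up String.length() at run time, the way a dynamic
                // language's call-site linker would on first invocation.
                MethodHandles.Lookup lookup = MethodHandles.lookup();
                MethodType type = MethodType.methodType(int.class);
                MethodHandle length = lookup.findVirtual(String.class, "length", type);

                // Once linked, the JVM can optimize this call like any other.
                int result = (int) length.invokeExact("Magik");
                System.out.println(result); // prints 5
            }
        }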

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
    Deadlocks can be a really tricky thing to track down the root cause of. There are lots of articles on the subject of tracking down deadlocks, but seldom do I find that the cause in a production system is that straightforward. That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B holds a resource that process A needs. There may be a longer chain of processes involved, but that is the basic premise.

    Here is one such (much simplified) scenario whose cause was at first non-obvious. The system has two tables, Product and Stock. The Product table holds the description and price of a product whilst Stock records the current stock level.

        USE tempdb
        GO
        CREATE TABLE Product
        (
            ProductID INTEGER IDENTITY PRIMARY KEY,
            ProductName VARCHAR(255) NOT NULL,
            Price MONEY NOT NULL
        )
        GO
        CREATE TABLE Stock
        (
            ProductId INTEGER PRIMARY KEY,
            StockLevel INTEGER NOT NULL
        )
        GO
        INSERT INTO Product
        SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)),
               ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER)) % 100
          FROM sys.columns a CROSS JOIN sys.columns b
        GO
        INSERT INTO Stock
        SELECT ProductID, ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER)) % 100
          FROM Product

    There is a single stored procedure, GetStock:

        CREATE PROCEDURE GetStock
        AS
        SELECT Product.ProductID, Product.ProductName
          FROM dbo.Product
          JOIN dbo.Stock ON Stock.ProductId = Product.ProductID
         WHERE Stock.StockLevel <> 0

    Analysis of the system showed that this procedure was causing a performance overhead, and as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead.

        CREATE VIEW vwActiveStock
        WITH SCHEMABINDING
        AS
        SELECT Product.ProductID, Product.ProductName
          FROM dbo.Product
          JOIN dbo.Stock ON Stock.ProductId = Product.ProductID
         WHERE Stock.StockLevel <> 0
        GO
        CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock ON vwActiveStock(ProductID)

    This worked perfectly: performance was improved, the team name was cheered to the rafters, and beers all round. Then, after a while, something else happened... The system updating the data changed. The update pattern of both the Stock update and the Product update used to be:

        BEGIN TRAN
        UPDATE ...
        COMMIT
        BEGIN TRAN
        UPDATE ...
        COMMIT
        BEGIN TRAN
        UPDATE ...
        COMMIT

    It changed to:

        BEGIN TRAN
        UPDATE ...
        UPDATE ...
        UPDATE ...
        COMMIT

    Nothing that would raise an eyebrow in even the closest of code reviews. But after this change we saw deadlocks occurring. You can reproduce this by opening two sessions. In session 1:

        BEGIN TRANSACTION
        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998

    Then in session 2:

        BEGIN TRANSACTION
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998

    Hop back to session 1 and:

        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999

    Looking at the deadlock graphs we could see the contention was between two processes, one updating Stock and the other updating Product, but we knew that all the processes do to the tables is update them. Period. There are separate processes that handle the update of Stock and Product, and never the twain shall meet: no reason why one should be requiring data from the other. Then it struck us: AH, the indexed view. Naturally, when you make an update to any table involved in an indexed view, the view has to be updated. When this happens, the data in all the tables involved has to be read, and that explains our deadlocks: the data from Stock is read when you update Product, and vice versa.
    The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which data was updated. Luckily it was a relatively simple fix to order the updates, and the deadlocks went away. Note that there is still a *slight* risk of a deadlock occurring, if both a stock update and a product update occur at *exactly* the same time.
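    As a sketch of what "ordering the updates" means in practice (illustrative only, not the author's production code): every transaction that touches these tables acquires its row locks in the same key order, so two sessions can never hold locks in opposite orders.

        -- Both sessions follow the same rule: update rows in ascending ProductID order.
        BEGIN TRANSACTION
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998  -- lower key first
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999  -- then the higher key
        COMMIT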

    Read the article

  • Using rel=next and rel=prev with multiple sets of paginated content on the same page

    - by jakejgordon
    We are running into issues trying to figure out how to implement rel="next" and rel="prev" -- coupled with rel="canonical" -- with multiple sets of paginated content on the same page, with pages in multiple cultures. In other words, how do we implement these when we have a pager for both Product Reviews and Questions and Answers (aka "Q&A") on the same page, with duplicate content across culture-specific URLs (e.g. /us/en/my-product vs. /ca/en/my-product)?

    Our current implementation will actually do a full postback when you click Page 2, and will add something to the query string (e.g. website.com/ca/en/my-product?reviewpage=2 or website.com/ca/en/my-product?questionpage=2). If we only had one set of paginated content then the implementation would certainly be more straightforward. Adding a second set of paginated content (i.e. Q&A) complicates things.

    Let's assume that we want the United States English page to be the canonical target (i.e. /us/en/my-product) based on culture. If you go to the /ca/en/my-product page you'll have a rel="canonical" href="/us/en/my-product". So far so good. Let's also assume that we are not implementing a page that lists ALL Product Reviews and Q&A. That would likely solve a number of our problems by pointing rel="canonical" at it, but it is not an option for reasons that are out of scope for this discussion.

    Now if you click on page 2 of Product Reviews, it will reload the page with /ca/en/my-product?reviewpage=2 as the URL. Given this scenario, here are my questions:

    1. On page 2 of the my-product page on the Canadian site, should there be a rel="canonical" to /us/en/my-product?reviewpage=2 (assuming the content is identical in the United States and Canada)?
    2. Should the rel="prev" go to /ca/en/my-product?reviewpage=1 or should it go to /ca/en/my-product? The query-string version would really only be accessible when using the pager and shows the exact same content as the base page. The following two questions are closely related to this one.
    3. Should /ca/en/my-product?reviewpage=1 have a rel="canonical" directly to /us/en/my-product (the United States page with nothing in the query string), since the content is identical?
    4. Given that Q&A content is also paginated, should there be a rel="next" on the base page without a query string? In other words, should the /ca/en/my-product page have a rel="next" to /ca/en/my-product?reviewpage=2 AND a rel="next" to /ca/en/my-product?questionpage=2? So far as I can tell it doesn't make sense to have multiple rel="next" implementations on the same page.

    I suspect that the pages with query string values should have rel="next" and rel="prev" that only point to other pages with query strings and not to the base page. The ?reviewpage=1 and ?questionpage=1 pages would then just have a rel="canonical" to /us/en/my-product. Thoughts? I know this is a tough one -- that's why I brought it to this community. Thanks so much for your help in advance!
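    For illustration only -- a sketch of the suspicion above, not a confirmed answer -- the head of /ca/en/my-product?reviewpage=2 might then carry link elements like these, keeping the next/prev chain entirely within the review query-string series:

        <!-- Hypothetical markup for /ca/en/my-product?reviewpage=2 -->
        <link rel="canonical" href="http://website.com/us/en/my-product?reviewpage=2" />
        <link rel="prev" href="http://website.com/ca/en/my-product?reviewpage=1" />
        <link rel="next" href="http://website.com/ca/en/my-product?reviewpage=3" />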

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), then does some processing, and then stores the results again in the database. The complete aforesaid process can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks running, and "Done" for tasks that are completed.

    Main issue: I need an algorithm to track how much of the controller method has completed execution. For instance, this PHP script tracks the progress of an array being counted. There the current state and the state after total execution are both known, so it is possible to track progress. But I am not able to devise anything analogous to it in my case. Maybe what I am trying to achieve is programmatically not possible. If it's not possible, then suggest a workaround or a completely new approach. If some details are missing, you can ask about them. Sorry for my ignorance, this is my first post here. I welcome you to point out my mistakes.

    EDIT:

    Database outline: The URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in link_master, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Sub-links are then extracted from the domain links and stored in a table called sub_link_master. Again the keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ. Only the cardinality of the link_result table can be known in advance, which will be equal to the number of keyword(s) multiplied by the number of URL(s). I insert multiple records at a time using this resource.

    Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page. There is a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords along with generating results and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records: the more records, the more time it takes to execute. Consider this scenario:

    Number of domain links = 1
    Number of keywords = 3
    Number of domain link results generated = 3 (3 x 1 as described above)
    Number of sub-links generated = 41
    Number of sub-link results = 117 (41 x 3 = 123, but some links are not valid or searchable)
    Approximate time taken for the above process to complete = 55 seconds

    The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; while results are being stored, the task is "In Progress".
I am not clear on how I can track this progress.
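    One workaround (a sketch under assumed table and column names, plain mysqli rather than CodeIgniter's query builder): persist a per-task counter that the background controller bumps each time it stores a result, and compute a percentage against the expected total once that total becomes known (number of valid links multiplied by number of keywords).

        <?php
        // Hypothetical schema:
        // CREATE TABLE task_progress (task_id INT PRIMARY KEY,
        //     expected INT NULL, completed INT NOT NULL DEFAULT 0,
        //     status VARCHAR(20) NOT NULL DEFAULT 'Pending');

        // Called by the background controller once crawling has revealed
        // how many results this task will produce.
        function set_expected(mysqli $db, int $taskId, int $expected): void {
            $stmt = $db->prepare(
                "UPDATE task_progress SET expected = ?, status = 'In Progress'
                  WHERE task_id = ?");
            $stmt->bind_param('ii', $expected, $taskId);
            $stmt->execute();
        }

        // Called right after each result row is inserted.
        function bump_progress(mysqli $db, int $taskId): void {
            $stmt = $db->prepare(
                "UPDATE task_progress SET completed = completed + 1
                  WHERE task_id = ?");
            $stmt->bind_param('i', $taskId);
            $stmt->execute();
        }

        // Polled by the status page (e.g. via AJAX).
        function task_percent(mysqli $db, int $taskId): ?float {
            $stmt = $db->prepare(
                "SELECT expected, completed FROM task_progress WHERE task_id = ?");
            $stmt->bind_param('i', $taskId);
            $stmt->execute();
            $row = $stmt->get_result()->fetch_assoc();
            if (!$row || empty($row['expected'])) {
                return null; // total not known yet -- show "Pending"
            }
            return min(100.0, 100.0 * $row['completed'] / $row['expected']);
        }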

    Read the article

  • Is It Worth It To Learn Experimental Languages

    - by Xander Lamkins
    I'm a young programmer who desires to work in the field someday as a programmer. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know that it is valuable to extend what I know -- to learn languages that make you think differently). I took a look online to see what languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought, maybe I should push for C.

    C and C++ are nice but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well and is slow because it's run by the JVM and not compiled to native code.

    I've been a Windows developer for quite a while. I recently started using Java -- but only because it was more versatile and spreadable to other places. The problem is that it doesn't look like a very usable language for these reasons:

    • Its most-used purpose is web applications and cellphone apps (specifically Android).
    • As far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE with the language the IDE is for -- it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and bipolar as far as computer spec support.
    • Other than that it's used for servers, but heck -- I don't only want to make/configure servers.

    The .NET languages are nice, however:

    • People laugh if I even mention VB.NET or C# in a serious conversation.
    • It isn't cross-platform unless you use Mono (which is still in development and has some improvements to be made).
    • It lacks low-level stuff because, like Java with the JVM, it is run/managed by the CLR.

    My first thought was learning something like C and then using it to springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute.

    What I've looked into: Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of being completely compiled. D also looks nice. It seems like a very usable language, and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice and focused on other things, such as Opa for web development and Go by GOOGLE.

    My question: Is it worth learning these "experimental" languages? I've read other questions saying that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be old) programming languages. I know that many people see this as important, but would any of you ever actually consider (assuming you didn't already know it) FORTRAN? My goal is to stay current to make sure I'm successful in the future.

    Disclaimer: Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING!
I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have caused some incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments.

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes, as the objects in question are tanks) within the explosion radius. So this boils down to circle-to-rectangle collision and how far the closest rectangle edge is from the circle's center. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the distance calculated.

    Note: all tank objects have an anchor point of (0,0), so position is according to the bottom-left corner of the bounding box. The explosion point is the center point of the circular explosion.

        TankObject * tank = (TankObject*) gameSprite;
        float distanceFromExplosionCenter;
        // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor
        if (explosionPoint.x < tank.position.x) {
            // Explosion to WEST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position);
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint,
                    ccp(tank.position.x, tank.position.y + tank.contentSize.height));
            } else {
                // Explosion center's y is between bottom and top edge of rect
                distanceFromExplosionCenter = tank.position.x - explosionPoint.x;
            } // end if
        } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) {
            // Explosion to EAST of tank
            if (explosionPoint.y <= tank.position.y) {
                // Explosion SOUTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint,
                    ccp(tank.position.x + tank.contentSize.width, tank.position.y));
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint,
                    ccp(tank.position.x + tank.contentSize.width,
                        tank.position.y + tank.contentSize.height));
            } else {
                // Explosion center's y is between bottom and top edge of rect
                distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width);
            } // end if
        } else {
            // Explosion is either north or south and between left and right edge of rect
            if (explosionPoint.y < tank.position.y) {
                // Explosion is South
                distanceFromExplosionCenter = tank.position.y - explosionPoint.y;
            } else {
                // Explosion is North
                distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height);
            } // end if
        } // end outer if

        if (distanceFromExplosionCenter < explosionRadius) {
            /* Collision :: Smaller the distance, larger the damage */
            int damageToApply;
            if (self.directHit) {
                damageToApply = self.explosionMaxDamage + self.directHitBonusDamage;
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> DIRECT HIT with total damage %d", damageToApply);
            } else {
                // TODO adjust this... turning out negative for some reason...
                damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage);
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> Non direct hit collision with tank");
                CCLOG(@"Damage to apply is %d", damageToApply);
            } // end if
        } else {
            CCLOG(@"Explosion-> Explosion distance is larger than explosion radius");
        } // end if

    Questions:
    1) Can this circle-to-rect collision algorithm be done better? Do I have too many checks?
    2) How do I calculate the percentage-based damage?
    My current method occasionally generates negative numbers and I don't understand why (maybe I need more sleep!). But in my if statement I check that distance < explosion radius, so when control gets there, distance/radius must be < 1, right? So 1 minus that intermediate calculation should not be negative. I appreciate any help/advice!
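    An editorial aside on question 2 (my reading, not part of the original post): the negative values are consistent with C operator precedence. Multiplication binds tighter than subtraction, so the marked line computes 1 - ((distance/radius) * maxDamage) rather than scaling the max damage by proximity. A sketch of the presumably intended form:

        // (1 - d/r) is a 0..1 proximity factor; multiply *that* by max damage.
        damageToApply = (int)((1.0f - (distanceFromExplosionCenter / explosionRadius))
                              * explosionMaxDamage);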

    Read the article

  • Wisdom Lies in Collaborative Power and Intelligence

    - by kellsey.ruppel
    By Alakh Verma, Director, Platform Technology Solutions

    In my recent blog posts, I shared insights on Predictive Analytics (Will Predictive Analytics at 'Speed of Thoughts' Help Businesses?), Real Time Decisions (How critical are Real Time decisions in business today?) and their significance in our lives in general and in businesses today. In the current business paradigm shift -- with evolutionary social business -- it is paramount that businesses look for wisdom in collaborative power and intelligence and equip their employees with the tools to engage with one another. There is an old saying that five sticks tied together are stronger and harder to break than an individual stick. We have recently witnessed the power of ordinary people uniting and fighting collaboratively, using Facebook and Twitter, to topple dictators in Tunisia, Egypt and Libya -- and they are threatening absolute rule in Syria. And in India, one man's (Anna Hazare) campaign against corruption went viral, bringing thousands to the streets in support.

    As anyone who has worked in a sizeable organization knows, there is no guarantee that the organization as a whole will perform efficiently and achieve its goals, even if each employee is individually efficient and every team has a high level of productivity. To achieve enterprise productivity, it is necessary not only for individuals and groups to "do things right" by working productively but also for the enterprise as a whole to "do the right things": form the right teams, make the right decisions, allocate resources correctly, and effectively coordinate activities across the entire organization. Most organizations fall short of the optimal level of enterprise productivity because of one or more of these reasons, all at a great cost to the business: they are disconnected from themselves, with various parts of the organization unintentionally working at cross-purposes; information that exists is not getting shared or reused; human talent is not being applied where it is most needed; and the same problems are being solved repeatedly by multiple groups.

    Intelligent collaboration through automated business processes has the ability to alter the course of any important business activity, with a potentially dramatic impact on the financial performance of the business. Whether it is a simple email exchange, a physical or virtual meeting, a task force, or a large-scale project, the activity is inherently collaborative. In fact, collaboration can be defined as the work that takes place among people when a business process is not pre-determining how the work should take place. Collaboration is many things: information sharing, brainstorming, problem solving, best practice negotiation, innovation, coordination of activity, alignment of purpose, and so forth. Collaboration is the "white space" between the business processes; it is the glue that holds an organization together, and the lubricant that allows the machinery to keep running. Real-time search and collaborative capabilities of the right people with the right content, supported by defined processes, will provide unparalleled wisdom in the organization in today's most competitive business environment. Interestingly, technologies such as Oracle WebCenter offer these capabilities in our Web-based business transactions and complement the overall collaborative intelligence and power to truly transform organizations into social businesses.
Looking to learn more about engaging your employees to collaborate together and providing a complete user experience for your customers? You won't want to miss our webcast today! Drive Online Engagement with Intuitive Portals and Websites

    Read the article

  • JDeveloper 11.1.2 : Command Link in Table Column Work Around

    - by Frank Nimphius
    Just figured out that in Oracle JDeveloper 11.1.2, clicking a command link in a table does not mark the table row as selected, as was the behavior in previous releases of Oracle JDeveloper. For the time being, the following work-around can be used to achieve the "old" behavior. To mark the table row as selected, you need to build and queue the table selection event in the code executed by the command link action listener. To queue a selection event, you need to know the rowKey of the row that the clicked command link is located in. To get this information, you add an f:attribute tag to the command link as shown below:

        <af:column sortProperty="#{bindings.DepartmentsView1.hints.DepartmentId.name}" sortable="false"
                   headerText="#{bindings.DepartmentsView1.hints.DepartmentId.label}" id="c1">
          <af:commandLink text="#{row.DepartmentId}" id="cl1" partialSubmit="true"
                          actionListener="#{BrowseBean.onCommandItemSelected}">
            <f:attribute name="rowKey" value="#{row.rowKey}"/>
          </af:commandLink>
          ...
        </af:column>

    The f:attribute tag references #{row.rowKey}, which in ADF translates to JUCtrlHierNodeBinding.getRowKey(). This information can be used in the command link action listener to compose the RowKeySet you need to queue the selected row. For simplicity, I created a table "binding" reference to the managed bean that executes the command link action. The managed bean code that is referenced from the af:commandLink actionListener property is shown next:

        public void onCommandItemSelected(ActionEvent actionEvent) {
            // get access to the clicked command link
            RichCommandLink comp = (RichCommandLink) actionEvent.getComponent();
            // read the added f:attribute value
            Key rowKey = (Key) comp.getAttributes().get("rowKey");

            // get the current selected RowKeySet from the table
            RowKeySet oldSelection = table.getSelectedRowKeys();
            // build an empty RowKeySet for the new selection
            RowKeySetImpl newSelection = new RowKeySetImpl();

            // RowKeySets contain List objects with key objects in them
            ArrayList list = new ArrayList();
            list.add(rowKey);
            newSelection.add(list);

            // create the selectionEvent and queue it
            SelectionEvent selectionEvent = new SelectionEvent(oldSelection, newSelection, table);
            selectionEvent.queue();

            // refresh the table
            AdfFacesContext.getCurrentInstance().addPartialTarget(table);
        }
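    For completeness, one assumption in the listing above: "table" is a component binding on the managed bean. A sketch of that wiring (my addition, not from the original post), with the page setting binding="#{BrowseBean.table}" on the af:table:

        private RichTable table; // oracle.adf.view.rich.component.rich.data.RichTable

        public void setTable(RichTable table) {
            this.table = table;
        }

        public RichTable getTable() {
            return table;
        }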

    Read the article

  • How can I fix my keyboard layout?

    - by Scott Severance
    For a long time, I've had my keyboard configured to use the layout currently known as "English (international AltGr dead keys)." I like this layout because without any modifier keys it's identical to the US English keyboard, but when I hold Right Alt I can get accented letters and other characters not available on a standard US English keyboard. In Oneiric, however, the layout is messed up. Right Alt+N produces "ñ" as expected. And another method works: Right Alt+`, E produces "è", also as expected. But there's no way to type "é", which is probably the accented letter I type the most. I expect Right Alt+A, E to do the trick. But instead of a dead key for the acute accent, it uses a method of combining characters to create the hybrid "´e". This hybrid looks like the proper "é" in some settings, but it isn't the same character and doesn't always work. (For example, in the text input box as I type this, it looks the same as the proper character, but when displayed on the site for all to see, it looks very wrong -- at least on my machine.) Ditto for all other characters with an acute accent, though some are available directly as pre-composed characters: for example, Right Alt+I yields "í".

    How can I change the acute accent on the A key to a proper dead key? Perhaps the more general version of this is: how can I tweak my keyboard layout?

    Update: I just tested this on my other machine, also running Oneiric but upgraded from previous versions, and I have no problems there. The problem machine was a fresh install of Oneiric, though I kept my old $HOME when I did the fresh install.

    Clarification: Even if an answer doesn't address my specific examples, I would still accept it if it provided enough detail for me to find the layout and tweak it according to my needs.

    Major update: After working through the information gained from Jim C's and Chascon's helpful replies, I've learned something new: the problem isn't with the layout itself, but with the fact that the selected layout isn't being applied. When I look at the definition in /usr/share/X11/xkb/symbols/us of the layout I've been running for a long time, the definition doesn't match what I get when I type. In addition, the keyboard layout dialog that's supposed to show the current layout looks different from the way the layout is defined in the file I mentioned, and matches what actually happens when I type. Following Jim C's suggestion, I created a new layout in /usr/share/X11/xkb/symbols/us containing some modifications to the layout I want. I can select my layout from the keyboard properties, and I can use it on the console following Chascon's post, but the layout I get when typing is unchanged. Apparently, there's a different layout defined somewhere that's overriding what I've set. Where is that layout hiding? This problem occurs in Unity (3D and 2D), but I was able to get the correct layout set in Xfce. In case it's relevant: this problem has occurred since I installed Oneiric fresh on this machine (though I preserved my $HOME), and I don't recall whether it occurred before the reinstall. Also, I run iBus so I can type Korean. I have a few difficulties with iBus, but I doubt they're related.

    Read the article

  • How would you gather client's data on Google App Engine without using Datastore/Backend Instances too much?

    - by ruslan
    I'm relatively new to StackExchange and not sure if this is an appropriate place to ask a design question -- the site gives me the hint "The question you're asking appears subjective and is likely to be closed". Please let me know. Anyway...

    One of the projects I'm working on is an online survey engine. It's my first big commercial project on Google App Engine. I need your advice on how to collect stats and efficiently record them in the Datastore without bankrupting me. The initial requirements are:

    • After a user finishes a survey, the client sends a list of pairs [ID (int) + PercentHit (double)]. This list shows how closely the answers of this user match the predefined answers of reference answerers (identified by IDs). I call them "target IDs".
    • The creator of the survey wants to see the aggregated % for given IDs for the last hour, a particular timeframe, or from the beginning of the survey.
    • Some surveys may have thousands of target/reference answerers.

    So I created this entity:

        public class HitsStatsDO implements Serializable {
            @Id transient private Long id;
            transient private Long version = (long) 0;
            transient private Long startDate;
            @Parent transient private Key parent; // fake parent which contains target id
            @Transient int targetId;
            private double avgPercent;
            private long hitCount;
        }

    But writing a HitsStatsDO for each target from each user would produce a lot of data. For instance, I had a survey with 3000 targets which was answered by ~4 million people within one week, with 300K people taking the survey on the first day. Even if we assume they answered evenly over 24 hours, that gives us ~1040 writes/second, which obviously hits the concurrent-writes limit of the Datastore. So I decided to collect data for one hour and then save it; that's why there are avgPercent and hitCount in HitsStatsDO. GAE instances are stateless, so I had to use a dynamic backend instance. There I have something like this:

        // Contains stats for one hour
        private class Shard {
            ReadWriteLock lock = new ReentrantReadWriteLock();
            Map<Integer, HitsStatsDO> map = new HashMap<Integer, HitsStatsDO>(); // Key is target ID
            public void saveToDatastore();
            public void updateStats(Long startDate, Map<Integer, Double> hits);
        }

    and a map with the shard for the current hour and the previous hour (which doesn't stay there for long):

        private HashMap<Long, Shard> shards = new HashMap<Long, Shard>(); // Key is HitsStatsDO.startDate

    So once per hour I dump the Shard for the previous hour to the Datastore. Plus I have a class LifetimeStats which keeps a Map<Integer, HitsStatsDO> in memcache, where the map key is the target ID. Also, in my backend shutdown hook method, I dump the stats for the unfinished hour to the Datastore. There is only one major issue here -- I have only ONE backend instance :)

    This raises the following questions, on which I'd like to hear your opinion:

    1. Can I do this without using a backend instance?
    2. What if one instance is not enough?
    3. How can I split data between multiple dynamic backend instances? It's hard because I don't know how many there are, as Google creates new ones as load increases.
    4. I know I can launch an exact number of resident backend instances. But how many? 2, 5, 10? What if I have no load at all for a week? Constantly running 10 backend instances is too expensive.
    5. What do I do with data from clients while a backend instance is dead/restarting?

    Thank you very much in advance for your thoughts.
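    One commonly suggested alternative, sketched here under my own assumptions rather than as a drop-in answer: keep the per-hour counters in memcache, which is shared across frontend instances, and have a cron-triggered handler flush them to the Datastore hourly. Memcache is not durable, so this trades some accuracy for removing the single backend. Using the classic App Engine Java API:

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;

        public class HourlyHitsAggregator {
            private final MemcacheService cache = MemcacheServiceFactory.getMemcacheService();

            // Called from any frontend instance for each submitted answer pair.
            public void recordHit(long hourBucket, int targetId, double percent) {
                String key = "hits:" + hourBucket + ":" + targetId;
                // increment() is atomic; the third argument seeds the counter if absent.
                cache.increment(key + ":count", 1L, 0L);
                // Accumulate percents as a scaled long so the addition stays atomic.
                cache.increment(key + ":sum", Math.round(percent * 10000), 0L);
            }
            // A cron-triggered handler would read and delete the previous hour's keys,
            // compute avgPercent = sum / (10000.0 * count), and write one
            // HitsStatsDO per target in a single batch put.
        }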

    Read the article

  • database design help for game / user levels / progress

    - by sprugman
    Sorry this got long and all prose-y. I'm creating my first truly gamified web app and could use some help thinking about how to structure the data.

    The set-up: Users need to accomplish tasks in each of several categories before they can move up a level. I've got my Users, Tasks, and Categories tables, and a UserTasks table which joins the three. ("User 3 has added Task 42 in Category 8. Now they've completed it.") That's all fine and working wonderfully.

    The challenge: I'm not sure of the best way to track progress in the individual categories toward each level. The "business" rules are:

    • You have to achieve a certain number of points in each category to move up.
    • If you get the number of points needed in Cat 8 but still have other work to do to complete the level, any new Cat 8 points count toward your overall score but don't "roll over" into the next level.
    • The number of categories is small (five currently) and unlikely to change often, but by no means absolutely fixed.
    • The number of points needed to level up will vary per level, probably by a formula, or perhaps a lookup table.

    So the challenge is to track each user's progress toward the next level in each category. I've thought of a few potential approaches:

    Possible solutions:
    1. Add a column to the Users table for each category and reset them all to zero each time a user levels up.
    2. Have a separate UserProgress table with a row for each category for each user and the number of points they have. (Basically a many-to-many version of #1.)
    3. Add a userLevel column to the UserTasks table and use that to derive their progress with some kind of SUM statement. Their current level will be a simple int in the Users table.

    Pros and cons: (1) seems by far the most straightforward, but it's also the least flexible. Perhaps I could use a naming convention based on the category IDs to help overcome some of that. (With code like "select cats; for each cat, get the value from Users.progress_{cat.id}.") It's also the one where I lose the most data -- I won't know which points counted toward leveling up. I don't have a need in mind for that, so maybe I don't care. (2) seems complicated: every time I add or remove a user or a category, I have to maintain the other table. I foresee synchronization challenges. (3) is somewhere in between -- cleaner than #2, but less intuitive than #1. In order to find out where a user is, I'd have mildly complex SQL like:

        SELECT categoryId, SUM(points)
          FROM UserTasks
         WHERE userId = {user.id}
           AND countsTowardLevel = {user.level}
         GROUP BY categoryId

    Hmm... that doesn't seem so bad. I think I'm talking myself into #3 here, but would love any input, advice or other ideas.

    P.S. Sorry for the cross-post. I wrote this up on SO and then remembered that there was a game-dev-focused site. Curious to see if I get different answers one place than the other...

    Read the article

  • Tile map collision is not working properly

    - by Sigh-AniDe
    I am having problems setting up collision between my sprite and the tiles. I have only written the collision code for moving upwards, but in some places on the map the sprite moves up and in some places it doesn't. Here is what I have so far:

        Vector2 position;
        private static float scalingFactor = 32;
        static int tileWidth = 32;
        static int tileHeight = 32;

        int[ , ] map = {
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
            {0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
        };

        // This is in turtle.Update
        if (keyboardState.IsKeyDown(Keys.Up)) {
            if (position.Y > screenHeight / 4) {
                // current position of the turtle on the tiles
                int mapX = (int)(position.X / scalingFactor);
                int mapY = (int)(position.Y / scalingFactor) - 1;
                if (isMovable(mapX, mapY, map)) {
                    position.Y = position.Y - scalingFactor;
                }
            } else {
                MoveUp();
            }
        }

        private void MoveUp() {
            motion.Y = -1;
        }

        public bool isMovable(int mapX, int mapY, int[ , ] map) {
            if (mapX < 0 || mapX > 19 || mapY < 0 || mapY > 20) {
                return false;
            }
            int tile = map[mapX, mapY];
            if (tile == 0) {
                return false;
            }
            return true;
        }

        protected override void Update(GameTime gameTime) {
            turtle.Update(screenHeight, scalingFactor, map);
            base.Update(gameTime);
        }
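    An editorial note, not from the original post: one thing worth checking in the snippet above is the index order. C# rectangular arrays declared like this one are addressed [row, column], i.e. [y, x], and this map has 19 rows of 20 columns, so map[mapX, mapY] transposes the axes (the bounds test also has the limits swapped). That would explain collision behaving correctly only where the map happens to look the same both ways. A sketch of the corrected lookup:

        public bool isMovable(int mapX, int mapY, int[,] map) {
            // 20 columns indexed 0..19, 19 rows indexed 0..18
            if (mapX < 0 || mapX > 19 || mapY < 0 || mapY > 18) {
                return false;
            }
            int tile = map[mapY, mapX]; // row (y) first, then column (x)
            return tile != 0;           // treat 0 as blocked, as in the original
        }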

    Read the article

  • SJS AS 9.1 U2 (GF v2 U2) - Patch 25 // GF v2.1 - Patch 19 // Sun GlassFish Enterprise Server v2.1.1 Patch 13

    - by arungupta
    SJS AS 9.1 U2 (GF v2 U2) patch 25 is a commercial (restricted) patch (see Overview of GFv2) available as part of Oracle's commercial support for GlassFish. This release is also patch 19 of GlassFish 2.1 and patch 13 of GlassFish 2.1.1. The file-based patches were released on Sep 1, 2011; package-based patches were released on Sep 13, 2011.

    Release overview

    Description:
    • SJS AS 9.1 U2 (GFv2 U2) - Patch 25 - file- and package-based patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    • GlassFish 2.1 - Patch 19 - file- and package-based patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.
    • GlassFish 2.1.1 - Patch 13 - file- and package-based patch for Solaris SPARC, Solaris x86, Linux, Windows and AIX.

    Patch IDs -- this release comes in 3 different variants:

    Package-based patches with HADB:
    • Solaris SPARC - [128640-27]
    • Solaris i586 - [128641-27]
    • Linux RPM - [128642-27]

    File-based patches with HADB:
    • Solaris SPARC - [128643-27]
    • Solaris i586 - [128644-27]
    • Linux - [128645-27]
    • Windows - [128646-27]

    File-based patches without HADB:
    • Solaris SPARC - [128647-27]
    • Solaris i586 - [128648-27]
    • Linux - [128649-27]
    • Windows - [128650-27]
    • AIX - [137916-27]

    Update date: Nov 23, 2011

    Comment: Commercial (for-fee) release with regular bug fixes. This is patch 25 for SJS AS 9.1 U2; it is also patch 19 for GlassFish v2.1 and patch 13 for GlassFish v2.1.1. It contains the fixes from the previous patches plus fixes for 18 unique defects.

    Status: CURRENT

    Bugs fixed in this patch:
    • [12823919]: RESPONSE BYTECHUNK FLUSH WILL GENERATE A MIMEHEADER WHEN SESSION REPLICATION ON
    • [12818767]: INTEGRATE NEW GRIZZLY 1.0.40
    • [12807660]: BUILD, STAGE AND INTEGRATING HADB
    • [12807643]: INTEGRATE MQ 4.4 U2 P4
    • [12802648]: GLASSFISH BUILD FAILED DUE TO METRO INTEGRATION
    • [12799002]: JNDI RESOURCE NOT ENABLED IF TARGETTING USING ADMIN GUI ON GF 2.1.1 PATCH 11
    • [12794672]: ORG.APACHE.JASPER.RUNTIME.BODYCONTENTIMPL DOES NOT COMPACT CB BUFFER
    • [12772029]: BUG 12308270 - NEED HOTFIX FROM GF RUNNING OPENSSO
    • [12749346]: VERSION CHANGES FOR GLASSFISH V2.1.1 PATCH 13
    • [12749151]: INTEGRATING METRO 1.6.1-B01 INTO GF 2.1.1 P13
    • [12719221]: PORTUNIFICATION WSTCPPROTOCOLFINDER.FIND NULLPOINTEREXCEPTION THROWN
    • [12695620]: HADB: LOGBUFFERSIZE CALCULATED INCORRECTLY FOR VALUES 120 MB AND THE MEMORY FO
    • [12687345]: ENVIRONMENT VARIABLE PARSING FOR SUN_APPSVR_NOBACKUP CAN FAIL DEPENDING ENV VARS
    • [12547651]: GLASSFISH DISPLAY BUG
    • [12359965]: GEREQUESTURI RETURNS URI WITH NULL PREPENDED INTERMITTENT AFTER UPGRADE
    • [12308270]: SUNBT7020210 ENHANCE JAXRPC SOAP RESPONSE USE PREVIOUS CONFIGURED NAMESPACE PREF
    • [12308003]: SUNBT7018895 FAILURE TO DEPLOY OR RUN WEBSERVICE AFTER UPDATING TO GF 2.1.1 P07
    • [12246256]: SUNBT6739013 [RN]GLASSFISH/SUN APPLICATION INSTALLER CRASHES ON LINUX

    Additional notes: More details about these bugs can be found at My Oracle Support.

    Read the article

  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node -- like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy -- you also need to disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure.

    Working on ADF Code Corner sample #101, I wrote the following code lines, which show a generic option for disclosing a tree node starting from a handle to the node to disclose. The use case in ADF Code Corner sample #101 is a drag-and-drop operation from a table component to a tree, to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country, and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes.

    The drop event provides a rowKey for the tree node that received the drop. As in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF binding layer, which represents the ADF tree binding at runtime, provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer.

        CollectionModel model = (CollectionModel) your_af_tree_reference.getValue();
        JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
        JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

    To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed.

        RowKeySetImpl rksImpl = new RowKeySetImpl();
        // the first key to add is the node that received the drop
        // operation (departments)
        rksImpl.add(dropRowKey);

    Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important.

        JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();

    The following code obtains a reference to the hierarchy of parent nodes until the root node is found.

        JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();
        // walk up the tree to expand all parent nodes
        while (dropNodeParent != null && dropNodeParent != rootNode) {
            // add the node's keyPath (remember it's a List) to the row key set
            rksImpl.add(dropNodeParent.getKeyPath());
            dropNodeParent = dropNodeParent.getParent();
        }

    Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. It's not quite "dinner for one", but the procedure is very similar to the one handling the parent node keys.

        ArrayList<JUCtrlHierNodeBinding> childList =
            (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();
        for (JUCtrlHierNodeBinding nb : childList) {
            rksImpl.add(nb.getKeyPath());
        }

    Finally, the row key set is defined as the disclosed row keys on the tree, so when you refresh (PPR) the tree, the new disclosed state shows.

        tree.setDisclosedRowKeys(rksImpl);
        AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());

    The refresh in my use case is on the tree's parent component (a layout container), which usually shows the best effect for refreshing the tree component.

    Read the article

  • How do I revert updates/tweaks to get to a usable GUI?

    - by Frankenmartin
    I just installed 12.04 the other day and ran into trouble upon restarting after installing updates.

    What I did before the problem occurred: I did not make many changes. The changes I did make included:

    1. Downloading and installing Adobe Flash Player. (Off topic, but: I am under the impression that Java, "C&C" and Shockwave cannot be run in Ubuntu. Could anybody verify this?)
    2. Installing gnome-tweak-tool and using it to install several themes. These themes worked well until restarting after the update. Is it possible that one of these themes caused the problem (in combination with the update or because of the restart)?
    3. Installing 215 updates from Update Manager and restarting my system.

    Current situation: Unity 3D has been unusable since restarting after running the updates. When I log in after entering my password, the following things happen:

    • The overhead panel disappears and the screen goes black for a minute.
    • My wallpaper flashes for a couple of seconds, but then the screen goes black again.
    • After another minute the wallpaper reappears, but nothing else does, and I am not able to open anything or even right-click.
    • After 5 minutes I can finally get a right-click menu.
    • Eventually a box comes up warning about a Compiz failure and asking to let it quit -- which I did.

    Using the right-click functionality I was able to create a new folder on the desktop and use it to open a file browser. In doing so I noticed that the downloads I had made were missing (music, image files, etc., even after unpacking several .zip and .rar files), even though I believe everything should still be there. Any new windows that I create cannot be closed/minimized/moved/etc., because the window bars are missing. I have tried rebooting several times, but the results are the same. I was able to browse some of the System Settings windows by clicking on the wallpaper link in the right-click menu. In doing so I navigated into Update Manager and noticed that updates were selected to be accepted from some "unsupported sources". I do not recall setting these options myself and wonder why these -- potentially dangerous -- options would be selected by default.

    Unity 2D is usable but not free of bugs -- I stumbled across the ability to log into a Unity 2D session while trying to log into Unity 3D. So far I have only noticed one bug in Unity 2D: the close, minimize and maximize buttons are invisible; however, they are still usable despite being invisible.

    What I need: I'm very new to Linux and Ubuntu and am still in the feeling-out stages, so I will have some trouble answering clarifying questions. I haven't used the terminal yet and would probably not be comfortable using it without very clear instructions. What I do need is to know how I can roll back/remove all those updates so I can use my computer regularly again. I believe I could follow step-by-step instructions as long as they are clear and concise, if someone knows what my problem is.

    Read the article

  • Information on upgrading Kinect Applications to MS SDK Beta 2.

    - by mbcrump
    Introduction: Microsoft recently released the Kinect for Windows SDK Beta 2. It contains many enhancements and fixes, which can be found here. The only problem is that a lot of current demo applications no longer function properly. Today, I'm going to walk you through a typical scenario of upgrading a Kinect application built with Beta 1 to Beta 2. Note: this tutorial covers WPF, but you can use the same techniques for WinForms.

    1) Fix the references

    Let's start with a fairly popular Kinect demo called Kinect User Interface Demo. This project uses the Beta 1 version of Microsoft.Research.Kinect.dll and version 1.0.0.0 of Coding4Fun's Kinect library. After you download the source code and extract the zip, open the project's references in Visual Studio 2010 and pay attention to the following two, as these are the .dlls you will have to update:

    • Coding4Fun.Kinect.Wpf
    • Microsoft.Research.Kinect

    If you click on the Coding4Fun.Kinect.Wpf file you will see its version information (v1.0.0.0). This needs to be upgraded to the Coding4Fun Kinect library built against Beta 2. So head over to http://c4fkinect.codeplex.com/, hit download, and extract the new files. Select the Coding4Fun.Kinect.Wpf.dll reference in your project and hit the Delete key on your keyboard to remove it. Select "Add Reference", navigate to the folder where you extracted the files, and select Coding4Fun.Kinect.Wpf.dll. If you click on the Coding4Fun.Kinect.Wpf.dll file and check its properties, it should be listed as 1.1.0.0.

    Fix Microsoft.Research.Kinect.dll: the official SDK Beta 2 ships a new .dll that you will need to reference in your application. Select Microsoft.Research.Kinect.dll in your application and hit the Delete key. Select Add Reference again and pick Microsoft.Research.Kinect.dll from the .NET tab. Double-check that the version number is 1.0.0.45.

    References fixed -- the runtime needs to be updated. We have fixed the references in a typical Kinect application that uses Microsoft's SDK and the C4F Kinect libraries; now we need to update the runtime. All Beta 1 Kinect applications instantiate the Runtime with the following code:

        readonly Runtime _runtime = new Runtime();

    Can you see that it is now marked as deprecated? That means we need to update it before Microsoft decides to remove it from future versions of the SDK. We can fix this very easily by replacing that code with:

        Microsoft.Research.Kinect.Nui.Runtime _nui;

    and adding similar code to our Loaded event, as shown below:

        public MainWindow()
        {
            InitializeComponent();
            Loaded += new RoutedEventHandler(MainWindow_Loaded);
        }

        void MainWindow_Loaded(object sender, RoutedEventArgs e)
        {
            if (Runtime.Kinects.Count == 0)
            {
                txtInfo.Text = "Missing Kinect";
            }
            else
            {
                _nui = Runtime.Kinects[0];
                _nui.Initialize(RuntimeOptions.UseColor);

                // Video Frame Ready Event can happen now!!!
                //_nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(_nui_VideoFrameReady);

                _nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
            }
        }

    In this sample, I am testing whether a Kinect is detected, and if it is, I initialize the runtime with my first Kinect by using Runtime.Kinects[0]. You can also specify other Kinect devices here. The rest of the code is standard code that you simply modify however you wish (i.e. skeletal, depth, etc.) depending on what type of video feed you want.
    Conclusion: As you can see, it really wasn't that painful to upgrade your project to Beta 2. I recommend that you go ahead and upgrade, as future versions of the SDK will use these methods. Thanks for reading. Subscribe to my feed.

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an Entity System in C++. It is almost complete (I have all the code there; I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This Entity System is based in part on the Artemis framework, but it is different. I'm not sure I'll be able to type this out the way my head is processing it, so I'm basically going to ask whether I should do one thing over another. First, a little detail on the Entity System itself. Here are the basic classes it is built from:

    Entity - an ID (and some methods to add/remove/get Components)
    Component - an empty abstract class
    ComponentManager - manages ALL components for ALL entities within a Scene
    EntitySystem - processes entities with specific components
    Aspect - used to determine which Components an Entity must contain for a specific EntitySystem to process it
    EntitySystemManager - manages all EntitySystems within a Scene
    EntityManager - manages entities (i.e. holds all Entities, determines whether an Entity has been changed, enables/disables them, etc.)
    EntityFactory - creates (and destroys) entities and assigns an ID to each
    Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager; has functions to update and initialise the scene

    Now, for an EntitySystem to know efficiently when to check whether an Entity is valid for processing (so I can add it to that EntitySystem), it must receive a message from the EntityManager (after a call to activate(Entity& e)). Similarly, the EntityManager must know when an Entity is destroyed by the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern a Listener may be removed (which in this case depends on the method being called). I mainly have this implemented for game-specific things, i.e. teams, tagging of entities, etc. So I was thinking that maybe I should call a private method (using friend classes) to announce when an Entity has been activated, deleted, etc. For example, taken from my EntityFactory:

        void EntityFactory::killEntity(Entity& e)
        {
            // if the entity doesn't exist in the entity manager within the scene
            if(!getScene()->getEntityManager().doesExsist(e))
            {
                return; // go back to the caller (should probably throw an exception or something...)
            }

            // tell the ComponentManager and the EntityManager that we killed an Entity
            getScene()->getComponentManager().doOnEntityWillDie(e);
            getScene()->getEntityManager().doOnEntityWillDie(e);

            // notify the listeners
            for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
            {
                (*i)->onEntityWillDie(*this, e);
            }

            _idPool.addId(e.getId()); // return the ID to the pool
            delete &e;                // delete the entity
        }

    As you can see, on the lines where I tell the ComponentManager and the EntityManager that an Entity will die, I call a method on each explicitly to make sure it is handled appropriately. Now, I realise I could drop the explicit calls and rely on the for loop that notifies every listener connected to the EntityFactory's Mouth (an object used to tell listeners that an event occurred) - but is that a good idea (good design, or what)? I've gone over the pros and cons (see the sketch below), and I just can't decide what I want to do.

    Calling explicitly:
    PROS: faster(?); since these functions are explicitly called, they can't be "removed"
    CONS: not flexible; arguably bad design (friend functions)

    Calling through Listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener):
    PROS: more flexible(?); better design(?)
    CONS: slower(?) (virtual function calls); listeners can be removed, i.e. a manager might be detached and never called again during the program, which could result in a crash.

    P.S. If you wish to view my current source code, I am hosting it on BitBucket.
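    A minimal sketch of the listener-based variant, for comparison. The names here (EntityFactoryListener, addManager, addListener, and the simplified onEntityWillDie signature) are hypothetical, not taken from the poster's code; the idea is to let the managers share the virtual interface while keeping them on a path that can never be unsubscribed:

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        class Entity; // defined elsewhere in the entity system

        // Anything that must react to an entity's death implements this
        // interface and registers itself with the factory.
        class EntityFactoryListener
        {
        public:
            virtual ~EntityFactoryListener() {}
            virtual void onEntityWillDie(Entity& e) = 0;
        };

        class EntityFactory
        {
        public:
            // Managers are registered once, at Scene construction, and are
            // never exposed through removeListener - so they cannot be lost.
            void addManager(EntityFactoryListener& m)  { _managers.push_back(&m); }

            // Game-specific observers (teams, tags, ...) may come and go.
            void addListener(EntityFactoryListener& l) { _listeners.push_back(&l); }
            void removeListener(EntityFactoryListener& l)
            {
                _listeners.erase(std::remove(_listeners.begin(), _listeners.end(), &l),
                                 _listeners.end());
            }

            void killEntity(Entity& e)
            {
                // Notify the managers first, through the same virtual interface.
                for(std::size_t i = 0; i < _managers.size(); ++i)
                    _managers[i]->onEntityWillDie(e);

                // Then the removable, game-specific listeners.
                for(std::size_t i = 0; i < _listeners.size(); ++i)
                    _listeners[i]->onEntityWillDie(e);

                // ... return the ID to the pool and delete the entity, as above.
            }

        private:
            std::vector<EntityFactoryListener*> _managers;  // ComponentManager, EntityManager
            std::vector<EntityFactoryListener*> _listeners; // removable observers
        };

    Keeping two lists gives the flexibility of virtual dispatch without the crash risk of a manager being accidentally removed, and one virtual call per observer is unlikely to be measurable next to the cost of destroying the entity itself.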

    Read the article

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I’ve mentioned before that I’m not a fan of the word “Cloud”. It’s too marketing-oriented, gimmicky and non-specific. A better term (in many cases) is “Distributed Computing”. That means that some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word “Cloud” that does not necessarily mean the computing is done somewhere else. In fact, it’s a vector of Cloud Computing that can better be termed “Utility Computing”. This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – who might actually be a developer – can do some computing work. To that person, the resource is just “there” and works like they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It’s got a cool new trendy term – “Private Cloud” – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a “Cloud Provider”. A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the “Cloud” provider. On your side, the more you automate and provision the system, the more you act like a Cloud Provider. Another example is a database server. In this case, the “end user” is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time. The more this is automated, the more you’re acting like a Cloud Provider. Lots of companies help you do this in your own data centers, from VMware to IBM and many others. Microsoft's offering here is based around System Center – a “cloud in a box” provisioning system that’s actually pretty slick.

    The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It’s not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you already know how to do.

    So congrats on your new role as a “Cloud Provider”. If you have an e-mail system or a database platform, you can put that right on your resume.

    Read the article

  • Seamless STP with Oracle SOA Suite

    - by user12339860
    STP stands for “Straight Through Processing”. Wikipedia describes STP as a solution that enables “the entire trade process for capital markets and payment transactions to be conducted electronically without the need for re-keying or manual intervention, subject to legal and regulatory restrictions”. I will deal with the latter part of the definition, i.e. “payment transactions without manual intervention”, in this article. The STP that I am writing about involves the interaction between a Bank and its corporate customers; to that extent this business case is also called “Corporate Payments”. Simply put, a Corporate Payments STP solution needs to connect the payment transaction right from the corporate ERP into the Bank’s payment hub. A SOA-based STP solution can do a lot more than just process transactions, but before I get to the solution let me describe the perspectives of the two primary parties in this interaction: the corporate customer and the Bank.

    The corporate's interaction with the Bank: Typically it is the treasury department of an enterprise that interacts with the Bank on a daily basis. Here is how a day of interaction looks from the treasury department's side:

    Corporate Cash
    - Retrieve beginning-of-day totals
    - Monitor cash accounts
    - Send or receive cash between accounts
    - Supply chain payments

    Payment Settlements
    - Calculate settlement positions
    - Retrieve end-of-day totals
    - Assess transaction financial impact

    Short Term Investment Desk
    - Retrieve current account information
    - Conduct investment activities

    The Bank's interaction with the corporate: From the Bank's perspective, the interaction runs from on-boarding a corporate customer through to billing the corporate for the value-added services it provides. Once the corporate is on-boarded, the daily interaction involves:

    - Handling the various formats of data arriving from customers
    - Processing beginning-of-day and end-of-day reporting requests from customers
    - Meeting compliance requirements
    - Processing payments
    - Transmitting payment status

    Challenges with this interaction: Both the Bank and the corporate face many challenges from these interactions, including:

    - Keeping a consistent view of transaction data for the various LOBs of the corporate and the Bank
    - Corporate customers use different ERPs, so the data formats are bound to differ; can the Bank's IT systems convert them into formats that map easily to the corporate ERP?
    - How does the Bank manage the communication profiles of these customers?
    - Corporate customers are demanding near-real-time visibility into their corporate accounts
    - Corporate customers can make better cash management decisions if they can analyse the impact
    - Can the Bank create opportunities to sell its products to the investment desks at corporate houses and manage their orders?
    - How will the Bank bill the corporate customer for the value-added services it provides?

    So what does a SOA-based seamless STP solution bring to the table? Highlights of the Oracle SOA-based STP solution:

    For the corporate customer:
    - No manual or paper-based banking transactions
    - Secure delivery of payment data to the Bank from multiple ERPs without customization
    - A single portal for monitoring and administering payment transactions
    - Rule-based validation of payments
    - The customer has the data necessary for more effective handling of payment and cash management decisions
    - Business measurements track progress toward payment cost goals

    For the Bank:
    - Reduces the time and complexity of transactions
    - Simplifies the process of introducing new products to corporate customers
    - A single payment hub for all corporate ERP payments across multiple instruments
    - New revenue sources from delivering value-added services to customers
    - Leverages existing payment infrastructure
    - Removes inconsistent data formats and interchange between bank and corporate systems
    - Compliance, and many other benefits

    Read the article

  • How can I make sound work without starting X?

    - by Magnus Hoff
    I have a headless machine connected to my sound system, and I am using it to run a music-playing daemon that I control over the network (among other things). However, I can't seem to get sound out of my speakers without running X. I am running PulseAudio in a system-wide instance and my daemon is not running within X. Nevertheless, when my daemon is playing music without me hearing it, I can fix it by running startx in an unrelated session. After X starts, I can hear the sound. The sound disappears again if I kill the X server. Interestingly/annoyingly, the sound also stops after X has been running for a few minutes. This could possibly be because of a screen saver of some sort, but I haven't been able to verify or falsify this theory. So my current workaround is to ssh into the box whenever I want music, run startx, and restart it every fifteen minutes or so. I'd like to do better. I have been able to verify the following:

    - Adjustments in alsamixer have no effect on this problem
    - The relevant output channel is never muted
    - In alsamixer, I can see no difference between when the sound is working and when it isn't
    - Nothing is muted in pactl list
    - There is no difference in the output from pactl list between before starting X and after it's started (except the identifier of the pactl instance connected to pulse, which is different each time you run pactl)
    - The user running the music daemon is a member of the groups audio, pulse and pulse-access
    - The music daemon program does not report any error messages and acts as if it is playing the music like it should
    - Some form of dbus daemon is running; ps aux | grep dbus reports dbus-daemon --system --fork --activation=upstart before and after I have started X

    Some details about my hardware:

    - Motherboard: http://www.asus.com/Motherboards/AT5IONTI_DELUXE/
    - Sound chip: Nvidia GPU 0b HDMI/DP (from alsamixer)
    - Using HDMI for output (the machine also has an Intel Realtek ALC887 that I am not using)

    Output of lsmod:

        Module Size Used by
        deflate 12617 0
        zlib_deflate 27139 1 deflate
        ctr 13201 0
        twofish_generic 16635 0
        twofish_x86_64_3way 25287 0
        twofish_x86_64 12907 1 twofish_x86_64_3way
        twofish_common 20919 3 twofish_generic,twofish_x86_64_3way,twofish_x86_64
        camellia 29348 0
        serpent 29125 0
        blowfish_generic 12530 0
        blowfish_x86_64 21466 0
        blowfish_common 16739 2 blowfish_generic,blowfish_x86_64
        cast5 25112 0
        des_generic 21415 0
        xcbc 12815 0
        rmd160 16744 0
        bnep 18281 2
        rfcomm 47604 12
        sha512_generic 12796 0
        crypto_null 12918 0
        parport_pc 32866 0
        af_key 36389 0
        ppdev 17113 0
        binfmt_misc 17540 1
        nfsd 281980 2
        ext2 73795 1
        nfs 436929 1
        lockd 90326 2 nfsd,nfs
        fscache 61529 1 nfs
        auth_rpcgss 53380 2 nfsd,nfs
        nfs_acl 12883 2 nfsd,nfs
        sunrpc 255224 16 nfsd,nfs,lockd,auth_rpcgss,nfs_acl
        btusb 18332 2
        vesafb 13844 2
        pl2303 17957 1
        ath3k 12961 0
        bluetooth 180153 24 bnep,rfcomm,btusb,ath3k
        snd_hda_codec_hdmi 32474 4
        nvidia 11308613 0
        ftdi_sio 40679 1
        usbserial 47113 6 pl2303,ftdi_sio
        psmouse 97485 0
        snd_hda_codec_realtek 224173 1
        snd_hda_intel 33719 5
        snd_hda_codec 127706 3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        serio_raw 13211 0
        snd_seq_midi 13324 0
        snd_hwdep 17764 1 snd_hda_codec
        snd_pcm 97275 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec
        snd_rawmidi 30748 1 snd_seq_midi
        snd_seq_midi_event 14899 1 snd_seq_midi
        snd_seq 61929 2 snd_seq_midi,snd_seq_midi_event
        snd_timer 29990 2 snd_pcm,snd_seq
        snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq
        snd 79041 20 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        asus_atk0110 18078 0
        mac_hid 13253 0
        jc42 13948 0
        soundcore 15091 1 snd
        snd_page_alloc 18529 2 snd_hda_intel,snd_pcm
        coretemp 13554 0
        i2c_i801 17570 0
        lp 17799 0
        parport 46562 3 parport_pc,ppdev,lp
        r8169 62154 0

    Any ideas? What does X do that's so important?

    Read the article

  • XNA RenderTarget2D Sample

    - by Michael B. McLaughlin
    I remember being scared of render targets when I first started with XNA. They seemed like weird magic and I didn’t understand them at all. There’s nothing to be frightened of, though, and they are pretty easy to learn how to use.

    The first thing you need to know is that when you’re drawing in XNA, you aren’t actually drawing to the screen. Instead you’re drawing to this thing called the “back buffer”. Internally, XNA maintains two sections of graphics memory. Each one is exactly the same size as the other and has all the same properties (such as surface format, whether there’s a depth buffer and/or a stencil buffer, and so on). XNA flips between these two sections of memory every update-draw cycle. So while you are drawing to one, it’s busy drawing the other one on the screen. When the current update-draw cycle ends, XNA flips them: the section you were just drawing to gets drawn to the screen, while the one that was being shown becomes the one you’ll be drawing on. This is what’s meant by “double buffering”. If you drew directly to the screen, the player would see all of those draws taking place as they happened, and that would look odd and not very good at all.

    Those two sections of graphics memory are render targets. All a render target is, is a section of graphics memory to which things can be drawn. In addition to the two that XNA maintains automatically, you can also create and set your own using RenderTarget2D and GraphicsDevice.SetRenderTarget. Using render targets lets you do all sorts of neat post-processing effects (like bloom) to make your game look cooler. It also lets you do things like motion blur and create mirrors in 3D games. There are quite a lot of things that render targets let you do.

    To go along with this post, I wrote up a simple sample of how to create and use a RenderTarget2D. It’s released under the terms of the Microsoft Public License and is available for download on my website here: http://www.bobtacoindustries.com/developers/utils/RenderTarget2DSample.zip . Other than the ‘using’ statements, every line is commented in detail so that it should (hopefully) be easy to follow along with and understand. If you have any questions, leave a comment here or drop me a line on Twitter.

    One last note. While creating the sample I came across an interesting quirk. If you start by creating a Windows Game and then make a copy for Windows Phone 7, the drop-down that lets you choose between drawing to a WP7 device and the WP7 emulator stays grayed out. To resolve this, right-click on the Windows Phone 7 version in the Solution Explorer and choose “Set as StartUp Project”. The bar will then become active, letting you change the target you wish to deploy to. If you want another version to be the one that starts up when you press F5 to start debugging, just right-click on that version and choose “Set as StartUp Project” for it once you’ve set the WP7 target (device or emulator) that you want.
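    The flip described above isn't specific to XNA; any double-buffered renderer does the same dance. As a conceptual illustration only (the sample itself is C#/XNA; the Buffer/present names below are hypothetical, not XNA's API), here is the swap sketched in C++:

        #include <array>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // A stand-in for a section of graphics memory (a "render target").
        struct Buffer
        {
            std::vector<std::uint32_t> pixels; // one 32-bit colour per pixel
        };

        class DoubleBuffer
        {
        public:
            explicit DoubleBuffer(std::size_t pixelCount)
            {
                _buffers[0].pixels.resize(pixelCount);
                _buffers[1].pixels.resize(pixelCount);
            }

            // The buffer the game draws into this frame (the "back buffer").
            Buffer& backBuffer() { return _buffers[_backIndex]; }

            // The buffer currently being shown on screen (the "front buffer").
            const Buffer& frontBuffer() const { return _buffers[1 - _backIndex]; }

            // At the end of the update-draw cycle the roles swap: what was just
            // drawn becomes visible, and the other buffer is drawn into next.
            void present() { _backIndex = 1 - _backIndex; }

        private:
            std::array<Buffer, 2> _buffers;
            std::size_t _backIndex = 0;
        };

    Setting your own RenderTarget2D with GraphicsDevice.SetRenderTarget simply redirects drawing from the back buffer to another such section of memory, which you can later sample from (for bloom, blur or mirrors) while drawing the real frame.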

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 21-27, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of October 21-27, 2012.

    OTN Architect Day: Los Angeles - This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables, plus a free lunch. [NOTE: The event was last week, of course. Big thanks to the session presenters and especially to those Angelenos who came out for the event.]

    WebLogic Server 11gR1 Interactive Quick Reference - "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner."

    Podcast: Are You Future Proof? - The latest OTN ArchBeat Podcast series features Oracle ACE Directors Ron Batra, Basheer Khan, and Ronald van Luttikhuizen, three practicing architects in an open discussion about how changes in enterprise IT are raising the bar for success for software architects and developers.

    Play Oracle Vanquisher - Here's a little respite from whatever it is you normally spend your time on. Oracle Vanquisher is an online diversion that makes a game of data center optimization. According to the description: "Armed with a cool Oracle vacuum pack suit and a strategic IT roadmap, you will thwart threats and optimize your data center to increase your company's stock price and boost your company's position." Mainly you avoid electric shock and killer birds. The current high score belongs to someone identified as "TEN." My score? Never mind.

    Advanced Oracle SOA Suite OOW 2012 Presentations - The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks.

    OAM and OIM 11g Academies - Looking for technical how-to content covering Oracle Access Manager and Oracle Identity Manager? The people behind the Oracle Middleware Security blog have indexed relevant blog posts into what they call "Academies." "These indexes contain the articles we've written that we believe provide long lasting guidance on OAM and OIM. Posts covered in these series include articles on key aspects of OAM and OIM 11g, best practice architectural guidance, integrations, and customizations."

    Oracle's Analytics, Engineered Systems, and Big Data Strategy | Mark Rittman - Part 1 of 3 in Oracle ACE Director Mark Rittman's series on Oracle Exalytics, Oracle R Enterprise and Endeca.

    Oracle ACE Directors Nordic Tour 2012: Venues and BI Presentations | Mark Rittman - Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki.

    Following the Thread in OSB | Antony Reynolds - Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post.

    OW12: Oracle Business Process Management/Oracle ADF Integration Best Practices | Andrejus Baranovskis - The Oracle OpenWorld presentations keep coming! Oracle ACE Director Andrejus Baranovskis shares the slides from "Oracle Business Process Management/Oracle ADF Integration Best Practices," co-presented with Danilo Schmiedel from Opitz Consulting.

    Thought for the Day: "Not everything that can be counted counts, and not everything that counts can be counted." - Albert Einstein (March 14, 1879 - April 18, 1955) Source: Quotes For Software Engineers

    Read the article

  • How to give my user permission to add/edit files on local apache server? [duplicate]

    - by Logan
    Possible Duplicate: How to make Apache run as current user

    I'm setting up my local test server again, and I seem to have forgotten how to successfully set up the LAMP server. I have installed the LAMP server via the tasksel command and I have configured the /var/www directory according to a guide I've found:

    After the LAMP server installation you will need write permissions to the /var/www directory. Follow these steps to configure permissions.

    Add your user to the www-data group:
        sudo usermod -a -G www-data <your user name>
    Now add the /var/www folder to the www-data group:
        sudo chgrp -R www-data /var/www
    Now give write permissions to the www-data group:
        sudo chmod -R g+w /var/www

    So the logan user is now part of the www-data group, and the file/folder permissions look like the output below:

        logan@computer:/var/www$ ls -lart
        total 172
        -rw-r--r--  1 www-data www-data  1997 Oct 23  2010 wp-links-opml.php
        -rw-r--r--  1 www-data www-data  3177 Nov  1  2010 wp-config-sample.php
        -rw-r--r--  1 www-data www-data  3700 Jan  8  2012 wp-trackback.php
        -rw-r--r--  1 www-data www-data   271 Jan  8  2012 wp-blog-header.php
        -rw-r--r--  1 www-data www-data   395 Jan  8  2012 index.php
        -rw-r--r--  1 www-data www-data  3522 Apr 10  2012 wp-comments-post.php
        -rw-r--r--  1 www-data www-data 19929 May  6  2012 license.txt
        -rw-r--r--  1 www-data www-data 18219 Sep 11 08:27 wp-signup.php
        -rw-r--r--  1 www-data www-data  2719 Sep 11 16:11 xmlrpc.php
        -rw-r--r--  1 www-data www-data  2718 Sep 23 12:57 wp-cron.php
        -rw-r--r--  1 www-data www-data  7723 Sep 25 01:26 wp-mail.php
        -rw-r--r--  1 www-data www-data  2408 Oct 26 15:40 wp-load.php
        -rw-r--r--  1 www-data www-data  4663 Nov 17 10:11 wp-activate.php
        -rw-r--r--  1 www-data www-data  9899 Nov 22 04:52 wp-settings.php
        -rw-r--r--  1 www-data www-data  9175 Nov 29 19:57 readme.html
        -rw-r--r--  1 www-data www-data 29310 Nov 30 08:40 wp-login.php
        drwxr-xr-x 14 root     root      4096 Dec 24 17:41 ..
        drwx------  9 www-data www-data  4096 Dec 26 16:11 wp-admin
        drwx------  9 www-data www-data  4096 Dec 26 16:11 wp-includes
        -rw-rw-rw-  1 www-data www-data  3448 Dec 26 16:14 wp-config.php
        drwxrwxr-x  5 www-data www-data  4096 Dec 26 16:14 .
        drwx------  6 www-data www-data  4096 Dec 26 16:19 wp-content

    Things work perfectly at http://localhost; I can view the website fine. The thing is, I will be working on a plugin for WordPress, and I don't want to deal with separate owners under the www directory when creating or modifying files/folders. When I give my user ownership of /var/www recursively as logan:www-data, I can create/modify files but cannot view http://localhost - I get a Forbidden error. I'm assuming this is because of Apache's configuration? Which is healthier or easier, considering this is just a local test website: configuring Apache so that it serves the site while /var/www is chowned to logan:logan, letting me create files etc. without any sudo commands, or configuring user groups so that the www-data user acts like my logan user? (I don't know how that's possible - maybe putting the www-data user under the logan group?) Please shed some light on this subject. All I want is to be able to create/modify files under my user, and yet be able to successfully view http://localhost. I appreciate the help!

    Read the article

  • The Evolution of Oracle Direct EMEA by John McGann

    - by user769227
    John is expanding his Dublin-based team and is currently recruiting a Director with marketing and sales leadership experience: http://bit.ly/O8PyDF Should you wish to apply, please send your CV to [email protected]

    Hi, my name is John McGann and I am part of the Oracle Direct management team, based in Dublin. Today I’m writing from the Oracle London City office, right in the heart of the financial district and, until very recently, at the centre of a fantastic Olympic Games. The Olympics saw individuals and teams from across the globe competing to decide who is Citius, Altius, Fortius – “Faster, Higher, Stronger”. There are lots of obvious parallels between the competitive world of the Olympics and the business environments that many of us operate in, but there are also some interesting differences – especially in my area of responsibility within Oracle.

    We are of course constantly striving to be the best: the best solution on offer for our clients, bringing simplicity to their management, consumption and application of information technology, and the best provider when compared with our many niche competitors. In Oracle, and especially in Oracle Direct, a key aspect of how we achieve this is what sets us apart from the Olympians. We long ago eliminated geographic boundaries as a limitation on what we can achieve. We assemble the strongest individuals across multiple countries and bring them together in teams focussed on a single goal. One such team is the Oracle Direct Sales Programs team.

    In case you don’t know, Oracle Direct EMEA (Europe, Middle East and Africa) is the inside sales division in Oracle, and it is where I started my Oracle career. I remember that my first role involved putting direct mail in envelopes... things have moved on a bit since then – for me, for Oracle Direct and in how we interact with our customers. Today the team of over 1000 people is located in the different Oracle Direct offices around Europe – the main ones are Malaga, Berlin, Prague and Dubai, plus the headquarters in Dublin. We work in over 20 languages and are in constant contact with current and future Oracle customers, using the latest internet and telephone technologies to effectively communicate and collaborate with each other, our customers and prospects.

    One of my areas of responsibility within Oracle Direct is the Sales Programs team. This team of 25 people manages the planning and execution of demand generation, leading the process of finding new and incremental revenue within Oracle Direct. The Sales Programs Managers, or ‘SPMs’, are embedded within each of the Oracle Direct sales teams, focussed on distinct geographies or product groups. The SPMs are virtual members of the regional sales management teams, and work closely with the sales and marketing teams to define and deliver demand generation activities. The customer contact elements of these activities are executed via the Oracle Direct Sales and Business Development/Lead Generation teams, to deliver the pipeline required to meet our revenue goals. Activities can range from pan-EMEA joint sales and marketing campaigns to very localised niche campaigns. The campaigns might focus on particular segments of our existing customers, introducing elements of our evolving solution portfolio which customers may not be familiar with. The Sales Programs team also manages ‘Nurture’ activities to ensure that we develop potential business opportunities with contacts and organisations that do not have immediate requirements.

    Looking ahead, it is really important that we continue to evolve our ability to add value to our clients and to reduce the physical limitations of our distance from them through the innovative application of technology. This enables us to enhance the customer buying experience and to equip the Inside Sales teams to manage ever more complex sales cycles from start to finish. One of my expectations of my team is to actively drive innovation in how we leverage data to better understand our customers, and to exploit emerging technologies to better communicate with them. With the rate of innovation and acquisition within Oracle, we need to ensure that existing and potential customers are aware of all we have to offer that relates to their business goals. We need to achieve this via a coherent communication and sales strategy to effectively target the right people using the most effective medium. This is another area where the Sales Programs team plays a key role.

    Read the article

  • Best Practices - Core allocation

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (also called Logical Domains).

    Introduction: SPARC T-series servers currently have up to 4 CPU sockets, each of which has up to 8 or (on SPARC T3) 16 CPU cores, while each CPU core has 8 threads, for a maximum of 512 dispatchable CPUs. The defining feature of Oracle VM Server for SPARC is that each domain is assigned CPU threads or cores for its exclusive use. This avoids the overhead of software-based time-slicing and of the emulation (or binary rewriting) of system state-changing privileged instructions used in traditional hypervisors. To create a domain, administrators specify the number of CPU threads or cores that the domain will own, as well as its memory and I/O resources. When CPU resources are assigned at the individual thread level, the logical domains constraint manager attempts to assign threads from the same cores to a domain and to avoid "split core" situations where the same CPU core is used by multiple domains. Sometimes this is unavoidable, especially when domains are allocated and deallocated CPUs in small increments.

    Why split cores can matter: Split-core allocations can silently reduce performance, because multiple domains with different address spaces and memory contents share the core's Level 1 cache (L1$). This is called false cache sharing, since even identical memory addresses from different domains must point to different locations in RAM. The effect is increased contention for the cache and higher memory latency for each domain using that core. The degree of performance impact can vary widely. For applications with very small memory working sets, and with I/O-bound or low-CPU-utilization workloads, it may not matter at all: all machines wait for work at the same speed. If the domains have substantial workloads, or are critical to performance, then this can have an important impact. This blog entry was inspired by a customer issue in which one CPU core was split among 3 domains, one of which was the control and service domain. The reported problem was increased I/O latency in guest domains, but the root cause might be higher latency servicing the I/O requests due to the control domain being slowed down.

    What to do about it: Split-core situations are easily avoided. In most cases the logical domains constraint manager will avoid them without any administrative action, but they can be entirely prevented by any of the following:

    - Assign virtual CPUs in multiples of 8 - the number of threads per core. For example: ldm set-vcpu 8 mydomain or ldm add-vcpu 24 mydomain. Each domain will then be allocated on a core boundary.
    - Use the whole-core constraint when assigning CPU resources. This allocates CPUs in increments of entire cores instead of virtual CPU threads. The equivalent of the above commands would be ldm set-core 1 mydomain or ldm add-core 3 mydomain. Older syntax does the same thing by adding the -c flag to the add-vcpu, rm-vcpu and set-vcpu commands, but the new syntax is recommended. When whole-core allocation is used, an attempt to add cores to a domain fails if there aren't enough completely empty cores to satisfy the request. See https://blogs.oracle.com/sharakan/entry/oracle_vm_server_for_sparc4 for an excellent article on this topic by Eric Sharakan.
    - Don't obsess: if the workloads have minimal CPU requirements and don't need anywhere near a full CPU core, then don't worry about it. If you have low-utilization workloads being consolidated from older machines onto a current T-series, there's no need to worry about this or to assign an entire core to domains that will never use that much capacity.

    In any case, make sure the most important domains have their own CPU cores, in particular the control domain and any I/O or service domain, and of course any important guests.

    Summary: Split-core CPU allocation to domains can potentially have an impact on performance, but the logical domains manager tends to prevent this situation, and it can be completely and simply avoided by allocating virtual CPUs on core boundaries.

    Read the article
