Search Results

Search found 8274 results on 331 pages for 'offtopic but important'.


  • How to get lookahead symbol when constructing LR(1) NFA for parser?

    - by greenoldman
    I am reading an explanation (the awesome "Parsing Techniques" by D. Grune and C.J.H. Jacobs; p.292 in the 2nd edition) about how to construct an LR(1) parser, and I am at the stage of building the initial NFA. What I don't understand is how to get/compute a lookahead symbol. Here is the example grammar from the book: S -> E E -> E - T E -> T T -> ( E ) T -> n where n is a terminal. The "weird" transitions for me are in the sequence: 1) S -> . E eof 2) E -> . E - T eof 3) E -> . E - T - 4) E -> E . - T - 5) E -> E - . T - (Note: in the list above, the state numbers are in front and the lookahead symbol is at the end.) What puzzles me is that the transition from (4) to (5) means reading the - token, right? So how is it that - is still a lookahead symbol, and, even more important, why is eof no longer a lookahead symbol? After all, in an input such as n - n eof there is only one - symbol. My naive thinking tells me (5) should be written as: 5) E -> E - . T - eof And another thing: n is a terminal. Why is it not used at all as a lookahead symbol? I mean, we expect to see - or (, which is OK, but does the lack of n mean we are sure it won't appear in the input? Update: after more reading I am only more confused ;-) I.e. what really is a lookahead? Because I see a state such as (p.292, 2nd column, 2nd row): E -> E . - T eof The lookahead says eof but the incoming input says -. Isn't that a contradiction? And it is not only in this book.
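    One way to see where that stray - comes from is to apply the LR(1) closure rule mechanically: for an item [A -> α . B β, a], every production of B is added with lookahead FIRST(βa). Below is a minimal Python sketch of that rule for the example grammar (my own illustration, not code from the book):

        # Sketch of LR(1) closure for the example grammar, showing how the
        # lookaheads of the newly added items are computed.
        GRAMMAR = {
            "S": [["E"]],
            "E": [["E", "-", "T"], ["T"]],
            "T": [["(", "E", ")"], ["n"]],
        }
        TERMINALS = {"-", "(", ")", "n", "eof"}

        def first_of_symbol(sym, seen=None):
            """FIRST set of a single symbol; this grammar has no epsilon rules."""
            if sym in TERMINALS:
                return {sym}
            seen = set() if seen is None else seen
            if sym in seen:                      # guard against left recursion (E -> E - T)
                return set()
            seen.add(sym)
            firsts = set()
            for alt in GRAMMAR[sym]:
                firsts |= first_of_symbol(alt[0], seen)
            return firsts

        def closure(items):
            """items: set of (lhs, rhs, dot position, lookahead) tuples."""
            items = set(items)
            changed = True
            while changed:
                changed = False
                for lhs, rhs, dot, la in list(items):
                    if dot < len(rhs) and rhs[dot] not in TERMINALS:
                        beta = list(rhs[dot + 1:]) + [la]
                        # the new items get lookahead FIRST(beta la); with no
                        # epsilon rules that is FIRST of beta's first symbol
                        for new_la in first_of_symbol(beta[0]):
                            for alt in GRAMMAR[rhs[dot]]:
                                item = (rhs[dot], tuple(alt), 0, new_la)
                                if item not in items:
                                    items.add(item)
                                    changed = True
            return items

        for lhs, rhs, dot, la in sorted(closure({("S", ("E",), 0, "eof")})):
            print(lhs, "->", " ".join(rhs[:dot]), ".", " ".join(rhs[dot:]), " [", la, "]")

    Running it prints both [E -> . E - T, eof] and [E -> . E - T, -]: the - lookahead belongs to the inner E of E -> E - T (it is FIRST(- T eof), i.e. what can follow that inner E), while eof belongs to the outer context, so they are two separate items rather than a contradiction. And n never shows up as a lookahead simply because n never follows a nonterminal on any right-hand side of this grammar.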

    Read the article

  • Two New CRM USER Communities just launched

    - by Divya Malik
    Here comes an announcement from Chris Gallen, from our Support Services team. For those of you who are EBS CRM users, here are two recently launched communities that are now available to discuss topics that are important to you. These communities are for Sales & Marketing and Telesales. The Sales & Marketing community is open to discuss a wide range of topics from Oracle Sales, Sales Online, Territory Management, Partner Management, Leads Management, Sales Offline, Sales for Handhelds, Sales Foundation, and Oracle Marketing. Some possible topics include Oracle Sales Implementations, TCA and DQM Integrations, Territory Management Setups and Definitions, Product Catalog Integrations, Sales Forecasting, Lead and Opportunity management, Sales Manager and Sales User responsibilities and Reports, Resource Management including Roles and Groups, Oracle Sales Personalizations, Concurrent Requests for Sales Reps and Sales Manager Dashboards, Integration with Quoting, Proposals, General Ledger, Advanced Product Catalog, CRM Resource Administration, etc. The Telesales community is available to discuss topics such as Customer/Org/Person/Party Relationships, TCA/DQM Integration, Lead and Opportunity Management, Universal Work Queue, Universal Search Features, Purchase Items/Product Integration, eBusiness Center Setup Issues, Interactions, Tasks and Notes Integrations, and Form Personalizations. How Can You Get Started? Here are the two ways to get engaged: A) Click here to access all our communities, OR B) go through My Oracle Support as follows: Log into My Oracle Support (Flash or Classic). Click the "Community" link at the top of the page. Click [Enter Here] on the following page. Select the community from the "My Communities" list on the top-left. Take advantage TODAY!

    Read the article

  • Are closures with side-effects considered "functional style"?

    - by Giorgio
    Many modern programming languages support some concept of closure, i.e. a piece of code (a block or a function) that (1) can be treated as a value, and therefore stored in a variable, passed around to different parts of the code, defined in one part of a program and invoked in a totally different part of the same program; and (2) can capture variables from the context in which it is defined, and access them when it is later invoked (possibly in a totally different context). Here is an example of a closure written in Scala: def filterList(xs: List[Int], lowerBound: Int): List[Int] = xs.filter(x => x >= lowerBound) The function literal x => x >= lowerBound contains the free variable lowerBound, which is closed (bound) by the argument of the function filterList that has the same name. The closure is passed to the library method filter, which can invoke it repeatedly as a normal function. I have been reading a lot of questions and answers on this site and, as far as I understand, the term closure is often automatically associated with functional programming and functional programming style. The definition of functional programming on Wikipedia reads: In computer science, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. and further on [...] in functional code, the output value of a function depends only on the arguments that are input to the function [...]. Eliminating side effects can make it much easier to understand and predict the behavior of a program, which is one of the key motivations for the development of functional programming. On the other hand, many closure constructs provided by programming languages allow a closure to capture non-local variables and change them when the closure is invoked, thus producing a side effect on the environment in which they were defined. In this case, closures implement the first idea of functional programming (functions are first-class entities that can be moved around like other values) but neglect the second idea (avoiding side-effects). Is this use of closures with side effects considered functional style, or are closures considered a more general construct that can be used both for a functional and a non-functional programming style? Is there any literature on this topic? IMPORTANT NOTE I am not questioning the usefulness of side-effects or of having closures with side effects. Also, I am not interested in a discussion about the advantages / disadvantages of closures with or without side effects. I am only interested in knowing whether using such closures is still considered functional style by the proponents of functional programming or if, on the contrary, their use is discouraged when using a functional style.
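    To make the contrast concrete, here is a small Python sketch (my own illustration; the question's example is Scala) of a pure closure next to one that mutates a captured variable:

        # A pure closure: the result depends only on its arguments and the
        # captured (but never modified) lower_bound.
        def filter_list(xs, lower_bound):
            return [x for x in xs if x >= lower_bound]

        # A closure with a side effect: it captures `count` from the enclosing
        # scope and mutates it every time it is invoked.
        def make_counting_filter(lower_bound):
            count = 0
            def keep(x):
                nonlocal count
                count += 1                      # side effect on the environment
                return x >= lower_bound
            def calls_so_far():
                return count
            return keep, calls_so_far

        keep, calls_so_far = make_counting_filter(3)
        print([x for x in [1, 4, 2, 5] if keep(x)])   # [4, 5]
        print(calls_so_far())                         # 4 -- an observable state change

    The first matches the Wikipedia definition quoted above; the second uses exactly the same closure mechanism but is no longer free of side effects, which is precisely the tension the question is about.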

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in programming regex, that should be core\.\d+). I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX) but there's this block of readable text which says: This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values. Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below. Scan not valid for mh mailboxes Bogus character 0x%x in news state Can't rewrite news state %.80s Error closing backup news state %.80s No state for newsgroup %.80s found Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much---only for a vanity email address and an inbox for an outdated comments system. However, lately, I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha but every now and then some get through. My vanity email has a spam filter but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike. Some final details: my only server-side scripts are in PHP. The directory which accumulated the largest number of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files and, after some checking, nothing seems amiss in my websites or in my mail. They were indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected.

    Read the article

  • How to Send the Contents of the Clipboard to a Text File via the Send to Menu

    - by Jason Faulkner
    We have previously covered how to send the contents of a text file to the Windows Clipboard with a simple Send To shortcut, but what if you want to do the opposite? That is: send the contents of the clipboard to a text file with a simple shortcut. No problem. Here’s how. Copy the ClipOut Utility While Windows offers the command line tool ‘clip’ as a way to direct console output to the clipboard, it does not have a tool to direct the clipboard contents to the console. To do this, we are going to use a small utility named ClipOut (download link at the bottom). Simply download and extract this file to a location in your Windows PATH variable (if you don’t know what this means, just extract the EXE to your C:\Windows folder) and you are ready to go. Add the Send To Shortcut Open your Send To folder location by going to Run > shell:sendto Create a new shortcut with the command: CMD /C ClipOut > Note the above command will overwrite the contents of the selected file. If you would like to append to the contents of the selected file, use this command instead: CMD /C ClipOut >> Of course, you could make shortcuts for both. Give a descriptive name to the shortcut. You’re finished. Using this shortcut will now send the text contents copied to your Windows Clipboard to the selected file. It is important to note that the ClipOut tool only supports outputting text. If you had binary data copied to your clipboard, then the output would be empty. Changing the Icon By default, the icon for the shortcut will appear as a command prompt, but you can easily change this by editing the properties of the shortcut and clicking the Change Icon button. We used an icon located in “%SystemRoot%\System32\shell32.dll”, but any icon of your liking will do. As an additional tweak, you can set the properties of the shortcut to run minimized. This will prevent the command window from “blinking” when the send to command is run (instead it will blink in your taskbar, which is hardly noticeable). Links Download ClipOut Utility     
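    As a rough illustration of what such a shortcut does, the clipboard-to-file step can also be sketched in Python using tkinter's clipboard access (this is an assumed alternative for illustration, not the ClipOut tool the article uses, and the script name is made up):

        # clip_to_file.py -- append the text contents of the clipboard to a file.
        # Usage: python clip_to_file.py C:\path\to\notes.txt
        import sys
        import tkinter

        def clipboard_text():
            root = tkinter.Tk()
            root.withdraw()                 # no window needed, only the clipboard
            try:
                return root.clipboard_get() # raises TclError for non-text data
            except tkinter.TclError:
                return ""                   # like ClipOut: binary clipboard -> empty
            finally:
                root.destroy()

        if __name__ == "__main__":
            with open(sys.argv[1], "a", encoding="utf-8") as f:
                f.write(clipboard_text())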

    Read the article

  • Where does a "Technical Programmer" fit in, and what does the title mean? [closed]

    - by Mike E
    Was: "What is a 'Technical Programmer'"? I've noticed in job posting boards a few postings, all from European companies in the games industry, for a "Technical Programmer". The job description was similar, having to do with tools development, 3d graphics programming, etc. It seems to be somewhere between a Technical Artist who's more technical than artist or who can code, and a Technical Director but perhaps without the seniority/experience. Information elsewhere on the position is sparse. The title seems redundant and I haven't seen any American companies post jobs by that name, exactly. One example is this job posting on gamedev.net which isn't exactly thorough. In case the link dies: Subject: Technical Programmer Frictional Games, the creators of Amnesia: The Dark Descent and the Penumbra series, are looking for a talented programmer to join the company! You will be working for a small team with a big focus on finding new and innovating solutions. We want you who are not afraid to explore uncharted territory and constantly learn new things. Self-discipline and independence are also important traits as all work will be done from home. Some the things you will work with include: 3D math, rendering, shaders and everything else related. Console development (most likely Xbox 360). Hardware implementations (support for motion controls, etc). All coding is in C++, so great skills in that is imperative. Revised Summarised Question: So, where does a programmer of this nature fit in to software development team? If I had these on my team, what tasks am I expecting them to complete? Can I ask one to build a new level editor, or optimize the rendering engine? It doesn't seem to be a "tools programmer" which focuses on producing artist tools, often in high-level languages like C#, Python, or Java. Nor does it seem to be working directly on the engine, nor a graphics programmer, as such. Yet, a strong C++ requirement, which was mirrored in other postings besides this one I quoted. Edited To Add As far as it being a low-level programmer, I had considered that but lacking from the posting was a requirement of Assembly. Instead, they tend to require familiarity with higher-level hardware APIs such as DirectX, or DirectInput. I wasn't fully clear in my original post. I think, however, that Mathew Foscarini has it right in his answer, so barring someone who definitely works with or as a "Technical Programmer" stepping in to provide a clearer explanation, I'll go with that. A generalist, which also fits the description of a more-technical-than-artist TA.

    Read the article

  • How do you conquer the challenge of designing for large screen real-estate?

    - by Berin Loritsch
    This question is a bit more subjective, but I'm hoping to get some new perspective. I'm so used to designing for a certain screen size (typically 1024x768) that I find that size to not be a problem. Expanding the size to 1280x1024 doesn't buy you enough screen real estate to make an appreciable difference, but will give me a little more breathing room. Basically, I just expand my "grid size" and the same basic design for the slightly smaller screen still works. However, in the last couple of projects my clients were all using 1080p (1920x1080) screens and they wanted solutions to use as much of that real estate as possible. 1920 pixels across provides just under twice the width I am used to, and the wide screen makes some of my old go-to design approaches not work as well. The problem I'm running into is that when presented with so much space, I'm confronted with some major problems. How many columns should I use? The wide format lends itself to a 3-column layout with a 2:1:1 split (i.e. the content column bigger than the other two). However, if I go with three columns, what do I do with that extra column? How do I make efficient use of the screen real estate? There's a temptation to put everything on the screen at once, but too much information actually makes the application harder to use. White space is important to help make sense of complex information, but too much makes related concepts look too separate. I'm usually working with web applications that have complex data, and visualization and presentation are key to making sense of the raw data. When your user also has a large screen (at least 24"), some information is out of eyesight and you need to move the pointer a long distance. How do you make sure everything that's needed stays within the visual hot points? Simple sites like blogs actually do better when the width is constrained, which results in a lot of wasted real estate. I kind of wonder if having the text box and the text preview side by side would be a big benefit for the admin side of that type of screen (a 1:1 two-column split)? For your answers, I know almost everything in design is "it depends". What I'm looking for is: the general principles you use, and how your approach to design has changed. I'm finding that I have to retrain myself in how to work with this different format. Every bump in resolution I've worked through to date has been about 25%: 640 to 800 (25% increase), 800 to 1024 (28% increase), and 1024 to 1280 (25% increase). However, the jump from 1280 to 1920 is a good 50% increase in space--the equivalent of jumping from 640 straight to 1024. There was no commonly used middle size to help learn lessons more gradually.

    Read the article

  • Separate Action from Assertion in Unit Tests

    - by DigitalMoss
    Setup Many years ago I took to a style of unit testing that I have come to like a lot. In short, it uses a base class to separate out the Arrangement, Action and Assertion of the test into separate method calls. You do this by defining method calls in [Setup]/[TestInitialize] that will be called before each test run. [Setup] public void Setup() { before_each(); //arrangement because(); //action } This base class usually includes the [TearDown] call as well for when you are using this setup for Integration tests. [TearDown] public void Cleanup() { after_each(); } This often breaks out into a structure where the test classes inherit from a series of Given classes that put together the setup (i.e. GivenFoo : GivenBar : WhenDoingBazz) with the Assertions being one line tests with a descriptive name of what they are covering [Test] public void ThenBuzzSouldBeTrue() { Assert.IsTrue(result.Buzz); } The Problem There are very few tests that wrap around a single action so you end up with lots of classes so recently I have taken to defining the action in a series of methods within the test class itself: [Test] public void ThenBuzzSouldBeTrue() { because_an_action_was_taken(); Assert.IsTrue(result.Buzz); } private void because_an_action_was_taken() { //perform action here } This results in several "action" methods within the test class but allows grouping of similar tests (i.e. class == WhenTestingDifferentWaysToSetBuzz) The Question Does someone else have a better way of separating out the three 'A's of testing? Readability of tests is important to me so I would prefer that, when a test fails, that the very naming structure of the tests communicate what has failed. If someone can read the Inheritance structure of the tests and have a good idea why the test might be failing then I feel it adds a lot of value to the tests (i.e. GivenClient : GivenUser : WhenModifyingUserPermissions : ThenReadAccessShouldBeTrue). I am aware of Acceptance Testing but this is more on a Unit (or series of units) level with boundary layers mocked. EDIT : My question is asking if there is an event or other method for executing a block of code before individual tests (something that could be applied to specific sets of tests without it being applied to all tests within a class like [Setup] currently does. Barring the existence of this event, which I am fairly certain doesn't exist, is there another method for accomplishing the same thing? Using [Setup] for every case presents a problem either way you go. Something like [Action("Category")] (a setup method that applied to specific tests within the class) would be nice but I can't find any way of doing this.
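    For comparison only (pytest rather than NUnit, so treat this as a sketch of the idea rather than an answer for the asker's framework), fixtures let each "because" action apply to just the tests that request it, which is the per-test-group behaviour being asked about:

        # Keeping Arrange/Act out of the assertions: each "because_*" action is
        # a fixture, and a test opts in simply by naming the fixture it needs.
        import pytest

        class Buzzer:
            def __init__(self):
                self.buzz = False
            def set_buzz(self):
                self.buzz = True

        @pytest.fixture
        def given_a_buzzer():
            return Buzzer()                      # the Arrange step

        @pytest.fixture
        def because_an_action_was_taken(given_a_buzzer):
            given_a_buzzer.set_buzz()            # the Act step
            return given_a_buzzer

        def test_then_buzz_should_be_true(because_an_action_was_taken):
            assert because_an_action_was_taken.buzz is True

        def test_then_a_fresh_buzzer_is_off(given_a_buzzer):
            assert given_a_buzzer.buzz is False  # this test skips the action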

    Read the article

  • Movement prediction for non-shooters

    - by ShadowChaser
    I'm working on an isometric (2D) game with moderate-scale multiplayer - 20-30 players. I've had some difficulty getting a good movement prediction implementation in place. Right now, clients are authoritative for their own position. The server performs validation and broad-scale cheat detection, and I fully realize that the system will never be fully robust against cheating. However, the performance and implementation tradeoffs work well for me right now. Given that I'm dealing with sprite graphics, the game has 8 defined directions rather than free movement. Whenever the player changes their direction or speed (walk, run, stop), a "true" 3D velocity is set on the entity and a packet is sent to the server with the new movement state. In addition, every 250ms additional packets are transmitted with the player's current position for state updates on the server as well as for client prediction. After the server validates the packet, it gets automatically distributed to all of the other "nearby" players. Client-side, all entities with non-zero velocity (i.e. moving entities) are tracked and updated by a rudimentary "physics" system - basically nothing more than changing the position by the velocity according to the elapsed time slice (40ms or so). What I'm struggling with is how to implement clean movement prediction. I have the nagging suspicion that I've made a design mistake somewhere. I've been over the Unreal, Half-Life, and all other movement prediction/lag compensation articles I could find, but they all seem geared toward shooters: "Don't send each control change, send updates every 120ms, server is authoritative, client predicts, etc". Unfortunately, that style of design won't work well for me - there's no 3D environment, so each individual state change is important. 1) Most of the samples I saw tightly couple movement prediction right into the entities themselves. For example, storing the previous state along with the current state. I'd like to avoid that and keep entities with their "current state" only. Is there a better way to handle this? 2) What should happen when the player stops? I can't interpolate to the correct position, since they might need to walk backwards or in another strange direction if their position is too far ahead. 3) What should happen when entities collide? If the current player collides with something, the answer is simple - just stop the player from moving. But what happens if two entities take up the same space on the server? What if the local prediction causes a remote entity to collide with the player or another entity - do I stop them as well? If the prediction had the misfortune of sticking them in front of a wall that the player has gone around, the prediction will never be able to compensate, and once the error gets too high the entity will snap to the new position.
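    One pattern that fits the "current state only" constraint in (1) is to keep a small accumulated error term per remote entity and bleed it off over a few frames instead of snapping. A rough Python sketch (my own, not taken from any of the engines or articles mentioned):

        # Dead-reckon remote entities from their last reported state, and fold
        # corrections in gradually rather than teleporting them.
        from dataclasses import dataclass

        @dataclass
        class RemoteEntity:
            x: float = 0.0
            y: float = 0.0
            vx: float = 0.0
            vy: float = 0.0
            err_x: float = 0.0   # difference between shown and true position
            err_y: float = 0.0

            def on_state_packet(self, x, y, vx, vy):
                # Remember how far off we were and drain that error over the
                # next frames, so the rendered position stays continuous.
                self.err_x += self.x - x
                self.err_y += self.y - y
                self.x, self.y, self.vx, self.vy = x, y, vx, vy

            def advance(self, dt, smoothing=5.0):
                self.x += self.vx * dt
                self.y += self.vy * dt
                decay = max(0.0, 1.0 - smoothing * dt)
                self.err_x *= decay
                self.err_y *= decay

            def render_position(self):
                return self.x + self.err_x, self.y + self.err_y

    When a state packet arrives the rendered position does not jump, and stopping (question 2) simply means the velocity becomes zero while the remaining error drains away over the following frames.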

    Read the article

  • What HTML and CSS markup is best for SEO for a list of questions (like on Stack Exchange sites)

    - by Oleg9
    On StackOverflow, a question block (in the question list on the index page and so on) is represented by the following HTML code: <div class="question-summary narrow tagged-interesting" id="question-summary-19832613"> <div onclick="window.location.href='/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api'" class="cp"> <div class="votes"> <div class="mini-counts">0</div> <div>votes</div> </div> <div class="status unanswered"> <div class="mini-counts">0</div> <div>answers</div> </div> <div class="views"> <div class="mini-counts">3</div> <div>views</div> </div> </div> <div class="summary"> <h3>...</h3> <div class="tags t-javascript t-google-maps t-google t-google-maps-api-3"> </div> <div class="started"> <a href="/questions/19832613/how-to-display-only-transit-routesfor-trains-in-google-maps-api" class="started-link"><span title="2013-11-07 09:52:29Z" class="relativetime">1 min ago</span></a> <a href="/users/1309392/shirish">Shirish</a> <span class="reputation-score" title="reputation score " dir="ltr">189</span> </div> </div> </div> It uses float positioning. My questions are: Would the use of CSS-styled tables be a better choice? (It's a table, isn't it?) Or does it just depend on what you prefer to use, without affecting the technical side (search engines or something)? The background information (such as the number of views, votes, etc.) comes first in the code, and I know that search engines have a limit on how much of each page they view. So would it be better to place the divs depending on their importance and then lay them out on the page using CSS methods (like negative margins and absolute positioning)? Or isn't it so important in this instance?

    Read the article

  • Understanding the SQL Server 2008 R2 Installation Center

    - by Enrique Lima
    What is available to us through those links? Have you taken the time to explore and identify what could be useful to you? One of many gems that has come to my attention is the possibility of provisioning SQL Server to work in an image-based environment (hint: Virtualization Template perhaps?!?). Planning: Includes requirements information, documentation, how-to guides, online documentation, installation and other tools. Among the other tools you will find the System Configuration Checker and the Upgrade Advisor. Both tools are very important to ensure your deployment and installation will be successful. Installation: This section focuses on getting installations going, from standalone to cluster when it comes to new instances. Add new nodes to an existing cluster, and also perform upgrades (in this case to SQL Server 2008 R2). Also part of this is the option to find available updates. Maintenance: We find in this section options that will assist us in tasks ranging from repairing corrupt installations to removing nodes from a cluster. An option that is interesting (and we should discuss its benefits later in another post) is being able to do an Edition Upgrade; this is a feature expansion and addition based on your product installation (Developer to Enterprise, for example). Tools: From the System Configuration Checker, to identify readiness for deployment in a successful manner, to being able to report on installed features, and being able to run upgrades of existing packages developed in the 2005 offering to the 2008 R2 release for SSIS. Resources: Useful and essential links to gather information and guidance. Advanced: Here is where it gets interesting. I break this down into 3 main groups: Installation Automation: When you install SQL Server there is a configuration file that gets dropped (ConfigurationFile.ini) that allows you to perform automated installations. There are switches and options that go with this to get that process working. Cluster configuration for Sysprep: Create images that are cluster ready; 2 options, start the prep work and then complete it at the final destination. Stand-alone configuration for Sysprep: Like the clustering counterpart, 2 options, prep and complete, giving you the option to create standard templates for your SQL Server deployments. I find it fitting that the 3 topics listed here should (and will) be additional topics I will discuss. Options: Very clear and specific about what this means. Select the Processor Type or the Installation Media Root Path.

    Read the article

  • JCP.next.3: time to get to work

    - by Patrick Curran
    As I've previously reported in this blog, we planned three JSRs to improve the JCP’s processes and to meet our members’ expectations for change. The first - JCP.next.1, or more formally JSR 348: Towards a new version of the Java Community Process - was completed in October 2011. This focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. However, because we wanted to complete this JSR quickly we deliberately postponed a number of more complex items, including everything that would require modifying the JSPA (the legal agreement that members sign when they join the organization) to a follow-on JSR. The second JSR (JSR 355: JCP Executive Committee Merge) is in progress now and will complete later this year. This JSR is even simpler than the first, and is focused solely on merging the two Executive Committees into one for greater efficiency and to encourage synergies between the Java ME and Java SE platforms. Continuing the momentum to move Java and the JCP forward we have just filed the third JSR (JCP.next.3) as JSR 358: A major revision of the Java Community Process. This JSR will modify the JSPA as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons we expect to spend a considerable amount of time working on it - at least a year, and probably more. The current version of the JSPA was created back in 2002, although some minor changes were introduced in 2005. Since then the organization and the environment in which we operate have changed significantly, and it is now time to revise our processes to ensure that they meet our current needs. We have a long list of topics to be considered, including the role of independent implementations (those not derived from the Reference Implementation), licensing and open source, ensuring that our new transparency requirements are implemented correctly, compatibility policy and TCKs, the role of individual members, patent policy, and IP flow. The Expert Group for JSR 358, as with all process-change JSRs, consists of all members of the Executive Committees. Even though the JSR has just been filed we started discussions on the various topics several months ago (see the EC's meeting minutes for details) and our EC members - including the new members who joined within the last year or two - are actively engaged. Now it's your opportunity to get involved. As required by version 2.8 of our Process (introduced with JSR 348) we will conduct all our business in the open. We have a public java.net project where you can follow and participate in our work. All of our deliberations will be copied to a public Observer mailing list, we'll track our issues on a public Issue Tracker, and all our documents (meeting agendas and minutes, task lists, working drafts) will be published in our Document Archive. We're just getting started, but we do want your input. Please visit us on java.net where you can learn how to participate. Let's get to work...

    Read the article

  • WebLogic Partner Community Newsletter September 2012

    - by JuergenKress
    Dear WebLogic partner community member Happy Birthday to our WebLogic partner Community! We launched the community a year ago, and it is growing fast, with almost 1,000 members and a significant impact on our business. WebLogic partner revenue grew significantly last fiscal year. I would like to thank you for your contribution. It is indeed a great opportunity for your WebLogic service revenue, like consulting, implementation or training. There will be thousands of opportunities at our joint customer base, like iAS to WebLogic migration, J2EE platform consolidation or private clouds. We will continue to highlight these opportunities in our newsletter and offer you campaign kits. Please feel free to let us know if you are interested. I would also recommend that you give us your feedback in our WebLogic Partner Community Survey 2012! Your feedback is very important for us. We continue to offer free WebLogic 12c Bootcamps across Europe. Please make sure you register ASAP for your local training! In addition to this, we plan to offer an Exalogic 2.01 Bootcamp. If you are interested in attending, please add your details to our wiki. Our ExaLogic kit is updated with ExaLogic 2.01 ppt & training & installation check-list & tips & Web tier roadmap. In case you want to learn more about ExaLogic, please visit the Qualogy virtual demo center. Not only have we released the latest version of Tuxedo 12c, but Andrejus has also made a Performance Audit Tool - Runtime Diagnosis for ADF Applications, which is available now. We uploaded the latest WebLogic 12c and Glassfish ppt presentations for your customer meetings to the WebLogic Community Workspace (WebLogic Community membership required). Are you ready and prepared for Oracle Open World 2012? Make sure you read our tips and enjoy the conference! The WebLogic Server 11gR1 Interactive Quick Reference is a wonderful online overview. Make sure you do not miss it! If you want to try WebLogic, why not do so in the Oracle Cloud - Java Cloud Service. Our Java guru Adam Bien published a new book, Real World Java EE Patterns. If you use Java on your machine, please make sure that you update your Java SE. Jürgen Kress Oracle WebLogic Partner Adoption EMEA To read the newsletter please visit http://tinyurl.com/WebLogicnewsSeptember2012 (OPN Account required) To become a member of the WebLogic Partner Community please register at http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic Community newsletter,newsletter,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with business logic? This needs refactoring... so does this, refactor, refactorrr". This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seems exactly the attitude many game devs use when writing production game code: Mistake #4: Building a system, not a game ...if you ever find yourself working on something that isn’t directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and be able to handle every situation. We find that itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it’s not about the code, it’s about the game you ship in the end. Don’t write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don’t ever, ever, use the argument “if we take some extra time and do this the right way, we can reuse it in the game”. EVER. Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily in the view, and thus any benefits from moving stuff out to models/controllers are fairly minimal, so why bother? I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation, and there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc.). Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. those operating within physics simulations). Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?
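    For what it's worth, the lightest version of the separation being discussed is just keeping the simulation step free of any drawing. A tiny hypothetical sketch:

        # A minimal MVC-ish split in a game loop: the model updates without
        # knowing how it is drawn, and the view only reads model state.
        class World:                          # model
            def __init__(self):
                self.player_x = 0.0
            def update(self, dt, input_dir):
                self.player_x += 120.0 * input_dir * dt

        def render(world):                    # view
            print(f"player at x={world.player_x:.1f}")

        def game_loop(frames=3, dt=1 / 60):
            world = World()
            for _ in range(frames):           # controller: input, step, draw
                world.update(dt, input_dir=1)
                render(world)

        game_loop()

    The update method is the part that is cheap to unit test, which is where TDD tends to pay off in games even if the rendering itself never gets a test.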

    Read the article

  • Are You Meeting Social Customer Service Expectations?

    - by Mike Stiles
    Whether it’s B2B or B2C, one sure path to repeat business is making sure your buyer has a memorably pleasant and successful customer service experience with you. If they get that kind of treatment consistently, that’s called a relationship. And those aren’t broken easily. Social customer service, driven by integrated SRM (social relationship management) technology, is the venue that can effectively connect customers not only to the brand, but to other customers. Positive experiences, once administered, don’t just rest with the recipient. They’re published in the form of public raves and peer-to-peer recommendation, a force far more actionable than push advertising. What’s more, your customers have come to expect access to you and satisfaction from you using social. An NM Incite study shows 83% of Twitter users and 71% of Facebook users expect to get an answer from brands the same day they post to them on their social assets. To make sure you’re responding, you’ve got to have a tech platform that’s set up to moderate and alert so you’ll know ASAP a customer needs help. The more integrated your social enterprise is, the faster you can not only respond, but respond with the answer they’re looking for, because your system is connected to the internal resources that can surface the answer or put wheels in motion to rectify the situation in the shortest amount of time possible. But if you go to the necessary lengths to make sure your customers feel valued and important, will they really reward you? The study says 71% of consumers who got quick and effective responses from companies they contacted via social were more likely to recommend the brand to their friends and followers. So yes, sweeping people off their feet pays big dividends in terms of word-of-mouth marketing. But you should be keenly aware of the reverse side of that coin. Give people a negative experience, either in real world or virtual customer service, and that message is highly likely to get amplified through social channels faster and louder. Only 36% of the NM Incite study’s respondents reported that their problems were solved quickly and effectively. 36%? That’s hardly an impressive number. It gets worse. 10% never got so much as a response - at all. Going back to the relationship analogy, companies that are this deep in the ditch where customer service is concerned are making their girl or boyfriends really easy for a competitor to steal. Given the technology tools and data available right now for having an intimate knowledge of the customer, what products they’ve purchased, likely problems with those products, effective resolutions to those problems, and follow-up communication to gauge satisfaction, there are fewer excuses than ever for making the lifeblood of your business feel like you couldn’t care less. @mikestiles

    Read the article

  • Organization & Architecture UNISA Studies – Chap 13

    - by MarkPearl
    Learning Outcomes Explain the advantages of using a large number of registers Discuss the way in which compilers optimize register usage Discuss the evolution of CISC machines Describe the characteristics of RISC architecture Discuss the RISC vs. CISC controversy Describe the way in which RISC and CISC design principles can be combined Instruction Execution Characteristics To understand the line of reasoning of RISC advocates, we need a brief overview of instruction execution characteristics. These include… Operations Operands Procedure Calls These three sections can be studied in depth in the textbook at pages 503 - 505 A number of groups have come to the conclusion that the attempt to make the instruction set architecture closer to HLLs (High Level Languages) is not the most effective design strategy. Rather, HLLs can best be supported by optimizing the performance of the most time-consuming features of typical HLL programs. Generally, 3 main characteristics came up to improve performance… Use a large number of registers or use a compiler to optimize register usage Careful attention needs to be paid to the design of instruction pipelines A simplified (reduced) instruction set is indicated The use of a large register file One of the most important design principles of RISC machines is the use of a large number of registers. The concept of register windows and the use of a large register file versus the use of cache memory are discussed. On the face of it, the use of a large set of registers should decrease the need to access memory. The design task is to organize the registers in such a fashion that this goal is realized. Read pages 507 - 510 for a detailed explanation. Compiler-based register optimization Reduced Instruction Set Architecture There are two advantages to smaller programs… Because the program takes up less memory, there is a saving in that resource (this was more compelling when memory was more expensive) Smaller programs should improve performance, and this will happen in two ways – fewer instructions means fewer instruction bytes to be fetched, and in a paging environment smaller programs occupy fewer pages, reducing page faults. Certain characteristics are common to RISC processors… One instruction per cycle Register-to-register operations Simple addressing modes Simple instruction formats RISC vs. CISC After initial enthusiasm for RISC machines, there has been a growing realization that RISC designs may benefit from the inclusion of some CISC features, and that CISC designs may benefit from the inclusion of some RISC features

    Read the article

  • How to deal with a fellow programmer who likes to delegate tasks, with a lack of any support from the boss [closed]

    - by Rudy
    I have a problem with a fellow programmer. We are currently working together on a small project that needs to be shipped every 2 weeks. She has a tendency to ask for help with every issue that she is facing, whether it's a compile error, an algorithm problem or even a sync/merge issue that she caused herself. She does not even bother to check Google or try to find out by herself. I can be asked to help her 5-10 times a day. Every day her husband keeps calling (4-6 times a day), and most of the code that she has delivered is actually incorrect. Today she framed me for sending the wrong delivery product. She went home after lunch on the delivery day without telling the PM or the other team members, and the code she committed does not work at all. It's not even tested. I had no choice but to roll back her code and clean it up just to be able to run the product. I have warned her about her defective code for almost 3 iterations. She said that when she is not around I should be able to test her module for her. I snapped and yelled that I am not her slave, and reported it directly to my boss. However, my boss is not a person who can manage, or who cares about software quality. The most important thing to my boss is delivery of the product, whether it is tested or not. He can even ask us to deliver something to the client that has not been tested by QA at all, on the next day. Most of our suggestions are not followed by him. He even asked me to apologize to her because I snapped. I am tired of the whole situation. This kind of thing keeps repeating. I do have savings to be able to survive for 6 months, and the idea of resigning keeps haunting me. There is nothing else that can be learned in my current job, and I have been in better environments than this. What should I do about the situation?

    Read the article

  • Hierarchies... from the Top Down

    - by Joe G
    I've been struggling with how to write on the topic of the importance of hierarchy design. It's not so much that hierarchies haven't always been important, it's more that with Fusion, the timing of when the hierarchies are designed should take a higher priority. I will attempt to explain..... When I was implementing applications, back in the day, we had the list of detailed account values to enter, with the obvious parent accounts. Then, after the setup was complete and things were functioning, the reporting phase started. Users explained the elements that they wanted on the reports, what totals should be included, and how things should be compared. Frequently, there was at least one calculation that became a nightmare, either because it was based on very specific things that didn't relate to anything else or because it was "hardcoded" so that when something changed, someone needed to "fix" the report. With Fusion, the process changes slightly. You still want to enter all of the detailed accounts, but before you start adding parent values, you should investigate the reporting requirements from the top down. It's better to build hierarchies based on the reporting requirements than it is to build reports based on random hierarchies. Build reports based on hierarchies that resemble the reports themselves, and maintain the hierarchies without rework of the reports. For example, if you look at an income statement, you may have line items for Material Costs, Employee Costs, Travel & Entertainment, and Total Operating Expenses. In your hierarchy, you have detail values that roll up to Material Costs, Employee Costs, and Travel & Entertainment, which roll up to Total Operating Expenses. Balances are stored automatically in the cube for each of these. When you define the report, you pick each of these members - no calculations required. If a new detail value is added, you simply add it to the hierarchy, and there is no need to modify the report. I realize that there are always exceptions that require special handling, but I am confident that you will end up with far fewer exceptions if you make reporting a priority and design your hierarchies from the top down.
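    As a toy illustration of the idea (not Fusion itself, just made-up numbers), once the rollup is defined as data the income-statement lines fall straight out of the hierarchy with no hardcoded calculations:

        # Detail accounts carry balances; parents are defined only by the
        # hierarchy, so adding a new detail account never touches the report.
        BALANCES = {"Steel": 120, "Plastic": 30, "Salaries": 400, "Airfare": 55}

        HIERARCHY = {
            "Total Operating Expenses": ["Material Costs", "Employee Costs",
                                         "Travel & Entertainment"],
            "Material Costs": ["Steel", "Plastic"],
            "Employee Costs": ["Salaries"],
            "Travel & Entertainment": ["Airfare"],
        }

        def balance(member):
            if member in BALANCES:                       # detail value
                return BALANCES[member]
            return sum(balance(child) for child in HIERARCHY.get(member, []))

        for line in ["Material Costs", "Employee Costs",
                     "Travel & Entertainment", "Total Operating Expenses"]:
            print(f"{line:<26} {balance(line):>6}")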

    Read the article

  • NVIDIA Additional Drivers Empty - maximum resolution 640x480 - Driver disappears

    - by Hannibal
    EDIT: Optimus card. For resolution please read this thread: "You do not appear to be using the nvidia x server" (screenshot included). And this: Ubuntu 11.10 problem with Nvidia. Thanks! I know, I know, yet another NVIDIA question. So I did all the research. I uninstalled and reinstalled nvidia-settings and the drivers and nvidia-current from the PPA repositories, which are the most up-to-date ones. I executed nvidia-xconfig. I have two major problems. One: the Additional Drivers setting is empty! It doesn't contain any driver although one is installed. I have executed apt-get update too, but still the list is empty. Two: if I execute nvidia-xconfig it will properly configure an xorg.conf file. I restart, but the maximum resolution I get is 640x480. I tried xrandr but I can't add any resolution to display LVDS1; some weird error occurs. So I can't add a proprietary driver and I can't boot in with the xorg file created by Nvidia... What can I do? With some work (uninstalling nvidia-current and installing libgl1-mesa-glx) I was able to activate some kind of usage of my card, because the resolution got better... and I added Bumblebee too, because I have multiple video cards... but still the list is empty. I don't know what to do at this point! Also, this is the most important part: when I first installed my Ubuntu 11.10 yesterday I saw the driver!! The driver was there... And then I of course updated every package from the internet... and after that it was gone. And I can't bring it back. So there must be something wrong with one of the updates. But which? Thanks for any extra info you can provide! I'm really desperate to solve this issue.

    Read the article

  • The type of programmer I want to be [closed]

    - by Aventinus_
    I'm an undergraduate Software Engineering student, although I've decided that pure programming is what I want to do for the rest of my life. The thing is that programming is a vast field and, although most of its aspects are extremely interesting, sooner or later I'll have to choose one (?) to focus on. I have several ideas for small projects I'd like to develop this summer, bearing in mind that this will gain me some experience and, in the best-case scenario, some cash. But the most important reason I'd like to develop something close to “professional” is to give myself direction on what I want to do as a programmer. One path is that of the Web Programmer. I enjoy PHP and MySQL, as well as HTML and CSS, although I don't really like ASP.NET. I can see myself writing web apps, using the above technologies, as well as XML and Javascript. I also have a neat idea for a Facebook app. The other path is that of the Desktop Programmer. This is a little more complicated because I really, really enjoy high-level languages such as Java and Python but not the low-level ones, such as C. I have used both Linux and Windows for the last 6 years and I like their latest DEs (meaning Gnome Shell and Metro). I can see myself writing desktop applications for both OSs as long as it means high-level programming. Ideally I'd like to be able to help with the development of GNOME. The last path that interests me is the path of the Smartphone Programmer. I have created some sample applications on Android and, thanks to Java, I found it quite an interesting experience. I can also see myself as an independent smartphone developer. These 3 paths seem equally interesting at the moment due to the shallowness of my experience, I guess. I know that I should spend time with all of them and then choose the right one for me, but I'd like to know what the pros and cons are in terms of learning curve, fun, job finding and, of course, financial rewards with each of these paths. I have a fair or basic understanding of the languages/technologies I described earlier, and this question will help me choose where to focus, at least for now.

    Read the article

  • How to be successful at BDD Specifications Workshops?

    - by sigo
    Today we tried to introduce BDD into our software development process by having a specification workshop. For this workshop we had 2 developers, 1 tester and 1 business analyst. The workshop lasted 1h30 and by the end of it we had managed to figure out some BDD scenarios for our new feature. We tried to focus on finding the scenarios that we could miss, and the difficult ones. At the end of the workshop some people were actually unhappy with it. One developer felt he had wasted his time, as he was used to being given the scenarios directly by the business analyst and reviewing them with her. The business analyst didn't feel confident about our scenario coverage (she had a feeling that we could have missed other important stuff) but, more importantly, felt that this workshop was also a waste of time, as she could have figured out all these scenarios by herself, and in a shorter period of time. So my question is how that kind of workshop can actually work. In theory, given you have a new feature to develop, you put the three 'amigos' (dev/tester/BA) in the same room so that they can collaborate on writing the different requirements for the new feature using examples. I can see all the benefits of that, especially in terms of knowledge sharing and a common product/end goal/done vision. But in practice, we still think it is more cost effective to first have the BA work on her own on the examples and only then have the scenarios reviewed/reworked by the three 'amigos'. By having the BA work on her own, we actually feel more confident that we are less likely to miss stuff, plus we still get to review the scenarios afterwards to double check. We don't think that simple brainstorming/deliberate discovery is actually enough to seriously cover all the requirements for a feature. The business analyst is actually the best person for that kind of stuff. The thing we do is simply review what she wrote and see if we have a common understanding (which could then lead to rewriting some of her scenarios or adding new ones she could have missed). This workshop lasted 1h30, and by the end of it we didn't feel confident enough about what we did... sure, we could have spent more time on it, but honestly most people get exhausted after 1h30 of brainstorming. So how can you get that to work effectively in practice?

    Read the article

  • Building a Store Locator ASP.NET Application Using Google Maps API

    The past couple of projects I've been working on have included the use of the Google Maps API and geocoding service in websites for various reasons. I decided to tie together some of the lessons learned, build an ASP.NET store locator demo, and write about it on 4Guys. Last week I published the first article in what I think will be a three-part series: Building a Store Locator ASP.NET Application Using Google Maps (Part 1). Part 1 walks through creating a demo where a user can type in an address and any stores within a (roughly) 15 mile area will be displayed in a grid. The article begins with a look at the database used to power the store locator (namely, a single table that contains one row for every location, with each location storing its store number, address, and, most important, latitude and longitude coordinates) and then turns to using Google's geocoding service to translate a user-entered address into latitude and longitude coordinates. The latitude and longitude coordinates are used to find nearby stores, which are then displayed in a grid. Part 2 looks at enhancing the search results to include a map with markers indicating the position of each nearby store location. The Google Maps API, along with a bit of client-side script and server-side logic, make this actually pretty straightforward and easy to implement. Here's a screen shot of the improved store locator results. Part 3, which I plan on publishing next week, looks at how to enhance the map by using information windows to display address information when clicking a marker. Additionally, I'll show how to use custom icons for the markers so that instead of having the same marker for each nearby location the markers will be images numbered 1, 2, 3, and so on, which will correspond to a number assigned to each search result in the grid. The idea here is that by numbering the search results in the grid and the markers on the map, visitors will quickly be able to see which marker corresponds to which search result. This article and demo have been a lot of fun to write and create, and I hope you enjoy reading it, too. Building a Store Locator ASP.NET Application Using Google Maps API (Part 1) Building a Store Locator ASP.NET Application Using Google Maps API (Part 2) Happy Programming!
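    The "stores within roughly 15 miles" step in Part 1 boils down to a great-circle distance check against the stored coordinates. The demo itself does this with SQL and ASP.NET; the following is only the underlying math, sketched in Python with made-up store rows:

        # Haversine distance between two lat/lng points, used to keep only the
        # stores within a given radius of the geocoded address.
        from math import asin, cos, radians, sin, sqrt

        EARTH_RADIUS_MILES = 3958.8

        def distance_miles(lat1, lng1, lat2, lng2):
            lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
            a = (sin((lat2 - lat1) / 2) ** 2
                 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2)
            return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

        # Example store rows: (store number, latitude, longitude) -- made-up data.
        STORES = [(101, 40.7580, -73.9855), (102, 40.6413, -73.7781), (103, 41.8781, -87.6298)]

        def stores_near(lat, lng, radius_miles=15):
            return [s for s in STORES
                    if distance_miles(lat, lng, s[1], s[2]) <= radius_miles]

        print(stores_near(40.7484, -73.9857))   # geocoded user address -> nearby stores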

    Read the article

  • Why doesn't Ubuntu detect my second hard drive?

    - by user93179
    I am new to Linux and to Ubuntu, and I was wondering: I have two hard drives set up on SATA ports (non-RAID, at least I don't think they are). I installed Ubuntu onto the drives fresh, without any previous versions or Windows at all. However, when I got Ubuntu 12.04 LTS working, all I see is 1 x 120 gigabyte hard drive. Also, not sure if this is important or not, my hard drives are SSDs. My computer specs are Asus P9Z77-V-LK Nvidia Geforce GTX 660 TI Intel i5 3570k 3.4 /proc/partitions shows: major minor #blocks name 8 0 117220824 sda 8 1 117219328 sda1 8 16 117220824 sdb 8 17 96256 sdb1 8 18 108780544 sdb2 8 19 8342528 sdb3 11 0 1048575 sr0 and ls -l /sys/block/ | grep -v /virtual/: lrwxrwxrwx 1 root root 0 Sep 27 17:26 sda -> ../devices/pci0000:00/0000:00:1f.2/host0/target0:0:0/0:0:0:0/block/sda lrwxrwxrwx 1 root root 0 Sep 27 17:26 sdb -> ../devices/pci0000:00/0000:00:1f.2/host1/target1:0:0/1:0:0:0/block/sdb lrwxrwxrwx 1 root root 0 Sep 27 22:26 sdc -> ../devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.1/1-1.1:1.0/host6/target6:0:0/6:0:0:0/block/sdc lrwxrwxrwx 1 root root 0 Sep 27 22:04 sr0 -> ../devices/pci0000:00/0000:00:1f.2/host3/target3:0:0/3:0:0:0/block/sr0 sudo file -s /dev/sd*: /dev/sda: x86 boot sector; partition 1: ID=0x7, starthead 32, startsector 2048, 234438656 sectors, code offset 0xc0, OEM-ID " ?", Bytes/sector 190, sectors/cluster 124, reserved sectors 191, FATs 6, root entries 185, sectors 64514 (volumes 32 MB) , physical drive 0x7e, dos 32 MB) , FAT (32 bit), sectors/FAT 749, reserved3 0x800000, serial number 0x35361a2b, unlabeled /dev/sdb2: Linux rev 1.0 ext4 filesystem data, UUID=387761ac-5eba-4d0f-93ba-746a82fb541d (needs journal recovery) (extents) (large files) (huge files) /dev/sdb3: data /dev/sdc: x86 boot sector; partition 1: ID=0xc, active, starthead 0, startsector 8064, 30473088 sectors, code offset 0xc0 /dev/sdc1: x86 boot sector, code offset 0x58, OEM-ID "SYSLINUX", sectors/cluster 64, reserved sectors 944, Media descriptor 0xf8, heads 128, hidden sectors 8064, sectors 30473088 (volumes 32 MB) , FAT (32 bit), sectors/FAT 3720, Backup boot sector 8, serial number 0xf90c12e9, label: "KINGSTON " /dev/sda1: x86 boot sector, code offset 0x52, OEM-ID "NTFS ", sectors/cluster 8, reserved sectors 0, Media descriptor 0xf8, heads 255, hidden sectors 2048, dos 32 MB) , FAT (32 bit), sectors/FAT 749, reserved3 0x800000, serial number 0x35361a2b, unlabeled Any help would be greatly appreciated! Thanks. Another thing I noticed is that when I use GParted to locate my drives, it seems that sda1 is my second drive, the one that I am not detecting when I boot up, and my Ubuntu + FAT boot files are installed on sdb1.

    Read the article

  • Are there any good Java/JVM libraries for my Expression Tree architecture?

    - by Snuggy
    My team and I are developing an enterprise-level application and I have devised an architecture for it that's best described as an "Expression Tree". The basic idea is that the leaf nodes of the tree are very simple expressions (perhaps simple values or strings). Nodes closer to the trunk will get more and more complex, taking the simpler nodes as their inputs and returning more complex results for their parents. Looking at it the other way, the application performs some task, and for this it creates a root expression. The root expression divides its input into smaller units and creates child expressions, which, when evaluated, it can use to build its own result. The subdividing process continues down to the simplest leaf nodes. There are two very important aspects of this architecture: It must be possible to manipulate nodes of the tree after it is built. The nodes may be given new input values to work with, and any change in result for that node needs to be propagated back up the tree to the root node. The application must make the best use of available processors and ultimately be scalable to other computers in a grid or in the cloud. Nodes in the tree will often be updating concurrently and notifying other interested nodes in the tree when they get a new value. Unfortunately, I'm not at liberty to discuss my actual application, but to aid understanding a little bit, you might imagine a kind of spreadsheet application being implemented with a similar architecture, where changes to cells in the table are propagated all over the place to other cells that need the result. The spreadsheet could get so massive that applying a multi-core, multi-computer distributed system to solve it would be of benefit. I've got my prototype "Expression Engine" working nicely on a single multi-core PC, but I've started to run into a few concurrency issues (as expected, because I haven't been taking too much care so far), so it's now time to start thinking about migrating the Engine to a more robust library, and that leads to a number of related questions: Is there any precedent for my "Expression Tree" architecture that I could research? What programming concepts should I consider? I realise this approach has many similarities to a functional programming style, and I'm already aware of the concepts of using futures and actors. Are there any others? Are there any languages or libraries that I should study? This question is inspired by my accidental discovery of Scala and the Akka library (which has good support for Actors, Futures, Distributed workloads etc.) and I'm wondering if there is anything else I should be looking at as well.
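    A tiny single-threaded sketch of the propagation being described (nothing Akka-specific, and all names are my own): each node caches its value and invalidates its parent when an input changes, so re-reading the root after a leaf update gives the new result:

        # Leaf values can be changed after the tree is built; the change walks
        # up through the parents so the root's result stays current.
        class Leaf:
            def __init__(self, value):
                self.parent = None
                self._value = value
            def set(self, value):
                self._value = value
                if self.parent:
                    self.parent.invalidate()
            def value(self):
                return self._value

        class Sum:
            def __init__(self, *children):
                self.parent = None
                self.children = children
                for child in children:
                    child.parent = self
                self._cached = None
            def invalidate(self):
                self._cached = None
                if self.parent:
                    self.parent.invalidate()
            def value(self):
                if self._cached is None:
                    self._cached = sum(c.value() for c in self.children)
                return self._cached

        a, b = Leaf(2), Leaf(3)
        root = Sum(Sum(a, b), Leaf(10))
        print(root.value())   # 15
        a.set(7)              # manipulate a node after the tree is built
        print(root.value())   # 20

    Making each node an actor (as in Akka) or a memoized future is essentially the concurrent version of this invalidate/recompute cycle.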

    Read the article

  • Upgrading to 9.2 - Info You Can Use (part 2 - Get Hands On Experience)

    - by John Webb
    Our guest blogger, Rebekah Jackson, continues with a series of helpful hints on planning your upgrade to PeopleSoft 9.2. Get Hands-On Experience With an Easy-to-Install Demo Image Once you have identified the features you believe can add value (see part 1 of this series here), we recommend that you use our Demo Images to quickly create a PeopleSoft environment where you can try out the new features. The Demo Images are virtual machines that run in VirtualBox. They contain a fully functioning PeopleSoft environment, including the database, PeopleTools, and the application. These images can be loaded onto nearly any sized machine that meets the specs outlined in the documentation, including a powerful desktop machine. The image allows you to very quickly deploy a fully functioning PeopleSoft instance that you can use to explore new features and functions or run a conference room pilot. To take advantage of these Demo Images: Go to the Demo Images Home Page (Doc id 1552548.1) on My Oracle Support. Use the "Demo Image Quick Start Guide" and additional documentation links on that page to download and install your desired Demo Image. New Demo Images are posted for each product area approximately every 10 weeks, available shortly after the corresponding patching image. When installed, use the Demo Image to explore the desired features and capabilities. Important Notes: - It is not required to use the Demo Images to evaluate features and functions – any 9.2 instance will support this. We recommend use of the Demo Images because the setup and configuration is dramatically faster than doing a traditional PeopleSoft install. - For those looking to explore new features and capabilities on PeopleSoft 9.1 releases, we have provided virtual machine images using the Oracle Virtual Machine technology. Details and links are available on Oracle’s PeopleSoft Virtualization Products page (Doc id 1538142.1).

    Read the article
