Search Results

Search found 5521 results on 221 pages for 'deeper understanding'.

Page 45 of 221

  • Application Lifecycle Management Tools

    - by John K. Hines
    Leading a team composed of three former teams means that we have three of everything: three places to gather requirements, three (actually eight or nine) places for customers to submit support requests, three places to plan and track work. We’ve been looking into tools that combine these features into a single product - not just Agile planning tools, but those that allow us to look in a single place for requirements, work items, and reports. One of the interesting choices is Software Planner by AutomatedQA (the makers of TestComplete). It's a lovely tool with real end-to-end process support. We’re probably not going to use it for one reason – cost. I’m sure our company could get a discount, but it’s on a concurrent-user license that isn’t cheap for a large number of users. Some initial guesswork had us paying over $6,000 for 3 concurrent users just to get started with the Enterprise version. Still, it’s intuitive, has great Agile capabilities, and has a reputation for excellent customer support. At the moment we’re digging deeper into Rational Team Concert by IBM. Reading the docs on this product makes me want to submit my resume to Big Blue. Not only does RTC integrate everything we need, but it’s free for up to 10 developers. It has beautiful support for all phases of Scrum. We’re going to bring the sales representative in for a demo. This marks one of the few times that we’re trying to resist the temptation to write our own tool. And I think this is the first time that something so complex may actually be capably provided by an external source. Hooray for less work! Technorati tags: Scrum, Scrum Tools

    Read the article

  • Ongoing confusion about ivars and properties in objective C

    - by Earl Grey
    After almost 8 months in iOS programming, I am again confused about the right approach. Maybe it is not the language but some OOP principle I am confused about; I don't know. I tried C# a few years back. There were fields (private variables, private data in an object), there were getters and setters (methods which exposed something to the world), and there were properties, which were THE exposed thing. I liked the elegance of the solution: for example, a class could have a property called DailyRevenue (a float), but there was no private variable called dailyRevenue; there was only a field, an array of single-transaction revenues, and the getter for the DailyRevenue property calculated the revenue transparently. If the internals of the daily revenue calculation somehow changed, it would not affect anybody who consumed my DailyRevenue property in any way, since they would be shielded from the getter implementation. I understood that sometimes there was, and sometimes there wasn't, a 1-1 relationship between fields and properties, depending on the requirements. That seemed OK in my opinion, and properties are THE way to access the data in an object. I know the difference between the private, protected, and public keywords. Now let's get to Objective-C. On what factors should I base my decision about making something only an ivar or making it a property? Is the mental model the same as I describe above? I know that ivars are "protected" by default, not "private" as in C#. But that's OK, I think; no big deal for my present level of understanding of the whole iOS development. The point is, ivars are not accessible from outside (given I don't make them public, but I won't). The thing that clouds my clear understanding is that I can have IBOutlets on ivars. Why am I seeing internal object data in the UI? Why is it OK? On the other hand, if I make an IBOutlet a property and I do not make it readonly, anybody can change it. Is this OK too? Let's say I have a ParseManager object. This object would use a built-in Foundation framework class called NSXMLParser. Obviously my ParseManager will utilize this NSXMLParser's capabilities but will also do some additional work. Now my question is: who should initialize this NSXMLParser object, and in which way should I make a reference to it from the ParseManager object when there is a need to parse something?
    A) The ParseManager
      1) in its default init method (possible here: ivar, or ivar + property)
      2) with lazy loading in the getter (requires a property here)
    B) Some other object, which will pass a reference to the NSXMLParser object to the ParseManager object
      1) in some custom initializer (initWithParser:(NSXMLParser *)parser) when creating the ParseManager object
    A1 - The problem is, we create a parser and waste memory while it is not yet needed. However, we can be sure that all methods that are part of the ParseManager object can use the ivar safely, since it exists.
    A2 - The problem is, the NSXMLParser is exposed to the outside world, although it could be readonly. Would we want a parser to be exposed in some scenario?
    B1 - This could maybe be useful when we would want to use more types of parsers; I don't know...
    I understand that architectural requirements and the language are not the same thing. But clearly the two are related. How do I get out of this mess of mine? Please bear with me; I wasn't able to come up with a single ultimate question.
    And secondly, please don't scare me with some super-advanced newspeak about crazy internals (what the compiler does) and edge cases.
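
    For concreteness, the C# pattern described above looks roughly like this - a minimal sketch, with illustrative class and member names, of a computed property backed by a field of a different shape:

    using System.Collections.Generic;
    using System.Linq;

    // A sketch of the pattern described above; names are illustrative.
    class Register
    {
        // The field: raw per-transaction amounts, never exposed directly.
        private readonly List<float> transactions = new List<float>();

        // The property: consumers get a computed DailyRevenue and are shielded
        // from the getter implementation, so the calculation can change freely.
        public float DailyRevenue
        {
            get { return transactions.Sum(); }
        }

        public void Record(float amount)
        {
            transactions.Add(amount);
        }
    }

    Option A2 above is the same idea transplanted to Objective-C: a lazy-loading getter creates the NSXMLParser on first access, so callers never need to know when, or whether, the parser was built.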

    Read the article

  • Lies, damned lies, and statistics Part 2

    - by Maria Colgan
    There was huge interest in our OOW session last year on Managing Optimizer Statistics. It seems statistics and their maintenance continue to baffle people. In order to help dispel the mysteries surrounding statistics management, we have created a two-part white paper series on Optimizer statistics. Part one of this series was released in November last year and describes in detail, with worked examples, the different concepts of Optimizer statistics. Today we have published part two of the series, which focuses on the best practices for gathering statistics and examines specific use cases, including the fears that surround histograms and statistics management for volatile tables like Global Temporary Tables. Here is a quick look at the introduction and the start of the paper.

    Introduction
    The Oracle Optimizer examines all of the possible plans for a SQL statement and picks the one with the lowest cost, where cost represents the estimated resource usage for a given plan. In order for the Optimizer to accurately determine the cost of an execution plan, it must have information about all of the objects (tables and indexes) accessed in the SQL statement, as well as information about the system on which the SQL statement will be run. This necessary information is commonly referred to as Optimizer statistics. Understanding and managing Optimizer statistics is key to optimal SQL execution. Knowing when and how to gather statistics in a timely manner is critical to maintaining acceptable performance. This whitepaper is the second of a two-part series on Optimizer statistics. The first part of this series, Understanding Optimizer Statistics, focuses on the concepts of statistics and will be referenced several times in this paper as a source of additional information. This paper will discuss in detail when and how to gather statistics for the most common scenarios seen in an Oracle Database. The topics are:
    - How to gather statistics
    - When to gather statistics
    - Improving the efficiency of gathering statistics
    - When not to gather statistics
    - Gathering other types of statistics

    How to gather statistics
    The preferred method for gathering statistics in Oracle is to use the supplied automatic statistics-gathering job.

    Automatic statistics-gathering job
    The job collects statistics for all database objects which are missing statistics or have stale statistics, by running an Oracle AutoTask task during a predefined maintenance window. Oracle internally prioritizes the database objects that require statistics, so that the objects which most need updated statistics are processed first. The automatic statistics-gathering job uses the DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC procedure, which uses the same default parameter values as the other DBMS_STATS.GATHER_*_STATS procedures. The defaults are sufficient in most cases. However, it is occasionally necessary to change the default value of one of the statistics-gathering parameters, which can be accomplished by using the DBMS_STATS.SET_*_PREF procedures. Parameter values should be changed at the smallest scope possible, ideally on a per-object basis.

    You can find the full paper here. Happy reading! +Maria Colgan

    Read the article

  • Tidbits of goodness - Podcasts, REST, JSON

    - by jeff.x.davies
    I've been quiet for a while, busy with a variety of projects. I did want to let you all know about a couple of things going on. First, I have been participating in architectural podcasts with Bob Rhubart. If you are interested in hearing these short (about 10 minutes each) recordings where a group of us discuss enterprise architecture and its future, check out http://blogs.oracle.com/archbeat/2010/05/podcast_show_notes_evolving_en.html Next, I have been working on the public sample code for the Oracle Service Bus 11g release. I'm now expanding my samples to include SCA, BPEL and the Oracle Adapters. This is really great experience for me because I have been learning these other tools at a deeper level, and this provides insight into developing better solutions. You know the old saying, "If the only tool you have is a hammer, you tend to approach every problem as if it were a nail." However, I'm not the only one working on these samples. We have a lot of our best and brightest working on sample code for the 11g release. Take a look at https://soasamples.samplecode.oracle.com/ to see all of the samples for SOA Suite 11g. A reader wrote to me and asked me about using OSB to return information in JSON format. I don't have a sample posted for this yet, but I am working on getting one packaged up. In the meantime, I can tell you that it is dead simple to do in OSB. Use the instructions I gave in an earlier blog entry on creating REST services using OSB: specify Messaging Service as the service type that takes a Text message and returns a Text message. Then have the OSB proxy service return a JSON-formatted string (by replacing the contents of the $body variable with the JSON text) and you're done! This approach allows you to use OSB services from within JavaScript/AJAX seamlessly. As I get more samples posted to the OTN site, I'll let you know. I have lots of interesting stuff on the way.

    Read the article

  • Windows SBS 2011 DNS Role (service) failing & needing restarting

    - by HaydnWVN
    Have a Windows SBS 2011 with Exchange that is handling all DNS for the network. Since getting a 3rd party (hardware & support) to set up a receiving FTP service and restricting Exchange's memory usage for another 3rd-party product (stock software), the local network seems to periodically 'lose the internet connection'. Delving deeper, I found that the DNS service is somehow failing/stopping without the actual service on the server reporting such (nothing in the event logs). A simple restart of the 'DNS Role' on the server solves the problem. The manager onsite reports that he has to do this most days in the afternoon, yet not at the same time, and on other days it works fine without a restart being required. I'm unable (lacking sufficient SBS 2011 knowledge) to diagnose this further. Ideally I would like the DNS Role to report (and log) the failure, then automatically restart itself.
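
    Until the root cause is found, a scheduled watchdog can approximate the desired "log and restart" behavior. Below is a minimal C# sketch under stated assumptions: the probe hostname is a placeholder for a real internal name, and "DNS" is assumed to be the service name of the Windows DNS Server service (confirm with 'sc query' before relying on it). Run it from Task Scheduler every few minutes:

    using System;
    using System.Net;
    using System.ServiceProcess;  // reference System.ServiceProcess.dll

    class DnsWatchdog
    {
        static void Main()
        {
            try
            {
                // Probe: resolve an internal name through the local DNS server.
                // "server.example.local" is a placeholder, not a real host.
                Dns.GetHostEntry("server.example.local");
            }
            catch (Exception ex)
            {
                // Log the failure, then bounce the DNS Server service.
                Console.WriteLine("{0:u} DNS probe failed: {1}", DateTime.Now, ex.Message);

                using (var dns = new ServiceController("DNS"))
                {
                    if (dns.Status == ServiceControllerStatus.Running)
                    {
                        dns.Stop();
                        dns.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(60));
                    }
                    dns.Start();
                    dns.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(60));
                }
            }
        }
    }

    This is a stopgap rather than a fix; it simply automates the manual restart described above while leaving a timestamped trail of when the failures occur.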

    Read the article

  • symlink for dbus headers

    - by DarenW
    Source code for something that won't compile has the line #include <dbus/dbus.h>, but in real life that header file is in /usr/include/dbus-1.0/. A similar situation exists for the dbus-c++ package. Why doesn't Ubuntu provide a symlink /usr/include/dbus pointing to the dbus-1.0 directory? Is this a bug in the dbus package? If it is intended, what is the purpose? Is it a proper fix to add a symlink myself? (Changing the source is not practical - there are many files, and they need to match what other people have.) Update: OK, I totally misunderstood the situation, though it still comes down to a problem I think should be solved by a symlink. The dbus directory referred to in the #include statement is a deeper-level directory under /usr/include/dbus-1.0/. The real problem is that the file dbus-arch-deps.h appears to be missing, but is actually stored in the weird location /usr/lib/x86_64-linux-gnu/dbus-1.0/include/dbus/. So now: why doesn't Ubuntu provide a symlink to this in /usr/include/dbus-1.0/dbus, or actually store it there?

    Read the article

  • Beginning on MySQL 5.6? Take the New MySQL for Beginners Training

    - by Antoinette O'Sullivan
    The MySQL for Beginners training course is a great way for you to learn about the world's most popular open source database. During this 4-day course, expert instructors will teach you how to use MySQL Server 5.6 and the latest tools while helping you develop deeper knowledge of using relational databases. You can take this live-instructor course as a:
    Live-Virtual Event: Take this course from your own desk, choosing from a selection of events on the schedule to suit different time zones.
    In-Class Event: Travel to an education center to follow this course. Below is a selection of events already on the schedule.
    - Brussels, Belgium: 8 September 2013 (English)
    - London, England: 1 July 2013 (English)
    - Berlin, Germany: 2 September 2013 (German)
    - Stuttgart, Germany: 28 October 2013 (German)
    - Riga, Latvia: 26 August 2013 (Latvian)
    - Utrecht, Netherlands: 9 September 2013 (English)
    - Warsaw, Poland: 15 July 2013 (Polish)
    - Cape Town, South Africa: 22 July 2013 (English)
    - Petaling Jaya, Malaysia: 22 July 2013 (English)
    - Sao Paulo, Brazil: 7 October 2013 (Brazilian Portuguese)
    To register for this course or to learn more about the authentic MySQL curriculum, go to http://oracle.com/education/mysql.

    Read the article

  • SQL SERVER – Performance Tuning Resolution

    - by pinaldave
    This blog post is written in response to T-SQL Tuesday hosted by MidnightDBAs. Taking resolutions is such an interesting subject. I think, just like records, these are broken way more often. I find this the funniest thing, as we all take resolutions every year, but not every year can we manage to keep them. Well, does it mean we should not take resolutions? In fact, I support resolutions. Every year, I take a resolution that I will strive to reduce my body weight, and I usually manage to keep eating healthy till the end of January. When February begins, I begin to lose focus on my goal, and as March starts, the "as usual" eating habits begin. Looking at the positive side, what would happen if I did not eat healthy in January every year? I think that might cause terrible consequences to my health in the long run. So keeping resolutions is a good practice, and following them to the extent one can is commendable. Let us come back to the world of SQL Server. What are my resolutions for year 2011 for SQL Server? There are many; I am going to list three very important resolutions that I have taken this new year over here.

    To understand SQL Server Performance Tuning at a deeper level
    I think I am already halfway through. I have been very busy during any given month, doing hands-on performance tuning for at least 12 days on average. That means I am doing this activity for almost 2 weeks a month. I believe that I have a good understanding of the subject. Note that the word I have used is "good," and not "best." There are often cases when I am stumped, and I have no clue what to do next. Then I usually go for my "trial and error" method - whichever method works, I make sure to keep a note on my blog. My goal is that I should never ever go for the trial and error method again to achieve the same solution. I should know the solution right away when I see the problem. I do understand that performance tuning can be a strange animal at times, and one cannot guess the right step every time. However, aiming at a high goal never hurts, and I am going to learn more and more in this focused area.

    Going further from a basic BI understanding
    I do fairly decently with BI concepts. I know the basics of SSIS, SSRS, SSAS, PowerPivot and SharePoint (and a few other things - MDS, StreamInsight, etc.). However, I still consider myself a beginner. I do not have hands-on experience like many other BI gurus around. I think I want to take my learning further in this direction. I do not want to be a BI expert as the first step, but the goal is to move ahead from the basic level towards an advanced level. I am going to start presenting in user group sessions and other places on this subject. When I have to prepare a new subject for presentations, I think I force myself to learn more. I am committed to learning a bit more in this direction.

    Learning new features of SQL Server 2011 Denali
    This is the new thing from Microsoft for all the SQL geeks. I am eagerly waiting for the final product later this year, and I am planning to learn it well. I think if I follow my above two goals, this goal will be automatically covered. I am eager and excited about this new offering from Microsoft.

    I guess these are my resolutions; maybe next year about the same time, I must revisit this post and see how successful I am in following my goals. On a lighter note, I am particularly a fan of the following cartoon strip (courtesy: Calvin and Hobbes). I think when we cannot resolve our resolutions, we tend to act like Calvin.
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Which FLOSS text editor is most like kwrite without being KDE-based?

    - by darenw
    Among text editors on Linux, I usually prefer KWrite. I like that I can quickly turn line numbers and line wrap on/off in the View menu. Other settings are easy to change. Other text editors I've used in the past, such as Gnome's gedit, bury the line-numbering and wrapping checkboxes deeper in the menu system, making it more distracting to change them while concentrating on real work. However, KWrite is a KDE app. On Ubuntu it drags in over a dozen other packages, which I suspect I don't really need. Why would a text editor need all that? It's slower to start up than some other editors I've tried. I'm also trying to run an all-Gnome system without any KDE, just to see how far I get with it. So, what GUI text editor isn't KDE-based, has few dependencies and quick start-up, makes it easy to change line wrap and numbering, and is generally similar to KWrite? What comes closest?

    Read the article

  • Code Reuse is (Damn) Hard

    - by James Michael Hare
    Being a development team lead, the task of interviewing new candidates was part of my job. Like any typical interview, we started with some easy questions to get them warmed up and help calm their nerves before hitting the hard stuff. One of those easier questions was almost always: “Name some benefits of object-oriented development.” Nearly every time, the candidate would chime in with a plethora of canned answers which typically included: “it helps ease code reuse.” Of course, this is a gross oversimplification. Tools only ease reuse; it's developers that ultimately can cause code to be reusable or not, regardless of the language or methodology. But it did get me thinking… we always used to say that as part of our mantra as to why Object-Oriented Programming was so great. With polymorphism, inheritance, encapsulation, etc. we in essence set up the concepts to help facilitate reuse as much as possible. And yes, as a developer now of many years, I unquestionably held that belief for ages before it really struck me how my views on reuse have jaded over the years. In fact, in many ways Agile rightly eschews reuse as taking a backseat to developing what's needed for the here and now. It used to be I was in complete opposition to that view, but more and more I've come to see the logic in it. Too many times I've seen developers (myself included) get lost in design paralysis trying to come up with the perfect abstraction that would stand all time. Nearly without fail, all of these pieces of code become obsolete in a matter of months or years.

    It’s not that I don’t like reuse – it’s just that reuse is hard. In fact, reuse is DAMN hard. Many times it is just a distraction that eats up architect and developer time, and worse yet can be counter-productive and force wrong decisions. Now don’t get me wrong, I love the idea of reusable code when it makes sense. These are the few cases where you are designing something that is inherently reusable. The problem is, most business-class code is inherently unfit for reuse! Furthermore, the code that is reusable will often fail to be reused if you don’t have the proper framework in place for effective reuse, one that includes standardized versioning, building, releasing, and documenting of the components. That should always be standard across the board when promoting reusable code. All of this is hard, and it should only be done when you have code that is truly reusable, or you will be exerting a large amount of development effort for very little bang for your buck.

    But my goal here is not to get into how to reuse (that is a topic unto itself) but what should be reused. First, let’s look at an extension method. There are many times when I want to kick off a thread to handle a task, and then when I want to rein that thread in, of course I want to do a Join on it. But what if I only want to wait a limited amount of time and then Abort? Well, I could of course write that logic out by hand each time, but it seemed like a great extension method:

    public static class ThreadExtensions
    {
        public static bool JoinOrAbort(this Thread thread, TimeSpan timeToWait)
        {
            bool isJoined = false;

            if (thread != null)
            {
                isJoined = thread.Join(timeToWait);

                if (!isJoined)
                {
                    thread.Abort();
                }
            }
            return isJoined;
        }
    }

    When I look at this code, I can immediately see things that jump out at me as reasons why this code is very reusable. Some of them are standard OO principles, and some are kind-of home-grown litmus tests:
    - Single Responsibility Principle (SRP) – The only reason this extension method need change is if the Thread class itself changes (one responsibility).
    - Stable Dependencies Principle (SDP) – This method only depends on classes that are more stable than it is (System.Threading.Thread), and is itself very stable, hence other classes may safely depend on it. It is also not dependent on any business domain, and thus isn't subject to changes as the business itself changes.
    - Open-Closed Principle (OCP) – This class is inherently closed to change.
    - Small and Stable Problem Domain – This method only cares about System.Threading.Thread.
    - All-or-None Usage – A user of a reusable class should want the functionality of that class, not parts of that functionality. That’s not to say they must use every method, but they shouldn’t be using a method just to get half of its result.
    - Cost of Reuse vs. Cost to Recreate – Since this class is highly stable and minimally complex, we can offer it up for reuse very cheaply by promoting it as “ready-to-go” and already unit tested (important!) and available through a standard release cycle (very important!).

    Okay, all seems good there; now let's look at an entity and DAO. I don’t know about you all, but there have been times I’ve been in organizations that get the grand idea that all DAOs and entities should be standardized and shared. While this may work for small or static organizations, it’s near ludicrous for anything large or volatile.

    namespace Shared.Entities
    {
        public class Account
        {
            public int Id { get; set; }

            public string Name { get; set; }

            public Address HomeAddress { get; set; }

            public int Age { get; set; }

            public DateTime LastUsed { get; set; }

            // etc, etc, etc...
        }
    }

    ...

    namespace Shared.DataAccess
    {
        public class AccountDao
        {
            public Account FindAccount(int id)
            {
                // dao logic to query and return account
            }

            ...
        }
    }

    Now to be fair, I’m not saying there doesn’t exist an organization where some entities may be extremely static and unchanging. But at best such entities and DAOs will be problematic cases of reuse. Let’s examine those same tests:
    - Single Responsibility Principle (SRP) – The reasons to change for these classes will be strongly dependent on what the definition of an account is, which can change over time and may have multiple influences depending on the number of systems an account can cover.
    - Stable Dependencies Principle (SDP) – This method depends on the data model beneath itself, which also is largely dependent on the business definition of an account, which can be inherently unstable.
    - Open-Closed Principle (OCP) – This class is not really closed for modification. Every time the account definition may change, you’d need to modify this class.
    - Small and Stable Problem Domain – The definition of an account is inherently unstable and in fact may be very large. What if you are designing a system that aggregates account information from several sources?
    - All-or-None Usage – What if your view of the account encompasses data from 3 different sources but you only care about one of those sources or one piece of data? Should you have to take the hit of looking up all the other data? On the other hand, should you have ten different methods returning portions of data in chunks people tend to ask for? Neither is really a great solution.
    - Cost of Reuse vs. Cost to Recreate – DAOs are really trivial to rewrite, and unless your definition of an account is EXTREMELY stable, the cost to promote, support, and release a reusable account entity and DAO is usually far higher than the cost to recreate it as needed.

    It’s no accident that my case for reuse was a utility class and my case for non-reuse was an entity/DAO. In general, the smaller and more stable an abstraction is, the higher its level of reuse. When I became the lead of the Shared Components Committee at my workplace, one of the original goals we looked at satisfying was to find (or create), version, release, and promote a shared library of common utility classes, frameworks, and data access objects. Now, of course, many of you will point to nHibernate and Entity for the latter, but we were looking at larger, macro collections of data that span multiple data sources of varying types (databases, web services, etc.). As we got deeper and deeper into the details of how to manage and release these items, it quickly became apparent that while the case for reuse was typically a slam dunk for utilities and frameworks, the data access objects just didn’t “smell” right. We ended up having session after session of design meetings to try and find the right way to share these data access components. When someone asked me why it was taking so long to iron out the shared entities, my response was quite simple: “Reuse is hard...” And that’s when I realized that while reuse is an awesome goal and we should strive to make code maintainable, often times you end up creating far more work for yourself than necessary by trying to force code to be reusable that inherently isn’t.

    Think about the times you’ve worked in a company where in the design session people fight over the best way to implement a class to make it maximally reusable, extensible, and any other buzzwordable. Then think about how quickly that design became obsolete. Many times I set out to do a project and think, “yes, this is the best design, I can extend it easily!” only to find out the business requirements change COMPLETELY in such a way that the design is rendered invalid. Code, in general, tends to rust and age over time. As such, writing reusable code can often be difficult, and many times it ends up being a futile exercise; worse yet, sometimes it makes the code harder to maintain because it obfuscates the design in the name of extensibility or reusability.

    So what do I think are reusable components?
    - Generic Utility classes – these tend to be small classes that assist in a task and have no business context whatsoever.
    - Implementation Abstraction Frameworks – home-grown frameworks that try to isolate changes to third-party products you may be depending on (like writing a messaging abstraction layer for publishing/subscribing that is independent of whether you use JMS, MSMQ, etc.).
    - Simplification and Uniformity Frameworks – To some extent this is similar to an abstraction framework, but there may be one chosen provider and a development-shop mandate to perform certain complex items in a certain way. Or perhaps the goal is to simplify and dumb down a complex task for the average developer (such as implementing a particular development shop's method of encryption).

    And what are less reusable?
    - Application and Business Layers – tend to fluctuate a lot as requirements change and new features are added, so they tend to be an unstable dependency. May be reused across applications but also very volatile.
    - Entities and Data Access Layers – these tend to be tuned to the scope of the application, so reusing them can be hard unless the abstraction is very stable.

    So what’s the big lesson? Reuse is hard. In fact it’s damn hard. And much of the time I’m not convinced we should focus too hard on it. If you’re designing a utility or framework, then by all means design it for reuse. But you must also really set down a good versioning, release, and documentation process to maximize your chances. For anything else, design it to be maintainable and extendable, but don’t waste the effort on reusability for something that most likely will be obsolete in a year or two anyway.

    Read the article

  • Which Linux book for aspiring sysadmin?

    - by Ramy
    I have a co-worker who insists that he will never buy a book unless it is considered "THE" book. So, in this vein, I thought I'd ask what the ultimate Linux book is. I wouldn't quite call myself a complete beginner, since I can get around in Linux pretty well in general. But beyond that, I'm also looking for a book with an eye towards becoming a sysadmin someday. I saw a Junior Sys Admin position open up recently, but with the requisite 2-3 years of experience, I may have to wait a little while longer before I'm ready to apply for such a position. Having said all that, I'll summarize my question: what is the ultimate Linux book for someone who is OK with the basic tasks of getting around in Linux but also wants to aim towards full sysadmin status someday? A few examples of the books I'm considering:
    - Linux Administration: A Beginner's Guide, Fifth Edition
    - Linux System Administration
    - Linux System Administration
    EDIT: Before you close this question as a dup, I'd like to say that I'm looking for something that goes deeper than this: Book for linux newbies. I already have "Linux in a Nutshell".

    Read the article

  • Conditional attribute in XML - most concise solution?

    - by Lech Rzedzicki
    I am tasked with setting up conditional profiling - a method of tagging chunks of XML with an attribute, which will then be used as a conditional value to extract a subset of that XML. Have a look at another definition/example: DITA profiling. The XML is documents that are equivalent to printed books - i.e. documents that are often looked at by a human, even if indirectly. Therefore I am looking at a few requirements here:
    1. keeping the value list brief, so it doesn't affect the readability of the document
    2. being able to process it with standard XML tools - a space-separated list inside an attribute is still probably fine, but I'd rather not use too much regexp for this
    3. being obvious to various users, including 3rd parties, about which content goes where
    4. being easy to maintain going forward
    Therefore one easy solution is a single attribute that explicitly lists every output the element belongs to, along the lines of output="web print kindle". The problems with this:
    1. as the list grows, the value of the attribute can get a bit verbose
    2. one needs to explicitly state every value, even in a scenario of "this versus everything else"
    Therefore I am also looking at other approaches, such as:
    1. Using + and - modifiers, Apache htaccess style, to override the default cascading of profiling - by default all content goes everywhere, and if we want to exclude a bit we just say "-kindle". It does require parsing the whole tree, is not supported by editing tools, and one needs to regexp the attribute value a bit more deeply...
    2. Using an intermediate file to define groups of values such as "other" or "non-print" (there is an example of this in DITA). It allows concise XML as well as different groupings and values for each document, but it does create a certain level of abstraction, which may make it a little less obvious for a 3rd party?
    Altogether, if you received such XML and were tasked to process it, which option would you rather receive? If you have any experiences like that, even in unrelated areas such as builds, don't hesitate to comment!

    Read the article

  • How to create water like in New Super Mario Bros?

    - by user1103457
    I assume the water in New Super Mario Bros works the same as in the first part of this tutorial: http://gamedev.tutsplus.com/tutorials/implementation/make-a-splash-with-2d-water-effects/ But in New Super Mario Bros the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, creating a splash first makes a deep "hole" in the water at the origin of the splash; in New Super Mario Bros this hole is absent or much smaller. When I refer to the splashes in New Super Mario Bros, I am referring to the splashes that the player creates when jumping into and out of the water. For reference you could use this video: http://www.ign.com/videos/2012/11/17/new-super-mario-bros-u-3-star-coin-walkthrough-sparkling-waters-1-waterspout-beach Just after 00:50, when the camera isn't moving, you can get a good look at the water and the constant waves; there are also some good examples of the splashes during that time. How do they create the constant waves and the splashes? I am programming in XNA. (I have tried this myself but couldn't really get it all to work well together.) (And as bonus questions: how do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I've tried to create water like this.)
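
    One plausible reading, building on the spring model from the linked tutorial, is to keep the physical springs for splashes and layer a small traveling sine wave on top purely at draw time. The C# sketch below assumes that combination; all names and constants are illustrative, not taken from the game:

    using System;

    // One column of the water surface, as in the tutorial's spring model.
    // Everything here is a sketch; constants are illustrative.
    struct Spring
    {
        public float Height;   // displacement from the rest level
        public float Velocity;
    }

    class WaterSurface
    {
        const float Tension = 0.025f;
        const float Damping = 0.025f;
        const float Spread  = 0.25f;

        Spring[] springs = new Spring[200];
        float time;

        public void Update(float dt)
        {
            time += dt;

            // Hooke's law pulls each column back toward the rest level.
            for (int i = 0; i < springs.Length; i++)
            {
                float accel = -Tension * springs[i].Height - Damping * springs[i].Velocity;
                springs[i].Velocity += accel;
                springs[i].Height += springs[i].Velocity;
            }

            // Spread passes let neighbors tug on each other so splashes propagate.
            for (int pass = 0; pass < 8; pass++)
            {
                for (int i = 1; i < springs.Length - 1; i++)
                {
                    springs[i].Velocity += Spread *
                        (springs[i - 1].Height + springs[i + 1].Height - 2 * springs[i].Height);
                }
            }
        }

        // A splash is just a velocity kick; a small kick keeps the "hole" shallow.
        public void Splash(int index, float speed)
        {
            springs[index].Velocity = speed;
        }

        // Sampled at draw time: spring height plus a low-amplitude traveling sine
        // gives constant ambient waves without ever disturbing the spring physics.
        public float SurfaceY(int index)
        {
            return springs[index].Height + 2f * (float)Math.Sin(index * 0.2f + time * 3f);
        }
    }

    Because the sine term is added only when sampling the surface, ambient waves and splashes combine without interfering, and scaling down the splash velocity is one way to avoid the deep initial hole the tutorial's version produces.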

    Read the article

  • "cannot open file system. File system seems damaged "

    - by suresh kadiri
    I was using Windows 7 till yesterday. I tried to install Ubuntu 14.04 LTS alongside Windows 7 yesterday, but it did not succeed. Then I decided to install Ubuntu only; by mistake, I installed Ubuntu onto the whole disk. After that, to get the deleted partitions back, I installed TestDisk and also used the deeper search option. Now I'm getting "file system damaged". It shows:

    The hard disk (320 GB / 298 GiB) seems too small! (< 473 GB / 441 GiB)
    Check the hard disk size: HD jumper settings, BIOS detection...
    The following partitions can't be recovered:
    Partition    start           end             size in sectors
    Linux        19077 177 45    57604 81 13     618930716
    Linux        19080 192 57    57607 96 25     618930716

    With UBCD I also used the TestDisk option, with the same result: "cannot open file system. File system seems damaged". I have all my stuff on this hard disk. Please help me recover my files from the deleted partitions.

    Read the article

  • Bootstrapping in CloudFormation with Autoscale

    - by PapelPincel
    My CloudFormation template creates an autoscale group and bootstraps it with the utility script /opt/aws/bin/cfn-init. When I remove the bootstrap part from my template, the autoscale group gets created without any problem, but when I add it back, the CloudFormation stack fails and adds this line in /var/log/cloud-init.log:

    Error: AutoScalingGroupName does not specify any metadata

    The line above appears right after the following command:

    /opt/aws/bin/cfn-init --verbose --configsets orderedConfig --region us-east-1 --stack AS15 --resource AutoScalingGroupName --access-key XXXXXXXXXXXXX --secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Digging a little bit deeper, in cfn-init I added the following lines at the point where it exits:

    from pprint import pprint
    pprint(vars(detail))

    and I get the following trace when running the previous cfn-init command:

    {'_description': None,
     '_lastUpdated': datetime.datetime(2012, 7, 12, 14, 52, 42),
     '_logicalResourceId': u'AutoScalingGroupName',
     '_metadata': None,
     '_physicalResourceId': u'AS15-AutoScalingGroupName-HNPOXXXXXXXX',
     '_resourceStatus': u'CREATE_COMPLETE',
     '_resourceStatusReason': None,
     '_resourceType': u'AWS::AutoScaling::AutoScalingGroup',
     '_stackId': u'arn:aws:cloudformation:us-east-1:XXXXXXXXXXXXX:stack/AS15/XXXXXXXX-cc30-11e1-XXXXXX-XXXXXXXXXX',
     '_stackName': u'AS15'}

    As you can see, the metadata field is empty, and that's the reason why it fails to create the stack. Are there any known side effects for cfn-init when used with autoscale?

    Read the article

  • I want to build a Virtual Machine, are there any good references?

    - by Michael Stum
    I'm looking to build a Virtual Machine as a platform-independent way to run some game code (essentially scripting). The Virtual Machines that I'm aware of in games are rather old: Infocom's Z-Machine, LucasArts' SCUMM, id Software's Quake 3. As a .NET developer, I'm familiar with the CLR, and I looked into the CIL instructions to get an overview of what you actually implement at the VM level (vs. the language level). I've also dabbled a bit in 6502 assembler during the last year. The thing is, now that I want¹ to implement one, I need to dig a bit deeper. I know that there are stack-based and register-based VMs, but I don't really know which one is better at what, and whether there are more, or hybrid, approaches. I need to deal with memory management, decide which low-level types are part of the VM, and understand why stuff like ldstr works the way it does. My only reference book (apart from the Z-Machine stuff) is the CLI Annotated Standard, but I wonder if there is a better, more general and fundamental text on VMs? Basically something like the Dragon Book, but for VMs? I'm aware of Donald Knuth's Art of Computer Programming, which uses a register-based VM, but I'm not sure how applicable that series still is, especially since it's still unfinished. Clarification: The goal is to build a specialized VM. For example, Infocom's Z-Machine contains opcodes for setting the background color or playing a sound. So I need to figure out how much goes into the VM as opcodes vs. the compiler that takes a script (language TBD) and generates the bytecode from it, but for that I need to understand what I'm really doing. ¹ I know, modern technology would allow me to just interpret a high-level scripting language on the fly. But where is the fun in that? :) It's also a bit hard to google, because "Virtual Machines" is nowadays often associated with VMware-type OS virtualization...
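
    To make the stack-versus-register distinction concrete, here is a toy stack-based interpreter in C# - nowhere near a real VM (no verifier, heap, jumps, or calls), but it shows why a stack machine's opcodes need no operand addresses, whereas a register machine would encode source and destination registers in each instruction:

    using System;
    using System.Collections.Generic;

    // A toy, illustrative instruction set - not any real VM's opcodes.
    enum Op { Push, Add, Mul, Print, Halt }

    class TinyVm
    {
        // Each instruction is an opcode plus one immediate operand
        // (ignored by opcodes that take their inputs from the stack).
        public static void Run((Op op, int arg)[] program)
        {
            var stack = new Stack<int>();
            for (int pc = 0; pc < program.Length; pc++)
            {
                var (op, arg) = program[pc];
                switch (op)
                {
                    case Op.Push:  stack.Push(arg); break;
                    case Op.Add:   stack.Push(stack.Pop() + stack.Pop()); break;
                    case Op.Mul:   stack.Push(stack.Pop() * stack.Pop()); break;
                    case Op.Print: Console.WriteLine(stack.Peek()); break;
                    case Op.Halt:  return;
                }
            }
        }

        static void Main()
        {
            // Computes (2 + 3) * 4 and prints 20.
            Run(new[]
            {
                (Op.Push, 2), (Op.Push, 3), (Op.Add, 0),
                (Op.Push, 4), (Op.Mul, 0), (Op.Print, 0), (Op.Halt, 0)
            });
        }
    }

    CIL works the same way: ldstr, for instance, simply pushes a string reference onto the evaluation stack, which is why it needs no register operands.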

    Read the article

  • Analyzing Linux NFS server performance

    - by Kamil Kisiel
    I'd like to do some analysis of our NFS server to help track down potential bottlenecks in our applications. The server is running SUSE Enterprise Linux 10. The kinds of things I'm looking to know are:
    - Which files are being accessed by which clients
    - Read/write throughput on a per-client basis
    - Overhead imposed by other RPC calls
    - Time spent waiting on other NFS requests, or disk I/O, to service a client
    I already know about the statistics available in /proc/net/rpc/nfsd, and in fact I wrote a blog post describing them in depth. What I'm looking for is a way to dig deeper and help understand what factors are contributing to the performance seen by a particular client. I want to analyze the role the NFS server plays in the performance of an application on our cluster so that I can think of ways to best optimize it.

    Read the article

  • BI&EPM in Focus June 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    General News
    - Thomas Kurian Discusses Oracle Exalytics, SAP HANA (replay | preso | press)
    - Accenture & Oracle Study: The Challenges of Corporate Financial Reporting (link)
    - Flash Demo: Oracle Hyperion Planning on Exalytics in the Public Sector (link)
    - Flash Demo: OBIEE & Exalytics in Retail (link)

    Customers
    - Italian partner Alfa Sistemi implemented at Autovie Venete S.p.A. an integrated Business Intelligence and Performance Management system to improve efficiency and speed for managing public works projects (English version / Italian version)
    - FANCL Gains 360-Degree View of Customers across Multiple Sales Channels, Reduces Reports by 75%
    - Korea Yakult Improves Profit & Loss Analysis with Oracle Hyperion Planning and OBIEE
    - Hill International Streamlines Forecasting, Improves Visibility into Project Productivity and Profitability
    - Children’s Rights in Society Better Supports Organizational Mission with Advanced, Integrated, and Streamlined Business Intelligence Tools
    - Profit: International utility Enel monitors the performance of global subsidiaries with Oracle Hyperion Applications (link)
    - Profit: Charting a New Course: Korean Air gains altitude by leveraging its greatest asset: information (link)

    Events
    - June 12: Breaking Away from the Excel Add-In: Welcome to Hyperion Smart View 11.1.2.2 (link)
    - June 13: Upgrading OBIEE 10g to 11g: Best Practices and Lessons Learned (Performance Architects) (link)
    - June 14, The Netherlands: Strategies for Business Excellence, New Release of Oracle Hyperion EPM Suite (link)
    - June 21: Comprehensive and Accurate Forecasting for Healthcare (link)
    - June 26: What Exactly is Exalytics? (KPI Partners) (link)
    - Webcast Replay: Is Your Company Able to Navigate Through Market Volatility? (link)
    - Webcast Replay: Is Hope and Email the Core of Your Reconciliation Process? (link)
    - Webcast Replay: Troubleshooting EPM Reporting & Analysis 11.1.2.x (link)
    - Webcast Replay: Is your Organization Flying Blind when it comes to Understanding Profitability? (link)

    Enterprise Performance Management
    - Final Oracle EPM Information Panel (CIP) survey on cost, profitability and performance reporting/scorecards is now OPEN (link)
    - New on EPM Blog: What's Going on With IFRS? (link)
    - How does Crystal Ball integrate with EPM Solutions? New collateral and demos on Crystal Ball Solution Factory! (link)
    - New YouTube Video: Business Case Analysis with Oracle Crystal Ball (link)
    - Crystal Ball 11.1.2.2 is released! Grouped Assumptions in Sensitivity Charts, Data Filtering When Fitting Distributions, and Parameter Edits When Fitting Distributions, to name a few. Get full details from the online New Features Guide (link)
    - New DRM Oracle-by-Example now available (link)
    - Support Blog: Hyperion Ledgerlink Sample Record and Windows 7: Now you see it, now you don’t (link)
    - Use Enterprise Manager FMW Control to Troubleshoot Oracle EPM 11.1.2 Family of Products (link)

    Business Intelligence
    - Whitepaper: Real-Time Operational Reporting for E-Business Suite via GoldenGate Replication to an Operational Data Store - how Oracle enabled real-time operational reporting for its $20B services contract business with GoldenGate & OBIEE (link)
    - KPI Partners ebook: Understanding Oracle BI Components and Repository Modeling Basics (link)
    - “Getting Started with Oracle Endeca Information Discovery” video tutorials now available (link)
    - Oracle BI Publisher Conversion Center: Convert from Crystal, Actuate, or Oracle Reports to Oracle BI Publisher (link)
    - Oracle Fusion Applications: Monthly Partner Updates webcast replays to help BI partners understand how OBI, Essbase, BI-Apps and Fusion work together:
      - More on Fusion CRM: Fusion Marketing
      - More on Fusion CRM: Fusion CRM Sales Start-Up Packs and Expert Services for Implementation Partners
      - Introducing the Oracle Fusion Accounting Hub
      - Implementing Fusion Applications using Oracle's Composers
      - Oracle Fusion Applications Co-Existence

    Read the article

  • 13 Things From the Oracle Social Summit You Should Know

    - by Mike Stiles
    Oracle held its first annual Oracle Social Summit, “The School for the Socially Gifted,” this past week in Las Vegas. If anyone came to the event uncertain as to why Oracle has such an interest in social, and what its plans for social are, they left with an entirely new vision of where social is headed, and why. For those unable to attend, I was able to keep my MacBook charged just long enough to capture some of the more pertinent takeaways.
    1. The social enterprise is inevitable. Social technology is disrupting the hierarchies of big companies. It’s a revolution in corporate structures, just as it has been in various governments. It’s not crazy to ask yourself if your CEO is the next Mubarak. (David Kirkpatrick, author of “The Facebook Effect” and founder of the Techonomy conference)
    2. The social enterprise represents collaboration on steroids. It’s tapping into the power of your people, as opposed to keeping them “in their place.”
    3. 1 in every 7 humans on earth is an active Facebook user. 75% have posted a negative comment after a poor customer experience. The average user will inform 53 people of a bad experience.
    4. Checking social media is the 2nd biggest use of phones now. Reading posts from brands is 4th.
    5. 70% of marketers have little or no understanding of the social conversations happening around their brand.
    6. Advertising, when done well, is content we care about, preferably informed by those we trust.
    7. Acquiring low-quality fans through gimmicks, or focusing purely on fan acquisition, is a mistake. And relying purely on organic distribution is a mistake. (John Yi, Head of Marketing Partnerships – Facebook)
    8. Using all this newfound data and insight serves to positively affect the customer experience. It allows organizations to leverage the investments they’ve made in social up to now.
    9. Social is not a marketing utopia where everything is free. It’s pay to play. The paid component is about driving attention.
    10. We are only in the infancy of ad-targeting opportunities in social. There’s an evolution underway from interest-based targeting to action-based targeting.
    11. There’s actually very little overlap between the people following you on different social platforms. Don’t assume it’s the same audience on each.
    12. People who can create content and who also have an understanding of what drives that content are growing increasingly valuable.
    13. Oracle Social’s future is enterprise SRM, integrated across marketing, selling, service, HR and every other corner of the organization.
    And in case you thought those were the only gems to come out of the summit, you may want to keep an eye out for Tuesday’s Social Spotlight, ever so aptly titled “13 More Things from the Oracle Social Summit You Should Know.”

    Read the article

  • Can anyone help me through the preparation of the Eclipse IDE for Android development in Ubuntu 12.04?

    - by csbl
    I'm new to Linux - in this particular case, to Ubuntu. I have a small Android project I have to finish by this Friday, and I'm still stuck with installing and preparing the development environment. The only thing I did was install the Eclipse IDE. I'm still missing the SDK, Java, and anything else that might be needed. Can someone help me through this? It's only because I'm running out of time to develop, or else I would embark on a deeper investigation of this OS. I tried the step of installing Android platforms through Eclipse > Help > Install New Software, and I got the following error messages at the end of the process:

    [2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] 'adb version' failed! /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] 'adb version' failed! /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
    [2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

    Can anyone help, please??

    Read the article

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors to it. Product, customer - you name the data domain and there's data quality associated with it. Address verification and data quality are a little different in that there is a tremendous amount of variation as well as nuance attached to them. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or contain non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: customer relationship management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries - all populated countries on Earth. Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality. It performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data due to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server, visit http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article

  • Web-based interface is mangled

    - by justSteve
    Linksys WRT54 - over the last couple of days I've been in and out of the network configuration screens of my DSL modem and the router (and the command line, for that matter) as I've installed the DynDNS service. (Thanks to Subsonic and DynDNS.com, I'm now able to stream my workstation's MP3 catalog over my wife's Droid - making me her tech hero all over again.) Somewhere after getting all the net ducks lined up - ports forwarded and firewalls configured - the web interface for the router ceased rendering the full page. It's only rendering parts; I can F5/refresh and it re-renders and displays some of the cells (it's a table-based webpage) but omits others that had rendered before the refresh. This happens in both IE and FF, and continues after a reboot. I probably need to power-cycle the router itself, but is this known behavior, or should I look deeper for a cause? thx

    Read the article

  • Is paper indispensable in a programmer's everyday work?

    - by rwong
    As a programmer who works in a company whose vision is to make the paperless office possible, is there any way I can work effectively while using less paper? I can list at least several kinds of paper I use quite often:
    - Paper notebook, on which I do most of the pre-coding design work and ideas
    - Books
    - Temporary printouts of source code, though not so often (in color, with a 6-point font at 600 DPI)
    - Sticky notes, to remind myself of things that should be taken care of within a few days
    On the other hand, I also use a wiki and an office text editor. Once in a while I use a diagramming package to make a few flowcharts.
    Deeper questions:
    - Is there a relationship between paper use and productivity?
    - How can programmers help save the trees?
    - Is paperless software development fundamentally different from the paperless office?
    Related questions:
    - Do you ever write code with pen and paper, and should we do it more often?
    - What physical tools do you find useful to work as a programmer?
    - What things are essential on a programmer's desk?
    - Stuff every programmer needs while working
    Additional info, if it helps: everyone has dual monitors, and we have decent project management and issue tracking software (both web-based). Please be constructive. In particular, please give your answer to your peer programmers who wish to be flexible and are willing to change their working style in order to become more productive as well as meet certain personal values of their own.
    Edited: I removed the company's view because it appears to be too much flamebait. If you need to see my original words, go to the edit history.
    Deleted: Doxygen and whiteboard. Reason: disregarding my personal experience with these great tools, we never had to print out anything as a consequence of using/not using them. To see my original words, go to the edit history.

    Read the article

  • SQL SERVER – #TechEdIn – Presenting Tomorrow on Speed Up! – Parallel Processes and Unparalleled Performance at TechEd India 2012

    - by pinaldave
    Performance tuning is always a very hot topic when it comes to SQL Server. SQL Server performance tuning is a very challenging subject that requires expertise in both database administration and database development. I have always enjoyed talking about the SQL Server performance tuning subject. However, in India it's actually the very first time someone is presenting on this interesting subject, so this time I had the biggest challenge to present this session. Frequently enough, we get these two kinds of questions: How do I turn off parallelism, as it is reducing performance? How do I turn on parallelism, as I want more performance? The reality is that not everyone knows what exactly is needed by their system. In this session, I have attempted to answer this very question. I've decided to provide a balanced view but stay away from theory, which leads us to say "It depends". The session will have a clear message about this towards its end.

    Deck Details
    Slides: 45+
    Demos: 7+
    Bonus Quiz: 5
    Images: 10+
    Session delivery time: 52 mins + 8 mins of Q&A

    I have presented this session a couple of times to my friends and so far have received good feedback. Oftentimes, when people hear that I am going to present 45 slides, they all say it is too much to cover. However, when I am done with the session, the usual reaction is that I truly gave justice to those slides.

    Action Items
    Here are a few action items for all of those who are going to attend this session:
    - If you want to attend the session, just come early. There's a good chance that you may not get a seat, because right before me there is a session from SQL guru Vinod Kumar. He delivers a powerful session, packing a million concepts into just a little time.
    - Quiz. I will be asking a few questions during the session, as well as before the session starts. If you give the correct answer, I will give you unique learning material. You may not want to miss this learning opportunity at any cost.

    Session Details
    Title: Speed Up! – Parallel Processes and Unparalleled Performance (Add to Calendar)
    Abstract: "More CPU, More Performance" – a very common understanding is that the usage of multiple CPUs can improve the performance of the query. To get maximum performance out of any query, one has to master various aspects of the parallel processes. In this deep-dive session, we will explore this complex subject with a very simple interactive demo. Attendees will walk away with a proper understanding of CX_PACKET wait types, MAXDOP, the parallelism threshold, and various other concepts.
    Date and Time: March 23, 2012, 12:15 to 13:15
    Location: Hotel Lalit Ashok - Kumara Krupa High Grounds, Bengaluru – 560001, Karnataka, India.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Oracle Named a Leader in Enterprise Business Intelligence Platforms by Forrester

    - by Paulo Folgado
    According to an October 2010 report from independent analyst firm Forrester Research, Inc., Oracle is a leader in enterprise business intelligence (BI) platforms. Forrester Research defines BI as a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information, which can then be used to enable more effective strategic, tactical, and operational insights and decision-making. Written by Forrester vice president and principal analyst Boris Evelson, The Forrester Wave: Enterprise Business Intelligence Platforms, Q4 2010 states that "Oracle has built new metadata-level [Oracle Business Intelligence Enterprise Edition 11g] integration with Oracle Fusion Middleware and Oracle Fusion Applications and continues to differentiate with its versatile ROLAP engine." The report goes on, "And in addition to closing some gaps it had in 10.x versions such as lack of RIA functionality, [the Oracle Business Intelligence Enterprise Edition 11g] actually leapfrogs the competition with the Common Enterprise Information Model (CEIM)--including the ability to define actions and execute processes right from BI metadata across BI and ERP applications." "We're pleased that the Forrester Wave recognizes Oracle Business Intelligence as a leading enterprise BI platform," said Paul Rodwick, vice president of product management, Oracle Business Intelligence.

    Key Innovations in Oracle Business Intelligence 11g
    Released in August 2010, Oracle Business Intelligence 11g represents the industry's most complete, integrated, and scalable suite of BI products. Encompassing thousands of new features and enhancements, the latest release offers three key areas of innovation:
    * A unified environment: the industry's first unified environment for accessing and analyzing data across relational, OLAP, and XML data sources.
    * Enhanced usability: a new, integrated scorecard application, plus innovations in reporting, visualization, search, and collaboration.
    * Enhanced performance, scalability, and security: deeper integration with Oracle Enterprise Manager 11g and other components of Oracle Fusion Middleware provides lower management costs and increased performance, scalability, and security.

    Read the entire Forrester Wave report.

    Read the article
