Search Results

Search found 1101 results on 45 pages for 'essential'.


  • VMWare Server 2.0 Physical disks

    - by heavyd
    I have a machine set up with VMware Server 2.0. There are several VMs running on the VMware server, and I have several physical drives. I would like to give one of the VMs exclusive access to one entire physical drive. Is it possible to essentially give a physical drive to one of the VMs and let it access it as if it were actual hardware?

    Read the article

  • Which are the fundamental stack manipulation operations?

    - by Aadit M Shah
    I'm creating a stack-oriented virtual machine, so I started learning Forth to get a general understanding of how it would work. I then shortlisted the essential stack manipulation operations I would need to implement in my virtual machine:

        drop ( a -- )
        dup  ( a -- a a )
        swap ( a b -- b a )
        rot  ( a b c -- b c a )

    I believe that these four stack manipulation operations can be used to simulate any other stack manipulation operation. For example:

        nip  ( a b -- b )       swap drop
        -rot ( a b c -- c a b ) rot rot
        tuck ( a b -- b a b )   dup -rot
        over ( a b -- a b a )   swap tuck

    That said, I wanted to know whether I have listed all the fundamental stack manipulation operations necessary to manipulate the stack in any possible way. Are there any more fundamental stack manipulation operations I would need to implement, without which my virtual machine wouldn't be Turing complete?
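
    To sanity-check the derivations above, here is a minimal sketch in C# (rather than Forth, since C# is the language used by other posts on this page): it models the four primitives on a List<int> used as a stack and composes nip, -rot, tuck and over exactly as in the question. The class and method names are mine, purely for illustration.

        using System;
        using System.Collections.Generic;

        static class StackOps
        {
            // The top of the stack is the last element of the list.
            static void Drop(List<int> s) => s.RemoveAt(s.Count - 1);      // ( a -- )
            static void Dup(List<int> s)  => s.Add(s[s.Count - 1]);        // ( a -- a a )

            static void Swap(List<int> s)                                  // ( a b -- b a )
            {
                var t = s[s.Count - 1];
                s[s.Count - 1] = s[s.Count - 2];
                s[s.Count - 2] = t;
            }

            static void Rot(List<int> s)                                   // ( a b c -- b c a )
            {
                var third = s[s.Count - 3];
                s.RemoveAt(s.Count - 3);
                s.Add(third);
            }

            // Derived words, composed exactly as in the question.
            static void Nip(List<int> s)      { Swap(s); Drop(s); }        // ( a b -- b )
            static void MinusRot(List<int> s) { Rot(s); Rot(s); }          // ( a b c -- c a b )
            static void Tuck(List<int> s)     { Dup(s); MinusRot(s); }     // ( a b -- b a b )
            static void Over(List<int> s)     { Swap(s); Tuck(s); }        // ( a b -- a b a )

            static void Main()
            {
                var s = new List<int> { 1, 2 };             // stack: 1 2 (2 on top)
                Over(s);                                    // expect: 1 2 1
                Console.WriteLine(string.Join(" ", s));     // prints "1 2 1"
            }
        }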

    Read the article

  • Microsoft Press Weekend Deal 26/May/2012 - Microsoft® Manual of Style, 4th Edition

    - by TATWORTH
    At http://shop.oreilly.com/product/0790145305770.do?code=MSDEAL, Microsoft Press are offering the Microsoft® Manual of Style, 4th Edition as a PDF for 50% off using the MSDEAL code. "Maximize the impact and precision of your message! Now in its fourth edition, the Microsoft Manual of Style provides essential guidance to content creators, journalists, technical writers, editors, and everyone else who writes about computer technology. Direct from the Editorial Style Board at Microsoft—you get a comprehensive glossary of both general technology terms and those specific to Microsoft; clear, concise usage and style guidelines with helpful examples and alternatives; guidance on grammar, tone, and voice; and best practices for writing content for the web, optimizing for accessibility, and communicating to a worldwide audience. Fully updated and optimized for ease of use, the Microsoft Manual of Style is designed to help you communicate clearly, consistently, and accurately about technical topics—across a range of audiences and media." There is a sample chapter available for free download at the above link.

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework.

    Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

    I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed by your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

    Performance tuning basics and rules

    Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

    The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
    If I solely look at the Linq query and the code consuming the resultset of the 10 rows, and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

    The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

    One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

    Isolate

    The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

    Analyze

    Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
    This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

    Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Check what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

    Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and for example how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

    .NET profiling

    .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and they thus reveal where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

    As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
    You navigate to the particular part which is slow, start profiling in the profiler, perform the actions which are considered slow in your application, and afterwards you take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

    O/R mapper and RDBMS profiling

    .NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and the database each contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; however, there are also 3rd party tools. Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on, and how long they took.

    Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also collect the time it took to transport data across the network. This is important, because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

    Interpret

    After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part in the task, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at.

    We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

    Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

    Fix

    After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing a partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
    Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to move away from the strict-OO path and execute queries directly against the RDBMS; these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

    Most common performance problems with O/R mappers

    Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. (A small code sketch of the first problem appears after the conclusion below.)

    SELECT N+1 (lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row shown. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this: use eager loading, using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

    Prefetch paths using many path nodes, or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be that the amount of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

    Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

    Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to processing large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on in the datasource control used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep in mind that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the datareader, so it won't fetch all rows even if the query suggests it does.

    Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

    Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do the data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory and then traverse the data in memory to calculate a value.

    Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in memory. Example:

    var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c;

    This will actually fetch all customers into memory and do the filtering in memory, as the Linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

    Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

    Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

    Conclusion

    Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.
    I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.
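
    To make the SELECT N+1 entry in the list above concrete, here is the small, self-contained C# sketch referred to earlier. It deliberately avoids any real O/R mapper API: FakeDb, the entity classes and the query counter are invented stand-ins so the example runs on its own. In a real application the eager branch would be a prefetch path (LLBLGen Pro) or an Include() (Entity Framework); the point is only that the query count for the task drops from N+1 to a small constant, which is exactly what an O/R mapper or RDBMS profiler should confirm after the fix.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Invented entities and an in-memory stand-in for the data access layer,
        // used purely to count how many "queries" each strategy issues.
        class Customer { public int Id; public string CompanyName; }
        class Order    { public int Id; public int CustomerId; }

        class FakeDb
        {
            public int QueriesExecuted;

            readonly List<Customer> customers = Enumerable.Range(1, 100)
                .Select(i => new Customer { Id = i, CompanyName = "Company " + i }).ToList();
            readonly List<Order> orders = Enumerable.Range(1, 500)
                .Select(i => new Order { Id = i, CustomerId = (i % 100) + 1 }).ToList();

            public List<Order> FetchOrders() { QueriesExecuted++; return orders.ToList(); }
            public Customer FetchCustomer(int id) { QueriesExecuted++; return customers.Single(c => c.Id == id); }
            public Dictionary<int, Customer> FetchCustomers(IEnumerable<int> ids)
            { QueriesExecuted++; return customers.Where(c => ids.Contains(c.Id)).ToDictionary(c => c.Id); }
        }

        class SelectNPlusOneDemo
        {
            static void Main()
            {
                var db = new FakeDb();

                // SELECT N+1: one query for the orders, then one query per order for its customer.
                foreach (var o in db.FetchOrders())
                    db.FetchCustomer(o.CustomerId);
                Console.WriteLine("Lazy loading:  " + db.QueriesExecuted + " queries");   // 501

                // Eager loading: one query for the orders, one for all referenced customers.
                db.QueriesExecuted = 0;
                var batch = db.FetchOrders();
                var customersById = db.FetchCustomers(batch.Select(o => o.CustomerId).Distinct());
                Console.WriteLine("Eager loading: " + db.QueriesExecuted + " queries, "
                                  + customersById.Count + " customers");                  // 2 queries
            }
        }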

    Read the article

  • Tokenizing JavaScript: A look at what’s left after minification

    - by InfinitiesLoop
    Minifiers

    JavaScript minifiers are popular these days. Closure, YUI Compressor, and Microsoft Ajax Minifier, to name a few. Using one is essential for any site that uses more than a little script and cares about performance. Each tool of course has advantages and disadvantages. But they all do a pretty good job. The results vary only slightly in the grand scheme of things. Not enough to make so much of a difference that I’d say you should always use one over the other – use whatever fits in with your...(read more)

    Read the article

  • What are some “must have” Ubuntu programs? [closed]

    - by Benjamin
    What essential software do you have on an Ubuntu desktop machine? One per answer please, and if you can, provide the official site link if they have one. Before adding another answer, please search for it first and upvote the existing answer rather than adding it again. To search, use the search box in the upper-right corner. To search the answers of the current question, use inquestion:this. For example: inquestion:this "7-zip" P.S. Inspired by this question. I think that question was very useful for Windows users. I don't mind if the moderators change this question to community wiki.

    Read the article

  • Concentrating on tedious manuals

    - by intuited
    Reading manuals is often boring — sometimes so boring that it becomes all but impossible to focus on the task. Yet... so essential. As frustrating as it is to have to reread the same 2-page section four times in order to finally read the whole thing the whole way through without mentally skipping off to Marseilles in mid-paragraph, it's much worse to realize at some later point that you could have saved yourself hours of work by doing so. What sorts of Jedi mind tricks — and not so Jedi ones — can be employed to keep the mind focused on absorbing this critical matter?

    Read the article

  • ld cannot find -limf -lsvml -lipgo -ldecimal -lirc

    - by Jaguaraci Silva
    I recently installed ICC (the XE Composer 2013 version), but I can't get it to run because the linker fails to find some libraries I don't recognize. I think the reason may be that the ia32-libs package can't be found on Ubuntu 12.04.1. I installed these dependencies:

    sudo apt-get install rpm build-essential
    sudo apt-get install libstdc++6

    but these failed:

    sudo apt-get install ia32-libs*
    sudo apt-get install ia32-libs-multiarch

    I always get this error when trying to compile with ICC:

    ld: cannot find -limf
    ld: cannot find -lsvml
    ld: cannot find -lipgo
    ld: cannot find -ldecimal
    ld: cannot find -lirc
    ld: cannot find -lsvml

    Thanks in advance.

    Read the article

  • Google I/O 2010 - Fireside chat with the Social Web team

    Fireside Chats, Social Web David Glazer, DeWitt Clinton, John Panzer, Joseph Smarr, Sami Shalabi, Todd Jackson, Chris Chabot (moderator) Social is quickly becoming an integral part of how we experience the web, and this is your chance to pick the brains of the people who are working on Buzz, the Buzz API and the underlying open protocols such as Activity Streams and OAuth which are an essential component of a truly open & social web. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 18 0 ratings Time: 01:01:10 More in Science & Technology

    Read the article

  • How to tweet automatically when you push a new package to nuget.org

    - by Daniel Cazzulino
    Wouldn’t it be nice if your followers could be notified whenever you publish a new version of a NuGet package? Currently, nuget.org offers no support for this, but with the following tricks, you can get it working without programming. The essential idea is to use the OData feed that nuget.org exposes to build an RSS feed with new items as you publish them, and have IFTTT do the tweeting from it. The tools we’ll use to get this working are:

    - LinqPad: to examine the nuget.org OData feed at https://nuget.org/api/v2
    - Yahoo Pipes: to tweak the OData feed output so that it looks like a “plain” feed
    - IFTTT: to consume the pipe output and auto-tweet on new items

    Exploring the NuGet OData Feed with LinqPad

    In order to build the query that will become your tweets’ source, we will add a new connection in LinqPad by clicking on the “Add Connection” link:...Read full article
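
    For reference, the LinqPad query you end up with is typically only a few lines. The sketch below is an illustration of its shape only: Packages is the entity set the v2 OData feed exposes, but the property names used here (Id, Version, IsLatestVersion, Published) and the package id are assumptions to verify against the connection's generated schema in LinqPad.

        // Typed into LinqPad after adding a WCF Data Services (OData) connection
        // pointing at https://nuget.org/api/v2. Property names are assumptions;
        // check them in the connection's schema explorer.
        var latest =
            from p in Packages
            where p.Id == "MyCompany.MyPackage"      // hypothetical package id
               && p.IsLatestVersion
            orderby p.Published descending
            select new { p.Id, p.Version, p.Published };

        latest.Take(10).Dump();                      // Dump() renders the result grid in LinqPad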

    Read the article

  • who are software design engineers?

    - by Sepala
    My question is: who are software design engineers? And what is the meaning of the following statement (from a software design engineer job ad)? "Application domain knowledge is essential and....." What is an application domain? SDLC? My hope is to become a software engineer one day (OK, to be honest, more than that: I want to be a legend) who does programming (they say this job has no programming). I am in the final year of my BSc (Hons) in computing and I have completed a foreign diploma majoring in software engineering - Java technologies. Will this job experience help me get a job in my desired position, which is mentioned above, after the degree? Wikipedia and Google never gave a clear, straightforward answer! Please help!

    Read the article

  • Tuning GlassFish for Production

    - by arungupta
    The GlassFish distribution is optimized for developers and needs simple deployment and server configuration changes to provide the performance typically required for production usage. The formal Performance Tuning Guide provides an explanation of capacity planning and tuning tips for the application, GlassFish, the JVM, and the operating system. The GlassFish Server Control (only with the commercial edition) also comes with a Performance Tuner that optimizes the runtime for optimal throughput and scalability. And then there are multiple blogs that provide more insights as well:

    • Optimizing GlassFish for Production (Diego Silva, Mar 2012)
    • GlassFish Production Tuning (Vegard Skjefstad, Nov 2011)
    • GlassFish in Production (Sunny Saxena, Jul 2011)
    • Putting GlassFish v3 in Production: Essential Surviving Guide (JeanFrancois, Nov 2009)
    • A GlassFish Tuning Primer (Scott Oaks, Dec 2007)

    What is your favorite source for GlassFish performance tuning?

    Read the article

  • C++ : Lack of Standardization at the Binary Level

    - by Nawaz
    Why didn't ISO/ANSI standardize C++ at the binary level? There are many portability issues with C++, which exist only because of this lack of standardization at the binary level. Don Box writes (quoting from his book Essential COM, chapter "COM As A Better C++"):

    C++ and Portability

    Once the decision is made to distribute a C++ class as a DLL, one is faced with one of the fundamental weaknesses of C++, that is, lack of standardization at the binary level. Although the ISO/ANSI C++ Draft Working Paper attempts to codify which programs will compile and what the semantic effects of running them will be, it makes no attempt to standardize the binary runtime model of C++. The first time this problem will become evident is when a client tries to link against the FastString DLL's import library from a C++ development environment other than the one used to build the FastString DLL.

    Does this lack of binary standardization bring more benefits or more drawbacks?

    Read the article

  • Critical Patch Update for April 2010 Now Available

    - by Steven Chan
    The Critical Patch Update (CPU) for April 2010 was released on April 13, 2010. Oracle strongly recommends applying the patches as soon as possible. The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. Supported Products that are not listed in the "Supported Products and Components Affected" section of the advisory do not require new patches to be applied. Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. The Critical Patch Update Advisory is available at the following location: Oracle Technology Network. The next four Critical Patch Update release dates are: July 13, 2010; October 12, 2010; January 18, 2011; and April 19, 2011.

    Read the article

  • Autostart app with proper icon in unity launcher

    - by kyleN
    One can autostart an application such that it launches on session start with an xdg desktop file in ~/.config/autostart (or /etc/xdg/autostart). But my application (a python/gtk/webkit/html5 app), when autostarted, has a Unity (and Unity-2D) launcher icon that is a gray question mark, even though:

    - when I find it in the dash, the dash shows the icon I specify in my main desktop file (in /usr/share/applications)
    - when I launch it from the dash, the launcher shows the icon I specify in my main desktop file
    - when I add it as a favorite, the launcher shows the proper icon

    There are two cases where I get the gray question mark icon:

    - autostart
    - launch from a terminal (this use case is not essential though, and doesn't involve the desktop file anyway; but should/does Ubuntu have an xdg desktop file interpreter à la #!/usr/bin/desktop or something?)

    So: what is needed so that the Unity (3D/2D) launcher panel shows the icon specified in an autostart desktop file?

    Read the article

  • How Lead-Acid Batteries Work [Video]

    - by Jason Fitzpatrick
    Every morning when you go out and start your car, you’re pulling power from a lead-acid battery. But how exactly do they work? In this informative video we see how lead plates and sulfuric acid provide the power your car craves. Courtesy of Engineer Guy Video: Bill explains the essential principles of a lead-acid battery. He shows the inside of a motorcycle lead-acid battery, removes the lead and lead-oxide plates and shows how they generate a 2 volt potential difference when placed in sulfuric acid. He explains how the build up of lead sulfate between the plates will make the battery unusable if it is discharged completely, which leads him to a description of how to make a deep cycle battery used for collecting solar energy. [via Make]

    Read the article

  • Cross-language Extension Method Calling

    - by Tom Hines
    Extension methods are a concise way of binding functions to particular types. In my last post, I showed how extension methods can be created in the .NET 2.0 environment. In this post, I discuss calling the extensions from other languages. Most of the differences I find between the .NET languages are mainly syntax, and the declaration of extensions is no exception. There is, however, a distinct difference in how the framework accepts extensions made with C++ compared to C# and VB.

    When calling the C++ extension from C#, the compiler will SOMETIMES say there is no definition for DoCPP, with the error:

    'string' does not contain a definition for 'DoCPP' and no extension method 'DoCPP' accepting a first argument of type 'string' could be found (are you missing a using directive or an assembly reference?)

    If I recompile, the error goes away. The strangest problem with calling the C++ extension from C# is that I first must make SOME type of reference to the class BEFORE using the extension, or it will not be recognized at all. So, if I first call DoCPP() as a static method, the extension works fine later. If I make a dummy instantiation of the class, it works. If I have no forward reference to the class, I get the same error as before and recompiling does not fix it. It seems as if none of this is supposed to work across the languages. I have made a few work-arounds to get the examples to compile and run. Note the following examples.

    Extension in C#

    using System;

    namespace Extension_CS
    {
       public static class CExtension_CS
       {
          // In C#, the "this" keyword is the key.
          public static void DoCS(this string str)
          {
             Console.WriteLine("CS\t{0:G}\tCS", str);
          }
       }
    }

    Extension in C++

    /****************************************************************************\
     * Here is the C++ implementation.  It is the least elegant and most quirky,
     * but it works.
    \****************************************************************************/
    #pragma once
    using namespace System;
    using namespace System::Runtime::CompilerServices;  // <- Essential
    // Reference: System.Core.dll                       // <- Essential

    namespace Extension_CPP
    {
       public ref class CExtension_CPP
       {
       public:
          [Extension] // or [ExtensionAttribute] /* either works */
          static void DoCPP(String^ str)
          {
             Console::WriteLine("C++\t{0:G}\tC++", str);
          }
       };
    }

    Extension in VB

    ' Here is the VB implementation.  This is not as elegant as the C#, but it's
    ' functional.
    Imports System.Runtime.CompilerServices

    Public Module modExtension_VB 'Extension methods can be defined only in modules.
       <Extension()> _
       Public Sub DoVB(ByVal str As String)
          Console.WriteLine("VB" & Chr(9) & "{0:G}" & Chr(9) & "VB", str)
       End Sub
    End Module

    Calling program in C#

    /******************************************************************************\
     * Main calling program
     * Intellisense and VS2008 complain about the CPP implementation, but with a
     * little duct-tape, it works just fine.
    \******************************************************************************/
    using System;
    using Extension_CPP;
    using Extension_CS;
    using Extension_VB; // virtual namespace

    namespace TestExtensions
    {
       public static class CTestExtensions
       {
          /**********************************************************************\
           * For some reason, this needs a direct reference into the C++ version
           * even though it does nothing other than add a null reference.
           * The constructor provides the fake usage to please the compiler.
          \**********************************************************************/
          private static CExtension_CPP x = null;   // <- DUCT_TAPE!

          static CTestExtensions()
          {
             // Fake usage to stop the compiler from complaining
             if (null != x) { } // <- DUCT_TAPE
          }

          static void Main(string[] args)
          {
             string strData = "from C#";
             strData.DoCPP();
             strData.DoCS();
             strData.DoVB();
          }
       }
    }

    Calling program in VB

    Imports Extension_CPP
    Imports Extension_CS
    Imports Extension_VB
    Imports System.Runtime.CompilerServices

    Module TestExtensions_VB
       <Extension()> _
       Public Sub DoCPP(ByVal str As String)
          'Framework does not treat this as an extension, so use the static
          CExtension_CPP.DoCPP(str)
       End Sub

       Sub Main()
          Dim strData As String = "from VB"
          strData.DoCS()
          strData.DoVB()
          strData.DoCPP() 'fake
       End Sub
    End Module

    Calling program in C++

    // TestExtensions_CPP.cpp : main project file.
    #include "stdafx.h"

    using namespace System;
    using namespace Extension_CPP;
    using namespace Extension_CS;
    using namespace Extension_VB;

    void main(void)
    {
       /*******************************************************\
        * Extension methods are called like static methods
        * when called from C++.  There may be a difference in
        * syntax when calling the VB extension, as VB extensions
        * are embedded in Modules instead of classes.
       \*******************************************************/
       String^ strData = "from C++";
       CExtension_CPP::DoCPP(strData);
       CExtension_CS::DoCS(strData);
       modExtension_VB::DoVB(strData); // since Extensions go in Modules
    }

    Read the article

  • Big Data – Operational Databases Supporting Big Data – RDBMS and NoSQL – Day 12 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the Cloud in the Big Data story. In this article we will understand the role of operational databases in the Big Data story. Even though we keep on talking about Big Data architecture, it is extremely crucial to understand that a Big Data system can't just exist in isolation. Many needs of the business can only be fulfilled with the help of operational databases. Just having a system which can analyze big data may not solve every single data problem.

    Real World Example

    Think about it this way: you are using Facebook and you have just updated your information about your current relationship status. In the next few seconds the same information is also reflected in the timeline of your partner as well as a few of your immediate friends. After a while you will notice that the same information is now also available to your remote friends. Later on, when someone searches for all the relationship changes of their friends, your change of relationship will also show up in the same list. Now here is the question – do you think Big Data architecture is doing every single one of these changes? Do you think that the immediate reflection of your relationship change to your family members is also because of the technology used in Big Data? Actually the answer is that Facebook uses MySQL to do the various updates in the timeline as well as the various events we trigger on their homepage. It is really difficult to part from operational databases in any real world business.

    Now we will see a few examples of operational databases:

    - Relational Databases (this blog post)
    - NoSQL Databases (this blog post)
    - Key-Value Pair Databases (tomorrow's post)
    - Document Databases (tomorrow's post)
    - Columnar Databases (the day after's post)
    - Graph Databases (the day after's post)
    - Spatial Databases (the day after's post)

    Relational Databases

    We have discussed the RDBMS role in the Big Data story in detail earlier, so we will not cover it extensively over here. Relational databases are pretty much everywhere in most businesses which have been around for many years. The importance and existence of the relational database are always going to be there as long as there is meaningful structured data around. There are many different kinds of relational databases, for example Oracle, SQL Server, MySQL and many others. If you are looking for an open source and widely accepted database, I suggest trying MySQL, as it has been very popular in the last few years. I also suggest you try out PostgreSQL as well. Besides many other essential qualities, PostgreSQL has very interesting licensing policies. The PostgreSQL license allows modifications and distribution of the application in open or closed (source) form. One can make any modifications and keep them private, as well as contribute them to the community. I believe this one quality makes it much more interesting to use, and it will play a very important role in the future.

    Nonrelational Databases (NoSQL)

    We have also covered nonrelational databases in earlier blog posts. NoSQL actually stands for Not Only SQL databases. There are plenty of NoSQL databases out in the market, and selecting the right one is always very challenging. Here are a few of the properties which are essential to consider when selecting the right NoSQL database for operational purposes:
    - Data and Query Model
    - Persistence of Data and Design
    - Eventual Consistency
    - Scalability

    Though all of the above properties are interesting to have in any NoSQL database, the one which attracts me the most is Eventual Consistency.

    Eventual Consistency

    RDBMS uses ACID (Atomicity, Consistency, Isolation, Durability) as a key mechanism for ensuring data consistency, whereas nonrelational DBMS uses BASE for the same purpose. BASE stands for Basically Available, Soft state and Eventual consistency. Eventual consistency is widely deployed in distributed systems. It is a consistency model used in distributed computing which expects the unexpected, often. In a large distributed system, there are always various nodes joining and various nodes being removed, as they are often running on commodity servers. This happens either intentionally or accidentally. Even though one or more nodes are down, it is expected that the entire system still functions normally. Applications should be able to do various updates as well as retrieval of the data successfully without any issue. Additionally, this also means that the system is expected to return the same updated data anytime from all the functioning nodes. Irrespective of when any node joins the system, if it is marked to hold some data it should contain the same updated data eventually.

    As per Wikipedia - eventual consistency is a consistency model used in distributed computing that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value. In other words: informally, if no additional updates are made to a given data item, all reads of that item will eventually return the same value.

    Tomorrow

    In tomorrow's blog post we will discuss various other operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
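
    To make the eventual consistency idea concrete, here is a small, self-contained C# sketch of a last-write-wins register. It is purely illustrative and not the mechanism of any particular NoSQL product (real systems add vector clocks, read repair, hinted handoff and so on): writes land on a single replica, reads can be stale for a while, and an anti-entropy pass makes all replicas converge once updates stop.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Toy last-write-wins register: each replica keeps a value plus the timestamp
        // of the write that produced it.
        class Replica
        {
            public string Value;
            public long Timestamp;

            public void Write(string value, long ts) { Value = value; Timestamp = ts; }

            // Merge keeps whichever side saw the most recent write (last write wins).
            public void MergeFrom(Replica other)
            {
                if (other.Timestamp > Timestamp) { Value = other.Value; Timestamp = other.Timestamp; }
            }
        }

        class EventualConsistencyDemo
        {
            static void Main()
            {
                var replicas = new List<Replica> { new Replica(), new Replica(), new Replica() };

                replicas[0].Write("single", 1);    // a status update lands on node 0
                replicas[2].Write("married", 2);   // a later update lands on node 2

                // Right now reads disagree depending on which node you ask (stale reads).
                Console.WriteLine(string.Join(" | ", replicas.Select(r => r.Value ?? "(none)")));

                // Anti-entropy pass: every node exchanges state with every other node.
                foreach (var a in replicas)
                    foreach (var b in replicas)
                        a.MergeFrom(b);

                // With no new writes, all replicas now return the last updated value.
                Console.WriteLine(string.Join(" | ", replicas.Select(r => r.Value)));   // married | married | married
            }
        }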

    Read the article

  • Microsoft Press Deal of the day 4/Sep/2012 - Programming Microsoft® SQL Server® 2012

    - by TATWORTH
    Today's deal of the day from Microsoft Press at http://shop.oreilly.com/product/0790145322357.do?code=MSDEAL is Programming Microsoft® SQL Server® 2012: "Your essential guide to key programming features in Microsoft® SQL Server® 2012. Take your database programming skills to a new level—and build customized applications using the developer tools introduced with SQL Server 2012. This hands-on reference shows you how to design, test, and deploy SQL Server databases through tutorials, practical examples, and code samples. If you’re an experienced SQL Server developer, this book is a must-read for learning how to design and build effective SQL Server 2012 applications."

    Read the article

  • Interesting/Innovative Open Source tools for indie games

    - by Gastón
    Just out of curiosity, I want to know about open source tools or projects that can add some interesting features to indie games, preferably features that could otherwise only be found in big-budget games. EDIT: As suggested by The Communist Duck and Joe Wreschnig, I'm putting the examples as answers. EDIT 2: Please do not post tools like PyGame, Inkscape, Gimp, Audacity, Slick2D, Phys2D, Blender (except for interesting plugins) and the like. I know they are great tools/libraries, and some would argue they are essential to developing good games, but I'm looking for rarer projects. It could be something really specific or niche, like generating realistic trees and plants, or realistic AI for animals.

    Read the article

  • I'm building a theme for tumblr and the {HasPages} doesn't seem to work properly.

    - by Muhammad
    I've put up the HTML draft of the theme so far, with minor CSS edits. Currently I have all the block posts and everything else that's essential to a tumblr theme, but I can't seem to get the {HasPages} block to work properly. I've tested it on a different tumblr, also. There are pages created, and I have already provided some basic CSS for it just in case. But there isn't anything showing up. Has anyone had this problem, and if so, is there a solution I'm missing? The code to display the pages is included.

    {block:HasPages}
    <ul>
      <li><a href="/">Home</a></li>
      {block:Pages}<li><a href="{URL}">{Label}</a></li>{/block:Pages}
    </ul>
    {/block:HasPages}

    Also, is this a valid webmasters' question? I'm not sure.

    Read the article

  • Oracle VM Templates Enable Oracle Real Application Clusters Deployment Up to 10 Times Faster than VMware vSphere

    - by Angela Poth
    Oracle VM - Quantifying the Value of Application-Driven Deployment

    The recently published report by the Evaluator Group, “Oracle VM – Quantifying The Value of Application-Driven Virtualization,” validates Oracle’s ease of use and time savings over VMware vSphere 5. The report found that users can deploy Oracle Real Application Clusters up to 10 times faster with Oracle VM templates than with VMware vSphere. Using Oracle VM templates, users can deploy E-Business Suite nearly 7 times faster than with VMware vSphere. "Oracle VM’s application-driven architecture was built to enable rapid deployment of enterprise applications, with simplified integrated lifecycle management, to fully support Oracle applications. Our testing has demonstrated a 90 percent improvement deploying Oracle Real Application Clusters 11g R2 using Oracle VM Templates and a similar improvement of 85 percent for the Oracle E-Business Suite 12.1.1. This clearly shows that Templates can provide real value to customers and should become an essential component for enabling rapid enterprise application deployment and management," said Leah Schoeb, Senior Partner, Evaluator Group. Read the Press Release. Get a copy of the Evaluator Group Report: Oracle VM – Quantifying The Value of Application-Driven Virtualization.

    Read the article

  • Teaching programming to a non-CS graduate

    - by Shahzada
    I have a couple of friends interested in computer programming, but they're non-CS graduates; some of them have very little experience in the software testing field (some of them took some basic software testing courses). I am going to be working with them on teaching basic computer programming and computer science fundamentals (data structures etc). My questions are: What language should I start with? What are the essential computer science topics that I should cover before jumping them into computer programming? What readings can I incorporate to make the topic interesting and non-overwhelming? If we want to spend a year on it, what topics should take priority and must be covered in 12 months? Again, these are non-computer-science folks, and I want to keep the learning as much fun as possible. Thanks everyone.

    Read the article
