Search Results

Search found 186 results on 8 pages for 'yaakov davis'.

Page 3/8 | < Previous Page | 1 2 3 4 5 6 7 8  | Next Page >

  • SQL Server Optimizer Malfunction?

    - by Tony Davis
    There was a sharp intake of breath from the audience when Adam Machanic declared the SQL Server optimizer to be essentially "stuck in 1997". It was during his fascinating "Query Tuning Mastery: Manhandling Parallelism" session at the recent PASS SQL Summit. Paraphrasing somewhat, Adam (blog | @AdamMachanic) offered a convincing argument that the optimizer often delivers flawed plans based on assumptions that are no longer valid with today's hardware. In 1997, when Microsoft engineers re-designed the database engine for SQL Server 7.0, SQL Server got its initial implementation of a cost-based optimizer. Up to SQL Server 2000, the developer often had to deploy a steady stream of hints in SQL statements to combat the occasionally wilful plan choices made by the optimizer. However, with each successive release, the optimizer has evolved and improved in its decision-making. It is still prone to the occasional stumble when we tackle difficult problems, join large numbers of tables, perform complex aggregations, and so on, but for most of us, most of the time, the optimizer purrs along efficiently in the background. Adam, however, challenged the assumption that the current optimizer is competent at providing the most efficient plans for our more complex analytical queries, and in particular at offering up correctly parallelized plans. He painted a picture of a present where complex analytical queries have become ever more prevalent; where disk IO is ever faster, so that reads from disk come into buffer cache faster than ever; where the improving RAM-to-data ratio means that we have a better chance of finding our data in cache. Most importantly, we have more CPUs at our disposal than ever before. To get these queries to perform, we not only need to have the right indexes, but also to be able to split the data up into subsets and spread its processing evenly across all these available CPUs. Improvements such as support for ColumnStore indexes are taking things in the right direction, but, unfortunately, deficiencies in the current optimizer mean that SQL Server is yet to be able to exploit properly all those extra CPUs. Adam's contention was that the current optimizer uses essentially the same costing model for many of its core operations as it did back in the days of SQL Server 7, based on assumptions that are no longer valid. One example he gave was a "slow disk" bias that may have been valid back in 1997 but certainly is not on modern disk systems. Essentially, the optimizer assesses the relative cost of serial versus parallel plans based on the assumption that there is no IO cost benefit from parallelization, only CPU. It assumes that a single request will saturate the IO channel, and so a query would not run any faster if we parallelized IO, because the disk system simply wouldn't be able to handle the extra pressure. As such, the optimizer often decides that a serial plan is lower cost, even in cases where a parallel plan would improve performance dramatically. It was challenging and thought-provoking stuff, as were his techniques for driving parallelism through query logic based on subsets of rows that define the "grain" of the query. I highly recommend you catch the session if you missed it. I'm interested to hear, though, when and how often people feel the force of the optimizer's shortcomings. Barring mistakes, such as stale statistics, how often do you feel the optimizer fails to find the plan you think it should, and what are the most common causes? Is it a fight to induce it toward parallelism? Combating unexpected plans arising from table partitioning? Something altogether more prosaic? Cheers, Tony.
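    As a rough, generic illustration of the knobs involved (this is not Adam's technique, and the table and query below are hypothetical), the cost threshold at which the optimizer will even consider a parallel plan is a server-wide setting, and the degree of parallelism for an individual query can be capped, though not forced, with a hint:

        -- View the cost threshold for parallelism (an advanced option; default is 5)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'cost threshold for parallelism';

        -- Hypothetical aggregation query; MAXDOP caps, but does not force,
        -- the number of schedulers a parallel plan may use
        SELECT   CustomerID, SUM(OrderTotal) AS Total
        FROM     dbo.Orders
        GROUP BY CustomerID
        OPTION   (MAXDOP 8);

    Whether the optimizer actually chooses a parallel plan still comes down to its cost estimates, which is precisely the behavior Adam was questioning.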

    Read the article

  • What should a game have in order to keep humans playing it?

    - by Adam Davis
    In many entertainment professions there are suggestions, loose rules, or general frameworks one follows that appeal to humans in one way or another. For instance, many movies and books follow the monomyth. In video games I find many types of games that attract people in different ways. Some people are addicted to Facebook gem-matching games. Others can't get enough of FPS games. Once in a while, though, you find a game that seems to transcend stereotypes and appeals almost immediately to everyone who plays it. For instance, Plants Versus Zombies seems to have a very, very large demographic of players. There are other games with a similar reach. I'm curious what books, blogs, etc. there are that explore these game types and styles, and try to suss out one or more popular frameworks/styles that satisfy players while keeping them coming back for more.

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a Service-Oriented Architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure. However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based, and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many, and impossible for some, due to compliance and security issues, the need for direct control over data, and so on. In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. Firstly, Windows Azure now offers a wide range of storage options: as well as SQL Azure Database (SAD), there is Azure storage, the unstructured 'BLOB and Entity-Attribute-Value' NoSQL alternative (which equates more closely with folders and files than with a database), and there are services such as OData; developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud, along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders. Once an order is captured, the financial side can be processed in our own data center. In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you. Cheers, Tony.

    Read the article

  • Playing part of a sfx audio file in HTML5 using WebAudio

    - by Matthew James Davis
    I have compiled all of my sound effects into one sequenced .ogg file. I have the start and stop times for each sound effect. How do I play the individual effects? That is, how do I play part of an audio file? More specifically, I've created a dictionary { 'sword_hit': { src: 'sfx.ogg', start: 265, // ms length: 212 // ms } } that my play_sound() function can use to look up 'sword_hit' and play the correct audio file at the correct start time for the correct duration. I simply need to know how to tell the WebAudio API to start playing at start ms and only play for length ms.

    Read the article

  • Comparing Apples and Pairs

    - by Tony Davis
    A recent study, High Costs and Negative Value of Pair Programming, by Capers Jones, pulls no punches in its assessment of the costs-to-benefits ratio of pair programming: two programmers working together, at a single computer, rather than separately. He implies that pair programming is a method rushed into production on a wave of enthusiasm for Agile or Extreme Programming, without any real regard for its effectiveness. Despite admitting that his data represented a far from complete study of the economics of pair programming, his conclusions were stark: it was 2.5 times more expensive, resulted in a 15% drop in productivity, and offered no significant quality benefits. The author provides a more scientific analysis than Jon Evans' Pair Programming Considered Harmful, but the theme is the same. In terms of upfront coding costs, pair programming is surely more expensive. The claim of productivity loss is dubious and contested by other studies. The third claim, though, did surprise me. The author's data suggests that if both the pair and the individual programmers employ static code analysis and testing, then there is no measurable difference in the resulting code quality, in terms of defects per function point. In other words, pair programming incurs a massive extra cost for no tangible return on investment. There were, inevitably, many criticisms of his data and his conclusions, a few of which are persuasive. Firstly, that the driver/observer model of pair programming, on which the study bases its findings, is far from the most effective. For example, many find Ping-Pong pairing, based on use of test-driven development, far more productive. Secondly, that it doesn't distinguish between "expert" and "novice" pair programmers - that is, independently of other programming skills, how skilled an individual was at pair programming. Thirdly, that his measure of quality is too narrow. This point rings true, certainly at Red Gate, where developers don't pair program all the time, but use the method in short bursts, while tackling a tricky problem and needing a fresh perspective on the best approach, or more in-depth knowledge in a particular domain. All of them argue that pair programming, and collective code ownership, offers significant rewards, if not in terms of immediate "bug reduction", then in removing the likelihood of single points of failure, and in improving the overall quality and longer-term adaptability/maintainability of the design. There is also a massive learning benefit for both participants. One developer told me how he once worked in the same team over consecutive summers, the first time with no pair programming and the second time pair programming two-thirds of the time, and described the increased rate of learning the second time as "phenomenal". There are a great many theories on how we should develop software (Scrum, XP, Lean, etc.), but woefully little scientific research into their effectiveness. For a group that spends so much time crunching other people's data, I wonder if developers spend enough time crunching data about themselves. Capers Jones' data may be incomplete, but it should cause a pause for thought, especially for any large IT departments, supporting commerce and industry, who are considering pair programming. It certainly shouldn't discourage teams from exploring new ways of developing software, as long as they also think about how to gather hard data to gauge their effectiveness.

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80s". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship". I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we, who work with SQL Server, database professionals or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen to be holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards, where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views, out of necessity. And yet, as database professionals, committed to supporting the best databases for the business, whatever they are now and in the future, surely our heart should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. And it should sink further still when we read notes in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
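    To make the guard-clause example concrete (the table name is purely illustrative), the standards-based check and the vendor-specific one look like this:

        -- Standards-based: portable, in principle, across RDBMSs
        IF EXISTS (SELECT 1
                   FROM   INFORMATION_SCHEMA.TABLES
                   WHERE  TABLE_SCHEMA = 'dbo'
                   AND    TABLE_NAME   = 'Orders')
            PRINT 'Table exists';

        -- Vendor-specific: the approach the documentation steers us toward
        IF OBJECT_ID(N'dbo.Orders', N'U') IS NOT NULL
            PRINT 'Table exists';

    Both run happily on SQL Server; only the first has any claim to portability, which is rather the point.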

    Read the article

  • You are or will be a laid-off programmer - what should you have done a year ago, and what do you do right now, tomorrow, and next week?

    - by Adam Davis
    Many programmers, software engineers, and other technology professionals are out of work, facing layoffs, or are unprepared for layoffs even though they feel secure right now. What should every programmer do right now (even if secure in their current job) to prepare themselves for layoffs down the road? If your boss came to your cubicle while you read this and laid you off: What would you do immediately after? What would you do tomorrow? What would you do next week? It is obvious that one should always have an up-to-date resume, always get recommendations from people when they see you at your best (not when you're looking for a new job), and so on. What are the things, step by step, that every programmer should do (or should consider doing) long before they are laid off, when they're laid off, and shortly after being laid off? This is a question with many possible facets. While I want to encourage discussion to center around programming-career-based answers, please reconsider before downvoting someone because they're thinking in terms of how they're going to avoid going into debt. Bonus catch-22 type question: You can study a new language or technology while out of work, but most places want you to have more than 1-2 months' experience in a working environment, not just from a learning exercise. Is it worthwhile to place a priority on new (ideally in-demand) skills, or should you instead hone existing skills?

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part IV

    - by Ted Davis
    Welcome to the final Oracle Linux Partner Pavilion Spotlight, Part IV. Two days left till the Big Show. You are gearing up. We are gearing up. You can feel the excitement. We can feel the excitement. This. Will. Be. The. Best. Show. EVER. See you at the Partner Pavilion (Moscone South, #1033) at Oracle OpenWorld. - Oracle Linux / Oracle VM Team. HP and Oracle are pleased to announce another Oracle Validated Configuration based on the ProLiant DL980 server. Many choose to deploy Oracle workloads on the ProLiant DL980 based on the cost/performance ratio they achieve running the Oracle Linux Unbreakable Enterprise Kernel. You can be confident that Oracle Validated Configurations based on ProLiant servers will help you achieve your most demanding performance goals. QLogic: The QLogic-Oracle partnership spans over 20 years, resulting in the most comprehensive line of Oracle Linux I/O adapter technology. Interface options include Ethernet, Fibre Channel, and FCoE. Host-side connectivity is offered in both low-profile PCIe and Express Module PCIe form factors. QLogic software drivers are jointly qualified and "in-box" with Oracle Linux 5.x, 6.x and Oracle VM, enabling simplified installation and management while simultaneously taking risk out of the solution. Bringing innovations such as NPIV, T10-PI, and intelligent caching adapter technology to the Oracle Linux environment further strengthens the QLogic advantage. A big thank you to all of our Oracle Linux Partner Pavilion participants. We - and they - look forward to meeting you next week at Oracle OpenWorld. If you've missed our three previous Partner Spotlights, here are the links: Part I, Part II, Part III.

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Data Model Dissonance

    - by Tony Davis
    So often, at the start of the development of database applications, there is a premature rush to the keyboard. Unless, before we get there, we've mapped out and agreed the three data models - the Conceptual, the Logical and the Physical - the inevitable refactoring will dog development work. It pays to get the data models sorted out up-front, however 'agile' you profess to be. The hardest model to get right, the most misunderstood, and the one most neglected by the various modeling tools, is the conceptual data model, and yet it is critical to all that follows. The conceptual model distils what the business understands about itself, and the way it operates. It represents the business rules that govern the required data, its constraints and its properties. The conceptual model uses the terminology of the business and defines the most important entities and their inter-relationships. Don't assume that the organization's understanding of these business rules is consistent or accurate. Too often, one department has a subtly different understanding of what an entity means and what it stores, from another. If our conceptual data model fails to resolve such inconsistencies, it will reduce data quality. If we don't collect and measure the raw data in a consistent way across the whole business, how can we hope to perform meaningful aggregation? The conceptual data model has more to do with business than technology, and as such, developers often regard it as a worthy but rather arcane ceremony, like saluting the flag or only eating fish on Friday. However, the consequences of getting it wrong have a direct and painful impact on many aspects of the project. If you adopt a silo-based (a.k.a. domain-driven) approach to development, you are still likely to suffer by starting with an incomplete knowledge of the domain. Even when you have surmounted these problems, so that the data entities accurately reflect the business domain that the application represents, there are likely to be dire consequences from abandoning the goal of a shared, enterprise-wide understanding of the business. In reading this, you may recall experiences of the consequences of getting the conceptual data model wrong. I believe that Phil Factor, for example, witnessed the abandonment of a multi-million dollar banking project due to an inadequate conceptual analysis of how the bank defined a 'customer'. We'd love to hear of any examples you know of development projects poleaxed by errors in the conceptual data model. Cheers, Tony

    Read the article

  • Why does 12.04 try but fail to hibernate, even after I enabled hibernation?

    - by Roger Davis
    Regarding the quoted info below (after the -----), from another post: my system will try (as is said below) to hibernate, but it won't get all the way there. Hard drive activity stops, but it does not shut down. If I turn the power off, then back on, it will start, but I have to "restore previous session" in the browser, and other open apps don't restart, with the accompanying hassle. So, because it tries, will the suggested fix then cause it to work correctly, or is the literal "try" not really what he means?!? PLEASE NOTE - this is a desktop system, not a laptop.
    -----
    Before enabling hibernation, please try to test whether it works correctly by running pm-hibernate in a terminal. The system will try to hibernate. If you are able to start the system again, then you are more or less safe to add an override. To do so, start editing:
        sudo nano /etc/polkit-1/localauthority/50-local.d/com.ubuntu.enable-hibernate.pkla
    Fill it with this:
        [Re-enable hibernate by default]
        Identity=unix-user:*
        Action=org.freedesktop.upower.hibernate
        ResultActive=yes
    Save by pressing Ctrl-O and exit nano by pressing Ctrl-X. Restart and hibernation is back!

    Read the article

  • The long road to bug-free software

    - by Tony Davis
    The past decade has seen a burgeoning interest in functional programming languages such as Haskell or, in the Microsoft world, F#. Though still on the periphery of mainstream programming, functional programming concepts are gradually seeping into the imperative C# language (for example, Lambda expressions have their root in functional programming). One of the more interesting concepts from functional programming languages is the use of formal methods, the lofty ideal behind which is bug-free software. The idea is that we write a specification that describes exactly how our function (say) should behave. We then prove that our function conforms to it, and in doing so have proved beyond any doubt that it is free from bugs. All programmers already use one form of specification, specifically their programming language's type system. If a value has a specific type then, in a type-safe language, the compiler guarantees that value cannot be an instance of a different type. Many extensions to existing type systems, such as generics in Java and .NET, extend the range of programs that can be type-checked. Unfortunately, type systems can only prevent some bugs. To take a classic problem of retrieving an index value from an array, since the type system doesn't specify the length of the array, the compiler has no way of knowing that a request for the "value of index 4" from an array of only two elements is "unsafe". We restore safety via exception handling, but the ideal type system will prevent us from doing anything that is unsafe in the first place and this is where we start to borrow ideas from a language such as Haskell, with its concept of "dependent types". If the type of an array includes its length, we can ensure that any index accesses into the array are valid. The problem is that we now need to carry around the length of arrays and the values of indices throughout our code so that it can be type-checked. In general, writing the specification to prove a positive property, even for a problem very amenable to specification, such as a simple sorting algorithm, turns out to be very hard and the specification will be different for every program. Extend this to writing a specification for, say, Microsoft Word and we can see that the specification would end up being no simpler, and therefore no less buggy, than the implementation. Fortunately, it is easier to write a specification that proves that a program doesn't have certain, specific and undesirable properties, such as infinite loops or accesses to the wrong bit of memory. If we can write the specifications to prove that a program is immune to such problems, we could reuse them in many places. The problem is the lack of specification "provers" that can do this without a lot of manual intervention (i.e. hints from the programmer). All this might feel a very long way off, but computing power and our understanding of the theory of "provers" advances quickly, and Microsoft is doing some of it already. Via their Terminator research project they have started to prove that their device drivers will always terminate, and in so doing have suddenly eliminated a vast range of possible bugs. This is a huge step forward from saying, "we've tested it lots and it seems fine". What do you think? What might be good targets for specification and verification? SQL could be one: the cost of a bug in SQL Server is quite high given how many important systems rely on it, so there's a good incentive to eliminate bugs, even at high initial cost. 
[Many thanks to Mike Williamson for guidance and useful conversations during the writing of this piece] Cheers, Tony.

    Read the article

  • Oracle Linux Partner Pavilion Spotlight - Part II

    - by Ted Davis
    As we draw closer to the first day of Oracle OpenWorld, starting in less than a week, we continue to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033). We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. In today's post we highlight three additional Oracle Linux / Oracle VM partners from the pavilion. Micro Focus delivers mainframe solutions software and software delivery tools with its Borland products. These tools are grouped under the following solutions: Analysis and testing tools for JDeveloper - Micro Focus Enterprise Analyzer is key to the success of application overhaul and modernization strategies by ensuring that they are based on a solid knowledge foundation. It reveals the reality of enterprise application portfolios and the detailed constructs of business applications. COBOL for Oracle Database, Oracle Linux, and Tuxedo - Micro Focus Visual COBOL delivers the next generation of COBOL development and deployment. It brings the productivity of the Eclipse IDE to COBOL, and provides the ability to deploy key business-critical COBOL applications to Oracle Linux, both natively and under a JVM. Migration and modernization tooling for mainframes - Enterprise application knowledge, development, test and workload re-hosting tools significantly improve the efficiency of business application delivery, enabling CIOs and IT leaders to modernize application portfolios and target platforms such as Oracle Linux. When it comes to Oracle Linux database environments, supporting high transaction rates with minimal response times is no longer just a goal. It's a strategic imperative. The "data deluge" is impacting the ability of databases and other strategic applications to access data and provide real-time analytics and reporting. As such, customer demand for accelerated application performance is increasing. Visit LSI at the Oracle Linux Pavilion, #733, to find out how LSI Nytro Application Acceleration products are designed from the ground up for database acceleration. Our intelligent solid-state storage solutions help to eliminate I/O bottlenecks, increase throughput and enable Oracle customers to achieve the highest levels of DB performance. Accelerate your Exadata success with Teleran. Teleran's software solutions for Oracle Exadata and Oracle Database reduce the cost, time and effort of migrating and consolidating applications on Exadata. In addition, Teleran delivers visibility and control solutions for BI/data warehouse performance and user management that ensure service levels and cost efficiency. Teleran will demonstrate these solutions at the Oracle OpenWorld Linux Pavilion: Consolidation Accelerator - reduces the cost, time and risk of migrating and consolidating applications on Exadata. Application Readiness - identifies legacy application performance enhancements needed to take advantage of Exadata performance features. Workload Accelerator - identifies and clusters workloads for faster performance on Exadata. Application Visibility and Control - improves performance, user productivity, and alignment to business objectives while reducing support and resource costs. Thanks for reading today's Partner Spotlight. Three more partners will be highlighted tomorrow. If you missed our first Partner Spotlight, check it out here.

    Read the article

  • Consuming too much power on a Dell 15R

    - by Daniel Davis
    I'm new to Ubuntu. I've just installed Ubuntu 11.10, and I must say I really like it a lot. I'm getting used to Ubuntu faster than I thought, but the only issue I'm having is power consumption. I have a Dell 15R with BIOS A07, and I uninstalled Ubuntu 11.04 because of some glitches I hated. None of those appear on this Ubuntu, but now my computer discharges way too quickly. When I check the power icon, it states that I have about 3:45 remaining, which is fairly close to what Win7 and Ubuntu 11.04 used to tell me, but 15 min later I check again and it gives me only 1:15 remaining. Also, my computer seems to get a lot hotter than it used to. Are there more options to control the power usage on my laptop?

    Read the article

  • Instantiating objects in Java

    - by Davis Naglis
    I'm learning Java now from scratch, and when I started to learn about instantiating objects, I didn't understand in which cases I need to instantiate objects. For example, I'm studying from a TutsPlus course about it, and there is an example involving a "Rectangle" class. The instructor says that it needs to be instantiated. So I started to wonder: when do I need to instantiate objects when writing Java code?

    Read the article

  • Concurrent Affairs

    - by Tony Davis
    I once wrote an editorial, multi-core mania, on the conundrum of ever-increasing numbers of processor cores, but without the concurrent programming techniques to get anywhere near exploiting their performance potential. I came to the controversial conclusion that, while the problem loomed for all procedural languages, it was not a big issue for the vast majority of programmers. Two years later, I still think most programmers don't concern themselves overly with this issue, but I do think that's a bigger problem than I originally implied. Firstly, is the performance boost from writing code that can fully exploit all available cores worth the cost of the additional programming complexity? Right now, with quad-core processors that, at best, can make our programs four times faster, the answer is still no for many applications. But what happens in a few years, as the number of cores grows to 100 or even 1000? At this point, it becomes very hard to ignore the potential gains from exploiting concurrency. Possibly, I was optimistic to assume that, by the time we have 100-core processors, and most applications really needed to exploit them, some technology would be around to allow us to do so with relative ease. The ideal solution would be one that allows programmers to forget about the problem, in much the same way that garbage collection removed the need to worry too much about memory allocation. From all I can find on the topic, though, there is only a remote likelihood that we'll ever have a compiler that takes a program written in a single-threaded style and "auto-magically" converts it into an efficient, correct, multi-threaded program. At the same time, it seems clear that what is currently the most common solution, multi-threaded programming with shared memory, is unsustainable. As soon as a piece of state can be changed by a different thread of execution, the potential number of execution paths through your program grows exponentially with the number of threads. If you have two threads, each executing n instructions, then there are (2n)!/(n!n!) possible "interleavings" of those instructions - already 184,756 for n = 10. Of course, many of those interleavings will have identical behavior, but several won't. Not only does this make understanding how a program works an order of magnitude harder, but it will also result in irreproducible, non-deterministic bugs. And of course, the problem will be many times worse when you have a hundred or a thousand threads. So what is the answer? All of the possible alternatives require a change in the way we write programs and, currently, seem to be plagued by performance issues. Software transactional memory (STM) applies the ideas of database transactions, and optimistic concurrency control, to memory. However, working out how to break down your program into sufficiently small transactions, so as to avoid contention issues, isn't easy. Another approach is concurrency with actors, where instead of having threads share memory, each thread runs in complete isolation, and communicates with others by passing messages. It simplifies concurrent programs, but still has performance issues if the threads need to operate on the same large piece of data. There are doubtless other possible solutions that I haven't mentioned, and I would love to know to what extent you, as a developer, are considering the problem of multi-core concurrency, what solution you currently favor, and why. Cheers, Tony.

    Read the article

  • Oracle Linux Partner Pavilion Spotlight III

    - by Ted Davis
    Three days until Oracle OpenWorld 2012 begins. The anticipation and excitement are building. In today's spotlight we are presenting an additional three partners exhibiting in the Oracle Linux Partner Pavilion at Oracle OpenWorld (Booth #1033). Fujitsu will showcase a Gold tower system representing the one-millionth PRIMERGY server shipped, highlighting Fujitsu's position as the #4 server vendor worldwide. Fujitsu's broad range of server platforms is reshaping the data center with virtualization and cloud services, including those based on Oracle Linux and Oracle VM. BeyondTrust, the leader in providing context-aware security intelligence, will be showcasing its threat management and policy enablement solutions for addressing IT security risks and simplifying compliance. BeyondTrust will discuss how to reduce security risks, close security gaps and improve visibility across your server and database infrastructure. Please stop by to see live demonstrations of BeyondTrust's award-winning vulnerability management and privileged identity management solutions supported on Oracle Linux. Virtualized infrastructure with Oracle VM and NetApp storage and data management solutions provides an integrated and seamless end-user experience, designed for maximum efficiency to allow for native NetApp deduplication and backup/recovery/cloning of VMs or templates. Whether you are provisioning one or multiple server pools or dynamically re-provisioning storage for your virtual machines to meet business demands, with Oracle and NetApp you have one single point-and-click console to rapidly and easily deploy a virtualized, agile data infrastructure in minutes. So there you have it! The third installment of our Partner Spotlight. Check out Part I and Part II of our Partner Spotlights from previous days if you've missed them. Remember to visit the Oracle Linux team at Oracle OpenWorld.

    Read the article

  • jquery is not working over local network [migrated]

    - by Kortyell Davis
    I have a Fedora server running the Apache web server. The server is connected to a home network, and I have a laptop connected to the same network. I can enter the IP address of my server into the browser on my laptop and pull up the index.html file located in the document root directory of the Fedora home server. The index.html file contains jQuery code. The jQuery code only works when I open the file locally in my browser (e.g. right-click, open with Firefox), but when I attempt to view the web page from my laptop the jQuery code is not executed. The code is below.
        <script type="text/javascript" src="jquery-1.8.2.js"></script>
        <script type="text/javascript">
        $(document).ready(function() {
            $('#form').hide();
            $('input[type=text]').focus(function() {
                $(this).val('');
            });
            $('input[type=password]').focus(function() {
                $(this).val('');
            });
            $('.form').hide();
            $('#log').click(function() {
                $('#form').toggle();
            });
            $('#reg').click(function() {
                $('.form').toggle();
            });
        });
        </script>

    Read the article

  • how to convert bitmap image to CIELab image?

    - by Neal Davis
    Hello to stackoverflow members! This is my first post, but I love reading the questions and answers on this forum! I'm using VS2008, C#, .NET 3.5 and Windows XP Pro with the latest patches on this project. I am reading a jpeg file into a bitmap image using C# in the following way: Bitmap bmp = new Bitmap("background.jpg"); Graphics g = Graphics.FromImage(bmp); I then write some text and logos on g using various DrawString and DrawImage commands. I then want to write this bitmap to a tiff file using the command: bmp.Save("final_artwork.tif", ImageFormat.Tiff); I don't know what the TIFF tags in the header of the final resultant TIFF file will be when you do bmp.Save as shown above. What I want to produce is a TIFF image file with PHOTOMETRIC CIELab, BITSPERSAMPLE 8, SAMPLESPERPIXEL 3, FILLORDER MSB2LSB, ORIENTATION TOPLEFT, PLANARCONFIG CONTIG, and also uncompressed and single-strip. I'm fairly certain - at least I have not found anything about the Microsoft API and CIELab - that I cannot convert the bitmap image to the CIELab photometric and meet the other requirements using anything from the Microsoft API or .NET 3.5 environment; someone please correct me if I am wrong. I have used libtiff in a project in the past, but just to write a tiff file where the tiff image in memory was already in CIELab format and met all the other requirements noted above. I'm not certain libtiff can convert from my C# bitmap image, which is probably RGB photometric, to CIELab, 8 bps, and 3 spp. Maybe I can use OpenCVImage or Emgu CV or something similar? So the main question is: what is going to be the easiest way to get my final bitmap image as outlined above into a CIELab TIFF, while also meeting the other requirements per above? Thanks for any suggestions and code fragments you can provide. Neal Davis
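    Whichever route ends up doing the pixel conversion (libtiff, Emgu CV, or hand-rolled code), the underlying math is the standard sRGB to XYZ to CIELab transform. A sketch, assuming sRGB primaries and a D65 white point:

        \[ X = 0.4124\,R + 0.3576\,G + 0.1805\,B, \quad Y = 0.2126\,R + 0.7152\,G + 0.0722\,B, \quad Z = 0.0193\,R + 0.1192\,G + 0.9505\,B \]
        \[ f(t) = \begin{cases} t^{1/3}, & t > (6/29)^3 \\ \dfrac{t}{3(6/29)^2} + \dfrac{4}{29}, & \text{otherwise} \end{cases} \]
        \[ L^* = 116\,f(Y/Y_n) - 16, \quad a^* = 500\big(f(X/X_n) - f(Y/Y_n)\big), \quad b^* = 200\big(f(Y/Y_n) - f(Z/Z_n)\big) \]

    Here R, G, B are the gamma-expanded (linearized) sRGB channels scaled to [0, 1], and (Xn, Yn, Zn) is approximately (0.9505, 1.0000, 1.0890) for D65. How the resulting L*, a*, b* values are then packed into 8-bit samples is governed by the TIFF CIELab photometric convention that libtiff implements.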

    Read the article

  • Function to Calculate Median in Sql Server

    - by Yaakov Ellis
    According to MSDN, Median is not available as an aggregate function in Transact-SQL. However, I would like to find out whether it is possible to create this functionality (using CREATE AGGREGATE, a user-defined function, or some other method). What would be the best way (if possible) to do this - to allow for the calculation of a median value (assuming a numeric data type) in an aggregate query?
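    For what it's worth, here is a sketch of two common workarounds (the table and column names are hypothetical): on SQL Server 2012 and later, PERCENTILE_CONT computes a median directly, and on earlier versions the usual trick is to average the middle one or two values by row number:

        -- SQL Server 2012+: PERCENTILE_CONT is a window function, hence the DISTINCT
        SELECT DISTINCT
               PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY Amount) OVER () AS Median
        FROM   dbo.Sales;

        -- Earlier versions: average the one or two middle values
        SELECT AVG(1.0 * Amount) AS Median
        FROM  (SELECT Amount,
                      ROW_NUMBER() OVER (ORDER BY Amount) AS rn,
                      COUNT(*)     OVER ()                AS cnt
               FROM   dbo.Sales) AS ordered
        WHERE  rn IN ((cnt + 1) / 2, (cnt + 2) / 2);

    A true aggregate, usable like SUM or AVG, would have to be a SQLCLR user-defined aggregate registered with CREATE AGGREGATE, since T-SQL alone cannot define new aggregate functions.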

    Read the article

  • ORACLE PL/Scope

    - by Yaakov Davis
    I didn't find much data about the internals of PL/Scope. I'd like to use it to analyze identifiers in PL/SQL scripts. Does it work only on Oracle 11g instances? Can I reference its DLLs to use it on a machine with only Oracle 9 or 10 installed? In a related manner, do I have to execute the script in order for its identifiers to be analyzed?
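    For reference, here is a minimal sketch of how PL/Scope is typically used on 11g (the procedure is a made-up example): identifier data is gathered when a unit is compiled with the setting enabled, and is then queried from the *_IDENTIFIERS dictionary views, so PL/Scope is a feature of the 11g database itself rather than a client-side library:

        -- Collect identifier data for anything compiled in this session
        ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL';

        CREATE OR REPLACE PROCEDURE demo_proc IS
          v_count NUMBER;
        BEGIN
          SELECT COUNT(*) INTO v_count FROM dual;
        END;
        /

        -- Inspect what PL/Scope recorded at compile time (no execution needed)
        SELECT name, type, usage, line, col
        FROM   user_identifiers
        WHERE  object_name = 'DEMO_PROC'
        ORDER  BY line, col;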

    Read the article

  • How to Encourage More Frequent Commits to SVN

    - by Yaakov Ellis
    A group of developers that I am working with switched from VSS to SVN about half a year ago. The transition from CheckOut-CheckIn to Update-Commit has been hard on a number of users. Now that they are no longer forced to check in their files when they are done (or more accurately, now that no one else can see that they have the file checked out and tell them to check back in in order to release the lock on the file), it has happened on more than one occasion that users have forgotten to Commit their changes until long after they were completed. Although most users are good about Committing their changes, the issue is serious enough that the decision might be made to force users to get locks on all files in SVN before editing. I would rather not see this happen, but I am at a loss over how to improve the situation in another way. So can anyone suggest ways to do any of the following: Track what files users have edited but have not yet Committed changes for Encourage users to be more consistent with Committing changes when they are done Help finish off the user education necessary to get people used to the new version control paradigm Out-of-the-box solutions welcome (ie: desktop program that reminds users to commit if they have not done so in a given interval, automatically get stats of user Commit rates and send warning emails if frequency drops below a certain threshold, etc).

    Read the article

  • Measuring Web Page Performance on Client vs. Server

    - by Yaakov Ellis
    I am working with a web page (ASP.net 3.5) that is very complicated and in certain circumstances has major performance issues. It uses Ajax (through the Telerik AjaxManager) for most of its functionality. I would like to be able to measure in some way the amounts of time for the following, for each request: On client submitting request to server Client-to-Server On server initializing request On server processing request Server-to-Client Client rendering, JavaScript processing I have monitored the database traffic and cannot find any obvious culprit. On the other hand, I have a suspicion that some of the Ajax interactions are causing performance issues. However, until I have a way to track the times involved, make a baseline measurement, and measure performance as I tweak, it will be hard to work on the issue. So what is the best way to measure all of these? Is there one tool that can do it? Combination of FireBug and logging inserted into different places in the page life-cycle?

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8  | Next Page >