Search Results

Search found 2210 results on 89 pages for 'techniques'.

Page 53/89 | < Previous Page | 49 50 51 52 53 54 55 56 57 58 59 60  | Next Page >

  • NEW 2-Day Instructor Led Course on Oracle Data Mining Now Available!

    - by chberger
    A NEW 2-Day Instructor Led Course on Oracle Data Mining has been developed for customers and anyone wanting to learn more about data mining, predictive analytics, and knowledge discovery inside the Oracle Database. Course objectives: explain basic data mining concepts and describe the benefits of predictive analysis; understand the primary data mining tasks and describe the key steps of a data mining process; use Oracle Data Miner to build, evaluate, and apply multiple data mining models; use Oracle Data Mining's predictions and insights to address many kinds of business problems, including predicting individual behavior, predicting values, and finding co-occurring events; and learn how to deploy data mining results for real-time access by end users. Five reasons why you should attend this 2-day Oracle Data Mining Oracle University course: with Oracle Data Mining, a component of the Oracle Advanced Analytics Option, you will learn to gain insight and foresight to (1) go beyond simple BI and dashboards about the past (this course will teach you about "data mining" and "predictive analytics", analytical techniques that can provide huge competitive advantage); (2) take advantage of your data and investment in Oracle technology; (3) leverage all the data in your data warehouse (customer data, service data, sales data, customer comments and other unstructured data, point of sale (POS) data) to build and deploy predictive models throughout the enterprise; (4) learn how to explore and understand your data and find patterns and relationships that were previously hidden; and (5) focus on solving strategic challenges to the business, for example targeting "best customers" with the right offer, identifying product bundles, detecting anomalies and potential fraud, finding natural customer segments, and gaining customer insight.

    Read the article

  • Java Champion Jorge Vargas on Extreme Programming, Geolocalization, and Latin American Programmers

    - by Janice J. Heiss
    In a new interview, up on otn/java, titled “An Interview with Java Champion Jorge Vargas,” Jorge Vargas, a leading Mexican developer, discusses the process of introducing companies to Enterprise JavaBeans through the application of Extreme Programming. Among other things, he gives workshops about building code with agile techniques and creates a master project to build all apps based on Scrum, XP methods and Kanban. He focuses on building core components such as security, login, and menus. Vargas remarks, “This may sound easy, but it’s not—the process takes months and hundreds of hours, but it can be controlled, and with small iterations, we can translate customer requirements and problems of legacy systems to the new system.” In regard to his work with geolocalization, he says: “We have launched a beta program of Yumbling, a geolocalization-based app, with mobile clients for BlackBerry, iPhone, Android, and Nokia, with a Web interface. The first challenge was to design a simple universal mechanism providing information to all clients and to minimize maintenance provision to them. I try not to generalize a lot—to avoid low performance or misunderstanding in processing data. We use the latest Java EE technology—during the last five years, I’ve taught people how to use Java EE efficiently.” Check out the interview here.

    Read the article

  • How to learn & introduce scrum in small startup?

    - by Jens Bannmann
    In a few months, a friend will establish his startup software company, and I will be the software architect with one additional developer. Though we have no real day-to-day experience with agile methods, I have read much "overview"-type material on them, and I firmly believe they are a good - if not the only - way to build software. So with this company, I want to go for iterative, agile development from day 1, preferably something lightweight. I was thinking of Scrum, but the question is: what is the best way for me and my colleagues to learn about it, to introduce it (which techniques, when, etc.), and to evaluate whether we should keep it? Background which might be relevant: we're all experienced developers around the same age with a similar professional mindset. We have worked together in the past and afterwards at several different companies, mostly with a Java/.NET focus. Some are a bit familiar with general ideas from the agile movement. In this startup, I have great power over tools, methods, and process. The startup's product will be developed from scratch and could be classified as middleware. We have some "customer" contacts in the industry who could provide input as soon as we get to an alpha stage.

    Read the article

  • Get The Most From MySQL Database With MySQL Performance Tuning Training

    - by Antoinette O'Sullivan
    Get the most from MySQL Server by improving your understanding of performance tuning techniques. MySQL Performance Tuning Class: in this 4-day class, you'll learn practical, safe, highly efficient ways to optimize performance for the MySQL Server. You can take this class as: Training-on-Demand: start training within 24 hours of registering and follow the instructor-led lecture material through streaming video at your own pace; schedule lab time to perform the hands-on exercises at your convenience. Live-Virtual Class: follow the live instructor-led class from your own desk, no travel required. There is already a range of events on the schedule to suit different timezones, with delivery in languages including English and German. In-Class Event: travel to a training center to follow this class. For more information on this class, to see the schedule, or to register interest in additional events, go to http://oracle.com/education/mysql. Troubleshooting MySQL Performance with Sveta Smirnova: during this one-day, live-virtual event, you get a unique opportunity to hear Sveta Smirnova, author of MySQL Troubleshooting, share her in-depth experience of identifying and solving performance problems with a MySQL database. And you can benefit from this opportunity without incurring any travel costs! Dimitri's Blog: if MySQL performance is a topic that interests you, then you should be following Dimitri Kravtchuk's blog. For more information on any aspect of the Authentic MySQL Curriculum, go to http://oracle.com/education/mysql.

    Read the article

  • Organizations &amp; Architecture UNISA Studies &ndash; Chap 7

    - by MarkPearl
    Learning Outcomes
    - Name different device categories
    - Discuss the functions and structure of I/O modules
    - Describe the principles of programmed I/O
    - Describe the principles of interrupt-driven I/O
    - Describe the principles of DMA
    - Discuss the evolution of I/O channels
    - Describe different types of I/O interface
    - Explain the principles of point-to-point and multipoint configurations
    - Discuss the way in which a FireWire serial bus functions
    - Discuss the principles of InfiniBand architecture

    External Devices: An external device attaches to the computer by a link to an I/O module. The link is used to exchange control, status, and data between the I/O module and the external device. External devices can be classified into three categories: human readable (e.g. video display), machine readable (e.g. magnetic disk), and communications (e.g. wifi card).

    I/O Modules: An I/O module has two major functions: interfacing to the processor and memory via the system bus or central switch, and interfacing to one or more peripheral devices by tailored data links.

    Module Functions: The major functions or requirements for an I/O module fall into the following categories: control and timing, processor communication, device communication, data buffering, and error detection. The I/O function includes a control and timing requirement, to coordinate the flow of traffic between internal resources and external devices. Processor communication involves command decoding, data, status reporting, and address recognition. The I/O module must also be able to perform device communication; this communication involves commands, status information, and data. An essential task of an I/O module is data buffering, due to the relatively slow speeds of most external devices. Finally, an I/O module is often responsible for error detection and for subsequently reporting errors to the processor.

    I/O Module Structure: An I/O module functions to allow the processor to view a wide range of devices in a simple-minded way. The I/O module may hide the details of timing, formats, and the electromechanics of an external device so that the processor can function in terms of simple read and write commands. An I/O channel/processor is an I/O module that takes on most of the detailed processing burden, presenting a high-level interface to the processor. Three techniques are possible for I/O operations: programmed I/O, interrupt-driven I/O, and DMA.

    Programmed I/O: When a processor is executing a program and encounters an instruction relating to I/O, it executes that instruction by issuing a command to the appropriate I/O module. With programmed I/O, the I/O module will perform the requested action and then set the appropriate bits in the I/O status register; the I/O module takes no further action to alert the processor.

    I/O Commands: To execute an I/O related instruction, the processor issues an address, specifying the particular I/O module and external device, and an I/O command.
    There are four types of I/O commands that an I/O module may receive when it is addressed by a processor:
    - Control: used to activate a peripheral and tell it what to do
    - Test: used to test various status conditions associated with an I/O module and its peripherals
    - Read: causes the I/O module to obtain an item of data from the peripheral and place it in an internal buffer
    - Write: causes the I/O module to take an item of data from the data bus and subsequently transmit that data item to the peripheral
    The main disadvantage of this technique is that it is a time-consuming process that keeps the processor busy needlessly.

    I/O Instructions: With programmed I/O there is a close correspondence between the I/O related instructions that the processor fetches from memory and the I/O commands that the processor issues to an I/O module to execute the instructions. Typically there will be many I/O devices connected through I/O modules to the system. Each device is given a unique identifier or address; when the processor issues an I/O command, the command contains the address of the desired device, so each I/O module must interpret the address lines to determine if the command is for itself. When the processor, main memory, and I/O share a common bus, two modes of addressing are possible: memory-mapped I/O and isolated I/O (for a detailed explanation read page 245 of the book). The advantage of memory-mapped I/O over isolated I/O is that it has a large repertoire of instructions that can be used, allowing more efficient programming. The disadvantage of memory-mapped I/O over isolated I/O is that valuable memory address space is used up.

    Interrupt-driven I/O: Interrupt-driven I/O works as follows: the processor issues an I/O command to a module and then goes on to do some other useful work; the I/O module then interrupts the processor to request service when it is ready to exchange data with the processor; the processor then executes the data transfer and resumes its former processing.

    Interrupt Processing: The occurrence of an interrupt triggers a number of events, both in the processor hardware and in software. When an I/O device completes an I/O operation, the following sequence of hardware events occurs:
    - The device issues an interrupt signal to the processor.
    - The processor finishes execution of the current instruction before responding to the interrupt.
    - The processor tests for an interrupt, determines that there is one, and sends an acknowledgement signal to the device that issued the interrupt. The acknowledgement allows the device to remove its interrupt signal.
    - The processor now needs to prepare to transfer control to the interrupt routine. To begin, it needs to save the information needed to resume the current program at the point of interrupt. The minimum information required is the status of the processor and the location of the next instruction to be executed.
    - The processor now loads the program counter with the entry location of the interrupt-handling program that will respond to this interrupt. It also saves the values of the processor registers, because the interrupt operation may modify them.
    - The interrupt handler processes the interrupt; this includes examination of status information relating to the I/O operation or other event that caused the interrupt.
    - When interrupt processing is complete, the saved register values are retrieved from the stack and restored to the registers.
    - Finally, the PSW and program counter values from the stack are restored.
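
    Stepping back to programmed I/O for a moment: to make the polling cost described above concrete, here is a minimal C# simulation sketch (all type and member names are invented for illustration; a real status register is a hardware device register, not a field):

        using System;
        using System.Threading;

        // Simulated I/O module: a status flag and a one-word data register.
        class IoModule
        {
            public volatile bool Busy;
            public int DataRegister;

            public void IssueRead()
            {
                Busy = true;
                // Simulated device: delivers a word after some latency.
                new Thread(() =>
                {
                    Thread.Sleep(10);      // device latency
                    DataRegister = 42;     // word arrives in the data register
                    Busy = false;          // module sets status back to Ready
                }).Start();
            }
        }

        class Program
        {
            static void Main()
            {
                var module = new IoModule();
                module.IssueRead();
                while (module.Busy) { }    // busy-wait: the processor does no useful work here
                Console.WriteLine($"Read word: {module.DataRegister}");
            }
        }
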
    Design Issues: Two design issues arise in implementing interrupt I/O: because there will be multiple I/O modules, how does the processor determine which device issued the interrupt? And if multiple interrupts have occurred, how does the processor decide which one to process? Addressing device recognition, four general categories of techniques are in common use: multiple interrupt lines, software poll, daisy chain, and bus arbitration. For a detailed explanation of these approaches read page 250 of the textbook. Interrupt-driven I/O, while more efficient than simple programmed I/O, still requires the active intervention of the processor to transfer data between memory and an I/O module, and any data transfer must traverse a path through the processor. Thus it suffers from two inherent drawbacks: the I/O transfer rate is limited by the speed with which the processor can test and service a device, and the processor is tied up in managing an I/O transfer (a number of instructions must be executed for each I/O transfer).

    Direct Memory Access: When large volumes of data are to be moved, a more efficient technique is direct memory access (DMA).

    DMA Function: DMA involves an additional module on the system bus. The DMA module is capable of mimicking the processor and taking over control of the system from the processor; it needs to do this to transfer data to and from memory over the system bus. DMA must use the bus only when the processor does not need it, or it must force the processor to suspend operation temporarily (the most common approach, referred to as cycle stealing). When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending the DMA module the following information:
    - Whether a read or write is requested, using the read or write control line between the processor and the DMA module
    - The address of the I/O device involved, communicated on the data lines
    - The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
    - The number of words to be read or written, communicated via the data lines and stored in the data count register
    The processor then continues with other work; it delegates the I/O operation to the DMA module, which transfers the entire block of data, one word at a time, directly to or from memory without going through the processor. When the transfer is complete, the DMA module sends an interrupt signal to the processor; thus the processor is involved only at the beginning and end of the transfer.

    I/O Channels and Processors: As one proceeds along the evolutionary path, more and more of the I/O function is performed without CPU involvement. The I/O channel represents an extension of the DMA concept. An I/O channel has the ability to execute I/O instructions, which gives it complete control over I/O operations. In a computer system with such devices, the CPU does not execute I/O instructions; such instructions are stored in main memory to be executed by a special-purpose processor in the I/O channel itself. Two types of I/O channels are common: a selector channel controls multiple high-speed devices, while a multiplexor channel can handle I/O with multiple devices at the same time, accepting or transmitting characters as fast as possible to each.
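
    Returning to the DMA command: the four parameters the processor hands to the DMA module (listed above) map naturally onto a small command structure. A hedged C# sketch follows, with all names invented for illustration:

        using System;

        enum DmaDirection { Read, Write }

        // One DMA request, mirroring the four pieces of information above.
        class DmaCommand
        {
            public DmaDirection Direction; // read or write control line
            public int DeviceAddress;      // which I/O device, sent on the data lines
            public uint MemoryAddress;     // start location, kept in the address register
            public int WordCount;          // words to move, kept in the data count register
        }

        class DmaModule
        {
            // Raised once the whole block has moved; the only point at which
            // the processor gets involved again, via an interrupt.
            public event Action TransferComplete;

            public void Start(DmaCommand cmd)
            {
                // ...steal bus cycles and move cmd.WordCount words, then signal:
                TransferComplete?.Invoke();
            }
        }
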
    The External Interface: FireWire and InfiniBand. Types of Interfaces: one major characteristic of the interface is whether it is serial or parallel. In a parallel interface there are multiple lines connecting the I/O module and the peripheral, and multiple bits are transferred simultaneously; in a serial interface there is only one line used to transmit data, and bits must be transmitted one at a time. With new-generation serial interfaces, parallel interfaces are becoming less common. In either case, the I/O module must engage in a dialogue with the peripheral. In general terms the dialogue may look as follows: the I/O module sends a control signal requesting permission to send data; the peripheral acknowledges the request; the I/O module transfers data; the peripheral acknowledges receipt of the data. For a detailed explanation of FireWire and InfiniBand technology read pages 264-270 of the textbook.

    Read the article

  • Software Architecture and Design vs Psychology of HCI class

    - by Joey Green
    I have two classes to choose from and I'm wanting to get an opinion from more experienced game devs on which might be better for someone who wants to be an indie game dev. The first is a Software Architecture and Design course and the second is a course titled Psychology of HCI. I've previously taken a Software Design course that was focused only on design patterns. I've also taken an Introduction to HCI course. Software Architecture and Design description: topics include software architectures, methodologies, model representations, component-based design, patterns, frameworks, CASE-based designs, and case studies. Psychology of HCI description: exploration of psychological factors that interact with computer interface usability. Interface design techniques and usability evaluation methods are emphasized. I know I would find both interesting, but my concern is really which one might be easier to pick up on my own. I know HCI is relevant to game dev, but am unsure if the topics in the Software Architecture class would be more for big software projects that go beyond the scope of games. Also, I'm not able to take both because they overlap.

    Read the article

  • How add fog with pixel shader (HLSL) XNA?

    - by Mehdi Bugnard
    I started making a small game in XNA, and recently I tried to add fog in an HLSL pixel shader using the XNA Effect class. I searched online for tutorials and found many samples, but nothing works in my game :-( . I had already added a fog effect to the game before, and everything worked because I used the BasicEffect class; with the Effect class and HLSL it's really more complicated. If somebody has an idea, it would be wonderful. Thanks again. Here is the HLSL code I use:

        // Both techniques share this same pixel shader.
        float4 PixelShaderFunction(VertexShaderOutput input) : COLOR0
        {
            //return tex2D(Sampler, input.TextureCoordinate) * input.Color;
            float d = length(input.TextureCoordinate - cameraPos);
            float l = saturate((d - fogNear) / (fogFar - fogNear));
            float fogFactory = clamp((d - fogNear) / (fogFar - fogNear), 0, 1) * l;
            return tex2D(Sampler, input.TextureCoordinate) * lerp(input.Color, fogColor, fogFactory);
        }

    Here are the screenshots: with effect / without effect.
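
    For comparison, the fog that worked earlier is configured entirely from C# through BasicEffect; a minimal sketch follows (the property values are arbitrary). Note also that in the HLSL above, length(input.TextureCoordinate - cameraPos) measures a distance in texture space; linear fog is normally computed from the distance between the camera and a world-space position interpolated from the vertex shader.

        // Fixed-function linear fog with BasicEffect, the approach that worked
        // before switching to a custom Effect. Distances are in world units.
        var basicEffect = new BasicEffect(GraphicsDevice)
        {
            FogEnabled = true,
            FogColor = Color.LightGray.ToVector3(), // BasicEffect takes a Vector3
            FogStart = 50f,                         // no fog up to this distance
            FogEnd = 300f                           // fully fogged beyond this distance
        };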

    Read the article

  • Generating Normal map from a Image with a given Albedo map

    - by snape
    I am working on a research problem, part of which involves generating a normal map from a given image of a rusted object. I searched the internet for techniques to achieve the above, and apparently CrazyBump is mentioned a lot. I tried it, but it didn't produce the desirable effects. Also, I am looking for a method that draws inspiration from an existing research paper, not some closed-source software. I turned my attention to the technique described in this paper. Results from this technique are satisfactory for normal objects because of bias in the training data, but it doesn't work very well in the case of rusted objects. After this I focused my attention on generating an albedo map (the above problem would become more solvable if an albedo map were obtained). Fortunately, I am able to generate pretty good albedo maps for images of rusted objects; I used this paper's approach to generate them. Now I want to know a good technique to get a normal map given an image and its corresponding albedo map. To give you an idea of what kind of images I am working with, I am attaching a sample. Links to research material would be really appreciated. Thanks!
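
    For reference, the standard baseline that tools like CrazyBump approximate is a height-to-normal conversion: treat image intensity as height, take finite differences, and pack the resulting normal into RGB. This is not an answer to the albedo-based research question, just a hedged C# sketch of that baseline (the strength factor is arbitrary):

        using System;
        using System.Drawing;

        static class NormalMapBaseline
        {
            public static Bitmap FromHeight(Bitmap height, float strength = 2f)
            {
                var result = new Bitmap(height.Width, height.Height);
                for (int y = 0; y < height.Height; y++)
                for (int x = 0; x < height.Width; x++)
                {
                    // Finite differences of the height field, clamped at the edges.
                    float dx = (Intensity(height, x - 1, y) - Intensity(height, x + 1, y)) * strength;
                    float dy = (Intensity(height, x, y - 1) - Intensity(height, x, y + 1)) * strength;
                    float len = (float)Math.Sqrt(dx * dx + dy * dy + 1f);
                    float nx = dx / len, ny = dy / len, nz = 1f / len;

                    // Pack [-1, 1] normal components into [0, 255] RGB.
                    result.SetPixel(x, y, Color.FromArgb(
                        (int)((nx * 0.5f + 0.5f) * 255),
                        (int)((ny * 0.5f + 0.5f) * 255),
                        (int)((nz * 0.5f + 0.5f) * 255)));
                }
                return result;
            }

            static float Intensity(Bitmap b, int x, int y)
            {
                x = Math.Min(Math.Max(x, 0), b.Width - 1);
                y = Math.Min(Math.Max(y, 0), b.Height - 1);
                return b.GetPixel(x, y).GetBrightness(); // 0..1 grayscale
            }
        }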

    Read the article

  • UPK and the Oracle Unified Method can be used to deploy Oracle-Based Business Solutions

    - by Emily Chorba
    Originally developed to support Oracle's acquisition strategy, the Oracle Unified Method (OUM) defines a common implementation language across all of Oracle's products and technologies. OUM is a flexible, scalable, and evolving body of knowledge that combines existing best practices and field experience with an industry-standard framework that includes the latest thinking around agile implementation and cloud computing. Strong, proven methods are essential to ensuring successful enterprise IT projects, both within Oracle and for our customers and partners. OUM provides a collection of repeatable processes that are the basis for agile implementations of Oracle enterprise business solutions. OUM also provides a structure for tracking progress and managing cost and risks, and it is applicable to any size or type of IT project. While OUM is a plan-based method (including overview material, task and artifact descriptions, and templates), the method is intended to be tailored to support the appropriate level of ceremony (or agility) required for each project. Guidance is provided for identifying the minimum subset of tasks, tailoring the approach, executing iterative and incremental planning, and applying agile techniques, including support for managing projects using Scrum. Supplemental guidance provides specific support for Oracle products, such as UPK. OUM is available to Oracle employees, partners, and customers. Internal use at Oracle: employees can download OUM from MyDesktop. OUM Partner Program: OUM is available free of charge to Oracle PartnerNetwork (OPN) Diamond, Platinum, and Gold partners as a benefit of membership; these partners may download OUM from the Oracle Unified Method Knowledge Zone on OPN. OUM Customer Program: the OUM Customer Program allows customers to obtain copies of the method for their internal use by contracting with Oracle for a services engagement of two weeks or longer. Customers who have a signed contract with Oracle and meet the engagement qualification criteria, as published on the Customer tab of the OUM website, are permitted to download the current release of OUM for their perpetual use; they may obtain subsequent releases published during a renewable, three-year access period. To learn more about OUM, visit the OUM Blog, OUM on LinkedIn, and OUM on Twitter. Emily Chorba, Principal Product Manager, Oracle User Productivity Kit

    Read the article

  • links for 2010-03-31

    - by Bob Rhubart
    Andy Mulholland: Rethinking the narrow and deep expertise model. "We increasingly realise that we have to read requirements in a more open way to decide what techniques can be used, what business experience can be added, etc, so the whole idea of encouraging 'cross' discipline understanding seems to look increasingly necessary as we look at how technology touches every part of business, and/or any other aspect of life. It is time to rethink the narrow and deep expertise model and consider T-shaped approaches where the depth is complemented by the width to understand how it might be used and how it fits with other capabilities and disciplines too." -- Andy Mulholland (tags: enterprisearchitecture) @vambenepe: Smoothing a discrete world. "For the short term (until we sell one) there are three cars in my household. A manual transmission, an automatic and a CVT (continuous variable transmission). This makes me uniquely qualified to write about Cloud Computing." -- William Vambenepe (tags: otn oracle cloud) @fteter: The Price of Progress. "I wonder about the price of progress on the business world. Do some of us get attached to old business models or software applications? Do we resist change for the better for emotional reasons? Are we sometimes impediments to progress just because we don't want things to change?" -- Oracle ACE Director Floyd Teter (tags: otn oracle oracleace progress innovation) Pat Shepherd: Enterprise Architecture should not be Arbitrary. "If done properly the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set. Those 3 layers and Governance (people and processes), IMHO, are layers that should not vary much as they have everything to do with understanding the business -- from which technological conclusions can later be drawn." -- Pat Shepherd, responding to a post by Jordan Braunstein. (tags: oracle otn enterprisearchitecture soa)

    Read the article

  • Oracle Enterprise Manager sessions on the last day of the Oracle Open World

    - by Anand Akela
    Hope you have had a very productive Oracle OpenWorld so far. Hopefully, many of you attended the customer appreciation event yesterday night at Treasure Island. We still have many Enterprise Manager related sessions today, Thursday, the last day of Oracle OpenWorld 2012. Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF). Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) sessions (Time | Title | Location):
    11:15 AM - 12:15 PM | Application Performance Matters: Oracle Real User Experience Insight | Palace Hotel - Sea Cliff
    11:15 AM - 12:15 PM | Advanced Management of JD Edwards EnterpriseOne with Oracle Enterprise Manager | InterContinental - Grand Ballroom B
    11:15 AM - 12:15 PM | Spark on SPARC Servers: Enterprise-Class IaaS with Oracle Enterprise Manager 12c | Moscone West - 3018
    11:15 AM - 12:15 PM | Pinpoint Production Applications' Performance Bottlenecks by Using JVM Diagnostics | Marriott Marquis - Golden Gate C3
    11:15 AM - 12:15 PM | Bringing Order to the Masses: Scalable Monitoring with Oracle Enterprise Manager 12c | Moscone West - 3020
    12:45 PM - 1:45 PM | Improving the Performance of Oracle E-Business Suite Applications: Tips from a DBA's Diary | Moscone West - 2018
    12:45 PM - 1:45 PM | Advanced Management of Oracle PeopleSoft with Oracle Enterprise Manager | Moscone West - 3009
    12:45 PM - 1:45 PM | Managing Sun Servers and Oracle Engineered Systems with Oracle Enterprise Manager | Moscone West - 2000
    12:45 PM - 1:45 PM | Strategies for Configuring Oracle Enterprise Manager 12c in a Secure IT Environment | Moscone West - 3018
    12:45 PM - 1:45 PM | Using Oracle Enterprise Manager 12c to Control Operational Costs | Moscone South - 308
    2:15 PM - 3:15 PM | My Oracle Support: The Proactive 24/7 Assistant for Your Oracle Installations | Moscone West - 3018
    2:15 PM - 3:15 PM | Functional and Load Testing Tips and Techniques for Advanced Testers | Moscone South - 307
    2:15 PM - 3:15 PM | Oracle Enterprise Manager Deployment Best Practices | Moscone South - 104
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • How to set TextureFilter to Point to make example Bloom filter work?

    - by Mr Bell
    I have a simple app that renders some particles, and now I am trying to apply the bloom shader from the XNA samples (http://create.msdn.com/en-US/education/catalog/sample/bloom) to it, but I am running into this exception: "XNA Framework HiDef profile requires TextureFilter to be Point when using texture format Vector4." It is thrown when the BloomComponent ends the sprite batch in the DrawFullscreenQuad method:

        spriteBatch.Begin(0, BlendState.Opaque, SamplerState.PointWrap, null, null, effect);
        spriteBatch.Draw(texture, new Rectangle(0, 0, width, height), Color.White);
        spriteBatch.End(); //<------- Exception thrown here

    It seems to be related to the pixel shaders that I am using to animate the particles. In a nutshell, I have a Texture2D in Vector4 format that holds particle positions, and another one for velocities. Here is a snippet from that area:

        GraphicsDevice.SetRenderTarget(tempRenderTarget);
        animationEffect.CurrentTechnique = animationEffect.Techniques[technique];
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque, SamplerState.PointWrap, DepthStencilState.DepthRead, RasterizerState.CullNone, animationEffect);
        spriteBatch.Draw(randomValues, new Rectangle(0, 0, width, height), Color.White);
        spriteBatch.End();

    When I comment out the code that calls the particle animation pixel shaders, the bloom component runs fine. Is there some state that I need to reset to make the bloom work?
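
    One hedged guess based on the error text alone: the Vector4 particle textures from the animation pass may still be bound to the device with a non-Point sampler when the bloom pass draws. A sketch of a reset before the bloom component runs (the slot count and the explicit call into the sample's BloomComponent are assumptions):

        // Drop stale texture bindings and force point filtering so the HiDef
        // profile check passes when the bloom pass sets its own states.
        for (int i = 0; i < 16; i++)
        {
            GraphicsDevice.Textures[i] = null;
            GraphicsDevice.SamplerStates[i] = SamplerState.PointClamp;
        }
        bloom.Draw(gameTime); // hypothetical explicit call into the BloomComponent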

    Read the article

  • How much detail is in a good UI regression test?

    - by GlenPeterson
    We use a detailed step-by-step user-interface regression test for our commercial web application. It has a "backbone" test for the most used / most important parts of the system, with optional tests for specific areas of functionality. Using this plan has definitely helped us ensure high-quality software. But having very specific tests can be counter-productive. The tester concentrates on following the test and will completely miss usability issues, or not notice fairly obvious problems such as the bottom part of a page being missing. By contrast, some of the best UI testing happens when building a demo of a new feature. I often do my own best testing by pretending to demonstrate the system to an imaginary prospect. Yet when I tell the testers, "Just demonstrate the system to yourself," they don't cover nearly as much functionality as they do with a detailed point-by-point test. I'm repeatedly asked to provide more and more detail in the test plan so that a new, untrained tester can test with it without asking any questions. Yet details seem to be counter-productive. How much detail do you put in a regression test to make it effective? What techniques make the tester focus more on the system than on checking off items on the test?

    Read the article

  • Partner Webcast - Rethink HR! Introducing New Fusion HCM - 05 July 2012

    - by Thanos
    Introducing New Cloud Applications from the Leader in Human Capital Management. Customers are constantly looking for better ways to manage talent, develop leaders, engage with employees, and strategically plan their workforce. To meet these challenges, you need modern applications that can efficiently deliver analytical insights, collaborative tools, and a personalized user experience. That's why Oracle has rethought the business of HR to provide value to the entire organization, from HR professionals to employees to managers. With Oracle Fusion Human Capital Management, you can: do things your way with a role-centric user experience that can be personalized to the way you work; know your people better with the most complete HR business intelligence capabilities; work as a team by quickly finding and connecting with peers and experts to deliver concrete results; and leverage enterprise-grade software as a service (SaaS) to get up and running fast and securely. Our speaker, Csaba Fehér, HCM Senior Sales Consultant, will share how Oracle Fusion Human Capital Management can deliver immediate business value with new techniques and the latest innovations to address your toughest concerns, including how to: align compensation and performance; organize and calibrate critical talent; and predict key workforce trends and risks. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at [email protected]

    Read the article

  • SQLAuthority News – Scaling Up Your Data Warehouse with SQL Server 2008 R2

    - by pinaldave
    Data warehouses are supposed to contain huge amounts of data from the beginning. However, there are cases when too big is not enough. Every data warehouse admin will agree that they have faced situations where they needed to scale up their data warehouse. Microsoft has released a white paper discussing exactly this. Here is the abstract from the Microsoft official site: SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore. Scaling Up Your Data Warehouse with SQL Server 2008 R2. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Estimating cost of labor for a controlled experiment

    - by Lorin Hochstein
    Let's say you are a software engineering researcher and you are designing a controlled experiment to compare two software technologies or techniques (e.g., TDD vs. non-TDD, Python vs. Go) with respect to some qualities of interest (e.g., quality of resulting code, programmer productivity). According to your study design, participants will work alone to implement a non-trivial software system. You estimate it should take about six months for a single programmer to complete the task. You also estimate via power analysis that you will need around sixty study participants to obtain statistically significant results, assuming the technologies actually do yield different outcomes. To maximize external validity, you want to use professional programmers as study participants. Unfortunately, it isn't possible to find professional programmers who can volunteer for several months to work full-time on implementing a software system. You decide to go the simplest route and contract with a large IT consulting firm to obtain access to programmers to participate in the study. What is a reasonable estimate of the cost range, per person-month, for the programming labor? Assume you are constrained to work with a U.S.-based firm, but it doesn't matter where in the U.S. the firm itself or the programmers are located. Note: I'm looking for a reasonable order-of-magnitude range suitable for back-of-the-envelope calculations, so that when people say "Why doesn't somebody just do a study to measure X?", I can say, "Because running that study properly would cost $Y", and have a reasonable argument for the value of $Y.

    Read the article

  • Part 3: Customization Strategy or how long does it take

    - by volker.eckardt(at)oracle.com
    The previous part in this blog should have made us aware that many procedures are required to manage all these steps. To review your status, let me ask you a question: What is your customization strategy? Your answer might be something like, 'Customization strategy? Well, we have standards and we have requirement documents approved.' Let me ask you another question: How long does it take to redeploy all your customizations into a fresh installation? In 90% of all installations the answer to this question would be: we can't! Although no one would (hopefully) have to do it, just thinking about it makes us recognize that today we have too many manual steps involved, different procedures, and sometimes undocumented manual steps to complete a customization installation. And ... in general, too many customizations. Why is working with customizations often so complicated and time consuming? Here are the key reasons as I have identified them in my projects:
    - Customization standards defined, but not maintained
    - Different knowledge on the developer side (results get an individual developer touch)
    - No need to automate deployment (not forced by the client)
    - Different documentation styles, not easy to hand over to someone else
    - Different development concepts, difficult for maintenance
    - Just the minimum present for testing, often positive testing only
    - Deviations from naming conventions accepted, although defined
    - Complicated procedures, therefore sometimes partially ignored
    - And last but not least, hand-made version control (still)
    If you would have to 'redeploy all your customizations', you would have to:
    - Follow all your own standards and best practice
    - Track deviations and define corrective tasks
    - Automate as much as possible, minimize manual tasks
    - Not allow any change to come in without version control
    - Utilize products to support you in deployment
    - Minimize hand-made scripts and extensive documentation
    - Review regularly used techniques to guarantee that all are in line with the current release and also easily maintainable
    - Create solution libraries and force the team to contribute and reuse
    - Define quality activities and execute them
    - Define a procedure to release customizations
    I know, it is easy to write down, but much harder to manage. I will provide some guidelines in my next blog. Volker

    Read the article

  • Writing a SQL Azure Book - Notes

    - by Herve Roggero
    Over the last few months I have had the opportunity to ramp up significantly on SQL Azure. In fact, I will be the co-author of Pro SQL Azure, published by Apress. This is going to be a book on how to best leverage SQL Azure, both from a technology and a design standpoint. Talking about design, one of the things I realized is that understanding the key limitations and boundary parameters of Azure in general, and more specifically SQL Azure, will play an important role in making sound design decisions that both meet reasonable performance requirements and minimize the costs associated with running a cloud computing solution. The book touches on many design considerations, including link encryption, pricing model, design patterns, and also some important performance techniques that need to be leveraged when developing in Azure, including caching, lazy properties, and more. Finally, I started working with shards and how to implement them in Azure to ensure database scalability beyond the current size limitations. Implementing shards is not simple, and the book will address how to create a shard technology within your code to provide a scale-out mechanism for your SQL Azure databases. As you can see, there are many factors to consider when designing a SQL Azure database. While we can think of SQL Azure as a cloud version of SQL Server, it is best to look at it as a new platform, to make sure you don't make any assumptions on how to best leverage it.
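
    Shard routing of the kind described usually starts with a function from a key to a connection string. Here is a minimal hash-based C# sketch, not the book's actual implementation (the connection strings and the modulo scheme are illustrative only):

        using System;

        class ShardRouter
        {
            private readonly string[] connectionStrings;

            public ShardRouter(string[] connectionStrings)
            {
                this.connectionStrings = connectionStrings;
            }

            // Route a record to one of N SQL Azure databases by hashing its key.
            public string GetConnectionString(int customerId)
            {
                int shard = Math.Abs(customerId) % connectionStrings.Length;
                return connectionStrings[shard];
            }
        }

    A real router also has to handle cross-shard (fan-out) queries and rebalancing, which is where the complexity the author mentions comes from.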

    Read the article

  • My Thoughts on Reinventing the Wheel

    - by Matt Christian
    For a while now I've known that XNA Game Studio contains built-in scene management; however, I still built my own for each engine. Obviously it was redundant and probably inefficient due to the amount of searching and such I was required to do. And even though I knew this, why did I continue to do it? I've always been very detail oriented, probably part of my mild OCD. But when it comes to technology I believe in both reinventing the wheel and not reinventing it, all at the same time. Here's what I imagine most programmers doing: when they pick up XNA, they're typically focused on 'I want to make a game with as little code as possible'. This is great, and XNA GS is a great tool, but what will it do for programmers that want to make games with XNA? If they don't have any prior experience with other tools, they will probably never learn scene management. So is it better to leverage code and risk not learning valuable techniques, or to write it all yourself and fight through the headaches and hours of time you may spend on something already built?

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be of more importance to account for in planning a large-scale Scrum grouping? ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research which, oddly, appears to show that bigger teams (15+ ppl), not smaller teams (7 ppl), are better. UPDATE, "Re: Scrum doesn't scale": Made huge amounts of progress on personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn: Scrum Does Scale: "You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project." SOURCE: (ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • What You Said: How Do You Sync Your Files Between Your Devices?

    - by Jason Fitzpatrick
    Earlier this week we asked you to share your tricks and techniques for keeping files synced between your different devices. Now we're back to highlight how you do it. Overwhelmingly, you do it with Dropbox. Despite the proliferation of different platforms, there have been few inroads made into any sort of universal syncing. We heard from quite a few different readers, and by far the most popular option was to use Dropbox to ensure that you could get the music and documents you wanted whether you were on your desktop, laptop, netbook, iPhone, or Android device. In the same breath, however, nearly all of you added on an additional service. The real message, it would seem, is that there simply isn't a service good enough to meet all of the needs most users have, all of the time. The most common response to our Ask the Readers question was "Dropbox and…"; this pattern is illustrated nicely in the following quotes. Kim writes: Dropbox for all kinds of things. (Would also use SugarSync, but it doesn't support Linux.) LastPass for passwords. Xmarks for bookmarks, although I'm going to try Firefox Sync soon. Evernote for things like shell commands I might want someday. Google Beta for music, once I get it uploaded. I have an Amazon account too, but Google gives you more space. Gmail. Michael finds himself in a similar situation and writes:

    Read the article

  • Issue 55 - Skin Object Tokens, Optimized Control Panel, OWS Validation and Security, RAD

    April 2010. Welcome to Issue 55 of DNN Creative Magazine. In this issue we focus on the new Skin Object token method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object token is a web user control which covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. Following this, we demonstrate how to install and use two advanced DotNetNuke admin control panels which are available for free from Oliver Hine. These control panels provide an optimized version of the admin control panel to improve performance and page load times, as well as a ribbon bar control panel which adds additional features. Next, we continue the Open Web Studio tutorials; this month we demonstrate some very advanced techniques for building a car parts application in Open Web Studio. Throughout the tutorial we cover form input, validation, how to use dependent drop-down lists, populating checkbox lists, and introduce a new concept of data-level security. Data-level security allows you to control which data a user can access within a module. To finish, we have part five of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to implement paging. This issue comes complete with 14 videos.
    Skinning: Skin Object Tokens for DotNetNuke 5 (8 videos - 64mins)
    Free Module: Advanced Optimized Control Panel by Oliver Hine (1 video - 11mins)
    Module Development Series: Form Validation, Dependant Drop Downs and Data Level Security in OWS (5 videos - 44mins)
    How to Implement Paging with DotNetMushroom RAD
    View issue 55 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources, and web design tips for working with DotNetNuke. In 55 issues we have created 563 videos!

    Read the article

  • Skinning with DotNetNuke 5 Super Stylesheets Layouts - 12 Videos

    In this tutorial we demonstrate how to use Super Stylesheets in DotNetNuke for quickly and easily designing the layout of your DotNetNuke skin. Super Stylesheets are ideal for both beginner and experienced skin designers; the advantage of Super Stylesheets is that you can easily create a skin layout which works in all browsers without the need to learn complex CSS techniques. We show you how to build a skin from the very beginning using Super Stylesheets. The videos contain:
    Video 1 - Introduction to the Super Stylesheets DNN Layouts and Initial Setup
    Video 2 - Setting Up the Skin Layout Template Code
    Video 3 - Using the ThreeCol-Portal Layout Template for a Skin
    Video 4 - How to Add Tokens to the Skin
    Video 5 - Setting Background Colors for Content Panes and Creating CSS Containers
    Video 6 - How to Create a Footer Area and Reset the Default Styles
    Video 7 - How to Style the Text in the Content, Left and Right Panes
    Video 8 - SEO Skin Layouts for DotNetNuke Tokens
    Video 9 - Creating Several Skin Layouts Using the Layout Templates
    Video 10 - Further Layout Templates and MultiLayout Templates
    Video 11 - SEO Layout Template Skins
    Video 12 - Final SEO Positioning of the Skin Code
    Total Time Length: 97min 53secs

    Read the article

  • Separation of development responsibilities in a new project

    - by dreza
    We have very recently started a new project (MVC 3.0) and some of our early discussion has been around how the work and development will be split amongst the team members, to ensure we get the least amount of overlap of work and so make it a bit easier for each developer to get on and do their work. The project is expected to take about 6 months to 1 year (although not all developers are likely to stay on; some might filter off towards the end). Our team is going to be small, so this will help out a bit, I believe. The team will essentially consist of: 3 x developers (all different levels, i.e. more senior, intermediate, and junior); 1 x project manager / product owner / tester; and an external company responsible for doing our design work. General project/development decisions so far have included: develop in an Agile way using Scrum techniques (we are still very much learning this approach as a company); use MVVM architecture; use Ninject and DI where possible; attempt to use TDD as much as possible to drive development; keep our controllers as skinny as possible; keep our views as simple as possible. During our discussions, two approaches have been broached as to how to separate the workload given our objectives outlined above. OPTION 1: A framework separation where each person is responsible for conceptual areas, with overlap and discussion primarily in the integration areas. The integration areas would be the responsibility of both developers as required:
    View prototypes (Graphic designer)
      - Mockups
    Views (Razor and view helpers etc.) & JavaScript (Developer 1)
      - View models (integration point)
    Controllers and application logic (Developer 2)
      - Models (integration point)
    Domain model and persistence (Developer 3)
    OPTION 2: A more task-oriented approach where each person is responsible for the completion of the entire task (story) from view to controller to model. QUESTION: For those who have worked in small teams developing MVC projects, how have you managed workload distribution in this situation? I can't imagine the junior would be responsible for building parts of the underlying architecture, so would giving them responsibility for the views make sense, considering we are trying to keep it simple?

    Read the article

  • Select tool to minimize JavaScript and CSS size

    - by Michael Freidgeim
    There are multiple ways and techniques to combine and minify JS and CSS files. A good number of links can be found in http://stackoverflow.com/questions/882937/asp-net-script-and-css-compression and in http://www.hanselman.com/blog/TheImportanceAndEaseOfMinifyingYourCSSAndJavaScriptAndOptimizingPNGsForYourBlogOrWebsite.aspx. There are two major approaches: do it during the build, or at run-time. In our application there are multiple user controls, each of them requiring different JS or CSS files, and they are loaded dynamically in different combinations. We decided that loading all JS or CSS files for each page is not a good idea; for each page we need to load a different set of files. Based on this, combining files at the build stage does not look feasible. After reviewing different links, I decided that SquishIt should fit our needs: http://www.codethinked.com/squishit-the-friendly-aspnet-javascript-and-css-squisher. Limitations of using SquishIt: we had some browser-specific CSS files, loaded conditionally depending on browser type (i.e. IE and all other browsers); we had to put them in separate bundles. For resources and AXD files we decided to use the HttpModule and HttpHandler created by Mads Kristensen. To GZIP HTML we are using wwWebUtils.GZipEncodePage(): http://www.west-wind.com/weblog/posts/2007/Feb/05/More-on-GZip-compression-with-ASPNET-Content. Just swap the order of which encoding you apply: start by asking for deflate support and then GZip afterwards. Additional tips about SquishIt:
    - Use CDN: https://groups.google.com/group/squishit/browse_thread/thread/99f3b61444da9ad1
    - Support IntelliSense and generate the bundle in code-behind: http://tech.kipusoep.nl/2010/07/23/umbraco-45-visual-studio-2010-dotless-jquery-vsdoc-squishit-masterpages/
    Links about other libraries that were considered:
    - A few links from http://stackoverflow.com/questions/5288656/which-one-has-better-minification-between-squishit-and-combres2
    - .NET 4.5 will have out-of-the-box tools for JS/CSS combining: http://weblogs.asp.net/scottgu/archive/2011/11/27/new-bundling-and-minification-support-asp-net-4-5-series.aspx. It suggests a default bundle per subfolder, but also seems to support explicitly specified files, similar to SquishIt.
    - http://www.codeproject.com/KB/aspnet/combres2.aspx (a config XML file can specify expiry etc.)
    - https://github.com/andrewdavey/cassette and http://stackoverflow.com/questions/7026029/alternatives-to-cassette
    - Dynamically loaded JS files: RequireJS http://requirejs.org/docs/start.html and http://www.west-wind.com/weblog/posts/2008/Jul/07/Inclusion-of-JavaScript-Files
    Pack and minimize your JavaScript code size:
    - YUI Compressor (from Yahoo)
    - JSMin (by Douglas Crockford)
    - ShrinkSafe (from the Dojo library)
    - Packer (by Dean Edwards)
    - RadScriptManager & RadStyleSheetManager (from Telerik; not free)
    Tools to optimize performance: PageSpeed tools family http://code.google.com/intl/ru/speed/page-speed/download.html
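
    As a concrete illustration of the SquishIt approach chosen above, here is a minimal usage sketch (the paths are illustrative; the # placeholder in Render is SquishIt's cache-busting content hash):

        using SquishIt.Framework;

        public static class Bundles
        {
            // Returns the <script> tag for the combined, minified JS bundle.
            public static string Js() =>
                Bundle.JavaScript()
                      .Add("~/Scripts/jquery.js")
                      .Add("~/Scripts/site.js")
                      .Render("~/Scripts/combined_#.js");

            // Returns the <link> tag for the combined, minified CSS bundle.
            public static string Css() =>
                Bundle.Css()
                      .Add("~/Content/reset.css")
                      .Add("~/Content/site.css")
                      .Render("~/Content/combined_#.css");
        }

    Browser-specific CSS goes into its own bundle (as noted above), emitted inside a conditional comment so only IE downloads it.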

    Read the article
