Search Results

Search found 4955 results on 199 pages for 'range'.


  • Catch up with ‘In Touch’ on-demand

    - by rituchhibber
    We had another fantastic live broadcast of the ‘In Touch’ PartnerCast last week, which covered a range of topics and updates and answered your questions live on air. The cast started with host David Callaghan, Senior Vice President EMEA Alliances and Channels, updating us on the Rebate programme and focussing on the benefits this system offers. We were then introduced to Will O’Brien, VP Alliances & Channels, UK & Ireland, and Markus Reischl, Senior Director and Sales Leader EMEA Strategic Alliances, who discussed the headlines from Oracle OpenWorld from their point of view. Monia Bosetti sent in a video report discussing LMS and how this affects SIs, which sparked studio conversation between the guests and got you talking at your desks too! David also had the chance to talk with Platinum Partner Uptime Technology, who shared their best practice and examples of working with Oracle to achieve great results. The studio team ended the cast answering your questions live, which had some interesting results! Like the sound of this cast? You can watch on-demand here. Make sure you keep up to date with the ‘In Touch’ series by visiting the website here.

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized robot-filled warehouse of tomorrow, you’ll be quite surprised to find that the storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than robots, they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.
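    As a hedged aside (not from the article): the bookkeeping behind chaotic storage boils down to a random free-bin assignment plus a barcode index. The Python sketch below is a toy model of that idea; every name in it is hypothetical rather than anything Amazon has published.

        import random

        class ChaoticWarehouse:
            """Toy model of chaotic storage: items go into any free bin,
            and a barcode index remembers where everything ended up."""

            def __init__(self, num_bins):
                self.free_bins = list(range(num_bins))
                self.index = {}  # barcode -> bin number

            def store(self, barcode):
                # Shelve the item at random rather than by category.
                bin_id = self.free_bins.pop(random.randrange(len(self.free_bins)))
                self.index[barcode] = bin_id
                return bin_id

            def picking_list(self, barcodes):
                # Tell the picker which bin holds each ordered item.
                return [(code, self.index[code]) for code in barcodes]

        warehouse = ChaoticWarehouse(num_bins=1000)
        warehouse.store("B000EXAMPLE")
        print(warehouse.picking_list(["B000EXAMPLE"]))

    The flexibility the article describes falls out of this scheme directly: a new product line needs no new shelving plan, just enough free bins somewhere.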

    Read the article

  • They Wrote The Book On It

    - by steve.diamond
    First of all, an apology to you all for my not posting this yesterday, when I should have. For those of you bloggers out there, you know the difference between "Save" and "Preview." But I temporarily forgot it. Nevertheless, while I'm not impressed with this mishap, I'm blown away by the initiative three of my colleagues have taken. Jeff Saenger, Tim Koehler, and Louis Peters recently wrote a book, "Oracle CRM On Demand Deployment Guide." Not only that, they got this book PUBLISHED. These guys know their stuff. They have worked in the CRM industry for many years. And trust me, they command a lot of respect inside this organization. In the words of Louis Peters (who posted this verbiage yesterday on LinkedIn), "We've assembled all the best practices and lessons learned over the past six years working with CRM On Demand. The book covers a range of topics - working with SaaS-based applications, planning and executing a successful rollout, designing elegant and high-performing applications, and working effectively with Oracle. We even included several sample designs based on successful real-world deployments. Our main target audience is the CRM On Demand project team - sponsors, project managers, administrators, developers - really anyone planning, implementing or maintaining the application." Now these guys don't know it, but I'll be interviewing one of them and including audio excerpts of that conversation right here next Wednesday. In the meantime, if you want to learn more about successful CRM deployments in general, and working with Oracle CRM On Demand in particular, you should check out this book.

    Read the article

  • Kansas City .NET UG March Meeting – Tonight!!!!

    - by John Alexander
    Meeting tonight!!! Food! Great giveaways including a full license of Infragistics for a year! See you there!! Meeting for March 23rd, 2010. WHERE: Centriq Training, 8700 State Line Road, Leawood, KS. WHEN: 6:00 PM. TOPIC: Microsoft's Security Development Lifecycle for Agile development. Microsoft recently added secure development guidance for agile methodologies within their SDL. During this presentation, Nick will summarize the new guidance and discuss what makes this guidance successful for Agile development. SPEAKER: Nick Coblentz. Nick Coblentz is a senior consultant within AT&T Consulting Services' Application Security Practice. He focuses on helping organizations build mature application security programs and secure development processes. Nick has provided consulting services to Fortune 500 companies within the retail, financial services, banking, and health care sectors. SPONSOR: TEKsystems. TEKsystems® is the leading IT staffing and services company. Our capabilities span a wide range of services: from technical staff augmentation and direct placement services, to full management of IT projects and comprehensive workforce management solutions. With over 25 years of experience, we are experts at connecting technical professionals. Whether you are looking for the best IT talent, an experienced IT outsourcing partner, or a career in the IT industry, TEKsystems delivers.

    Read the article

  • Implementing fog of war in an OpenGL ES 2.0 game

    - by joxnas
    Hi game development community, this is my first question here! ;) I'm developing a tactics/strategy real-time Android game and I've been wondering for some time what's the best way to implement an efficient and somewhat nice-looking fog of war to incorporate in it. My experience with OpenGL or Android is not vast by any means, but I think it is sufficient for what I'm asking here. So far I have thought of some solutions: Draw white circles to a dark background, corresponding to the units' visibility, then render to a texture, and then draw a quad with that texture with blend mode set to multiply. Will this approach be efficient? Will it take too much memory? (I don't know how to render to texture and then use the texture. Is it too messy?) Have a grid object with a vertex shader which has an array of uniforms holding the coordinates of all units, and another array holding their visibility ranges. The number of units will very probably never be bigger than 100. The vertex shader needs to test, for each considered vertex, if there is some unit which can see it. In order to do this, it will have to loop the array with the coordinates and do some calculations based on distance. The efficiency of this is inversely proportional to the looks of it. A denser grid will result in a more beautiful fog of war... but will require a greater number of vertexes to be checked. Is it possible to find a nice compromise or is this a bad solution from the start? Which solution is the best? Are there better alternatives? Which ones? Thank you for your time.
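    Not part of the original question, but a rough GLSL ES sketch of the second option may make the trade-off concrete. Every name, the fade band and the loop bound are assumptions, and note that arrays of 100 positions and ranges can already exceed the vertex-uniform budget some ES 2.0 GPUs guarantee, which is one more argument for the render-to-texture approach.

        // Vertex shader for a fog-of-war grid overlay (sketch, hypothetical names).
        attribute vec2 a_position;        // grid vertex in world space
        uniform mat4 u_mvp;
        uniform int u_unitCount;          // number of live units (<= 100)
        uniform vec2 u_unitPos[100];      // unit positions
        uniform float u_unitRange[100];   // per-unit visibility radius
        varying float v_visibility;

        void main() {
            float vis = 0.0;
            for (int i = 0; i < 100; i++) {    // ES 2.0 wants constant loop bounds
                if (i >= u_unitCount) break;   // some drivers are picky about dynamic breaks
                float d = distance(a_position, u_unitPos[i]);
                // fully visible inside the radius, fading out over the last 20%
                vis = max(vis, 1.0 - smoothstep(0.8 * u_unitRange[i], u_unitRange[i], d));
            }
            v_visibility = vis;
            gl_Position = u_mvp * vec4(a_position, 0.0, 1.0);
        }

        // Matching fragment shader: the grid is drawn over the scene with
        // standard alpha blending, darkening whatever no unit can see.
        precision mediump float;
        varying float v_visibility;

        void main() {
            gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0 - v_visibility);
        }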

    Read the article

  • Service Level Logging/Tracing

    - by Ahsan Alam
    We all love to develop services, right? First-timers want to learn technologies like WCF and Web Services. Some simply want to build services; whereas others may find services a natural architectural decision for particular systems. Whatever the reason might be, services are commonly used in building a wide range of systems. Developers often encapsulate various functionality (small or big) within one or more services and expose them for multiple applications. Sometimes from day one (and definitely over time) these services may evolve into a set of black boxes. Services or not, black boxes or not, issues and exceptions are sometimes hard to avoid, especially in highly evolving and transactional systems. We can try to be methodical with our unit testing, QA and overall process; but we may not be able to avoid some types of system issues. When issues arise from one or more highly transactional services, it becomes necessary to resolve them very quickly. When systems handle thousands of transactions in a matter of hours, some issues may not surface immediately. That is when service level logging becomes very useful. Technologies such as WCF allow us to enable service level tracing with minimal effort; but that may not provide us with the complete picture. Developers may need to add tracing within critical areas of the code, with various degrees of verbosity. Programmers can always utilize a logging framework such as the 'Logging Application Block' to get the job done. It may seem overkill sometimes; but I have noticed from my experience that service level logging helps programmers trace many issues very quickly.
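    For reference, the "minimal effort" WCF tracing mentioned above is typically switched on in the service's config file along these lines; the listener path and switch level here are illustrative, not prescribed by the post.

        <system.diagnostics>
          <sources>
            <source name="System.ServiceModel"
                    switchValue="Information, ActivityTracing"
                    propagateActivity="true">
              <listeners>
                <!-- writes an .svclog readable by the Service Trace Viewer -->
                <add name="svcTrace"
                     type="System.Diagnostics.XmlWriterTraceListener"
                     initializeData="C:\logs\service_traces.svclog" />
              </listeners>
            </source>
          </sources>
        </system.diagnostics>

    The resulting .svclog file opens in the Service Trace Viewer (SvcTraceViewer.exe), which reassembles per-request activities; custom tracing added in code, for instance through the Logging Application Block, then fills in the application-level detail this view lacks.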

    Read the article

  • Oracle Outsourced Repair Solution: The “Control Tower” for the Reverse Supply Chain

    - by John Murphy
    By Hannes Sandmeier, Vice President of cMRO and Depot Repair Development

    Smart businesses are increasing their focus on core competencies and aggressively cutting costs in their supply chains. Outsourcing repairs can enable a business to focus on what they do best and most profitably, while delivering top-notch customer service through partners that specialize in reverse logistics and repair. A well managed “virtual service organization” can deliver fast turn times, lower costs and high customer satisfaction. A poorly managed partner network can deliver disaster for your business. Managing a virtual service organization requires accurate, real-time information and collaboration tools that enable smart, informed and immediate corrective action. To meet this need, Oracle has released the Oracle Outsourced Repair Solution to provide the “control tower” for managing outsourced reverse supply chain operations from customer complaint through remediation to partner claim settlement. The new solution provides real-time visibility to return status, location, turn time, discrepancies and partner performance. Additionally, its web portals allow partners and carriers to view assigned work, request parts, enter data, capture time and submit claims. Leveraging the combined power of Oracle E-Business Suite and Oracle E-Business Suite Extensions for Oracle Endeca, the Oracle Outsourced Repair Solution provides a comprehensive set of tools that range from quick online partner registration to partner claim reconciliation, from capturing parts and labor to Oracle Cost Management and Financials integration, and from part requisition to waste and hazmat controls. These tools empower service operations managers to:
    · Increase customer satisfaction: Ensure customers are satisfied by holding partners accountable for the speed and quality of repairs, and taking immediate corrective action when things go wrong
    · Reduce costs: Remove waste from the repair process using accurate job cost and cost breakdown data
    · Increase return velocity: Users have the tools to view all orders in flight and immediately know the current location, status, owner and contact point for repairs, so as to be able to remove bottlenecks, resolve discrepancies and manage escalations
    The Oracle Outsourced Repair Solution further demonstrates Oracle’s commitment to helping supply chain professionals and service managers deliver high customer satisfaction at the lowest cost. For more information on the Oracle Outsourced Repair Solution, visit here.

    Read the article

  • Follow point of interest by applying torque

    - by azymm
    Given a body with an orientation angle and a point of interest or targetAngle, is there an elegant solution for keeping the body oriented towards the point of interest by applying torque or impulses? I have a naive solution working below, but the effect is pretty 'wobbly': it'll overshoot each time, slowly getting closer to the target angle - an undesirable effect in my case. I'd like to find a solution that is more intelligent - that can accelerate to near the target angle, then decelerate and stop right at the target angle (or within a small range). If it helps, I'm using box2d and the body is a rectangle.

        def gameloop(dt):
            targetAngle = get_target_angle()
            bodyAngle = get_body_angle()
            deltaAngle = targetAngle - bodyAngle
            if deltaAngle > PI:
                deltaAngle = targetAngle - (bodyAngle + 2.0 * PI)
            if deltaAngle < -PI:
                deltaAngle = targetAngle - (bodyAngle - 2.0 * PI)
            # multiply by 2, for stronger reaction
            deltaAngle = deltaAngle * 2.0
            body.apply_torque(deltaAngle)

    One other thing: when the body has no linear velocity, the above solution works OK. But when the body has some linear velocity, it causes really wonky movement. Not sure why, but I would appreciate any hints as to why that might be.
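    A common cure for exactly this wobble (a hedged suggestion, not from the question) is to turn the proportional controller above into a proportional-derivative one: keep the term on the angle error, but subtract a term based on the body's current spin so it brakes as it approaches the target. A sketch, assuming a pybox2d-style body that exposes angularVelocity:

        from math import pi

        GAIN = 8.0      # pulls toward the target angle
        DAMPING = 2.0   # brakes against the current spin

        def normalize_angle(a):
            # wrap into [-pi, pi]; same intent as the if-blocks above
            while a > pi:
                a -= 2.0 * pi
            while a < -pi:
                a += 2.0 * pi
            return a

        def gameloop(dt):
            deltaAngle = normalize_angle(get_target_angle() - get_body_angle())
            # proportional term steers, derivative term damps the overshoot
            torque = GAIN * deltaAngle - DAMPING * body.angularVelocity
            body.apply_torque(torque)

    For a body with rotational inertia I, setting DAMPING near 2 * sqrt(GAIN * I) should land close to critical damping: fast approach, no overshoot.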

    Read the article

  • gl_PointCoord always zero

    - by Jonathan
    I am trying to draw point sprites in OpenGL with a shader, but gl_PointCoord is always zero. Here is my code. Setup (shader creation includes glBindAttribLocation(program, ATTRIB_P, "p");):

        glEnableVertexAttribArray(ATTRIB_P);

    In the rendering loop:

        glUseProgram(shader_particles);
        float vertices[] = {0.0f, 0.0f, 0.0f};
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_POINT_SPRITE);
        glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
        //glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); (tried with this on/off, doesn't work)
        glVertexAttribPointer(ATTRIB_P, 3, GL_FLOAT, GL_FALSE, 0, vertices);
        glDrawArrays(GL_POINTS, 0, 1);

    Vertex shader:

        attribute highp vec4 p;
        void main() {
            gl_PointSize = 40.0f;
            gl_Position = p;
        }

    Fragment shader:

        void main() {
            // if the coords range from 0-1, this should draw a square with
            // black, red, green and yellow corners
            gl_FragColor = vec4(gl_PointCoord.st, 0, 1);
        }

    But this only draws a black square with a size of 40. What am I doing wrong? Edit: Point sprites work when I use the fixed function, but I need to use shaders because in the end the code will be for OpenGL ES 2.0.

        glUseProgram(0);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_POINT_SPRITE);
        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
        glPointSize(40);
        glBegin(GL_POINTS);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glEnd();

    Is anyone able to get point sprites working with shaders? If so, please share some code.
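    A hedged aside rather than a confirmed fix: two things commonly bite with gl_PointCoord on desktop GL. First, the variable only entered desktop GLSL at version 1.20, so a fragment shader without an explicit #version directive may compile as 1.10, where it is undefined. Second, in a compatibility context GL_COORD_REPLACE is a per-texture-unit setting, so it must be set while the unit the sprite samples is active. On OpenGL ES 2.0 itself neither enable exists; point sprites are always on and gl_PointCoord should populate with no extra state.

        /* Desktop-GL checklist (sketch): */
        glActiveTexture(GL_TEXTURE0);  /* COORD_REPLACE applies to the active unit */
        glEnable(GL_POINT_SPRITE);
        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);

        /* Fragment shader with an explicit version declaration: */
        #version 120
        void main() {
            gl_FragColor = vec4(gl_PointCoord.st, 0.0, 1.0);
        }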

    Read the article

  • New Trusted Status awarded to first Mobile Java Developer

    - by Jacob Lehrbaum
    Java Verified has just announced that Gameloft is the first developer to receive its new Trusted Status! Java Verified is an industry-recognized Java testing and signing program backed and funded by companies such as AT&T, LG, Motorola, Nokia, Oracle, Orange, Samsung and Vodafone, and chartered with making it easier for mobile developers to certify and deploy applications for use across the billions of mobile handsets that run Java ME. Because of its breadth and diversity, Java ME provides an unmatched opportunity to reach more than 3 billion consumers, but at the same time developers are faced with the challenge of working with multiple distribution channels and a range of handsets. To this end, the Java Verified program provides a suite of tests that help to validate identity, functionality, integrity, and quality. Since its rebirth in 2010 as an independent organization, the Java Verified program has been actively working to make it even easier to create and distribute Java ME apps. Example initiatives include updates to the Unified Testing Criteria to make it easier to test “Simple Apps,” community outreach to better understand and address developer pain points, and a new “Trusted Status.” In the words of the Java Verified Program, Trusted Status is “a privileged status to be granted to developers who will have proven that the quality of their Java ME apps is of a consistently high standard. These are developers who will have earned the trust of Java Verified by demonstrating unfailingly that testing to the UTC standard is a crucial part of their product development activity.” The first developer to be awarded this status is Gameloft. By achieving Trusted Status, Gameloft can now test its applications to the Java Verified standard without needing to provide Java Verified with the evidence. The apps then automatically get signed with the Java Verified signature, enabling Gameloft to benefit from reduced costs and time-to-market for its new Java ME applications from here on out. Learn more about the exciting news or apply now for Trusted Status!

    Read the article

  • Profile of Scott L Newman

    - by Ratman21
    To: Whom It May Concern
    From: Scott L Newman
    Date: 4/23/2010
    Re: Profile

    Who is he, what can he do? Two very good questions. #1: I am a 20+ years experienced Information Technology Professional (hold on, don't hit delete yet!), who is not over the hill (I am on top of it) and still knows how to do (and can still do) that thing called work! #2: A can-do attitude that does not allow problems to sit unfixed. I have a broad range of skills, including:
    · Certified CompTIA A+, Security+ and Network+ Technician
    · 2.5 years (NOC) network experience on a large Cisco-based WAN - UK to Austria
    · 20 years experience in MIS/DP - yes, I can do IBM mainframes and Tandem NonStops too
    · 18 years experience as technical help desk support - panicking users, no problem
    · 18 years experience with PC/server based systems, intranet and internet systems
    · 10+ years experience with Microsoft Office, Windows XP and Data Network Fundamentals (YES, I do Windows)
    · Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    · Very experienced in working with customers on problems - again, panicking users, no problem
    · Working experience with remote access (VPN/SecurID) - I didn't just study them, I worked on/with them
    · Skilled in getting info for and creating documentation for operation procedures (I do not just wait for them to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    · Multiple software languages (hey, I have done some programming)
    And much more experience in “IT” (mortgage, stocks and financial information systems experience, and I have worked “IT” in a hospital). I can multitask, and have the ability to adapt to change and learn quickly. (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes fast. But I did it.) The summary is that I know what to do, know how to keep things going, and know how to fix it when it breaks.

    Scott L. Newman

    Read the article

  • Oracle Linux and Oracle VM Hardware Certification Program

    - by Durgam Vahia
    Oracle Linux and Oracle VM are continuing to see growth in the IHV (Independent Hardware Vendor) ecosystem. The Oracle Linux and Oracle VM Hardware Certification Program, also referred to as the HCL, provides a formal means for hardware vendors to work with Oracle to establish high-quality support for the certified hardware platform. Since the beginning of the program, a number of hardware partners have certified a range of server platforms on Oracle Linux and Oracle VM. Currently, the HCL lists over 400 certifications from 10 server vendors, and the list continues to grow at a rapid pace. A new hardware certification involves close collaboration between Oracle and the server partner to ensure that adequate testing is performed on the target server and the results are thoroughly reviewed. This rigorous process ensures that when a new hardware platform is listed on the HCL, it has full support from both Oracle and the respective partner. Additionally, once a certification is achieved with Oracle Linux with the current version of the Unbreakable Enterprise Kernel, future minor updates of the software continue to carry over the certification, reducing the need for re-certification. For the complete list of certified hardware, please visit Oracle Linux and Oracle VM Certified Hardware. Also refer to the Frequently Asked Questions for more information.

    Read the article

  • Into Orbit (OBIEE 11g Launch)

    - by Darryn.Hinett
    After much anticipation, it appears that OBIEE 11g is about to hit the streets. Join Charles Phillips, President, and Thomas Kurian, Executive Vice President, Product Development, for the launch of the latest release of Oracle's business intelligence software. Be the first to hear about Oracle Business Intelligence Enterprise Edition 11g, the new, industry-leading technology platform for business intelligence, which offers:
    · A powerful end-user experience with rich visualisation, search, and actionable collaboration
    · Advancements in analytics, OLAP, and enterprise reporting, with unmatched performance and scalability
    · Simplified system configuration, life-cycle management, and performance optimisation
    As well as the keynote and technical general session, breakout sessions will cover the following topics:

    Business Intelligence: From Insight to Action - In this session, you will learn about an exciting, industry-first innovation that connects business intelligence directly to your business processes. You can spot an opportunity or issue, and immediately initiate appropriate action directly from your dashboard.

    Oracle Business Intelligence Enterprise Edition 11g Systems Management and Deployment - Learn how you can streamline the process of configuring your system, provisioning users, and monitoring and optimising query performance. Attend this session to hear how new integration with Oracle Enterprise Manager provides unique systems management, superior scalability, and high availability and security benefits, while making upgrades effortless.

    Extending Business Intelligence Analytics with Online Analytical Processing (OLAP) - Learn how you can enhance the analytical power and business value of your BI solution with a unified environment for navigating and querying both OLAP and relational data sources. This session will focus on how Oracle Business Intelligence Enterprise Edition 11g, used with Oracle Essbase, can deliver insight at the speed of thought.

    Integrated Performance Management - If your organisation is using or considering performance management applications such as Oracle's Hyperion Planning and Hyperion Financial Management, you will not want to miss this session. See how you can leverage Oracle's BI solution for accessing performance management applications and performing extended financial reporting and analysis.

    Visualisation and End-user Experience - The latest release of Oracle Business Intelligence provides an unrivalled end-user experience, including rich interactive dashboards, a vast range of animated charting options, integrated search, and more. This session will also include a close look at how you can leverage location data to visualise geo-spatial information.

    Read the article

  • Imperative vs. LINQ Performance on WP7

    - by Bil Simser
    Jesse Liberty had a nice post presenting the concepts around imperative, LINQ and fluent programming to populate a listbox. Check out the post as it’s a great example of some foundational things every .NET programmer should know. I was more interested in what the IL code generated from the imperative vs. LINQ versions looks like, what the performance numbers are, and how they differ. The code at the instruction level is interesting but not surprising. The imperative example, with its list creation and loops, weighs in at about 60 instructions.

        .method private hidebysig instance void ImperativeMethod() cil managed
        {
            .maxstack 3
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.List`1<int32> inLoop,
                [2] int32 n,
                [3] class [mscorlib]System.Collections.Generic.IEnumerator`1<int32> CS$5$0000,
                [4] bool CS$4$0001)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: newobj instance void [mscorlib]System.Collections.Generic.List`1<int32>::.ctor()
            L_000f: stloc.1
            L_0010: nop
            L_0011: ldloc.0
            L_0012: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> [mscorlib]System.Collections.Generic.IEnumerable`1<int32>::GetEnumerator()
            L_0017: stloc.3
            L_0018: br.s L_003a
            L_001a: ldloc.3
            L_001b: callvirt instance !0 [mscorlib]System.Collections.Generic.IEnumerator`1<int32>::get_Current()
            L_0020: stloc.2
            L_0021: nop
            L_0022: ldloc.2
            L_0023: ldc.i4.5
            L_0024: cgt
            L_0026: ldc.i4.0
            L_0027: ceq
            L_0029: stloc.s CS$4$0001
            L_002b: ldloc.s CS$4$0001
            L_002d: brtrue.s L_0039
            L_002f: ldloc.1
            L_0030: ldloc.2
            L_0031: ldloc.2
            L_0032: mul
            L_0033: callvirt instance void [mscorlib]System.Collections.Generic.List`1<int32>::Add(!0)
            L_0038: nop
            L_0039: nop
            L_003a: ldloc.3
            L_003b: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
            L_0040: stloc.s CS$4$0001
            L_0042: ldloc.s CS$4$0001
            L_0044: brtrue.s L_001a
            L_0046: leave.s L_005a
            L_0048: ldloc.3
            L_0049: ldnull
            L_004a: ceq
            L_004c: stloc.s CS$4$0001
            L_004e: ldloc.s CS$4$0001
            L_0050: brtrue.s L_0059
            L_0052: ldloc.3
            L_0053: callvirt instance void [mscorlib]System.IDisposable::Dispose()
            L_0058: nop
            L_0059: endfinally
            L_005a: nop
            L_005b: ldarg.0
            L_005c: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB1
            L_0061: ldloc.1
            L_0062: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0067: nop
            L_0068: ret
            .try L_0018 to L_0048 finally handler L_0048 to L_005a
        }

    Compare that to the IL generated for the LINQ version, which has about half the instructions and just gets the job done, no fluff.

        .method private hidebysig instance void LINQMethod() cil managed
        {
            .maxstack 4
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> queryResult)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: ldloc.0
            L_000b: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0010: brtrue.s L_0025
            L_0012: ldnull
            L_0013: ldftn bool PerfTest.MainPage::<LINQProgramming>b__4(int32)
            L_0019: newobj instance void [System.Core]System.Func`2<int32, bool>::.ctor(object, native int)
            L_001e: stsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0023: br.s L_0025
            L_0025: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_002a: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0> [System.Core]System.Linq.Enumerable::Where<int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, bool>)
            L_002f: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0034: brtrue.s L_0049
            L_0036: ldnull
            L_0037: ldftn int32 PerfTest.MainPage::<LINQProgramming>b__5(int32)
            L_003d: newobj instance void [System.Core]System.Func`2<int32, int32>::.ctor(object, native int)
            L_0042: stsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0047: br.s L_0049
            L_0049: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_004e: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!1> [System.Core]System.Linq.Enumerable::Select<int32, int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, !!1>)
            L_0053: stloc.1
            L_0054: ldarg.0
            L_0055: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB2
            L_005a: ldloc.1
            L_005b: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0060: nop
            L_0061: ret
        }

    Again, not surprising here, but a good indicator that you should consider using LINQ where possible. In fact if you have ReSharper installed you’ll see a squiggly (technical term) in the imperative code that says “Hey Dude, I can convert this to LINQ if you want to be c00L!” (or something like that, it’s the 2010 geek version of Clippy). What about the fluent version? As Jon correctly pointed out in the comments, when you compare the IL for the LINQ code and the IL for the fluent code, it’s the same. LINQ and the fluent interface are just syntactic sugar, so you decide what you’re most comfortable with. At the end of the day they’re both the same. Now onto the numbers. Again I expected the imperative version to perform better than the LINQ version (before I saw the IL that was generated). Call it womanly instinct. A gut feel. Whatever. Some of the numbers are interesting though. For Jesse’s example of 50 items, the numbers were interesting. The imperative sample clocked in at 7ms while the LINQ version completed in 4. As the number of items went up, the elapsed time didn’t necessarily climb exponentially. At 500 items they were pretty much the same, and the results were similar up to about 50,000 items. After that I tried 500,000 items, where the gap widened but not by much (2.2 seconds for imperative, 2.3 for LINQ). It wasn’t until I tried 5,000,000 items that things were noticeable. Imperative filled the list in 20 seconds while LINQ took 8 seconds longer (although personally I wouldn’t suggest you put 5 million items in a list unless you want your users showing up at your door with torches and pitchforks). Here’s the table with the full results.

        Method/Items    50     500    5,000    50,000    500,000    5,000,000
        Imperative      7ms    7ms    38ms     223ms     2230ms     20974ms
        LINQ/Fluent     4ms    6ms    41ms     240ms     2310ms     28731ms

    Like I said, at the end of the day it’s not a huge difference, and you really don’t want your users waiting around for 30 seconds on a mobile device filling lists. In fact if Windows Phone 7 detects you’re taking more than 10 seconds to do any one thing, it considers the app hung and shuts it down. The results here are for Windows Phone 7, but frankly they’re the same for desktop and web apps, so feel free to apply them generally. From a programming perspective, choose what you like. Some LINQ statements can get pretty hairy, so I usually fall back on my simple mind and write it imperatively. If you really want to impress your friends, write it old school then let ReSharper do the hard work for you! Happy programming!
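    For readers who don’t speak IL, here is a hedged reconstruction of the two methods the listings above were compiled from, inferred from the instructions themselves (Enumerable.Range(1, 50), an n > 5 filter, an n * n projection, and the LB1/LB2 listboxes); Jesse’s original source may differ in detail.

        private void ImperativeMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);
            var inLoop = new List<int>();
            foreach (var n in someData)
            {
                if (n > 5)
                    inLoop.Add(n * n);
            }
            LB1.ItemsSource = inLoop;
        }

        private void LINQMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);
            var queryResult = from n in someData
                              where n > 5
                              select n * n;
            LB2.ItemsSource = queryResult;
        }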

    Read the article

  • Hello Operator, My Switch Is Bored

    - by Paul White
    This is a post for T-SQL Tuesday #43 hosted by my good friend Rob Farley. The topic this month is Plan Operators. I haven’t taken part in T-SQL Tuesday before, but I do like to write about execution plans, so this seemed like a good time to start. This post is in two parts. The first part is primarily an excuse to use a pretty bad play on words in the title of this blog post (if you’re too young to know what a telephone operator or a switchboard is, I hate you). The second part of the post looks at an invisible query plan operator (so to speak).

    1. My Switch Is Bored

    Allow me to present the rare and interesting execution plan operator, Switch. Books Online has this to say about Switch: Following that description, I had a go at producing a Fast Forward Cursor plan that used the TOP operator, but had no luck. That may be due to my lack of skill with cursors, I’m not too sure. The only application of Switch in SQL Server 2012 that I am familiar with requires a local partitioned view:

        CREATE TABLE dbo.T1 (c1 int NOT NULL CHECK (c1 BETWEEN 00 AND 24));
        CREATE TABLE dbo.T2 (c1 int NOT NULL CHECK (c1 BETWEEN 25 AND 49));
        CREATE TABLE dbo.T3 (c1 int NOT NULL CHECK (c1 BETWEEN 50 AND 74));
        CREATE TABLE dbo.T4 (c1 int NOT NULL CHECK (c1 BETWEEN 75 AND 99));
        GO
        CREATE VIEW V1
        AS
        SELECT c1 FROM dbo.T1
        UNION ALL
        SELECT c1 FROM dbo.T2
        UNION ALL
        SELECT c1 FROM dbo.T3
        UNION ALL
        SELECT c1 FROM dbo.T4;

    Not only that, but it needs an updatable local partitioned view. We’ll need some primary keys to meet that requirement:

        ALTER TABLE dbo.T1 ADD CONSTRAINT PK_T1 PRIMARY KEY (c1);
        ALTER TABLE dbo.T2 ADD CONSTRAINT PK_T2 PRIMARY KEY (c1);
        ALTER TABLE dbo.T3 ADD CONSTRAINT PK_T3 PRIMARY KEY (c1);
        ALTER TABLE dbo.T4 ADD CONSTRAINT PK_T4 PRIMARY KEY (c1);

    We also need an INSERT statement that references the view. Even more specifically, to see a Switch operator, we need to perform a single-row insert (multi-row inserts use a different plan shape):

        INSERT dbo.V1 (c1) VALUES (1);

    And now…the execution plan: The Constant Scan manufactures a single row with no columns. The Compute Scalar works out which partition of the view the new value should go in. The Assert checks that the computed partition number is not null (if it is, an error is returned). The Nested Loops Join executes exactly once, with the partition id as an outer reference (correlated parameter). The Switch operator checks the value of the parameter and executes the corresponding input only. If the partition id is 0, the uppermost Clustered Index Insert is executed, adding a row to table T1. If the partition id is 1, the next lower Clustered Index Insert is executed, adding a row to table T2…and so on. In case you were wondering, here’s a query and execution plan for a multi-row insert to the view:

        INSERT dbo.V1 (c1) VALUES (1), (2);

    Yuck! An Eager Table Spool and four Filters! I prefer the Switch plan. My guess is that almost all the old strategies that used a Switch operator have been replaced over time, using things like a regular Concatenation Union All combined with Start-Up Filters on its inputs. Other new (relative to the Switch operator) features like table partitioning have specific execution plan support that doesn’t need the Switch operator either. This feels like a bit of a shame, but perhaps it is just nostalgia on my part, it’s hard to know. Please do let me know if you encounter a query that can still use the Switch operator in 2012 – it must be very bored if this is the only possible modern usage!

    2. Invisible Plan Operators

    The second part of this post uses an example based on a question Dave Ballantyne asked using the SQL Sentry Plan Explorer plan upload facility. If you haven’t tried that yet, make sure you’re on the latest version of the (free) Plan Explorer software, and then click the Post to SQLPerformance.com button. That will create a site question with the query plan attached (which can be anonymized if the plan contains sensitive information). Aaron Bertrand and I keep a close eye on questions there, so if you have ever wanted to ask a query plan question of either of us, that’s a good way to do it.

    The problem

    The issue I want to talk about revolves around a query issued against a calendar table. The script below creates a simplified version and adds 100 years of per-day information to it:

        USE tempdb;
        GO
        CREATE TABLE dbo.Calendar
        (
            dt date NOT NULL,
            isWeekday bit NOT NULL,
            theYear smallint NOT NULL,
            CONSTRAINT PK__dbo_Calendar_dt
                PRIMARY KEY CLUSTERED (dt)
        );
        GO
        -- Monday is the first day of the week for me
        SET DATEFIRST 1;

        -- Add 100 years of data
        INSERT dbo.Calendar WITH (TABLOCKX)
            (dt, isWeekday, theYear)
        SELECT
            CA.dt,
            isWeekday = CASE WHEN DATEPART(WEEKDAY, CA.dt) IN (6, 7) THEN 0 ELSE 1 END,
            theYear = YEAR(CA.dt)
        FROM Sandpit.dbo.Numbers AS N
        CROSS APPLY
        (
            VALUES (DATEADD(DAY, N.n - 1, CONVERT(date, '01 Jan 2000', 113)))
        ) AS CA (dt)
        WHERE N.n BETWEEN 1 AND 36525;

    The following query counts the number of weekend days in 2013:

        SELECT Days = COUNT_BIG(*)
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    It returns the correct result (104) using the following execution plan: The query optimizer has managed to estimate the number of rows returned from the table exactly, based purely on the default statistics created separately on the two columns referenced in the query’s WHERE clause. (Well, almost exactly, the unrounded estimate is 104.289 rows.) There is already an invisible operator in this query plan – a Filter operator used to apply the WHERE clause predicates. We can see it by re-running the query with the enormously useful (but undocumented) trace flag 9130 enabled: Now we can see the full picture. The whole table is scanned, returning all 36,525 rows, before the Filter narrows that down to just the 104 we want. Without the trace flag, the Filter is incorporated in the Clustered Index Scan as a residual predicate. It is a little bit more efficient than using a separate operator, but residual predicates are still something you will want to avoid where possible. The estimates are still spot on though: Anyway, looking to improve the performance of this query, Dave added the following filtered index to the Calendar table:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear)
        WHERE isWeekday = 0;

    The original query now produces a much more efficient plan: Unfortunately, the estimated number of rows produced by the seek is now wrong (365 instead of 104): What’s going on? The estimate was spot on before we added the index!

    Explanation

    You might want to grab a coffee for this bit. Using another trace flag or two (8606 and 8612) we can see that the cardinality estimates were exactly right initially: The highlighted information shows the initial cardinality estimates for the base table (36,525 rows), the result of applying the two relational selects in our WHERE clause (104 rows), and after performing the COUNT_BIG(*) group by aggregate (1 row). All of these are correct, but that was before cost-based optimization got involved :)

    Cost-based optimization

    When cost-based optimization starts up, the logical tree above is copied into a structure (the ‘memo’) that has one group per logical operation (roughly speaking). The logical read of the base table (LogOp_Get) ends up in group 7; the two predicates (LogOp_Select) end up in group 8 (with the details of the selections in subgroups 0-6). These two groups still have the correct cardinalities, as the trace flag 8608 output (initial memo contents) shows: During cost-based optimization, a rule called SelToIdxStrategy runs on group 8. Its job is to match logical selections to indexable expressions (SARGs). It successfully matches the selections (theYear = 2013, isWeekday = 0) to the filtered index, and writes a new alternative into the memo structure. The new alternative is entered into group 8 as option 1 (option 0 was the original LogOp_Select): The new alternative is to do nothing (PhyOp_NOP = no operation), but to instead follow the new logical instructions listed below the NOP. The LogOp_GetIdx (full read of an index) goes into group 21, and the LogOp_SelectIdx (selection on an index) is placed in group 22, operating on the result of group 21. The definition of the comparison ‘theYear = 2013’ (ScaOp_Comp downwards) was already present in the memo starting at group 2, so no new memo groups are created for that.

    New cardinality estimates

    The new memo groups require two new cardinality estimates to be derived. First, LogOp_GetIdx (full read of the index) gets a predicted cardinality of 10,436. This number comes from the filtered index statistics:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH STAT_HEADER;

    The second new cardinality derivation is for the LogOp_SelectIdx applying the predicate (theYear = 2013). To get a number for this, the cardinality estimator uses statistics for the column ‘theYear’, producing an estimate of 365 rows (there are 365 days in 2013!):

        DBCC SHOW_STATISTICS (Calendar, theYear) WITH HISTOGRAM;

    This is where the mistake happens. Cardinality estimation should have used the filtered index statistics here, to get an estimate of 104 rows:

        DBCC SHOW_STATISTICS (Calendar, Weekends) WITH HISTOGRAM;

    Unfortunately, the logic has lost sight of the link between the read of the filtered index (LogOp_GetIdx) in group 21, and the selection on that index (LogOp_SelectIdx) in group 22 that it is deriving a cardinality estimate for. The correct cardinality estimate (104 rows) is still present in the memo, attached to group 8, but that group now has a PhyOp_NOP implementation. Skipping over the rest of cost-based optimization (in a belated attempt at brevity) we can see the optimizer’s final output using trace flag 8607: This output shows the (incorrect, but understandable) 365-row estimate for the index range operation, and the correct 104 estimate still attached to its PhyOp_NOP. This tree still has to go through a few post-optimizer rewrites and ‘copy out’ from the memo structure into a tree suitable for the execution engine. One step in this process removes PhyOp_NOP, discarding its 104-row cardinality estimate as it does so. To finish this section on a more positive note, consider what happens if we add an OVER clause to the query aggregate. This isn’t intended to be a ‘fix’ of any sort, I just want to show you that the 104 estimate can survive and be used if later cardinality estimation needs it:

        SELECT Days = COUNT_BIG(*) OVER ()
        FROM dbo.Calendar AS C
        WHERE theYear = 2013
        AND isWeekday = 0;

    The estimated execution plan is: Note the 365 estimate at the Index Seek, but the 104 lives again at the Segment! We can imagine the lost predicate ‘isWeekday = 0’ as sitting between the seek and the segment in an invisible Filter operator that drops the estimate from 365 to 104. Even though the NOP group is removed after optimization (so we don’t see it in the execution plan), bear in mind that all cost-based choices were made with the 104-row memo group present, so although things look a bit odd, it shouldn’t affect the optimizer’s plan selection. I should also mention that we can work around the estimation issue by including the index’s filtering columns in the index key:

        CREATE NONCLUSTERED INDEX Weekends
        ON dbo.Calendar (theYear, isWeekday)
        WHERE isWeekday = 0
        WITH (DROP_EXISTING = ON);

    There are some downsides to doing this, including that changes to the isWeekday column may now require Halloween Protection, but that is unlikely to be a big problem for a static calendar table ;) With the updated index in place, the original query produces an execution plan with the correct cardinality estimation showing at the Index Seek: That’s all for today, remember to let me know about any Switch plans you come across on a modern instance of SQL Server! Finally, here are some other posts of mine that cover other plan operators: Segment and Sequence Project, Common Subexpression Spools, Why Plan Operators Run Backwards, Row Goals and the Top Operator, Hash Match Flow Distinct, Top N Sort, Index Spools and Page Splits, Singleton and Range Seeks, Bitmaps, Hash Join Performance, Compute Scalar. © 2013 Paul White – All Rights Reserved. Twitter: @SQL_Kiwi

    Read the article

  • What is recommended minimum object size for gzip performance benefits?

    - by utt73
    I'm working on improving page display times, and one of the methods is to gzip content from the webserver. Google recommends: Note that gzipping is only beneficial for larger resources. Due to the overhead and latency of compression and decompression, you should only gzip files above a certain size threshold; we recommend a minimum range between 150 and 1000 bytes. Gzipping files below 150 bytes can actually make them larger. We serve our content through Akamai, using their network for a proxy and CDN. What they've told me: "Following up on your question regarding what is the minimum size at which Akamai will compress the requested object when sending it to the end user: the minimum size is 860 bytes." My reply: "What is the reason why Akamai's minimum size is 860 bytes? And why, for example, is this not the case for files Akamai serves for Facebook? (see below) Google recommends gzipping more aggressively. And that seems appropriate on our site, where the most frequent hits, by far, are AJAX calls that are <860 bytes." Akamai's response: "The reason 860 bytes is the minimum size for compression is twofold: (1) The overhead of compressing an object under 860 bytes outweighs the performance gain. (2) Objects under 860 bytes can be transmitted via a single packet anyway, so there isn't a compelling reason to compress them." So I'm here for some fact checking. Is packet size really the end of the reasoning behind the 860-byte limit? Why would high-traffic sites push this down to the 150-byte limit... just to save on bandwidth costs (since CDNs base their charges on bandwidth offloaded from origin), or is there a performance gain in doing so?
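    For comparison at the origin rather than the edge (a hedged sketch, since the post itself is about Akamai's behaviour): on an nginx origin the same threshold decision is a one-line knob, so either camp's number is easy to test.

        # Only gzip responses at or above Akamai's 860-byte floor;
        # drop this toward Google's 150-byte figure to trade CPU for bytes.
        gzip            on;
        gzip_min_length 860;
        gzip_comp_level 5;
        gzip_types      text/plain text/css application/json application/javascript;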

    Read the article

  • Elastic versus Distributed in caching.

    - by Mike Reys
    Until now, I hadn't heard about elastic caching. Today I read Mike Gualtieri's blog entry, and I immediately thought about Oracle Coherence and got a little scared as I read on. Elastic caching is the next step after distributed caching. As we've always positioned Coherence as a distributed cache, I thought for a brief instant that Oracle had missed a new trend/technology. But then I started reading the characteristics of an elastic cache. The Forrester definition: software infrastructure that provides application developers with data caching services that are distributed across two or more server nodes that
    · consistently perform as volumes grow
    · can be scaled without downtime
    · provide a range of fault-tolerance levels
    Hey, wait a minute, doesn't Coherence fulfill all these requirements? Oh yes, I think it does! The next definition in the article is about elastic application platforms. This is mainly more of the same, with the addition of code execution. Now, there is analytics functionality in Oracle Coherence. The analytics capability provides data-centric functions like distributed aggregation, searching and sorting. Coherence also provides continuous querying and event handling. I think that when it comes to providing an Elastic Application Platform (as in the Forrester definition), Oracle is close, nearly there. And what's more, as the elastic platform is the next big thing towards the big C word, Oracle Coherence makes you cloud-ready ;-) There you go! Find more info on Oracle Coherence here.
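    For anyone who hasn't touched it, the developer-facing surface of that distributed (or elastic) cache is deliberately small; a minimal Coherence sketch, with an illustrative cache name and keys:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;

        public class CoherenceSketch {
            public static void main(String[] args) {
                // The named cache is partitioned across the cluster's nodes;
                // adding nodes grows capacity without downtime.
                NamedCache cache = CacheFactory.getCache("orders");
                cache.put("order-42", "pending");
                System.out.println(cache.get("order-42"));
                CacheFactory.shutdown();
            }
        }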

    Read the article

  • What's up with LDoms: Part 1 - Introduction & Basic Concepts

    - by Stefan Hinker
    LDoms - the correct name is Oracle VM Server for SPARC - have been around for quite a while now. But to my surprise, I get more and more requests to explain how they work or to give advice on how to make good use of them. This made me think that writing up a few articles discussing the different features would be a good idea. Now - I don't intend to rewrite the LDoms Admin Guide or to copy and reformat the (hopefully) well known "Beginners Guide to LDoms" by Tony Shoumack from 2007. Those documents are very recommendable - especially the Beginners Guide, although based on LDoms 1.0, is still a good place to begin with. However, LDoms have come a long way since then, and I hope to contribute to their adoption by discussing how they work and what features there are today. In this and the following posts, I will use the term "LDoms" as a common abbreviation for Oracle VM Server for SPARC, just because it's a lot shorter and easier to type (and presumably, read).

    So, just to get everyone on the same baseline, let's briefly discuss the basic concepts of virtualization with LDoms. LDoms make use of a hypervisor as a layer of abstraction between real, physical hardware and virtual hardware. This virtual hardware is then used to create a number of guest systems which each behave very similarly to a system running on bare metal: each has its own OBP, each will install its own copy of the Solaris OS and each will see a certain amount of CPU, memory, disk and network resources available to it. Unlike some other type 1 hypervisors running on x86 hardware, the SPARC hypervisor is embedded in the system firmware and makes use both of supporting functions in the sun4v SPARC instruction set and of the overall CPU architecture to fulfill its function.

    The CMT architecture of the supporting CPUs (T1 through T4) provides a large number of cores and threads to the OS. For example, the current T4 CPU has eight cores, each running 8 threads, for a total of 64 threads per socket. To the OS, this looks like 64 CPUs. The SPARC hypervisor, when creating guest systems, simply assigns a certain number of these threads exclusively to one guest, thus avoiding the overhead of having to schedule OS threads to CPUs, as typical x86 hypervisors do. The hypervisor only assigns CPUs and then steps aside. It is not involved in the actual work being dispatched from the OS to the CPU; all it does is maintain isolation between different guests. Likewise, memory is assigned exclusively to individual guests. Here, the hypervisor provides generic mappings between the physical hardware addresses and the guest's views on memory. Again, the hypervisor is not involved in the actual memory access, it only maintains isolation between guests.

    During the initial setup of a system with LDoms, you start with one special domain, called the Control Domain. Initially, this domain owns all the hardware available in the system, including all CPUs, all RAM and all IO resources. If you were running the system un-virtualized, this would be what you'd be working with. To allow for guests, you first resize this initial domain (also called a primary domain in LDoms speak), assigning it a small amount of CPU and memory. This frees up most of the available CPU and memory resources for guest domains. IO is a little more complex, but very straightforward. When LDoms 1.0 first came out, the only way to provide IO to guest systems was to create virtual disk and network services and attach guests to these services. In the meantime, several different ways to connect guest domains to IO have been developed, the most recent one being SR-IOV support for network devices released in version 2.2 of Oracle VM Server for SPARC. I will cover these more advanced features in detail later. For now, let's have a short look at the initial way IO was virtualized in LDoms.

    For virtualized IO, you create two services, one "Virtual Disk Service" or vds, and one "Virtual Switch" or vswitch. You can, of course, also create more of these, but that's more advanced than I want to cover in this introduction. These IO services now connect real, physical IO resources like a disk LUN or a network port to the virtual devices that are assigned to guest domains. For disk IO, the normal case would be to connect a physical LUN (or some other storage option that I'll discuss later) to one specific guest. That guest would be assigned a virtual disk, which would appear to be just like a real LUN to the guest, while the IO is actually routed through the virtual disk service down to the physical device. For network, the vswitch acts very much like a real, physical ethernet switch - you connect one physical port to it for outside connectivity and define one or more connections per guest, just like you would plug cables between a real switch and a real system. For completeness, there is another service that provides console access to guest domains which mimics the behavior of serial terminal servers.

    The connections between the virtual devices on the guest's side and the virtual IO services in the primary domain are created by the hypervisor. It uses so-called "Logical Domain Channels" or LDCs to create point-to-point connections between all of these devices and services. These LDCs work very similarly to high speed serial connections and are configured automatically whenever the Control Domain adds or removes virtual IO.

    To see all this in action, let's now look at a first example. I will start with a newly installed machine and configure the control domain so that it's ready to create guest systems. In a first step, after we've installed the software, let's start the virtual console service and downsize the primary domain.

        root@sun # ldm list
        NAME      STATE   FLAGS   CONS   VCPU   MEMORY    UTIL   UPTIME
        primary   active  -n-c--  UART   512    261632M   0.3%   2d 13h 58m
        root@sun # ldm add-vconscon port-range=5000-5100 \
                   primary-console primary
        root@sun # svcadm enable vntsd
        root@sun # svcs vntsd
        STATE     STIME     FMRI
        online    9:53:21   svc:/ldoms/vntsd:default
        root@sun # ldm set-vcpu 16 primary
        root@sun # ldm set-mau 1 primary
        root@sun # ldm start-reconf primary
        root@sun # ldm set-memory 7680m primary
        root@sun # ldm add-config initial
        root@sun # shutdown -y -g0 -i6

    So what have I done: I've defined a range of ports (5000-5100) for the virtual network terminal service and then started that service. The vnts will later provide console connections to guest systems, very much like serial NTSs do in the physical world. Next, I assigned 16 vCPUs (on this platform, a T3-4, that's two cores) to the primary domain, freeing the rest up for future guest systems. I also assigned one MAU to this domain. A MAU is a crypto unit in the T3 CPU. These need to be explicitly assigned to domains, just like CPU or memory. (This is no longer the case with T4 systems, where crypto is always available everywhere.) Before I reassigned the memory, I started what's called a "delayed reconfiguration" session. That avoids actually doing the change right away, which would take a considerable amount of time in this case. Instead, I'll need to reboot once I'm all done. I've assigned 7680MB of RAM to the primary. That's 8GB less the 512MB which the hypervisor uses for its own private purposes. You can, depending on your needs, work with less. I'll spend a dedicated article on sizing, discussing the pros and cons in detail. Finally, just before the reboot, I saved my work on the ILOM, to make this configuration available after a powercycle of the box. (It'll always be available after a simple reboot, but the ILOM needs to know the configuration of the hypervisor after a power-cycle, before the primary domain is booted.)

    Now, let's create a first disk service and a first virtual switch which is connected to the physical network device igb2. We will later use these to connect virtual disks and virtual network ports of our guest systems to real world storage and network.

        root@sun # ldm add-vds primary-vds
        root@sun # ldm add-vswitch net-dev=igb2 switch-primary primary

    You are free to choose whatever names you like for the virtual disk service and the virtual switch. I strongly recommend that you choose names that make sense to you and describe the function of each service in the context of your implementation. For the vswitch, for example, you could choose names like "admin-vswitch" or "production-network" etc.

    This already concludes the configuration of the control domain. We've freed up considerable amounts of CPU and RAM for guest systems and created the necessary infrastructure - console, vds and vswitch - so that guest systems can actually interact with the outside world. The system is now ready to create guests, which I'll describe in the next section. For further reading, here are some recommendable links: The LDoms 2.2 Admin Guide, the "Beginners Guide to LDoms", the LDoms Information Center on MOS, and LDoms on OTN.
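    Not from this installment (the author defers guest creation to the next post), but as a hedged taste of how the services just created are consumed, a guest definition typically looks like the following; the domain name, sizes and backing LUN are all hypothetical.

        root@sun # ldm add-domain guest0
        root@sun # ldm set-vcpu 8 guest0
        root@sun # ldm set-memory 8g guest0
        root@sun # ldm add-vdsdev /dev/dsk/c0t1d0s2 guest0-vol@primary-vds
        root@sun # ldm add-vdisk vdisk0 guest0-vol@primary-vds guest0
        root@sun # ldm add-vnet vnet0 switch-primary guest0
        root@sun # ldm bind guest0
        root@sun # ldm start guest0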

    Read the article

  • Oracle’s AutoVue Enables Visual Decision Making

    - by Pam Petropoulos
That old saying about a picture being worth a thousand words has never been truer. Check out the latest reports from IDC Manufacturing Insights which highlight the importance of incorporating visual information in all facets of decision making and the role that Oracle’s AutoVue Enterprise Visualization solutions can play. Take a look at the excerpts below and be sure to click on the titles to read the full reports.

Technology Spotlight: Optimizing the Product Life Cycle Through Visual Decision Making, August 2012

Manufacturers find it increasingly challenging to make effective product-related decisions as the result of expanded technical complexities, elongated supply chains, and a shortage of experienced workers. These factors challenge the traditional methodologies companies use to make critical decisions. However, companies can improve decision making by the use of visual decision making, which synthesizes information from multiple sources into highly usable visual context and integrates it with existing enterprise applications such as PLM and ERP systems. Product-related information presented in a visual form and shared across communities of practice with diverse roles, backgrounds, and job skills helps level the playing field for collaboration across business functions, technologies, and enterprises. Visual decision making can contribute to manufacturers making more effective product-related decisions throughout the complete product life cycle. This Technology Spotlight examines these trends and the role that Oracle's AutoVue and its Augmented Business Visualization (ABV) solution play in this strategic market.

Analyst Connection: Using Visual Decision Making to Optimize Manufacturing Design and Development, September 2012

In today's environments, global manufacturers are managing a broad range of information. Data is often scattered across countless files throughout the product life cycle, generated by different applications and platforms. Organizations are struggling to utilize these multidisciplinary sources in an optimal way. Visual decision making is a strategy and technology that can address this challenge by integrating and widening access to digital information assets. Integrating with PLM and ERP tools across engineering, manufacturing, sales, and marketing, visual decision making makes digital content more accessible to employees and partners in the supply chain. The use of visual decision-making information rendered in the appropriate business context and shared across functional teams contributes to more effective product-related decision making and positively impacts business performance.

    Read the article

  • CodePlex Daily Summary for Saturday, May 12, 2012

CodePlex Daily Summary for Saturday, May 12, 2012

Popular Releases

KanboxAPI: KanboxAPI beta: ????? Token Info List Download
Media Companion: Media Companion 3.502b: It has been a slow week, but this release addresses a couple of recent bugs. Movies: Multi-part Movies - existing .nfo files that differed in name from the first part were missed and scraped again. Trailers - MC attempted to scrape info for existing trailers. TV Shows: Show Scraping - shows available only in the non-default language would not show up in the main browser. The correct language can now be selected using the TV Show Selector for a single show. General: Will no longer prompt for ...
NewLife XCode ??????: XCode v8.5.2012.0508、XCoder v4.7.2012.0320: X????: 1,????For .Net 4.0?? XCoder????: 1,???????,????X????,?????? XCode????: 1,Insert/Update/Delete???????????????,???SQL???? 2,IEntityOperate?????? 3,????????IEntityTree 4,????????????????? 5,?????????? 6,??????????????
dycom: v1.0: DYCom ????????: Silverlight, Windows Phone 7.5.
NETMF_for_STM32: Beta 1 Release: First public beta release.
Google Book Downloader: Google Books Downloader Lite 1.0: Google Books Downloader Lite 1.0
Python Tools for Visual Studio: 1.5 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Alpha. Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports CPython, IronPython, Jython and PyPy • Python editor with advanced member and signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging...
JayData - The cross-platform HTML5 data-management library for JavaScript: JayData 1.0 RC1 Refresh 1: JayData is a unified data access API to webSQL, indexedDB, OData, Facebook and YQL. Overview: The major feature of this release is related to the OData provider; FunctionImport is now generally supported, so you can consume OData service operations (WebMethods). We extended the JaySvcUtil to generate the necessary metadata. We included many fixes, such as the Visual Studio 2010 IntelliSense optimization (RC1 was optimized only for VS11). It's recommended to upgrade...
AD Gallery: AD Gallery 1.2.7: News: Fixed a bug which caused the current thumbnail not to be highlighted. Added a hook to take complete control over how descriptions are handled; take a look under Documentation for more info. Added removeAllImages().
51Degrees.mobi - Mobile Device Detection and Redirection: 2.1.4.8: One Click Install from NuGet. Data Changes: Includes 42 new browser properties in both the Lite and Premium data sets. Premium Data includes many new devices including Nokia Lumia 900, BlackBerry 9220 and HTC One, the Samsung Galaxy Tab 2 range and Samsung Galaxy S III. Lite data includes devices released in January 2012. Changes to Version 2.1.4.8: 1. The IsFirstTime method of the RedirectModule will now return the same value when called multiple times for the same request. This was prevent...
Mugen Injection: Mugen Injection ver 2.2 (WinRT supported): Added NamedParameterAttribute, OptionalParameterAttribute. Added behaviors ICycleDependencyBehavior, IResolveUnregisteredTypeBehavior. Added WinRT support. Added support for .NET 4.5. Added support for MVC 4.
NShape - .Net Diagramming Framework for Industrial Applications: NShape 2.0.1: Changes in 2.0.1: Bugfixes: IRepository.Insert(Shape shape) and IRepository.Insert(IEnumerable<Shape> shapes) no longer insert shape connections. Several context menu items did display although the required permission was not granted. Display did not reset the visible and active layers when changing the diagram. NullReferenceException when pressing Del key and no shape was selected. Changed Behavior: LayerCollection.Find("") no longer throws an exception. Improvements: Display does not rese...
AcDown????? - Anime&Comic Downloader: AcDown????? v3.11.6: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ●AcDown???????????????????????????,???,???????????????????。 ●AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? ??????????????,??????????: ??"AcDo...
sb0t: sb0t 4.64: New commands added: #scribble <url>, #adminscribble on, #adminscribble off
Document.Editor: 2012.4: What's new for Document.Editor 2012.4: Improved Template support. Improved Options Dialog. Minor bug fixes, improvements and speed-ups.
Json.NET: Json.NET 4.5 Release 5: New feature - Added ItemIsReference, ItemReferenceLoopHandling, ItemTypeNameHandling, ItemConverterType to JsonPropertyAttribute. New feature - Added ItemRequired to JsonObjectAttribute. New feature - Added Path to JsonWriterException. Change - Improved deserializer call stack memory usage. Change - Moved the PDB files out of the NuGet package into a symbols package. Fix - Fixed infinite loop from an input error when reading an array and error handling is enabled. Fix - Fixed base objec...
BlackJumboDog: Ver5.6.1: 2012.05.07 Ver5.6.1 (1)????????????????(Ver5.6.0??)??? (2)HTTP?????SSL????????????(Ver5.6.0??)??? (3)HTTP?????2G??????????????????????????? (4)HTP???? ?????????
ExtAspNet: ExtAspNet v3.1.5: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????: Apache License 2.0 (Apache) ??: http://extasp.net/ ??: http://bbs.extasp.net/ ??: http://extaspnet.codeplex.com/ ??: http://sanshi.cnblogs.com/ ????: +2012-05-06 v3.1.5 -????????:grid/grid_twogrid.aspx。 +?...
SharpDevelop: SharpDevelop 4.2: Please see http://community.sharpdevelop.net/forums/t/15772.aspx for the release announcement.
Desktop Google Reader: 1.4.4: Taskbar icon overlay (number of unread items) can now be switched off in preferences (Windows Vista / 7 only). Maximize button can now be toggled to be fullscreen (as before) or only normal maximize (taskbar stays visible) in preferences. List of feeds is now sorted alphabetically.

New Projects

3D Scene Editor: A generic 3D level editor built using XNA to speed up designing and building game levels.
Attribute Based Cache using Unity Interception: Unity interception handler attribute for caching which allows applying the boilerplate caching pattern to classes and class members directly, without configuring them in the application configuration file. Configure your choice of cache provider (ObjectCache, Azure included) in the Unity IoC container and apply the attribute to the method which you want to cache.
AutoCompleteBox for WinRT: None yet
BAC2 Bachelor's Thesis Source Code: The source code of my Bachelor's Thesis
BAMabase: It's a bamabase.
BBSProject: BBSProject
Brain 2 - Game Engine: Brain 2 is a game engine that runs on multiple platforms.
cobra-winldtp: Cobra - Windows version of the Linux Desktop Testing Project (WinLDTP) - http://ldtp.freedesktop.org. LDTP is a GUI test automation tool that works on both Windows and Linux. The Windows GUI test automation tool is written in C#, and test scripts can be written in Python for now. A Ruby API will be added soon.
CS322: C# programming language
DoxBotPlugin: The DoxBotPlugin is a plug-in for IceChat 9 that allows a user to use IceChat 9 as a normal IRC client and to switch on bot mode in certain channels to have DoxBotPlugin act on their behalf.
dycom: DYCom??(DY Communication)???????????,?????????????????.????????????????. ??????????????????????,?????????????????。
Eyes Protector: A program that helps protect the eyes from fatigue caused by working too long at the computer.
kshell: ????Linux??????
Lumia: This is the Lumia project.
MacroDoc: MacroDoc is an engine written in C# for composing documents from reusable pieces of structure and content.
Mssql: This is a SQL Server project.
NETMF_for_STM32: This is the CodePlex project for NETMF for STM32 (F4 Edition).
Ocular - a free, open source WYSIWYG editor for HTML: Ocular is a free C# WYSIWYG HTML editor, similar to Adobe Dreamweaver. We are always looking for contributors, so please help us!
PeopleCredit: Prototype web service to maintain all employee credits.
Project Server workflow: This workflow creates the project site for the Basic project plan EPT when the workflow task is approved. (This is the correction done over the Branching workflow provided with the Project Server 2010 SDK.) The workflow task is created using the PSWApprovalTask content type in the Project Server Workflow Task List.
Quick Reminder: A program that helps save quick reminders while working at the computer.
Random Projects: My random projects...
Recommendation Engine Demo: How does the Amazon recommendation work? This is about visualizing the item-to-item collaborative filtering mechanism using an item-to-item matrix table. The item-to-item matrix, the vectors and the calculated data values are displayed. There are n different items and the item recommendation can display up to m items. Different item-to-item neighborhood functions are implemented: a simple max count of seen neighbor items, the Cosine Similarity and the Jaccard Index. A t...
SaveSeaTurtle: Sea Turtle
SharpGpx: SharpGpx implements an object model for reading and writing GPX (GPS eXchange Format).
SlimDo: SlimDo is a scripting language coded in C#
Spring: This is a Spring.Net project
SQL Server Quick Tools Pack: SQL Server Quick Tools Pack for your SQL Server
SuLD framework: Supported Link Discovery framework (SuLD) is a tool to discover links to multiple Linked Data datasets. Finding is supported by various features like a synonym module and autocomplete.
TQuery.Net: .Net??????
WAAP - World of Warcraft Auction house Analysis Project: A project in which we try to analyse prices and auctioneers on the various World of Warcraft auction houses.
xhttp.net: The Xhttp.Net framework is a .NET implementation of the Extended Hypertext Transfer Protocol (http://www.xhttp.org), with simple service integration, full argument support including Base64 and DateTime, single and multiple asynchronous requests, data streaming, remote API creation from XHTTP service schemas, and a runtime plugin architecture.

    Read the article

  • The Earth at Night [Video]

    - by Jason Fitzpatrick
This fresh video from NASA provides the clearest view of the Earth at night ever seen, thanks to the Suomi National Polar-orbiting Partnership Satellite. Check out the video and accompanying pics to see the stunning views. In daylight our big blue marble is all land, oceans and clouds. But the night – is electric. This view of Earth at night is a cloud-free view from space as acquired by the Suomi National Polar-orbiting Partnership Satellite (Suomi NPP). A joint program by NASA and NOAA, Suomi NPP captured this nighttime image by the satellite’s Visible Infrared Imaging Radiometer Suite (VIIRS). The day-night band on VIIRS detects light in a range of wavelengths from green to near infrared and uses filtering techniques to observe signals such as city lights, gas flares, and wildfires. This new image is a composite of data acquired over nine days in April and thirteen days in October 2012. It took 312 satellite orbits and 2.5 terabytes of data to get a clear shot of every parcel of land surface. This video uses the Earth at night view created by NASA’s Earth Observatory with data processed by NOAA’s National Geophysical Data Center and combined with a version of the Earth Observatory’s Blue Marble: Next Generation. Hit up the link below for the full NASA press release, including more videos and photos.

    Read the article

  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to using comma-delimited strings and parsing out each value into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within a SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

Customer Transaction Summary Example

Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

Start Date – beginning of the date range for which the report will summarize customer transactions
End Date – end of the date range for which the report will summarize customer transactions
Customer Ids – one or more customer Ids representing the customers that will be included in the report

The simplest way to get started with this report will be to create a new dataset and point it at our Customer Transaction Summary report stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008). When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine what the parameters and output fields are for you automatically. As part of this process the following dialog pops up:

Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter, since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting this error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I think that behind the scenes the SSRS designer is trying to give the @customerIds parameter an nvarchar-typed SqlParameter, which is causing the issue. When I first saw this error I figured that this might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO.NET, so I figured that I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted.

The Text Query Approach

Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure.
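As a reminder, the ‘@customerIds’ parameter is of the IntegerListTableType user-defined type created in the last post. For reference, here is a minimal sketch of what that type definition could look like (the column name is an assumption here; the real definition lives in the previous post):

CREATE TYPE IntegerListTableType AS TABLE
(
    Value INT NOT NULL  -- one integer id per row
);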
In order for this to work properly the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work.

Manually Define Parameters

First, we’ll need to manually define the parameters for the report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll need to define ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a multi-valued Integer parameter. In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs.

Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area:

1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
2: ' EXEC rpt_CustomerTransactionSummary
3: @startDate=''' + @startDate + ''',
4: @endDate=''' + @endDate + ''',
5: @customerIds=@customerIdList')

By using the ‘Text’ query type we can enter any arbitrary SQL that we want to and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters:

@customerIdInserts – This will be a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable that is being declared in the SQL. This parameter won’t actually ever get passed into the stored procedure. I’ll go into how this will work in a bit.
@startDate – This is a simple date parameter that will get passed through directly into the @startDate parameter of the stored procedure on line 3.
@endDate – This is another simple date parameter that will get passed through into the @endDate parameter of the stored procedure on line 4.

At this point the dataset designer will be able to correctly parse the query and should even be able to detect the fields that the stored procedure will return, without needing to specify any values for the query when prompted to. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post, so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):

Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
    Dim insertStatements As New System.Text.StringBuilder()
    For Each paramValue As Object In paramValues
        insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
    Next
    Return insertStatements.ToString()
End Function

This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct INSERT statements that will insert each value from the object array into the provided variable name. Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to:

=Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)

In order to invoke our custom code method we simply need to invoke “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter will come from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report.

Limitations And Final Thoughts

When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously, the size of the actual query could increase pretty dramatically if you have a parameter that has a lot of potential values or you need to support several different table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you might need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! I think that it should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports, I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using multi-valued parameters in a stored procedure.
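To make the hack concrete, here is roughly what the assembled text query ends up executing at run time when a user selects three customers (the customer ids and dates below are made up for illustration; each INSERT line is produced by the BuildIntegerListInserts function above, and the rest comes straight from the text query):

declare @customerIdList IntegerListTableType
INSERT @customerIdList VALUES (1)
INSERT @customerIdList VALUES (5)
INSERT @customerIdList VALUES (12)
EXEC rpt_CustomerTransactionSummary
    @startDate='2012-01-01',
    @endDate='2012-03-31',
    @customerIds=@customerIdList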

    Read the article

  • Squeezing hardware

    - by [email protected]
It's very common that high availability means duplicated hardware, so costs go up. Nowadays, CIOs and DBAs face the main challenge of reducing the money spent while increasing performance and availability. Since Grid Infrastructure 11gR2, there is a new feature that helps them meet this challenge: Server Pools.

In Grid Infrastructure 11gR2, you can define server pools across the cluster, setting the minimum number of servers, the maximum, and how important the pool is.

For example, consider that "Velasco, Boixeda & co" has 3 apps in a 6-server cluster:

1. The first one is the main core business app
2. The second one is a mid-range app
3. The third is a database that is not very important

We define the following resource requirements for the expected workload:

1. The main app requires two servers
2. The mid-range app requires one server
3. The third app is not required in case of disaster

Then we define 3 server pools across the cluster (a crsctl sketch of these definitions appears at the end of this post):

1. Main pool: min two servers, max three servers, importance four
2. Mid pool: min one server, max two servers, importance two
3. Test pool: min zero servers, max one server, importance one

So the initial configuration is:

- Main pool has three servers
- Mid pool has two servers
- Test pool has one server

Logically, we can see the cluster like this:

If any server fails, the following algorithm will be applied:

1. The server pool of least importance is chosen
2. If server pools are of the same importance, then the server pool that has more than its defined minimum of servers is chosen

Hope it helps
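For reference, here is a minimal sketch of how the three pools from the example could be created with crsctl in Grid Infrastructure 11gR2. The pool names are made up, and the attribute values should match your own requirements:

root@node1 # crsctl add serverpool main_pool -attr "MIN_SIZE=2, MAX_SIZE=3, IMPORTANCE=4"
root@node1 # crsctl add serverpool mid_pool  -attr "MIN_SIZE=1, MAX_SIZE=2, IMPORTANCE=2"
root@node1 # crsctl add serverpool test_pool -attr "MIN_SIZE=0, MAX_SIZE=1, IMPORTANCE=1"
root@node1 # crsctl status serverpool       (shows how the cluster distributed its servers)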

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
One of the initiatives I’m involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product.  What this generally translates to is a day and a bit each week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio.  There is also a separate component of this effort outside of TTT, which is to help with customer scenarios and design. I enjoy being on TTT because it allows me the opportunity to look at the entire product and gain expertise in a wide range of areas.  This week, I’m looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention. First of all, here’s a link to one of the many Connect bugs describing the problem: Microsoft Connect. I like this problem because it really highlights the challenges of reproducing customer bugs.  There aren’t any clear steps provided here, and I don’t know a lot about your environment: not just the basics like your OS version, but also what third party plug-ins or antivirus software you might be running that might contribute to the problem.  In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports.  Here’s another thread where users talk about it: Microsoft Connect. The volume and different configurations are staggering.  From a customer perspective, this is a very clear-cut case of basic functionality not working in the product, but from our perspective, it’s hard to find something reproducible: even customers don’t quite agree on what causes the problem (installing ReSharper seems to cause a problem…or does it?). So this, then, is the start of a QA investigation. If anybody has isolated repro steps that they can provide (just comment on this post), this will immensely help us nail down the issue(s). I’ll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Fatal Scroll…

    - by farid
Hi. Actually I am glad to be writing with the geekswithblogs service! I decided to write a blog to improve my skills on different aspects. This post’s title is “Fatal Scroll”. The motivation for this post was the process of changing my blog theme. When I was trying to change the blog theme, I encountered a killer scroll in the configuration page of the blog. You can see a sample in this picture (10 inch screen). All I saw on my screen without scrolling was that. I tried to change my blog theme a few times, but the scroll slowed down every try!! After all that, I gave up changing the FK theme!! In my opinion, there is a checklist for designing efficient and useful forms (if you care about it!!). First of all, don’t forget the wide range of screen sizes and screen resolutions. Second, always consider the cost of checking the changes made in fields. Third, never forget the scroll: the scroll should not hide any main functionality (like save, in this case). Fourth, don’t use real data to preview the result (like loading the full blog to check a new theme). And don’t forget, I didn’t say this is a definitive checklist for data entry form usability testing!  That’s it! MY FIRST BLOG POST!!

    Read the article

< Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >