Search Results

Search found 1675 results on 67 pages for 'basis vasis'.


  • Silverlight Cream Monday WP7 App Review # 2

    - by Dave Campbell
    Today's review (alphabetical order): GooNews, Grocery Shopping List, Need for Speed, SurfCube, and United Nations News. I'm a day late if these are going to be 'Monday' posts, but there are lots of apps, lots of goodness, and lots of email, so I might try to do 2 a week, we'll see. So once again I've got a small review of 5 apps that are either on my phone or have been. Disclaimers at the end. In this issue:

    GooNews is a very cool app from Shawn Wildermuth (AgiliTrain). I don't know if he uses this as a demo during his instruction, but it definitely serves a purpose... wanna pick up the top news items from Google on a never-ending basis? ... this is it. You can add your own keyword searches, and send stories to InstaPaper or share via email. I like this because it brings me the news quickly and updated, and works great. GooNews is by AgiliTrain and is Free.

    Grocery Shopping List was a request by the author, and actually surprised me. I'm a big one for lists, but I would have just done a OneNote list to SkyDrive and to my phone. This app is a lot more than that, but will take you some setup to make it 'yours'. For obvious reasons, there are no unit prices on things, so you have to set those up to get some idea of the cost of what you're shopping for. But if you do that, you'll get a nice total. Lots of thought went into the various categories, and you can add your own. There's a bit of animation on the category selection that's nice. He seems to have covered all the bases necessary to use this, even shopping 'plans' that can be saved, and emailing of lists. As I said, I'm more of a raw-list person, but if you take the time to set this up, it should work very nicely for you. Grocery Shopping List is by Grocery Shopper and is $0.99 ($1.99 after Feb 1) with a free trial.

    Need for Speed was my 2nd commercial game purchase, and the one I've played the most. I ran the trial, thought it worked great, and bought it. I've had a lot of fun with this... there's no gas pedal... your foot is in the carburetor from the GO!, and unless you wanna tap the screen and brake like a little girl, just hang onto the steering wheel (the phone) and guide your way through. Hours of fun and challenges here. I like this because it's got some challenge to it, and the cars seem to be very realistic in their reactions. Need for Speed Undercover is by Electronic Arts, is $4.99, and has a free trial.

    SurfCube Browser is another app by the folks that did the GuitarTuner I reviewed on Monday. You have to see SurfCube to believe it. You've probably seen the YouTube video; if not, check SilverlightCream number 1017. The app works very solidly, and just as the video demonstrates. I downloaded and tried it, and immediately did 2 things: bought it, and pinned it to my start page. I like this because it's fun to work with, and it works great as a browser. I'm about *this* close to replacing the IE tile on my front page with SurfCube. SurfCube Browser is by Kinabalu Innovation Limited, is $1.99, and has a free trial.

    Coming in with another news app is United Nations News by Justin Angel. This is definitely a news aggregator for 'grown ups'... news, photos, videos, and radio broadcasts from the international community, all in one very slick app. This is an amazingly well-thought-out and complete app. Even better yet, Justin has the code on CodePlex. A very well-done international news aggregator. United Nations News is by Justin Angel and is Free.

    A few disclaimers: Feel free to write me about your app and tell me about it.
    While it would be very cool to receive a whole bunch of xap files to review, at this point, for technical reasons, I'm unable to side-load my device. Since I plan on only doing this one day a week (twice if I find time), and only 5 apps at a time, I may never get caught up, so if you send me some info, be patient. Re: games... remember I'm old... I'm from the era of Colossal Cave and Zork. Duke Nukem 2D and Captain Comic were awesome. I don't own an XBOX or any other game system, so take game reviews from my perspective -- who knows, it may be refreshing :) I won't pay for an app or game just to try it. If you expect me to test-drive your app, it's going to have to have a Free Trial. I'm still playing with the format; comments are welcome. I decided I should alphabetize the list today... so there's no order implied. Let me know what you think of the idea of doing reviews, or the layout/whatever, and Stay in the 'Light!

    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight | Silverlight 3 | Silverlight 4 | Windows Phone | MIX10

    Read the article

  • Changing the Game: Why Oracle is in the IT Operations Management Business

    - by DanKoloski
    Next week, in Orlando, is the annual Gartner IT Operations Management Summit. Oracle is a premier sponsor of this annual event, which brings together IT executives for several days of high-level talks about the state of operational management of enterprise IT. This year, Sushil Kumar, VP Product Strategy and Business Development for Oracle's Systems & Applications Management, will be presenting on the transformation in IT Operations required to support enterprise cloud computing.

    IT Operations transformation is an important subject, because year after year, we hear essentially the same refrain – large enterprises spend an average of two-thirds (67%!) of their IT resources (budget, energy, time, people, etc.) on running the business, with far too little left over to spend on growing and transforming the business (which is what the business actually needs and wants). In the thirtieth year of the distributed computing revolution (give or take, depending on how you count it), it's amazing that we have still not moved the needle on the single biggest component of enterprise IT resource utilization.

    Oracle is in the IT Operations Management business because when management is engineered together with the technology under management, the resulting efficiency gains can be truly staggering. To put it simply – what if you could turn that 67% of IT resources spent on running the business into 50%? Or 40%? Imagine what you could do with those resources. It's now not just possible, but happening.

    This seems like a simple idea, but it is a radical change from "business as usual" in enterprise IT Operations. For the last thirty years, management has been a bolted-on afterthought – we pick and deploy our technology, then figure out how to manage it. This pervasive dysfunction is a broken cycle that guarantees high ongoing operating costs and low agility. If we want to break the cycle, we need to take a more tightly-coupled approach. As a complete applications-to-disk platform provider, Oracle is engineering management together with technology across our stack and hooking that on-premise management up live to My Oracle Support.

    Let's examine the results with just one piece of the Oracle stack – the Oracle Database. Oracle began this journey with Oracle Database 9i many years ago with the introduction of low-impact instrumentation in the database kernel ("tell me what's wrong") and through Database 10g, 11g and 11gR2 has successively added integrated advisory ("tell me how to fix what's wrong") and lifecycle management and automated self-tuning ("fix it for me, and do it on an ongoing basis for all my assets"). When enterprises take advantage of this tight coupling, the results are game-changing.
    Consider the following (for a full list of public references, visit this link):

    * British Telecom improved database provisioning time 1000% (from weeks to minutes), which allows them to provide a new DBaaS service to their internal customers with no additional resources
    * Cerner Corporation saved $9.5 million in CapEx and OpEx AND launched a brand-new cloud business at the same time
    * Vodafone Group plc improved response times 50% and reduced maintenance planning times 50-60% while serving 391 million registered mobile customers

    Or see the recent Database Manageability and Productivity Cost Comparisons: Oracle Database 11g Release 2 vs. SAP Sybase ASE 15.7, Microsoft SQL Server 2008 R2 and IBM DB2 9.7, as conducted by independent analyst firm ORC. In later entries, we'll discuss similar results across other portions of the Oracle stack and how these efficiency gains are required to achieve the agility benefits of Enterprise Cloud.

    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Session Report - Java on the Raspberry Pi

    - by Janice J. Heiss
    At mid-day on Wednesday, the always colorful Oracle Evangelist Simon Ritter demonstrated Java on the Raspberry Pi at his session, "Do You Like Coffee with Your Dessert?". The Raspberry Pi is a credit card-sized single-board computer developed in the UK with the intention of stimulating the teaching of basic computer science in schools. "I don't think there is a single feature that makes the Raspberry Pi significant," observed Ritter, "but a combination of things really makes it stand out. First, it's $35 for what is effectively a completely usable computer. You do have to add a power supply, SD card for storage and maybe a screen, keyboard and mouse, but this is still way cheaper than a typical PC. The choice of an ARM (Advanced RISC Machine and Acorn RISC Machine) processor is noteworthy, because it avoids problems like cooling (no heat sink or fan) and can use a USB power brick. When you add in the enormous community support, it offers a great platform for teaching everyone about computing."

    Some 200 enthusiastic attendees were present at the session, which had the feel of Simon Ritter sharing a fun toy with friends. The main point of the session was to show what Oracle was doing to support Java on the Raspberry Pi in a way that is entertaining and fun. Ritter pointed out that, in addition to being great for teaching, it's an excellent introduction to the ARM architecture; it runs well with Java and will get better once it has official hard float support. The possibilities are vast.

    Ritter explained that the Raspberry Pi project started in 2006 with the goal of devising a computer to inspire children; it drew inspiration from the BBC Micro literacy project of 1981 that produced a series of microcomputers created by the Acorn Computer company. It was officially launched on February 29, 2012, with a first production run of 10,000 boards. There were 100,000 pre-orders in one day; currently about 4,000 boards are produced a day.

    Ritter described the specification as follows:
    * CPU: ARM 11 core running at 700MHz, Broadcom SoC package; can now be overclocked to 1GHz (without breaking the warranty!)
    * Memory: 256Mb
    * I/O: HDMI and composite video; 2 x USB ports (Model B only); Ethernet (Model B only); header pins for GPIO, UART, SPI and I2C

    He took attendees through a brief history of ARM architecture:
    * Acorn BBC Micro (6502 based) – not powerful enough for Acorn's plans for a business computer
    * Berkeley RISC Project – the UNIX kernel only used 30% of the instruction set of the Motorola 68000; more registers, fewer instructions (register windows); one chip architecture to come from this was… SPARC
    * Acorn RISC Machine (ARM) – 32-bit data, 26-bit address space, 27 registers; the first machine was the Acorn Archimedes
    * Spin-off from Acorn: Advanced RISC Machines

    Next he presented its features:
    * 32-bit RISC architecture – ARM accounts for 75% of embedded 32-bit CPUs today; 6.1 billion chips sold last year (zero manufactured by ARM)
    * Abstract architecture and microprocessor core designs – the Raspberry Pi is ARM11 using the ARMv6 instruction set
    * Low power consumption – good for mobile devices; the Raspberry Pi can be powered from a 700mA 5V-only PSU and does not require a heatsink or fan

    He described the current ARM technology:
    * ARMv6 – ARM 11, ARM Cortex-M
    * ARMv7 – ARM Cortex-A, ARM Cortex-M, ARM Cortex-R
    * ARMv8 (announced) – will support 64-bit data and addressing

    He next gave the Java specifics for ARM, starting with floating point operations:
    * Despite being an ARMv6 processor, it does include an FPU – the FPU only became standard as of ARMv7
    * The FPU (Hard Float, or HF) is much faster than a software library
    * Linux distros and the Oracle JVM for ARM assume no HF on ARMv6, so special builds of both are needed – the Raspbian distro build is now available; the Oracle JVM is in the works, release date TBD

    Not so RISC – performance improvements:
    * DSP enhancements
    * Jazelle
    * Thumb / Thumb2 / ThumbEE
    * Floating point (VFP)
    * NEON
    * Security enhancements (TrustZone)

    He spent a few minutes going over the challenges of using Java on the Raspberry Pi, covering sound, vision, serial (TTL UART), USB and GPIO. To implement sound with Java he pointed out:
    * Sound drivers are now included in new distros
    * Java Sound API – remember to add audio to the user's groups; some bits work, others not so much
    * Playing (the right format) WAV file works (see the sketch at the end of this summary)
    * Using MIDI hangs trying to open a synthesizer
    * FreeTTS text-to-speech should work once sound works properly

    He turned to JavaFX on the Raspberry Pi:
    * Currently internal builds only – will be released as a technology preview soon
    * Work involves an optimal implementation of the Prism graphics engine – X11?
    * Once the JavaFX implementation is completed there will be little of concern to developers – it's just Java (WORA)

    He explained the basics of the serial port:
    * The UART provides TTL-level signals (3.3V)
    * RS-232 uses 12V signals
    * Use a MAX3232 chip to convert
    * Use this for access to the serial console

    He summarized his key points: the Raspberry Pi is a very cool (and cheap) computer that is great for teaching, a great introduction to ARM that works very well with Java, and will work better in the future. The opportunities are limitless. For further info, check out the Raspberry Pi User Guide by Eben Upton and Gareth Halfacree. From there, Ritter tried out several fun demos, some of which worked better than others, but all of which were greeted with considerable enthusiasm, support and good humor (even when he ran into some glitches). All in all, this was a fun and lively session.
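    As a concrete illustration of the Java Sound bullet above, here is a minimal sketch of playing a WAV file through the standard javax.sound.sampled API (my own sketch, not code from Ritter's session; the file path is hypothetical, and the WAV must be in a format the Pi's sound stack supports):

    import java.io.File;
    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;

    public class PlayWav {
        public static void main(String[] args) throws Exception {
            // Open the WAV file as an audio stream
            AudioInputStream stream = AudioSystem.getAudioInputStream(new File("/home/pi/test.wav"));
            Clip clip = AudioSystem.getClip();  // obtain a playback line from the default mixer
            clip.open(stream);
            clip.start();                       // playback is asynchronous
            // Block until the clip finishes, since main() would otherwise exit immediately
            Thread.sleep(clip.getMicrosecondLength() / 1000);
            clip.close();
        }
    }

    On a desktop JVM this just works; per the session notes above, on the Pi it depends on the sound drivers in the distro and on the user being in the audio group.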

    Read the article

  • Pace Layering Comes Alive

    - by Tanu Sood
    Rick Beers is Senior Director of Product Management for Oracle Fusion Middleware. Prior to joining Oracle, Rick held a variety of executive operational positions at Corning, Inc. and Bausch & Lomb. With a professional background that includes senior management positions in manufacturing, supply chain and information technology, Rick brings a unique set of experiences to cover the impact that technology can have on business models, processes and organizations. Rick hosts the IT Leaders Editorial on a monthly basis.

    By now, readers of this column are quite familiar with Oracle AppAdvantage, a unified framework of middleware technologies, infrastructure and applications utilizing a pace layered approach to enterprise systems platforms:

    1. Standardize and Consolidate core enterprise applications by removing invasive customizations, costly workarounds and the complexity that multiple instances create.
    2. Move business-specific processes and applications to the Differentiate layer, thus creating greater business agility with process extensions and best-of-breed applications managed by cross-application process orchestration.
    3. The Innovate layer contains all the business capabilities required for engagement, collaboration and intuitive decision making. This is the layer where innovation will occur, as people engage one another in a secure yet open and informed way.
    4. Simplify IT by minimizing complexity, improving performance and lowering cost with secure, reliable and managed systems across the entire enterprise.

    But what hasn't been discussed is the pace layered architecture that Oracle AppAdvantage adopts. What is it, what are its origins and why is it relevant to enterprise-scale applications and technologies? It's actually a fascinating tale that spans the past 20 years, and a basic understanding of it provides a wonderful context for what is evolving as the future of enterprise systems platforms.

    It all begins in 1994 with a book by noted architect Stewart Brand, of 'Whole Earth Catalog' fame. In his 1994 book How Buildings Learn, Brand popularized the term 'shearing layers', arguing that any building is actually a hierarchy of pieces, each of which inherently changes at different rates. In 1997 he produced a 6-part BBC series adapted from the book, in which Part 6 focuses on shearing layers. In this segment Brand begins to introduce the concept of 'pace'. Brand further refined this idea in his subsequent book, The Clock of the Long Now, which began to link the concept of shearing layers to computing and introduced the term 'pace layering', where he proposes that:

    "An imperative emerges: an adaptive [system] has to allow slippage between the differently-paced systems … otherwise the slow systems block the flow of the quick ones and the quick ones tear up the slow ones with their constant change. Embedding the systems together may look efficient at first but over time it is the opposite and destructive as well."

    In 2000, IBM architects Ian Simmonds and David Ing published a paper entitled A Shearing Layers Approach to Information Systems Development, which applied the concept of shearing layers to systems design and development. It argued that the systems of the time were still too rigid; that they constrained organizations by their inability to adapt to changes. The findings in the conclusions section are particularly striking:

    "Our starting motivation was that enterprises need to become more adaptive, and that an aspect of doing that is having adaptable computer systems. The challenge is then to optimize information systems development for change (high maintenance) rather than stability (low maintenance). Our response is to make it explicit within software engineering the notion of shearing layers, and explore it as the principle that systems should be built to be adaptable in response to the qualitatively different rates of change to which they will be subjected. This allows us to separate functions that should legitimately change relatively slowly and at significant cost from that which should be changeable often, quickly and cheaply."

    The problem at the time, of course, was that this vision of adaptable systems was simply not possible within the confines of 1st-generation ERP, which was conceived, designed and developed for standardization and compliance. It wasn't until the maturity of open, standards-based integration, and the middleware innovation that followed, that pace layering became an achievable goal. And Oracle is leading the way. Oracle's AppAdvantage framework makes pace layering come alive by taking a strategic vision 20 years in the making and transforming it into a reality. It allows enterprises to retain and even optimize their existing ERP systems, while wrapping around those ERP systems three layers of capabilities that inherently adapt as needed, at a pace that's optimal for the enterprise.

    Read the article

  • NDepend Evaluation: Part 3

    - by Anthony Trudeau
    NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high code quality. NDepend uses a number of metrics and aggregates the data in pleasing static and active visual reports. My evaluation of NDepend will be broken up into several different parts. In the first part of the evaluation I looked at installing the add-in, and in the last part I went over my first impressions, including an overview of the features. In this installment I provide a little more detail on a few of the features that I really like.

    Dependency Matrix

    The dependency matrix is one of the rich visual components provided with NDepend. At a glance it lets you know where you have coupling problems, including cycles. It does this with a number indicating the weight of the dependency and a color coding that indicates the nature of the dependency. Green and blue cells are direct dependencies (the difference being whether the relationship is from row-to-column or column-to-row). Black cells are the ones that you really want to know about. These indicate that you have a cycle: type A refers to type B and type B also refers to type A.

    But that's not the end of the story. A handy pop-up appears when you hover over the cell in question. It explains the color, the dependency, and provides several interesting links that will teach you more than you want to know about the dependency. You can double-click the problem cells to explode the dependency. That will show the dependencies on a method-by-method basis, allowing you to more easily target and fix the problem. When you're done you can click the back button on the toolbar.

    Dependency Graph

    The dependency graph is another component provided. It's complementary to the dependency matrix, but it isn't as easy to identify dependency issues using this window. On a positive note, it does provide more information than the matrix. My biggest issue with the dependency graph is determining what is shown. This was not readily obvious. I ended up using the navigation buttons to get an acceptable view. I would have liked to choose what I see.

    Once you see the types you want, you can get a decent idea of coupling strength based on the width of the dependency lines. Double-arrowed lines are problematic and are shown in red. The size of the boxes is related to the metric being displayed. This is controlled using the Box Size drop-down in the toolbar. Personally, I don't find the size of the box to be helpful, so I change it to Constant Font.

    One nice thing about the display is that you can see the entire path of dependencies when you hover over a type. This is done by color-coding the dependencies and dependants. It would be nice if selecting the box for the type would lock the highlighting in place. I did find a perhaps unintended work-around to the color coding: you can lock the color coding in by hovering over the type, right-clicking, and then clicking on the canvas area to clear the pop-up menu. You can then do whatever you like with it, including saving it to an image file with the color coding intact.

    CQL

    NDepend uses a code query language (CQL) to work with your code just like it was a database. CQL shouldn't be confused with the robustness of T-SQL or even LINQ, but it represents an impressive attempt at providing an expressive way to enumerate and interrogate your code. There are two main windows you'll use when working with CQL.
    The CQL Query Explorer allows you to define which queries (rules) are run as part of a report – I immediately unselected rules that I don't want in my results. The CQL Query Edit window is where you can view or author your own rules. The explorer window is pretty self-explanatory, so I won't mention it further other than to say that any queries you author will appear in the custom group.

    Authoring your own queries is really hard to screw up. The Intellisense-like pop-ups tell you what you can do while making composition easy. I was able to create a query within two minutes of playing with the editor. My query warns if any types that are interfaces don't start with an "I":

    WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    The results from the CQL Query Edit window are immediate. That fact makes it useful for ad hoc querying. It's worth mentioning two things that could make the experience smoother. First, out of habit from using Visual Studio I expect to be able to scroll and press Tab to select an item in the list (like Intellisense). You have to press Enter when you scroll to the item you want. Second, the commands are case-sensitive. I don't see a really good reason to enforce that.

    CQL has a lot of potential not just in enforcing code quality, but also in enforcing architectural constraints that your enterprise has defined (a hypothetical example of such a rule appears at the end of this post).

    Up Next

    My next update will be the final part of the evaluation. I will summarize my experience and provide my conclusions on the NDepend add-in.

    ** View Part 1 of the Evaluation **
    ** View Part 2 of the Evaluation **

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.
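    As a footnote to the point above about architectural constraints, here is a hypothetical sketch of what such a rule might look like (my own example, not from the NDepend documentation; the namespace and type names are invented, and the exact predicate names may vary between NDepend versions):

    WARN IF Count > 0 IN SELECT TYPES WHERE NameLike "MyCompany.UI" AND IsDirectlyUsing "MyCompany.Data.CustomerRepository"

    The intent is to warn whenever a presentation-layer type reaches directly into a data-access type, bypassing whatever service layer the architecture prescribes.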

    Read the article

  • Webcor Builders Coordinates Construction Schedules and Mitigates Potential Delays More Efficiently with Integrated Project Management

    - by Sylvie MacKenzie, PMP
    With more than 40 years of commercial construction experience, Webcor Builders is a leading builder of distinguished, high-profile projects, including high-rise condominiums and hotels, laboratories, healthcare centers, and public works projects. Webcor is also known for its award-winning concrete, interior construction, historic restoration, and seismic renovation work. The company has completed more than 50 million square feet of projects to date.

    Considering the variety and complexity of the construction projects Webcor undertakes, an integrated project management solution is critical to ensuring optimal efficiency and completing client projects on time and on budget. The company previously used a number of scheduling systems for its various building projects. These packages provided different levels of schedule detail and required schedulers, engineers, and other employees to learn multiple systems. From an IT cost and complexity perspective, the company had to manage multiple scheduling systems and pay for multiple sets of licenses. The company looked to standardize on an enterprise project management system, and selected Oracle's Primavera P6 Enterprise Project Portfolio Management.

    Webcor uses the solution's advanced capabilities to schedule complex projects, analyze delays, model and propose multiple scenarios to demonstrate and mitigate delays and cost overruns, and process that information efficiently to deliver the scheduling precision that public and private projects require. In fact, the solution was instrumental in helping the company's expansion into public sector projects during the recent economic downturn; with Primavera P6 in place, the company can deliver the precise scheduling and milestone reporting required for large public projects.

    The solution is now in use managing the high-profile University of California, Berkeley Memorial Stadium project. Webcor was hired as construction manager and general contractor for the stadium renovation project, a fast-paced project located near the seismically active Hayward Fault Zone. Due to the University of California's football schedule, meeting the university's deadline for the coming season placed Webcor in a situation where risk awareness and early warnings of issues would be paramount. Webcor and the extended project team needed a solution that could instantly analyze alternate scenarios to mitigate potential delays; Primavera would deliver those answers. The team would also need to enable multiple stakeholders to use an internet-based platform to access the schedule from various locations, and model complicated sequencing requirements where swift decisions would be made to keep the project on track. The schedule is an integral part of Webcor's construction management process for the stadium project.

    Rather than providing the client with the industry-standard monthly update, Webcor updates the critical path method (CPM) schedule on a weekly basis. The project team also reviews and updates the schedule weekly to confirm that progress and forecasted performance are accurate.

    Hired by the university for its ability to deliver in high-risk environments, the Webcor team was hit recently with a design supplement that could have added up to 70 days to the project. Using Oracle Primavera P6, the team sprang into action, analyzing multiple "what if" scenarios to review mitigation means and methods. Determined to make sure the Bears could take the field in the coming season, the project team nearly eliminated the impact with their creative analysis in working the schedule. The total time from the issuance of the final design supplement to an agreed mitigation response was less than one week; leveraging the Oracle Primavera solution, Webcor was able to deliver superior customer value.

    With the ability to efficiently manage projects and schedules, Webcor can ensure it completes its projects on time and on budget, as well as inform clients about what changes to plans will mean in terms of delays and additional costs.

    Read the complete customer case study at: http://www.oracle.com/us/corporate/customers/customersearch/webcor-builders-1-primavera-ss-1639886.html

    Read the article

  • Where would a spam bot be located?

    - by Tim
    I have a hosted website using a free hosting service. I received an email this afternoon saying that I have been suspended because my account has been compromised. Basically, someone is using my email account to mass-send spam. I've changed all the passwords and everything, but when my Gmail pulls the emails from the host it's still downloading loads of spam bounce messages that look like this:

    This message was created automatically by mail delivery software. A message that you sent could not be delivered to one or more of its recipients. This is a permanent error. The following address(es) failed: [email protected] SMTP error from remote mail server after end of data: host 198.91.80.251 [198.91.80.251]: 554 5.6.0 id=23634-03 - Rejected by MTA on relaying, from MTA([127.0.0.1]:10030): 554 Error: This email address has lost rights to send email from the system

    ------ This is a copy of the message, including all the headers. ------
    Return-path: <[email protected]>
    Received: from keenesystems.com ([66.135.33.211]:2370 helo=server211) by absolut.x10hosting.com with esmtpsa (TLSv1:RC4-MD5:128) (Exim 4.77) (envelope-from <[email protected]>) id 1TGwSW-002hHe-Lc for [email protected]; Wed, 26 Sep 2012 13:35:44 -0500
    MIME-Version: 1.0
    Date: Wed, 26 Sep 2012 13:35:43 -0500
    X-Priority: 3 (Normal)
    X-Mailer: Ximian Evolution 3.9.9 (8.5.3-6)
    Subject: New staff members wanted at Auction It Online
    From: [email protected]
    Reply-To: [email protected]
    To: "Nadia Monti" <[email protected]>
    Content-Type: text/plain
    Content-Transfer-Encoding: quoted-printable
    Message-ID: <OUTLOOK-IDM-9aed7054-6a3e-e1a4-1d5c-3e73377652a6@server211>

    Date : 26 September 2012
    Time : 13:35
    Sender : Dennise Halcomb, Head Office Manager of RJ Auction Drop-Off Int.

    Nice to meet you Nadia Monti

    RJ ADO Ltd., a USA based company, offers a significant amount of goods worldwide for our customers on eBay and other auction venues. Our company's main target is to provide a suitable and cost-effective service for any person, company or fundraising company. The main purpose of the administrative assistant / sales support representative is to contribute to the sales force and add convenience to our cost-effective service dedicated to individuals, businesses, and organizations worldwide. Our HR department obtained your resume from one of the various job-oriented websites just to offer you this post.

    Working Schedule: This is a part time and home-based offer. You won't need to spend more than 3 hours each day. Your schedule will be flexible.

    Salary: At the end of the trial period (it lasts for 1 month) you will be paid 1,800 EUR. With the average volume of clients your overall income will raise up to 3,000 EUR per month. After the trial period is over your base salary will grow up to 2,500 EUR per month, so you will earn 5% commission from the transactions completed.

    Where?: Italy Wide. As it is a stay at home position all the communication will be carried out via email and via phone.

    Requirements: Access to the internet during the workday and basic microsoft office skills are needed. Basic knowledge of English is required (most of the contacts will be in English).

    Costs and Fees: There are NO costs at any time for our employees. All fees related to this position are covered by the RJ ADO Co. Ltd..

    Further Hiring Process: If you are interested in position we offer, please reply to this email and send us the copy of your resume for verification.

    After reviewing all of the received applications we will reply to successful applicants only. Then we'll offer to these successful applicants a position within our firm on a trial period basis for one month beginning from the date you sign a trial agreement. During this trial period you will receive full guidance and support. Employees on a one monthly trial period are evaluated at least one week prior to the end of their trial. During the trial, your supervisor can recommend termination. At the end of the trial period, the supervisor can offer continued employment, extension of trial period, or termination. After the trial period you may ask for more hours or continue full-time.

    If you are interested in this position, just reply to this email and send any questions you have and the copy of your resume for verification.

    Thank You,
    HR-Manager of RJ ADO Co. Ltd.

    Permission Settings
    You have been referred to RJ Auction Drop-Off If you feel you received this email in error or do not wish to receive future messages, please reply to this message with "remove" in the subject field. We will immediately update our database accordingly. We apologize for any inconvenience caused.

    RJ Auction Drop-Off Co. Ltd.

    I'm not aware of how this has happened, and I'm not sure how anyone could have gotten hold of my password. It's a simple WordPress install; at some point recently my host went down and there was a fresh install of WordPress with default admin accounts, and I have a feeling it could be something to do with this. My question is: even though I've changed all my passwords, it's all still happening. Is there anywhere in particular this script would be stored on my host? I really can't deal with having my hosting account suspended and my email account sending all this spam.

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here's a very simple example table to illustrate the issue:

    Customers
        CustomerId INT NOT NULL, Primary Key
        CustomerName nvarchar(100) NOT NULL
        SalesRegionId INT NULL

    The 'SalesRegionId' column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

    CustomerId  CustomerName  SalesRegionId
    1           Customer A    1
    2           Customer B    NULL
    3           Customer C    4
    4           Customer D    2
    5           Customer E    3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

    SELECT
        CustomerId,
        CustomerName,
        SalesRegionId
    FROM Customers
    WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here's what this query will return given the example data we're working with:

    CustomerId  CustomerName  SalesRegionId
    1           Customer A    1
    5           Customer E    3

    I was expecting that this query would also return 'Customer B', since that customer has a NULL SalesRegionId. In my mind, a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn't suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should.

    So why doesn't this work the way that I thought it should? From the MSDN documentation on the T-SQL IN operator:

    "If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression."

    The key phrase out of that quote is, "… is equal to any expression from the comma-separated list…". The NULL SalesRegionId isn't included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

    "The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name."

    In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator:

    "Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results."
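    A quick way to convince yourself of that behavior is to expand the predicate by hand (this snippet is my own illustration, not from the MSDN docs): for 'Customer B' the WHERE clause becomes NULL <> 2 AND NULL <> 4, and each of those comparisons evaluates to UNKNOWN rather than TRUE, so the row is filtered out:

    -- Both of these SELECTs print 'not true', because a comparison
    -- against NULL evaluates to UNKNOWN, never TRUE (or FALSE)
    SELECT CASE WHEN NULL <> 2 THEN 'true' ELSE 'not true' END;
    SELECT CASE WHEN NULL <> 2 AND NULL <> 4 THEN 'true' ELSE 'not true' END;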
    If I were to include a 'SET ANSI_NULLS OFF' command right above my SELECT statement I would get 'Customer B' returned in the results, but that's definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

    SELECT
        CustomerId,
        CustomerName,
        SalesRegionId
    FROM Customers
    WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes 'Customer B' in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we'd end up with:

    DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
    INSERT @regionsToIgnore VALUES (2),(4)

    SELECT
        c.CustomerId,
        c.CustomerName,
        c.SalesRegionId
    FROM Customers c
    LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
    WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also will correctly include any customers that do not yet have a sales region.

    Read the article

  • Data Binding to Attached Properties

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2013/06/14/data-binding-to-attached-properties.aspx

    When I was working on my C#/XAML game framework, I discovered I wanted to try to data bind my sprites to background objects. That way, I could update my objects and the draw functionality would take care of the work for me. After a little experimenting and web searching, it appeared this concept was an impossible dream. Of course, when has that ever stopped me? In my typical way, I started to massively dive down the rabbit hole. I created a sprite on a canvas, and I bound it to a background object.

    <Canvas Name="GameField" Background="Black">
        <Image Name="PlayerSprite" Source="Assets/Ship.png" Width="50" Height="50"
               Canvas.Left="{Binding X}" Canvas.Top="{Binding Y}"/>
    </Canvas>

    Now, we wire the UI item to the background item.

    public MainPage() {
        this.InitializeComponent();
        this.Loaded += StartGame;
    }

    void StartGame( object sender, RoutedEventArgs e ) {
        BindingPlayer _Player = new BindingPlayer();
        _Player.Y = Window.Current.Bounds.Height - PlayerSprite.Height;
        _Player.X = ( Window.Current.Bounds.Width - PlayerSprite.Width ) / 2.0;
    }

    Of course, now we need to actually have our background object.

    public class BindingPlayer : INotifyPropertyChanged {
        private double m_X;
        public double X {
            get { return m_X; }
            set { m_X = value; NotifyPropertyChanged(); }
        }

        private double m_Y;
        public double Y {
            get { return m_Y; }
            set { m_Y = value; NotifyPropertyChanged(); }
        }

        public event PropertyChangedEventHandler PropertyChanged;
        protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) {
            if( PropertyChanged != null )
                PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
        }
    }

    I fired this baby up, and my sprite was correctly positioned on the screen. Maybe the sky wasn't falling after all. Wouldn't it be great if that was the case? I created some code to allow me to move the sprite, but nothing happened. This seemed odd. So, I started debugging the application and stepping through code. Everything appeared to be working. Time to dig a little deeper.

    After much profanity was spewed, I stumbled upon a breakthrough. The code only looked like it was working. What was really happening is that there was an exception being thrown in the background thread that I never saw. Apparently, the key call was the one to PropertyChanged. If PropertyChanged is not called on the UI thread, the UI thread ignores the call. Actually, it throws an exception and the background thread silently crashes. Of course, you'll never see this unless you're looking REALLY carefully.

    This seemed to be a simple problem. I just need to marshal this to the UI thread. Unfortunately, this object has no knowledge of this mythical UI thread of which we speak. So, I had to pull the UI thread out of thin air. Let's change our PropertyChanged call to look like this.

    public event PropertyChangedEventHandler PropertyChanged;
    protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) {
        if( PropertyChanged != null )
            Windows.ApplicationModel.Core.CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
                Windows.UI.Core.CoreDispatcherPriority.Normal,
                new Windows.UI.Core.DispatchedHandler( () => {
                    PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
                } ) );
    }

    Now, we raise our notification on the UI thread. Everything is fine, people are happy, and the world moves on. You may have noticed that I didn't await my call to the dispatcher.
    This was intentional. If I am trying to update a slew of sprites, I don't want the thread hung while I wait my turn. Thus, I send the message and move on. It is worth noting that this is NOT the most efficient way to do this for game programming. We'll get to that in another blog post. However, it is perfectly acceptable for a business app that is running a background task that would like to notify the UI thread of progress on a periodic basis. It is also worth noting that this code was written for a Windows Store app. You can do the same thing with WP8 and WPF. The call to the marshaler changes, but it is the same idea.
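    For the WPF case, the marshaling might look something like the following sketch (my own, not from the original post; in WPF the UI thread's dispatcher is exposed through Application.Current.Dispatcher rather than CoreWindow):

    public event PropertyChangedEventHandler PropertyChanged;
    protected void NotifyPropertyChanged( [CallerMemberName] string p_PropertyName = null ) {
        if( PropertyChanged != null )
            // BeginInvoke posts the call to the UI thread without blocking the caller,
            // mirroring the un-awaited RunAsync call in the Windows Store version above
            System.Windows.Application.Current.Dispatcher.BeginInvoke( new Action( () => {
                PropertyChanged( this, new PropertyChangedEventArgs( p_PropertyName ) );
            } ) );
    }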

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx

    [This post is about an experiment and some imagination.]

    Since Windows XP (the last OS I tried it on; it existed even earlier), I have seen a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too; it was not removed), just select the file, right-click > Send to, and you can send the file to many places. In my menu it shows me Skype because I have installed it. That Skype entry confirms that we can add our own app here to make it easier for the user to send files to our app.

    Nowadays many people use the cloud or an online site to store their files. With HTML5 drag and drop, you need to have the site open on the page that handles file upload. You select everything and drag and drop; after that, the file is simply uploaded to the server and the site shows it in a list (if no error happens). But this kind of file upload is seriously inconvenient, since I have to open the site every time I do this operation.

    Through this post I want to describe a feature that could make this better. The API is simply called the SASS FILE UPLOAD API. Through this API, when you surf a site and come to its file upload page, the page will tell you that it also has SASS FILE API support, and you can enable it for a better experience.

    How this works

    The API feature is activated on 2 conditions:
    1. The feature is disabled by default on the site (or you can change that if it's not).
    2. The API allows specific sites to upload files. File uploads may have rules, for example the minimum or maximum size of files to be uploaded, or which formats the site allows you to upload. In the case of a resume site you would be allowed to use .doc (according to the code of the site).

    How the browser recognizes that a site has the SASS service

    In the HTML source of the site, the code has a meta tag similar to this:

    <meta name="sass-upload-api" path="/upload.json"/>

    Remember that upload.json is a file that defines the values of many settings:

    {
        "cookie_name": "ck_file",
        "maximum_allowed_perday": 24,
        "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
        "method": [
            { "get": "file/get", "routing": "/file/get/{fileName}" },
            { "post": "file/post", "routing": "/file/post/{fileName}" },
            { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
            { "put": "file/put", "routing": "/file/put/{fileName}" },
            { "all": "file/all", "routing": "/file/all/{fileName}" }
        ]
    }

    cookie_name is simply a cookie which should be stored in the browser and defined in the JSON. We define the cookie_name so we can easily share it with the service in the Windows system. Only this cookie will be accessible to the service, so it's safe from a security standpoint; other cookies will not be shared.

    Files will be posted, put, and fetched from these locations. The "all" location simply shows the whole list of files; it will give a treeview-like JSON describing the directories on the server.

    For example, take example.com: if you have activated the API with this site, then you will see a Send to option in your explorer.exe. When you send, you will get a window asking which folder you want to use to send the file. The window will also describe the limits and how much you can upload. This never requires the site to be open. When you upload the file, it will be uploaded through the FTP protocol, which is better for performance.

    How this API makes things faster

    Suppose you want to ask a question and want to post an image. You just upload it and have it ready; when you open stackoverflow.com, Stack Overflow will only ask which file you want to attach to the question you are asking. A second use is for people who use cloud apps.

    There is no need for drag and drop anymore. We can upload without drag and drop, and we don't need to open the site either. This is still at the experiment level. I will update this post when I make some progress on this API.

    Read the article

  • Trouble with Samba Domain

    - by Arkevius
    I'm having a bit of trouble setting up this Samba domain correctly. I'm getting an Access Denied error when trying to add a Windows XP machine to the domain. I'll go through my scenario in detail, but for those of you wanting a TL;DR summary, it'll be at the bottom of this post.

    I have an HP ProLiant server with Ubuntu 12.04 LTS installed. For this particular environment, I need this server to act as a PDC, file server, and print server. I began by updating and upgrading the packages (of course), then went on to install samba, gnome-desktop, wine, and cpanm. Samba was, of course, for the PDC and file/print services. The GUI was needed because a certain piece of software has to be installed on there that needs a GUI. Wine was needed because the software is Windows-native. And cpanm was for a perl script I have running.

    For Samba, I went into the smb.conf file and enabled domain logons, changed the workgroup/domain name, set the logon script on a per-group basis (netlogon/%g), enabled the netlogon and profiles shares, and set up a couple of custom shares for the file service. The printer was added later, and seems to be working just fine. I then restarted the services, and used the net groupmap command to ensure my unix groups were mapped correctly to the Windows groups. After this, I went to a Windows box and was able to successfully join the domain without a problem.

    After some fidgeting with the software to get it running on the Windows boxes from the server (it's a records management system program, which stores its database files on the server), I went to add another computer to the domain. But now it's saying Access Denied. Before, when I had this trouble, it was because I forgot to add the group "machines" so Samba could create machine accounts. Thinking this was the case, I manually created the machine account to test this theory. However, it would still give me an Access Denied error. That must mean it has something to do with permissions now, correct?

    I've been fighting with this server for the past two weeks. If it's not one thing that's wrong, then it's something else completely different. This would be the third time I've actually reinstalled everything to start over. I'll post snippets of my system settings below. If anything else is needed, just say the word and I'll gather up the info. The unix group 'domadmin' is the Domain Admins group.

    Samba Administrator account:
    administrator:x:1000:1000:Administrator,,,:/home/administrator:/bin/bash

    Administrator's groups:
    administrator adm cdrom sudo dip plugdev lpadmin sambashare domadmin crimestar

    Samba's configuration file (a snippet anyways):

    [global]
    workgroup = CITYPD
    server string = BPDServer
    dns proxy = no
    log file = /var/log/samba/log.%m
    max log size = 1000
    syslog = 0
    panic action = /usr/share/samba/panic-action %d
    security = user
    encrypt passwords = true
    passdb backend = tdbsam
    obey pam restrictions = yes
    unix password sync = yes
    passwd program = /usr/bin/passwd %u
    passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
    pam password change = yes
    map to guest = bad user
    domain logons = yes
    logon path = \\%L\srv\samba\profiles\%U
    logon script = logon.bat
    add machine script = /usr/sbin/useradd -g machines -c "%u machine account" -d /var/lib/samba -s /bin/false %u
    domain master = yes
    usershare allow guests = yes

    [netlogon]
    comment = Network Logon Service
    path = /srv/samba/netlogon/%g
    guest ok = yes
    read only = yes
    browseable = no

    [profiles]
    comment = All Printers
    browseable = no
    path = /var/spool/samba
    printable = yes
    guest ok = no
    read only = yes
    create mask = 0700

    [print$]
    comment = Printer Drivers
    path = /var/lib/samba/printers
    browseable = yes
    read only = yes
    guest ok = no
    write list = root, @lpadmin

    [crimestar]
    comment = "Crimestar DB"
    path = /srv/crimestar/db
    valid users = @domadmin, @crimestar
    admin users = administrator
    writeable = yes
    guest ok = no
    browseable = no
    create mask = 0666
    directory mask = 0777

    [crimestarfiles]
    path = /home/administrator/.wine/drive_c/crimestar
    admin users = administrator
    browseable = yes

    ls -la on /srv/samba/profiles:
    drwxrwxrwx 2 root machines 4096 Nov 21 15:27 .
    drwxr-xr-x 4 root root     4096 Nov 21 15:28 ..

    ls -la on /srv/samba/netlogon:
    drwxr-xr-x 6 root root 4096 Nov 21 15:30 .
    drwxr-xr-x 4 root root 4096 Nov 21 15:28 ..
    drwxr-xr-x 2 root root 4096 Nov 21 15:30 crimestar
    drwxr-xr-x 2 root root 4096 Nov 21 18:13 domadmin
    drwxr-xr-x 3 root root 4096 Nov 21 15:30 guests
    drwxr-xr-x 2 root root 4096 Nov 21 15:29 users

    Groupmap list:
    Domain Users (S-1-5-21-2978508755-2341913247-928297747-513) -> users
    Domain Admins (S-1-5-21-2978508755-2341913247-928297747-512) -> domadmin
    Domain Guests (S-1-5-21-2978508755-2341913247-928297747-514) -> nogroup

    TL;DR: I'm getting an Access Denied error message while trying to join a Windows box to a Samba domain, even after I successfully joined another computer without a problem. System settings / files are quoted above. Anyone have any ideas or suggestions?

    Read the article

  • Lessons learnt in implementing Scrum in a Large Organization that has traditional values

    - by MarkPearl
    I recently had the experience of being involved in a “test” scrum implementation in a large organization that was used to a traditional project management approach. Here are some lessons that I learnt from it. Don’t let the Project Manager be the Product Owner First lesson learnt is to identify the correct product owner – in this instance the product manager assumed the role of the product owner which was a mistake. The product owner is the one who has the most to loose if the project fails. With a methodology that advocates removing the role of the project manager from the process then it is not in the interests of the person who is employed as a project manager to be the product owner – in fact they have the most to gain should the project fail. Know the time commitments of team members to the Project Second lesson learnt is to get a firm time commitment of the members on a team for the sprint and to hold them to it. In this project instance many of the issues we faced were with team members having to double up on supporting existing projects/systems and the scrum project. In many situations they just didn’t get round to doing any work on the scrum project for several days while they tried to meet other commitments. Initially this was not made transparent to the team – in stand up team members would say that had done some work but would be very vague on how much time they had actually spent using the blackhole of their other legacy projects as an excuse – putting up a time burn down chart made time allocations transparent and easy to hold the team to. In addition, how can you plan for a sprint without knowing the actual time available of the members – when I mean actual time, the exercise of getting them to go through all their appointments and lunch times and breaks and removing them from their time commitment helps get you to a realistic time that they can dedicate. Make sure you meet your minimum team sizes In a recent post I wrote about the difference between a partnership and a team. If you are going to do scrum in a large organization make sure you have a minimum team size of at least 3 developers. My experience with larger organizations is that people have a tendency to be sick more, take more leave and generally not be around – if you have a team size of two it is so easy to loose momentum on the project – the more people you have in the team (up to about 9) the more the momentum the project will have when people are not around. Swapping from one methodology to another can seem as waste to the customer It sounds bad, but most customers don’t care what methodology you use. Often they have bought into the “big plan upfront”. If you can, avoid taking a project on midstream from a traditional approach unless the customer has not bought into the process – with this particular project they had a detailed upfront planning breakaway with the customer using the traditional approach and then before the project started we moved onto a scrum implementation – this seemed as waste to the customer. We should have managed the customers expectation properly. Don’t play the role of the scrum master if you can’t be the scrum master With this particular implementation I was the “scrum master”. But all I did was go through the process of the formal meetings of scrum – I attended stand up, retrospectives and planning – but I was not hands on the ground. I was not performing the most important role of removing blockages – and by the end of the project there were a number of blockages “cropping up”. 
A better approach would have been to take someone on the team, train them to be the scrum master and be present to coach them. Alternatively, actually be on the team on a fulltime basis and be the scrum master. Just going through the meetings of scrum didn’t mean we were doing scrum.

So we failed with this one; if you fail, look at it from an agile perspective

As this particular project drew to a close and it became more and more apparent that it was not going to succeed, the failure became depressing. Various people on the team expressed emotions that were not encouraging and that reinforced the failure. Embracing the failure and looking at it for what it is, instead of taking it as the end of the world, can change how you grow from the experience. Acknowledging that it failed and then focussing on learning from why, and on how to avoid failure in the future, can change how you feel emotionally about the team, the project and the organization.

    Read the article

  • Content in Context: The right medicine for your business applications

    - by Lance Shaw
    For many of you, your companies have already invested in a number of applications that are critical to the way your business is run. HR, Payroll, Legal, Accounts Payable – and while they might need an upgrade in some cases, they are all there, handling the lifeblood of your business. But are they really running as efficiently as they could be? For many companies, the answer is no. The problem has to do with the important information caught up within documents and paper. It’s everywhere except where it truly needs to be – readily available right within the context of the application itself. When the right information cannot be easily found, business processes suffer significantly.

The importance of this struck me when I recently went to meet my new doctor and get a routine physical. Walking into the office lobby, I couldn't help but notice rows and rows of manila folders in racks from floor to ceiling, filled with documents and sensitive, personal information about various patients like myself. As I looked at all that paper and all that history, two things immediately popped into my head: “How do they find anything?” and then the even more alarming, “So much for information security!” It sure looked to me like all those documents could be accessed by anyone with a key to the building. Now the truth is that the offices of many general practitioners look like this all over the United States and the world. But it had me thinking: is the same thing going on in just about any company around the world, involving a wide variety of important business processes? Probably so.

Think about all the various processes going on in your company right now. Invoice payments are being processed through Accounts Payable, contracts are being reviewed by Procurement, and Human Resources is reviewing job candidate submissions and doing background checks. All of these processes, and many more like them, rely on access to forms and documents, whether they are paper or digital. Now consider that employees are estimated to spend nearly 9 hours a week searching for information and not finding it. That is a lot of very well paid employees spending more than one day per week not doing their regular job while they search for, or re-create, what already exists.

Back in the doctor’s office, I saw this trend exemplified as well. First, I had to fill out a new patient form, even though my previous doctor had transferred my records over months previously. After filling out the form, I was later introduced to my new doctor, who then interviewed me and asked me the exact same questions that I had answered on the form. I understand that there is value in the interview process, and it was great to meet my new doctor, but this simple process could have been so much more efficient if the information already on file could have been brought directly together with the new patient information I had provided. Instead of having a highly paid medical professional re-enter the same information into the records database, the form I filled out could have been immediately scanned into the system, associated with my previous information, discrepancies identified, and the entire process streamlined significantly.
We won’t solve the health records management issues that exist in the United States in this blog post, but this example illustrates how the automation of information capture and classification can eliminate a lot of repetitive and costly human entry and re-creation, even in a simple process like new patient on-boarding. In a similar fashion, by taking a fresh look at the various processes in place today in your organization, you can likely spot points along the way where automating the capture of, and access to, the right information could improve things significantly.

As you evaluate how content flows through the processes in your organization, take a look at how departments and regions share information between the applications they are using. Business applications are often implemented on an individual department basis to solve specific problems, but a holistic approach to overall information management is not taken at the same time. The end result over the years is disparate applications with separate information repositories, and in many cases these contain duplicate information – or worse, slightly different versions of the same information.

This is where Oracle WebCenter Content comes into the story. More and more companies are realizing that they can significantly improve their existing application processes by automating the capture of paper, forms and other content. This makes the right information immediately accessible in the context of the business process and across departmental systems, which has helped many organizations realize significant cost savings. Here on the Oracle WebCenter team, one of our primary goals is to help customers find new ways to be more effective, more cost-efficient and manage information as effectively as possible. We have a series of three webcasts occurring over the next few weeks that are focused on the integration of enterprise content management within the context of business applications. We hope you will join us for one or all three and that you will find them informative. Click here to learn more about these sessions and to register for them.

There are many aspects of information management to consider as you look at integrating content management within your business applications. We've barely scratched the surface here, but look for upcoming blog posts where we will discuss more specifics on the value of delivering documents, forms and images directly within applications like Oracle E-Business Suite, PeopleSoft Enterprise, JD Edwards EnterpriseOne, Siebel CRM and many others. What do you think? Are your important business processes as healthy as they can be? Do you have any insights to share on the value of delivering content directly within critical business processes? Please post a comment and let us know the value you have realized, the lessons learned and what specific areas you are interested in.

    Read the article

  • Joining on NULLs

    - by Dave Ballantyne
    A problem I see on a fairly regular basis is that of dealing with NULL values. Specifically here, where we are joining two tables on two columns, one of which is ‘optional’, i.e. is nullable. So something like this: a lookup where all the columns are equal, even when NULL. NULLs are a tricky thing to initially wrap your mind around. Statements like “NULL is not equal to NULL and neither is it not not equal to NULL, it’s NULL” can cause a serious brain freeze and leave you a gibbering wreck and needing your mummy.

Before we plod on, time to set up some data to demo against:

Create table #SourceTable
(
    Id integer not null,
    SubId integer null,
    AnotherCol char(255) not null
)
go
create unique clustered index idxSourceTable on #SourceTable(id,subID)
go
with cteNums as
(
    select top(1000) number
    from master..spt_values
    where type = 'P'
)
insert into #SourceTable
select Num1.number, nullif(Num2.number,0), 'SomeJunk'
from cteNums num1
cross join cteNums num2
go
Create table #LookupTable
(
    Id integer not null,
    SubID integer null
)
go
insert into #LookupTable
select top(100) id, subid
from #SourceTable
where subid is not null
order by newid()
go
insert into #LookupTable
select top(3) id, subid
from #SourceTable
where subid is null
order by newid()

If that has run correctly, you will have 1 million rows in #SourceTable and 103 rows in #LookupTable. We now want to join one to the other.

First attempt – let's just join:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and #LookupTable.SubID = #SourceTable.SubID

OK, that’s a fail. We got 100 rows back; we didn’t correctly account for the 3 rows that have null values. Remember, NULL <> NULL, and the join clause specifies SUBID = SUBID, which for those rows is not true.

Second attempt – let's deal with those pesky NULLs:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and isnull(#LookupTable.SubID,0) = isnull(#SourceTable.SubID,0)

OK, that’s the right result – well done, and 99.9% of the time that is where it’s left. It is a relatively trivial CPU overhead to wrap ISNULL around both columns and compare that result, so no problems. But although that’s true, this is a relational database we are using here, not a procedural language. SQL is a declarative language: we are making a request to the engine to get the results we want, and how we ask for them can make a ton of difference.

Let's look at the plan for our second attempt, specifically the clustered index seek on #SourceTable. There are two predicates, the ‘seek predicate’ and the ‘predicate’. The ‘seek predicate’ describes how SQL Server has been able to use an index. Here, it has been able to navigate the index to resolve where ID = ID. So far so good, but what about the ‘predicate’ (aka residual probe)? This is a row-by-row operation. For each row found in the index matching the seek predicate, the leaf-level nodes have been scanned and tested using this logical condition. In this example [Expr1007] is the result of the IsNull operation on #LookupTable, and that is tested for equality with the IsNull operation on #SourceTable. This residual probe is quite a high overhead; it would be better if we could express our statement slightly differently, take full advantage of the index and make the test part of the ‘seek predicate’.
Third attempt – X is null and Y is null

So, let's state the query in a slightly different manner:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and ( #LookupTable.SubID = #SourceTable.SubID
                       or (#LookupTable.SubID is null and #SourceTable.SubId is null)
                     )

It's slightly wordier and may not be as clear in its intent to the human reader – that is what comments are for – but the key point is that it is now clearer to the query optimizer what our intention is. Look at the plan for that query, again specifically the index seek operation on #SourceTable: no ‘predicate’, just a ‘Seek Predicate’ against the index to resolve both ID and SubID. A subtle difference that can be easily overlooked. But has it made a difference to the performance? Well, yes – a perhaps surprisingly large one. Clever query optimizer; well done.

If you are using a scalar function on a column, you are pretty much guaranteeing that a residual probe will be used. By re-wording the query you may well be able to avoid this and use the index completely to resolve lookups. In terms of performance and scalability, your system will be in a much better position if you can.
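As a side note (this alternative is not from the original post): on SQL Server 2005 and later, the same NULL-safe comparison can be expressed with the EXISTS/INTERSECT pattern, since INTERSECT treats two NULLs as equal. A sketch against the same temp tables – worth checking the plan yourself, as it typically also resolves to a pure seek predicate:

select *
from #SourceTable
join #LookupTable on #LookupTable.id = #SourceTable.id
                 and exists ( select #LookupTable.SubID
                              intersect
                              select #SourceTable.SubID )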

    Read the article

  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft, with two separate launch events. The first was for Windows 8 and all of its hardware partners. The second was specifically to introduce the Microsoft Surface tablet. Below are some of the take-aways I got from the webcasts.

Windows 8 Launch

The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store, and the new devices available from its hardware partners.

The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades. I can’t say that this interested me that much, since it was already known to most people. I think what they did show well was how easy the OS really is to use.

The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned Windows Phone 7 for the past 2 years. What was interesting is that the Windows Store launches with more apps available than any other platform’s store had at its respective launch. I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available. They of course were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors.

They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free. Couple this with the Bing suite of apps that gives you news, weather, sports and finance right out of the box, and I think most people will find the environment a joy to use.

I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE! They made a statement that over 1000 devices have been certified for Windows 8. They showed tablets, laptops, desktops, all-in-ones and convertibles. Since these devices have industry-standard connectors, they support a much wider variety of accessories and devices that you can use with them.

Steve Ballmer then came on stage and tried to see how many times he could use the word “magical”. He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com. He also reinforced that they think Windows 8 is the best choice for the enterprise when it comes to protecting data and integrating across devices, including Windows Phone 8. With that we were left to wait for the second event of the day.

Surface Launch

The second event of the day started with kids with magnets. OK, they were adults, but who doesn’t like playing with magnets? Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself. It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors.

They then went on to give us the details on the display. The 10.6” display is optically bonded to the case and is optimized to reduce glare. I think this came through very well in the demonstrations.

The properties of the case were also a great selling point. The VaporMg casing allowed them to drop the device on stage, on purpose, and continue working. Of course they had to bring out the skateboards made from Surface devices.

“It just has to feel right” was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together. While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs.
This included making sure that the magnets were strong enough to hold the cover on and still let a 3-year-old remove the cover without effort. I am glad that they also decided that a USB port would be part of the spec, since it gives so many options. They made the point that this allows Surface to leverage over 420 million existing devices. That works for me. The last feature that I really thought was important was the microSD port. Being stuck with the onboard memory has been an aggravation of mine with many of the devices in the market today.

I think they did a good job of really getting the audience to understand why you would want this platform and this particular device. They used personal examples, like creating a video of a birthday party and being in it, or the fact that the device was being used to live blog the event and control the lights and presentation. They showed very well that it is not only fun but very capable of getting real work done. Handing out tablets to the crowd didn’t hurt either. In the end I really wanted a Surface, even though I really have no need for one on a daily basis. Great job Microsoft!

del.icio.us Tags: Windows 8,Win8,Windows 8 Launch

    Read the article

  • Exploring TCP throughput with DTrace (2)

    - by user12820842
    Last time, I described how we can use the overlap in distributions of unacknowledged byte counts and send window to determine whether the peer's receive window may be too small, limiting throughput. Let's combine that comparison with a comparison of congestion window and slow start threshold, all on a per-port/per-client basis. This will help us:

- Identify whether the congestion window or the receive window are limiting factors on throughput, by comparing the distributions of congestion window and send window values to the distribution of outstanding (unacked) bytes. This gives us a visual sense of how often we are thwarted in our attempts to fill the pipe by congestion control versus the peer not being able to receive any more data.

- Identify whether slow start or congestion avoidance predominates, by comparing the overlap in the congestion window and slow start threshold distributions. If the slow start threshold distribution overlaps with the congestion window, we know that we have switched between slow start and congestion avoidance, possibly multiple times.

- Identify whether the peer's receive window is too small, by comparing the distribution of outstanding unacked bytes with the send window distribution (i.e. the peer's receive window). I discussed this here.

# dtrace -s tcp_window.d
dtrace: script 'tcp_window.d' matched 10 probes
^C

  cwnd                                            80     10.175.96.92
           value  ------------- Distribution ------------- count
            1024 |                                         0
            2048 |                                         4
            4096 |                                         6
            8192 |                                         18
           16384 |                                         36
           32768 |@                                        79
           65536 |@                                        155
          131072 |@                                        199
          262144 |@@@                                      400
          524288 |@@@@@@                                   798
         1048576 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@             3848
         2097152 |                                         0

  ssthresh                                        80     10.175.96.92
           value  ------------- Distribution ------------- count
       268435456 |                                         0
       536870912 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
      1073741824 |                                         0

  unacked                                         80     10.175.96.92
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |                                         1
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |                                         0
              64 |                                         0
             128 |                                         0
             256 |                                         3
             512 |                                         0
            1024 |                                         0
            2048 |                                         4
            4096 |                                         9
            8192 |                                         21
           16384 |                                         36
           32768 |@                                        78
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5391
          131072 |                                         0

  swnd                                            80     10.175.96.92
           value  ------------- Distribution ------------- count
           32768 |                                         0
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 5543
          131072 |                                         0

Here we are observing a large file transfer via http on the webserver. Comparing these distributions, we can observe:

- Slow start congestion control is in operation. The distribution of congestion window values lies below the range of slow start threshold values (which are in the 536870912+ range), so the connection is in slow start mode.

- Both the unacked byte count and the send window values peak in the 65536-131071 range, but the send window value distribution is narrower. This tells us that the peer TCP's receive window is not closing.

- The congestion window distribution peaks in the 1048576-2097152 range, while the receive window distribution is confined to the 65536-131071 range. Since the cwnd distribution ranges as low as 2048-4095, we can see that for some of the time we have been observing the connection, congestion control has been a limiting factor on transfer; but for the majority of the time the receive window of the peer would more likely have been the limiting factor. However, we know the window has never closed, as the distribution of swnd values stays within the 65536-131071 range.

So all in all we have a connection that has been mildly constrained by congestion control, but for the bulk of the time we have been observing it, neither congestion nor the peer receive window have limited throughput.
Here's the script:

#!/usr/sbin/dtrace -s

tcp:::send
/ (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
{
        @cwnd["cwnd", args[4]->tcp_sport, args[2]->ip_daddr] =
            quantize(args[3]->tcps_cwnd);
        @ssthresh["ssthresh", args[4]->tcp_sport, args[2]->ip_daddr] =
            quantize(args[3]->tcps_cwnd_ssthresh);
        @unacked["unacked", args[4]->tcp_sport, args[2]->ip_daddr] =
            quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
        @swnd["swnd", args[4]->tcp_sport, args[2]->ip_daddr] =
            quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
}

One surprise here is that slow start is still in operation – one would assume that for a large file transfer, acknowledgements would push the congestion window up past the slow start threshold over time. The slow start threshold is in fact still close to its initial (very high) value, so that would suggest we have not experienced any congestion (the slow start threshold is adjusted when congestion occurs). Also, the above measurements were taken early in the connection lifetime, so the congestion window did not get a chance to be bumped up to the level of the slow start threshold.

A good strategy when examining these sorts of measurements for a given service (such as a webserver) would be to start by examining the distributions above aggregated by port number only, to get an overall feel for service performance – i.e. is congestion control or peer receive window size an issue, or are we unconstrained to fill the pipe? From there, the overlap of distributions will tell us whether to drill down into specific clients. For example, if the send window distribution has multiple peaks, we may want to examine whether particular clients show issues with their receive window.

    Read the article

  • Download a file with DefaultHTTPClient and preemptive authentication

    - by Nils
    After I had a lot of problems with preemptive authentication, I finally got it working. Now the next problem: I want to get a file with it, but I don't know how. I thought the file data might be in the variable response, but it isn't. Any ideas how this might work? I've been trying for days without success :( – basically I'm trying to download a JPEG file which is on a server protected by preemptive authentication.

// BASIC AUTH
// Adapted from the Apache HttpClient example (Apache License, Version 2.0,
// http://www.apache.org/licenses/LICENSE-2.0):
// http://svn.apache.org/repos/asf/httpcomponents/httpclient/branches/4.0.x/httpclient/src/examples/org/apache/http/examples/client/ClientPreemptiveBasicAuthentication.java

httpclient = new DefaultHttpClient();
httpclient.getCredentialsProvider().setCredentials(
        new AuthScope(host, port),
        new UsernamePasswordCredentials(username, password));

// Generate BASIC scheme object and stick it to the local
// execution context
BasicHttpContext localcontext = new BasicHttpContext();
BasicScheme basicAuth = new BasicScheme();
localcontext.setAttribute("preemptive-auth", basicAuth);

// first request interceptor
httpclient.addRequestInterceptor(new PreemptiveAuth(), 0);

HttpHost targetHost = new HttpHost(host, port, "http");
//HttpGet httpget = new HttpGet("/");
HttpGet httpget = new HttpGet(http.url);

System.out.println("executing request" + httpget.getRequestLine());
HttpResponse response = httpclient.execute(targetHost, httpget, localcontext);
HttpEntity entity = response.getEntity();
System.out.println("----------------------------------------");
System.out.println("+" + response.getStatusLine() + "+");
...
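For what it's worth, with HttpClient 4.x the file bytes live in the response entity's content stream rather than on the response object itself. A minimal sketch of writing the JPEG to disk (the target path is illustrative, error handling is omitted, the usual java.io imports are assumed, and 'entity' is the variable from the snippet above):

// Copy the response body (the protected JPEG) to a local file.
if (entity != null) {
    InputStream in = entity.getContent();
    OutputStream out = new FileOutputStream("/tmp/image.jpg"); // illustrative path
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = in.read(buffer)) != -1) {
        out.write(buffer, 0, bytesRead);
    }
    out.close();
    in.close();
}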

    Read the article

  • Best ASP.NET E-Commerce Framework (aspDotNetStoreFront, AbleCommerce, BVCommerce, MediaChase,...)

    - by EfficionDave
    I need a full-featured ASP.NET e-commerce solution for a project. It needs to be easily extensible, as I need to extend it to integrate with a custom Flash-based personalization engine. It should also have a great administrative back-end that handles inventory tracking, reporting, labels, shipping, and so on, so that the client can really use it as the basis for their business. I've used quite a few different e-commerce systems, including OSCommerce, Zen Cart, Catalook, and eTailer. While I was impressed with Zen Cart and eTailer, this project is big enough that I want to try a more serious, commercial offering. I'm particularly interested in thoughts on aspdotnetstorefront, BVCommerce, AbleCommerce, and MediaChase.

[UPDATE] I've done quite a bit of evaluation since I posted this question and will give some of my findings below:

BVCommerce - The price point is nice, the product seems fairly well regarded, and I've heard good things about customizing/extending it. It just doesn't seem like it's going to meet my needs as a top-tier e-commerce solution. After viewing the tutorial videos, the admin interface just seems too basic and outdated.

aspDotNetStorefront - The integration with DotNetNuke was its biggest selling point for me, but after reading people's opinions, the integration seems quite weak. It sounds like there's a significant learning curve, and the architecture just doesn't sound as clean as it should be.

AbleCommerce - After a rather thorough evaluation, I have selected AbleCommerce for at least my next project. The admin interface is beautiful and has a nice list of features. The e-commerce portion of the user side is very well done, highly skinnable (using Master Pages and a couple of other techniques), and the checkout workflow has all the features I need while still being very clean. Source to the API can be bought for $500, but you generally won't need it. There are some third-party modules available, but there's not currently a large market for these. The biggest glaring weakness of AbleCommerce I've found is that its CMS capabilities are very limited and not at all well suited for a non-technical user making most site content updates. But I have a good solution for that, which I plan to adapt into AbleCommerce. They will automatically generate a site for you to play with to your heart's content for 30 days. It is definitely worth checking out.

    Read the article

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi-real-time basis (someone is bound to ask what semi-real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15 min) and feed it into a data warehouse.

How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side; off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc., so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

Best solution? From what I can piece together, the most practical solution is as follows:

- Create a TRIGGER to write all DML activity to a rotating CSV log file
- Perform whatever transformations are required
- Use the native DW data pump tool to efficiently pump the transformed CSV into the DW

Why this approach? TRIGGERs allow selective tables to be targeted rather than being system-wide, their output is configurable (i.e. into a CSV), and they are relatively easy to write and deploy. SLONY uses a similar approach and the overhead is acceptable. CSV is easy and fast to transform, and easy to pump into the DW.

Alternatives considered:

- Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). The problem with this is that it looked very verbose relative to what I needed and was a little trickier to parse and transform. However, it could be faster, as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier, as it is system-wide, but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log).
- Querying the data directly via an ETL tool such as Talend and pumping it into the DW. The problem is that the OLTP schema would need to be tweaked to support this, and that has many negative side-effects.
- Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave, so the conceptual framework is there, but the proposed solution just seems easier and cleaner.
- Using the WAL.

Has anyone done this before? Want to share your thoughts?
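As a rough illustration of the TRIGGER idea (all table and column names here are invented for the sketch, and it logs to an audit table that a scheduled job could periodically export with COPY ... TO ... CSV and then purge, rather than writing the CSV file directly):

-- Illustrative audit table capturing DML against an "orders" table.
CREATE TABLE orders_log (
    logged_at  timestamp NOT NULL DEFAULT now(),
    operation  char(1)   NOT NULL,   -- 'I', 'U' or 'D'
    order_id   integer   NOT NULL,
    amount     numeric
);

CREATE OR REPLACE FUNCTION log_orders_dml() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO orders_log(operation, order_id, amount)
        VALUES ('D', OLD.order_id, OLD.amount);
        RETURN OLD;
    ELSE
        -- substr(TG_OP, 1, 1) yields 'I' for INSERT and 'U' for UPDATE.
        INSERT INTO orders_log(operation, order_id, amount)
        VALUES (substr(TG_OP, 1, 1), NEW.order_id, NEW.amount);
        RETURN NEW;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_dml_log
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW EXECUTE PROCEDURE log_orders_dml();

The 15-minute extract would then be a COPY (SELECT ... FROM orders_log WHERE logged_at > ...) TO '/path/out.csv' WITH CSV, followed by a purge of the exported rows.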

    Read the article

  • Paging through records (json data) using jQuery...

    - by Pandiya Chendur
    I have a JSON result that contains numerous records. I'd like to show the first five records on one page and create pager links that move to the corresponding page of five records, and so on. I don't want the page to refresh, which is why I'm hoping for a combination of JavaScript and jQuery. My JSON data looks like this:

{"Table" : [
{"Emp_Id" : "3","Identity_No" : "","Emp_Name" : "Jerome","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Supervisior","Desig_Description" : "Supervisior of the Construction","SalaryBasis" : "Monthly","FixedSalary" : "25000.00"},
{"Emp_Id" : "4","Identity_No" : "","Emp_Name" : "Mohan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Acc ","Desig_Description" : "Accountant","SalaryBasis" : "Monthly","FixedSalary" : "200.00"},
{"Emp_Id" : "5","Identity_No" : "","Emp_Name" : "Murugan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "150.00"},
{"Emp_Id" : "6","Identity_No" : "","Emp_Name" : "Ram","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "120.00"},
{"Emp_Id" : "7","Identity_No" : "","Emp_Name" : "Raja","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "135.00"},
{"Emp_Id" : "8","Identity_No" : "","Emp_Name" : "Raja kumar","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
{"Emp_Id" : "9","Identity_No" : "","Emp_Name" : "Lakshmi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "100.00"},
{"Emp_Id" : "10","Identity_No" : "","Emp_Name" : "Palani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "200.00"},
{"Emp_Id" : "11","Identity_No" : "","Emp_Name" : "Annamalai","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
{"Emp_Id" : "12","Identity_No" : "","Emp_Name" : "David","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
{"Emp_Id" : "13","Identity_No" : "","Emp_Name" : "Chandru","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
{"Emp_Id" : "14","Identity_No" : "","Emp_Name" : "Mani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Helper","Desig_Description" : "Steel Helper","SalaryBasis" : "Weekly","FixedSalary" : "175.00"},
{"Emp_Id" : "15","Identity_No" : "","Emp_Name" : "Karthik","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "195.00"},
{"Emp_Id" : "16","Identity_No" : "","Emp_Name" : "Bala","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
{"Emp_Id" : "17","Identity_No" : "","Emp_Name" : "Tamil arasi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Helper","Desig_Description" : "Wood Helper","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
{"Emp_Id" : "18","Identity_No" : "","Emp_Name" : "Perumal","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Cook","Desig_Description" : "Cook","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
{"Emp_Id" : "19","Identity_No" : "","Emp_Name" : "Andiappan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Watchman","Desig_Description" : "Watchman","SalaryBasis" : "Weekly","FixedSalary" : "150.00"}
] }

And as of now my result looks like this: http://img401.imageshack.us/img401/2500/yuidtsum.jpg

I have used jQuery for this:

var jsonObj = JSON.parse(HfJsonValue);
for (var i = jsonObj.Table.length - 1; i >= 0; i--) {
    var employee = jsonObj.Table[i];
    $('<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name +
      '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name +
      '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis +
      '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary +
      '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address +
      '</span></div>').insertAfter('#ResultsDiv');
}

My image contains only 6 records as of now. Any suggestions?

    Read the article

  • Single Sign On for a Web App

    - by Jeremy Goodell
    I have been trying to understand how this problem is solved for over a month now. I really need to come up with a general approach that works – I'm basically the only resource who can do it. I have a theory, but I'm just not sure it's the easiest (or correct) approach, and I haven't been able to find any information to support my ideas.

Here's the scenario:

1) You have a complex web application that offers secure content on a subscription basis.
2) Users are required to log in to your application with a user name and password.
3) You sell to large corporations, which already have a corporate authentication technology (for example, Active Directory).
4) You would like to integrate with the corporate authentication mechanism to allow their users to log onto your Web App without having to enter their user name and password.

Now, any solution you come up with will have to provide a mechanism for adding new users, removing users, changing user information, and allowing users to log in. Ideally, all of these would happen "automagically" when the corporate customer made the corresponding changes to their own authentication system.

Now, I have a theory that the way to do this (at least for Active Directory) would be for me to write a client-side app that integrates with the customer's Active Directory to track the targeted changes, and then communicate those changes to my Web App. I think that if this communication were done via Web Services offered by my web app, then it would maintain an unhackable level of security, which would obviously be a requirement for these corporate customers. A sketch of this idea follows below.

I've found some information about a Microsoft product called Active Directory Federation Services (ADFS) which may or may not be the right approach for me. It seems to be a bit bulky and has some requirements that might not work for all customers. For other existing ID scenarios (like Athens and Shibboleth), I don't think a client application is necessary. It's probably just a matter of tying into the existing ID services.

I would appreciate any advice anyone has on anything I've mentioned here. In particular, if you can tell me whether my theory is correct about providing a client-side app that communicates with server-side Web Services, or if I'm totally going in the wrong direction. Also, if you could point me at any web sites or articles that explain how to do this, I'd really appreciate it. My research has not turned up much so far. Finally, if you could let me know of any Web applications that currently offer this service (particularly as tied to a corporate Active Directory), I would be very grateful. I am wondering if other B2B Web apps like salesforce.com or hoovers.com offer a similar service for their corporate customers. I hate being in the dark and would greatly appreciate any light you can shed ... Jeremy
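For illustration only, the AD-watching client could poll for changes using the uSNChanged attribute as a high-water mark (everything here is hypothetical: the LDAP path, the PushUserChange service proxy and the USN persistence helpers are invented, and a real sync also has to handle deletions via tombstones):

// Assumes a reference to System.DirectoryServices.
long lastUsn = LoadLastSyncedUsn();  // hypothetical persistence helper

DirectoryEntry root = new DirectoryEntry("LDAP://DC=corp,DC=example,DC=com");
DirectorySearcher searcher = new DirectorySearcher(root);
searcher.Filter = "(&(objectCategory=person)(uSNChanged>=" + (lastUsn + 1) + "))";
searcher.PropertiesToLoad.Add("sAMAccountName");
searcher.PropertiesToLoad.Add("uSNChanged");

long highestUsn = lastUsn;
foreach (SearchResult result in searcher.FindAll())
{
    // Report each changed account to the Web App over a secured
    // web service (PushUserChange is a hypothetical client proxy).
    PushUserChange((string)result.Properties["sAMAccountName"][0]);

    long usn = (long)result.Properties["uSNChanged"][0];
    if (usn > highestUsn) highestUsn = usn;
}
SaveLastSyncedUsn(highestUsn);  // hypothetical persistence helper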

    Read the article

  • Good jquery pagination plugin to use with json Data...

    - by bala3569
    I am looking for a good jQuery pagination plugin to use in my aspx page. I have the following parameters: currentpage, pagesize, TotalRecords, NumberofPages. I would like the plugin to work the same as Stack Overflow's paging.

EDIT: It should paginate through JSON data, similar to this. I use my JSON data, iterating with jQuery:

var jsonObj = jQuery.parseJSON(HfJsonValue);
for (var i = jsonObj.Table.length - 1; i >= 0; i--) {
    var employee = jsonObj.Table[i];
    $('<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name +
      '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name +
      '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis +
      '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary +
      '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address +
      '</span></div>').insertAfter('#ResultsDiv');
}

There are 25 divs in my page as a result; I want to show the first five divs on page 1, and so on. Any suggestions? My HfJsonValue contains the following JSON data:

{"Table" : [
{"Emp_Id" : "3","Identity_No" : "","Emp_Name" : "Jerome","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Supervisior","Desig_Description" : "Supervisior of the Construction","SalaryBasis" : "Monthly","FixedSalary" : "25000.00"},
{"Emp_Id" : "4","Identity_No" : "","Emp_Name" : "Mohan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Acc ","Desig_Description" : "Accountant","SalaryBasis" : "Monthly","FixedSalary" : "200.00"},
{"Emp_Id" : "5","Identity_No" : "","Emp_Name" : "Murugan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "150.00"},
{"Emp_Id" : "6","Identity_No" : "","Emp_Name" : "Ram","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "120.00"},
{"Emp_Id" : "7","Identity_No" : "","Emp_Name" : "Raja","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason","Desig_Description" : "Mason","SalaryBasis" : "Weekly","FixedSalary" : "135.00"},
{"Emp_Id" : "8","Identity_No" : "","Emp_Name" : "Raja kumar","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
{"Emp_Id" : "9","Identity_No" : "","Emp_Name" : "Lakshmi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Mason Helper","Desig_Description" : "Mason Helper","SalaryBasis" : "Weekly","FixedSalary" : "100.00"},
{"Emp_Id" : "10","Identity_No" : "","Emp_Name" : "Palani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "200.00"},
{"Emp_Id" : "11","Identity_No" : "","Emp_Name" : "Annamalai","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Carpenter","Desig_Description" : "Carpenter","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
{"Emp_Id" : "12","Identity_No" : "","Emp_Name" : "David","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
{"Emp_Id" : "13","Identity_No" : "","Emp_Name" : "Chandru","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Fixer","Desig_Description" : "Steel Fixer","SalaryBasis" : "Weekly","FixedSalary" : "220.00"},
{"Emp_Id" : "14","Identity_No" : "","Emp_Name" : "Mani","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Steel Helper","Desig_Description" : "Steel Helper","SalaryBasis" : "Weekly","FixedSalary" : "175.00"},
{"Emp_Id" : "15","Identity_No" : "","Emp_Name" : "Karthik","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "195.00"},
{"Emp_Id" : "16","Identity_No" : "","Emp_Name" : "Bala","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Fixer","Desig_Description" : "Wood Fixer","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
{"Emp_Id" : "17","Identity_No" : "","Emp_Name" : "Tamil arasi","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Wood Helper","Desig_Description" : "Wood Helper","SalaryBasis" : "Weekly","FixedSalary" : "185.00"},
{"Emp_Id" : "18","Identity_No" : "","Emp_Name" : "Perumal","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Cook","Desig_Description" : "Cook","SalaryBasis" : "Weekly","FixedSalary" : "105.00"},
{"Emp_Id" : "19","Identity_No" : "","Emp_Name" : "Andiappan","Address" : "Madurai","Date_Of_Birth" : "","Desig_Name" : "Watchman","Desig_Description" : "Watchman","SalaryBasis" : "Weekly","FixedSalary" : "150.00"}
]}

    Read the article

  • Filter Calendar view SharePoint WSS 3.0

    - by lerac
    Hi all, I have a SharePoint site with a calendar view and would like to filter it on the basis of the current user. Don't be afraid – I already figured out how to do this with a list, customizing some existing jScripts and working with a Content Editor Web Part. Yet this jScript does not work in a calendar. To paint a picture: I have columns like Judge1, Lawyer, Clerk (example). Underneath these columns there are names, of course. However, these are not shown in calendar view – only the case numbers are – so it is hard to filter on something that is not displayed. Now I've been thinking (not always wise): perhaps I can adjust the aspx page of the calendar/list by adjusting a filter I applied in SharePoint. This would also solve the issue of displaying all the content before it filters with JavaScript, since it should not be possible for users to see the entire list content (security). I went to Modify List View and created a filter where Judge1 = Mr. J. Jenkins. Then I went to SharePoint Designer and opened the calendar's aspx page. As I expected, I found Mr. J. Jenkins in the following code. Since I can't display images because I'm new (not a very handy restriction), I have to give you a URL; the code can't be pasted either, as it gets completely messed up even with code mode on. Hyperlink CODE IMAGE Keep in mind I just posted a very tiny part of the code (only the part I want to change). Now I have no idea what kind of code this is above this text (SharePoint WSS 3.0 uses it for aspx pages), but I would like to change Mr. J. Jenkins into a jScript var/val, since I already managed to get the current user that is logged in: var user = jP.getUserProfile(); var userinfspvalue = user.Department; There is more code around that one too, of course, but to give you a picture: the var userinfspvalue is what I would like to replace the text Mr. J. Jenkins with. This would mean the calendar would be dynamically filtered based upon the current user that is logged on. I have no idea what is possible – perhaps there is a better solution, who knows... Do you know? Thank you so much ahead!

    Read the article

  • How to find the data key in the CheckedChanged event of a CheckBox in a ListView in ASP.NET?

    - by subodh
    I am using a ListView; inside the item template I am using a label and a checkbox. I want that whenever the user clicks on the checkbox, the value should be updated in a table. I am using DataKeys in the ListView; on the basis of the data key, the value should be updated in the table. The query is:

string updateQuery = "UPDATE [TABLE] SET [COLUMN] = " + Convert.ToInt32(chk.Checked) + " WHERE PK_ID = " + dataKey + " ";

Also I want some help in displaying the result as it is inside the table – meaning if the value for the column in the table for a particular PK_ID is 1, then the checkbox should be checked. Here is the code snippet:

<asp:ListView ID="lvFocusArea" runat="server" DataKeyNames="PK_ID" OnItemDataBound="lvFocusArea_ItemDataBound">
    <LayoutTemplate>
        <table border="0" cellpadding="1" width="400px">
            <tr style="background-color: #E5E5FE">
                <th align="left">Focus Area</th>
                <th>Is Current Focused</th>
            </tr>
            <tr id="itemPlaceholder" runat="server"></tr>
        </table>
    </LayoutTemplate>
    <ItemTemplate>
        <tr>
            <td width="80%">
                <asp:Label ID="lblFocusArea" runat="server" Text=""><%#Eval("FOCUS_AREA_NAME") %></asp:Label>
            </td>
            <td align="center" width="20%">
                <asp:CheckBox ID="chkFocusArea" runat="server" OnCheckedChanged="chkFocusArea_CheckedChanged" AutoPostBack="true" />
            </td>
        </tr>
    </ItemTemplate>
    <AlternatingItemTemplate>
        <tr style="background-color: #EFEFEF">
            <td>
                <asp:Label ID="lblFocusArea" runat="server" Text=""><%#Eval("FOCUS_AREA_NAME") %></asp:Label>
            </td>
            <td align="center">
                <asp:CheckBox ID="chkFocusArea" runat="server" OnCheckedChanged="chkFocusArea_CheckedChanged" AutoPostBack="true" />
            </td>
        </tr>
    </AlternatingItemTemplate>
    <SelectedItemTemplate>
        <td>item selected</td>
    </SelectedItemTemplate>
</asp:ListView>

Help me.
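A sketch of the code-behind that usually handles both parts (the connectionString is an assumption, [TABLE]/[COLUMN] are the placeholders from the question, and since the concatenated query above is open to SQL injection, this version uses parameters; assumes using System.Data.SqlClient):

protected void chkFocusArea_CheckedChanged(object sender, EventArgs e)
{
    CheckBox chk = (CheckBox)sender;
    // The checkbox's naming container is the row it sits in.
    ListViewDataItem item = (ListViewDataItem)chk.NamingContainer;
    int dataKey = (int)lvFocusArea.DataKeys[item.DisplayIndex].Value;

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(
        "UPDATE [TABLE] SET [COLUMN] = @checked WHERE PK_ID = @id", conn))
    {
        cmd.Parameters.AddWithValue("@checked", chk.Checked ? 1 : 0);
        cmd.Parameters.AddWithValue("@id", dataKey);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}

protected void lvFocusArea_ItemDataBound(object sender, ListViewItemEventArgs e)
{
    if (e.Item.ItemType == ListViewItemType.DataItem)
    {
        // Re-check the box from the stored column value when binding.
        CheckBox chk = (CheckBox)e.Item.FindControl("chkFocusArea");
        chk.Checked = Convert.ToBoolean(DataBinder.Eval(e.Item.DataItem, "COLUMN"));
    }
}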

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >