Search Results

Search found 2033 results on 82 pages for 'cut'.


  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review the changes that have been made to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface for reviewing code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one-by-one. Worse, your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control, which means the DBA can easily compare one version with another.
    The Trick
    When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV's XML and obtain the SQL from that. This is when the breakthrough happened. Inspecting the DSV's XML, I saw that the things I was interested in were called TableType, DbTableName, FriendlyName and QueryDefinition. Searching Google for FriendlyName returned the page Programming AMO Fundamental Objects, which hinted that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously and made the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip

    using System;
    using System.Data;
    using System.IO;
    using Microsoft.AnalysisServices;

    ... class code removed for clarity

    // connect to the OLAP server
    Server olapServer = new Server();
    olapServer.Connect(config.olapServerName);
    if (olapServer != null)
    {
        // connected to server ok, so obtain reference to the OLAP database
        Database olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);
        if (olapDatabase != null)
        {
            Console.WriteLine(string.Format("Successfully connected to '{0}' on '{1}'",
                config.olapDatabaseName, config.olapServerName));
            // export SQL from each data source view (usually only one, but can be many!)
            foreach (DataSourceView dsv in olapDatabase.DataSourceViews)
            {
                Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));
                // for each table in the DSV, export the SQL to a file
                foreach (DataTable dt in dsv.Schema.Tables)
                {
                    Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName));
                    // get the name of the table in the DSV
                    // use the FriendlyName as the user inputs this and therefore has control of it
                    string queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");
                    string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql");
                    // delete the sql file if it exists
                    ... file deletion code removed for clarity
                    // write out the SQL to a file
                    if (dt.ExtendedProperties["TableType"].ToString() == "View")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());
                    }
                    if (dt.ExtendedProperties["TableType"].ToString() == "Table")
                    {
                        File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString());
                    }
                }
            }
            Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName));
        }
    }

    Of course, if you are following industry best practice, you should be basing your cube on a series of views. This means the utility will be of limited practical value, unless of course you are inheriting a project and want to check whether the implementation was done correctly.

    Read the article

  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The Carousel component is one of the slicker ways of showing collections of data, and on a mobile device it works really well with the finger-swipe gesture. Using the Carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major change - right now you can't drag a collection from the data control palette and drop it as a carousel. So here is a quick workaround for that, and details about setting up carousels in your application. The first thing you'll need is a data control that returns an array of records. In my demo I'm using the Emps collection that you can get by following this tutorial. Then you drag the emps collection and drop it in your amx page as an ADF Mobile iterator. We are doing this as a shortcut to getting the right binding needed for a carousel in our page. If you look now in your page's bindings you'll see something like this: You can now comment out the whole iterator code in your page's source. Next, let's add the carousel. From the component palette, drag the carousel (from the Data View category) to the page. Next, drag a carousel item and drop it in the nodeStamp facet of the carousel. Now we'll hook up the carousel to the binding we got from the iterator - this is quite simple: just copy the var and value attributes from the iterator tag to the carousel tag: var="row" value="#{bindings.emps.collectionModel}" Next, drop a panelForm, or another layout panel, into the carousel item. Into that panelForm you can now drop items and bind their value property to the row attributes - basically copying the way the fields are bound in the iterator, for example: value="#{row.hireDate}". By the way, you can also copy other attributes, like the label. And that's it. Your code should end up looking something like this:

    <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">
      <amx:facet name="nodeStamp">
        <amx:carouselItem id="ci1">
          <amx:panelFormLayout id="pfl1">
            <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>
            <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>
          </amx:panelFormLayout>
        </amx:carouselItem>
      </amx:facet>
    </amx:carousel>

    And when you run your application it will look like this:

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to: refactoring of the code changed by the work item refactoring code neighboring the code changed by the item re-architecting the larger code area around the ticket. For example if an item has you changing a single function, you realize the entire class now could be redone to better accommodate this change. improving the UI on a form you just modified When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this. We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included: We plan for "padding" at the end of the sprint to account for this sort of thing. Always leave the code in better shape than you found it. Don't check in half-assed work. If we leave refactoring for later, it's hard to schedule and may never get done. You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later. We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include: Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible. The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves". If you want to schedule refactoring, just put it at the top of the backlog. There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?" which is like an hour, but they might say "Well if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?" And that takes a week. It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless. In some cases, the extra work postpones a check-in, causing blocking between devs. Some of us are now saying that we should decide some cut off, like if the additional stuff is less than 2 FP, it goes in the same ticket, if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Detect Unicode Usage in SQL Column

    One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char).  Doing so will cut the size used by the data in half, from 2 bytes per character (+ 2 bytes of overhead for varchar) to only 1 byte per character.  However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets.  If the tables are storing user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is a good thing.  However, if instead the data is being generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make.
    Avoid Premature Optimization
    If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit to using varchar vs. nvarchar.  More generally, for small tables, you won't see any significant benefit.  Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against this policy anywhere you can.  You really only want to act on measurable evidence that suggests that using Unicode is resulting in a problem, and that you won't lose anything by switching to varchar/char. Obviously the main reason to make this change is to reduce the amount of space required by each row.  This in turn affects how many rows SQL Server can page through at a time, and can also impact index size and how much disk I/O is required to respond to queries, etc.  If for example you have a table with 100 million records in it and this table has a column of type nchar(5), this column will use 5 * 2 = 10 bytes per row, and with 100M rows that works out to 10 bytes * 100 million = 1000 MB, or about 1 GB.  If it turns out that this column only ever stores ASCII characters, then changing it to char(5) would reduce this to 5 * 1 = 5 bytes per row, and only 500 MB.  Of course, if it turns out that it only ever stores the values true and false then you could go further and replace it with a bit data type which uses only 1 byte per row (100 MB total).
    Detecting Whether Unicode Is In Use
    So by now you think that you have a problem and that it might be alleviated by switching some columns from nvarchar/nchar to varchar/char, but you're not sure whether you're currently using Unicode in these columns.  By definition, you should only be thinking about this for a column that has a lot of rows in it, since the benefits just aren't there for a small table, so you can't just eyeball it and look for any non-ASCII characters.  Instead, you need a query.  It's actually very simple:

    SELECT DISTINCT(CategoryName)
    FROM Categories
    WHERE CategoryName <> CONVERT(varchar, CategoryName)

    Summary
    Thanks to Gregg Stark for the tip.
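    If you want to run the same check from application code (for example, across a list of candidate columns), here is a minimal C# sketch using ADO.NET. The connection string, table name, and column name are placeholders rather than values from the article, and varchar(8000) is used in the CONVERT so that long values are not truncated by the comparison:

    using System;
    using System.Data.SqlClient;

    class UnicodeColumnCheck
    {
        static void Main()
        {
            // Placeholder connection details and object names - adjust for your environment.
            const string connectionString = "Server=.;Database=MyDatabase;Integrated Security=true";
            const string table = "Categories";
            const string column = "CategoryName";

            // Counts rows whose value changes when forced into a non-Unicode type,
            // i.e. rows that actually need nvarchar.
            string sql = "SELECT COUNT(*) FROM [" + table + "] " +
                         "WHERE [" + column + "] <> CONVERT(varchar(8000), [" + column + "])";

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                connection.Open();
                int unicodeRows = (int)command.ExecuteScalar();
                Console.WriteLine(unicodeRows == 0
                    ? table + "." + column + " contains no Unicode-only data; varchar looks safe."
                    : table + "." + column + " has " + unicodeRows + " rows that rely on Unicode; keep nvarchar.");
            }
        }
    }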

    Read the article

  • Where Twitter Stands Heading Into 2013

    - by Mike Stiles
    As Twitter continued throughout 2012 to be a stage on which global politics and culture played itself out, the company itself underwent some adjustments that give us a good indication of what users and brands can expect from the platform in 2013. The power of the network did anything but fade. Celebrities continued to use it to connect one-on-one. Even the Pope signed on this year. It continued to fuel revolutions. It played an exponentially large factor in this US Presidential election. And around the world, the freedom to speak was challenged as users were fired, sued, sometimes even jailed for their tweets. Expect more of the same in 2013, as Twitter has entrenched itself, for individuals, causes and brands, as the fastest, easiest, most efficient way to message the masses so some measure of impact can come from it. It’s changed everything, and it’s not finished. These fun facts reveal the position of strength with which Twitter enters 2013: It now generates a billion tweets every 2.5 days It has 500 million+ users The average Twitter user has tweeted 307 times 32% of everyone using the Internet uses Twitter It’s expected to bring in $540 million in ad revenue by 2014 11 new accounts are created every second High-level Executive Summary: people love it, people use it, and they’re going to keep loving and using it. Whether or not outside developers love it is a different matter. 2012 marked a shift from welcoming the third party support that played at least some role in Twitter being so warmly embraced, to discouraging anything that replicates what Twitter can do itself…or plans to do itself. It’s not the open playground it once was. Now Twitter must spend 2013 proving it can innovate in-house and keep us just as entranced. Likewise, Twitter is distancing itself from Facebook. Images from the #1 platform’s Instagram don’t work on Twitter anymore, and Twitter’s rolling out their own photo filter product. Where the two have lived in a “plenty of room for everybody” symbiosis up to now, 2013 could see the giants ramping up a full-on rivalry. Twitter is exhibiting a deliberate strategy. Updates have centered on more visually appealing search results, and making finding and sharing content easier. Deals have been cut with some media entities so their content stands out. CEO Dick Costolo has said tweets aren’t the attraction, they’re what leads you to content. Twitter aims to be a key distributor of media and info. Add the addition of former News Corp. President Peter Chernin to the board, and their hashtag landing page experience for events, and their media behemoth ambitions get pretty clear. There are challenges ahead and Costolo has also laid those out; entry into China, figuring out how to have Twitter deliver both comprehensive and relevant, targeted experiences, and the visualization of big data. What does this mean for corporations? They can expect a more media-rich evolution and growing emphases on imagery. They can expect more opportunities to create great media content and leverage Twitter for its distribution. And they can expect new ways to surface in searches. Are brands diving in? 56% of customer tweets to companies get completely and totally ignored. Ugh. A study Twitter recently conducted with Compete shows people who see tweets from retailers are more likely to buy a product. And, the more retailer tweets they see, the more likely they are to purchase on the retail site. 
    As more of those tweets point to engaging media content from the brand, the results should get even better. Twitter appears ready for 2013. Enterprise brands have some work to do. @mikestiles
    Photo: Stuart Miles, freedigitalphotos.net

    Read the article

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with the myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately became so tedious that, under pressure, it made more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.
    User-Centered Forms Design Considerations
    The starting point--something you must always keep in mind with your own design--is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points in user testing those forms:
    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry. Way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.
    I am sure you have a few more tips on forms design for data entry users. Remember the user before heading for the comments.

    Read the article

  • More Stuff less Fluff

    - by brendonpage
    Originally posted on: http://geekswithblogs.net/brendonpage/archive/2013/11/08/more-stuff-less-fluff.aspx
    YAGNI – "You Aren't Going To Need It". This is an acronym commonly used in software development to remind developers to only write what they need. This acronym exists because software developers have gotten into the habit of writing everything they need to solve a problem and then everything they think they're going to possibly need in the future. Since we can't predict the future, this results in a large portion of the code that we write never being used. That extra code causes unnecessary complexity, which makes it harder to understand and harder to modify when we inevitably have to write something that we didn't think of. I've known about YAGNI for some time now but I never really got it. The words made sense and the idea was clear but the concept never sank in. I was one of those devs who'd happily write a ton of code in anticipation of future needs. In my mind this was an essential part of writing high quality code. I didn't realise that in doing so I was actually writing low quality code. If you are anything like me, you are probably thinking "Lies and propaganda! High quality code needs to be future proof." I agree! But what makes code future proof? If we could see into the future the answer would be simple: code that allows for or meets all future requirements. Since we can't see the future, the best we can do is write code that can easily adapt to future requirements; this means writing flexible code. Flexible code is: fast to understand, fast to add to, and fast to modify. To be flexible, code has to be simple; this means only making it as complex as it needs to be to meet those 3 criteria. That is high quality code. YAGNI! The art is in deciding where to place the seams (abstractions) that will give you flexibility without making decisions about future functionality. Robert C. Martin explains it very nicely: he says a good architecture allows you to defer decisions, because if you can defer a decision then you have the flexibility to change it. I've recently had a YAGNI experience which brought this all into perspective. I was working on a new project which had multiple clients that connect to a server hosted in the cloud. I was tasked with adding a feature to the desktop client that would allow users to capture items that would then be saved to the cloud. My immediate thought was "Hey, we have multiple clients, so I should build a web service for these items; that way we can access them from other clients", so I went to work and this is what I created. I stood back and gazed upon what I'd created with a warm fuzzy feeling. It was beautiful! Then the time came for the team to use the design I'd created for another feature with a new entity. Let's just say that they didn't get the same warm fuzzy feeling that I did when they looked at the design. After much discussion they eventually got it through to me that I'd bloated the design based on an assumption of future functionality. After much more discussion we cut the design down to the following. This design gives us future flexibility with no extra work; it is only as complex as it needs to be. It has been a couple of months since this incident and we still haven't needed to access either of the entities from other clients. Using the simpler design allowed us to do more stuff with less stuff!
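    To make the point about seams concrete, here is a minimal sketch of the kind of abstraction being described. All of the type names are hypothetical and this is not the actual design from the post; the idea is simply that the capture feature depends on a small interface, the simplest implementation that works today sits behind it, and the decision about a service-backed store is deferred until another client actually needs the data.

    using System.Collections.Generic;

    public class CapturedItem
    {
        public string Title { get; set; }
        public string Notes { get; set; }
    }

    // The seam: callers only know about this abstraction.
    public interface ICapturedItemStore
    {
        void Save(CapturedItem item);
        IEnumerable<CapturedItem> GetAll();
    }

    // Today's implementation: the simplest thing that satisfies the current requirement.
    public class LocalCapturedItemStore : ICapturedItemStore
    {
        private readonly List<CapturedItem> items = new List<CapturedItem>();

        public void Save(CapturedItem item)
        {
            items.Add(item);
        }

        public IEnumerable<CapturedItem> GetAll()
        {
            return items;
        }
    }

    // If a second client ever needs the items, a service-backed implementation can be
    // added behind the same interface without touching the capture feature itself.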

    Read the article

  • Sitting Pretty

    - by Phil Factor
    Guest Editorial for Simple-Talk IT Pro newsletter'DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed' I dislike the term 'avatar', when used to describe a portrait photograph. An avatar, in the sense of a picture, is merely the depiction of one's role-play alter-ego, often a ridiculous bronze-age deity. However, professional image is important. The choice and creation of online photos has an effect on the way your message is received and it is important to get that right. It is fine to use that photo of you after ten lagers on holiday in an Ibiza nightclub, but what works on Facebook looks hilarious on LinkedIn. My splendid photograph that I use online was done by a professional photographer at great expense and I've never had the slightest twinge of regret when I remember how much I paid for it. It is me, but a more pensive and dignified edition, oozing trust and wisdom. One gasps at the magical skill that a professional photographer can conjure up, without digital manipulation, to make the best of a derisory noggin (ed: slang for a head). Even if he had offered to depict me as a semi-naked, muscle-bound, sword-wielding hero, I'd have demurred. No, any professional person needs a carefully cultivated image that looks right. I'd never thought of using that profile shot, though I couldn't help noticing the photographer flinch slightly when he first caught sight of my face. There is a problem with using an avatar. The use of a single image doesn't express the appropriate emotion. At the moment, it is weird to see someone with a laughing portrait writing something solemn. A neutral cast to the face, somewhat like a passport photo, is probably the best compromise. Actually, the same is true of a working life in IT. One of the first skills I learned was not to laugh at managers, but, instead, to develop a facial expression that promoted a sense of keenness, energy and respect. Every profession has its own preferred facial cast. A neighbour of mine has the natural gift of a face that displays barely repressed grief. Though he is characteristically cheerful, he earns a remarkable income as a pallbearer. DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed. With an appropriate avatar, you could do away with a lot of the need for 'smilies' to give clues as to the meaning of what you've written on forums and blogs. If you had a set of avatars, showing the full gamut of human emotions expressible in writing: Rage, fear, reproach, joy, ebullience, apprehension, exasperation, dissembly, irony, pathos, euphoria, remorse and so on. It would be quite a drop-down list on forums, but given the vast prairies of space on the average hard drive, who cares? It would cut down on the number of spats in Forums just as long as one picks the right avatar. As an unreconstructed geek, I find it hard to admit to the value of image in the workplace, but it is true. Just as we use professionals to tidy up and order our CVs and job applications, we should employ experts to enhance our professional image. After all you don't perform surgery or dentistry on yourself do you?

    Read the article

  • Taking HRMS to the Cloud to Simplify Human Resources Management

    - by HCM-Oracle
    By Anke Mogannam With human capital management (HCM) a top-of-mind issue for executives in every industry, human resources (HR) organizations are poised to have their day in the sun—proving not just their administrative worth but their strategic value as well.  To make good on that promise, however, HR must modernize. Indeed, if HR is to act as an agent of change—providing the swift reallocation of employees  and the rapid absorption of employee data required for enterprises to shift course on a dime—it must first deal with the disruptive change at its own front door. And increasingly, that means choosing the right technology and human resources management system (HRMS) for managing the entire employee lifecycle. Unfortunately, for most organizations, this task has proved easier said than done. This is because while much has been written about advances in HRMS technology, until recently, most of those advances took the form of disparate on-premises solutions designed to serve very specific purposes. Although this may have resulted in key competencies in certain areas, it also meant that processes for core HR functions like payroll and benefits were being carried out in separate systems from those used for talent management, workforce optimization, training, and so on. With no integration—and no single system of record—processes were disconnected, ease of use was impeded, user experience was diminished, and vital data was left untapped.  Today, however, that scenario has begun to change, and end-to-end cloud-based HCM solutions have moved from wished-for innovations to real-life solutions. Why, then, have HR organizations been so slow in adopting them? The answer—it would seem—is, “It’s complicated.” So complicated, in fact, that 45 percent of the respondents to PwC’s “Annual HR Technology Survey” (for 2013) reported having no formal HR software roadmap, and 40 percent stated that they “did not know” whether their organizations would be increasing their use of cloud or software as a service (SaaS) for HR.  Clearly, HR organizations need help sorting through the morass of HR software options confronting them. But just as clearly, there’s an enormous opportunity awaiting those that do. The trick will come in charting a course that allows HR to leverage existing technology while investing in the cloud-based solutions that will deliver the end-to-end processes, easy-to-understand analytics, and superior adaptability required to simplify—and add value to—every aspect of employee management. The Opportunity therefore is to cut costs, drive Innovation, and increase engagement by moving to cloud-based HCM.  Then you will benefit from one Interface, leverage many access points, and  gain at-a-glance insight across your entire workforce. With many legacy on-premises HR systems not being efficient anymore and cloud-based, integrated systems that span the range of HR functions finally reaching maturity, the time is ripe for moving core HR to the cloud. Indeed, for the first time ever there are more HRMS replacement initiatives than HRMS upgrade initiatives under way, and the majority of them involve moving to the cloud per Cedar Crestone’s 2013-2014 HRMS survey. To learn how you can launch your own cloud HCM initiative and begin using HR to power the enterprise, visit Oracle HRMS in the Cloud and Oracle’s new customer 2 cloud program. 
Anke Mogannam brings more than 16 years of marketing and human capital management experience in the technology industries to her role at Oracle where she is part of the Human Capital Management applications marketing team. In that role, Anke drives content marketing, messaging, go-to-market activities, integrated marketing campaigns, and field enablement. Prior to joining Oracle, Anke held several roles in communications, marketing, HCM product strategy and product management at PeopleSoft, SAP, Workday and Saba. Follow her on Twitter @amogannam

    Read the article

  • Circle-Line Collision Detection Problem

    - by jazzdawg
    I am currently developing a breakout clone and I have hit a roadblock in getting collision detection between a ball (circle) and a brick (convex polygon) working correctly. I am using a circle-line collision detection test where each line represents an edge of the convex polygon brick. For the majority of the time the circle-line test works properly and the points of collision are resolved correctly.
    Collision detection working correctly.
    However, occasionally my collision detection code returns false due to a negative discriminant when the ball is actually intersecting the brick.
    Collision detection failing.
    I am aware of the inefficiency of this method and I am using axis-aligned bounding boxes to cut down on the number of bricks tested. My main concern is whether there are any mathematical bugs in my code below.

    /*
     * from and to are points at the start and end of the convex polygon's edge.
     * This function is called for every edge in the convex polygon until a
     * collision is detected.
     */
    bool circleLineCollision(Vec2f from, Vec2f to)
    {
        Vec2f lFrom, lTo, lLine;
        Vec2f line, normal;
        Vec2f intersectPt1, intersectPt2;
        float a, b, c, disc, sqrt_disc, u, v, nn, vn;
        bool one = false, two = false;

        // set line vectors
        lFrom = from - ball.circle.centre; // localised
        lTo = to - ball.circle.centre;     // localised
        lLine = lFrom - lTo;               // localised
        line = from - to;

        // calculate a, b & c values
        a = lLine.dot(lLine);
        b = 2 * (lLine.dot(lFrom));
        c = (lFrom.dot(lFrom)) - (ball.circle.radius * ball.circle.radius);

        // discriminant
        disc = (b * b) - (4 * a * c);

        if (disc < 0.0f)
        {
            // no intersections
            return false;
        }
        else if (disc == 0.0f)
        {
            // one intersection
            u = -b / (2 * a);
            intersectPt1 = from + (lLine.scale(u));
            one = pointOnLine(intersectPt1, from, to);
            if (!one) return false;
            return true;
        }
        else
        {
            // two intersections
            sqrt_disc = sqrt(disc);
            u = (-b + sqrt_disc) / (2 * a);
            v = (-b - sqrt_disc) / (2 * a);
            intersectPt1 = from + (lLine.scale(u));
            intersectPt2 = from + (lLine.scale(v));
            one = pointOnLine(intersectPt1, from, to);
            two = pointOnLine(intersectPt2, from, to);
            if (!one && !two) return false;
            return true;
        }
    }

    bool pointOnLine(Vec2f p, Vec2f from, Vec2f to)
    {
        if (p.x >= min(from.x, to.x) && p.x <= max(from.x, to.x) &&
            p.y >= min(from.y, to.y) && p.y <= max(from.y, to.y))
            return true;
        return false;
    }
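    For comparison, a common alternative that avoids the discriminant entirely is the closest-point test: project the circle centre onto the edge, clamp the projection to the segment, and compare the distance to the radius. Here is a minimal sketch of that approach in C# using System.Numerics.Vector2; since the project above is C++, treat it as pseudocode for the technique rather than drop-in code.

    using System;
    using System.Numerics;

    static class CircleCollision
    {
        // Returns true if the circle (centre, radius) touches the segment from-to.
        public static bool CircleSegmentCollision(Vector2 centre, float radius, Vector2 from, Vector2 to)
        {
            Vector2 edge = to - from;            // segment direction
            Vector2 toCentre = centre - from;    // segment start -> circle centre

            // Parameter of the centre's projection onto the infinite line,
            // clamped to [0, 1] so the closest point stays on the segment.
            float t = Vector2.Dot(toCentre, edge) / Vector2.Dot(edge, edge);
            t = Math.Max(0f, Math.Min(1f, t));

            Vector2 closest = from + edge * t;   // closest point on the segment
            Vector2 diff = centre - closest;

            // Collision if the closest point lies within the radius.
            return Vector2.Dot(diff, diff) <= radius * radius;
        }
    }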

    Read the article

  • The Connected Company: WebCenter Portal - Feedback - Analytics and Polls

    - by Michael Snow
    Guest Post by: Mitchell Palski, Staff Sales Consultant
    The importance of connecting peers has been widely recognized and socialized as a critical component of employee intranets. Organizations are striving to provide mediums for sharing knowledge and improving awareness across their enterprise. Indirectly, the socialization of your enterprise should lead to cost savings and improved product/service quality. However, many times the direct effects of connecting an organization’s leadership with its employees are overlooked. Oracle WebCenter Portal can help you bridge that gap by gathering implicit and explicit feedback.
    Implicit Feedback Through Usage Analytics
    Analytics allows administrators to track and analyze WebCenter Portal traffic and usage. Analytics provides the following basic functionality:
    - Usage Tracking Metrics: Analytics collects and reports metrics of common WebCenter Portal functions, including community and portlet traffic.
    - Behavior Tracking: Analytics can be used to analyze WebCenter Portal metrics to determine usage patterns, such as page visit duration and usage over time.
    - User Profile Correlation: Analytics can be used to correlate metric information with user profile information. Usage tracking reports can be viewed and filtered by user profile data such as country, company or title.
    Usage analytics help measure how users interact with website content – allowing your IT staff and business analysts to make informed decisions when planning development for your next intranet enhancement. For example:
    - If users are not accessing your Announcements page and missing critical information that they need to be aware of, you may elect to use graphical links on the home page to direct more users to that page. As a result, the number of employee help-requests to HR decreases.
    - If users are not accessing your News page to read recent articles, you may elect to stop spending as much time updating the page with new stories and cut costs in your communications department.
    - You notice that there is a high volume of users accessing the Employee Dashboard page, so your organization decides to continue making personalization enhancements to the page and investing in the Portal tool that most users are accessing.
    Usage analytics aren’t necessarily a new concept in the IT industry. What sets WebCenter Portal Analytics apart is:
    - Reports are tailored for WebCenter-specific tools
    - Reports can be added to a page as simply as a drag-and-drop
    Explicit Feedback Through Polls
    WebCenter Portal users can create, edit, take, and analyze online polls. With polls, you can survey your audience (such as their opinions and their experience level), check whether they can recall important information, and gather feedback and metrics. How many times have you been involved in a requirements discussion and someone has asked a question similar to “Well how do you know that no one likes our home page?” and the response is “Everyone says they hate it! That’s all anyone complains about.” No one has any measurable, quantifiable metric to gauge user satisfaction. Analytics measure usage, but your organization also needs to measure the quality of your portal as defined by the actual people that use it. With that information, your leadership can make informed decisions that will not only match usage patterns but also relate to employees on a personal level.
The end result is a connection between employees and leadership that gives everyone in the organization a sense of ownership of their Portal rather than the feeling of development decisions being segregated to leadership only. Polls can be created and edited through the Poll Manager: Polls and View Poll Results can easily be added to a page through drag-and-drop. What did we learn? Being a “connected” company doesn’t just mean helping employees connect with each other horizontally across your enterprise. It also means connecting those employees to the decisions that affect their everyday activities. Through WebCenter Portal Usage Analytics and Polls, any decision that is made to remove a Portal page, update a Portal page, or develop new Portal functionality, can be justified by quantifiable metrics. Instead of fielding complaints and hearing that your employees don’t have a voice, give those employees a voice and listen!

    Read the article

  • Upcoming Carbon Tax in South Africa

    - by Evelyn Neumayr
    By Elena Avesani, Principal Product Strategy Manager, Oracle
    In 2012, the South Africa National Treasury announced the plan to impose a carbon tax to cut carbon emissions that are blamed for climate change. South Africa is ranked among the top 20 countries measured by absolute carbon dioxide emissions, with emissions per capita in the region of 10 metric tons per annum and over 90% of South Africa's energy produced by burning fossil fuels. The top 40 largest companies in the country are responsible for 207 million tons of carbon dioxide, directly emitting 20 percent of South Africa’s carbon output. The legislation, originally scheduled to be implemented from January 2015 to 31 December 2019, is now delayed to January 2016. It will levy a carbon tax of R120 (US$11) per ton of CO2, rising then by 10 percent a year until 2020, while all sectors bar electricity will be able to claim additional relief of at least 10 percent. The South African treasury proposed a 60 percent tax-free threshold on emissions for all sectors, including electricity, petroleum, iron, steel and aluminum. Oracle Environmental Accounting and Reporting (EA&R) supports these needs and guarantees consistency across organizations in how data is collected, retained, controlled, consolidated and used in calculating and reporting emissions inventory. EA&R also enables companies to develop an enterprise-wide data view that includes all 5 of the key sustainability categories: carbon emissions, energy, water, materials and waste. Thanks to its native integration with Oracle E-Business Suite and JD Edwards EnterpriseOne ERP Financials and Inventory Systems and the capability of capturing environmental data across business silos, Oracle Environmental Accounting and Reporting is uniquely positioned to support a strategic approach to carbon management that drives business value.
    Sources: African Utility Week, BDlive

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all, I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shadowing in my WebGL application. I know that there are many more techniques (like using Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. So far, I have adapted a simple shadow mapping approach which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture): glitchy shadow mapping <<<
    So for now, this is how I understand the usage of cubemaps in shadow mapping:
    1. Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every use of framebufferTexture2D slows down execution, which is nicely described here <<<) and a cubemap texture. Also, depth components are not well supported in WebGL, so I need to render to RGBA first.

    this.texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture);
    gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    for (var face = 0; face < 6; face++)
        gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP, null);

    this.framebuffer = [];
    for (face = 0; face < 6; face++) {
        this.framebuffer[face] = gl.createFramebuffer();
        gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]);
        gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0);
        gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer);
        var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors
        if (e !== gl.FRAMEBUFFER_COMPLETE)
            throw "Cubemap framebuffer object is incomplete: " + e.toString();
    }

    2. Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix).
    3. Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z):

    for (var face = 0; face < 6; face++) {
        gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]);
        gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size);
        gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
        camera.lookAt( light.position.add( cubeMapDirections[face] ) );
        scene.draw(shadow.program);
    }

    4. In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. I don't know whether I should calculate 6 of them, one for each face of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 range.

    vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex;

    5. In the fragment shader, calculate the distance between the current vertex and the light position and check whether it is deeper than the depth information read from the earlier rendered shadow map. I know how to do it with a 2D texture, but I have no idea how I should use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
    For now, my code for this part looks like this (not working yet):

    float shadow = 1.0;
    vec3 depth = vDepthPosition.xyz / vDepthPosition.w;
    depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant;
    float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz));
    if (depth.z > shadowDepth)
        shadow = 0.5;

    Could you give me some hints or examples (preferably in WebGL code) of how I should build it?

    Read the article

  • Perfect End to a Bad Day

    - by TehGrumpyCoder
    Yesterday's post about A Bad Day at Work actually had an addendum to it. There were apparently a bunch of guys on ice skates last night competing in some sport way the hell and gone over on the other side of the valley, and enough people couldn't live without seeing them that they had all major arteries heading west honked. I mean honked... the traffic guy reported the 101 had 16 miles of backup... yikes. Since I worked downtown for a number of years, my fallback is to cut across the city on surface streets to get to one of my old 'haunts' and just drive it home from there. Of course with the 101 backed up, then I17 would logically be as well, so I kept the news on rather than my Zune and heard where the bad stuff was going North. I popped out on the freeway about 7 miles south of my exit. Got to the exit which is about a mile from the house without killing or maiming me or anyone else. Waited patiently at the light in the inside lane to make a left and go under the freeway proceeding West. The light changed, I had full green, I started through and whoa... I've got someone in a little rat car crossing my bow! A little explanation... I drive a 3/4 ton pickup with a V-10, extended cab and shell on the back. It's not jacked up, but it sits up pretty good and is longer than any parking place I've ever tried to put it into. I consider this truck to be the consolation prize for paying uninsured motorist coverage for 45 years and having Pilar Martinez totally destroy a 3/4 ton Silverado on March 1, 2007 by plowing into me at traffic speed while I was stopped at a light. If you pay for uninsured motorist coverage, ask your insurance agent *exactly* what that means... I bet it's different than what you think it means. But I digress, sorry... So here I am with a car that is shorter from top to road than the hood on my truck, and the driver thought it would be safe to run a red light and see if they could get past me before I got into the lane. The right side of my front bumper was almost into the driver's window when I hit the brakes and wheeled it left. Fortunately for all involved, I saw it soon enough, and pulled into the 2nd lane for making a left to go back South. I looked in my mirror, signalled a move, then moved over behind the yuck in the rat car. I then punched it, and the future hood ornament and I both made it through the next light. I pulled alongside to let her know that she was DEFINITELY Number 1 in my book, and it's a middle-age woman looking at me with a "sorry, it was an accident" show of pouty face and arms held up. Tough $hit lady... that may have worked when you were 18, but it's not working anymore, and it wasn't an accident... you ran a freakin' red light and almost got yourself killed. That just about put a bow on the day... I was home later than usual, pissed off about work stuff, pissed off at traffic, and now that. I ate dinner, watched a little TV, and was asleep about 9:30 exhausted. Hope today is better.

    Read the article

  • Successfully Deliver on State and Local Capital Projects through Project Portfolio Management

    - by Sylvie MacKenzie, PMP
    While the debate continues on Capitol Hill about which federal programs to cut and which to keep, communities and towns across America are feeling the budget crunch closer to home. State and local governments are trying to save as many projects as they can without promising too much to constituents – and they, in turn, want to know where their tax dollars are going. Fortunately, with the right planning and management, you can deliver successful projects and portfolios on a limited budget. Watch the replay of our recent webcast with Oracle Primavera and Industry Product Manager Garrett Harley that will demonstrate how state and local governments can get the most out of their capital projects and learn how two Oracle Primavera customers have implemented project portfolio management practices to:
    - Predict the cost of long-term capital programs and projects
    - Assess risk and mitigation strategies
    - Collaborate and track performance across government agencies
    Speakers:
    - Garrett Harley, Industry and Product Manager, Oracle Primavera
    - Cory Davis, Director of Capital Renovation and New Construction, Chicago Public Schools
    - Julie Owen, PSP™, CCC™, Sr. Project Controls Manager, LA Metro Transit Authority
    With the right planning and management, state and local governments can deliver successful projects on a limited budget.

    Read the article

  • iOS 5 New Features vs Android

    - by kerry
    Browsing through the iOS 5 features list, I can't help but notice that a lot of it is catch-up. Having owned both an iPhone and an Android for a considerable amount of time, I figured I would jot down my opinions.
    Notification Center – Completely ripped off from Android, but looks good and is a much needed addition.
    iMessage – This is very interesting, as most people who would think it's cool probably really wouldn't understand the significance. Basically, Apple is adding an IM application to iOS. Now iPhone / iPad users can sit around messaging each other how cool it is, like Crackberry users circa 2003. I guess the only real improvement over MMS is that you can easily set up groups, see when each other are typing, and don't incur text messaging charges; at the expense of leaving your non-iOS buddies out (who wants to talk to those losers anyways?).
    Newsstand – An app update and not an OS one (Apple typically doesn't make distinctions). It all seems like stuff my current Nook will do. Note: I did look to compare prices but it seems that information is not available without downloading iTunes. Lame.
    Reminders – TODO lists are ho hum, but the ability to have reminders when you arrive at or leave a position is pretty cool.
    Twitter integration – The fact that the best Apple can come up with is 'one at a time' online service integration is laughable at best.
    Camera – Can control it from the lock screen. Now you'll have tons of pocket-lint photos in your iCloud to go along with the wicked shot of that cheetah that just unexpectedly ran by your apartment.
    Photos – Speaking of iCloud, all of your devices' photos will be synced through it. That's cool I guess, not sure if Android will do the same.
    Safari – What? You haven't been reading RSS feeds on your device this whole time? Something tells me you aren't about to start.
    PC Free – Finally Apple untethers the iPhone. What took them so long?
    Game Center – This should be an interesting service. Attention Apple fanboys: immediately forget how they are blatantly copying Microsoft achievements (at least rename them).
    Wifi Sync – Just couldn't cut the cord completely, could they? For what it's worth, the Zune has been doing this for 5 years now.
    All in all a pretty big update. Mostly iCloud. Mostly keeping up the mobile status quo. As an Android user, I can't say there is anything I am envious of.

    Read the article

  • Dependency injection: How to sell it

    - by Mel
    Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it.
    Background
    Recently, our team got a big project that is to be built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me meant: maintainable and testable. So I wanted to use DI.
    Resistance
    The problem was that in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me.
    My Move
    This may sound weird, but third-party libraries are usually not approved by our architect team (think: "thou shalt not speak of Unity, Ninject, NHibernate, Moq or NUnit, lest I cut your finger"). So instead of using an established DI container, I wrote an extremely simple one. It basically wired up all your dependencies on startup, injected any dependencies (constructor/property) and disposed of any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it.
    The Response
    Well, to make it short, I was met with heavy resistance. The main argument was, "We don't need to add this layer of complexity to an already complex project". Also, "It's not like we will be plugging in different implementations of components". And "We want to keep it simple, if possible just stuff everything into one assembly. DI is an unneeded complexity with no benefit".
    Finally, My Question
    How would you handle my situation? I am not good at presenting my ideas, and I would like to know how people would present their argument. Of course, I am assuming that, like me, you prefer to use DI. If you don't agree, please do say why so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees.
    Update
    Thank you for everyone's answers. It really puts things into perspective. It's nice to have another set of eyes give you feedback, and fifteen is really awesome! These are really great answers and they helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top-voted one. Thanks everyone for taking the time to answer. I have decided that it is probably not the best time to implement DI, and we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to introduce automated unit testing. I am aware that writing tests is additional overhead, and if it is ever decided that the additional overhead is not worth it, I would personally still see it as a win, since the design is still testable. And if testing or DI is ever a choice in the future, the design can easily handle it.
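    For readers who have not seen one, here is a minimal sketch of the kind of hand-rolled constructor-injection container described above. It is an illustration only, not the container from the question: it omits property injection, lifetimes, and the per-request disposal mentioned, and the types in the usage comment are hypothetical.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class TinyContainer
    {
        private readonly Dictionary<Type, Type> registrations = new Dictionary<Type, Type>();

        // Map an abstraction to the concrete type that should satisfy it.
        public void Register<TService, TImplementation>() where TImplementation : TService
        {
            registrations[typeof(TService)] = typeof(TImplementation);
        }

        public TService Resolve<TService>()
        {
            return (TService)Resolve(typeof(TService));
        }

        // Build an instance, recursively resolving constructor parameters.
        private object Resolve(Type serviceType)
        {
            Type concrete;
            if (!registrations.TryGetValue(serviceType, out concrete))
                concrete = serviceType;

            // Pick the most demanding public constructor and resolve each of its parameters.
            var ctor = concrete.GetConstructors()
                               .OrderByDescending(c => c.GetParameters().Length)
                               .First();
            object[] args = ctor.GetParameters()
                                .Select(p => Resolve(p.ParameterType))
                                .ToArray();
            return ctor.Invoke(args);
        }
    }

    // Hypothetical usage:
    //   var container = new TinyContainer();
    //   container.Register<IOrderRepository, SqlOrderRepository>();
    //   container.Register<IOrderService, OrderService>();   // OrderService(IOrderRepository) constructor
    //   var service = container.Resolve<IOrderService>();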

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade. It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies. Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML. This usually starts with a “quick and dirty” approach because you need an XML document, and looks like this (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string):

        return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);

    In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead. Work’s done, right? No it’s not. You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values. If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;.

    Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers. This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same. This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of. The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut. Sooner or later you end up writing your own somewhat functional XML engine.

    I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine. My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine.

    So how can I generate XML that is always well-formed without writing my own engine? Easy – use one of the ones provided to you for free! If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).

    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }

    Looking at that code, it’s easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

        return new XElement("Contact", new XElement("BusinessName", contact.BusinessName)).ToString();

    While it is very common for people to use string manipulation to create XML, I’ve discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative. I’ve given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.
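    To make the escaping point concrete, here is a small, self-contained sketch (the Contact class is a stand-in invented for this example) showing that XElement encodes the troublesome ampersand automatically:

        using System;
        using System.Xml.Linq;

        class Contact
        {
            public string BusinessName { get; set; }
        }

        class Program
        {
            static void Main()
            {
                // A value that breaks naive string concatenation because '&' is special in XML.
                var contact = new Contact { BusinessName = "Sanford & Son" };

                // XElement handles the encoding, so the output contains &amp; rather than a bare ampersand.
                var xml = new XElement("Contact",
                    new XElement("BusinessName", contact.BusinessName)).ToString();

                Console.WriteLine(xml);
                // <Contact>
                //   <BusinessName>Sanford &amp; Son</BusinessName>
                // </Contact>
            }
        }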

    Read the article

  • A programmer who doesn't get to program - where to turn? [closed]

    - by Just an Anon
    I'm in my mid-20s and have been working as a full-time programmer / developer for the last ~6 years, with several years of part-time freelancing before that and three straight years of freelancing in the middle of this short career. I work mostly with PHP and the Drupal framework. By and large, I focus on programming custom pieces of functionality; these, of course, vary greatly from project to project. I've got years of solid experience with OOP (I did some Java & C# years ago, too), including intensive experience with front-end development and even some design work. I've led small teams (2-4 people) of developers. And of course, given the large amount of freelancing, I've got decent project- & client-management skills.

    My problem is staying motivated at any place of employment. In the time mentioned I've worked (full-time) at six local companies. The longest I've stayed at any company was just over a year. I find that I'll get hired and be very excited and motivated for the first few months, but the work quickly gets "stale." By that I mean that the interesting components (i.e. the programming) get done, and the rest of the work turns into boring cleanup (move a button, add text, change colours, add a field). I don't get challenged, and I don't feel like I'm learning anything new. This happens time and time again, and I always end up leaving for either a new opportunity or to freelance. I'm wondering if perhaps I've painted myself into a corner with this rather niche work market (although one with very high demand and good compensation) and need to explore other career choices. Another possibility is that I may be choosing the wrong places of employment, mostly small agencies, and need to look into working for a larger, more established firm.

    I find programming, writing code, and architecting solutions very rewarding. When I'm working on an interesting problem I lose all sense of time, and 14-16 hours can fly by like minutes. I get the same exciting feeling when I'm doing high-level planning of a complex system, breaking up the work and figuring out how everything will tie together. I absolutely hate doing small, "stupid" changes that pose no challenge, yet they seem to make up more and more of my work. I want to find a workplace where I will get to work on interesting tasks, be challenged, and improve in all areas of product development. This may be a programming job, management, architecture of desktop apps, or maybe managing a taco stand on a beach in Mexico - I don't know, and I need some advice and real-world feedback.

    What are some job areas worth exploring? The requirements are fairly simple:
    - working with computers
    - interacting with others
    - challenging
    - decent pay (I'm making just short of 90k / year with a month of vacation & some benefits, and would like to stay in this range, but am willing to take a temporary cut in pay for a more interesting position)

    Any advice would be much appreciated!

    Read the article

  • Undefined control sequence

    - by Jelle Fresen
    Hi, I am writing my Master's thesis in LaTeX, but I can't get the provided style to work. Specifically, I get the error 'Undefined control sequence' when using the command makeformaltitlepages, which is defined in mscthesis.sty. On the internet, the only answers I could find were the straightforward 'you probably made a typo' or 'you probably forgot to include the package', but I have reason to believe neither of those applies to me. I am quite sure that the command exists, for when I add a little verification using the @ifundefined command, the log file shows that the command actually does exist. And, as can be seen in the following piece of code, I also include the package:

        \usepackage{mscthesis}
        % setup information like author, company, title, etc.
        \begin{document}
        \formatmatter
        \thispagestyle{empty}
        \maketitle
        \makeatletter
        \@ifundefined{makeformaltitlepages}{\message{Function is not defined.}}{\message{Function is defined.}}
        \makeatother
        \makeformaltitlepages{\input{abstract}}
        % add chapters, sections, etc. and end the document

    Now, the output shows the line "Function is defined." just before the output of \maketitle (which I think is rather strange on its own, but that might be a flushing issue), followed by the following infinitely repeated error (well, cut off after 100 times by LaTeX):

        Function is defined.
        // some gibberish about font info
        ! Undefined control sequence.
        \GenericError ...
        #4 \errhelp \@err@ ...
        l.112 \makeformaltitlepages{}
        The control sequence at the end of the top line of your error message was never \def'ed. If you have misspelled it (e.g., `\hobx'), type `I' and the correct spelling (e.g., `I\hbox'). Otherwise just continue, and I'll forget about whatever was undefined.

    While the error keeps repeating, the line that starts with '#4' cycles between the following four lines:

        #4 \errhelp \@err@ ...
        \let \@err@ ...
        \@empty \def \MessageBreak...
        \endgroup

    OK, so, do any of you have a suggestion of how I might continue to hunt this bug? Or what blatantly obvious mistake did I make?

    Read the article

  • CKEditor inside jQuery Dialog, how do I build it?

    - by Ben Dauphinee
    So, I'm working with CKEditor and jQuery, trying to build a pop-out editor. Below is what I have coded so far, and I can't seem to get it working the way I want it to. Basically: click the 'Edit' link, a dialog box pops up, with the content to edit loaded into the CKEditor. Also, not required, but helpful if you can suggest how to do it: I can't seem to find out how to make the save button work in CKEditor (though I think the form will do it). Thanks in advance for any help.

        $(document).ready(function(){
            var config = new Array();
            config.height = "350px";
            config.resize_enabled = false;
            config.tabSpaces = 4;
            config.toolbarCanCollapse = false;
            config.width = "700px";
            config.toolbar_Full = [["Save","-","Cut","Copy","Paste","-","Undo","Redo","-","Bold","Italic","-","NumberedList","BulletedList","-","Link","Unlink","-","Image","Table"]];

            $("a.opener").click(function(){
                var editid = $(this).attr("href");
                var editwin = '<form><div id="header"><input type="text"></div><div id="content"><textarea id="content"></textarea></div></form>';

                var $dialog = $("<div>"+editwin+"</div>").dialog({
                    autoOpen: false,
                    title: "Editor",
                    height: 360,
                    width: 710,
                    buttons: {
                        "Ok": function(){
                            var data = $(this).val();
                        }
                    }
                });
                //$(this).dialog("close");

                $.getJSON("ajax/" + editid, function(data){
                    alert("datagrab");
                    $dialog.("textarea#content").html(data.content).ckeditor(config);
                    alert("winset");
                    $dialog.dialog("open");
                });
                return false;
            });
        });

    Read the article

  • OCR with Neural network: data extraction

    - by Sebastian Hoitz
    I'm using the AForge library framework and its neural network. At the moment, when I train my network, I create lots of images (one image per letter per font) at a big size (30 pt), cut out the actual letter, scale it down to a smaller size (10x10 px) and then save it to my hard disk. I can then go and read all those images, creating my double[] arrays with data. At the moment I do this on a pixel basis. Once I have successfully trained my network, I test it by letting it run on a sample image with the alphabet at different sizes (uppercase and lowercase). But the result is not really promising. I trained the network so that RunEpoch had an error of about 1.5 (so almost no error), but there are still some letters that do not get identified correctly in my test image. Now my question is: Is this caused by a faulty learning method (pixel-based vs. the suggested use of receptors in this article: http://www.codeproject.com/KB/cs/neural_network_ocr.aspx - are there other methods I can use to extract the data for the network?), or can this happen because my segmentation algorithm for extracting the letters from the image is bad? Does anyone have ideas on how to improve it?
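    For reference, a pixel-based extraction of the kind described in the question might look like the following C# sketch. It uses System.Drawing for the scaling, and the normalization range is my own assumption; adjust both to match the activation function your AForge network actually uses.

        using System.Drawing;

        static class LetterFeatures
        {
            // Rough sketch of the pixel-based approach: scale the glyph to 10x10 and map
            // each pixel's brightness to roughly [-0.5, 0.5]. The exact range is an
            // assumption; sigmoid-style activations usually want a small symmetric range.
            public static double[] FromImage(Bitmap letter)
            {
                const int size = 10;
                var features = new double[size * size];

                using (var scaled = new Bitmap(letter, new Size(size, size)))
                {
                    for (int y = 0; y < size; y++)
                    {
                        for (int x = 0; x < size; x++)
                        {
                            Color c = scaled.GetPixel(x, y);
                            double brightness = (c.R + c.G + c.B) / (3.0 * 255.0); // 0 = black, 1 = white
                            features[y * size + x] = 0.5 - brightness;             // dark ink -> positive signal
                        }
                    }
                }
                return features;
            }
        }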

    Read the article

  • Using PHP substr() and strip_tags() while retaining formatting and without breaking HTML

    - by Peter
    I have various HTML strings to cut to 100 characters (of the stripped content, not the original) without stripping tags and without breaking the HTML.

    Original HTML string (288 characters):

        $content = "<div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div over <div class='nestedDivClass'>there</div> </div> and a lot of other nested <strong><em>texts</em> and tags in the air <span>everywhere</span>, it's a HTML taggy kind of day.</strong></div>";

    When trimming to 100 characters, the HTML breaks and the stripped content comes to about 40 characters:

        $content = substr($content, 0, 100)."...";
        /* output: <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div ove... */

    Stripping HTML gives the correct character count but obviously loses formatting:

        $content = substr(strip_tags($content), 0, 100)."...";
        /* output: With a span over here and a nested div over there and a lot of other nested texts and tags in the ai... */

    Challenge: To output the character count of strip_tags while retaining HTML formatting and, when closing the string, finish any started tags:

        /* <div>With a <span class='spanClass'>span over here</span> and a <div class='divClass'>nested div over <div class='nestedDivClass'>there</div> </div> and a lot of other nested <strong><em>texts</em> and tags in the ai</strong></div>... */

    Similar question (less strict on the solutions provided so far)
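    The general technique the challenge calls for (walk the markup, count only visible text toward the limit, and close whatever tags are still open) is language-agnostic. The following is a rough C# sketch of that idea, not a drop-in PHP answer; it assumes reasonably well-formed input and deliberately ignores entities, comments, and void elements such as <br>.

        using System.Collections.Generic;
        using System.Text;
        using System.Text.RegularExpressions;

        static class HtmlTruncator
        {
            // Copy markup through unchanged, let only text characters count toward the
            // limit, and close any tags that are still open when the limit is reached.
            public static string Truncate(string html, int maxTextChars)
            {
                var output = new StringBuilder();
                var openTags = new Stack<string>();
                int textCount = 0;
                int i = 0;

                while (i < html.Length && textCount < maxTextChars)
                {
                    if (html[i] == '<')
                    {
                        int end = html.IndexOf('>', i);
                        if (end < 0) break;                          // malformed tail; stop here
                        string tag = html.Substring(i, end - i + 1);
                        var m = Regex.Match(tag, @"^</?\s*([a-zA-Z0-9]+)");
                        if (m.Success)
                        {
                            string name = m.Groups[1].Value.ToLowerInvariant();
                            if (tag.StartsWith("</")) { if (openTags.Count > 0) openTags.Pop(); }
                            else if (!tag.EndsWith("/>")) openTags.Push(name);
                        }
                        output.Append(tag);                           // tags never count toward the limit
                        i = end + 1;
                    }
                    else
                    {
                        output.Append(html[i]);                       // plain text counts toward the limit
                        textCount++;
                        i++;
                    }
                }

                if (textCount >= maxTextChars) output.Append("...");
                while (openTags.Count > 0) output.Append("</" + openTags.Pop() + ">");
                return output.ToString();
            }
        }

    Note the simplifications: the stack does not check that a closing tag matches the element it pops, and an entity such as &amp; would be counted character-by-character, so a production version would need a real HTML parser or tidy pass.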

    Read the article

  • sudo port install arm-elf-gcc3 fails with "No defined site for tag: gcc…"

    - by Scott Bayes
    I'm trying to get the ARM plugin for Eclipse (http://sourceforge.net/projects/gnuarmeclipse/) going on an iMac i7, OS 10.6.3, Xcode 3.2.2 (I don't want to upgrade during my project). The plugin needs (IIRC) arm-elf-gcc3, which needs darwinports for an "easy" install. Of course, probably due to leftovers when I moved from my old MacBook to the iMac, darwinports 1.8.2 wouldn't install till I built 1.7.1 from source and installed it. darwinports 1.8.1 appears to have been properly installed, but sudo port install arm-elf-gcc3 led to 5-10 minutes of dependency installs, then the following, produced with port -d (starting from the last dependency completion for brevity):

        DEBUG: Found Dependency: receipt exists for gettext
        DEBUG: Executing org.macports.main (arm-elf-gcc3)
        --- Fetching arm-elf-gcc3
        DEBUG: Executing org.macports.fetch (arm-elf-gcc3)
        --- gcc-3.4.6.tar.bz2 doesn't seem to exist in /opt/local/var/macports/distfiles/gcc
        Error: No defined site for tag: gcc, using master_sites
        Error: Target org.macports.fetch returned: can't read "host": no such variable
        DEBUG: Backtrace: can't read "host": no such variable
            while executing
        "info exists seen($host)"
            (procedure "sortsites" line 25)
            invoked from within
        "sortsites fetch_urls"
            (procedure "portfetch::fetchfiles" line 49)
            invoked from within
        "portfetch::fetchfiles"
            (procedure "portfetch::fetch_main" line 16)
            invoked from within
        "$procedure $targetname"
        Warning: the following items did not execute (for arm-elf-gcc3):
        org.macports.activate org.macports.fetch org.macports.extract org.macports.checksum org.macports.patch org.macports.configure org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.

    (Sorry if that's a mess; neither blockquote nor code sample tags seem to properly display cut/pasted text from Terminal.app in the preview window.)

    Can anyone advise me on how to get around this (or how to build/install arm-elf-gcc3 from source if necessary)? None of the darwinports FAQs or forums mention arm-elf-gcc3 anywhere that I saw.

    Read the article

  • Development Environment in a VM against an isolated development/test network

    - by bart
    I currently work in an organization that forces all software development to be done inside a VM. This is for a variety of risk/governance/security/compliance reasons. The standard setup is something like:
    - VMWare image given to devs with tools installed
    - VM is customized to suit project/stream needs
    - VM sits in a network & domain that is isolated from the live/production network
    - SCM connectivity is only possible through the dev/test network
    - Email and office tools need to be on the live network, so this means having two separate desktops going at once
    - Heavyweight dev tools in use on the VMs, so they are very resource hungry

    Some problems that people complain about are:
    - The development environment runs slower than normal (the host OS is Windows XP, so memory is limited)
    - Switching between the DEV machine and the Email/Office machine is a pain; simple things like cut and paste are made harder. This is less efficient from a usability perspective. The mouse in particular doesn't seem to work properly using VMWare Player or RDP.
    - Need a separate login to the Dev/Test network/domain

    Has anyone seen or worked in other (hopefully better) setups that have similar constraints (as mentioned at the top)? In particular, are there viable options that would remove the need for running stuff in a VM altogether?

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >