Search Results

Search found 21331 results on 854 pages for 'require once'.


  • SQL SERVER – Advanced Data Quality Services with Melissa Data – Azure Data Market

    - by pinaldave
    There has been much fanfare over the new SQL Server 2012, and especially around its new companion product Data Quality Services (DQS). Among the many new features is the addition of this integrated knowledge-driven product that enables data stewards everywhere to profile, match, and cleanse data. In addition to the homegrown rules that data stewards can design and implement, there are also connectors to third party providers that are hosted in the Azure Datamarket marketplace.  In this review, I leverage SQL Server 2012 Data Quality Services, and proceed to subscribe to a third party data cleansing product through the Datamarket to showcase this unique capability. Crucial Questions For the purposes of the review, I used a database I had in an Excel spreadsheet with name and address information. Upon a cursory inspection, there are miscellaneous problems with these records; some addresses are missing ZIP codes, others missing a city, and some records are slightly misspelled or have unparsed suites. With DQS, I can easily add a knowledge base to help standardize my values, such as for state abbreviations. But how do I know that my address is correct? And if my address is not correct, what should it be corrected to? The answer lies in a third party knowledge base by the acknowledged USPS certified address accuracy experts at Melissa Data. Reference Data Services Within DQS there is a handy feature to actually add reference data from many different third-party Reference Data Services (RDS) vendors. DQS simplifies the processes of cleansing, standardizing, and enriching data through custom rules and through service providers from the Azure Datamarket. A quick jump over to the Datamarket site shows me that there are a handful of providers that offer data directly through Data Quality Services. Upon subscribing to these services, one can attach a DQS domain or composite domain (fields in a record) to a reference data service provider, and begin using it to cleanse, standardize, and enrich that data. Besides what I am looking for (address correction and enrichment), it is possible to subscribe to a host of other services including geocoding, IP address reference, phone checking and enrichment, as well as name parsing, standardization, and genderization.  These capabilities extend the data quality that DQS has natively by quite a bit. For my current address correction review, I needed to first sign up to a reference data provider on the Azure Data Market site. For this example, I used Melissa Data’s Address Check Service. They offer free one-month trials, so if you wish to follow along, or need to add address quality to your own data, I encourage you to sign up with them. Once I subscribed to the desired Reference Data Provider, I navigated my browser to the Account Keys within My Account to view the generated account key, which I then inserted into the DQS Client – Configuration under the Administration area. Step by Step to Guide That was all it took to hook in the subscribed provider -Melissa Data- directly to my DQS Client. The next step was for me to attach and map in my Reference Data from the newly acquired reference data provider, to a domain in my knowledge base. On the DQS Client home screen, I selected “New Knowledge Base” under Knowledge Base Management on the left-hand side of the home screen. Under New Knowledge Base, I typed a Name and description of my new knowledge base, then proceeded to the Domain Management screen. 
Here I established a series of domains (fields) and then linked them all together as a composite domain (record set). Using the Create Domain button, I created the following domains according to the fields in my incoming data: Name Address Suite City State Zip I added a Suite column in my domain because Melissa Data has the ability to return missing Suites based on last name or company. And that’s a great benefit of using these third party providers, as they have data that the data steward would not normally have access to. The bottom line is, with these third party data providers, I can actually improve my data. Next, I created a composite domain (fulladdress) and added the (field) domains into the composite domain. This essentially groups our address fields together in a record to facilitate the full address cleansing they perform. I then selected my newly created composite domain and under the Reference Data tab, added my third party reference data provider –Melissa Data’s Address Check- and mapped in each domain that I had to the provider’s Schema. Now that my composite domain has been married to the Reference Data service, I can take the newly published knowledge base and create a project to cleanse and enrich my data. My next task was to create a new Data Quality project, mapping in my data source and matching it to the appropriate domain column, and then kick off the verification process. It took just a few minutes with some progress indicators indicating that it was working. When the process concluded, there was a helpful set of tabs that place the response records into categories: suggested; new; invalid; corrected (automatically); and correct. Accepting the suggestions provided by  Melissa Data allowed me to clean up all the records and flag the invalid ones. It is very apparent that DQS makes address data quality simplistic for any IT professional. Final Note As I have shown, DQS makes data quality very easy. Within minutes I was able to set up a data cleansing and enrichment routine within my data quality project, and ensure that my address data was clean, verified, and standardized against real reference data. As reviewed here, it’s easy to see how both SQL Server 2012 and DQS work to take what used to require a highly skilled developer, and empower an average business or database person to consume external services and clean data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology Tagged: DQS

    Read the article

  • Anyone had any experience with *.pcap manipulation libs?

    - by zxcvbnm
    I'm using the SharpPcap + PacketDotNet libraries to process some .pcap files and came across a bug in the way the timestamps are calculated. Take this Timeval property, which is something along these lines:

        PosixTimeval Timeval
        {
            DateTime Date;
            ulong Seconds;
            ulong MicroSeconds;
        }

    The problem is as follows: suppose you have a trace open in Wireshark in which one of the packets has a timestamp of "0.002". Once you open it within one of your programs, it retrieves the packet and its Timeval is set up such that Seconds = 0 and MicroSeconds = 002 = 2. This is done under the hood, so there is no way to avoid it as far as I can tell. My question is whether that problem is common to other libraries (and maybe all of them?) that manipulate the pcap file format, which I think are built around the same collection of C/C++ functions, or whether this is a problem only with the ones I'm using.
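    For reference, here is a minimal C# sketch (standalone, with made-up names; it does not use the SharpPcap/PacketDotNet API) of the conversion the question seems to expect: the fractional part of a capture timestamp should be scaled to microseconds, so "0.002" seconds becomes 2000 µs rather than 2.

        using System;

        class TimestampSketch
        {
            // Split a seconds-with-fraction timestamp (as displayed by Wireshark)
            // into whole seconds and microseconds.
            static (ulong Seconds, ulong MicroSeconds) Split(decimal timestampSeconds)
            {
                ulong seconds = (ulong)decimal.Truncate(timestampSeconds);
                // Scale the fractional part to microseconds instead of keeping its raw digits.
                ulong micros = (ulong)((timestampSeconds - decimal.Truncate(timestampSeconds)) * 1000000m);
                return (seconds, micros);
            }

            static void Main()
            {
                var t = Split(0.002m);
                Console.WriteLine($"{t.Seconds} s, {t.MicroSeconds} us"); // prints: 0 s, 2000 us
            }
        }

    If a pcap library reports MicroSeconds = 2 for this packet, it is effectively keeping the raw fractional digits rather than a microsecond count, which would match the behaviour described in the question.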

    Read the article

  • Chaining multiple MapReduce jobs in Hadoop.

    - by Niels Basjes
    In many real-life situations where you apply MapReduce, the final algorithms end up being several MapReduce steps. I.e. Map1 , Reduce1 , Map2 , Reduce2 , etc. So you have the output from the last reduce that is needed as the input for the next map. The intermediate data is something you (in general) do not want to keep once the pipeline has been successfully completed. Also because this intermediate data is in general some data structure (like a 'map' or a 'set') you don't want to put too much effort in writing and reading these key-value pairs. What is the recommended way of doing that in Hadoop? Is there a (simple) example that shows how to handle this intermediate data in the correct way, including the cleanup afterward? Thanks.

    Read the article

  • Comments on Comments

    - by Joe Mayo
    I almost tweeted a reply to Capar Kleijne's question about comments on Twitter, but realized that my opinion exceeded 140 characters. The following is based upon my experience with extremes and approaches that I find useful in code comments. There are a couple extremes that I've seen and reasons why people go the distance in each approach. The most common extreme is no comments in the code at all.  A few bad reasons why this happens is because a developer is in a hurry, sloppy, or is interested in job preservation. The unfortunate result is that the code is difficult to understand and hard to maintain. The drawbacks to no comments in code are a primary reason why teachers drill the need for commenting code into our heads.  This viewpoint assumes the lack of comments are bad because the code is bad, but there is another reason for not commenting that is gaining more popularity. I've heard/and read that code should be self documenting. Following this thought pattern, if code is well written with meaningful names, there should not be a reason for comments.  An addendum to this argument is that comments are often neglected and get out-of-date, but the code is what is kept up-to-date. Presumably, if code contained very good naming, it would be easy to maintain.  This is a noble perspective and I like the practice of meaningful naming of identifiers. However, I think it's also an extreme approach that doesn't cover important cases.  i.e. If an identifier is named badly (subjective differences in opinion) or not changed appropriately during maintenance, then the badly named identifier is no more useful than a stale comment. These were the two no-comment extremes, so let's look at the too many comments extreme. On a regular basis, I'll see cases where the code is over-commented; not nearly as often as the no-comment scenarios, but still prevalent.  These are examples of where every single line in the code is commented.  These comments make the code harder to read because they get in the way of the algorithm.  In most cases, the comments parrot what each line of code does.  If a developer understands the language, then most statements are immediately intuitive.  i.e. what use is it to say that I'm assigning foo to bar when it's clear what the code is doing. I think that over-commenting code is a waste of time that slows down initial development and maintenance.  Understandably, the developer's intentions are admirable because they've had it beaten into their heads that they must comment. However, I think it's an extreme and prefer a more moderate approach. I don't think the extremes do justice to code because each can make maintenance harder.  No comments on bad code is obviously a problem, but the other two extremes are subtle and require qualification to address properly. The problem I see with the code-as-documentation approach is that it doesn't lift the developer out of the algorithm to identify dependencies, intentions, and hacks. Any developer can read code and follow an algorithm, but they still need to know where it fits into the big picture of the application. Because of indirections with language features like interfaces, delegates, and virtual members, code can become complex.  Occasionally, it's useful to point out a nuance or reason why a piece of code is there. i.e. 
If you're building an app that communicates via HTTP, you'll have certain headers to include for the endpoint, and it could be useful to point out why the code for setting those header values is there and how they affect the application. An argument against this could be that you should extract that code into a separate method with a meaningful name to describe the scenario.  My problem with such an approach would be that your code base becomes even more difficult to navigate and work with because you have all of this extra code just to make the code more meaningful. My opinion is that a simple and well-stated comment stating the reasons and intention for the code is more natural and convenient for the initial developer and maintainer.  I just don't agree with the approach of going out of the way to avoid making a comment.  I'm also concerned that some developers would take this approach as an excuse not to comment their bad code. Another area where I like comments is documentation comments.  Java has them, and so do C# and VB.  They're convenient because we can build automated tools that extract these comments.  These extracted comments are often much better than no documentation at all.  The "go read the code" answer doesn't always fulfill the need for a quick summary of an API. To summarize, I think that the extremes of no comments and too many comments are less than desirable approaches. I prefer documentation comments to explain each class and member (API level) and code comments as necessary to supplement well-written code. Joe
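As a small illustration of the documentation comments mentioned above, here is what an XML doc comment looks like in C# (a generic, made-up example rather than anything from a real code base); tools can extract the summary, parameter, and return descriptions to build API documentation:

    public class OrderCalculator
    {
        /// <summary>
        /// Calculates the total price of an order, including tax.
        /// </summary>
        /// <param name="subtotal">The pre-tax order amount.</param>
        /// <param name="taxRate">The tax rate as a fraction, e.g. 0.08 for 8%.</param>
        /// <returns>The order total with tax applied.</returns>
        public decimal CalculateTotal(decimal subtotal, decimal taxRate)
        {
            return subtotal * (1 + taxRate);
        }
    }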

    Read the article

  • Django admin site: populate combo box based on input

    - by user292652
    Hi, I have the following model:

        class Match(models.Model):
            Team_one = models.ForeignKey('Team', related_name='Team_one')
            Team_two = models.ForeignKey('Team', related_name='Team_two')
            Stadium = models.CharField(max_length=255, blank=True)
            Start_time = models.DateTimeField(auto_now_add=False, auto_now=False, blank=True, null=True)
            Rafree = models.CharField(max_length=255, blank=True)
            Judge = models.CharField(max_length=255, blank=True)
            Winner = models.ForeignKey('Team', related_name='winner', blank=True)
            updated = models.DateTimeField('update date', auto_now=True)
            created = models.DateTimeField('creation date', auto_now_add=True)

            def save(self, force_insert=False, force_update=False):
                pass

            @models.permalink
            def get_absolute_url(self):
                return ('view_or_url_name')

        class MatchAdmin(admin.ModelAdmin):
            list_display = ('Team_one', 'Team_two', 'Winner')
            search_fields = ['Team_one', 'Team_tow']

        admin.site.register(Match, MatchAdmin)

    I was wondering: is there a way to populate the Winner combo box once Team one and Team two are selected in the admin site?

    Read the article

  • What is an Efficient algorithm to find Area of Overlapping Rectangles

    - by namenlos
    My situation
    Input:
    - a set of rectangles; each rect is comprised of 4 doubles like this: (x0,y0,x1,y1)
    - they are not "rotated" at any angle; they are all "normal" rectangles that go "up/down" and "left/right" with respect to the screen
    - they are randomly placed - they may be touching at the edges, overlapping, or not have any contact
    - I will have several hundred rectangles
    - this is implemented in C#
    I need to find:
    - the area that is formed by their overlap - all the area in the canvas that more than one rectangle "covers" (for example, with two rectangles it would be the intersection)
    - I don't need the geometry of the overlap - just the area (example: 4 sq inches)
    - overlaps shouldn't be counted multiple times - so for example imagine 3 rects that have the same size and position - they are right on top of each other - this area should be counted once (not three times)
    Example
    The image below contains three rectangles: A, B, C. A and B overlap (as indicated by dashes), and B and C overlap (as indicated by dashes). What I am looking for is the area where the dashes are shown -

    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    AAAAAAAAAAAAAAAA--------------BBB
    AAAAAAAAAAAAAAAA--------------BBB
    AAAAAAAAAAAAAAAA--------------BBB
    AAAAAAAAAAAAAAAA--------------BBB
                    BBBBBBBBBBBBBBBBB
                    BBBBBBBBBBBBBBBBB
                    BBBBBBBBBBBBBBBBB
                    BBBBBB-----------CCCCCCCC
                    BBBBBB-----------CCCCCCCC
                    BBBBBB-----------CCCCCCCC
                          CCCCCCCCCCCCCCCCCCC
                          CCCCCCCCCCCCCCCCCCC
                          CCCCCCCCCCCCCCCCCCC
                          CCCCCCCCCCCCCCCCCCC
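    As a starting point, here is a rough C# sketch (illustrative only; all names are my own) that uses coordinate compression: collect the distinct x and y edges, probe the midpoint of each resulting grid cell to count how many rectangles cover it, and sum the area of cells covered by two or more.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        struct Rect
        {
            public double X0, Y0, X1, Y1;   // assumes X0 < X1 and Y0 < Y1
            public Rect(double x0, double y0, double x1, double y1)
            {
                X0 = x0; Y0 = y0; X1 = x1; Y1 = y1;
            }
        }

        static class OverlapArea
        {
            // Total area covered by at least two rectangles, counted once.
            static double MultiCoveredArea(IList<Rect> rects)
            {
                // Coordinate compression: every rectangle edge becomes a grid line.
                double[] xs = rects.SelectMany(r => new[] { r.X0, r.X1 }).Distinct().OrderBy(v => v).ToArray();
                double[] ys = rects.SelectMany(r => new[] { r.Y0, r.Y1 }).Distinct().OrderBy(v => v).ToArray();

                double area = 0.0;
                for (int i = 0; i < xs.Length - 1; i++)
                {
                    for (int j = 0; j < ys.Length - 1; j++)
                    {
                        // Coverage is uniform inside each cell, so probing the midpoint is enough.
                        double cx = (xs[i] + xs[i + 1]) / 2;
                        double cy = (ys[j] + ys[j + 1]) / 2;
                        int covering = rects.Count(r => cx > r.X0 && cx < r.X1 && cy > r.Y0 && cy < r.Y1);
                        if (covering >= 2)
                            area += (xs[i + 1] - xs[i]) * (ys[j + 1] - ys[j]);
                    }
                }
                return area;
            }

            static void Main()
            {
                var rects = new List<Rect>
                {
                    new Rect(0, 0, 4, 4),
                    new Rect(2, 2, 6, 6),
                };
                Console.WriteLine(MultiCoveredArea(rects)); // 4 (the 2x2 overlapped square)
            }
        }

    For several hundred rectangles this is roughly O(n^3) in the worst case, which may already be fast enough; if not, the same cell-counting idea can be folded into a sweep line over the x edges with an interval count along y.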

    Read the article

  • Java web start - Unable to load resource

    - by CD
    Hi all, I've got a jar that loads great with Java Web Start when I browse via the IP address of the server. Once I try the server name instead, I get the following exception:

        com.sun.deploy.net.FailedDownloadException: Unable to load resource:
            at com.sun.deploy.net.DownloadEngine.actionDownload(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.getCacheEntry(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.getCacheEntry(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.getResourceCacheEntry(Unknown Source)
            at com.sun.deploy.net.DownloadEngine.getResource(Unknown Source)
            at com.sun.javaws.LaunchDownload$DownloadTask.call(Unknown Source)
            at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
            at java.util.concurrent.FutureTask.run(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
            at java.lang.Thread.run(Unknown Source)

    Any idea what I should look for?

    Read the article

  • Shared Cookies between WebView and HTTPClient?

    - by Arpit
    An Android app I am building requires web authentication for users to make data calls. In Adobe AIR, and later on the iPhone, we did this by rendering a login page in a webview-equivalent page and setting a cookie when the user signs in. Subsequent data calls use the same cookie jar and so are seen as authenticated. In the Android version, I authenticate the user using a WebView and then, once that's done, I make a data call using DefaultHttpClient; however, I can't seem to load the data on the second call. Is there some cookie gotcha I am missing? I had imagined the HttpClient and WebView would share the same cookie space. Am I wrong?

    Read the article

  • Delay rendering or force re-rendering in Flex

    - by Tereno
    Hi there, In my Flex application, using a custom control, I am making a JSON request to grab some data from the server. My rendering depends on this data such as knowing how many boxes to draw. How can I either force the rendering to wait until I've got the data before drawing to screen or have the boxes draw once we receive the data? I have an event listener for Event.COMPLETE for my JSON request and in there, I call methods that add to the control. I've tried invalidateDisplayList but that doesn't seem to do anything for me?

    Read the article

  • iPhone: Using static library in an application crashes the device but not the iphone simulator

    - by spin-docta
    I have a library I made, and now I want to utilize it in an application. I believe I've properly linked to the library. Here are all the things I've done:
    - Set the header search path
    - Set other linker flags to "-ObjC"
    - Added the static library Xcode project
    - Made sure the lib.a was listed as a framework target
    - Added the library as a direct dependency
    Like I said in the title, I've successfully run the app with the static library in the simulator. Once I try testing the app on the device, it crashes the second it has to use a function from the library:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** +[NSDate firstOfCurrentMonth]: unrecognized selector sent to class 0x3841bb44'
        2009-10-10 12:45:31.159 Basement[2372:207] Stack:

    Read the article

  • Unix [Homework]: Get a list of /home/user/ directories in /etc/passwd

    - by KChaloux
    I'm very new to Unix, and currently taking a class learning the basics of the system and its commands. I'm looking for a single command line to list all of the user home directories, in alphabetical order, from the /etc/passwd file. This applies only to the home directories, and not the contents within them. There should be no duplicate entries. I've tried many permutations of commands such as the following:

        sort -d | find /etc/passwd /home/* -type -d | uniq | less

    I've tried using -path, -name, removing -type, using -prune, and changing the search pattern to things like /home/*/$, but haven't gotten good results even once. At best I can get a list of my own directory (complete with every directory inside it, which is bad), and the directories of the other students on the server (without the contained directories, which is good). I just can't get it to display the /home/user directories and nothing else for my own account. Many thanks in advance.

    Read the article

  • Device Driver IRQL and Thread/Context Switches

    - by Christian Hoglund
    Hi, I'm new to Windows device driver programming. I know that certain operations can only be performed at IRQL PASSIVE_LEVEL. For example, Microsoft have this sample code for writing to a file from a kernel driver:

        if (KeGetCurrentIrql() != PASSIVE_LEVEL)
            return STATUS_INVALID_DEVICE_STATE;

        Status = ZwCreateFile(...);

    My question is this: what is preventing the IRQL from being raised after the KeGetCurrentIrql() check above? Say a context or thread switch occurs, couldn't the IRQL suddenly be DISPATCH_LEVEL when execution gets back to my driver, which would then result in a system crash? If this is NOT possible, then why not just check the IRQL in the DriverEntry function and be done with it once and for all?

    Read the article

  • Calculate directed broadcast address given IP address and subnet in PowerShell

    - by halr9000
    My goal is to calculate the directed broadcast address when given the IP and subnet mask of a host node. I know, sounds like homework. Once I reasoned through my task and boiled it down to this, I was amused with myself. Anyway, the solution will look something like the one in this question I suppose, but I'm not a math major and my C sucks. I could do with a PowerShell (preferred) or C# example to get me going. thanks!
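    Since the question explicitly allows a C# example, here is a minimal sketch (type and method names are my own) of the standard formula: broadcast = (ip AND mask) OR (NOT mask), applied byte by byte to the address.

        using System;
        using System.Net;

        class BroadcastCalc
        {
            // Directed broadcast = (ip AND mask) OR (NOT mask), byte by byte.
            static IPAddress DirectedBroadcast(IPAddress ip, IPAddress mask)
            {
                byte[] ipBytes = ip.GetAddressBytes();
                byte[] maskBytes = mask.GetAddressBytes();
                byte[] result = new byte[ipBytes.Length];

                for (int i = 0; i < ipBytes.Length; i++)
                {
                    // Network part comes from the IP, host part is set to all ones.
                    result[i] = (byte)((ipBytes[i] & maskBytes[i]) | ~maskBytes[i]);
                }
                return new IPAddress(result);
            }

            static void Main()
            {
                IPAddress ip = IPAddress.Parse("192.168.10.17");
                IPAddress mask = IPAddress.Parse("255.255.255.0");
                Console.WriteLine(DirectedBroadcast(ip, mask)); // 192.168.10.255
            }
        }

    The same byte-wise AND/OR/NOT idea carries over to PowerShell once you have the address bytes (for example via [System.Net.IPAddress]::Parse), so the C# version can serve as a template.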

    Read the article

  • The Product Owner

    - by Robert May
    In a previous post, I outlined the rules of Scrum.  This post details one of those rules. Picking a most important part of Scrum is difficult.  All of the rules are required, but if there were one rule that is “more” required that every other rule, its having a good Product Owner.  Simply put, the Product Owner can make or break the project. Duties of the Product Owner A Product Owner has many duties and responsibilities.  I’ll talk about each of these duties in detail below. A Product Owner: Discovers and records stories for the backlog. Prioritizes stories in the Product Backlog, Release Backlog and Iteration Backlog. Determines Release dates and Iteration Dates. Develops story details and helps the team understand those details. Helps QA to develop acceptance tests. Interact with the Customer to make sure that the product is meeting the customer’s needs. Discovers and Records Stories for the Backlog When I do Scrum, I always use User Stories as the means for capturing functionality that’s required in the system.  Some people will use Use Cases, but the same rule applies.  The Product Owner has the ultimate responsibility for figuring out what functionality will be in the system.  Many different mechanisms for capturing this input can be used.  User interviews are great, but all sources should be considered, including talking with Customer Support types.  Often, they hear what users are struggling with the most and are a great source for stories that can make the application easier to use. Care should be taken when soliciting user stories from technical types such as programmers and the people that manage them.  They will almost always give stories that are very technical in nature and may not have a direct benefit for the end user.  Stories are about adding value to the company.  If the stories don’t have direct benefit to the end user, the Product Owner should question whether or not the story should be implemented.  In general, technical stories should be included as tasks in User Stories.  Technical stories are often needed, but the ultimate value to the user is in user based functionality, so technical stories should be considered nothing more than overhead in providing that user functionality. Until the iteration prior to development, stories should be nothing more than short, one line placeholders. An exercise called Story Planning can be used to brainstorm and come up with stories.  I’ll save the description of this activity for another blog post. For more information on User Stories, please read the book User Stories Applied by Mike Cohn. Prioritizes Stories in the Product Backlog, Release Backlog and Iteration Backlog Prioritization of stories is one of the most difficult tasks that a Product Owner must do.  A key concept of Scrum done right is the need to have the team working from a single set of prioritized stories.  If the team does not have a single set of prioritized stories, Scrum will likely fail at your organization.  The Product Owner is the ONLY person who has the responsibility to prioritize that list.  The Product Owner must be very diplomatic and sincerely listen to the people around him so that he can get the priorities correct. Just listening will still not yield the proper priorities.  Care must also be taken to ensure that Return on Investment is also considered.  Ultimately, determining which stories give the most value to the company for the least cost is the most important factor in determining priorities.  
Product Owners should be willing to look at cold, hard numbers to determine the order for stories.  Even when many people want a feature, if that features is costly to develop, it may not have as high of a return on investment as features that are cheaper, but not as popular. The act of prioritization often causes conflict in an environment.  Customer Service thinks that feature X is the most important, because it will stop people from calling.  Operations thinks that feature Y is the most important, because it will stop servers from crashing.  Developers think that feature Z is most important because it will make writing software much easier for them.  All of these are useful goals, but the team can have only one list of items, and each item must have a priority that is different from all other stories.  The Product Owner will determine which feature gives the best return on investment and the other features will have to wait their turn, which means that someone will not have their top priority feature implemented first. A weak Product Owner will refuse to do prioritization.  I’ve heard from multiple Product Owners the following phrase, “Well, it’s all got to be done, so what does it matter what order we do it in?”  If your product owner is using this phrase, you need a new Product Owner.  Order is VERY important.  In Scrum, every release is potentially shippable.  If the wrong priority items are developed, then the value added in each release isn’t what it should be.  Additionally, the Product Owner with this mindset doesn’t understand Agile.  A product is NEVER finished, until the company has decided that it is no longer a going concern and they are no longer going to sell the product.  Therefore, prioritization isn’t an event, its something that continues every day.  The logical extension of the phrase “It’s all got to be done” is that you will never ship your product, since a product is never “done.”  Once stories have been prioritized, assigning them to the Release Backlog and the Iteration Backlog becomes relatively simple.  The top priority items are copied into the respective backlogs in order and the task is complete.  The team does have the right to shuffle things around a little in the iteration backlog.  For example, they may determine that working on story C with story A is appropriate because they’re related, even though story B is technically a higher priority than story C.  Or they may decide that story B is too big to complete in the time available after Story A has tasks created, so they’ll work on Story C since it’s smaller.  They can’t, however, go deep into the backlog to pick stories to implement.  The team and the Product Owner should work together to determine what’s best for the company. Prioritization is time consuming, but its one of the most important things a Product Owner does. Determines Release Dates and Iteration Dates Product owners are responsible for determining release dates for a product.  A common misconception that Product Owners have is that every “release” needs to correspond with an actual release to customers.  This is not the case.  In general, releases should be no more than 3 months long.  You  may decide to release the product to the customers, and many companies do release the product to customers, but it may also be an internal release. If a release date is too far away, developers will fall into the trap of not feeling a sense of urgency.  The date is far enough away that they don’t need to give the release their full attention.  
Additionally, important tasks, such as performance tuning, regression testing, user documentation, and release preparation, will not happen regularly, making them much more difficult and time consuming to do.  The more frequently you do these tasks, the easier they are to accomplish. The Product Owner will be a key participant in determining whether or not a release should be sent out to the customers.  The determination should be made on whether or not the features contained in the release are valuable enough  and complete enough that the customers will see real value in the release.  Often, some features will take more than three months to get them to a state where they qualify for a release or need additional supporting features to be released.  The product owner has the right to make this determination. In addition to release dates, the Product Owner also will help determine iteration dates.  In general, an iteration length should be chosen and the team should follow that iteration length for an extended period of time.  If the iteration length is changed every iteration, you’re not doing Scrum.  Iteration lengths help the team and company get into a rhythm of developing quality software.  Iterations should be somewhere between 2 and 4 weeks in length.  Any shorter, and significant software will likely not be developed.  Any longer, and the team won’t feel urgency and planning will become very difficult. Iterations may not be extended during the iteration.  Companies where Scrum isn’t really followed will often use this as a strategy to complete all stories.  They don’t want to face the harsh reality of what their true performance is, and looking good is more important than seeking visibility and improving the process and team.  Companies like this typically don’t allow failure.  This is unhealthy.  Failure is part of life and unless we learn from it, we can’t improve.  I would much rather see a team push out stories to the next iteration and then have healthy discussions about why they failed rather than extend the iteration and not deal with the core problems. If iteration length varies, retrospectives become more difficult.  For example, evaluating the performance of the team’s estimation efforts becomes much more difficult if the iteration length varies.  Also, the team must have a velocity measurement.  If the iteration length varies, measuring velocity becomes impossible and upper management no longer will have the ability to evaluate the teams performance.  People external to the team will no longer have the ability to determine when key features are likely to be developed.  Variable iterations cause the entire company to fail and likely cause Scrum to fail at an organization. Develops Story Details and Helps the Team Understand Those Details A key concept in Scrum is that the stories are nothing more than a placeholder for a conversation.  Stories should be nothing more than short, one line statements about the functionality.  The team will then converse with the Product Owner about the details about that story.  The product owner needs to have a very good idea about what the details of the story are and needs to be able to help the team understand those details. Too often, we see this requirement as being translated into the need for comprehensive documentation about the story, including old fashioned requirements documentation.  
The team should only develop the documentation that is required and should not develop documentation that is only created because their is a process to do so. In general, what we see that works best is the iteration before a team starts development work on a story, the Product Owner, with other appropriate business analysts, will develop the details of that story.  They’ll figure out what business rules are required, potentially make paper prototypes or other light weight mock-ups, and they seek to understand the story and what is implied.  Note that the time allowed for this task is deliberately short.  The Product Owner only has a single iteration to develop all of the stories for the next iteration. If more than one iteration is used, I’ve found that teams will end up with Big Design Up Front and traditional requirements documents.  This is a waste of time, since the team will need to then have discussions with the Product Owner to figure out what the requirements document says.  Instead of this, skip making the pretty pictures and detailing the nuances of the requirements and build only what is minimally needed by the team to do development.  If something comes up during development, you can address it at that time and figure out what you want to do.  The goal is to keep things as light weight as possible so that everyone can move as quickly as possible. Helps QA to Develop Acceptance Tests In Scrum, no story can be counted until it is accepted by QA.  Because of this, acceptance tests are very important to the team.  In general, acceptance tests need to be developed prior to the iteration or at the very beginning of the iteration so that the team can make sure that the tasks that they develop will fulfill the acceptance criteria. The Product Owner will help the team, including QA, understand what will make the story acceptable.  Note that the Product Owner needs to be careful about specifying that the feature will work “Perfectly” at the end of the iteration.  In general, features are developed a little bit at a time, so only the bit that is being developed should be considered as necessary for acceptance. A weak Product Owner will make statements like “Do it right the first time.”  Not only are these statements damaging to the team (like they would try to do it WRONG the first time . . .), they’re also ignoring the iterative nature of Scrum.  Additionally, a weak product owner will seek to add scope in the acceptance testing.  For example, they will refuse to determine acceptance at the beginning of the iteration, and then, after the team has planned and committed to the iteration, they will expand scope by defining acceptance.  This often causes the team to miss the iteration because scope that wasn’t planned on is included.  There are ways that the team can mitigate this problem.  For example, include extra “Product Owner” time to deal with the uncertainty that you know will be introduced by the Product Owner.  This will slow the perceived velocity of the team and is not ideal, since they’ll be doing more work than they get credit for. Interact with the Customer to Make Sure that the Product is Meeting the Customer’s Needs Once development is complete, what the team has worked on should be put in front of real live people to see if it meets the needs of the customer.  One of the great things about Agile is that if something doesn’t work, we can revisit it in a future iteration!  
This frees up the team to make the best decision now and know that if that decision proves to be incorrect, the team can revisit it and change that decision. Features are about adding value to the customer, so if the customer doesn’t find them useful, then having the team make tweaks is valuable.  In general, most software will be 80 to 90 percent “right” after the initial round and only minor tweaks are required.  If proper coding standards are followed, these tweaks are usually minor and easy to accomplish.  Product Owners that are doing a good job will encourage real users to see and use the software, since they know that they are trying to add value to the customer. Poor product owners will think that they know the answers already, that their customers are silly and do stupid things and that they don’t need customer input.  If you have a product owner that is afraid to show the team’s work to real customers, you probably need a different product owner. Up Next, “Who Makes a Good Product Owner.” Followed by, “Messing with the Team.” Technorati Tags: Scrum,Product Owner

    Read the article

  • Changes in gcc/persistence of optimization flags gcc/C

    - by gnometorule
    Just curious. Using gcc/gdb under Ubuntu 9.10. I'm reading a C book that also often gives the disassembly of the object file. When reading in January, my disassembly looked a lot like the book's; now, it's quite different - possibly more optimized (I notice some re-arrangements in the assembly code now that, at least in the files I checked, look optimized). I have used optimization options -O1 through -O3 for gcc between the first and second read, but not before the first.
    (1) Is the use of optimization options persistent, i.e., if you use them once, will they stay in effect until you switch them off? That would be weird (I browsed the man page and did not see anything of that sort). In the unlikely case that it is true, how do you switch them off?
    (2) Has gcc's assembly output changed through any recent upgrade?
    (3) Does gcc sometimes produce (significantly) different assembly code even though the same compile options are chosen?
    Thanks much.

    Read the article

  • DockPanel ActivateMDIChild in C#

    - by James
    Hi, I have used the WeifenLuo.WinFormsUI.Docking DockPanel in my project. I have three document-style DockContents (i.e. A, B, C) in my application, arranged in tabbed style. Calling the Focus() method on any of these DockContents (A, B, C) activates it and sets focus correctly. As per requirements, I need to hide all three DockContents (A, B, C) and show two others (D, E). Then, once a particular process is over, I close D and E and show A, B, C again. After that, calling Focus() on C no longer sets its focus. May I know what could be the reason? Please guide me. Thanks.

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    "Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack. Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while. Self-Service BI Self-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI. This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me: PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.) Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.) One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.) Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.) Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.) This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users. It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations. Collaborative BI I have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time. Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people." The Microsoft BI Stack in General A question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years. Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?" Expo Hall I had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here. 
Pragmatic Works. Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions. Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind! Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Seven Worlds will collide…. High Availability BI is not such a Distant Sun.

    - by Testas
    Over the last 5 years I have observed Microsoft persevere with the notion of Self Service BI over a series of conferences as far back as SQLBits V in Newport. The release of SQL Server 2012, improvements in Excel and the integration with SharePoint 2010 is making this a reality. Business users are now empowered to create their own BI reports through a number of different technologies such as PowerPivot, PowerView and Report Builder. This opens up a whole new way of working; improving staff productivity, promoting efficient decision making and delivering timely business reports. There is, however; a serious question to answer. What happens should any of these applications become unavailable? More to the point, how would the business react should key business users be unable to fulfil reporting requests for key management meetings when they require it?  While the introduction of self-service BI will provide instant access to the creation of management information reports, it will also cause instant support calls should the access to the data become unavailable. These are questions that are often overlooked when a business evaluates the need for self-service BI. But as I have written in other blog posts, the thirst for information is unquenchable once the business users have access to the data. When they are unable to access the information, you will be the first to know about it and will be expected to have a resolution to the downtime as soon as possible. The world of self-service BI is pushing reporting and analytical databases to the tier 1 application level for some of Coeo’s customers. A level that is traditionally associated with mission critical OLTP environments. There is recognition that by making BI readily available to the business user, provisions also need to be made to ensure that the solution is highly available so that there is minimal disruption to the business. This is where High Availability BI infrastructures provide a solution. As there is a convergence of technologies to support a self-service BI culture, there is also a convergence of technologies that need to be understood in order to provide the high availability architecture required to support the self-service BI infrastructure. While you may not be the individual that implements these components, understanding the concepts behind these components will empower you to have meaningful discussions with the right people should you put this infrastructure in place. There are 7 worlds that you will have to understand to successfully implement a highly available BI infrastructure   1.       Server/Virtualised server hardware/software 2.       DNS 3.       Network Load Balancing 4.       Active Directory 5.       Kerberos 6.       SharePoint 7.       SQL Server I have found myself over the last 6 months reaching out to knowledge that I learnt years ago when I studied for the Windows 2000 and 2003 (MCSE) Microsoft Certified System Engineer. (To the point that I am resuming my studies for the Windows Server 2008 equivalent to be up to date with newer technologies) This knowledge has proved very useful in the numerous engagements I have undertaken since being at Coeo, particularly when dealing with High Availability Infrastructures. As a result of running my session at SQLBits X and SQL Saturday in Dublin, the feedback I have received has been that many individuals desire to understand more of the concepts behind the first 6 “worlds” in the list above. 
Over the coming weeks, a series of blog posts will be put on this site to help you understand the key concepts of each area as it pertains to a High Availability BI Infrastructure. Each post will not provide exhaustive coverage of its topic. For example, DNS can be a book in its own right when you consider that there are so many different configuration options, with Forward Lookups, Reverse Lookups, AD Integrated Zones and DNS forwarders to name some examples. What I want to do is share the pertinent points as they pertain to the BI infrastructure that you build, so that you are equipped with the knowledge to have the right discussion when planning this infrastructure. Next, we will focus on the server infrastructure that will be required to support the High Availability BI Infrastructure, from both a physical box and a virtualised perspective. Thanks, Chris

    Read the article

  • Java enums: Gathering info from another enums

    - by Samuel Carrijo
    I asked a similar question a few days ago, but now I have new requirements, and new challenges =). As usual, I'm using the animal enums for didactic purposes, since I don't want to explain domain-specific stuff. I have a basic enum of animals, which is used by the whole zoo (I can add stuff to it, but must keep compatibility):

        public enum Animal {
            DOG, ELEPHANT, WHALE, SHRIMP, BIRD, GIRAFFE;
        }

    I need to categorize them in a few unrelated categories, like gray animals (whale (my whale is gray) and elephant), small animals (bird, shrimp and dog), and sea animals (whale and shrimp). I could, as suggested in my previous questions, add a lot of booleans, like isGray, isSmall and isFromSea, but I'd like an approach where I could keep this somewhere else (so my enum doesn't need to know much). Something like:

        public enum Animal {
            DOG, ELEPHANT, WHALE, SHRIMP, BIRD, GIRAFFE;

            public boolean isGray() {
                // What comes here?
            }
        }

    And somewhere else:

        public enum GrayAnimal {
            WHALE, ELEPHANT;
        }

    How is this possible? Am I requesting too much from Java?

    Read the article

  • Best way to detect integer overflow in C/C++

    - by Chris Johnson
    I was writing a program in C++ to find all solutions of a^b = c (a to the power of b), where a, b and c together use all the digits 0-9 exactly once. The program looped over values of a and b, and ran a digit-counting routine each time on a, b and a^b to check if the digits condition was satisfied. However, spurious solutions can be generated when a^b overflows the integer limit. I ended up checking for this using code like: unsigned long b, c, c_test; ... c_test=c*b; // Possible overflow if (c_test/b != c) {/* There has been an overflow*/} else c=c_test; // No overflow Is there a better way of testing for overflow? I know that some chips have an internal flag that is set when overflow occurs, but I've never seen it accessed through C or C++.
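One common alternative, sketched here rather than taken from the question, is to test before multiplying so the overflow never happens; the checked_mul helper below is a hypothetical name, and newer GCC and Clang releases also offer builtins such as __builtin_mul_overflow for the same job:

    #include <climits>

    // Stores a * b in result and returns true only when the product fits
    // in an unsigned long; otherwise returns false and leaves result untouched.
    bool checked_mul(unsigned long a, unsigned long b, unsigned long &result) {
        if (b != 0 && a > ULONG_MAX / b) {
            return false;  // a * b would exceed ULONG_MAX
        }
        result = a * b;
        return true;
    }

Because the check runs before the multiplication, no wrapped value is ever produced, so a spurious but plausible-looking result cannot slip through the digit-counting routine.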

    Read the article

  • Help Protect Your Children with the CEOP Enhanced Internet Explorer 8

    - by Asian Angel
    Do you want to make Internet Explorer safer and more helpful for you and your family? Then join us as we look at the CEOP (Child Exploitation and Online Protection Centre) enhanced version of Internet Explorer 8.
Setting CEOP Up
We chose to install the whole CEOP pack in order to have access to the complete set of CEOP Tools. The install process comes in two parts: it will begin with CEOP branded windows showing the components being installed. Note: The components can be downloaded separately for those who only want certain CEOP components added to their browser. Then it will move to the traditional Microsoft Internet Explorer 8 install windows. One thing that we did notice is that here you are told that you will need to restart your computer, while other windows mention a log off/log on process. Just to make certain that everything goes smoothly, we recommend restarting your computer when the installation process is complete. In the EULA section you can see the versions of Windows that the CEOP Pack works with. Once you get past the traditional Microsoft install windows you will be dropped back into the CEOP branded windows.
CEOP in Action
After you have restarted your computer and opened Internet Explorer you will notice that your homepage has been changed. When it comes to your children that is not a bad thing in this instance. It will also give you an opportunity to look through the CEOP online resources. For the moment you may be wondering where everything is, but do not worry. First, you can find the two new search providers in the drop-down menu for your “Search Bar” and select a new default if desired. The second thing to look for is the new links that have been added to your “Favorites Menu”. These links can definitely be helpful for you and your family. The third part requires your “Favorites Bar” to be visible in order to see the “Click CEOP Button”. If you have not previously done so, you will need to turn on subscribing for “Web Slices”. Click on “Yes” to finish the subscription process. Clicking on the “CEOP Button” again will show all kinds of new links to help provide information for you and your children. Notice that the top part is broken down into “topic categories” while the bottom part is set up for “age brackets”, which is very helpful for focusing on the information that you want and/or need. Looking for information and help on a particular topic? Clicking on the “Cyberbullying Link”, for example, will open the following webpage with information about cyberbullying and a link to get help with the problem. Need something that is focused on your child’s age group? Clicking on the “8-10? Link”, as an example, opened this page. Want information that is focused on you? The “Parent? Link” leads to this page. The “topic categories & age brackets” make the CEOP Button a very helpful and “family friendly” addition to Internet Explorer. Perhaps you (or your child) want to conduct a search for something that is affecting your child. As you type in a “search term”, both of the search providers will offer helpful suggestions for dealing with the problem. We felt that these were very good suggestions in both instances.
Conclusion
We have been able to give you a good peek at what the CEOP Tools can do, but the best way to see how helpful it can be for you and your family is to try it for yourself. Your children’s safety and happiness are worth it.
Links
Download the Internet Explorer CEOP Pack (link at bottom of webpage). Note: If you are only interested in particular components, use these individual links: Download the Click CEOP Button, Download Search CEOP, Download Internet Safety and Security Search.

    Read the article

  • Microsoft, jQuery, and Templating

    - by Stephen Walther
    About two months ago, John Resig and I met at Café Algiers in Harvard square to discuss how Microsoft can contribute to the jQuery project. Today, Scott Guthrie announced in his second-day MIX keynote that Microsoft is throwing its weight behind jQuery and making it the primary way to develop client-side Ajax applications using Microsoft technologies. What does this announcement mean? It means that Microsoft is shifting its resources to invest in jQuery. Developers on the ASP.NET team are now working full-time to contribute features to the core jQuery library. Furthermore, we are working with other teams at Microsoft to ensure that our technologies work great with jQuery. We are contributing to the open-source jQuery project in the exact same way that any other company or individual from the community can contribute to jQuery. We are writing proposals, submitting the proposals to the jQuery forums, and revising the proposals in response to community feedback. The jQuery team can decide to reject or accept any feature that we propose. Any feature that Microsoft contributes to jQuery will be platform neutral. In other words, Microsoft contributions will benefit PHP and RAILS developers just as much as they benefit ASP.NET developers. Microsoft contributions to jQuery will improve the web for everyone. Contributing Support for Templates to jQuery Core Our first proposal concerns templating. We want to contribute support for templates to jQuery so that JavaScript developers can use jQuery to easily display a set of database records. You can read our templating proposal here: http://wiki.github.com/nje/jquery/jquery-templates-proposal You can download and play with our prototype for templating here: http://github.com/nje/jquery-tmpl The following code illustrates how you can use a template to display a set of products in a bulleted list: <script type="text/javascript"> jQuery(function(){ var products = [ { name: "Product 1", price: 12.99}, { name: "Product 2", price: 9.99}, { name: "Product 3", price: 35.59} ]; $("ul").append("#template", products); }); </script> <script id="template" type="text/html"> <li>{%= name %} - {%= price %}</li> </script> <ul></ul> The template is contained in a SCRIPT element that has a TYPE=”text/html” attribute. Browsers ignore the contents of a SCRIPT element when they don’t understand the content type. Notice that the placeholder {%=...%} is used within the template to indicate where the name and price of a product should appear. The delimiters {%=…%} are used for expressions and the delimiters {%...%} are used for code. Finally, the products are rendered using the template with the call to $(“ul”).append(“#template”, products). The standard jQuery DOM manipulation methods have been modified to support templates. When the page above is rendered, you get the bulleted list displayed in the following figure. Our goal is to keep our proposal for templates as simple as possible. After support for templating has been added to jQuery, plug-in authors can take advantage of templating when building complex data-driven plug-ins such as a DataGrid plug-in. The Ajax Control Toolkit Over 100,000 developers download the Ajax Control Toolkit every month. That’s a mind-boggling number of downloads. We realize that the Ajax Control Toolkit is extremely popular among ASP.NET Web Forms developers and we want to continue to invest in the Ajax Control Toolkit. 
If you are adding JavaScript interactivity to an ASP.NET Web Forms application, and you don’t want to write JavaScript, then we recommend that you use the server controls in the Ajax Control Toolkit. Using the Ajax Control Toolkit does not require knowledge of JavaScript and the toolkit enables you to build applications with the concepts familiar to ASP.NET Web Forms applications developers. If, however, you are interested in creating client-side interactivity without server controls then we recommend that you use jQuery. We plan to continue to release new versions of the Ajax Control Toolkit every few months. Our goal is to continue to improve the quality of the Ajax Control Toolkit and to make it easier for the community to contribute code, bug fixes, and documentation. The ASP.NET Ajax Library We are moving the ASP.NET Ajax Library into the Ajax Control Toolkit. If you currently use ASP.NET Ajax Library client templates, client data-binding, or the client script loader then you can continue to use these features by downloading the Ajax Control Toolkit. Be aware that our focus with the Ajax Control Toolkit is server-side Ajax.  For client-side Ajax, we are shifting our focus to jQuery. For example, if you have been using ASP.NET Ajax Library client templates then we recommend that you shift to using jQuery instead. Conclusion Our plan is to focus on jQuery as the primary technology for building client-side Ajax applications moving forward. We want to adapt Microsoft technologies to work great with jQuery and we want to contribute features to jQuery that will make the web better for everyone. We are very excited to be working with the jQuery core team.

    Read the article

  • Core Location - fallback, location caching and alternatives...

    - by Moshe
    I have a few questions about Core Location. 1) Should the user refuse permission for my app to use Core Location, or should Core Location be unavailable for some reason, is there a fallback? (Device locale, for example?) 2) Can I cache a device's location for next time? Does Core Location do this itself? 3) I really need the sunset time in the user's area during the mid-spring season, and I have a function to calculate that once I have the latitude and longitude of the device. Perhaps I can just make an assumption about the time based on locale? (Example: in the US, assume approximately 7:00 pm.)

    Read the article

  • Installing SOA Suite 11.1.1.3

    - by James Taylor
    With the release of Oracle SOA Suite 11.1.1.3 last week (28 April 2010), I thought I would attempt to implement a complete SOA environment with SOA Suite, BPM and OSB on the WLS infrastructure. One major point of difference with 11.1.1.3 is that it is released as a point release, so you must have 11.1.1.2 installed first and then upgrade to 11.1.1.3. This post performs the upgrade on Linux; if upgrading on Windows you will need to substitute the directories and files accordingly. This post assumes that you already have SOA Suite 11.1.1.2 installed.
1. Download the 11.1.1.3 software (WLS 11.1.1.3, RCU 11.1.1.3, SOA Suite 11.1.1.3 and OSB 11.1.1.3) from the following site: http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html Copy the files to a staging area. For the purpose of this document the staging area is /u01/stage.
2. Shut down your existing SOA Suite 11.1.1.2 environment.
3. Execute the WLS 11.1.1.3 install from the stage directory: wls1033_linux32.bin
4. Choose the existing 11.1.1.2 Middleware Home.
5. Ignore the security update notification.
6. Accept the default products to be upgraded.
7. The upgrade of WebLogic has now been completed.
8. Upgrade the SOA Suite database schemas using the RCU utility. Unzip the RCU utility into the staging area and run the installer: ./u01/stage/rcuHome/bin/rcu
9. Drop the existing repository and provide connection details.
10. Install SOA Suite patch set 11.1.1.3. Unzip the SOA Suite patch set and execute the runInstaller with the following command: ./u01/stage/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre
11. Choose the existing 11.1.1.2 Middleware Home.
12. Start the install.
13. Your SOA Suite install should now be complete. Next we need to update the database repository. Log in to SQL*Plus as sysdba and execute the following command: SELECT version, status FROM schema_version_registry where owner = 'DEV_SOAINFRA'; The result should be similar to this:
VERSION      STATUS   OWNER
11.1.1.2.0   VALID    DEV_SOAINFRA
As you can see, the version of this repository is still 11.1.1.2.
14. To upgrade these versions you have two options. The first is to install via RCU, but this will remove any existing services. The second is to use the Patch Set Assistant. From the $MW_HOME directory run the following command: ./Oracle_SOA1/bin/psa -dbType Oracle -dbConnectString 'localhost:1521:xe' -dbaUserName sys -schemaUserName DEV_SOAINFRA
15. Install OSB. For the OSB install I did not install the IDE or the examples. Unzip the OSB download to the stage area and run the runInstaller from the command line: ./u01/stage/osb/Disk1/runInstaller -jreLoc $MW_HOME/jdk160_18/jre
16. Choose Custom Install so as NOT to install the IDE (Eclipse) or the examples.
17. Unselect the Examples and IDE checkboxes.
18. Accept the defaults and start installing.
19. Once the install has completed, configure the domain by running the Configuration Wizard: $MW_HOME/oracle_common/common/bin/config.sh You can create a new domain; in this document I will extend the soa_domain.
20. Select the following from the checklist. I have selected the BPM Suite; this is unrelated to OSB but I wanted it for my development purposes. To use this functionality additional licenses are required.
21. Configure the database connectivity.
22. Configure the database connectivity for the OSB schema.
23. Accept the defaults if installing on a standard machine; if you require a cluster or an advanced configuration, choose the option appropriate for you.
24. The upgrade is complete and OSB has been installed. Now you can start your environment.

    Read the article

  • Good tutorial for ASP.net mvc 2

    - by Ben Robinson
    I am an experienced ASP.NET Web Forms developer using C#, but I have never used ASP.NET MVC. As I am just starting out with MVC, I would like to begin with MVC 2. I am looking for a good intro/tutorial to help me understand the basics. I am aware of Nerd Dinner, but that is based around MVC 1. What would you guys recommend for getting started? Should I work through the Nerd Dinner tutorial and then, once I have a good understanding of MVC, research the new features of MVC 2, or is there a similar getting-started tutorial for MVC 2? Suggestions of good books to read are also welcome. In fact, any advice on getting started with MVC 2 would be good.

    Read the article
