Search Results

Search found 12950 results on 518 pages for 'field activities'.


  • Five Key Trends in Enterprise 2.0 for 2011

    - by kellsey.ruppel(at)oracle.com
    We recently sat down with Andy MacMillan, an industry veteran and vice president of product management for Enterprise 2.0 at Oracle, to get his take on the year ahead in Enterprise 2.0 (E2.0). He offered us his five predictions about the ways he believes E2.0 technologies will transform business in 2011.

    1. Forward-thinking organizations will achieve an unprecedented level of organizational awareness. Enterprise 2.0 and Web 2.0 technologies have already transformed the ways customers, employees, partners, and suppliers communicate and stay informed. But this year we are anticipating that organizations will go to the next step and integrate social activities with business applications to deliver rich, contextual "activity streams." Activity streams are a new way for enterprise users to get relevant information as quickly as it happens, by navigating to that information in context directly from their portal. We don't mean syndicating social activities limited to a single application. Instead, we believe back-office systems will be combined with social media tools to drive how users make informed business decisions in brand new ways. For example, an account manager might log into the company portal and automatically receive notification that colleagues are closing business around a certain product in his market segment. With a single click, he can reach out instantly to these colleagues via social media and learn from their successes to drive new business opportunities in his own area.

    2. Online customer engagement will become a high priority for CMOs. A growing number of chief marketing officers (CMOs) have created a new direct report called "head of online"--a senior marketing executive responsible for all engagements with customers and prospects via the Web, mobile, and social media. This new field has been dubbed "Web experience management" or "online customer engagement" by firms and analyst organizations, and it is likely to rapidly increase demand for a host of new business objectives and metrics from Web content management solutions. As companies interface with customers more and more over the Web, Web experience management solutions will help deliver more targeted interactions to ensure increased customer loyalty while meeting sales and business objectives.

    3. Real composite applications will be widely adopted. We expect organizations to move from the concept of a single "uber-portal" that encompasses all the necessary features to a more modular, component-based concept for composite applications. This approach is now possible as IT and power users are empowered to assemble new, purpose-built composite applications quickly from existing components.

    4. Records management will drive ECM consolidation. We continue to see a significant shift in the approach to records management. Several years ago, initiatives were focused on overlaying records management across a set of electronic repositories and physical storage locations. We believe federated records management will continue, but we also expect to see records management driving conversations around single-platform content management consolidation.

    5. Organizations will demand ECM at extreme scale. We have already seen a trend within IT organizations to provide a common, highly scalable infrastructure to consolidate and support content and information needs. But as data sizes grow exponentially, ECM at an extreme scale is likely to spread at unprecedented speeds this year. This makes sense as regulations and transparency requirements rise. The model in which ECM and lightweight CMS systems provide basic content services such as check-in, update, delete, and search has converged around a set of industry best practices and has even been codified in new industry standards such as Content Management Interoperability Services (CMIS). As these services converge and the demand for them accelerates, organizations are beginning to rationalize investments into a single, highly scalable infrastructure.

    Is your organization ready for Enterprise 2.0 in 2011? Learn more.

    Read the article

  • My collection of favourite TFS utilities

    - by Aaron Kowall
    So, you're in charge of your company or team's Team Foundation Server. Wish it was easier to manage, administer, extend? Well, here are a few utilities that I highly recommend looking at.

    I recently had to rebuild my laptop and upgrade my local TFS environment to TFS 2012 Update 1. This gave me cause to enumerate some of the utilities I like to have on hand.

    One of the reasons I love to use TFS on projects is that it's basically a complete ALM toolkit. Everything from task management, version control, build management, test management, metrics and reporting is there 'in the box'. However, no matter how complete a product set is, there are always ways to make it better. Below is a list of utilities and libraries that are pretty generally useful. This is not intended to be an exhaustive list of TFS extensions, but rather a set that I recommend you look at; there are many more out there that may be applicable in one scenario or another. This set of tools should work with TFS 2012 or 2010 if you grab the right version. Most of these tools (and more) are available from the Visual Studio Gallery or CodePlex.

    General
    - TFS Power Tools – This is 'the' collection of utilities and extensions delivered by the Product Group. Highly recommended from here are the Best Practice Analyzer, for ensuring your TFS implementation is healthy, and the Team Foundation Server Backups, to ensure your TFS databases are backed up correctly.
    - TFS Administrators Toolkit – Helps make updates to work item types and reports across many team projects. Also provides visibility of disk usage, by finding large files in version control or test attachments, to assist in managing storage utilization.

    Version Control
    - Git-TF – A set of cross-platform, command line tools that facilitate sharing of changes between TFS and Git. These tools allow a developer to use a local Git repository and configure it to share changes with a TFS server. Great for all Git lovers who must integrate with a TFS repository.

    Testing
    - TFS 2012 Tester Power Tool – A utility for bulk copying test cases, which assists in an approach for managing test cases across multiple releases. A little plug: this utility was written and is maintained by Anna Russo of Imaginet, where I also work.
    - Test Scribe – A documentation power tool designed to construct documents directly from TFS test plan and test run artifacts for the purpose of discussion, reporting, etc.

    Reporting
    - Community TFS Report Extensions – A single repository of SQL Server Reporting Services reports for Team Foundation 2010 (and above). Check out the Test Plan Status report by Imaginet's Steve St. Jean. Very valuable for your test managers.

    Builds
    - TFS Build Manager – A great utility if you are the build manager over a complex build environment with many TFS build definitions.
    - Community TFS Build Extensions – Contains many custom build activities. Current release binaries are for TFS 2010, but many of the activities can be recompiled for use with TFS 2012.

    While compiling this list, I was surprised by the number of TFS utilities and extensions I no longer use or need in TFS 2012, because of the great work by the TFS team addressing many gaps since the 2010 release. Are there any utilities you depend on that I've missed? I'd love to hear about them in the comments!

    Read the article

  • Tuple - .NET 4.0 new feature

    - by nmarun
    Something I hit while playing with .NET 4.0 – Tuple. MSDN says 'Provides static methods for creating tuple objects.' and the example given is:

        1: var primes = Tuple.Create(2, 3, 5, 7, 11, 13, 17, 19);

    Honestly, I'm still not sure with what intention MS provided us with this feature, but the moment I saw it, I said to myself – I could use it instead of anonymous types. In order to put this to the test, I created an XML file:

        1: <Activities>
        2:   <Activity id="1" name="Learn Tuples" eventDate="4/1/2010" />
        3:   <Activity id="2" name="Finish Project" eventDate="4/29/2010" />
        4:   <Activity id="3" name="Attend Birthday" eventDate="4/17/2010" />
        5:   <Activity id="4" name="Pay bills" eventDate="4/12/2010" />
        6: </Activities>

    In my console application, I read this file, and let's say I want to pull all the attributes of the node with an id value of 1. Now, I have two ways – either define a class/struct that has these three properties and use it in the LINQ query, or create an anonymous type on the fly. But if we go the .NET 4.0 way, we can do this using Tuples as well. Let's see the code I've written below:

        1: var myActivity = (from activity in loaded.Descendants("Activity")
        2:                   where (int)activity.Attribute("id") == 1
        3:                   select Tuple.Create(
        4:                       int.Parse(activity.Attribute("id").Value),
        5:                       activity.Attribute("name").Value,
        6:                       DateTime.Parse(activity.Attribute("eventDate").Value))).FirstOrDefault();

    Line 3 is where I'm using Tuple.Create to define my return type. There are three 'items' (that's what the elements are called) in the 'myActivity' type, aptly declared as Item1, Item2, Item3. So there you go, you have another way of creating anonymous types. Just out of curiosity, I wanted to see what the type actually looked like. So I did a:

        1: Console.WriteLine(myActivity.GetType().FullName);

    and the return was (formatted for better readability):

        "System.Tuple`3[
            [System.Int32, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],
            [System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],
            [System.DateTime, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]
        ]"

    The `3 specifies the number of items in the tuple. The other interesting thing about the tuple is that it knows the data type of the elements it's holding. This is shown in the above snippet, and also when you hover over myActivity.Item1 it shows the type as an int, Item2 as a string and Item3 as a DateTime. So you can safely do:

        1: int id = myActivity.Item1;
        2: string name = myActivity.Item2;
        3: DateTime eventDate = myActivity.Item3;

    Wow.. all I can say is: HAIL 4.0.. HAIL 4.0.. HAIL 4.0
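
    For comparison, a sketch of the anonymous-type version of the same query (this equivalent is mine, not from the original post) would look something like:

        var myActivity2 = (from activity in loaded.Descendants("Activity")
                           where (int)activity.Attribute("id") == 1
                           select new
                           {
                               Id = int.Parse(activity.Attribute("id").Value),
                               Name = activity.Attribute("name").Value,
                               EventDate = DateTime.Parse(activity.Attribute("eventDate").Value)
                           }).FirstOrDefault();

    One practical difference: the anonymous type gives you named properties (Id, Name, EventDate) but cannot be cleanly returned from a method, whereas a Tuple<int, string, DateTime> can cross method boundaries at the cost of the generic Item1..Item3 names.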

    Read the article

  • ARTS Reference Model for Retail

    - by Sanjeev Sharma
    Consider a hypothetical scenario where you have been tasked to set up retail operations for an electronic goods, daily consumables, or luxury brand business. It is very likely you will be faced with the following questions:

    1. What are the essential business capabilities that you must have in place?
    2. What are the essential business activities under-pinning each of the business capabilities identified in Step 1?
    3. What are the steps that you need to perform to execute each of the business activities identified in Step 2?

    Answers to the above will drive your investments in software and hardware to enable the core retail operations. More importantly, the choices you make in responding to the above questions will have several implications in the short-run and in the long-run. In the short-term, you will incur the time and cost of defining your technology requirements, procuring the software/hardware components and getting them up and running. In the long-term, as you grow in operations organically or through M&A, partnerships and franchise business models, you will invariably need to make more technology investments to manage the greater complexity (scale and scope) of business operations.

    "As new software applications, such as time & attendance, labor scheduling, and POS transactions, just to mention a few, are introduced into the store environment, it takes a disproportionate amount of time and effort to integrate them with existing store applications. These integration projects can add up to 50 percent to the time needed to implement a new software application and contribute significantly to the cost of the overall project, particularly if a systems integrator is called in. This has been the reality that all retailers have had to live with over the last two decades. The effect of the environment has not only been to increase costs, but also to limit retailers' ability to implement change and the speed with which they can do so." (excerpt taken from here)

    Now, one would think a lot of retailers would have already gone through the pain of finding answers to these questions, so why re-invent the wheel? Precisely so: a major effort began almost 17 years ago in the retail industry to make it less expensive and less difficult to deploy new technology in stores and at the retail enterprise level. This effort is called the Association for Retail Technology Standards (ARTS). Without standards such as those defined by ARTS, you would very likely end up experiencing the following:

    - Increased time and cost, due to resource wastage arising from re-inventing the wheel, i.e. re-creating vanilla processes from scratch, and incurring otherwise avoidable mistakes and errors by ignoring the experience of others
    - Sub-optimal process efficiency, due to a narrow, isolated view of processes that ignores process inter-dependencies, i.e. optimizing the parts but not the whole, resulting in a lack of transparency and inter-departmental finger-pointing

    Embracing ARTS standards as a blue-print for establishing, managing or streamlining your retail operations can benefit you in the following ways:

    - Improved time-to-market, from parity with industry best-practice processes such as ARTS, thus avoiding "reinventing the wheel" for common retail processes and focusing more on customizing processes for differentiation, and from lowering integration complexity and risk with a standardized vocabulary for exchange between internal and external (i.e. partner) systems
    - Lower operating costs, by embracing the ARTS enterprise-wide process reference model for developing and streamlining retail operations holistically instead of through a narrow, silo-ed view, and by procuring IT systems in compliance with ARTS, thus avoiding IT budget marginalization

    While parity with an industry standard such as the ARTS business process model does not by itself create a differentiation, it does provide a higher starting point for bridging the strategy-execution gap in setting up and improving retail operations.

    Read the article

  • Representing Mauritius in the 2013 Bench Games

    Only by chance I came across an interesting option for professionals and enthusiasts in IT, and quite honestly I can't even remember where Brainbench and their 2013 Bench Games event first caught my attention. But having access to 600+ free exams in a friendly international intellectual competition isn't something that happens every day. So, it was actually a no-brainer to sign up and browse through the various categories. Most interestingly, Brainbench is not only IT-related. They offer a vast variety of fields in their Test Center, like Languages and Communication, Office Skills, Management, Aptitude, etc., and it can be a little bit messy how things are organised. Anyway, while browsing through their test offers I added a couple of exams to 'My Plan' which I would give a shot afterwards.

    Self-assessments

    Actually, I took the tests based on two major aspects: 'fun factor' and 'how good would I be in general'. Usually you have to pay for these kinds of exams, so being given this unique chance by Brainbench to simply train on this kind of test was already worth the time. Frankly speaking, the tests are very close to the ones you would be asked to do at Prometric or Pearson Vue, i.e. Microsoft exams, etc.: go through a set of multiple choice questions in a given time frame. Most of the tests I did during the Bench Games were based on 40 questions, each with a maximum of 3 minutes to answer. Ergo, one test in a maximum of 2 hours - that sounds feasible, doesn't it?

    The Measure of Achievement

    While the 2013 Bench Games are considered a worldwide friendly competition of knowledge, I was really eager to get other Mauritians attracted. Using various social media networks and community activities, it all looked quite good at the beginning. Mauritius was listed at rank #19 of Most Certified Citizens and rank #10 of Most Master Level Certified Nation - not bad, not bad... Until... the next update of the Bench Games Leaderboard. The downwards trend seemed to be unstoppable and I couldn't understand why my results didn't show up on the Individual Leader Board. First of all, I passed exams that were not even listed, and second, I had better results on some exams that were listed. After some further information from the organiser, it turned out that my test transcript wasn't available to the public; only then are results considered and counted in the competition. During that time, I actually managed to hold 3 test results on the Individuals board... Other participants were merciless, eh, more successful than me, and produced better test results than I did. But still I managed to stay on the final score board: an 'exotic' combination of exam, test result, country and person, representing Mauritius and the Visual FoxPro community in that fun event. And although I have mainly developed in Visual FoxPro 9.0 SP2 and C# (using .NET Framework 2.0 to 4.5) for a couple of years now, I still managed to pass at Master Level. Hm, actually my Microsoft Certified Professional (MCP) exams date back to June 2004 - more than 9 years ago...

    Look who got lucky...

    As described above, I did a couple of exams as time allowed and without any preparation, but still I received the following mail notification:

    "Thank you for recently participating in our Bench Games event. We wanted to inform you that you obtained a top score on our test(s) during this event, and as a result, will receive a free annual Brainbench subscription. Your annual subscription will give you access to all our tests just like Bench Games, but for an entire year plus additional benefits!" -- Leader Board Notification from Brainbench

    Even fun activities get rewarded sometimes. Thanks to @Brainbench_com for the free annual subscription based on my passed 2013 Bench Games Master Level exam. It would be interesting to know the total figures, especially how many citizens of Mauritius took part in this year's Bench Games. Anyway, I'm looking forward to being able to participate in other challenges like this in the future.

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. Three models provide increasing degrees of sharing:

    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)

    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.

    Customer value: higher productivity. The OpEx benefits from Standardization are compounded: not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.

    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.

    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • How to restore/change Alt+Tab behaviour/ram usage and a few other things after Ubuntu upgrade from 11.04 to 11.10?

    - by fiktor
    I use Ubuntu for programming. I recently updated it from 11.04 to 11.10. There are some things I don't like in the new version of the Unity desktop interface. I don't actually know whether it is hard to restore the previous behavior or not, and if it is not, where I should look to do that. I know a bit of programming, but I really don't know much about Linux settings.

    I used to have 3-6 terminal windows and switch between them with Alt+Tab and Shift+Alt+Tab. I liked half-transparent terminal windows, since with them I could open a web page with some instructions in Firefox, press Alt+Tab and type commands in a console window, still being able to recognize the text on the web page under it. Now I have problems with my usual work-style because of the following.

    List of "negative" changes:

    1. Alt+Tab shows just one icon for all console windows. When I wait some time, it does show all windows, but I don't like to wait. I prefer to remember the order of windows and press Alt+Tab as many times as I need to switch to the right window.
    2. Alt+Shift+Tab to switch in reverse order doesn't work now.
    3. Console windows are not transparent any more. When I don't wait, and switch to this icon, it shows all console windows altogether. So even if they were transparent, I wouldn't be able to see anything below them (I can read something only from the window which is directly under the current one, not a few levels under).
    4. When I ran a few console windows in Unity I had 740Mb used on Ubuntu 11.04, but I have 1050Mb now. The question is how to get it back to 750 or less. I really need my memory, since I use my computer to work with 1512Mb of data, and I try to save every 10Mb possible (if it doesn't take too much of the machine's and, more importantly, my time).
    5. When I press the Super key I get a field to type the name of the program I want to run. But now it sometimes shows this field, yet when I'm trying to type, nothing happens. Probably focus is not on the right field.

    I don't really mean to restore exactly the same behavior, but I want to make my work in Ubuntu 11.10 efficient (at least as efficient as in Ubuntu 11.04). I would be happy if there are some ways to accomplish that.

    What have I tried:

    - I have installed CompizConfig Settings Manager. I have read this question. However, enabling "Static Application Switcher" makes Alt+Tab crazy: after enabling it, it complains about key-binding conflicts with the "Ubuntu Unity Plugin"; "Alt+Tab" switching doesn't change, but "Shift+Alt+Tab" now works and shows all windows; and memory usage increases.
    - I have tried turning off the Ubuntu Unity Plugin, but this doesn't seem to be the right thing to do, since it seems to turn off all menus, a lot of keystrokes and the app-launcher, which usually activates with the Super key.
    - I have found that window transparency can be enabled by the "Opacity, Brightness and Saturation" plugin from Accessibility. However, I don't know if enabling it is the right thing to do (at least it increases memory usage).

    Update: everything solved but #3: see my own answer below. I have made a separate question about issue #3 (transparency).

    Read the article

  • When should I use a Process Model versus a Use Case?

    - by Dave Burke
    This Blog entry is a follow-on to https://blogs.oracle.com/oum/entry/oum_is_business_process_and and addresses a question I sometimes get asked, i.e. "when I am gathering requirements on a Project, should I use a Process Modeling approach, or should I use a Use Case approach?" Not surprisingly, the short answer is "it depends"!

    Let's take a scenario where you are working on a Sales Force Automation project. We'll call the process that is being implemented "Lead-to-Order". I would typically think of this type of project as being "Process Centric". In other words, the focus will be on orchestrating a series of human and system related tasks that ultimately deliver value to the business in a cost effective way. Put in even simpler terms: implement an automated pre-sales system. For this type of (Process Centric) project, requirements would typically be gathered through a series of Workshops where the focal point will be on creating, or confirming, the Future-State (To-Be) business process. If pre-defined "best-practice" business process models exist, then of course they could and should be used during the Workshops, but even in their absence, the focus of the Workshops will be to define the optimum series of Tasks, their connections, sequence, and dependencies that will ultimately reflect a business process that meets the needs of the business.

    Now let's take another scenario. Assume you are working on a Content Management project that involves automating the creation and management of content for User Manuals, Web Sites, Social Media publications etc. Would you call this type of project "Process Centric"? Well, you could, but it might also fall into the category of complex configuration, plus some custom extensions to a standard software application (COTS). For this type of project it would certainly be worth considering using a Use Case approach in order to 1) understand the requirements, and 2) capture the functional requirements of the custom extensions.

    At this point you might be asking "why couldn't I use a Process Modeling approach for my Content Management project?" Well, of course you could, but you just need to think about which approach is the most effective. Start by analyzing the types of Tasks that will eventually be automated by the system, for example:

        Task Name                 Best Suited To    Notes
        Manage outbound calls     Process Model     A series of linked human and system tasks for calling and following up with prospects
        Manage content revision   Use Case          Updating the content on a website
        Update User Preferences   Use Case          Updating a user's display preferences
        Assign Lead               Process Model     Reviewing a lead, then assigning it to a sales person
        Convert Lead to Quote     Process Model     Updating the status of a lead, and then converting it to a sales order

    As you can see, it's not an exact science, and either approach is viable for the Tasks listed above. However, where you have a series of interconnected Tasks or Activities that, when combined, deliver value to the business, that would be a good indicator to lead with a Process Modeling approach. On the other hand, when the Tasks or Activities in question are more isolated and/or do not cross traditional departmental boundaries, then a Use Case approach might be worth considering.

    Now let's take one final scenario. As you captured the To-Be Process flows for the Sales Force Automation project, you discover a "Gap" in terms of what the client requires and what the standard COTS application can provide. Let's assume that the only way forward is to develop a Custom Extension. This would now be a perfect opportunity to document the functional requirements (behind the Gap) using a Use Case approach. After all, we will be developing some new software, and one of the most effective ways to begin the Software Development Lifecycle is to follow a Use Case approach.

    As always, your comments are most welcome.

    Read the article

  • Silverlight 4 Overriding the DataForm Autogenerate to insert Combo Boxes bound to Converters.

    - by kmacmahon
    I've been working towards a solution for some time and could use a little help. I know I've seen an example of this before, but tonight I cannot find anything close to what I need.

    I have a service that provides me all my DropDownLists, either from cache or from the DomainService. They are presented as IEnumerable, and are requested from the repository with GetLookup(LookupId). I have created a custom attribute with which I have decorated my metadata class, which looks something like this:

        [Lookup(Lookup.Products)]
        public Guid ProductId;

    I have created a custom DataForm that is set to AutoGenerateFields, and I am intercepting the field autogeneration. I am checking for my custom attribute, and that works. Given this code in my CustomDataForm (standard comments removed for brevity), what is the next step to override the field generation and place a bound ComboBox in its place?

        public class CustomDataForm : DataForm
        {
            private Dictionary<string, DataField> fields = new Dictionary<string, DataField>();

            public Dictionary<string, DataField> Fields
            {
                get { return this.fields; }
            }

            protected override void OnAutoGeneratingField(DataFormAutoGeneratingFieldEventArgs e)
            {
                PropertyInfo propertyInfo = this.CurrentItem.GetType().GetProperty(e.PropertyName);
                foreach (Attribute attribute in propertyInfo.GetCustomAttributes(true))
                {
                    LookupFieldAttribute lookupFieldAttribute = attribute as LookupFieldAttribute;
                    if (lookupFieldAttribute != null)
                    {
                        // Create a combo box.
                        // Bind it to my Lookup IEnumerable
                        // Set the selected item to my Field's Value
                        // Set the binding two way
                    }
                }
                this.fields[e.PropertyName] = e.Field;
                base.OnAutoGeneratingField(e);
            }
        }

    Any cited working examples for SL4/VS2010 would be appreciated. Thanks
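
    One possible shape for the body of that if-block, sketched under the assumption that a lookupService field can resolve the attribute's lookup id to an IEnumerable (both of those names are illustrative, not from the question):

        // Build a ComboBox over the lookup values
        ComboBox comboBox = new ComboBox();
        comboBox.ItemsSource = lookupService.GetLookup(lookupFieldAttribute.LookupId);

        // Bind SelectedItem two-way to the generated property on the current item
        Binding binding = new Binding(e.PropertyName)
        {
            Source = this.CurrentItem,
            Mode = BindingMode.TwoWay
        };
        comboBox.SetBinding(ComboBox.SelectedItemProperty, binding);

        // Swap the auto-generated control for the ComboBox
        e.Field.Content = comboBox;

    Whether SelectedItem or SelectedValue (with SelectedValuePath) is the right binding target depends on whether the lookup returns entities or raw ids, so that choice would need verifying against the actual lookup shape.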

    Read the article

  • SSRS 2005 - Cascading parameters and default value update problem

    - by sHr0oMaN
    I have a report with cascading parameters. The first parameter is Financial Period Type, being either Month or Week. The second parameter is a list of either financial months or weeks, depending on what was selected for the first parameter. This all works well, and selecting a series of different Financial Period Types in sequence correctly updates the second parameter's values.

    However, I now wish to add a default value for the second parameter, which is once again dependent on the first parameter. So I've added an additional field called DefaultPeriod to the dataset populating the second parameter, and set the second parameter's default value to be retrieved from that field.

    The first time I select the Financial Period Type, the default is correctly set. However, changing the Financial Period Type results in an updated list for the second parameter, but the default is incorrect. It remains set to the original default value, even though the dataset has been refreshed and the DefaultPeriod field is correct. This is an issue both in the IDE and on the Report Manager site.

    Read the article

  • MVC - Cocoa interface - Cocoa Design pattern book

    - by Idan
    So I started reading this book: http://www.amazon.com/Cocoa-Design-Patterns-Erik-Buck/dp/0321535022

    In chapter 2 it explains the MVC design pattern and gives an example I need some clarification on. The simple example shows a view with the following fields: hourlyRate, workHours, standardHours, salary. The example is divided into 3 parts:

    - View - contains some text fields and a table (the table contains a list of employees' data).
    - Controller - comprised of an NSArrayController class (contains an array of MyEmployee).
    - Model - MyEmployee, a class which describes an employee.

    MyEmployee has one method, which returns the salary according to the calculation logic, and attributes in accordance with the view's UI controls. MyEmployee inherits from NSManagedObject.

    A few things I'm not sure of:

    1. Inside the MyEmployee class implementation file, the calculation method gets the class attributes using a statement like "[[self valueForKey:@"hourlyRate"] floatValue];". However, inside the header there is no data member named hourlyRate or any of the view fields. I'm not quite sure how this works, and how it gets the value from the right view field (does it have to be the same name as the field name in the view?). Maybe the connection is made somehow using Interface Builder and was not shown in the book?

    And more important:

    2. How does it separate the view from the model? Let's say, as the book implies might happen, I decide one day to remove one of the fields in the view. As far as I understand, that means changing the way the salary method works in MyEmployee (because we have one field less), and removing one attribute from the same class. So how does that separate the View from the Model, if changing one reflects on the other? I guess I'm getting something wrong...

    Any comments? Thanks

    Read the article

  • C#: Struct Constructor: "fields must be fully assigned before control is returned to the caller."

    - by Rosarch
    Here is a struct I am trying to write:

        public struct AttackTraits
        {
            public AttackTraits(double probability, int damage, float distance)
            {
                Probability = probability;
                Distance = distance;
                Damage = damage;
            }

            private double probability;

            public double Probability
            {
                get { return probability; }
                set
                {
                    if (value > 1 || value < 0)
                    {
                        throw new ArgumentOutOfRangeException("Probability values must be in the range [0, 1]");
                    }
                    probability = value;
                }
            }

            public int Damage { get; set; }
            public float Distance { get; set; }
        }

    This results in the following compilation errors:

    - The 'this' object cannot be used before all of its fields are assigned to
    - Field 'AttackTraits.probability' must be fully assigned before control is returned to the caller
    - Backing field for automatically implemented property 'AttackTraits.Damage' must be fully assigned before control is returned to the caller. Consider calling the default constructor from a constructor initializer.
    - Backing field for automatically implemented property 'AttackTraits.Distance' must be fully assigned before control is returned to the caller. Consider calling the default constructor from a constructor initializer.

    What am I doing wrong?
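
    For reference, the compiler hint in the last two errors points at one common fix: chain the constructor to the parameterless default constructor with : this(), which zero-initializes every field (including the hidden backing fields of the auto-properties) before the constructor body runs, so the property setters can then be used. A sketch of the changed constructor only:

        public AttackTraits(double probability, int damage, float distance)
            : this() // zero-initializes all fields first, satisfying definite assignment
        {
            Probability = probability; // setter runs its range check
            Distance = distance;
            Damage = damage;
        }

    The alternative is to replace the auto-properties with explicit backing fields and assign those fields directly in the constructor, but that bypasses the Probability setter and its validation, which is usually less desirable here.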

    Read the article

  • Zend Framework, Zend_Form_Element how to set custom name?

    - by ikso
    Hello, I have a form where some fields act as rows, so I can add/delete them using JS. For example:

    Field with ID=1 (existing row):

        <input id="id[1]" type="text" name="id[1]" value="1" />
        <input id="name[1]" type="text" name="name[1]" value="100" />

    Field with ID=2 (existing row):

        <input id="id[2]" type="text" name="id[2]" value="2" />
        <input id="name[2]" type="text" name="name[2]" value="200" />

    New row created by default (to allow adding one more row to the existing rows):

        <input id="id[n0]" type="text" name="id[n0]" value="" />
        <input id="name[n0]" type="text" name="name[n0]" value="" />

    New row created by JS:

        <input id="id[n1]" type="text" name="id[n1]" value="" />
        <input id="name[n1]" type="text" name="name[n1]" value="" />

    So when we process the form, we will know which rows to update and which to add (if the index starts with "n" - a new element; if the index is a number - an existing element). I tried subforms... but do I have to create a subform for each field? I used the following code:

        $subForm = new Zend_Form_SubForm();
        $subForm->addElement('Text', 'n0');
        $this->addSubForm($subForm, 'pid');

        $subForm = new Zend_Form_SubForm();
        $subForm->addElement('Text', 'n0');
        $this->addSubForm($subForm, 'name');

    What is the best way to do this?

    1) Use subforms?
    2) Extend Zend/Form/Decorator/ViewHelper.php to use names like name[nX]?
    3) Other solutions?

    Thanks.

    Read the article

  • Unable to Update Sharepoint Document Properties : Required Fields are Empty.

    - by Pari
    Hi, I am updating documents on SharePoint using the Lists.asmx web service. The problem I am facing is: fields are not getting updated when some of the required fields are missing, but to fill the required fields I have to perform an update. The "ID" field is compulsory at the time of an update, and we get this ID only after uploading the document (from the "ows_ID" attribute value).

    Edit: As said by "Janis Veinbergs", we can't get this ID until the document is actually saved. So how will I update the document, given that the ID field is a must for the update?

    If I don't put the ID field, the error is:

        0x8102000a - Invalid URL Parameter.
        The URL provided contains an invalid Command or Value. Please check the URL again.

    If I put a null value for it:

        0x81020016 - Item does not exist.
        The page you selected contains an item that does not exist. It may have been deleted by another user.

    Is there any way to set document properties at the time of uploading files to SharePoint?

    Note: I am uploading the file in chunks, and not using Microsoft.SharePoint.dll. Language: C#. I tried this code, but there again the properties are set after uploading the file.
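
    For reference, a minimal sketch of an update through Lists.asmx once the item ID is known (the web-reference name ListsService, the list name, and the item ID 3 below are illustrative assumptions, not from the question):

        // Build the CAML batch for UpdateListItems
        XmlDocument doc = new XmlDocument();
        XmlElement batch = doc.CreateElement("Batch");
        batch.SetAttribute("OnError", "Continue");
        batch.InnerXml =
            "<Method ID='1' Cmd='Update'>" +
            "  <Field Name='ID'>3</Field>" +              // the ows_ID value returned after upload
            "  <Field Name='Title'>My document</Field>" +
            "</Method>";

        ListsService.Lists lists = new ListsService.Lists();
        lists.Credentials = System.Net.CredentialCache.DefaultCredentials;
        XmlNode result = lists.UpdateListItems("Shared Documents", batch);

    Since the ID only exists after the upload, the two-step upload-then-update flow is hard to avoid with Lists.asmx alone; setting properties in a single round trip is generally the territory of the Copy.asmx CopyIntoItems call, which accepts a FieldInformation array alongside the file bytes.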

    Read the article

  • Error converting JSON to .Net object in asp.net

    - by Vinni
    Hello guys, I am unable to convert a JSON string to a .NET object in ASP.NET. I am sending the JSON string from client to server using a hidden field (by putting the JSON object's string into the hidden field and reading the hidden field value in the code-behind file).

    JSON string/object:

        [[{"OfferId":"1","OrderValue":"11","HostingTypeID":"3"},
          {"OfferId":"1","OrderValue":"11","HostingTypeID":"3"},
          {"OfferId":"1","OrderValue":"11","HostingTypeID":"3"},
          {"OfferId":"1","OrderValue":"2","HostingTypeID":"3"},
          {"OfferId":"1","OrderValue":"2","HostingTypeID":"3"},
          {"OfferId":"1","OrderValue":"67","HostingTypeID":"3"},
          {"OfferId":"1","OrderValue":"67","HostingTypeID":"3"}],
         [{"OfferId":"1","OrderValue":"99","HostingTypeID":"6"}],
         [{"OfferId":"1","OrderValue":"10","HostingTypeID":"8"}]]

    .NET object:

        public class JsonFeaturedOffer
        {
            public string OfferId { get; set; }
            public string OrderValue { get; set; }
            public string HostingTypeID { get; set; }
        }

    Conversion code in the code-behind file:

        byte[] byteArray = Encoding.ASCII.GetBytes(HdnJsonData.Value);
        MemoryStream stream = new MemoryStream(byteArray);
        DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(JsonFeaturedOffer));
        object result = serializer.ReadObject(stream);
        JsonFeaturedOffer jsonObj = result as JsonFeaturedOffer;

    While converting I am getting the following error: Expecting element 'root' from namespace ''.. Encountered 'None' with name '', namespace ''.
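
    One likely cause worth noting: the payload is an array of arrays, while the serializer was constructed for a single JsonFeaturedOffer. A sketch of deserializing to the matching shape instead (my suggestion, not from the question):

        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Json;
        using System.Text;

        var serializer = new DataContractJsonSerializer(typeof(List<List<JsonFeaturedOffer>>));
        using (var stream = new MemoryStream(Encoding.UTF8.GetBytes(HdnJsonData.Value)))
        {
            var offers = (List<List<JsonFeaturedOffer>>)serializer.ReadObject(stream);
            // offers[0][0].OfferId == "1", etc.
        }

    Depending on the exact .NET 3.5 servicing level, the POCO class may also need [DataContract]/[DataMember] attributes for DataContractJsonSerializer to map the properties.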

    Read the article

  • DHTMLX Calendar onChange

    - by Sickaaron
    I am having an issue with the DHTMLX calendar and the onchange on the input field. I am initialising the calendar with the following code:

        jQuery(document).ready(function($){
            var calendar;
            calendar = new dhtmlXCalendarObject(["date"]);
            calendar.hideTime();
            calendar.setDateFormat("%d/%m/%Y");
            $('.lightbox').lightbox();
            $('#slider').cycle();
        });

        function checkDate() {
            var date = $("[name='date']").val();
            alert(date);
        }

    In the form I am using:

        <input type="text" name="date" id="date" onchange="checkDate();" />

    How it should work (and did work with the jQuery UI datepicker): once the date is entered into the input field, the onchange function fires. However in this case, the onchange with this calendar does nothing. I have tried changing the code to this:

        jQuery(document).ready(function($){
            var calendar;
            calendar = new dhtmlXCalendarObject(["date"]);
            calendar.hideTime();
            calendar.setDateFormat("%d/%m/%Y");
            calendar.attachEvent("onChange", function(date, state){
                checkDate();
            });
            $('.lightbox').lightbox();
            $('#slider').cycle();
        });

        function checkDate() {
            var date = $("[name='date']").val();
            alert(date);
        }

    But the onChange event happens before the date is entered, i.e. when I click on the input field. The reason for changing from jQuery UI to DHTMLX is that the jQuery UI datepicker does not support all platforms (iPhones/iPads etc.) and DHTMLX does.
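
    One direction worth trying (a sketch only; whether the calendar fires an event with this exact name should be checked against the dhtmlxCalendar version in use, so treat "onClick" as an assumption): attach to the calendar's date-selection event rather than onChange, so the handler runs only after a day cell has actually been picked:

        calendar.attachEvent("onClick", function (date) {
            checkDate(); // the input has been populated with the chosen date by now
            return true;
        });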

    Read the article

  • Ext.data.Store, Javascript Arrays and Ext.grid.ColumnModel

    - by Michael Wales
    I am using Ext.data.Store to call a PHP script which returns a JSON response with some metadata about fields that will be used in a query (unique name, table, field, and user-friendly title). I then loop through each of the Ext.data.Record objects, placing the data I need into an array (this_column), push that array onto the end of another array (columns), and eventually pass this to an Ext.grid.ColumnModel object.

    The problem I am having is: no matter which query I am testing against (I have a number of them, varying in size and complexity), the columns array always works as expected up to columns[15]. At columns[16], all indexes from that point and previous are filled with the value of columns[15]. This behavior continues until the loop reaches the end of the Ext.data.Store object, at which point the entire array consists of the same value. Here's some code:

        columns = [];
        this_column = [];

        var MetaData = Ext.data.Record.create([
            {name: 'id'},
            {name: 'table'},
            {name: 'field'},
            {name: 'title'}
        ]);

        // Query the server for metadata for the query we're about to run
        metaDataStore = new Ext.data.Store({
            autoLoad: true,
            reader: new Ext.data.JsonReader({
                totalProperty: 'results',
                root: 'fields',
                id: 'id'
            }, MetaData),
            proxy: new Ext.data.HttpProxy({
                url: 'index.php/' + type + '/' + slug
            }),
            listeners: {
                'load': function () {
                    metaDataStore.each(function(r) {
                        this_column['id'] = r.data['id'];
                        this_column['header'] = r.data['title'];
                        this_column['sortable'] = true;
                        this_column['dataIndex'] = r.data['table'] + '.' + r.data['field'];

                        // This displays valid information, through the entire process
                        console.info(this_column['id'] + ' : ' + this_column['header'] + ' : ' + this_column['sortable'] + ' : ' + this_column['dataIndex']);

                        columns.push(this_column);
                    });

                    // This goes nuts at columns[15]
                    console.info(columns);

                    gridColModel = new Ext.grid.ColumnModel({
                        columns: columns
                    });
                }
            }
        });
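
    A likely culprit (my reading, not confirmed in the question): this_column is created once outside the loop, so every columns.push(this_column) pushes a reference to the same object, and each pass of the loop overwrites the values that all of the pushed entries point at. A sketch of the fix is to build a fresh object per record:

        metaDataStore.each(function (r) {
            var col = {
                id: r.data['id'],
                header: r.data['title'],
                sortable: true,
                dataIndex: r.data['table'] + '.' + r.data['field']
            };
            columns.push(col); // each entry now owns its own object
        });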

    Read the article

  • Using SharePoint label to display document version in Word 2007 doesn't work when moved to another library

    - by ITManagerWhoCodes
    I am surfacing the Document Library version of a Word 2007 document by creating a Label ({version}) within the content type of the Document Library and adding it as a Quick Part label in the Word 2007 document. This works great: the latest version always shows up when I open the Word document.

    I also added this Version Quick Part field to the footer of the Word document and then added this document as a document template to my content type, "ContentTypeMain". Now I can go to my Document Library and create a new instance of "ContentTypeMain" with the Version field automatically there. This works great as well.

    However, if I create another Document Library and add the same content type, "ContentTypeMain", to it, the value of the Version Quick Part doesn't update or refresh. The only way is to add another copy of the Label Quick Part. It seems like the Quick Part label that maps to the Document Library version is unique to the Document Library.

    My application dynamically creates subsites using site definitions and list templates, so the document libraries in each of the subsites are all created from the same list template. I inspected the XML files under the hood of the Word document, and it does look like there is a GUID attached to the Quick Part Version field.

    Read the article

  • How to add dynamic profile fields in Invision Power Board?

    - by user361908
    I run a game server and want to link each person's in-game character name and stats to Invision Power Board. I've set up IPB so players currently log in with their in-game login. That means their username on the forum is the same as their username for the game. They can have multiple characters on one account, so ideally I'd like to allow them to choose a main character and display an actual image of that character, and also allow them to display other characters if they are online.

    Currently I'm doing something like this by hacking profileFields.php, but it's messy and not very efficient on the user or server end. My code currently uses 2 custom fields in which the player can enter their character names. To display only their main character, they enter the name in the first field. To also display other characters if they are online, they enter the same name into the second field. To resolve the IDs I have to run a lot of queries. I know PHP, but I am not familiar with IPB's code at all. I just need to be pointed in a direction where I can combine the 2 fields into 1 field.

    tl;dr: Here is my setup:

    - Invision Power Board 3
    - Data is stored in MySQL on the same server the forum is hosted on
    - Usernames on the forum are identical to usernames in the game

    Here is a breakdown of what I'd like to do:

    - In the edit profile section: resolve the forum username to the game's account ID, then display a list of characters and allow the player to choose which characters to display if they are online (checkboxes), as well as a default character to display if none are online (radio or dropdown?).
    - In the posts user info pane: display the online character, or the default if none are online.

    Here is what I need to know:

    - How to generate the list of characters in the profile edit form and allow selection of each character to display, as well as the selection of a default character
    - How to fetch the data and place it in the posts user info pane

    Read the article

  • crystal reports : cannot determine the queries necessary to get data for this report

    - by phill
    I've recently come across the following error in one of my Crystal Reports following an accounting system update:

        Group #1: ? - A : This group section cannot be printed because its condition field is nonexistent or invalid. Format the section to choose another condition field.

    I've verified every database field being used, to ensure it still exists, and checked the formulas for that section. No dice. So, hoping to fix the problem, I remove the section using the Section Expert and run the same database checks. It then complains with the same error for Group #5, so I remove that as well.

    Now I have a new and unusual error when I attempt to run 'Show Query':

        Cannot determine the queries necessary to get data for this report

    I have tried logging off and on to the database and verifying the database. No complaints until I try to run Show Query. When I attempt to run the report, it also throws the same error.

    Any ideas? Am I approaching this incorrectly? This is done in Crystal Reports 10.

    Note: this report is run with the SQL 'sa' user to eliminate any permission issues.

    Read the article

  • django crispy-forms inline forms

    - by abolotnov
    I'm trying to adopt crispy-forms and Bootstrap and use as much of their functionality as possible instead of inventing something over and over again. Is there a way to have inline-forms functionality with crispy-forms/Bootstrap like django-admin forms have? Here is an example:

        class NewProjectForm(forms.Form):
            name = forms.CharField(required=True, label=_(u'???????? ???????'),
                                   widget=forms.TextInput(attrs={'class': 'input-block-level'}))
            group = forms.ModelChoiceField(required=False, queryset=Group.objects.all(),
                                           label=_(u'?????? ????????'),
                                           widget=forms.Select(attrs={'class': 'input-block-level'}))
            description = forms.CharField(required=False, label=_(u'???????? ???????'),
                                          widget=forms.Textarea(attrs={'class': 'input-block-level'}))

            class Meta:
                model = Project
                fields = ('name', 'description', 'group')

            def __init__(self, *args, **kwargs):
                self.helper = FormHelper()
                self.helper.form_class = 'horizontal-form'
                self.helper.form_action = 'submit_new_project'
                self.helper.layout = Layout(
                    Field('name', css_class='input-block-level'),
                    Field('group', css_class='input-block-level'),
                    Field('description', css_class='input-block-level'),
                )
                self.helper.add_input(Submit('submit', _(u'??????? ??????')))
                self.helper.add_input(Submit('cancel', _(u'? ?????????')))
                super(NewProjectForm, self).__init__(*args, **kwargs)

    This displays a decent form. How do I go about adding a form that basically represents this model:

        class Link(models.Model):
            name = models.CharField(max_length=255, blank=False, null=False, verbose_name=_(u'????????'))
            url = models.URLField(blank=False, null=False, verbose_name=_(u'??????'))
            project = models.ForeignKey('Project')

    So there will be a project and name/url links, and a way to add many of them, like in django-admin, where you can add extra 'rows' of data related to your main model: you fill out data for a 'Question' object and below that add data for QuestionOption objects, clicking the '+' icon to add as many QuestionOptions as you want.

    I'm not looking for a way to get the forms auto-generated from models (that's nice, but not the most important) - is there a way to construct a form that will let you add 'rows' of data like django-admin does? A sketch of one route is shown below.
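
    One route worth sketching (my suggestion; the admin's "extra rows" are inline formsets, and crispy-forms can render each form in a formset with a shared helper - names like LinkFormSet below are illustrative):

        from django.forms.models import inlineformset_factory

        # Three empty "rows" by default, like admin inlines
        LinkFormSet = inlineformset_factory(Project, Link, fields=('name', 'url'), extra=3)

        def new_project(request):
            form = NewProjectForm(request.POST or None)
            formset = LinkFormSet(request.POST or None)
            if request.method == 'POST' and form.is_valid() and formset.is_valid():
                project = Project.objects.create(name=form.cleaned_data['name'],
                                                 group=form.cleaned_data['group'],
                                                 description=form.cleaned_data['description'])
                formset.instance = project  # attach the rows to the new project
                formset.save()

    Adding more rows client-side still needs a bit of JS to clone the formset's empty form and bump its TOTAL_FORMS management field, which is essentially what django-admin's '+' icon does.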

    Read the article

  • IndexOutOfBoundsException when updating a contact in contact list - Blackberry

    - by Taha
    Software and simulator versions I am using: BlackBerry smartphone simulator 2.13.0.65, BlackBerry software version 5.0.0_5.0.0.14.

    I am looking at modifying contacts. Below is the code snippet I am using. I am getting an IndexOutOfBoundsException at the line:

        String wtel = blackBerryContact.getString(BlackBerryContact.TEL, supportedAttributes[i]);

    Can someone advise what is going wrong here? Following is the code snippet:

        // Load the address book and let the user choose from a list of contacts
        BlackBerryContactList contactList = (BlackBerryContactList)
            PIM.getInstance().openPIMList(PIM.CONTACT_LIST, PIM.READ_WRITE);
        PIMItem pimItem = contactList.choose();
        BlackBerryContact blackBerryContact = (BlackBerryContact) pimItem;
        PIMList pimList = blackBerryContact.getPIMList();

        // get the supported attributes for Contact.TEL
        int[] supportedAttributes = pimList.getSupportedAttributes(Contact.TEL);
        Dialog.alert("Supported Attributes " + supportedAttributes.length); // gives me 8

        for (int i = 0; i < supportedAttributes.length; i++) {
            if (BlackBerryContact.ATTR_WORK == supportedAttributes[i]) {
                Dialog.alert("updating Work"); // This alert is shown
                Dialog.alert("is supported "
                    + pimList.isSupportedAttribute(BlackBerryContact.TEL, supportedAttributes[i])
                    + " " + pimList.getAttributeLabel(supportedAttributes[i])); // shows true and work

                // I get an IndexOutOfBoundsException here
                String wtel = blackBerryContact.getString(BlackBerryContact.TEL, supportedAttributes[i]);

                if (wtel != "") {
                    pimItem.removeValue(BlackBerryContact.TEL, supportedAttributes[i]);
                }
                // passing the number that has to be updated
                pimItem.addString(Contact.TEL, BlackBerryContact.ATTR_WORK, number);
                if (pimItem.isModified()) {
                    pimItem.commit();
                    Dialog.alert("Updated Work Number");
                }
            }
        }

    I want to update all the supported attributes for the Contact.TEL field
    (http://www.blackberry.com/developers/docs/5.0.0api/net/rim/blackberry/api/pdap/BlackBerryContact.html):

        Field        Values Per Field    Supported Attributes
        Contact.TEL  8                   Contact.ATTR_WORK, Contact.ATTR_HOME,
                                         Contact.ATTR_MOBILE, Contact.ATTR_PAGER,
                                         Contact.ATTR_FAX, Contact.ATTR_OTHER,
                                         Contact.ATTR_HOME2, Contact.ATTR_WORK2
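
    A note on the likely cause (my diagnosis, not from the question): PIMItem.getString(field, index) takes a value index in the range 0..countValues(field)-1, while attribute constants such as Contact.ATTR_WORK are large bit-flag values, so passing one as the index throws IndexOutOfBoundsException. A sketch of iterating the stored values and matching their attributes instead:

        int count = blackBerryContact.countValues(Contact.TEL);
        for (int j = 0; j < count; j++) {
            // getAttributes returns a bit mask of the attributes on this value
            if ((blackBerryContact.getAttributes(Contact.TEL, j) & Contact.ATTR_WORK) != 0) {
                String wtel = blackBerryContact.getString(Contact.TEL, j);
                // replace the value in place rather than remove/add
                blackBerryContact.setString(Contact.TEL, j, Contact.ATTR_WORK, number);
            }
        }

    countValues, getAttributes and setString are part of the standard javax.microedition.pim.PIMItem/Contact API that BlackBerryContact implements.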

    Read the article

  • Overwrite clean method in Django Custom Forms

    - by John
    Hi, I have written a custom widget:

        class AutoCompleteWidget(widgets.TextInput):
            """
            Widget to show an autocomplete box which returns a list of nodes
            available to be tagged.
            """
            def render(self, name, value, attrs=None):
                final_attrs = self.build_attrs(attrs, name=name)
                if not self.attrs.has_key('id'):
                    final_attrs['id'] = 'id_%s' % name
                if not value:
                    value = '[]'
                jquery = u"""
                <script type="text/javascript">
                    $("#%s").tokenInput('%s', {
                        hintText: "Enter the word",
                        noResultsText: "No results",
                        prePopulate: %s,
                        searchingText: "Searching..."
                    });
                    $("body").focus();
                </script>
                """ % (final_attrs['id'], reverse('ajax_autocomplete'), value)
                output = super(AutoCompleteWidget, self).render(name, "", attrs)
                return output + mark_safe(jquery)

        class MyForm(forms.Form):
            AutoComplete = forms.CharField(widget=AutoCompleteWidget)

    This widget uses a jQuery function which autocompletes a word based on entries from the database. You can preset its initial values by setting prePopulate to a JSON string of the form [{"name": "some name", "id": "some id"}]. I do this by setting the initial value of the form field to this JSON string:

        jquery_string = '[{"name": "some name", "id": "some id"}]'
        form = MyForm(initial={'AutoComplete': jquery_string})

    When submitting the form, the value of AutoComplete is returned as a comma-separated list of the selected ids, e.g. 12,45,43,66, which is what I want. However, if there is an error in the form, for example a required field has not been entered, the value of the AutoComplete field is now 12,45,43,66 and not the JSON string which it requires.

    What is the best way to solve this? I was thinking about overriding the clean method in the form class, but I'm not sure how to find out whether any other element has returned an error, e.g.:

        if form.errors:
            form.cleaned_data['AutoComplete'] = json_string
            return form.cleaned_data

    Thanks

    Read the article

  • Html Select List: why would onchange be called twice?

    - by Mark Redman
    I have a page with a select list (an ASP.NET MVC page). The select list and onchange event are specified like this:

        <%=Html.DropDownList("CompanyID", Model.CompanySelectList, "(select company)",
            new { @class = "data-entry-field", @onchange = "companySelectListChanged()" })%>

    The companySelectListChanged function is getting called twice. I am using the nifty code in this question to get the caller. Both times the caller is the onchange event; however, if I look at the caller's caller using:

        arguments.callee.caller.caller

    the first call returns some system code as the caller (I presume), and the second call returns undefined. I am checking for undefined in order to react only once to onchange, but this doesn't seem ideal. Why would onchange be called twice?

    UPDATE: OK, found the culprit! ...apart from me :-) But the issue of the companySelectListChanged function being called twice still stands. The onchange event is set directly on the select as mentioned; this calls the companySelectListChanged function. Notice the 'data-entry-field' class: in a separate linked JavaScript file, a change event on all fields with this class is bound to a function that changes the colour of the save button. This means there are two handlers on the onchange, but why is companySelectListChanged called twice? The additional binding is set as follows:

        $('.data-entry-field').bind('keypress keyup change', function (e) {
            highLightSaveButtons();
        });

    Assuming it's possible to have 2 change events on the select list, I would assume that setting the keypress & keyup events may be breaking something? Any ideas?
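
    One way to rule out double registration (a sketch, my suggestion: keep all handlers in a single jQuery binding and drop the inline onchange attribute from the markup):

        // single change handler for the company dropdown
        $('#CompanyID').bind('change', function () {
            companySelectListChanged();
            highLightSaveButtons();
        });

    If the class-based binding stays, excluding the select from it (e.g. with $('.data-entry-field').not('select')) avoids the same field being wired twice.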

    Read the article

  • Where does lucene .net cache the search results?

    - by Lanceomagnifico
    Hi, I'm trying to figure out where Lucene stores cached query results, how that cache is configured, and how long it caches for. This is for an ASP.NET 3.5 solution. I'm getting this problem: if I run a search and sort the result by a particular product field, it seems to work the very first time each search-and-sort combination is used. If I then go in and change some product attributes, reindex and run the same search and sort, I get the products returned in the same order as the very first result.

    Example:

    - Product A is named: foo
    - Product B is named: bar

    For the first search, sort by name descending. This results in:

    - Product A
    - Product B

    Now mix up the data a bit: change the names so that Product A is named bar and Product B is named foo, reindex, and verify that the index contains the changes for these two products. Then search again. Result:

    - Product A
    - Product B

    Since I changed the alphabetical order of the names, I expected:

    - Product B
    - Product A

    So I think Lucene is caching the search results (which, by the way, would be a very good thing). I just need to know where/how to clear these results. I've tried deleting the index files and doing an IISReset to clear the memory, but it seems to have no effect. So I'm thinking there is another set of Lucene files, outside of the indexes, that Lucene uses for caching.

    EDIT: I just found out that the field you wish to sort on must be indexed un-tokenized. I had the field tokenized, so sorting didn't work.
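
    For reference, a sketch of what that indexing change looks like with the Lucene.Net 2.x-era API (field and variable names here are illustrative):

        // Index the sort field without tokenizing it, so sorting compares whole values
        doc.Add(new Field("name", product.Name, Field.Store.YES, Field.Index.UN_TOKENIZED));

        // Sorting at query time then works as expected
        var sort = new Sort(new SortField("name", SortField.STRING, true)); // true = descending
        var hits = searcher.Search(query, null, 100, sort);

    In later Lucene.Net versions UN_TOKENIZED was renamed NOT_ANALYZED; a common pattern is to index one analyzed copy of the field for searching and a second un-analyzed copy (e.g. "name_sort") purely for sorting.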

    Read the article
