Search Results

Search found 29235 results on 1170 pages for 'dynamic management objects'.


  • New Oracle BI Applications released

    - by THE
    Oracle has just released two new Applications for Oracle Business Intelligence Analytics with the 7.9.6.x Extension Pack:
    · Oracle Manufacturing Analytics, part of the Oracle BI Applications product family, helps discrete and process manufacturing organizations optimize their supply networks by integrating data from across the enterprise value chain, thereby enabling executives, operations managers, cost accountants and production supervisors to make informed and actionable decisions related to manufacturing execution.
    · Oracle Enterprise Asset Management Analytics, part of the Oracle BI Applications product family, offers complete and enhanced visibility into enterprise-wide maintenance information. Pre-built reports covering Maintenance History, Maintenance Cost Analysis and Maintenance Work Orders give Maintenance Managers the information to maximize performance, identify potential issues well in advance, and address them before they escalate into serious problems.
    More information about the existing Business Intelligence Analytics Applications can be found on this page: http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    If you are not familiar with Oracle Manufacturing or Oracle Enterprise Asset Management, these PDFs might get you started:
    http://www.oracle.com/us/products/applications/060289.pdf
    http://www.oracle.com/us/products/applications/057127.pdf

    Read the article

  • Getting Started with Cloud Computing

    - by juanlarios
    You’ve likely heard about how Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you emailed me asking for more info on what Cloud Computing is, including the distinction between "Public Cloud" and "Private Cloud". I want to address these questions and help you get started. Let's begin with a brief set of definitions and some places to find more info; however, an excellent place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.
    Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.
    Private Cloud computing generally means that the hardware and software to run services used by your organization is run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Generally, Private Cloud implementations today are found in larger organizations, but they are also viable for small and medium-sized businesses since they generally allow an automation of services and reduction in IT workloads when properly implemented. Having the right management tools, like System Center 2012, to implement and operate Private Cloud is important in order to be successful.
    So – how do you get started? The first step is to determine what makes the most sense to your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial for each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure. For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy and then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps and more, to get you started with these technologies hands on.
    Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call "The Global Relationship Study", they reach out and contact you to see how they're doing and what Microsoft could do better. If you get an email from "Microsoft Feedback" with the subject line "Help Microsoft Focus on Customers and Partners" between March 5th and April 13th, please take a little time to tell them what you think.
    Cloud Computing Resources:
    • Microsoft Server and Cloud Computing site – information on Microsoft's overall cloud strategy and products.
    • Microsoft Virtual Academy – for free online training to help improve your IT skillset.
    • Office 365 Trial/Info page – get more information or try it out for yourself.
    • Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
    • Windows Intune Trial/Info – get more information or try it out for yourself.
    • Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.
    Additional Resources You May Find Useful:
    • Springboard Series – your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
    • TechNet Evaluation Center – try some of our latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
    • AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording.
    • Tech·Days Online – discover what's next in technology and innovation with Tech·Days session recordings, hands-on labs and Tech·Days TV.

    Read the article

  • ADO and Two Way Storage Tiering

    - by Andy-Oracle
    We get asked the following question about Automatic Data Optimization (ADO) storage tiering quite a bit. Can you tier back to the original location if the data gets hot again? The answer is yes, but not with standard Automatic Data Optimization policies, at least not reliably. That's not how ADO is meant to operate. ADO is meant to mirror a traditional view of Information Lifecycle Management (ILM) where data will be very volatile when first created, will become less active or cool, and then will eventually cease to be accessed at all (i.e. cold). I think the reason this question gets asked is because customers realize that many of their business processes are cyclical, and the thinking goes that those segments that only get used during month-end or year-end cycles could sit on lower cost storage when not being used. Unfortunately this doesn't fit very well with the ADO storage tiering model. ADO storage tiering is based on the amount of free and used space in the source tablespace. There are two parameters that control this behavior, TBS_PERCENT_USED and TBS_PERCENT_FREE. When the space used in the tablespace exceeds the TBS_PERCENT_USED value, then segments specified in storage tiering clause(s) can be moved until the percent of free space reaches the TBS_PERCENT_FREE value. It is worth mentioning that no checks are made for available space in the target tablespace. Now, it is certainly possible to create custom functions to control storage tiering, but this can get complicated. The biggest problem is ensuring that there is enough space to move the segment back to tier 1 storage, assuming that that's the goal. This isn't as much of a problem when moving from tier 1 to tier 2 storage because there is typically more tier 2 storage available. At least that's the premise, since it is supposed to be less costly, lower performing and higher capacity storage. In either case though, if there isn't enough space then the operation fails. In the case of a customized function, the question becomes: do you attempt to free the space so the move can be made, or do you just stop and return false so that the move cannot take place? This is really the crux of the issue. Once you cross into this territory you're really going to have to implement two-way hierarchical storage, and the whole point of ADO was to provide automatic storage tiering. You're probably better off using heat map and/or business access requirements and building your own hierarchical storage management infrastructure if you really want two-way storage tiering.
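    To make the two controlling parameters concrete, here is a minimal sketch (tablespace, table and function names are placeholders, and the threshold values are illustrative) of how the defaults can be adjusted and a one-way tiering policy attached:

        -- Adjust the thresholds that drive ADO storage tiering
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 85);
        EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 25);

        -- One-way tiering: move SALES segments to lower-cost storage once the
        -- source tablespace crosses TBS_PERCENT_USED. Note that no check is
        -- made for free space in TIER2_TBS.
        ALTER TABLE sales ILM ADD POLICY TIER TO tier2_tbs;

        -- A custom function can gate the move, which is exactly where the
        -- two-way logic described above starts getting complicated:
        ALTER TABLE sales ILM ADD POLICY TIER TO tier2_tbs ON sales_tiering_ok;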

    Read the article

  • Oracle Excellence Award

    - by Hartmut Wiese
    CALL FOR NOMINATIONS 2014 Oracle Excellence Award: Sustainability Innovation Is your organization using an Oracle product to help with a sustainability initiative while reducing costs? Saving energy? Saving gas? Saving paper? For example, you may use Oracle’s Agile Product Lifecycle Management to design more eco-friendly products, Oracle Transportation Management to reduce fleet emissions, Oracle Exadata Database Machine to decrease power and cooling needs while increasing database performance, Oracle Business Intelligence to measure environmental impacts, or one of many other Oracle products. Your organization may be eligible for the 2014 Oracle Excellence Award: Sustainability Innovation. Submit a nomination form located here by Friday June 20 if your company is using any Oracle product to take an environmental lead as well as to reduce costs and improve business efficiencies by using green business practices. These awards will be presented during Oracle OpenWorld 2014 (September 28-October 2) in San Francisco.
    About the Award
    • Winners will be selected from the customer and/or partner nominations. Either a customer, their partner, or Oracle representative can submit the nomination form on behalf of the customer.
    • There is a nomination form here to discuss your use of Oracle products and how they have helped your sustainability efforts and reduced costs.
    • Winners will be selected based on the extent of the environmental impact they have had as well as the business efficiencies they have achieved through their combined use of Oracle products.
    Nomination Eligibility
    • Your company uses at least one component of Oracle products, whether it's the Oracle database, business applications, Fusion Middleware, or Sun servers/storage.
    • This solution should be in production or in active development.
    • Nomination deadline: Friday June 20, 2014.
    Benefits to Award Winners
    • Award presented to winners during Oracle OpenWorld by Jeff Henley, Oracle Chairman of the Board
    • Free Oracle OpenWorld registration pass for each winning customer
    • 2014 Oracle Excellence Award: Sustainability Innovation award logo for inclusion on your own website &/or press release
    • Possible placement in Oracle Profit Magazine &/or Oracle Magazine
    • ‘Enable the Eco-Enterprise’ podcast opportunity
    See last year's winners here
    Questions? Send an email to: [email protected]
    Follow Oracle’s Sustainability Solutions on Twitter, LinkedIn, YouTube, and the Sustainability Matters blog
    Web page with award details: http://www.oracle.com/us/products/applications/green/call-for-nominations-185050.html

    Read the article

  • Broken Views

    - by Ajarn Mark Caldwell
    “SELECT *” isn’t just hazardous to performance, it can actually return blatantly wrong information. There are a number of blog posts and articles out there that actively discourage the use of the SELECT * FROM … syntax. The two most common explanations that I have seen are:
    Performance: The SELECT * syntax will return every column in the table, but frequently you really only need a few of the columns, and so by using SELECT * you are retrieving large volumes of data that you don’t need, but the system has to process, marshal across tiers, and so on. It would be much more efficient to only select the specific columns that you need.
    Future-proof: If you are taking other shortcuts in your code, along with using SELECT *, you are setting yourself up for trouble down the road when enhancements are made to the system. For example, if you use SELECT * to return results from a table into a DataTable in .NET, and then reference columns positionally (e.g. myDataRow[5]), you could end up with bad data if someone happens to add a column into position 3, skewing all the remaining columns’ ordinal positions. Or if you use INSERT…SELECT *, then you will likely run into errors when a new column is added to the source table in any position.
    And if you use SELECT * in the definition of a view, you will run into a variation of the future-proof problem mentioned above. One of the guys on my team, Mike Byther, ran across this in a project we were doing, but fortunately he caught it while we were still in development. I asked him to put together a test to prove that this was related to the use of SELECT * and not some other anomaly. I’ll walk you through the test script so you can see for yourself what happens. We are going to create a table and two views that are based on that table, one of them using SELECT * and the other explicitly listing the column names. The script to create these objects is listed below.

        IF OBJECT_ID('testtab') IS NOT NULL DROP TABLE testtab
        GO
        IF OBJECT_ID('testtab_vw') IS NOT NULL DROP VIEW testtab_vw
        GO
        IF OBJECT_ID('testtab_vw_named') IS NOT NULL DROP VIEW testtab_vw_named
        GO
        CREATE TABLE testtab (col1 NVARCHAR(5) NULL, col2 NVARCHAR(5) NULL)
        INSERT INTO testtab (col1, col2) VALUES ('A','B'), ('A','B')
        GO
        CREATE VIEW testtab_vw AS SELECT * FROM testtab
        GO
        CREATE VIEW testtab_vw_named AS SELECT col1, col2 FROM testtab
        GO

    Now, to prove that the two views currently return equivalent results, select from them.

        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    OK, so far, so good. Now, what happens if someone makes a change to the definition of the underlying table, and that change results in a new column being inserted between the two existing columns? (Side note: I normally prefer to append new columns to the end of the table definition, but some people like to keep their columns alphabetized, and for clarity for later people reviewing the schema, it may make sense to group certain columns together. Whatever the reason, it sometimes happens, and you need to protect yourself and your code from the repercussions.)

        DROP TABLE testtab
        GO
        CREATE TABLE testtab (col1 NVARCHAR(5) NULL, col3 NVARCHAR(5) NULL, col2 NVARCHAR(5) NULL)
        INSERT INTO testtab (col1, col3, col2) VALUES ('A','C','B'), ('A','C','B')
        GO
        SELECT 'star', col1, col2 FROM testtab_vw
        SELECT 'named', col1, col2 FROM testtab_vw_named

    I would have expected that the view using SELECT * in its definition would essentially pass through the column name and still retrieve the correct data, but that is not what happens. When you run our two select statements again, you see that the view that is based on SELECT * actually retrieves the data based on the ordinal position of the columns at the time that the view was created. Sure, one work-around is to recreate the view, but you can’t really count on other developers to know the dependencies you have built in, and they won’t necessarily recreate the view when they refactor the table. I am sure that there are reasons and justifications for why views behave this way, but I find it particularly disturbing that you can have code asking for col2, but actually be receiving data from col3. By the way, for the record, this entire scenario and accompanying test script apply to SQL Server 2008 R2 with Service Pack 1. So, let the developer beware… know what assumptions are in effect around your code, and keep on discouraging people from using the SELECT * syntax in anything but the simplest of ad-hoc queries. And of course, let’s clean up after ourselves. To eliminate the database objects created during this test, run the following commands.

        DROP TABLE testtab
        DROP VIEW testtab_vw
        DROP VIEW testtab_vw_named
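    A lighter-weight workaround than dropping and recreating the view is worth noting here: SQL Server’s sp_refreshview rebinds the metadata of a non-schema-bound view to the current table definition. A quick sketch against the test objects above:

        EXEC sp_refreshview 'testtab_vw'
        -- after the refresh, the star view resolves col2 by name again:
        SELECT 'star', col1, col2 FROM testtab_vw

    Creating views WITH SCHEMABINDING is another defensive option; a schema-bound view cannot use SELECT * at all, and it blocks table changes that would break the view in the first place.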

    Read the article

  • I.T. Chargeback : Core to Cloud Computing

    - by Anand Akela
    Contributed by Mark McGill. Consolidation and Virtualization have been widely adopted over the years to help deliver benefits such as increased server utilization, greater agility and lower cost to the I.T. organization. These are key enablers of cloud, but in themselves they do not provide a complete cloud solution. Building a true enterprise private cloud involves moving from an admin-driven world, where the I.T. department is ultimately responsible for the provisioning of servers, databases, middleware and applications, to a world where the consumers of I.T. resources can provision their infrastructure, platforms and even complete application stacks on demand. Switching from an admin-driven provisioning model to a user-driven model creates some challenges. How do you ensure that users provisioning resources will not provision more than they need? How do you encourage users to return resources when they have finished with them so that others can use them? While chargeback has existed as a concept for many years (especially in mainframe environments), it is the move to this self-service model that has created a need for a new breed of chargeback applications for cloud. Enabling self-service without some form of chargeback is like opening a shop where all of the goods are free. A successful chargeback solution will be able to allocate the costs of shared I.T. infrastructure based on the relative consumption by the users. Doing this creates transparency between the I.T. department and the consumers of I.T. When users are able to understand how their consumption translates to cost, they are much more likely to be prudent when it comes to their use of I.T. resources. This also gives them control of their I.T. costs, as moderate usage will translate to a lower charge at the end of the month. Implementing chargeback successfully creates a win-win situation for I.T. and the consumers. Chargeback can help to ensure that I.T. resources are used for activities that deliver business value. It also improves the overall utilization of I.T. infrastructure, as I.T. resources that are not needed are not left running idle. Enterprise Manager 12c provides an integrated metering and chargeback solution for Enterprise Manager Targets. This solution is built on top of the rich configuration and utilization information already available in Enterprise Manager. It provides metering not just for virtual machines, but also for physical hosts, databases and middleware. Enterprise Manager 12c provides metering based on the utilization and configuration of the following types of Enterprise Manager Target:
    • Oracle VM Host
    • Oracle Database
    • Oracle WebLogic Server
    Using Enterprise Manager Chargeback, administrators are able to create a set of Charge Plans that are used to attach prices to the various metered resources. These plans can contain fixed costs (e.g. $10/month/database), configuration-based costs (e.g. $10/month if OS is Windows) and utilization-based costs (e.g. $0.05/GB of Memory/hour). The self-service user provisioning these resources is then able to view a report that details their usage and helps them understand how this usage translates into cost. Armed with this information, the user is able to determine if the resources are delivering adequate business value based on what is being charged.
    Figure 1: Chargeback in Self-Service Portal
    Enterprise Manager 12c provides a variety of additional interfaces into this data. The administrator can access summary and trending reports. Summary reports allow the administrator to drill down through the cost center hierarchy to identify, for example, the top resource consumers across the organization.
    Figure 2: Charge Summary Report
    Trending reports can be used for I.T. planning and budgeting, as they show utilization and charge trends over a period of time.
    Figure 3: CPU Trend Report
    We also provide chargeback reports through BI Publisher. This provides a way for users who do not have an Enterprise Manager login (such as Line of Business managers) to view charge and usage information. For situations where a bill needs to be produced, chargeback can be integrated with billing applications such as Oracle Billing and Revenue Management (BRM). Further information on Enterprise Manager 12c’s integrated metering and chargeback: White Paper | Screenwatch | Cloud Management on OTN

    Read the article

  • Is this over-abstraction? (And is there a name for it?)

    - by mwhite
    I work on a large Django application that uses CouchDB as a database and couchdbkit for mapping CouchDB documents to objects in Python, similar to Django's default ORM. It has dozens of model classes and a hundred or two CouchDB views. The application allows users to register a "domain", which gives them a unique URL containing the domain name that gives them access to a project whose data has no overlap with the data of other domains. Each document that is part of a domain has its domain property set to that domain's name. As far as relationships between the documents go, all domains are effectively mutually exclusive subsets of the data, except for a few edge cases (some users can be members of more than one domain, and there are some administrative reports that include all domains, etc.). The code is full of explicit references to the domain name, and I'm wondering if it would be worth the added complexity to abstract this out. I'd also like to know if there's a name for the sort of bound property approach I'm taking here. Basically, I have something like this in mind:

    Before

    in models.py:

        class User(Document):
            domain = StringProperty()

        class Group(Document):
            domain = StringProperty()
            name = StringProperty()
            user_ids = StringListProperty()

            # method that returns related document set
            def users(self):
                return [User.get(id) for id in self.user_ids]

            # method that queries a couch view optimized for a specific lookup
            @classmethod
            def by_name(cls, domain, name):
                # the view method is provided by couchdbkit and handles
                # wrapping json CouchDB results as Python objects, and
                # can take various parameters modifying behavior
                return cls.view('groups/by_name', key=[domain, name])

            # method that creates a related document
            def get_new_user(self):
                user = User(domain=self.domain)
                user.save()
                self.user_ids.append(user._id)
                return user

    in views.py:

        from models import User, Group

        # there are tons of views like this, (request, domain, ...)
        def create_new_user_in_group(request, domain, group_name):
            group = Group.by_name(domain, group_name)[0]
            user = User(domain=domain)
            user.save()
            group.user_ids.append(user._id)
            group.save()

    in group/by_name/map.js:

        function (doc) {
            if (doc.doc_type == "Group") {
                emit([doc.domain, doc.name], null);
            }
        }

    After

    models.py:

        class DomainDocument(Document):
            domain = StringProperty()

            @classmethod
            def domain_view(cls, *args, **kwargs):
                kwargs['key'] = [cls.domain.default] + kwargs['key']
                return super(DomainDocument, cls).view(*args, **kwargs)

            @classmethod
            def get(cls, *args, **kwargs):
                # a keyword argument after **kwargs is not valid Python,
                # so validate_domain is popped out of kwargs instead
                validate_domain = kwargs.pop('validate_domain', True)
                ret = super(DomainDocument, cls).get(*args, **kwargs)
                if validate_domain and ret.domain != cls.domain.default:
                    raise Exception()
                return ret

            def models(self):
                # a mapping of all models in the application; accessing one
                # returns the equivalent of
                #     class BoundUser(User):
                #         domain = StringProperty(default=self.domain)
                pass

        class User(DomainDocument):
            pass

        class Group(DomainDocument):
            name = StringProperty()
            user_ids = StringListProperty()

            def users(self):
                return [self.models.User.get(id) for id in self.user_ids]

            @classmethod
            def by_name(cls, name):
                return cls.domain_view('groups/by_name', key=[name])

            def get_new_user(self):
                user = self.models.User()
                user.save()

    views.py:

        # the @domain_view decorator sets request.models to the same sort of
        # object that is returned by DomainDocument.models and removes the
        # domain argument from the URL router
        @domain_view
        def create_new_user_in_group(request, group_name):
            group = request.models.Group.by_name(group_name)
            user = request.models.User()
            user.save()
            group.user_ids.append(user._id)
            group.save()

    (Might be better to leave the abstraction leaky here in order to avoid having to deal with a couchapp-style //! include of a wrapper for emit that prepends doc.domain to the key, or some other similar solution.)

        function (doc) {
            if (doc.doc_type == "Group") {
                emit([doc.name], null);
            }
        }

    Pros and Cons

    So what are the pros and cons of this?

    Pros:
    • DRYer
    • prevents you from creating related documents but forgetting to set the domain
    • prevents you from accidentally writing a django view - couch view execution path that leads to a security breach
    • doesn't prevent you from accessing the underlying self.domain and the normal Document.view() method
    • potentially gets rid of the need for a lot of sanity checks verifying whether two documents whose domains we expect to be equal are

    Cons:
    • adds some complexity
    • hides what's really happening
    • requires no two model modules to have classes with the same name, or you would need to add sub-attributes to self.models for modules. However, requiring project-wide unique class names for models should actually be fine because they correspond to the doc_type property couchdbkit uses to decide which class to instantiate them as, which should be unique.
    • removes explicit dependency documentation (from group.models import Group)

    Read the article

  • Challenges in Corporate Reporting - New Independent Research

    - by ndwyouell
    Earlier this year, Oracle and Accenture sponsored a global study on trends in financial close and reporting. We surveyed 1,123 finance professionals in large organizations in 12 countries around the world during February and March. Financial Consolidation and Reporting is the most mature aspect of Enterprise Performance Management, with mainstream solutions having been around for over 30 years. But of course over this time there have been many changes and very significant increases in regulation. So just what is the current state of Financial Consolidation and Reporting in our major corporations across the world? We commissioned this independent research to find out. Highlights of the results are:
    • Seeking change: Businesses recognize they need to invest in financial reporting to address the challenges they currently face. 47 percent of companies have made substantial investments over the last year in the financial close, filing, and reporting processes.
    • Ineffective investments: Despite these investments, spreadsheets (72 percent) and e-mails (68 percent) are still being used daily to track and manage reporting, suggesting that new investments are falling short of expectations.
    • Increased costs and uncertainty: The situation is so opaque that managers across the finance function are unable to fully understand the financial impact or cost implications of reporting, with 60 percent of respondents admitting they did not know the total cost of managing and publicizing their financial results.
    • Persistent challenges: 68 percent of respondents admitted that they have inadequate visibility into reporting processes, while 84 percent of finance managers surveyed said they find it difficult to control the quality of financial data across the entire reporting process.
    • Decreased effectiveness: 71 percent of finance managers feel their effectiveness is limited in some way by data-analysis–related issues, while 39 percent of C-level or VP-level respondents say their effectiveness is impaired by limited visibility.
    • Missed deadlines: Due to late changes to the chart of accounts, 15 percent of global businesses have missed statutory filings, putting their companies at risk of financial penalties and potentially impacting share value.
    The report makes it clear that investments made to date by these large organizations around the world have been uneven across the close, reporting, and filing processes, which has led to the challenges these organizations currently face in the overall process. Regardless of whether companies are using a variety of solutions or a single solution, the report shows they continue to witness increased costs, ineffectual data management, and missed reporting, which—in extreme circumstances—can impact a company’s corporate image and share value. The good news is that businesses realize that these problems persist, and 86 percent of companies are likely to make a significant investment during the next five years to address these issues. While they should invest, it is critical that they direct investments correctly to address the key issues this research identified:
    • Improving data integrity
    • Optimizing processes
    • Integrating the extended financial close process
    By addressing these issues and with clear guidance on how to implement the correct business processes, infrastructure, and software solutions, finance teams will find that their reporting processes are much more effective, cost-efficient, and aligned with their performance expectations.
    To get a copy of the full report: http://www.oracle.com/webapps/dialogue/ns/dlgwelcome.jsp?p_ext=Y&p_dlg_id=11747758&src=7300117&Act=92
    To replay a webcast discussing the findings: http://www.cfo.com/webcast.cfm?webcast=14639438&pcode=ORA061912_ORA

    Read the article

  • Is my class structure good enough?

    - by Rivten
    So I wanted to try out this challenge on reddit, which is mostly about how you structure your data the best you can. I decided to challenge my C++ skills. Here's how I planned this. First, there's the Game class. It deals with time and is the only class main has access to. A game has a Forest. For now, this class does not have a lot of things, only a size and a Factory. It will be put to better use when it comes to the SDL stuff, I guess. A Factory is the thing that deals with the Game Objects (a.k.a. Trees, Lumberjacks and Bears). It has a vector of all GameObjects and a queue of Events which will be managed at the end of one month. A GameObject is an abstract class which can be updated and which can notify the Event Listener. The EventListener is a class which handles all the Events of a simulation. It can receive events from a Game Object and notify the Factory if needed, and the latter will manage the event correctly. So, the Tree, Lumberjack and Bear classes all inherit from GameObject. And Sapling and Elder Tree inherit from Tree. Finally, an Event is defined by an event_type enumeration (LUMBERJACK_MAWED, SAPPLING_EVOLUTION, ...) and an event_protagonists union (a GameObject or a pair of GameObjects (who killed whom?)). I was quite happy at first with this because it seems quite logical and flexible. But I ended up questioning this structure. Here's why: I dislike the fact that a GameObject needs to know about the Factory. Indeed, when a Bear moves somewhere, it needs to know if there's a Lumberjack! Or it is the Factory which handles places and objects. It would be great if a GameObject could only interact with the EventListener... or maybe it's not that much of a big deal. Wouldn't it be better if I separated the Factory into three vectors? One for each kind of GameObject. The idea would be to optimize searches. If I'm looking to delete a dead lumberjack, I would only have to look in one shorter vector rather than a very long vector. Another problem arises when I want to know if there is any particular object in a given cell, because I have to look through all the gameObjects and see if they are at the given cell. I would tend to think that the other idea would be to use a matrix, but then the issue would be that I would have empty cells (and therefore unused space). I don't really know if Sapling and Elder Tree should inherit from Tree. Indeed, a Sapling is a Tree, but what about its evolution? Should I just delete the sapling and tell the factory to create a new Tree at the exact same place? It doesn't seem natural to me to do so. How could I improve this? Is the design of an Event quite good? I've never used unions before in C++ but I didn't have any other ideas about what to use. Well, I hope I have been clear enough. Thank you for taking the time to help me!
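    For the Event design specifically, a tagged union along these lines is roughly what the description amounts to (a sketch only; the names are guesses based on the post, and in newer C++ a std::variant would do the same job more safely):

        class GameObject;  // forward declaration

        enum EventType { LUMBERJACK_MAWED, SAPPLING_EVOLUTION /* ... */ };

        struct KillPair { GameObject* killer; GameObject* victim; };

        struct Event {
            EventType type;           // the tag says which union member is active
            union {
                GameObject* subject;  // e.g. the sapling that evolves
                KillPair pair;        // e.g. who killed whom
            };
        };

    Storing pointers rather than whole objects in the union keeps it trivially copyable, which sidesteps most of the classic pitfalls of unions.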

    Read the article

  • WPF MVVM ComboBox SelectedItem or SelectedValue not working

    - by cjibo
    Update: After a bit of investigating, what seems to be the issue is that the SelectedValue/SelectedItem binding is applied before the ItemsSource has finished loading. If I sit at a breakpoint and wait a few seconds, it works as expected. Don't know how I'm going to get around this one. End Update.

    I have an application in WPF using MVVM with a ComboBox. Below is the ViewModel example. The issue I'm having is that when we leave our page and migrate back, the ComboBox does not select the value that is currently selected.

    View Model

        public class MyViewModel
        {
            private MyObject _selectedObject;
            private Collection<MyObject> _objects;
            private IModel _model;

            public MyViewModel(IModel model)
            {
                _model = model;
                _objects = _model.GetObjects();
            }

            public Collection<MyObject> Objects
            {
                get { return _objects; }
                private set { _objects = value; }
            }

            public MyObject SelectedObject
            {
                get { return _selectedObject; }
                set { _selectedObject = value; }
            }
        }

    For the sake of this example, let's say MyObject has two properties (Text and Id). My XAML for the ComboBox looks like this.

    XAML

        <ComboBox Name="MyComboBox" Height="23" Width="auto"
                  SelectedItem="{Binding Path=SelectedObject,Mode=TwoWay}"
                  ItemsSource="{Binding Objects}"
                  DisplayMemberPath="Text"
                  SelectedValuePath="Id">

    No matter which way I configure this, when I come back to the page and the object is reassembled, the ComboBox will not select the value. The object is returning the correct object via the get in the property, though. I'm not sure if this is just an issue with the way the ComboBox and the MVVM pattern work. The text box binding we are doing works correctly.
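    A likely contributor (an assumption based on the symptoms, not something stated in the post): SelectedObject never raises a change notification, so when the ItemsSource arrives late the ComboBox has no reason to re-evaluate its selection. A minimal sketch of the usual remedy, raising PropertyChanged and re-asserting the selection after the items load:

        public class MyViewModel : INotifyPropertyChanged
        {
            private MyObject _selectedObject;

            public event PropertyChangedEventHandler PropertyChanged;

            public MyObject SelectedObject
            {
                get { return _selectedObject; }
                set
                {
                    _selectedObject = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("SelectedObject"));
                }
            }

            // call this once GetObjects() has completed (hypothetical hook):
            private void OnObjectsLoaded()
            {
                SelectedObject = _selectedObject; // re-raise so the ComboBox re-binds
            }
        }

    Declaring ItemsSource before SelectedItem in the XAML can also matter, since attribute order affects the order in which the bindings are applied.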

    Read the article

  • DataSet does not support System.Nullable<>

    - by a_m0d
    I'm trying to set the DataSource for a Crystal Reports report, but I've run into a few problems. I've been following a guide written by Mohammad Mahdi Ramezanpour, and have managed to get all the way to the last part now (setting the DataSource). However, I have a problem that Mohammad does not seem to have - when I pass the results of my query to the report, I end up with the following exception:

        DataSet does not support System.Nullable<>.

    This is the query I am using:

        public IQueryable<Part> GetPartsToDisplayOnStockReport()
        {
            return from part in db.Parts
                   where part.showOnStockReport == true
                   select part;
        }

    and the way I pass it to the Report:

        public ActionResult ViewStockReport()
        {
            StockReport stockReport = new StockReport();
            var parts = ordersRepository.GetPartsToDisplayOnStockReport().ToList();
            stockReport.SetDataSource(parts);
            Stream stream = stockReport.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat);
            return File(stream, "application/pdf");
        }

    I have also tried changing my query to this code, in the hope that it would fix my problem:

        return (from part in db.Parts
                where part.showOnStockReport == true
                select part) ?? db.Parts.DefaultIfEmpty();

    but it still complained about the same problem. How can I pass the results of this query to my report, to use it as a data source? Also, if each of my Parts objects contains other objects / collections of other objects, will I be able to reference them in the report with a datasource like this?
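    The exception comes from the report engine building a DataSet behind the scenes: a DataColumn cannot be created with a Nullable<T> data type. A hedged workaround (the helper below is an assumption, not a Crystal Reports API) is to flatten the list into a DataTable yourself, mapping nullable properties to their underlying type and substituting DBNull:

        public static DataTable ToDataTable<T>(IEnumerable<T> items)
        {
            var table = new DataTable(typeof(T).Name);
            var props = typeof(T).GetProperties();
            foreach (var prop in props)
            {
                // unwrap Nullable<int> to int, etc.; DataColumn rejects Nullable<T>
                var colType = Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType;
                table.Columns.Add(prop.Name, colType);
            }
            foreach (var item in items)
            {
                var row = table.NewRow();
                foreach (var prop in props)
                    row[prop.Name] = prop.GetValue(item, null) ?? (object)DBNull.Value;
                table.Rows.Add(row);
            }
            return table;
        }

        // usage: stockReport.SetDataSource(ToDataTable(parts));

    As for the second question: a flat DataTable only carries scalar columns, so child objects and collections of a Part will not flow through this way; they would need their own table, for example as a subreport data source.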

    Read the article

  • Very slow performance deserializing using datacontractserializer in a Silverlight Application.

    - by caryden
    Here is the situation: a Silverlight 3 application hits an ASP.NET-hosted WCF service to get a list of items to display in a grid. Once the list is brought down to the client it is cached in IsolatedStorage. This is done by using the DataContractSerializer to serialize all of these objects to a stream which is then zipped and then encrypted. When the application is relaunched, it first loads from the cache (reversing the process above) and then deserializes the objects using the DataContractSerializer.ReadObject() method. All of this was working wonderfully under all scenarios until recently, with the entire "load from cache" path (decrypt/unzip/deserialize) taking hundreds of milliseconds at most. On some development machines but not all (all machines Windows 7), the deserialize process (that is, the call to ReadObject(stream)) takes several minutes and seems to lock up the entire machine, BUT ONLY WHEN RUNNING IN THE DEBUGGER in VS2008. Running the Debug configuration code outside the debugger has no problem. One thing that seems suspicious is that when you turn on stop-on-exceptions, you can see that ReadObject() throws many, many System.FormatExceptions indicating that a number was not in the correct format. When I turn off "Just My Code", thousands of these get dumped to the screen. None go unhandled. These occur both on the read back from the cache AND on a deserialization at the conclusion of a web service call to get the data from the WCF service. HOWEVER, these same exceptions occur on my laptop development machine, which does not experience the slowness at all. And FWIW, my laptop is really old and my desktop is a 4-core, 6GB RAM beast. Again, no problems unless running under the debugger in VS2008. Anyone else seen this? Any thoughts? Here is the bug report link: https://connect.microsoft.com/VisualStudio/feedback/details/539609/very-slow-performance-deserializing-using-datacontractserializer-in-a-silverlight-application-only-in-debugger

    Read the article

  • Refresh UltraGrid's GroupBy Sort on child bands when ListChanged?

    - by Idriss
    I am using Infragistics 2009 vol 1. My UltraGrid is bound to a BindingList of business objects "A", which themselves have a BindingList property of business objects "B". This results in two bands: one named "BindingList`1", the other "ListOfB", thanks to the currency manager. I would like to refresh the GroupBy sort of the grid whenever a change is performed on the child band through the child business object and INotifyPropertyChanged. If I group by a property in the child band which is a boolean (let's say "Active") and I subscribe to the ListChanged event on the BindingList datasource with this event handler:

        void Grid_ListChanged(object sender, ListChangedEventArgs e)
        {
            if (e.ListChangedType == ListChangedType.ItemChanged)
            {
                string columnKey = e.PropertyDescriptor.Name;
                if (e.PropertyDescriptor.PropertyType.Name == "BindingList`1")
                {
                    ultraGrid.DisplayLayout.Bands[columnKey].SortedColumns.RefreshSort(true);
                }
                else
                {
                    UltraGridBand band = ultraGrid.DisplayLayout.Bands[0];
                    UltraGridColumn gc = band.Columns[columnKey];
                    if (gc.IsGroupByColumn || gc.SortIndicator != SortIndicator.None)
                    {
                        band.SortedColumns.RefreshSort(true);
                    }
                    ColumnFilter cf = band.ColumnFilters[columnKey];
                    if (cf.FilterConditions.Count > 0)
                    {
                        ultraGrid.DisplayLayout.RefreshFilters();
                    }
                }
            }
        }

    the band.SortedColumns.RefreshSort(true) is called, but it gives unpredictable results in the GroupBy area when the property Active is changed in the child band: if one object out of three actives becomes inactive, it goes from

        Active : True (3 items)

    to

        Active : False (3 items)

    instead of (which is the case when I drag the column back and forth to the GroupBy area)

        Active : False (1 item)
        Active : True (2 items)

    Am I doing something wrong? Is there a way to restore the expanded state of the rows when performing a RefreshSort(true)?

    Read the article

  • Android: Adding extended GLSurfaceView to a Layout don't show 3d stuff

    - by Santiago
    I made a game extending the class GLSurfaceView; if I apply setContentView directly to that class, the 3D stuff and input work great. Now I want to show some items over the 3D stuff, so I create an XML file with a layout and some objects, and I try to add my class manually to the layout. I'm not getting errors, but the 3D stuff is not shown, though I can see the objects from the XML layout. Source:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            LayoutInflater inflater = (LayoutInflater) getSystemService(LAYOUT_INFLATER_SERVICE);
            layout = (RelativeLayout) inflater.inflate(R.layout.testlayout, null);
            // Create an instance with this Activity
            my3dstuff = new myGLSurfaceViewClass(this);
            layout.addView(my3dstuff, 4);
            setContentView(R.layout.testlayout);
        }

    And testlayout has:

        <?xml version="1.0" encoding="utf-8"?>
        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:id="@+id/Pantalla">
            <ImageView android:id="@+id/zoom_less" android:layout_width="wrap_content"
                android:layout_height="wrap_content" android:src="@drawable/zoom_less" />
            <ImageView android:id="@+id/zoom_more" android:layout_width="wrap_content"
                android:src="@drawable/zoom_more" android:layout_height="wrap_content"
                android:layout_alignParentRight="true" />
            <ImageView android:id="@+id/zoom_normal" android:layout_width="wrap_content"
                android:layout_height="wrap_content" android:src="@drawable/zoom_normal"
                android:layout_centerHorizontal="true" />
            <ImageView android:id="@+id/stop" android:layout_width="wrap_content"
                android:layout_height="wrap_content" android:src="@drawable/stop"
                android:layout_centerInParent="true" android:layout_alignParentBottom="true" />
        </RelativeLayout>

    I also tried to add my class to the XML, but the Activity hangs up:

        <com.mygame.myGLSurfaceViewClass android:id="@+id/my3dstuff"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" />

    and this doesn't work either:

        <View class="com.mygame.myGLSurfaceViewClass" android:id="@+id/my3dstuff"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" />

    Any idea? Thanks
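    One likely culprit (a guess from the code shown, not something confirmed in the post): setContentView(R.layout.testlayout) inflates a second, fresh copy of the layout, so the GLSurfaceView that was added to the layout variable never reaches the screen. Passing the already-modified instance keeps the manually added child:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            LayoutInflater inflater = (LayoutInflater) getSystemService(LAYOUT_INFLATER_SERVICE);
            layout = (RelativeLayout) inflater.inflate(R.layout.testlayout, null);
            my3dstuff = new myGLSurfaceViewClass(this);
            // index 0 so the 3D surface draws behind the ImageViews
            layout.addView(my3dstuff, 0);
            // use the instance we actually added the view to
            setContentView(layout);
        }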

    Read the article

  • Listview of items from a object selected in another listview

    - by Ingó Vals
    OK, the title may be a little confusing. I have a database with the table Companies, which has a one-to-many relationship with another table Divisions (so each company can have many divisions), and a division will have many employees. I have a ListView of the companies. What I want is that when I choose a company from the ListView, another ListView of divisions within that company appears below it. Then I pick a division and another ListView of employees within that division appears below that. You get the picture. Is there any way to do this mostly inside the XAML code, declaratively? I'm using LINQ, so the Company entity objects have a property named Division which, if I understand LINQ correctly, should include Division objects of the divisions connected to the company. So after getting all the companies and setting them as the ItemsSource of CompanyListView, this is where I currently am:

        <ListView x:Name="CompanyListView" DisplayMemberPath="CompanyName"
                  Grid.Row="0" Grid.Column="0" />
        <ListView DataContext="{Binding ElementName=CompanyListView, Path=SelectedItem}"
                  DisplayMemberPath="Division.DivisionName"
                  Grid.Row="1" Grid.Column="0" />

    I know I'm way off, but I was hoping that by putting something specific in the DataContext and DisplayMemberPath I could get this to work. If not, then I guess I have to capture the Id of the company and handle a select event or something. Another issue, but related: in the second column beside the ListView I want to have a details/edit view for the selected item. So when only a company is selected, details about that will appear, and when a division under the company is picked it will go there instead. Any ideas?
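    A sketch of one way to chain the lists (property names such as Divisions, Employees and EmployeeName are assumptions about the LINQ model; the key point is binding each ItemsSource to the previous list's SelectedItem):

        <ListView x:Name="CompanyListView"
                  DisplayMemberPath="CompanyName" />
        <ListView x:Name="DivisionListView"
                  ItemsSource="{Binding ElementName=CompanyListView, Path=SelectedItem.Divisions}"
                  DisplayMemberPath="DivisionName" />
        <ListView x:Name="EmployeeListView"
                  ItemsSource="{Binding ElementName=DivisionListView, Path=SelectedItem.Employees}"
                  DisplayMemberPath="EmployeeName" />

    DisplayMemberPath names a property on each item, so "Division.DivisionName" on a list is one step too deep; ItemsSource should point at the collection and DisplayMemberPath at a scalar on its elements.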

    Read the article

  • [ScriptMethod(ResponseFormat = ResponseFormat.Json)]

    - by gnomixa
    In an ASP.NET web service, if the above isn't specified, what is the response format by default? Also, my web service below:

        [WebMethod()]
        public List<Sample> GenerateSamples(string[][] data)
        {
            ResultsFactory f = new ResultsFactory(data);
            List<Sample> samples = f.GenerateSamples();
            return samples;
        }

    returns a list of objects. If I change the response format to JSON, do I have to change the return type to string, and then how do I access the objects in my JavaScript? Currently I call this web service in my JS like so:

        $.ajax({
            type: "POST",
            url: "http://localhost/TemplateWebService/Service.asmx/GenerateSamples",
            data: jsonText,
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(response) {
                var samples = (typeof response.d) == 'string' ? eval('(' + response.d + ')') : response.d;
                if (samples.length > 0) {
                    doSomethingHere(samples);
                } else {
                    alert("No samples have been generated");
                }
            },
            error: function(xhr, status, error) {
                var msg = JSON.parse(xhr.responseText);
                alert(msg.Message);
            }
        });

    What I noticed, though, is that even though everything works perfectly fine, the eval statement never gets executed, which means that the web service never actually returns a raw string - response.d already arrives as an object! So my question is: is [ScriptMethod(ResponseFormat = ResponseFormat.Json)] necessary on the web service definition side? The way things are now, I can use the samples array and access each object and its properties as I normally would in any OOP code, which is very convenient, and everything works, no problem, but I just wanted to make sure that I am not missing anything in my setup. I took the basics of combining jQuery's ajax with ASP.NET from the Encosia site, and the response format wasn't mentioned there - I read it on another site and I am not sure how vital it is.

    Read the article

  • Assert.AreEqual() Exception in VS2010

    - by Tom Miller
    I am fairly new to unit testing and am using VS2010 to develop in and run my tests. I have a simple test, illustrated below, that simply compares two System.Data.DataTableReader objects. I know that they are equal, as they are both created using the same object types and the same input file, and I have verified that the objects "look" the same. I realize I may be dealing with a couple of issues: one being whether or not this is the proper use of Assert.AreEqual, or even the proper way to test this scenario, and the other being the main issue I am dealing with, which is why this test fails with this exception:

        Failed 00:00:00.1000660 0
        Assert.AreEqual failed. Expected:<System.Data.DataTableReader>. Actual:<System.Data.DataTableReader>.

    Here is the unit test code that is failing:

        public void EntriesTest()
        {
            AuditLog target = new AuditLog();
            target.Init();
            DataSet ds = new DataSet();
            ds.ReadXml(TestContext.DataRow["AuditLogPath"].ToString());
            DataTableReader expected = ds.Tables[0].CreateDataReader();
            DataTableReader actual = target.Entries.Tables[0].CreateDataReader();
            Assert.AreEqual<DataTableReader>(expected, actual);
        }

    Any help would be greatly appreciated!
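    The failure is expected behavior: DataTableReader does not override Equals, so Assert.AreEqual falls back to reference equality, and two distinct readers are never "equal" no matter what data they wrap. A hedged alternative (assuming both tables have the same shape) is to compare the underlying tables cell by cell:

        DataTable expectedTable = ds.Tables[0];
        DataTable actualTable = target.Entries.Tables[0];
        Assert.AreEqual(expectedTable.Rows.Count, actualTable.Rows.Count);
        Assert.AreEqual(expectedTable.Columns.Count, actualTable.Columns.Count);
        for (int r = 0; r < expectedTable.Rows.Count; r++)
        {
            for (int c = 0; c < expectedTable.Columns.Count; c++)
            {
                Assert.AreEqual(expectedTable.Rows[r][c], actualTable.Rows[r][c]);
            }
        }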

    Read the article

  • Html.DropDownListFor<> and complex object in ASP.NET MVC2

    - by dagda1
    Hi, I am looking at ASP.NET MVC2 and trying to post a complex object using the new EditorFor syntax. I have a FraudDto object that has a FraudCategory child object, and I want to set this object from the values that are posted from the form. Posting a simple object is not a problem, but I am struggling with how to handle complex objects with child objects. I have the following parent FraudDto object which I am binding to on the form:

        public class FraudDto
        {
            public FraudCategoryDto FraudCategory { get; set; }
            public List<FraudCategoryDto> FraudCategories { get; private set; }

            public IEnumerable<SelectListItem> FraudCategoryList
            {
                get
                {
                    return FraudCategories.Select(t => new SelectListItem
                    {
                        Text = t.Name,
                        Value = t.Id.ToString()
                    });
                }
            }
        }

    The child FraudCategoryDto object looks like this:

        public class FraudCategoryDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    On the form, I have the following code where I want to bind the FraudCategoryDto to the dropdown. The view is of type ViewPage<FraudDto>:

        <td class="tac">
            <strong>Category:</strong>
        </td>
        <td>
            <%= Html.DropDownListFor(x => x.FraudCategory, Model.FraudTypeList) %>
        </td>

    I then have the following controller code:

        [HttpPost]
        public virtual ViewResult SaveOrUpdate(FraudDto fraudDto)
        {
            return View(fraudDto);
        }

    When the form is posted to the server, the FraudCategory property of the Fraud object is null. Are there any additional steps I need to hook up this complex object? Cheers, Paul
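    A hedged fix (the usual pattern, not an official MVC2 recipe): the default model binder cannot rebuild a FraudCategoryDto from the single posted option value, but it can bind a nested scalar, so point the dropdown at the child's Id and rehydrate the full category on the server:

        <%= Html.DropDownListFor(x => x.FraudCategory.Id, Model.FraudCategoryList) %>

        [HttpPost]
        public virtual ViewResult SaveOrUpdate(FraudDto fraudDto)
        {
            // fraudDto.FraudCategory.Id now carries the selected value;
            // look the full FraudCategoryDto up from wherever the
            // categories are sourced (illustrative step)
            return View(fraudDto);
        }

    The posted field name becomes FraudCategory.Id, which matches what the DefaultModelBinder expects for a nested property.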

    Read the article

  • Why were namespaces removed from ECMAScript consideration?

    - by Bob
    Namespaces were once a consideration for ECMAScript (the old ECMAScript 4) but were taken out. As Brendan Eich says in this message: One of the use-cases for namespaces in ES4 was early binding (use namespace intrinsic), both for performance and for programmer comprehension -- no chance of runtime name binding disagreeing with any earlier binding. But early binding in any dynamic code loading scenario like the web requires a prioritization or reservation mechanism to avoid early versus late binding conflicts. Plus, as some JS implementors have noted with concern, multiple open namespaces impose runtime cost unless an implementation works significantly harder. For these reasons, namespaces and early binding (like packages before them, this past April) must go. But I'm not sure I understand all of that. What exactly is a prioritization or reservation mechanism and why would either of those be needed? Also, must early binding and namespaces go hand-in-hand? For some reason I can't wrap my head around the issues involved. Can anyone attempt a more fleshed out explanation? Also, why would namespaces impose runtime costs? In my mind I can't help but see little difference in concept between a namespace and a function using closures. For instance, Yahoo and Google both have YAHOO and google objects that "act like" namespaces in that they contain all of their public and private variables, functions, and objects within a single access point. So why, then, would a namespace be so significantly different in implementation? Maybe I just have a misconception as to what a namespace is exactly.

    Read the article

  • Spring Security: session expiration without redirect to expired-url?

    - by Kdeveloper
    I'm using Spring Security 3.0.2 form based authentication. But I can't figure out how I can configure it so that when a session expires that the request is not redirect to an other page (expired-url) or displays a 'session expires' message. I don't want any redirect or messages, I want that a anonymous session is started just like when a user without a session enters the website. My current configuration: <http> <intercept-url pattern="/login.action*" filters="none"/> <intercept-url pattern="/admin/**" access="ROLE_ADMIN" /> <intercept-url pattern="/**" access="IS_AUTHENTICATED_ANONYMOUSLY"/> <form-login login-page="/login.action" authentication-failure-url="/login.action?error=failed" login-processing-url="/login-handler.action"/> <logout logout-url="/logoff-execute.action" logout-success-url="/logoff.action?done=1"/> <remember-me key="remember-me-security" services-ref="rememberMeServices"/> <session-management > <concurrency-control max-sessions="1" error-if-maximum-exceeded="false" expired-url="/login.action?error=expired.url"/> </session-management> </http>

    Read the article

  • How do I make Master/Detail subreports in ReportBuilder come out right?

    - by Mason Wheeler
    I've got a report in ReportBuilder that's supposed to report on two objects. I didn't create this report, and I can't ask the person who did about how it works. Before running the report, we have some code that goes through, finds all the properties on the objects, and loads them into a memory dataset that looks like this:

        OBJECT_ID: TStringField
        PROP_NAME: TStringField
        PROP_VALUE: TStringField

    The report engine then creates a line on the report for each property in this dataset. This is implemented in a sub-report whose parent only contains an OBJECT_ID, which is a human-readable name. Everything was going great until we had to display a "comment" of arbitrary size in the report. I made a second sub-report with a TMemoField so it could hold the text, and set the report up in the report designer. What I expect when I run the report is something that looks like this:

        HEADER
        Object 1 properties
        Object 1 comment
        Object 2 properties
        Object 2 comment

    I've managed to get just about everything but that. I used the MasterDataPipeline and MasterFieldLinks properties of the sub-reports' pipelines to try to link the OBJECT_IDs of the sub-reports to the OBJECT_ID of the header, and that's the closest I've managed to come, but now what I see is:

        HEADER
        Object 1 properties
        Object 1 comment
        Object 2 comment

    The "Object 2 properties" section is nowhere to be seen, even though I've manually verified that the data is making it into the dataset correctly. This is driving me nuts. Any ReportBuilder gurus out there know what's going on and how to fix it?

    Read the article

  • Get an IDataReader from a typed List

    - by Jason Kealey
    I have a List<MyObject> with a million elements. (It is actually a SubSonic collection, but it is not loaded from the database.) I'm currently using SqlBulkCopy as follows:

        private string FastInsertCollection(string tableName, DataTable tableData)
        {
            string sqlConn = ConfigurationManager.ConnectionStrings[SubSonicConfig.DefaultDataProvider.ConnectionStringName].ConnectionString;
            using (SqlBulkCopy s = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.TableLock))
            {
                s.DestinationTableName = tableName;
                s.BatchSize = 5000;
                s.BulkCopyTimeout = SprocTimeout; // set before WriteToServer so it actually applies
                s.WriteToServer(tableData);
                s.Close();
            }
            return sqlConn;
        }

    I use SubSonic's MyObjectCollection.ToDataTable() to build the DataTable from my collection. However, this duplicates objects in memory and is inefficient. I'd like to use the SqlBulkCopy.WriteToServer overload that takes an IDataReader instead of a DataTable, so that I don't duplicate my collection in memory. What's the easiest way to get an IDataReader from my list? I suppose I could implement a custom data reader (like here http://blogs.microsoft.co.il/blogs/aviwortzel/archive/2008/05/06/implementing-sqlbulkcopy-in-linq-to-sql.aspx), but there must be something simpler I can do without writing a bunch of generic code. Edit: It does not appear that one can easily generate an IDataReader from a collection of objects. Accepting the current answer even though I was hoping for something built into the framework.
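    For readers landing here later: one option (assuming a third-party dependency is acceptable) is the FastMember library, whose ObjectReader exposes a typed collection as an IDataReader without materializing a DataTable; the column names below are placeholders for MyObject's actual properties:

        using FastMember;

        using (var bcp = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.TableLock))
        using (var reader = ObjectReader.Create(myObjects, "Id", "Name", "Price"))
        {
            bcp.DestinationTableName = tableName;
            bcp.BatchSize = 5000;
            bcp.WriteToServer(reader);
        }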

    Read the article

  • Using two versions of the same assembly (system.web.mvc) at the same time

    - by Joel Abrahamsson
    I'm using a content management system whose admin interface uses MVC 1.0. I would like to build the public parts of the site using MVC 2. If I just reference System.Web.Mvc version 2 in my project the admin mode doesn't work as the reference to System.Web.Mvc.ViewPage created by the views in the admin interface is ambiguous: The type 'System.Web.Mvc.ViewPage' is ambiguous: it could come from assembly 'C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\2.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll' or from assembly 'C:\Windows\assembly\GAC_MSIL\System.Web.Mvc\1.0.0.0__31bf3856ad364e35\System.Web.Mvc.dll'. Please specify the assembly explicitly in the type name. I could easily work around this by using binding redirects to specify that MVC 2 should always be used. Unfortunately the content management systems admin mode isn't compatible with MVC 2. I'm not exactly sure why, but I start getting a bunch of null reference exceptions in some of it's actions when I try it and the developers of the CMS have confirmed that it isn't compatible with MVC 2 (yet). The admin interface which is accessed through domain.com/admin is not physically located in webroot/admin but in the program files folder on the server and domain.com/admin is instead routed there using a virtual path provider. Therefor, putting a separate web.config file in the admin folder to specify a different version of System.Web.Mvc for that part of the site isn't an option as that won't fly when using shared hosting. Can anyone see any solution to this problem? Perhaps it's possible to specify that for some assemblies a different version of a referenced assembly should be used?

    Read the article

  • How to populate Java (web) application with initial data using Spring/JPA/Hibernate

    - by Tuukka Mustonen
    I want to set up my database with initial data programmatically. I want to populate my database for development runs, not for testing runs (that's easy). The product is built on top of Spring and JPA/Hibernate.
    • Developer checks out the project
    • Developer runs a command/script to set up the database with initial data
    • Developer starts the application (server) and begins developing/testing
    then:
    • Developer runs a command/script to flush the database and set it up with new initial data because database structures or the initial data bundle were changed
    What I want is to set up my environment with the required parts in order to call my DAOs and insert new objects into the database. I do not want to create initial data sets in raw SQL, XML, take dumps of the database, or whatever. I want to programmatically create objects and persist them in the database as I would in normal application logic. One way to accomplish this would be to start up my application normally and run a special servlet that does the initialization. But is that really the way to go? I would love to execute the initial data setup as a Maven task, and I don't know how to do that if I take the servlet approach; a plain main class that bootstraps the same wiring would work, as sketched below. There is a somewhat similar question. I took a quick glance at the suggested DBUnit and Unitils, but they seem to be heavily focused on setting up testing environments, which is not what I want here. DBUnit does initial data population, but only using xml/csv fixtures, which is not what I'm after. Then, Maven has an SQL plugin, but I don't want to handle raw SQL. Maven also has a Hibernate plugin, but it seems to help only in Hibernate configuration and table schema creation (not in populating the db with data). How to do this?
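    A minimal sketch of that bootstrap approach (names such as applicationContext.xml and DataSeeder are placeholders, and the wiring assumes a Spring XML context): a plain main class loads the context and calls the same DAOs the application uses, so no servlet is involved:

        public class DevDataLoader {
            public static void main(String[] args) {
                // reuse the application's own Spring wiring
                ClassPathXmlApplicationContext ctx =
                        new ClassPathXmlApplicationContext("applicationContext.xml");
                try {
                    // a hypothetical bean that wipes tables and persists
                    // initial objects through the normal DAO/JPA layer
                    DataSeeder seeder = (DataSeeder) ctx.getBean("dataSeeder");
                    seeder.wipeAndSeed();
                } finally {
                    ctx.close();
                }
            }
        }

    Hooked up to the exec-maven-plugin, this runs as mvn exec:java -Dexec.mainClass=DevDataLoader, which gives the "command/script" step above without starting the server.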

    Read the article

  • Read multiple tables from dataset in Powershell

    - by Lucas
    I am using a function that collects data from a SQL server:

        function Invoke-SQLCommand {
            param(
                [string] $dataSource = "myserver",
                [string] $dbName = "mydatabase",
                [string] $sqlCommand = $(throw "Please specify a query.")
            )
            $SqlConnection = New-Object System.Data.SqlClient.SqlConnection
            $SqlConnection.ConnectionString = "Server=$dataSource;Database=$dbName;Integrated Security=True"
            $SqlCmd = New-Object System.Data.SqlClient.SqlCommand
            $SqlCmd.CommandText = $sqlCommand
            $SqlCmd.Connection = $SqlConnection
            $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
            $SqlAdapter.SelectCommand = $SqlCmd
            $DataSet = New-Object System.Data.DataSet
            $SqlAdapter.Fill($DataSet)
            $SqlConnection.Close()
            $DataSet.Tables[0]
        }

    It works great but returns only one table. I am passing several SELECT statements, so the dataset contains multiple tables. I replaced $DataSet.Tables[0] with

        for ($i = 0; $i -lt $DataSet.Tables.Count; $i++) {
            $DataSet.Tables[$i]
        }

    but the console only shows the content of the first table and blank lines for each record of what should be the second table. The only way to see the result is to change the code to

        $DataSet.Tables[$i] | Out-String

    but I do not want strings, I want to have table objects to work with. When I assign what is returned by Invoke-SQLCommand to a variable, I can see that I have an array of DataRow objects, but only from the first table. What happened to the second table? Any help would be greatly appreciated. Thanks
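    One hedged workaround: return the DataSet itself instead of emitting rows (the leading comma wraps it in a one-element array so PowerShell does not try to unroll it), then index Tables at the call site; each result set then keeps its own columns instead of being formatted against the first table's shape:

        # inside Invoke-SQLCommand, replace the tail of the function with:
        $SqlAdapter.Fill($DataSet) | Out-Null
        $SqlConnection.Close()
        return ,$DataSet

        # usage (query text is illustrative):
        $ds = Invoke-SQLCommand -sqlCommand "SELECT * FROM Orders; SELECT * FROM Parts"
        $orders = $ds.Tables[0]
        $parts  = $ds.Tables[1]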

    Read the article
