Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.


  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note – Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, and Oracle/non-Oracle platforms; using Exadata as the base platform is not a mandatory requirement. Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT cost and complexity, along with the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment. Exadata is already a proven, best-in-class consolidation platform, and we now see more and more customers evolving from an Exadata-based platform into an agile, self-service-driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution for transforming a typical siloed environment into an enterprise-class database cloud with self service, rapid elasticity and pay-per-use capabilities.

    In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:

    1. Project Planning – The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a cloud project you plan around technology, business and processes together, so engage your actual end users and stakeholders early, right from the scoping and planning stage.

    2. Set up your EM 12c Cloud Control site – Once the project plan is approved and signed off by stakeholders, refer to the EM 12c Install guide. Some important tips for the site setup phase:
       - Review the new EM 12c Sizing paper before you get started with the install.
       - Select the Cloud, Chargeback and Trending, and Exadata plug-ins for deployment during the install.
       - Refer to the EM 12c Administrator's guide for high availability, security, and network/firewall best practices and options.
       - Do not combine your management and managed infrastructure; i.e. the EM 12c repository should not be hosted on the same Exadata where the target database cloud is to be set up.

    3. Set up roles and users – Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR) and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu –> Security option. For self-service (SSA) users, create custom role(s) based on EM_SSA_USER, and revoke the EM_USER and PUBLIC roles during SSA user account creation.

    4. Configure the Software Library – The Cloud Administrator logs in and configures the software library via the Enterprise menu –> Provisioning and Patching option; the storage location is an OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.

    5. Set up Self Update – Self Update is one of the most innovative new features in the EM 12c framework. It can be accessed by the Super Administrator via the Setup –> Extensibility option and is the unified delivery mechanism for all new and updated entities (agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.

    6. Deploy agents on all compute nodes and discover Exadata targets – Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.

    7. Configure privilege delegation settings – This step involves deployment of the privilege setting template on all nodes by the Super Administrator via the Setup menu –> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.

    8. Provision Grid Infrastructure with RAC Database on compute nodes – Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on the compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request them from the cloud.

    9. Customize the Create Database deployment procedure – The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step; make sure you have locked all the required variables marked as 'Y' in the locked column of the referenced table.

    10. Set up the Self Service Portal – This step involves setting up zones, user quotas, service templates and the chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu –> Cloud –> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have the option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator's guide.

    11. Final checks – Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model, and make sure SSA administrators are familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.

    12. GO LIVE – Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata-based database cloud. Congratulations! You just delivered a successful database cloud implementation project!

    In future posts, we will cover these additional useful topics around database cloud:
    - DBaaS implementation tips and tricks – from setup to self service to managing the cloud lifecycle
    - How to enable copies of real production databases in DBaaS with rapid provisioning in the database cloud
    - A case study of a customer who recently achieved success in their transformational journey from a traditional siloed environment onto an Exadata-based database cloud using Enterprise Manager 12c

    More information:
    - Podcast on Database as a Service using Oracle Enterprise Manager 12c
    - Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide
    - DBaaS Cookbook
    - Exadata Discovery Cookbook
    - Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal
    - Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal

    Read the article

  • Closing the gap between strategy and execution with Oracle Business Intelligence 11g

    - by manan.goel(at)oracle.com
    Wikipedia defines strategy as a plan of action designed to achieve a particular goal. An example is General Electric's acquisition and divestiture strategy (plan) designed to propel GE to the number 1 or 2 position (goal) in every business segment it operated in. Execution, on the other hand, can be defined as the actions taken to get things done. In GE's case, execution would be the steps followed for mergers/acquisitions or divestitures. The business press has written extensively about the importance of both strategy and execution in achieving desired business objectives. Perhaps the quote from Thomas Edison says it best: "vision without execution is hallucination". Conversely, it can be said that "execution without vision" may well be "wishful thinking". Research overwhelmingly points towards a wide gap between strategy and execution. According to a published study, 49% of surveyed executives perceive a gap between their organizations' ability to develop and communicate sound strategies and their ability to implement those strategies. Further, of these respondents, 64% don't have full confidence that their companies will be able to close the gap.

    Having established the severity and importance of the problem, let's talk about the reasons for the strategy-execution gap. The common reasons include:
    - Lack of clearly defined goals
    - Lack of a consistent measure of success
    - Lack of ownership
    - Lack of alignment
    - Lack of communication
    - Lack of proper execution
    - Lack of monitoring

    There are multiple approaches to solving the problem, including organizational development practices, technology enablement, etc. In most cases a combination of approaches is required to achieve the desired result. For the purposes of this discussion, I'll focus on technology. Imagine an integrated, closed-loop technology platform that automates the entire management cycle: defining strategy, assigning ownership, communicating goals, achieving alignment, collaborating, taking actions, monitoring progress and making mid-course corrections. Besides, for best ROI and lowest TCO, such a system should also have characteristics like:

    Complete
    - Full functionality
    - Rich end-user access

    Open
    - Any data source
    - Any business application
    - Any technology stack

    Integrated
    - Common metadata
    - Common security
    - Common system management

    From a capabilities perspective, the system should provide the following:

    Define
    - Strategy
    - Objectives
    - Ownership
    - KPIs

    Communicate
    - Pervasive
    - Collaborative
    - Role based
    - Secure

    Execute
    - Integrated
    - Intuitive
    - Secure
    - Ubiquitous

    Monitor
    - Multiple styles and formats
    - Exception based
    - Push & pull

    Having talked about the business problem and outlined the blueprint for a technology solution, let's talk about how Oracle Business Intelligence 11g can help. Oracle Business Intelligence is a comprehensive business intelligence solution for reporting, ad hoc query and analysis, OLAP, dashboards and scorecards. Oracle's best-in-class BI platform is based on an architecturally integrated technology foundation that provides a unified end-user experience and features a Common Enterprise Information Model, with common security, query request generation and optimization, and system management.
    The BI platform is:
    - Complete – it delivers all modes and styles of BI, including reporting, ad hoc query and analysis, OLAP, dashboards and scorecards, with a rich end-user experience that includes visualization, collaboration, alerts and notifications, search and mobile access.
    - Open – it integrates with any data source, ETL tool, business application, application server, security infrastructure or portal technology, as well as any ODBC-compliant third-party analytical tool. The suite accesses data from multiple heterogeneous sources, including popular relational and multidimensional data sources and major ERP and CRM applications from Oracle and SAP.
    - Integrated – it is based on an architecturally integrated technology foundation built on an open, standards-based, service-oriented architecture. The platform features a common enterprise information model, a common security model and a common configuration, deployment and systems management framework.

    To summarize, Oracle Business Intelligence is a comprehensive, integrated BI platform that lets you define strategy, identify objectives, assign ownership, define KPIs, collaborate, take action, monitor, report and make course corrections, all from a single interface and a single system. The platform's integrated metadata model and task-based design ensure that the entire workflow, from defining strategy to execution to monitoring, is completely integrated, delivering end-to-end visibility, transparency and agility. Click here to learn more about Oracle BI 11g.

    Read the article

  • Oracle RightNow CX for Good Customer Experiences

    - by Andreea Vaduva
    Oracle RightNow CX is all about the customer experience: it's about understanding what drives a good interaction, and it's about delivering a solution which works for our customers and, by extension, their customers. One of the early guiding principles of Oracle RightNow was an 8-point strategy for providing good customer experiences:

    1. Establish a knowledge foundation
    2. Empower the customer
    3. Empower employees
    4. Offer multi-channel choice
    5. Listen to the customer
    6. Design seamless experiences
    7. Engage proactively
    8. Measure and improve continuously

    The application suite provides all of the tools necessary to deliver a rewarding, repeatable and measurable relationship between business and customer. The Knowledge Authoring tool provides gap analysis, WYSIWYG editing (including rich HTML content for non-developers), multi-level categorisation, permission-based publishing and web self-service publishing. Oracle RightNow Customer Portal is a complete web application framework that enables businesses to control their own end-user page branding experience, which in turn allows customers to self-serve. The Contact Centre Experience Designer builds a combination of workspaces, agent scripting and guided assistance into a Desktop Workflow. These present an agent with the tools they need, at the time they need them, providing even the newest and least experienced advisors with consistently accurate and efficient information, whilst guiding them through the complexities of internal business processes.

    Oracle RightNow provides access points for customers to give feedback about specific knowledge articles or about the support site in general. The system will generate 'incidents' based on the scoring of the comments submitted, which makes it easy to view and respond to customer feedback. It is vital, more now than ever, not to underestimate the power of the social web – Facebook, Twitter, YouTube – which has the ability to cause untold amounts of damage to businesses with a single post. Witness musician Dave Carroll and his protest song on YouTube, posted in response to poor customer service from an American airline: the first day saw 150,000 views, and the count currently stands at 12,011,375. The Times reported that within 4 days of the post, the airline's stock price fell by 10 percent, which represented a cost to shareholders of $180 million. It is a universally acknowledged fact that when customers are unhappy, they will not come back, and, generally speaking, it only takes one bad experience to lose a customer.

    The idea that customer loyalty can be regained by using social media channels was the subject of a 2011 survey commissioned by RightNow and conducted by Harris Interactive. The survey discovered that 68% of customers who posted a negative review about a holiday on a social networking site received a response from the business. It further found that 33% subsequently posted a positive review and 34% removed the original negative review. Cloud Monitor provides the perfect mechanism for seeing what is being said about a business on public Facebook pages, Twitter or YouTube posts; it allows agents to respond proactively, either by creating an Oracle RightNow incident or by using the same channel as the original post. This leaves step 8, measuring and improving: How does a business know whether it's doing the right thing? How does it know if its customers are happy? How does it know if its staff are being productive? How does it know if its staff are being effective?
    Cue Oracle RightNow Analytics – fully integrated across the entire platform (Service, Marketing and Sales) – with in excess of 800 standard reports. If this were not enough, a large proportion of the database has been made available via the administration console, allowing users without any prior database experience to write their own reports, format them and schedule them for e-mail delivery to a distribution list. It handles the complexities of table joins, and allows for the manipulation of data with ease.

    Oracle RightNow believes strongly in the customer owning their solution, and to provide the best foundation for success, Oracle University can give you the RightNow knowledge and skills you need. This is a selection of the courses offered:

    - RightNow Customer Service Administration Rel 12.02 (3 days) – Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand). This course familiarises users with the tasks and concepts needed to configure and maintain their system.
    - RightNow Customer Portal Designer and Contact Center Experience Designer Administration Rel 12.02 (2 days) – Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand). This course introduces basic CP structure and how to make changes to the look, feel and behaviour of self-service pages.
    - RightNow Analytics Rel 12.02 (2 days) – Available as In Class, Live Virtual Class and Training On Demand (Release 11.11 is available as In Class and Live Virtual Class). This course equips users with the skills necessary to understand data supplied by standard reports and to create custom reports.
    - RightNow Integration and Customization For Developers Rel 12.02 (5 days) – Available as In Class and Live Virtual Class (Release 11.11 is available as In Class, Live Virtual Class and Training On Demand). This course is for experienced web developers; it offers an introduction to Add-In development using the Desktop Add-In Framework and introduces the core knowledge that developers need to begin integrating Oracle RightNow CX with other systems.

    A full list of courses offered can be found on the Oracle University website. For more information and course dates, please get in contact with your local Oracle University team. On top of the Service components, the suite also provides marketing tools, complex survey creation and tracking, and sales functionality.

    I'm a fan of the application, and I think I've made that clear: it's completely geared up to providing customers with support at point of need, and it can be configured to meet even the most stringent of business requirements. Oracle RightNow is passionate about, and committed to, providing the best customer experience possible. Oracle RightNow CX is the application that makes it possible.

    About the Author: Sarah Anderson worked for RightNow for 4 years in both a consulting and a training delivery capacity. She is now a Senior Instructor with Oracle University, delivering the following Oracle RightNow courses:
    - RightNow Customer Service Administration
    - RightNow Analytics
    - RightNow Customer Portal Designer and Contact Center Experience Designer Administration
    - RightNow Marketing and Feedback

    Read the article

  • Oracle CRM On Demand Release 24 is Generally Available

    - by Richard Lefebvre
    We are pleased to announce that Oracle CRM On Demand Release 24 is Generally Available as of October 25, 2013. Get smarter, more productive and the best value with Oracle CRM On Demand Release 24. Oracle CRM On Demand continues to be the most complete Software-as-a-Service (SaaS) CRM solution available. Now, with Release 24, organizations of all types and sizes benefit from actionable insight anywhere, anytime, as well as key enhancements in mobility, embedded social, analytics, integration and extensibility, and ease of use.

    Next Generation Mobile and Desktop Solutions: Oracle CRM On Demand Release 24 offers a complete set of mobile and desktop solutions that improve productivity by enabling reps to access and update information anywhere, anytime. Capabilities include:

    - Oracle CRM On Demand Disconnected Mobile Sales (DMS) – A disconnected native iPad solution. DMS further streamlines the mobile sales process by adding Structured Product Messaging to record brand-specific call objectives, enhancements in HTML5 eDetailing including message response tracking, and improvements in administration and configuration such as more field management options for read-only fields, role management and enhanced logging.
    - Oracle CRM On Demand Connected Mobile Sales – This add-on mobile service provides a configurable mobile solution on iOS, BlackBerry and now Android devices. You can access data from CRM On Demand in real time with a rich, native user experience that is comfortable and familiar to current iOS, BlackBerry and Android users. New features also include Single Sign-On to enhance security for mobile users.
    - Oracle CRM On Demand Desktop – This application centralizes essential CRM information in the familiar Microsoft Outlook environment, increasing user adoption and decreasing training costs. Users can manage CRM data while disconnected, then synchronize bi-directionally when they are back on the network. New in Oracle CRM On Demand Desktop Version 3 is the ability to synchronize by Books of Business, and improved Online Lookup.
    - Mobile Browser Support – The following mobile device browsers are now supported: Apple iPhone, Apple iPad, Windows 8 tablets, and Google Android.

    Leverage the Social Enterprise: Engaging customers via social channels is rapidly becoming a significant key to an enhanced customer experience, as it provides proactive customer service, targeted messaging and greater intimacy throughout the entire customer lifecycle. Listening to customers on social channels can identify a customer's sphere of influence and the real value they bring to their organization, or the impact they can have on an opportunity. Servicing the customer's need is the first step towards loyalty to a brand; integrating with social channels allows us to maximize brand affinity and virally expand customer engagements, thus increasing revenue. Oracle CRM On Demand leverages the Social Enterprise through its integration with Oracle's Social Relationship Management (SRM) product suite, providing out-of-the-box integration with Social Engagement and Monitoring (SEM), Social Marketing (SM) and Oracle Social Network (OSN). With Oracle CRM On Demand Release 24, users are able to create a service request from a social post via SEM, and leads entered on an SM lead form are automatically entered into Oracle CRM On Demand along with the campaign, streamlining the lead qualification process.
    Get Smarter with Actionable Insight: The difference between making good decisions and great decisions depends heavily upon the quality, structure and availability of the information at hand. Oracle CRM On Demand Release 24 expands upon its industry-leading analytics capabilities to provide greater business insight than ever before. New capabilities include flexible permissions on analytics report folders, allowing for read-only access to reports, and additional field and object coverage.

    Get More Productive with Powerful Tools: Oracle CRM On Demand Release 24 introduces a new set of powerful capabilities designed to maximize productivity. A significant new feature for customizing Oracle CRM On Demand is a JavaScript API. The JS API allows customers to add new buttons, suppress existing buttons and even change what happens when a user clicks an existing button. Other usability enhancements, such as personalized related-information applets and extended case-insensitive search, provide users with a better, more intuitive experience. Additional privileges for viewing private activities and notes allow administrators to reassign records as needed, along with Custom Object management. Workflow has been added to the Order Item object, and tasks can now be assigned to a relative user, such as an Account Owner, allowing more complex business processes to be automated and adhered to.

    Get the Best Value: Oracle CRM On Demand delivers unprecedented value with the broadest set of capabilities from a single-provider solution, the industry's lowest total cost of ownership, the most on-demand deployment options, the deepest CRM expertise and experience of any CRM provider, and the most secure CRM in the cloud. With Release 24, Oracle CRM On Demand now includes even more enterprise-grade security, integration and extensibility features, along with enhanced industry editions to save you time and money. New features include:

    - Business Process Administration – A new privilege has been added that allows administrators to override a Business Process Administration rule. This privilege permits users to edit a locked record, or unlock a record, in the event of a material change that needs to be reflected per corporate policy. Additionally, the Products Detailed object has been added to Business Process Administration, enabling record locking and logic to be applied.
    - Expanded Integration – Oracle continues to improve Web Services with each release, adding more object coverage and enabling customers and partners to easily integrate with CRM On Demand.

    Bottom Line: Oracle CRM On Demand Release 24 enables organizations to get smarter, get more productive, and get the best value, period. For more information on Oracle CRM On Demand Release 24, please visit oracle.com/crmondemand

    Read the article

  • Oracle Fusion Middleware gives you Choice and Portability for Public and Private Cloud

    - by Michelle Kimihira
    Author: Margaret Lee, Senior Director, Product Management, Oracle Fusion Middleware

    Cloud computing allows customers to quickly develop and deploy applications in a shared environment. The environment can span hardware (IaaS), foundation-layer software (PaaS), and end-user software (SaaS). Cloud computing provides compelling benefits in terms of business agility and IT cost savings. However, with complex, existing heterogeneous architectures, and concerns about security and manageability, enterprises are challenged to define their cloud strategy. For most enterprises, the solution is a hybrid of private and public cloud. Fusion Middleware supports customers' cloud requirements through choice and portability.

    Fusion Middleware supports a variety of cloud development and deployment models:
    - Oracle [Public] Cloud
    - Customer private cloud
    - A hybrid of these two
    - The traditional dedicated, on-premise model

    Customers can develop applications in any of these models and deploy them in another, providing the flexibility and portability they need.

    Oracle Cloud is a public cloud offering. Within Oracle Cloud, Fusion Middleware provides two key offerings: the Developer Cloud Service and the Java Cloud Service.

    Developer Cloud Service
    - Simplify development: automated provisioned environment; pre-configured and integrated; web-based administration
    - Deploy automatically: fully integrated with Oracle Cloud for Java deployment; workflow ensures build & test
    - Collaborate & manage: fits any size team; integrated team source repository; continuous integration; task/defect tracking
    - Integrated with all major IDEs: Oracle JDeveloper, NetBeans, Eclipse

    Java Cloud Service
    The Java Cloud Service provides a flexible Java deployment environment for departmental applications and for development, staging, QA, training, and demo environments. It also supports customization deployments for SaaS-based Fusion Applications customers. Some key features of the Java Cloud Service include:
    - WebLogic Server on Exalogic: secure, highly available infrastructure
    - Database Service and IDE integration
    - Open and standards-based
    - Deploy web apps, web services, REST services
    - Fully managed and supported by Oracle

    For more information, please visit Oracle Cloud, Oracle Cloud Java Service and Oracle Cloud Developer Service.

    If your enterprise prefers a private cloud, for reasons such as security, control, manageability, or complex integration that prevent your applications from being deployed on a public cloud, Fusion Middleware also provides you with the products and tools you need. Sometimes called private PaaS, private clouds have their predecessors in the shared-services arrangements many large companies have been building over the past decade. The differences, however, are in the scope of the services and the depth of their capabilities. In terms of vertical stack depth, private clouds not only provide hardware and software infrastructure to run your applications, they also provide services, such as integration and security, that your applications need. Horizontally, private clouds provide monitoring, management, lifecycle, and charge-back capabilities out of the box that shared-services platforms did not have before.
    Oracle Fusion Middleware includes the complete stack of hardware and software for you to build private clouds:
    - SOA Suite and BPM Suite to support systems integration and process flow between applications deployed on your private cloud and the rest of your organization
    - Identity and Access Management Suite to provide security, provisioning, and access services for applications deployed on your private cloud
    - WebLogic Server to run your applications
    - Enterprise Manager's Cloud Management pack to monitor, manage, and upgrade applications running on your private cloud
    - Exalogic or optimized Oracle-Sun hardware to build out your private cloud

    The most important differentiator for Oracle's cloud solutions is portability between private and public clouds. This is unique to Oracle because portability requires the vendor to have product depth and breadth in both public cloud services and private cloud product offerings. Most public cloud vendors cannot provide the infrastructure and tools customers need to build their own private clouds. Conversely, traditional software tools vendors typically do not have the product and expertise breadth to build out and offer a public cloud. Oracle can. It is important for customers that the products and technologies Oracle uses to build its public cloud are the same set that it sells to customers to build private clouds. Fundamentally, that enables skills reuse, as well as application portability.

    For more information on Oracle PaaS offerings, please visit Oracle's product information page.

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization, versus adding extra time and cost to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. The event featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

    Some of the key points made in the sessions included:
    - The focus of XBRL tools is moving from production to consumption.
    - As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    - XBRL is helping standards-setters like the FASB speed their analysis of the impacts of proposed accounting rule changes.
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.

    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner of the challenge. All of the solutions are open-sourced tools and most of them focus on consuming XBRL-based data. The 5 finalists included:

    - Advanced XBRL Processing from Oxide Solutions – XBRL viewer for taxonomies, filings and company data, with peer comparison capabilities.
    - Arrelle – API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench – XBRL data analysis tool that can be embedded in other web applications. This tool can combine XBRL filings with real-time market data.
    - XBRL to XL – allows the importing of XBRL data into Microsoft Excel for analysis and comparisons. Users start on the web and populate Excel with XBRL data.
    - XBurble – allows users to search and view XBRL filings, export to Excel, merge for comparison, and includes a workflow interface.

    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge

    XBRL for Sustainability Reporting – other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for Sustainability Reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in Sustainability Reports much easier.
    Although there is no government mandate to file Sustainability Reports in XBRL format, organizations that do use the GRI guidelines for Sustainability Reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with Sustainability Reporting data and make it available to the public. For more information about this initiative, you can go to the GRI web site: www.globalreporting.org.

    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market is that XBRL has mainly benefited the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company versus its peers and competitors in an industry group, will soon accelerate the benefits to corporate finance staff, as well as individual investors. This applies to financial results tagged in XBRL, as well as non-financial information such as Sustainability Reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, more benefits will accrue, as companies will be able to leverage one set of XBRL-based financial data for multiple regulatory filings.

    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links:
    http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
    http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html

    Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called code-first migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes you make to your database.

    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here.

    Adding EF 4.3.1 to our existing application

    Inside VS 2010, go to Tools -> Library Package Manager -> Package Manager Console. This will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application, type:

        Install-Package EntityFramework

    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element which contains the version of EntityFramework you have, and an <entityFramework> section was also added, as shown below. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also, please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support, as shown in the previous image.

    At this point we face one issue: in order to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

    Getting a migration-history table into an existing database

    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator:

        using System.Data.Entity.Migrations;  // add this at the top of your .cs file

        // Make sure to use the name of your existing DbContext.
        public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
        {
            public Configuration()
            {
                // Set automatic migrations to false, since we'll be applying
                // the migrations manually in this case.
                this.AutomaticMigrationsEnabled = false;
                SetSqlGenerator("MySql.Data.MySqlClient",
                    new MySql.Data.Entity.MySqlMigrationSqlGenerator());
            }
        }

    This code sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this, we can build our application to check that everything is fine.

    Creating our initial migration

    Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate". You can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made in our application:

    - A new Migrations folder is created.
    - A new migration class called InitialCreate is created, which in most cases should have empty Up and Down methods, as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I found this easier when you don't have any pending updates to apply to your database.
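    For context, once migrations are enabled, later model changes are scaffolded into migration classes containing real schema operations. The class below is a hedged sketch of what such a scaffolded migration might look like; the entity, table and column names (Customer, Customers, Email) are hypothetical and not part of this walk-through:

        // Illustrative only: a migration scaffolded after adding a hypothetical
        // Email property to a hypothetical Customer entity.
        public partial class AddCustomerEmail : DbMigration
        {
            public override void Up()
            {
                // Adds a nullable string column to the existing Customers table.
                AddColumn("Customers", "Email", c => c.String(maxLength: 100));
            }

            public override void Down()
            {
                // Reverses the change when migrating back down.
                DropColumn("Customers", "Email");
            }
        }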
    Now we have our empty migration, which will make no changes to our database and represents how things are at the beginning of our migrations. Finally, let's create our __MigrationHistory table. Optionally, you can add SQL code to delete the EdmMetadata table, which is not needed anymore:

        public override void Up()
        {
            // Just make sure that you used version 4.1 or later.
            Sql("DROP TABLE EdmMetadata");
        }

    From our Package Manager Console, type:

        Update-Database

    If you would like to see the operations made by each Update-Database command, you can add the flag -Verbose after Update-Database. This will make two important changes. First, it will execute the Up method in the initial migration, which makes no changes to the database. Second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database, it will compare the current model to the one stored in the Model column of this table.

    Conclusion

    The important point of this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll add the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations.

    Hope you have found this information useful. Please let us know if you have any questions or comments, and also please check our forums here, where we keep answering questions for the community. Happy MySQL/Net coding!

    Read the article

  • Hybrid IT or Cloud Initiative – a Perfect Enterprise Architecture Maturation Opportunity

    - by Ted McLaughlan
    All too often in the growth and maturation of Enterprise Architecture initiatives, the effort stalls or is delayed due to lack of "applied traction". By this, I mean the EA activities – whether targeted towards compliance, risk mitigation or value opportunity propositions – may not be attached to measurable, active, visible projects that could advance and prove the value of EA. EA doesn't work by itself, in a vacuum, without collaborative engagement and a means of proving usefulness. A critical vehicle for this proof is the successful orchestration and use of assets and investment resources to meet a high-profile business objective – i.e. a successful project.

    More and more organizations are now exploring and considering some degree of IT outsourcing, buying and using external services and solutions to deliver their IT and business requirements, versus building and operating in-house in their own data centers. The rapid growth and success of "cloud" services makes some decisions easier and some IT projects more successful, while dramatically lowering IT risks and enabling rapid growth. This is particularly true for "Software as a Service" (SaaS) applications, which essentially are complete web applications hosted and delivered over the Internet. Whether SaaS solutions – or any kind of cloud solution – are actually, ultimately the most cost-effective approach truly depends on the organization's business and IT investment strategy.

    This leads us to Enterprise Architecture: the connectivity between business strategy, investment objectives, and the capabilities purchased or created to meet them. If an EA framework already exists, the approach to selecting a cloud-based solution and integrating it with internal IT systems (i.e. a "hybrid IT" solution) is well served by leveraging EA methods. If an EA framework doesn't exist, or is simply not mature enough to address complex, integrated IT objectives, a hybrid IT/cloud initiative is the perfect project to advance and prove the value of EA.

    Why is this? For starters, the success of any complex IT integration project – spanning multiple systems, contracts and organizations, public and private – depends on active collaboration and coordination among the project stakeholders. For a hybrid IT initiative, inclusive of one or more cloud services providers, the IT services, business workflow and data governance challenges alone can be extremely complex, requiring many diverse layers of organizational expertise and authority. Establishing subject matter expertise, authorities and strategic guidance across all the disciplines involved in a hybrid-IT or hybrid-cloud system requires top-level, comprehensive experience and collaborative leadership.
    Tools and practices reflecting industry expertise and EA alignment can also be very helpful – such as Oracle's "Cloud Candidate Selection Tool". Using tools like this, and facilitating this critical collaboration by leading, organizing and coordinating the input and expertise into a shared, referenceable, reusable set of authority models and practices – this is where EA shines, and where Enterprise Architects can be most valuable. The "enterprise", in this case, becomes something greater than the core organization: it includes internal systems, public cloud services, 3rd-party IT platforms and datacenters, and distributed users and devices – a whole greater than the sum of its parts.

    Through facilitated project collaboration leading to the identification or creation of solid governance models and processes, a durable and useful Enterprise Architecture framework will usually emerge by itself, even if not explicitly identified and managed as such. The transition from planning collaboration to actual coordination – where the program plan, schedule and resources become synchronized and aligned to other investments in the organization portfolio – is where EA methods and artifacts appear and become most useful. The actual scope and use of these artifacts, in the context of this project, can then set the stage for the most desirable, helpful and pragmatic form of the now-maturing EA framework and community of practice.

    Considering or starting a hybrid-IT or hybrid-cloud initiative? Running into some complex relationship challenges? This is the perfect time to take advantage of your new, growing or possibly latent Enterprise Architecture practice.

    Read the article

  • System.AccessViolationException when calling a DLL from WCF on IIS

    - by Wodzu
    Hi guys. I've created just a test WCF service in which I need to call an external DLL. Everything works fine under the Visual Studio development server. However, when I try to use my service on IIS I get this error:

        Exception: System.AccessViolationException
        Message: Attempted to read or write protected memory. This is often an
        indication that other memory is corrupt.

    The stack trace leads to the call of the DLL, which is presented below. After a lot of reading and experimenting I am almost sure that the error is caused by wrong passing of strings to the called function. Here is how the wrapper for the DLL looks:

        using System;
        using System.Runtime.InteropServices;
        using System.Security;
        using System.Security.Permissions;
        using System.Text;

        namespace cdn_api_wodzu
        {
            public class cdn_api_wodzu
            {
                [DllImport("cdn_api.dll", CharSet = CharSet.Ansi)]
                // [SecurityPermission(SecurityAction.Assert, Unrestricted = true)]
                public static extern int XLLogin([In, Out] XLLoginInfo _lLoginInfo, ref int _lSesjaID);
            }

            [Serializable, StructLayout(LayoutKind.Sequential)]
            public class XLLoginInfo
            {
                public int Wersja;
                public int UtworzWlasnaSesje;
                public int Winieta;
                public int TrybWsadowy;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x29)]
                public string ProgramID;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x15)]
                public string Baza;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)]
                public string OpeIdent;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)]
                public string OpeHaslo;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 200)]
                public string PlikLog;
                [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 0x65)]
                public string SerwerKlucza;

                public XLLoginInfo() { }
            }
        }

    This is how I call the DLL function:

        int ErrorID = 0;
        int SessionID = 0;
        XLLoginInfo Login = new XLLoginInfo();
        Login.Wersja = 18;
        Login.UtworzWlasnaSesje = 1;
        Login.Winieta = -1;
        Login.TrybWsadowy = 1;
        Login.ProgramID = "TestProgram";
        Login.Baza = "TestBase";
        Login.OpeIdent = "TestUser";
        Login.OpeHaslo = "TestPassword";
        Login.PlikLog = "C:\\LogFile.txt";
        Login.SerwerKlucza = "MyServ\\MyInstance";
        ErrorID = cdn_api_wodzu.cdn_api_wodzu.XLLogin(Login, ref SessionID);

    When I comment out all the string field assignments, the function works: it returns an error message that the program ID has not been given. But when I try to assign a ProgramID (or any other string field, or all at once), I get the mentioned exception. I am using VS2008 SP1, WinXP and IIS 5.1. Maybe IIS itself is the problem? I've tried all the workarounds described here: http://forums.asp.net/t/675515.aspx

    Thanks for your time.

    After edit: Installing Windows 2003 Server and IIS 6.0 solved the problem.
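    A general observation for readers debugging similar marshaling crashes (an assumption, not the confirmed cause of the problem above): when a LayoutKind.Sequential class contains UnmanagedType.ByValTStr strings, the character set used to marshal those inline strings is taken from the StructLayout attribute on the type itself, not from the CharSet on the DllImport. It is worth declaring it explicitly so both sides agree:

        // Assumption: the native side expects single-byte (ANSI) inline strings.
        // ByValTStr marshaling follows the CharSet declared here, on the type.
        [Serializable, StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public class XLLoginInfo
        {
            // ... fields as above ...
        }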

    Read the article

  • SlimDX: Lighting problem with Direct3D9

    - by Spi1988
    I am creating a simple application to get familiar with the SlimDX library. I found some code written in MDX and I'm trying to convert it to run on SlimDX. I am having some problems with the lighting, because everything is being shown as black. The code is:

        public partial class DirectTest : Form
        {
            private Device device = null;
            private float angle = 0.0f;
            Light light = new Light();

            public DirectTest()
            {
                InitializeComponent();
                this.Size = new Size(800, 600);
                this.SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.Opaque, true);
            }

            /// <summary>
            /// We will initialize our graphics device here.
            /// </summary>
            public void InitializeGraphics()
            {
                PresentParameters pres_params = new PresentParameters()
                {
                    Windowed = true,
                    SwapEffect = SwapEffect.Discard
                };
                device = new Device(new Direct3D(), 0, DeviceType.Hardware, this.Handle,
                    CreateFlags.SoftwareVertexProcessing, pres_params);
            }

            private void SetupCamera()
            {
                device.SetRenderState(RenderState.CullMode, Cull.None);
                device.SetTransform(TransformState.World,
                    Matrix.RotationAxis(new Vector3(angle / ((float)Math.PI * 2.0f),
                                                    angle / ((float)Math.PI * 4.0f),
                                                    angle / ((float)Math.PI * 6.0f)),
                                        angle / (float)Math.PI));
                angle += 0.1f;
                device.SetTransform(TransformState.Projection,
                    Matrix.PerspectiveFovLH((float)Math.PI / 4, this.Width / this.Height, 1.0f, 100.0f));
                device.SetTransform(TransformState.View,
                    Matrix.LookAtLH(new Vector3(0, 0, 5.0f), new Vector3(), new Vector3(0, 1, 0)));
                device.SetRenderState(RenderState.Lighting, false);
            }

            protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
            {
                device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.CornflowerBlue, 1.0f, 0);
                SetupCamera();

                CustomVertex.PositionColored[] verts = new CustomVertex.PositionColored[3];
                verts[0].Position = new Vector3(0.0f, 1.0f, 1.0f);
                verts[0].Normal = new Vector3(0.0f, 0.0f, -1.0f);
                verts[0].Color = System.Drawing.Color.White.ToArgb();
                verts[1].Position = new Vector3(-1.0f, -1.0f, 1.0f);
                verts[1].Normal = new Vector3(0.0f, 0.0f, -1.0f);
                verts[1].Color = System.Drawing.Color.White.ToArgb();
                verts[2].Position = new Vector3(1.0f, -1.0f, 1.0f);
                verts[2].Normal = new Vector3(0.0f, 0.0f, -1.0f);
                verts[2].Color = System.Drawing.Color.White.ToArgb();

                light.Type = LightType.Point;
                light.Position = new Vector3();
                light.Diffuse = System.Drawing.Color.White;
                light.Attenuation0 = 0.2f;
                light.Range = 10000.0f;
                device.SetLight(0, light);
                device.EnableLight(0, true);

                device.BeginScene();
                device.VertexFormat = CustomVertex.PositionColored.format;
                device.DrawUserPrimitives<CustomVertex.PositionColored>(PrimitiveType.TriangleList, 1, verts);
                device.EndScene();
                device.Present();
                this.Invalidate();
            }
        }

    The vertex format that I am using is the following:

        [StructLayout(LayoutKind.Sequential)]
        public struct PositionNormalColored
        {
            public Vector3 Position;
            public int Color;
            public Vector3 Normal;
            public static readonly VertexFormat format =
                VertexFormat.Diffuse | VertexFormat.Position | VertexFormat.Normal;
        }

    Any suggestions on what the problem might be? Thanks in advance.
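    Two details a reader might compare against the code above (offered as observations, not a confirmed fix). First, SetupCamera sets RenderState.Lighting to false even though a light is configured and enabled afterwards. Second, with a fixed-function FVF, Direct3D 9 expects the vertex fields in memory in a fixed order – position, then normal, then diffuse color – regardless of the order in which the flags are combined. A sketch of the struct laid out in that order:

        // Field order matches the layout D3D9 expects for this FVF:
        // position first, then normal, then diffuse color.
        [StructLayout(LayoutKind.Sequential)]
        public struct PositionNormalColored
        {
            public Vector3 Position;
            public Vector3 Normal;
            public int Color;
            public static readonly VertexFormat Format =
                VertexFormat.Position | VertexFormat.Normal | VertexFormat.Diffuse;
        }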

    Read the article

  • How to take the snapshot of an IE webpage through a BHO (C#)

    - by Kapil
    Hi, I am trying to build an IE BHO in C# for taking a snapshot of a webpage loaded in the IE browser. Here is what I'm trying to do:

        public class ShowToolbarBHO : BandObjectLib.IObjectWithSite
        {
            IWebBrowser2 webBrowser = null;

            public void SetSite(Object site)
            {
                .......
                if (site != null)
                {
                    ......
                    webBrowser = (IWebBrowser2)site;
                    ......
                }
            }
        }

    Also, I P/Invoke the following COM methods:

        [Guid("0000010D-0000-0000-C000-000000000046")]
        [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)]
        [ComImportAttribute()]
        public interface IViewObject
        {
            void Draw([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect,
                [In] IntPtr ptd, IntPtr hdcTargetDev, IntPtr hdcDraw,
                [MarshalAs(UnmanagedType.LPStruct)] ref COMRECT lprcBounds,
                [In] IntPtr lprcWBounds, IntPtr pfnContinue, int dwContinue);
            int GetColorSet([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect,
                [In] IntPtr ptd, IntPtr hicTargetDev, [Out] IntPtr ppColorSet);
            int Freeze([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect,
                out IntPtr pdwFreeze);
            int Unfreeze([MarshalAs(UnmanagedType.U4)] int dwFreeze);
            int SetAdvise([MarshalAs(UnmanagedType.U4)] int aspects, [MarshalAs(UnmanagedType.U4)] int advf,
                [MarshalAs(UnmanagedType.Interface)] IAdviseSink pAdvSink);
            void GetAdvise([MarshalAs(UnmanagedType.LPArray)] out int[] paspects,
                [MarshalAs(UnmanagedType.LPArray)] out int[] advf,
                [MarshalAs(UnmanagedType.LPArray)] out IAdviseSink[] pAdvSink);
        }

        [StructLayoutAttribute(LayoutKind.Sequential)]
        public class COMRECT
        {
            public int left;
            public int top;
            public int right;
            public int bottom;

            public COMRECT() { }

            public COMRECT(int left, int top, int right, int bottom)
            {
                this.left = left;
                this.top = top;
                this.right = right;
                this.bottom = bottom;
            }
        }

        [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)]
        [ComVisibleAttribute(true)]
        [GuidAttribute("0000010F-0000-0000-C000-000000000046")]
        [ComImportAttribute()]
        public interface IAdviseSink
        {
            void OnDataChange([In] IntPtr pFormatetc, [In] IntPtr pStgmed);
            void OnViewChange([MarshalAs(UnmanagedType.U4)] int dwAspect, [MarshalAs(UnmanagedType.I4)] int lindex);
            void OnRename([MarshalAs(UnmanagedType.Interface)] object pmk);
            void OnSave();
            void OnClose();
        }

    Now, when I take the snapshot, I make the call CaptureWebScreenImage((IHTMLDocument2)webBrowser.document);

        public static Image CaptureWebScreenImage(IHTMLDocument2 myDoc)
        {
            int heightsize = (int)getDocumentAttribute(myDoc, "scrollHeight");
            int widthsize = (int)getDocumentAttribute(myDoc, "scrollWidth");
            Bitmap finalImage = new Bitmap(widthsize, heightsize);
            Graphics gFinal = Graphics.FromImage(finalImage);

            COMRECT rect = new COMRECT();
            rect.left = 0;
            rect.top = 0;
            rect.right = widthsize;
            rect.bottom = heightsize;

            IntPtr hDC = gFinal.GetHdc();
            IViewObject vO = myDoc as IViewObject;
            vO.Draw(1, -1, (IntPtr)0, (IntPtr)0, (IntPtr)0, (IntPtr)hDC, ref rect, (IntPtr)0, (IntPtr)0, 0);
            gFinal.ReleaseHdc();
            gFinal.Dispose();
            return finalImage;
        }

    I am not getting an image of the webpage; rather, I am getting an image with a black background. I am not sure if this is the right way of doing it, but I found over the web that the IViewObject::Draw method is used for taking an image of a webpage in IE. I was earlier doing the image capture using the native PrintWindow() method as mentioned on the following CodeProject page: http://www.codeproject.com/KB/graphics/IECapture.aspx But the image size is humongous! I was trying to see if I can reduce the size by using other techniques.
    It would be great if someone could point out the mistakes (I am sure there are many) in my code above. Thanks, Kapil
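    One low-risk thing worth trying in code like this (an assumption, not a verified fix): a newly created Bitmap starts with every pixel set to transparent black, so if IViewObject::Draw fails silently or renders nothing, the result looks like a solid black image. Clearing the surface first makes that failure mode visible:

        // Fill the bitmap with white before handing its HDC to Draw; if the
        // output then comes back white instead of black, Draw itself is not rendering.
        Graphics gFinal = Graphics.FromImage(finalImage);
        gFinal.Clear(Color.White);
        IntPtr hDC = gFinal.GetHdc();
        // ... call IViewObject.Draw as above ...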

    Read the article

  • How to take the sanpshot of a IE webpage through a BHO (C#)

    - by Kapil
    Hi, I am trying to build an IE BHO in C# that takes a snapshot of the webpage loaded in the browser. Here is what I'm trying to do:

        public class ShowToolbarBHO : BandObjectLib.IObjectWithSite
        {
            IWebBrowser2 webBrowser = null;

            public void SetSite(Object site)
            {
                .......
                if (site != null)
                {
                    ......
                    webBrowser = (IWebBrowser2)site;
                    ......
                }
            }
        }

    I also P/Invoke the following COM methods:

        [Guid("0000010D-0000-0000-C000-000000000046")]
        [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)]
        [ComImportAttribute()]
        public interface IViewObject
        {
            void Draw([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect, [In] IntPtr ptd,
                IntPtr hdcTargetDev, IntPtr hdcDraw,
                [MarshalAs(UnmanagedType.LPStruct)] ref COMRECT lprcBounds,
                [In] IntPtr lprcWBounds, IntPtr pfnContinue, int dwContinue);
            int GetColorSet([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect, [In] IntPtr ptd,
                IntPtr hicTargetDev, [Out] IntPtr ppColorSet);
            int Freeze([MarshalAs(UnmanagedType.U4)] int dwDrawAspect, int lindex, IntPtr pvAspect, out IntPtr pdwFreeze);
            int Unfreeze([MarshalAs(UnmanagedType.U4)] int dwFreeze);
            int SetAdvise([MarshalAs(UnmanagedType.U4)] int aspects, [MarshalAs(UnmanagedType.U4)] int advf,
                [MarshalAs(UnmanagedType.Interface)] IAdviseSink pAdvSink);
            void GetAdvise([MarshalAs(UnmanagedType.LPArray)] out int[] paspects,
                [MarshalAs(UnmanagedType.LPArray)] out int[] advf,
                [MarshalAs(UnmanagedType.LPArray)] out IAdviseSink[] pAdvSink);
        }

        [StructLayoutAttribute(LayoutKind.Sequential)]
        public class COMRECT
        {
            public int left;
            public int top;
            public int right;
            public int bottom;

            public COMRECT() { }

            public COMRECT(int left, int top, int right, int bottom)
            {
                this.left = left;
                this.top = top;
                this.right = right;
                this.bottom = bottom;
            }
        }

        [InterfaceTypeAttribute(ComInterfaceType.InterfaceIsIUnknown)]
        [ComVisibleAttribute(true)]
        [GuidAttribute("0000010F-0000-0000-C000-000000000046")]
        [ComImportAttribute()]
        public interface IAdviseSink
        {
            void OnDataChange([In] IntPtr pFormatetc, [In] IntPtr pStgmed);
            void OnViewChange([MarshalAs(UnmanagedType.U4)] int dwAspect, [MarshalAs(UnmanagedType.I4)] int lindex);
            void OnRename([MarshalAs(UnmanagedType.Interface)] object pmk);
            void OnSave();
            void OnClose();
        }

    Now, when I take the snapshot, I make the call CaptureWebScreenImage((IHTMLDocument2)webBrowser.document):

        public static Image CaptureWebScreenImage(IHTMLDocument2 myDoc)
        {
            int heightsize = (int)getDocumentAttribute(myDoc, "scrollHeight");
            int widthsize = (int)getDocumentAttribute(myDoc, "scrollWidth");

            Bitmap finalImage = new Bitmap(widthsize, heightsize);
            Graphics gFinal = Graphics.FromImage(finalImage);

            COMRECT rect = new COMRECT();
            rect.left = 0;
            rect.top = 0;
            rect.right = widthsize;
            rect.bottom = heightsize;

            IntPtr hDC = gFinal.GetHdc();
            IViewObject vO = myDoc as IViewObject;
            vO.Draw(1, -1, (IntPtr)0, (IntPtr)0, (IntPtr)0, (IntPtr)hDC, ref rect, (IntPtr)0, (IntPtr)0, 0);
            gFinal.ReleaseHdc();
            gFinal.Dispose();
            return finalImage;
        }

    I am not getting an image of the webpage; instead I get an image with a black background. I am not sure this is the right way of doing it, but I found over the web that the IViewObject::Draw method is used for taking an image of a webpage in IE. I was earlier capturing the image using the native PrintWindow() method as described on this CodeProject page: http://www.codeproject.com/KB/graphics/IECapture.aspx. But the image size is humongous! I was trying to see if I can reduce the size by using other techniques.
    It would be great if someone could point out the mistakes (I am sure there are many) in my code above. Thanks, Kapil
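
    One thing worth double-checking in the declarations above: COMRECT is declared as a class (a reference type), so a class-typed parameter already marshals as a pointer to the struct data. Declaring it as `ref` with `UnmanagedType.LPStruct` hands IViewObject::Draw a pointer-to-pointer instead of the RECTL* it expects, which can leave the bounds unreadable on the native side and the output black. A hedged sketch of the commonly published declaration (as seen on pinvoke.net; untested against this exact BHO) - only the Draw signature changes, and the remaining methods must stay in place to preserve the COM vtable order:

        [Guid("0000010D-0000-0000-C000-000000000046")]
        [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
        [ComImport]
        public interface IViewObject
        {
            void Draw(
                [MarshalAs(UnmanagedType.U4)] int dwDrawAspect,  // DVASPECT_CONTENT = 1
                int lindex,
                IntPtr pvAspect,
                [In] IntPtr ptd,
                IntPtr hdcTargetDev,
                IntPtr hdcDraw,
                [In] COMRECT lprcBounds,   // class type is already passed as RECT*: no ref, no LPStruct
                [In] IntPtr lprcWBounds,
                IntPtr pfnContinue,
                int dwContinue);

            // GetColorSet, Freeze, Unfreeze, SetAdvise and GetAdvise follow,
            // unchanged from the question, to keep the vtable layout intact.
        }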

    Read the article

  • My PNG has transparency, but after saving with PHP GD, transparency is lost [closed]

    - by Harry Stroker
    I found the solution to my problem; the original post is below, with my solution at the very bottom. I made a stupid mistake :) First I crop an image and then save it to a PNG file. Right after this, I also show the image. However, the saved PNG does not have transparency and the shown one does. What is going on?

        $this->resource = imagecreatefrompng($this->url);
        imagealphablending($this->resource, false);
        imagesavealpha($this->resource, true);

        $newResource = imagecreatetruecolor($destWidth, $destHeight);
        imagealphablending($newResource, false);
        imagesavealpha($newResource, true);
        $resample = imagecopyresampled($newResource, $this->resource, 0, 0, $srcX1, $srcY1,
            $destWidth, $destHeight, $srcX2 - $srcX1, $srcY2 - $srcY1);

        imagedestroy($this->resource);
        $this->resource = $newResource;

        // SAVING
        imagepng($this->resource, $destination, 100);

        // SHOWING
        header('Content-type: image/png');
        imagepng($this->resource);

    The reason I also save the image is for caching. If the script is executed on a PNG, it saves a cached PNG. The next time the image is requested, the cached PNG file is shown, but it has lost its transparency. Even stranger: when I save that cached PNG (via Save As in Firefox), it is suddenly saved as a JPG, even though the extension was png. Downloading the cached PNG using Chrome and opening it in Photoshop gives the error "file-format module cannot parse the file". I will show you the shown PNG and the generated PNG: http://www.foodmuseum.nl/SaveProblemTransparency.png. Once I try to show that saved PNG with the GD library, it gives me an error.

    EDIT: NO, this is NOT a duplicate - I already used that solution. The solution in the supposed duplicate works for showing my image, but I also try to save it with the exact same resource, and then it has no transparency.

    EDIT 2 - SOLUTION: I found out what the problem was. It was a stupid mistake. The script I provided above was cut out of a class and presented as sequential code, which is not exactly what happens in reality. The save-image function:

        function saveImage($destination, $quality = 90)
        {
            $this->loadResource();
            switch ($extension) {
                default:
                case 'JPG':
                case 'jpg':
                    imagejpeg($this->resource, $destination, $quality);
                    break;
                case 'gif':
                    imagegif($this->resource, $destination);
                    break;
                case 'png':
                    imagepng($this->resource, $destination);
                    break;
                case 'gd2':
                    imagegd2($this->resource, $destination);
                    break;
            }
        }

    However... $extension does not exist, so the switch always falls through to the default (JPG) case. I fixed it by adding:

        $extension = $this->getExtension($destination);

    Read the article

  • I can't get SetSystemTime to work in Windows Vista using C# with Interop (P/Invoke).

    - by Andrew
    Hi, I'm having a hard time getting SetSystemTime to work in my C# code. SetSystemTime is a kernel32.dll function that I'm calling via P/Invoke (interop). SetSystemTime returns false, and the error is "Invalid Parameter". I've posted the code below. I stress that GetSystemTime works just fine. I've tested this on Vista and Windows 7. Based on some newsgroup postings I've seen, I have turned off UAC. No difference. I have done some searching for this problem and found this link: http://groups.google.com.tw/group/microsoft.public.dotnet.framework.interop/browse_thread/thread/805fa8603b00c267 where the problem is reported but no resolution seems to be found. Notice that UAC is also mentioned there, but I'm not sure that's the problem. Also notice that this gentleman gets no actual Win32 error. Can someone try my code on XP? Can someone tell me what I'm doing wrong and how to fix it? If the answer is to somehow change permission settings programmatically, I'd need an example. I would have thought turning off UAC would cover that, though. I'm not required to use this particular approach (SetSystemTime); I'm just trying to introduce some "clock drift" to stress test something. If there's another way to do it, please tell me. Frankly, I'm surprised I need to use interop to change the system time; I would have thought there was a .NET method. Thank you very much for any help or ideas. Andrew

    Code:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Runtime.InteropServices;

        namespace SystemTimeInteropTest
        {
            class Program
            {
                #region ClockDriftSetup
                [StructLayout(LayoutKind.Sequential)]
                public struct SystemTime
                {
                    [MarshalAs(UnmanagedType.U2)] public short Year;
                    [MarshalAs(UnmanagedType.U2)] public short Month;
                    [MarshalAs(UnmanagedType.U2)] public short DayOfWeek;
                    [MarshalAs(UnmanagedType.U2)] public short Day;
                    [MarshalAs(UnmanagedType.U2)] public short Hour;
                    [MarshalAs(UnmanagedType.U2)] public short Minute;
                    [MarshalAs(UnmanagedType.U2)] public short Second;
                    [MarshalAs(UnmanagedType.U2)] public short Milliseconds;
                }

                [DllImport("kernel32.dll")]
                public static extern void GetLocalTime(out SystemTime systemTime);

                [DllImport("kernel32.dll")]
                public static extern void GetSystemTime(out SystemTime systemTime);

                [DllImport("kernel32.dll", SetLastError = true)]
                public static extern bool SetSystemTime(ref SystemTime systemTime);

                //[DllImport("kernel32.dll", SetLastError = true)]
                //public static extern bool SetLocalTime(ref SystemTime systemTime);

                [System.Runtime.InteropServices.DllImportAttribute("kernel32.dll", EntryPoint = "SetLocalTime")]
                [return: System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.Bool)]
                public static extern bool SetLocalTime([InAttribute()] ref SystemTime lpSystemTime);
                #endregion ClockDriftSetup

                static void Main(string[] args)
                {
                    try
                    {
                        SystemTime sysTime;
                        GetSystemTime(out sysTime);
                        sysTime.Milliseconds += (short)80;
                        sysTime.Second += (short)3000;
                        bool bResult = SetSystemTime(ref sysTime);
                        if (bResult == false)
                            throw new System.ComponentModel.Win32Exception();
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Drift Error: " + ex.Message);
                    }
                }
            }
        }
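
    The "Invalid Parameter" here has a likely mundane cause: each SYSTEMTIME field must stay in its valid range (Second 0-59, Milliseconds 0-999), so adding 3000 directly to the Second field produces an out-of-range value that SetSystemTime rejects. A hedged sketch of one way around it, doing the arithmetic on a DateTime (in UTC, since SetSystemTime expects UTC) and converting back; this reuses the SystemTime struct and SetSystemTime declaration from the question:

        // Sketch: introduce clock drift by computing the target time with DateTime,
        // then converting to a SYSTEMTIME whose fields are all in valid ranges.
        static void DriftClock()
        {
            DateTime target = DateTime.UtcNow.AddSeconds(3000).AddMilliseconds(80);

            SystemTime st = new SystemTime
            {
                Year = (short)target.Year,
                Month = (short)target.Month,
                DayOfWeek = (short)target.DayOfWeek,
                Day = (short)target.Day,
                Hour = (short)target.Hour,
                Minute = (short)target.Minute,
                Second = (short)target.Second,
                Milliseconds = (short)target.Millisecond
            };

            // Still requires the SeSystemtimePrivilege, so run the process elevated.
            if (!SetSystemTime(ref st))
                throw new System.ComponentModel.Win32Exception();
        }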

    Read the article

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a trunk/release system in place. We have a large website that we develop on and we store it all in SVN. Here's what we had in mind:

    - We have trunk and a release branch.
    - All work gets checked into trunk.
    - When a feature is deemed ready for the next release, it is merged into the release branch.
    - We only have one release branch and just tag "Latest" when we do a push to live.
    - We hope to be able to get all the files changed from Latest to HEAD to give us a zip that we can upload (any ideas on an easy way to do this via scripting?).

    So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once, and they don't all get checked in feature-complete (but always working, at least). Then sometimes you have to wait for clients to sign off. As a result, you end up with revisions that are "ready for live" scattered among ones that are "still being worked on" in trunk. That means the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing it is, but apparently not. Here's an example:

    - Pete changes some CSS to make a new button look pretty (revision 1).
    - Dave adds some CSS to the bottom of the same CSS file as Pete's, for a new feature (revision 2).
    - Dave's mod gets the nod, so he merges it into release and commits it with a log message mentioning the revision number and bug-tracking ID.
    - Pete adds more buttons to finish his mod, with no CSS changes this time (revision 3).
    - Pete then merges his mods (revisions 1 and 3) into the head of release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely.

    This leads to the site being broken and the release branch being pretty much useless. So we tried some other ideas, like reverting release back to "Latest" and then merging in revisions 1, 2 and 3 in order. This worked fine until we had revision 4, which was not ready for live, and revision 5, which was. Suddenly we were in knots again with exactly the same problem! OK, so take three: revert to Latest, merge in revision 5, then do an update back to HEAD. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in release. If it's not possible, that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, so it would take too long. Side note: we are using TortoiseSVN, so can we keep command-line examples to a minimum in any answers? Latest version of TSVN, and SVN version 1.6, so we have the funky merge tracking etc.

    EDIT: An excellent blog post which deals with the dev/release cycle (it uses Git, but it's still relevant); thought everyone would like to read it if they found this question interesting:
    http://nvie.com/git-model

    EDIT 2: I wrote a blog post, which others have asked me about, on how to show which branch you are working on in your website (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)

    Read the article

  • calling delphi dll from c#

    - by Wouter Roux
    Hi, I have a Delphi DLL with a record and two exports defined like this:

        TMPData = record
          Lastname, Firstname: array[0..40] of char;
          Birthday: TDateTime;
          Pid: array[0..16] of char;
          Title: array[0..20] of char;
          Female: Boolean;
          Street: array[0..40] of char;
          ZipCode: array[0..10] of char;
          City: array[0..40] of char;
          Phone, Fax, Department, Company: array[0..20] of char;
          Pn: array[0..40] of char;
          In: array[0..16] of char;
          Hi: array[0..8] of char;
          Account: array[0..20] of char;
          Valid, Status: array[0..10] of char;
          Country, NameAffix: array[0..20] of char;
          W, H: single;
          Bp: array[0..10] of char;
          SocialSecurityNumber: array[0..9] of char;
          State: array[0..2] of char;
        end;

        function Init(const tmpData: TMPData; var ErrorCode: integer; ResetFatalError: boolean = false): boolean;
        procedure GetData(out tmpData: TMPData);

    My current C# signatures look like this:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public struct TMPData
        {
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Lastname;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Firstname;
            [MarshalAs(UnmanagedType.R8)] public double Birthday;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 16)] public string Pid;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Title;
            [MarshalAs(UnmanagedType.Bool)] public bool Female;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Street;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string ZipCode;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string City;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Phone;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Fax;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Department;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Company;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 40)] public string Pn;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 16)] public string In;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 8)] public string Hi;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Account;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Valid;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Status;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string Country;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 20)] public string NameAffix;
            [MarshalAs(UnmanagedType.I4)] public int W;
            [MarshalAs(UnmanagedType.I4)] public int H;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 10)] public string Bp;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 9)] public string SocialSecurityNumber;
            [MarshalAs(UnmanagedType.LPStr, SizeConst = 2)] public string State;
        }

        [DllImport("MyDll.dll")]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool Init(TMPData tmpData, int ErrorCode, bool ResetFatalError);

        [DllImport("MyDll.dll")]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool GetData(out TMPData tmpData);

    I first call Init, setting Birthday, Lastname and Firstname. I then call GetData, but the TMPData structure I get back is incorrect: the Firstname, Lastname and Birthday fields are populated, but the data is wrong. Is the mapping correct? Is "array[0..40] of char" equal to "[MarshalAs(UnmanagedType.LPStr, SizeConst = 40)]"?
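
    For what it's worth, that mapping is not correct: `array[0..40] of char` in Delphi is an inline buffer of 41 ANSI characters, not a `char*`, so `UnmanagedType.LPStr` marshals a pointer and shifts every field that follows; `single` is a 4-byte float (`R4`), not an `I4` int; Delphi's `Boolean` is one byte, not the 4-byte Win32 BOOL; and `var ErrorCode: integer` is a by-reference parameter. A hedged sketch of a corrected declaration (assumptions: the exports are declared `stdcall` - Delphi's default `register` convention cannot be called through P/Invoke - and the record uses default alignment):

        // Sketch only: field-by-field mapping of the Delphi record above.
        // ByValTStr embeds the characters inline; SizeConst is the Delphi
        // upper bound + 1 (array[0..40] has 41 elements).
        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public struct TMPData
        {
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string Lastname;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string Firstname;
            public double Birthday;                                // TDateTime is a double
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 17)] public string Pid;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Title;
            [MarshalAs(UnmanagedType.U1)] public bool Female;      // Delphi Boolean is 1 byte
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string Street;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 11)] public string ZipCode;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string City;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Phone;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Fax;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Department;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Company;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 41)] public string Pn;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 17)] public string In;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 9)]  public string Hi;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Account;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 11)] public string Valid;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 11)] public string Status;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string Country;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 21)] public string NameAffix;
            public float W;                                        // single = 4-byte float
            public float H;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 11)] public string Bp;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 10)] public string SocialSecurityNumber;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 3)]  public string State;
        }

        // const/var record parameters are passed by reference, and GetData is a
        // procedure (void). Delphi Boolean values are 1 byte, hence U1.
        [DllImport("MyDll.dll")]
        [return: MarshalAs(UnmanagedType.U1)]
        public static extern bool Init(ref TMPData tmpData, ref int ErrorCode,
            [MarshalAs(UnmanagedType.U1)] bool ResetFatalError);

        [DllImport("MyDll.dll")]
        public static extern void GetData(out TMPData tmpData);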

    Read the article

  • Disable antialiasing for a specific GDI device context

    - by Jacob Stanley
    I'm using a third-party library to render an image to a GDI DC, and I need to ensure that any text is rendered without smoothing/antialiasing so that I can convert the image to a predefined palette with indexed colors. The third-party library I'm using for rendering doesn't support this and just renders text according to the current Windows settings for font rendering. They've also said it's unlikely they'll add the ability to switch antialiasing off any time soon. The best workaround I've found so far is to call the third-party library this way (error handling and prior-settings checks omitted for brevity):

        private static void SetFontSmoothing(bool enabled)
        {
            int pv = 0;
            SystemParametersInfo(Spi.SetFontSmoothing, enabled ? 1 : 0, ref pv, Spif.None);
        }

        // snip
        Graphics graphics = Graphics.FromImage(bitmap);
        IntPtr deviceContext = graphics.GetHdc();

        SetFontSmoothing(false);
        thirdPartyComponent.Render(deviceContext);
        SetFontSmoothing(true);

    This obviously has a horrible effect on the operating system: other applications flicker from ClearType enabled to disabled and back every time I render the image. So the question is: does anyone know how I can alter the font-rendering settings for a specific DC? Even if I could just make the changes process- or thread-specific instead of affecting the whole operating system, that would be a big step forward! (That would give me the option of farming this rendering out to a separate process - the results are written to disk after rendering anyway.)

    EDIT: I'd like to add that I don't mind if the solution is more complex than just a few API calls. I'd even be happy with a solution that involved hooking system DLLs if it were only about a day's work.

    EDIT: Background information. The third-party library renders using a palette of about 70 colors. After the image (which is actually a map tile) is rendered to the DC, I convert each pixel from its 32-bit color back to its palette index and store the result as an 8bpp greyscale image. This is uploaded to the video card as a texture. During rendering, I re-apply the palette (also stored as a texture) with a pixel shader executing on the video card. This allows me to switch and fade between different palettes instantaneously instead of needing to regenerate all the required tiles. It takes between 10 and 60 seconds to generate and upload all the tiles for a typical view of the world.

    EDIT: Renamed GraphicsDevice to Graphics. The class GraphicsDevice in the previous version of this question is actually System.Drawing.Graphics. I had renamed it (using GraphicsDevice = ...) because the code in question is in the namespace MyCompany.Graphics and the compiler wasn't able to resolve it properly.

    EDIT: Success! I even managed to port the PatchIat function below to C# with the help of Marshal.GetFunctionPointerForDelegate. The .NET interop team really did a fantastic job!
    I'm now using the following syntax, where Patch is an extension method on System.Diagnostics.ProcessModule:

        module.Patch(
            "Gdi32.dll",
            "CreateFontIndirectA",
            (CreateFontIndirectA original) => font =>
            {
                font->lfQuality = NONANTIALIASED_QUALITY;
                return original(font);
            });

        private unsafe delegate IntPtr CreateFontIndirectA(LOGFONTA* lplf);

        private const int NONANTIALIASED_QUALITY = 3;

        [StructLayout(LayoutKind.Sequential)]
        private struct LOGFONTA
        {
            public int lfHeight;
            public int lfWidth;
            public int lfEscapement;
            public int lfOrientation;
            public int lfWeight;
            public byte lfItalic;
            public byte lfUnderline;
            public byte lfStrikeOut;
            public byte lfCharSet;
            public byte lfOutPrecision;
            public byte lfClipPrecision;
            public byte lfQuality;
            public byte lfPitchAndFamily;
            public unsafe fixed sbyte lfFaceName[32];
        }

    Read the article

  • powermock : ProcessBuilder redirectErrorStream giving nullPointerException

    - by kaustubh9
    I am using PowerMock to mock some native command invocation using ProcessBuilder. The strange thing is that these tests sometimes pass and sometimes fail with an NPE. Is this a PowerMock issue or some gotcha in the program? The snippet of the class under test is:

        public void method1(String jsonString, String filename) {
            try {
                JSONObject jObj = new JSONObject(jsonString);
                JSONArray jArr = jObj.getJSONArray("something");
                String cmd = "/home/y/bin/perl <perlscript>.pl<someConstant>"
                        + " -k " + <someConstant> + " -t " + <someConstant>;
                cmd += vmArr.getJSONObject(i).getString("jsonKey");
                ProcessBuilder pb = new ProcessBuilder("bash", "-c", cmd);
                pb.redirectErrorStream(false);
                Process shell = pb.start();
                shell.waitFor();
                if (shell.exitValue() != 0) {
                    throw new RuntimeException("Error in Collecting the logs. cmd=" + cmd);
                }
                StringBuilder error = new StringBuilder();
                InputStream iError = shell.getErrorStream();
                BufferedReader bfr = new BufferedReader(new InputStreamReader(iError));
                String line = null;
                while ((line = bfr.readLine()) != null) {
                    error.append(line + "\n");
                }
                if (!error.toString().isEmpty()) {
                    LOGGER.error(error);
                }
                iError.close();
                bfr.close();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

    and the unit test case is:

        @PrepareForTest({ <ClassToBeTested>.class, ProcessBuilder.class, Process.class,
                InputStream.class, InputStreamReader.class, BufferedReader.class })
        @Test(sequential = true)
        public class TestClass {

            @Test(groups = { "unit" })
            public void testMethod() {
                try {
                    ProcessBuilder prBuilderMock = createMock(ProcessBuilder.class);
                    Process processMock = createMock(Process.class);
                    InputStream iStreamMock = createMock(InputStream.class);
                    InputStreamReader iStrRdrMock = createMock(InputStreamReader.class);
                    BufferedReader bRdrMock = createMock(BufferedReader.class);
                    String errorStr = " Error occured";
                    String json = <jsonStringInput>;
                    String cmd = "/home/y/bin/perl <perlscript>.pl -k " + <someConstant>
                            + " -t " + <someConstant> + " " + <jsonValue>;

                    expectNew(ProcessBuilder.class, "bash", "-c", cmd).andReturn(prBuilderMock);
                    expect(prBuilderMock.redirectErrorStream(false)).andReturn(prBuilderMock);
                    expect(prBuilderMock.start()).andReturn(processMock);
                    expect(processMock.waitFor()).andReturn(0);
                    expect(processMock.exitValue()).andReturn(0);
                    expect(processMock.getErrorStream()).andReturn(iStreamMock);
                    expectNew(InputStreamReader.class, iStreamMock).andReturn(iStrRdrMock);
                    expectNew(BufferedReader.class, iStrRdrMock).andReturn(bRdrMock);
                    expect(bRdrMock.readLine()).andReturn(errorStr);
                    expect(bRdrMock.readLine()).andReturn(null);
                    iStreamMock.close();
                    bRdrMock.close();
                    expectLastCall().once();
                    replayAll();

                    <ClassToBeTested> instance = new <ClassToBeTested>();
                    instance.method1(json, fileName);
                    verifyAll();
                } catch (Exception e) {
                    Assert.fail("failed while collecting log.", e);
                }
            }
        }

    I get an error on execution and the test case fails:

        Caused by: java.lang.NullPointerException
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:438)

    Note: I do not get this error on all executions. Sometimes it passes and sometimes it fails. I am not able to understand this behavior.

    Read the article

  • Marshal a C# struct to C++ VARIANT

    - by jortan
    To start with, I'm not very familiar with COM technology - this is the first time I'm working with it, so bear with me. I'm trying to call a COM object function from C#. This is the interface in the IDL file:

        [id(6), helpstring("vConnectInfo=ConnectInfoType")]
        HRESULT ConnectTarget([in,out] VARIANT* vConnectInfo);

    This is the interop interface I got after running tlbimp:

        void ConnectTarget(ref object vConnectInfo);

    The C++ code in the COM object for the target function:

        STDMETHODIMP PCommunication::ConnectTarget(VARIANT* vConnectInfo)
        {
            if (!((vConnectInfo->vt & VT_ARRAY) && (vConnectInfo->vt & VT_BYREF)))
            {
                return E_INVALIDARG;
            }
            ConnectInfoType *pConnectInfo = (ConnectInfoType *)((*vConnectInfo->pparray)->pvData);
            ...
        }

    This COM object is running in another process; it is not in a DLL. I can add that the COM object is also used from another program, written in C++. In that case there is no problem, because in C++ a VARIANT is created, pparray->pvData is set to the connInfo data structure, and then the COM object is called with the VARIANT as parameter. In C#, as I understand it, my struct should be marshalled as a VARIANT automatically. These are two methods I've been using to call this function from C# (I've actually tried a lot more...):

        private void method1_Click(object sender, EventArgs e)
        {
            pcom.PCom PCom = new pcom.PCom();
            pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom;
            m_ci = new ConnectInfoType();
            fillConnInfo(ref m_ci);
            mgmt.ConnectTarget(m_ci);
        }

    In the above case the struct gets marshalled as VT_UNKNOWN. This is a simple case and works if the parameter is not a struct (e.g. it works for int).

        private void method4_Click(object sender, EventArgs e)
        {
            ConnectInfoType ci = new ConnectInfoType();
            fillConnInfo(ref ci);

            pcom.PCom PCom = new pcom.PCom();
            pcom.IGeneralManagementServices mgmt = (pcom.IGeneralManagementServices)PCom;

            ParameterModifier[] pms = new ParameterModifier[1];
            ParameterModifier pm = new ParameterModifier(1);
            pm[0] = true;
            pms[0] = pm;

            object[] param = new object[1];
            param[0] = ci;
            object[] args = new object[1];
            args[0] = param;

            mgmt.GetType().InvokeMember("ConnectTarget", BindingFlags.InvokeMethod,
                null, mgmt, args, pms, null, null);
        }

    In this case it gets marshalled as VT_ARRAY | VT_BYREF | VT_VARIANT. The problem is that when debugging the target function ConnectTarget, I cannot find the data I sent in the SAFEARRAY struct (or in any other place in memory either). What do I do with a VT_VARIANT? Any ideas on how to get my struct data across?

    Update: the ConnectInfoType struct:

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        public class ConnectInfoType
        {
            public short network;
            public short nodeNumber;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 51)] public string connTargPassWord;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)] public string sConnectId;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)] public string sConnectPassword;
            public EnuConnectType eConnectType;
            public int hConnectHandle;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)] public string sAccessPassword;
        };

    And the corresponding struct in C++:

        typedef struct ConnectInfoType
        {
            short network;
            short nodeNumber;
            char connTargPassWord[51];
            char sConnectId[8];
            char sConnectPassword[16];
            EnuConnectType eConnectType;
            int hConnectHandle;
            char sAccessPassword[8];
        } ConnectInfoType;
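
    Since the C++ side only checks for VT_ARRAY | VT_BYREF and then reads the struct straight out of pvData, one hedged approach is to hand it exactly that: a byte array containing the marshaled struct, passed by reference so the late binder produces a VT_ARRAY | VT_BYREF variant. A sketch under two stated assumptions - that the late binder wraps a byte[] as a one-dimensional SAFEARRAY, and that the C# struct layout already matches the C++ definition (comparing Marshal.SizeOf against sizeof(ConnectInfoType) in C++ is a quick way to catch packing mismatches):

        // Sketch: serialize the struct into a byte[] and pass it byref, so the
        // VARIANT reaching the COM server is an array whose pvData holds the raw
        // struct bytes rather than nested VARIANTs.
        ConnectInfoType ci = new ConnectInfoType();
        fillConnInfo(ref ci);

        int size = Marshal.SizeOf(typeof(ConnectInfoType)); // compare with C++ sizeof(ConnectInfoType)
        byte[] raw = new byte[size];
        IntPtr buf = Marshal.AllocHGlobal(size);
        try
        {
            Marshal.StructureToPtr(ci, buf, false);
            Marshal.Copy(buf, raw, 0, size);
        }
        finally
        {
            Marshal.FreeHGlobal(buf);
        }

        object[] args = { raw };
        ParameterModifier byRef = new ParameterModifier(1);
        byRef[0] = true; // ask the binder to pass the argument by reference
        mgmt.GetType().InvokeMember("ConnectTarget", BindingFlags.InvokeMethod,
            null, mgmt, args, new[] { byRef }, null, null);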

    Read the article

  • Help needed with drawRect:

    - by Andrew Coad
    Hi, I'm having a fundamental issue with the use of drawRect:. Any advice would be greatly appreciated. The application needs to draw a variety of .png images at different times, sometimes with animation, sometimes without. A design goal I was hoping to adhere to is to keep the code inside drawRect: very simple and "dumb" - i.e. just do drawing and no other application logic. To draw the images I am using the drawAtPoint: method of UIImage. Since this method does not take a CGContext as a parameter, it can only be called within the drawRect: method. So I have:

        - (void)drawRect:(CGRect)rect
        {
            [firstImage drawAtPoint:CGPointMake(firstOffsetX, firstOffsetY)];
        }

    All fine and dandy for one image. To draw multiple images (over time), the approach I have taken is to maintain an array of dictionaries, with each dictionary containing an image, the point location to draw at, and a flag to enable/suppress drawing for that image. I add dictionaries to the array over time and trigger drawing via the setNeedsDisplay: method of UIView. Using an array of dictionaries allows me to completely reconstruct the entire display at any time. drawRect: now becomes:

        - (void)drawRect:(CGRect)rect
        {
            for (NSMutableDictionary *imageDict in [self imageDisplayList]) {
                if ([[imageDict objectForKey:@"needsDisplay"] boolValue]) {
                    [[imageDict objectForKey:@"image"] drawAtPoint:[[imageDict objectForKey:@"location"] CGPointValue]];
                    [imageDict setValue:[NSNumber numberWithBool:NO] forKey:@"needsDisplay"];
                }
            }
        }

    Still OK. The code is simple and compact. Animating this is where I run into problems. The first problem is: where do I put the animation code? Do I put it in UIView or UIViewController? If in UIView, do I put it in drawRect: or elsewhere? Because the actual animation depends on the overall state of the application, I would need nested switch statements which, if put in drawRect:, would look something like this:

        - (void)drawRect:(CGRect)rect
        {
            for (NSMutableDictionary *imageDict in [self imageDisplayList]) {
                if ([[imageDict objectForKey:@"needsDisplay"] boolValue]) {
                    switch ([self currentState]) {
                        case STATE_1:
                            switch ([[imageDict objectForKey:@"animationID"] intValue]) {
                                case ANIMATE_FADE_IN:
                                    [self setAlpha:0.0];
                                    [UIView beginAnimations:[[imageDict objectForKey:@"animationID"] intValue] context:nil];
                                    [UIView setAnimationDelegate:self];
                                    [UIView setAnimationCurve:UIViewAnimationCurveEaseIn];
                                    [UIView setAnimationDuration:2];
                                    [self setAlpha:1.0];
                                    break;
                                case ANIMATE_FADE_OUT:
                                    [self setAlpha:1.0];
                                    [UIView beginAnimations:[[imageDict objectForKey:@"animationID"] intValue] context:nil];
                                    [UIView setAnimationDelegate:self];
                                    [UIView setAnimationCurve:UIViewAnimationCurveEaseOut];
                                    [UIView setAnimationDuration:2];
                                    [self setAlpha:0.0];
                                    break;
                                case ANIMATE_OTHER:
                                    // similar code here
                                    break;
                                default:
                                    break;
                            }
                            break;
                        case STATE_2:
                            // similar code here
                            break;
                        default:
                            break;
                    }
                    [[imageDict objectForKey:@"image"] drawAtPoint:[[imageDict objectForKey:@"location"] CGPointValue]];
                    [imageDict setValue:[NSNumber numberWithBool:NO] forKey:@"needsDisplay"];
                }
            }
            [UIView commitAnimations];
        }

    In addition, to make multiple sequential animations work correctly, there would need to be an outer controlling mechanism involving the animation delegate's animationDidStop: callback that would set the needsDisplay entries in the dictionaries to allow/suppress drawing (and animation). The point we're at now is that it all starts to look very ugly.
    More specifically:

    - drawRect: starts to bloat quickly and contain code that is not "just drawing" code
    - the UIView needs implicit awareness of the application state
    - the overall process of drawing is now spread across three methods at a minimum

    And on to the point of this post: how can I do this better? What would the experts out there recommend in terms of overall structure? How can I keep application state information out of the view? Am I looking at this problem from the wrong direction? Is there some completely different approach that I should consider?

    Read the article

  • Slicing a time range into parts

    - by beporter
    First question. Be gentle. I'm working on software that tracks technicians' time spent working on tasks. The software needs to be enhanced to recognize different billable-rate multipliers based on the day of the week and the time of day. (For example, "time and a half after 5 PM on weekdays.") The tech using the software is only required to log the date, his start time and his stop time (in hours and minutes). The software is expected to break the time entry into parts at the boundaries where the rate multipliers change. A single time entry is not permitted to span multiple days. Here is a partial sample of the rate table. The first-level array keys are the days of the week, obviously. The second-level array keys represent the time of day when the new multiplier kicks in, which runs until the next sequential entry in the array. The array values are the multipliers for those time ranges.

        [rateTable] => Array
        (
            [Monday] => Array
            (
                [00:00:00] => 1.5
                [08:00:00] => 1
                [17:00:00] => 1.5
                [23:59:59] => 1
            )
            [Tuesday] => Array
            (
                [00:00:00] => 1.5
                [08:00:00] => 1
                [17:00:00] => 1.5
                [23:59:59] => 1
            )
            ...
        )

    In plain English, this represents a time-and-a-half rate from midnight to 8 AM, the regular rate from 8 AM to 5 PM, and time-and-a-half again from 5 PM to 11:59 PM. The times at which these breaks occur may be arbitrary to the second, and there can be an arbitrary number of them for each day. (This format is entirely negotiable, but my goal is to make it as easily human-readable as possible.)

    As an example: a time entry logged on Monday from 15:00:00 (3 PM) to 21:00:00 (9 PM) would consist of 2 hours billed at 1x and 4 hours billed at 1.5x. It is also possible for a single time entry to span multiple breaks. Using the example rateTable above, a time entry from 6 AM to 9 PM would have three sub-ranges: 6-8 AM @ 1.5x, 8 AM-5 PM @ 1x, and 5-9 PM @ 1.5x. By contrast, it's also possible that a time entry may run only from 08:15:00 to 08:30:00 and be entirely encompassed by the range of a single multiplier.

    I could really use some help coding up some PHP (or at least devising an algorithm) that can take a day of the week, a start time and a stop time and parse them into the required subparts. It would be ideal to have the output be an array consisting of multiple (start, stop, multiplier) triplets. For the above example, the output would be:

        [output] => Array
        (
            [0] => Array
            (
                [start] => 15:00:00
                [stop] => 17:00:00
                [multiplier] => 1
            )
            [1] => Array
            (
                [start] => 17:00:00
                [stop] => 21:00:00
                [multiplier] => 1.5
            )
        )

    I just plain can't wrap my head around the logic of splitting a single (start, stop) into (potentially) multiple subparts.
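
    The slicing itself is a small sweep over the day's sorted break times: each table entry defines a segment that runs to the next entry, and the answer is the intersection of the logged (start, stop) with each segment. A hedged sketch of the algorithm, shown in C# for concreteness (the PHP translation is mechanical; names and types here are illustrative):

        using System;
        using System.Collections.Generic;

        // Sketch: slice [start, stop) against one day's rate table, where the table
        // is a sorted map of (timeOfDay -> multiplier) meaning "this multiplier
        // applies from timeOfDay until the next entry (or end of day)".
        static List<(TimeSpan Start, TimeSpan Stop, double Multiplier)> Slice(
            TimeSpan start, TimeSpan stop,
            SortedList<TimeSpan, double> rateTable)
        {
            var parts = new List<(TimeSpan, TimeSpan, double)>();
            var breaks = rateTable.Keys;

            for (int i = 0; i < breaks.Count; i++)
            {
                TimeSpan segStart = breaks[i];
                TimeSpan segStop = (i + 1 < breaks.Count) ? breaks[i + 1] : TimeSpan.FromHours(24);

                // Intersect the logged entry with this multiplier's segment.
                TimeSpan s = segStart > start ? segStart : start;
                TimeSpan e = segStop < stop ? segStop : stop;
                if (s < e)
                    parts.Add((s, e, rateTable.Values[i]));
            }
            return parts;
        }

    Run against Monday's table above with a 15:00:00-21:00:00 entry, this returns (15:00:00, 17:00:00, 1) and (17:00:00, 21:00:00, 1.5), matching the expected output; an entry entirely inside one segment simply yields a single triplet.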

    Read the article

  • How to select table column names in a view and pass to controller in rails?

    - by zachd1_618
    So I am new to Rails, and to OO programming in general, though I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused by params, forms, and select helpers. What I want to do is use Rails drop-downs to pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to my controller. For simplicity's sake, say my database schema looks like this:

        --------------Plot---------------
        |____x____|____y1____|____y2____|
        |    1    |    1     |    1     |
        |    2    |    2     |    4     |
        |    3    |    3     |    9     |
        |    4    |    4     |    16    |
        |    5    |    5     |    25    |

    In my model, I have dynamic selector scopes that let me select just certain columns of data. In Plot.rb:

        class Plot < ActiveRecord::Base
          scope :select_var, lambda { |varname| select(varname) }
          scope :between_x, lambda { |x1, x2| where("x BETWEEN ? and ?", "#{x1}", "#{x2}") }
        end

    So I can call:

        irb>> @p1 = Plot.select_var(['x', 'y1']).between_x(1, 3)

    and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values between x=1 and x=3, which I plot dynamically. I want to start off in a view (plot/index) where I can dynamically select which variable names (table column names) and which rows to fetch from the database and plot. The problem is that most select helpers work with rows from the database, not columns. So to select columns, I first get an array of the column names that exist in my database, using a function I wrote. In the Plots controller:

        def index
          d = Plot.first
          @tags = d.list_vars
        end

    So @tags = ['x', 'y1', 'y2']. Then in plot/index.html.erb I try to use a drop-down to select which variables I send back to the controller:

        <%= select_tag(:variable, options_for_select(@plots.first.list_vars, :name, :multiple => :true)) %>
        <%= button_to 'Plot now!', :controller => "plots/plot_vars", :variable => params[:variable] %>

    Finally, in the controller again:

        def plot_vars
          @plot_data = Plot.select_vars([params[:variable]])
        end

    The problem is that every time I try this (or one of a hundred variations thereof), params[:variable] is nil. How can I use a drop-down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-( I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities; I really have a data structure, not a data object. Trying to work with the structure in data-object terms is not necessarily the easiest thing to do, I think. For background: my actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I could make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer who spends months setting up a web interface and their own janky plotting setup. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection and this Rails setup will automatically show and plot whichever data you want. I am more of a sequential programmer and number cruncher, as are many people here.
    I think this project could be very helpful in the end, but it's difficult for me to figure out right now.

    Read the article

  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (one that is not defined on any interface like ISeq, IndexedSeq, etc.) on a custom sequence type I wrote?

    1. Huge data files

    I have big files in the following format:

    - a long (8 bytes) containing the number n of entries;
    - n entries, each one composed of 3 longs (i.e., 24 bytes).

    2. Custom sequence

    I want to have a sequence over these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access to it, I wrote a class similar to the following:

        (deftype DataSeq [id ^long cnt ^long i cached-seq]
          clojure.lang.IndexedSeq
          (index [_] i)
          (count [_] (- cnt i))
          (seq [this] this)
          (first [_] (first cached-seq))
          (more [this] (if-let [s (next this)] s '()))
          (next [_] (if (not= (inc i) cnt)
                      (if (next cached-seq)
                        (DataSeq. id cnt (inc i) (next cached-seq))
                        (DataSeq. id cnt (inc i)
                                  (with-open [f (open-data-file id)]
                                    ; open a memory mapped byte array on the file
                                    ; seek to the exact position to begin reading
                                    ; decide on an optimal amount of data to read
                                    ; eagerly read and return that amount of data
                                    ))))))

    The main idea is to read ahead a bunch of entries into a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file into a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like:

        (defn ^DataSeq load-data [id]
          (next (DataSeq. id (count-entries id) -1 [])))
        ; count-entries is a trivial "open file and read a long", memoized

    As you can see, the format of the data allowed me to implement count very simply and efficiently.

    3. drop could be O(1)

    In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows:

    - if dropping fewer than the remaining cached items, just drop the same amount from the cache and be done;
    - if dropping more than cnt, just return the empty list;
    - otherwise, just figure out the position in the data file, jump right to that position, and read data from there.

    My difficulty is that drop is not implemented the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check whether the instance of the sequence it is called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks. So, the question is: is it possible to override the default behaviour of drop?

    4. A workaround (which I dislike)

    While writing this question, I figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, which would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, and all subsequent calls would return data from this cache. There would be similar logic in next: if there's a cache, just next it; otherwise, don't bother populating it - that will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal - it is still O(n), and it could easily be O(1).
    Anyway, I don't like this workaround, and my question is still open. Any thoughts? Thanks.

    Read the article

  • Best ASP.NET E-Commerce Framework (aspDotNetStoreFront, AbleCommerce, BVCommerce, MediaChase,...)

    - by EfficionDave
    I need a full-featured ASP.NET e-commerce solution for a project. It needs to be easily extensible, as I need to extend it to integrate with a custom Flash-based personalization engine. It should also have a great administrative back end that handles inventory tracking, reporting, labels, shipping, etc., so that the client can really use it as the basis for their business. I've used quite a few different e-commerce systems, including OSCommerce, Zen Cart, Catalook, and eTailer. While I was impressed with Zen Cart and eTailer, this project is big enough that I want to try a more serious, commercial offering. I'm particularly interested in thoughts on aspDotNetStorefront, BVCommerce, AbleCommerce, and MediaChase.

    [UPDATE] I've done quite a bit of evaluation since I posted this question and will give some of my findings below:

    BVCommerce - The price point is nice, the product seems fairly well regarded, and I've heard good things about customizing and extending it. It just doesn't seem like it's going to meet my needs as a top-tier e-commerce solution. After viewing the tutorial videos, the admin interface just seems too basic and outdated.

    aspDotNetStorefront - The integration with DotNetNuke was its biggest selling point for me, but after reading people's opinions, that integration seems quite weak. There sounds like a significant learning curve, and the architecture just doesn't sound as clean as it should be.

    AbleCommerce - After a rather thorough evaluation, I have selected AbleCommerce for at least my next project. The admin interface is beautiful and has a nice list of features. The e-commerce portion of the user side is very well done, highly skinnable (using master pages and a couple of other techniques), and the checkout workflow has all the features I need while still being very clean. Source to the API can be bought for $500, but you generally won't need it. There are some third-party modules available, but there's not currently a large market for them. The biggest glaring weakness of AbleCommerce I've found is that its CMS capabilities are very limited and not at all well suited to a non-technical user making site content updates. But I have a good solution for that, which I plan to adapt into AbleCommerce. They will automatically generate a site for you to play with to your heart's content for 30 days; it is definitely worth checking out.

    Read the article

  • Entity Framework won't SaveChanges on new entity with two-level relationship

    - by Tim Rourke
    I'm building an ASP.NET MVC site using the ADO.NET Entity Framework. I have an entity model that includes these entities, associated by foreign keys:

    - Report (ID, Date, Heading, Report_Type_ID, etc.)
    - SubReport (ID, ReportText, etc.) - one-to-one relationship with Report.
    - ReportSource (ID, Name, Description) - one-to-many relationship with SubReport.
    - ReportSourceType (ID, Name, Description) - one-to-many relationship with ReportSource.
    - Contact (ID, Name, Address, etc.) - one-to-one relationship with ReportSource.

    There is a Create.aspx page for each type of SubReport. The post event method returns a new SubReport entity. Previously, in my post method, I followed this process:

    1. Set the properties for a new Report entity from the page's fields.
    2. Set the SubReport entity's specific properties from the page's fields.
    3. Set the SubReport entity's Report to the new Report entity created in step 1.
    4. Given an ID provided by the page, look up the ReportSource and set the SubReport entity's ReportSource to the found entity.
    5. SaveChanges.

    This workflow succeeded just fine for a couple of weeks. Then last week something changed and it doesn't work any more. Now, instead of the save operation, I get this exception:

        UpdateException: "Entities in 'DIR2_5Entities.ReportSourceSet' participate in the
        'FK_ReportSources_ReportSourceTypes' relationship. 0 related 'ReportSourceTypes' were found.
        1 'ReportSourceTypes' is expected."

    The debug visualizer shows the following:

    - The SubReport's ReportSource is set and loaded, and all of its properties are correct.
    - The ReportSource has a valid ReportSourceType entity attached.

    In SQL Profiler the prepared SQL statement looks OK. Can anybody point me to what obvious thing I'm missing? TIA.

    Notes: The Report and SubReport are always new entities in this case. The Report entity contains properties common to many types of reports and is used for generic queries. SubReports are specific reports with extra parameters varying by type. There is actually a different entity set for each type of SubReport, but this question applies to all of them, so I use SubReport as a simplified example.
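
    This exception usually means the graph being saved contains a ReportSource whose required ReportSourceType is not tracked by the ObjectContext doing the save (for example, the ReportSource was fetched through a different context, so EF treats it as a new entity with no type). A hedged sketch of the two usual EF 3.5-era fixes; the entity set name comes from the exception text, but the navigation property and AddTo* method names are assumptions based on the question:

        // Option 1: look the ReportSource up with the SAME context that saves the
        // graph, so it and its ReportSourceType stay attached to that context.
        var source = context.ReportSourceSet
                            .Include("ReportSourceType")
                            .First(s => s.ID == sourceId);
        subReport.ReportSource = source;

        // Option 2: don't attach an entity at all - set the relationship via an
        // EntityKey, which tells EF "relate to the existing row" without ever
        // materializing (or re-validating) the ReportSourceType.
        subReport.ReportSourceReference.EntityKey =
            new EntityKey("DIR2_5Entities.ReportSourceSet", "ID", sourceId);

        context.AddToReportSet(report);   // assumed generated AddTo* method
        context.SaveChanges();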

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >