Search Results

Search found 896 results on 36 pages for 'analyze'.


  • Trace File Source Adapter

    The Trace File Source adapter is a useful addition to your SSIS toolbox. It allows you to read 2005 and 2008 profiler traces stored as .trc files and read them into the Data Flow. From there you can perform filtering and analysis using the power of SSIS. There is no need for a SQL Server connection; this just uses the trace file.

    Example Usages: cache warming for SQL Server Analysis Services; reading the flight recorder; finding the longest running queries on a server; analyzing statements for CPU or memory by user or some other criteria you choose.

    Properties: The Trace File Source adapter has two properties, both of which combine to control the source trace file that is read at runtime. SQL Server 2005 and SQL Server 2008 trace files are supported for both the Database Engine (SQL Server) and Analysis Services. The properties are managed by the Editor form or can be set directly from the Properties Grid in Visual Studio. AccessMode (Enumeration) determines how the Filename property is interpreted; the values available are DirectInput and Variable. Filename (String) holds the path of the trace file to load (*.trc); the value is either a full path, or the name of a variable which contains the full path to the trace file, depending on the AccessMode property.

    Trace Column Definition: Hopefully the majority of you can skip this section entirely, but if you encounter some problems processing a trace file this may explain it and allow you to fix the problem. The component is built upon the trace management API provided by Microsoft. Unfortunately the API methods that expose the schema of a trace file have known issues and are unreliable; put simply, the data often differs from what was specified. To overcome these limitations the component uses some simple XML files. These files enable the trace column data types and sizing attributes to be overridden. For example, SQL Server Profiler or TMO generated structures define EventClass as an integer, but the real value is a string. TraceDataColumnsSQL.xml covers the SQL Server Database Engine trace columns, and TraceDataColumnsAS.xml covers the SQL Server Analysis Services trace columns. The files can be found in the %ProgramFiles%\Microsoft SQL Server\100\DTS\PipelineComponents folder, e.g. "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsSQL.xml" and "C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml". If at runtime the component encounters a type conversion or sizing error it is most likely due to a discrepancy between the column definition as reported by the API and the actual value encountered. Whilst most common issues have already been fixed through these files, we have implemented specific exception traps to direct you to the files so you can fix any further issues caused by usage or data scenarios that we have not tested. An example error that you can fix through these files is shown below: "Buffer exception writing value to column 'Column Name'. The string value is 999 characters in length, the column is only 111. Columns can be overridden by the TraceDataColumns XML files in C:\Program Files\Microsoft SQL Server\100\DTS\PipelineComponents\TraceDataColumnsAS.xml".

    Installation: The component is provided as an MSI file which you can download and run to install it. This simply places the files on disk in the correct locations and also installs the assemblies in the Global Assembly Cache as per Microsoft's recommendations.
    You may need to restart the SQL Server Integration Services service, as this caches information about what components are installed, as well as restart any open instances of Business Intelligence Development Studio (BIDS) / Visual Studio that you may be using to build your SSIS packages. Finally you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the Trace File Source transformation in the Choose Toolbox Items window. This process is described in detail in the related FAQ entry How do I install a task or transform component? We recommend you follow best practice and apply the current Microsoft SQL Server service pack to your SQL Server servers and workstations. Please note that the Microsoft Trace classes used in the component are not supported on 64-bit platforms. To use the Trace File Source on a 64-bit host you need to ensure you have the 32-bit (x86) tools available and that the way you execute your package is set up to use them; please see the help topic 64-bit Considerations for Integration Services for more details.

    Downloads: Trace Sources for SQL Server 2005 -- Trace Sources for SQL Server 2008

    Version History: SQL Server 2008 Version 2.0.0.382 - SQL Server 2008 public release. (9 Apr 2009) SQL Server 2005 Version 1.0.0.321 - SQL Server 2005 public release. (18 Nov 2008) -- Screenshots

    Read the article

  • MySQL Connect 9 Days Away – Optimizer Sessions

    - by Bertrand Matthelié
    Following my previous blog post focusing on InnoDB talks at MySQL Connect, let us review today the sessions focusing on the MySQL Optimizer:

    Saturday, 11.30 am, Room Golden Gate 6: MySQL Optimizer Overview—Olav Sanstå, Oracle. The goal of the MySQL optimizer is to take a SQL query as input and produce an optimal execution plan for the query. This session presents an overview of the main phases of the MySQL optimizer and the primary optimizations done to the query. These optimizations are based on a combination of logical transformations and cost-based decisions. Examples of optimization strategies the presentation covers are the main query transformations, the join optimizer, the data access selection strategies, and the range optimizer. For the cost-based optimizations, an overview of the cost model and the data used for doing the cost estimations is included.

    Saturday, 1.00 pm, Room Golden Gate 6: Overview of New Optimizer Features in MySQL 5.6—Manyi Lu, Oracle. Many optimizer features have been added into MySQL 5.6. This session provides an introduction to these great features. Multirange read, index condition pushdown, and batched key access will yield huge performance improvements on large data volumes. Structured explain, explain for update/delete/insert, and optimizer tracing will help users analyze and speed up queries. And last but not least, the session covers subquery optimizations in Release 5.6.

    Saturday, 7.00 pm, Room Golden Gate 4: BoF: Query Optimizations: What Is New and What Is Coming? This BoF presents common techniques for query optimization, covers what is new in MySQL 5.6, and provides a discussion forum in which attendees can tell the MySQL optimizer team which optimizations they would like to see in the future.

    Sunday, 1.15 pm, Room Golden Gate 8: Query Performance Comparison of MySQL 5.5 and MySQL 5.6—Øystein Grøvlen, Oracle. MySQL Release 5.6 contains several improvements in the query optimizer that create improved performance for complex queries. This presentation looks at how MySQL 5.6 improves the performance of many of the queries in the DBT-3 benchmark. Based on the observed improvements, the presentation discusses what makes the specific queries perform better in Release 5.6. It describes the relevant new optimization techniques and gives examples of the types of queries that will benefit from these techniques.

    Sunday, 4.15 pm, Room Golden Gate 4: Powerful EXPLAIN in MySQL 5.6—Evgeny Potemkin, Oracle. The EXPLAIN command of MySQL has long been a very useful tool for understanding how MySQL will execute a query. Release 5.6 of the MySQL database offers several new additions that give more-detailed information about the query plan and make it easier to understand at the same time.
This presentation gives an overview of new EXPLAIN features: structured EXPLAIN in JSON format, EXPLAIN for INSERT/UPDATE/DELETE, and optimizer tracing. Examples in the session give insights into how you can take advantage of the new features. They show how these features supplement and relate to each other and to classical EXPLAIN and how and why the MySQL server chooses a particular query plan. You can check out the full program here as well as in the September edition of the MySQL newsletter. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!
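    As a rough illustration of the structured EXPLAIN and data-modification EXPLAIN features mentioned above, the statements below show the MySQL 5.6 syntax; the table and column names are hypothetical examples, not anything from the sessions themselves.

        -- Structured EXPLAIN in JSON format (MySQL 5.6+)
        EXPLAIN FORMAT=JSON
        SELECT customer_id, SUM(amount)
        FROM orders
        WHERE order_date >= '2012-01-01'
        GROUP BY customer_id;

        -- EXPLAIN for data-modifying statements, also new in 5.6
        EXPLAIN UPDATE orders SET status = 'shipped' WHERE order_id = 42;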

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database that has data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but it also has primary keys, foreign keys and indexes. How to get data from the first database to the other? Here is the description of my crusade. And believe me – it is not a nice one.

    Bugs in import and export wizard: There are some awful bugs in the import and export wizard that make data imports and exports possible in only a very limited manner: the wizard is not able to analyze foreign keys, and the wizard always wants to create tables, whatever you say in the settings. The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

    Databases: There are two databases. Let's name them like this: PLAIN – contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data); CORRECT – empty database with the same structure as the remote database (indexes, keys and everything else, but no data). Our goal is to get data from PLAIN to CORRECT.

    1. Create import and export package. In this step we will create a faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don't let the package create tables (you can miss this step because it wants to create tables anyway). Save the package to SSIS.

    2. Modify import and export package. Now let's clean up the package and remove all the faulty crap. Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc. Run Business Intelligence Studio. Create a new SSIS project (DON'T MISS THIS STEP). Add the package from disc as an existing item to the project and open it. Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of tables is checked before a table is created (yes, you have to do it manually). Add a new Execute SQL task as the first task in the control flow: open the task properties, assign the destination connection as the connection to use, insert the following SQL as the command, and save the task:

        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
        GO
        EXEC sp_MSForEachTable 'DELETE FROM ?'
        GO

    Add a new Execute SQL task as the last task in the control flow: open the task properties, assign the destination connection as the connection to use, insert the following SQL as the command, and save the task:

        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
        GO

    Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task. Now move to the Package Explorer tab and change the connections under the Connection Managers folder. Make the source connection use database PLAIN. Make the destination connection use database CORRECT. Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder because it is saved there now; don't overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package. Now you are done with your package. Run it to test it and clean out all the errors you find.

    TRUNCATE vs DELETE: You can see that I used DELETE FROM instead of TRUNCATE. Why?
    Because TRUNCATE has some nasty limits (taken from MSDN): "You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view." As I am not sure what tables you have and how they are used, I provided here a solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

    Conclusion: My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and still we have to write stupid hacks for simple things. Simple tools that existed before are long gone and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the number of steps I had to do for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools become more intelligent one day.

    References: T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli); TRUNCATE TABLE (MSDN Library); Error Handling in SQL 2000 – a Background (Erland Sommarskog); Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
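    To recap, a minimal T-SQL sketch of what the two Execute SQL tasks described above run against the destination (CORRECT) database is shown below; the actual row copying still happens in the SSIS Data Flow tasks between these two steps.

        -- Before the data flow: disable all foreign key and check constraints,
        -- then clear any existing rows in every table.
        EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';
        EXEC sp_MSForEachTable 'DELETE FROM ?';

        -- After the data flow completes: re-enable the constraints.
        EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL';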

    Read the article

  • Create Advanced Panoramas with Microsoft Image Composite Editor

    - by Matthew Guay
    Do you enjoy making panoramas with your pictures, but want more features than tools like Live Photo Gallery offer? Here's how you can create amazing panoramas for free with the Microsoft Image Composite Editor. Yesterday we took a look at creating panoramic photos in Windows Live Photo Gallery. Today we take a look at a free tool from Microsoft that will give you more advanced features to create your own masterpiece.

    Getting Started: Download Microsoft Image Composite Editor from Microsoft Research (link below), and install as normal. Note that there are separate versions for 32 & 64-bit editions of Windows, so make sure to download the correct one for your computer. Once it's installed, you can proceed to create awesome panoramas and extremely large image combinations with it. Microsoft Image Composite Editor integrates with Live Photo Gallery, so you can create more advanced panoramic pictures directly. Select the pictures you want to combine, click Extras in the menu bar, and select Create Image Composite. You can also create a photo stitch directly from Explorer. Select the pictures you want to combine, right-click, and select Stitch Images… Or, simply launch the Image Composite Editor itself and drag your pictures into its editor. Either way you start an image composition, the program will automatically analyze and combine your images. This application is optimized for multiple cores, and we found it much faster than other panorama tools such as Live Photo Gallery. Within seconds, you'll see your panorama in the top preview pane.

    From the bottom of the window, you can choose a different camera motion which will change how the program stitches the pictures together. You can also quickly crop the picture to the size you want, or use Automatic Crop to have the program select the maximum area with a continuous picture. Here's how our panorama looked when we switched the Camera Motion to Planar Motion 2. But the real tweaking comes in when you adjust the panorama's projection and orientation. Click the box button at the top to change these settings. The panorama is now overlaid with a grid, and you can drag the corners and edges of the panorama to change its shape. Or, from the Projection button at the top, you can choose different projection modes. Here we've chosen Cylinder (Vertical), which entirely removed the warp on the walls in the image. You can pan around the image, and get the part you find most important in the center. Click the Apply button at the top when you're finished making changes, or click Revert if you want to switch to the default view settings.

    Once you've finished your masterpiece, you can export it easily to common photo formats from the Export panel on the bottom. You can choose to scale the image or set it to a maximum width and height as well. Click Export to disk to save the photo to your computer, or select Publish to Photosynth to post your panorama online. Alternately, from the File menu you can choose to save the panorama as an .spj file. This preserves all of your settings in the Image Composite Editor so you can edit it more in the future if you wish.

    Conclusion: Whether you're trying to capture the inside of a building or a tall tree, the extra tools in Microsoft Image Composite Editor let you make nicer panoramas than you ever thought possible. We found the final results surprisingly accurate to the real buildings and objects, especially after tweaking the projection modes.
    This tool can be both fun and useful, so give it a try and let us know what you've found it useful for. Works with 32 & 64-bit versions of XP, Vista, and Windows 7. Link: Download Microsoft Image Composite Editor

    Read the article

  • Merck Serono Gains Deep Understanding of Product Portfolio Value-Drivers, Risks, and Sales Expectations Through Forecasting Solution

    - by Melissa Centurio Lopes
    Merck Serono S.A. is the biopharmaceutical division of Merck KGaA. It offers leading brands in 150 countries to help patients with cancer, multiple sclerosis, infertility, endocrine and metabolic disorders, as well as cardiovascular diseases.

    Challenges: Establish a better decision-making framework for its complex development portfolio of pharmaceutical products, where single-point estimates or expected averages of portfolio values, portfolio risks, and sales forecasts are insufficient and can be misleading. Enable the company to be aware at all times of the range of possible outcomes of technical and market risks and uncertainties, such as the technical uncertainty of whether a product will produce the desired clinical outcomes, or the market-related uncertainty of whether a product will be outperformed by its competitors.

    Solutions to Overcome the Challenges: Used Oracle Crystal Ball to devise a Monte-Carlo-based approach to better analyze and define the values and risks of the company's development portfolio, laying the groundwork for optimized decision-making. Enabled a better understanding of the range of potential values and risks to improve portfolio planning. Enabled detailed analysis of the likelihood of favorable or unfavorable outcomes, such as the likelihood of whether Merck Serono can meet its sales targets planned for the next ten years with its existing product portfolio. Gained the ability to take into account correlative risks, synergies and project interactions, enabling Merck Serono to better forecast what the company may achieve—for example, that there is a 70% probability of a particular sales target being met. Established Monte-Carlo-based analysis using Oracle Crystal Ball as a useful element in decision-making at the board level, as the approach provides a better analysis of values and risks associated with the company's product portfolio.

    "Oracle Crystal Ball enables us to make Monte Carlo simulations of the potential value and sales of our development portfolio. It is a very powerful tool for gaining a thorough understanding and improved awareness of value drivers, uncertainties, and risks, along with associated probabilities." – Riccardo Lampariello, Associate Director, Merck Serono S.A.

    Why Oracle: "We chose Oracle Crystal Ball to enable us to perform Monte Carlo analysis, which gives us a deeper understanding and improved awareness of the value drivers, uncertainties and risks of our portfolio of development projects," said Kimber Hardy, head of valuation and analysis, Merck Serono S.A.
    Click here to read the full version of the customer success story.

    Read the article

  • How to Use RDA to Generate WLS Thread Dumps At Specified Intervals?

    - by Daniel Mortimer
    Introduction: There are many ways to generate a thread dump of a WebLogic Managed Server. For example, take a look at: Taking Thread Dumps [an excellent blog post on the Middleware Magic site] or Different ways to take thread dumps in WebLogic Server (Document 1098691.1). There is another method - use Remote Diagnostic Agent! The solution described below is not documented, but it is relatively straightforward to execute. One advantage of using RDA to collect the thread dumps is that RDA will also collect configuration, log files, network, system, and performance information at the same time.

    Instructions

    1. Not familiar with Remote Diagnostic Agent? Take a look at my previous blog "Resolve SRs Faster Using RDA - Find the Right Profile".

    2. Choose a profile which includes the WebLogic Server data collection modules (for example the profile "WebLogicServer"). At RDA setup time you should see the prompt below:

        -------------------------------------------------------------------------------
        S301WLS: Collects Oracle WebLogic Server Information
        -------------------------------------------------------------------------------
        Enter the location of the directory where the domains to analyze are located
        (For example in UNIX, <BEA Home>/user_projects/domains or <Middleware Home>/user_projects/domains)
        Hit 'Return' to accept the default (/oracle/11AS/Middleware/user_projects/domains)
        >
        For a successful WLS connection, ensure that the domain Admin Server is up and running.
        Data Collection Type:
          1  Collect for a single server (offline mode)
          2  Collect for a single server (using WLS connection)
          3  Collect for multiple servers (using WLS connection)
        Enter the item number
        Hit 'Return' to accept the default (1)
        > 2

    Choose option 2 or 3. Note: Collect for a single server or multiple servers using WLS connection means that RDA will attempt to connect to execute online WLST commands against the targeted server(s). The thread dumps are collected using the WLST function - "threadDumps()". If WLST cannot connect to the managed server, RDA will proceed to collect other data and ignore the request to collect thread dumps. If in the final output you see no Thread Dump menu item, then it's likely that the managed server is in a state which prevents new connections to it. If faced with this scenario, you would have to employ alternative methods for collecting thread dumps.

    3. The RDA setup will create a setup.cfg file in the RDA_HOME directory. Open this file in an editor. You will find the following parameters, which govern the number of thread dumps and the thread dump interval:

        #N.Number of thread dumps to capture
        WREQ_THREAD_DUMP=10
        #N.Thread dump interval
        WREQ_THREAD_DUMP_INTERVAL=5000

    The example lines above show the default settings. In other words, RDA will collect 10 thread dumps at 5000 millisecond (5 second) intervals. You may want to change this to something like:

        #N.Number of thread dumps to capture
        WREQ_THREAD_DUMP=10
        #N.Thread dump interval
        WREQ_THREAD_DUMP_INTERVAL=30000

    However, bear in mind that such a change will increase the total amount of time it takes for RDA to complete its run.

    4. Once you are happy with the setup.cfg, run RDA. RDA will collect, render, generate and package all files in the output directory.

    5. For ease of viewing, open up the RDA Start html file - "xxxx__start.htm". The thread dumps can be found under the WLST Collections for the target managed server(s).
    See the screenshots below. Screenshot 1: RDA Start Page - Main Index. Screenshot 2: Managed Server Sub Index. Screenshot 3: WLST Collections. Screenshot 4: Thread Dump Page - List of dump file links. Screenshot 5: Thread Dump Dat File Link.

    Additional Comments: A) You can view the thread dump files within the RDA Start Page framework, but most likely you will want to download the dat files for in-depth analysis via thread dump analysis tools such as Thread Dump Analyzer, or Samurai - a GUI-based tail and thread dump analysis tool. If you are new to thread dump analysis, take a look at this recorded Support Advisor Webcast: Oracle WebLogic Server: Diagnosing Performance Issues through Java Thread Dumps [slide deck from webcast in PDF format]. B) I have logged a couple of enhancement requests for the RDA Development Team to consider: add a timestamp to the dump file links, the dat filename and the top of the body of the dat file; and package the individual thread dumps in a zip so all dump files can be conveniently downloaded in one go.

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature which cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provide the service that uses settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service.

    Key Benefits of SharePoint Server Access Services

    Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location.

    Helps build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. It helps you find new pre-built templates which you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. It builds your databases with new modular components. New Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified. It creates Navigation Forms and makes your frequently used forms and reports more accessible without writing any code or logic.

    Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services helps you easily spot trends and add emphasis to your data. You can quickly create coordinating database forms and reports and bring the Web into your database.

    Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.

    Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data.

    Key features of Access Services 2010

    - Access database content through a web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new web databases.
    Users without an Access client can open web forms and reports via a browser, and changes are automatically synchronized.

    - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.

    - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.

    - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.

    - Simplified formatting: By using Office themes you can create coordinating, professional forms and reports across your database. Simply select a familiar and great looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • Process Centric Banking: Loan Origination Solution

    - by Manish Palaparthy
    There is an old proverb that goes, "The difference between theory and practice is greater in practice than in theory". So, we keep doing numerous "Proof of Concepts" with our own products on various business cases to analyze them deeply, understand them and explain them to our customers. We then present our learnings as they happened. The awareness of each PoC should help readers increase the trustworthiness of the results coming out of these PoCs. I present one such PoC where we invested a lot of time and effort.

    Process Centric Banking: Loan Origination Solution

    Loan Origination is a process by which a borrower applies for a new loan and the lender processes that application. Loan origination includes the series of steps taken by the bank from the point the customer shows interest in a loan product all the way to disbursal of funds. The Loan Origination process is relevant for many kinds of lenders in financial services: banks, credit unions, NBFCs (Non Banking Financial Companies) and so on. For simplicity's sake, I will use "Bank" as the lending institution in the rest of my article.

    Loan Origination is one of the core processes for banks, as it is the process by which the bank creates the assets from which the institution earns most of its profits. A well tuned loan origination process can affect the bank in many positive ways. Banks have always shown great interest in automating the loan origination process for the above reason. However, due to the constant changes in the customer environment, market dynamics, prevailing economic conditions, cost pressures and regulatory environment, they run into a lot of challenges. Let me categorize some of these challenges for you.

    Customer Environment. Multiple channels: the customer can use any of the available channels (Internet Banking, Email, Fax, Branch, Phone Banking, ATM, Broker, Mobile, Snail Mail) to perform all or some of the activities related to her loan application. Visibility into the origination process: customers expect immediate updates on the status of loan processing and alert messages. Reduced turn around time: customers expect loans to be processed with the least turn around time. Reduced loan processing fees: partly due to market dynamics, the customer expects the loan processing fee to be negligible.

    Market Dynamics. Competitive environment: the competition keeps creating many variants of loan products to attract customers; the bank needs to create similar product variants with better offers to attract customers or keep existing ones. Ability to migrate loans from one vendor to another: it has become really easy for retail customers to move from one bank to the other given the low fee of loan processing and highly attractive offers. How does the bank protect its customer base while actively engaging with potential customers banking with competitor banks? Flexibility to react to market developments: market developments greatly influence loan processing, underwriting, asset valuation and risk mitigation rules. Can the bank modify rules and policies? The idea is not just to react to market developments but to pro-actively manage new developments.

    Economic Conditions. Constant change in various rates and their implications on the rates and rules applied when on-boarding a loan: how quickly can the bank apply changes to rates offered to customers when the central bank changes various rates? Requirements of audit by the central banker: tough economic conditions have demanded much more stringent audit rules and tests.
    The bank needs to produce ready reports (historic & operational) for audit compliance. Risk mitigation: while risk mitigation has always been a key concern for the bank, this is the area where the bank's underwriters & risk analysts spend the maximum time when processing a loan application. In order to reduce TAT the bank cannot compromise on its risk mitigation strategies.

    Cost Pressures. Reduce cost of processing per application: to deliver a reduced loan processing fee to the customer, the bank needs to keep its cost of processing per loan application low. Meet customer TAT expectations while reducing the queues and the systems being used to process the loan application: the loan application could potentially be spending a lot of time waiting in the queue for further processing. Different volumes & patterns of applications demand different queuing algorithms. The bank needs to have real-time visibility into these queues and have the flexibility to change queuing algorithms at runtime. Increase the use of electronic communication and reduce branch channel usage: less automation not only leads to increased turn around time, it also adds to the cost of reaching out to customers.

    The objective of our PoC was to implement a Loan Origination Solution whose ownership lies with the bank and which effectively meets the challenges listed above. We built a simple storyboard for the solution. We then went about implementing our storyboard using Oracle BPM Suite and WebCenter Content: Imaging. The web UI has been built on ADF technologies, while the integration with core services has been implemented using the underlying SOA infrastructure. The BPM process model is quite exhaustive and can meet all the challenges listed above to a reasonable degree.

    A bank intending to implement an end-to-end Loan Origination Solution has multiple options at its disposal. It can: develop a custom Loan Origination Application from scratch – gives maximum opportunity to build what you want, but is inflexible to upgrade and maintain, with higher TCO in the long term; buy a packaged application & customize it – customizing a generic loan application can be tedious and prove as difficult as the above; build it using many disparate & un-integrated tools – initially seems easier than developing from scratch, but without integrated tool sets this is not a viable approach either; or adopt a solution based on a framework – independent services and business process modeling provide a decoupled architecture that is flexible. We built this framework end-to-end, with the core process of loan origination & several sub-processes such as analyse and define customer needs, customer credit verification, identity check processes, legal review process, new customer registration & risk assessment.

    Read the article

  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
    I have had some time this week to try out some tools that I have been meaning to try out. This week I am trying out the SP 2010 Diagnostic Studio. I installed it successfully and tried it on my development environment. I was able to build a report and a snapshot of the environment. I decided to turn my attention to my employer's intranet environment. This would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic Studio on the production environment; lucky for me, the Diagnostic Studio can be run remotely, well...kind of.

    Issue: My development environment is a stand-alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC...well you get the point, it's jam-packed! But more importantly it's a stand-alone, self-contained VM environment. Well, Microsoft has instructions as to how to connect remotely with Diagnostic Studio here. The deceiving part of this is that the SP2010DS prompts you for credentials. So I thought I was getting the right account to run the reports. I tried all the PowerShell commands in the link above but I still ended up getting the following errors:

    06/28/2011 12:50:18    Connecting to remote server failed with the following error message: The WinRM client cannot process the request...If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic.

    06/28/2011 12:54:47    Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied.

    You might also get an error message like this: The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer.

    Explanation: After looking at the event logs on the target environment, I noticed that there were several security exceptions. After looking at the specifics around who was denied access, I was able to see the account that was being denied access: it was the client machine administrator account. Well, of course that was never going to work!!! After some quick Googling, the last error message above will lead you to edit the Local Group Policy on the client server. And although there are instructions from Microsoft around doing this, it really will not work in this scenario. Notice the description and how it only applies to the authentication mentioned?

    Resolution: I can tell you what I did, but I wish there was a better way; I simply don't know if it's doable any other way. Because my development environment had its own DC, I didn't really want to mess with Kerberos authentication. It would also not be smart to connect that server to the domain, considering it has its own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 dev environment I have, and connected that machine to the domain.
    I ran all the necessary remote credentials commands mentioned here. Those commands add the group policy for you! Once I did this I was able to authenticate properly and I was able to get the reports.

    Conclusion: You can run SharePoint 2010 Diagnostic Studio remotely, but it will require some specific scenarios. A couple of things I should mention: as far as I understand, SP2010 DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator, and also a server admin on the SharePoint server. I am not 100% sure if you need all those permissions, but that's just what I have on my internal intranet. Ideally I would like to have a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears that I will not be able to do that unless I enable Kerberos on my Windows 7 machine now. If you have it installed in the way I would like to have it, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.

    Read the article

  • Database Partitioning and Multiple Data Source Considerations

    - by Jeffrey McDaniel
    With the release of P6 Reporting Database 3.0, partitioning was added as a feature to help with performance and data management. Careful investigation of requirements should be conducted prior to installation to help improve overall performance throughout the lifecycle of the data warehouse, preventing future maintenance that would result in data loss. Before installation, try to determine how many data sources and partitions will be required along with the ranges. In P6 Reporting Database 3.0 any adjustments outside of defaults must be made in the scripts, and changes will require new ETL runs for each data source.

    Considerations:

    1. Standard Edition or Enterprise Edition of Oracle Database. If you aren't using Oracle Enterprise Edition Database, the partitioning feature is not available. Multiple data sources are only supported on Enterprise Edition of Oracle Database.

    2. Number of data source IDs for partitioning during configuration. This setting will specify how many partitions will be allocated for tables containing data source information. This setting requires some evaluation prior to installation as there are repercussions if you don't estimate correctly. For example, suppose you configured the software for only 2 data sources and the partition setting was set to 2, however along came a 3rd data source. The necessary steps to accommodate this change are as follows: a) By default, 3 partitions are configured in the Reporting Database scripts. Edit the create_star_tables_part.sql script located in <installation directory>\star\scripts and search for partition. You'll see P1, P2, P3. Add additional partitions and sub-partitions for P4 and so on. These will appear in several areas. (See the P6 Reporting Database 3.0 Installation and Configuration guide for more information on this and how to adjust partition ranges.) b) Run starETL -r. This will recreate each table with the new partition key. The effect of this step is that all table data will be lost except for history related tables. c) Run starETL for each of the 3 data sources (with the data source #, e.g. starETL.bat "-s2", as defined in the P6 Reporting Database 3.0 Installation and Configuration guide). The best strategy for this setting is to overestimate based on possible growth. If during implementation it is deemed that there are at least 2 data sources with possibility for growth, it is a better idea to set this setting to 4 or 5, allowing room for the future and preventing a 'start over' scenario.

    3. The number of partitions and the number of months per partition are not specific to multi-data source. These settings work in accordance with a sub-partition of larger tables with regard to time related data. These settings are dataset specific for optimization. The number of months per partition is self explanatory; optimally, the smaller the partition, the better the query performance, so if the dataset has an extremely large number of spread/history records, a lower number of months is optimal. Working in accordance with this setting is the number of partitions; this will determine how many "buckets" will be created per the number of months setting.
    For example, if you kept the default of 3 for the number of partitions, and selected 2 months for each partition, you would end up with: 1st partition, 2 months; 2nd partition, 2 months; 3rd partition, all the remaining records. Therefore, with regard to this setting, it is important to analyze your source database spread ranges and history settings when determining the proper number of months per partition and number of partitions to optimize performance. Also be aware the DBA will need to monitor when these partition ranges will fill up and when additional partitions will need to be added. If you get to the final range partition and there are no additional range partitions, all data will be included in the last partition.
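    As a rough illustration of the range layout just described (a hypothetical sketch, not the actual P6 Reporting Database DDL; the real partition definitions live in the create_star_tables_part.sql script mentioned above), a 3-partition, 2-months-per-partition scheme starting in December 2009 could look like this in Oracle SQL:

        -- Hypothetical table and column names; only the partitioning clause matters here.
        CREATE TABLE activity_spread_history (
          datasource_id  NUMBER,
          activity_id    NUMBER,
          spread_date    DATE,
          planned_units  NUMBER
        )
        PARTITION BY RANGE (spread_date) (
          -- 1st partition: 2 months (Dec 2009 - Jan 2010)
          PARTITION p1 VALUES LESS THAN (TO_DATE('2010-02-01', 'YYYY-MM-DD')),
          -- 2nd partition: 2 months (Feb 2010 - Mar 2010)
          PARTITION p2 VALUES LESS THAN (TO_DATE('2010-04-01', 'YYYY-MM-DD')),
          -- 3rd partition: all remaining records
          PARTITION p3 VALUES LESS THAN (MAXVALUE)
        );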

    Read the article

  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc.) the CPU usage still seems very high. Here are some numbers I ran from a few 5 minute sampling periods: ~60FPS, 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [my dev laptop]; ~40FPS, 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [a crappy desktop]. I can understand the lower frame rate and higher CPU usage on the crappy desktop, but it still seems pretty high, and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details, but I'm having issues there as well, so I'm wondering if I'm doing something wrong (never profiled WPF before).

    WPF Performance Suite: Process Launch Error: Unable to attach to process: CCHearts.exe. Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach.

    Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application.

    Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete.

    So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. Have been relatively successful throwing darts at this point but I'm beyond that now :) PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above.

    Edit: My RenderOptions darts...

        // NOTE: Grasping at straws here ;-)
        RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality);
        RenderOptions.SetCachingHint(newHeart, CachingHint.Cache);
        RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased);

    I threw those a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug. Throwing darts while blindfolded is always fun.

    Read the article

  • How to debug nondeterministic access violation crash?

    - by Special Touch
    Our C#/COM/C++ application is crashing and I need help debugging it. Running with gflags enabled and WinDbg attached, we determined the crashes are caused by an access violation, but we haven't been able to narrow it down any more than that. We are not seeing the issue on all machines; there are a couple of machines that seem to reproduce the issue frequently but not deterministically. We have observed the application crash from simply switching away from the application (say, Alt-Tab) and then back. Output from WinDbg is below. We have been trying to systematically comment out areas of code that could be causing the problem, but we haven't had much success yet. Any suggestions on what debugging steps or tools we should try? !analyze -v EXCEPTION_RECORD: ffffffff -- (.exr 0xffffffffffffffff) ExceptionAddress: 1a584ff2 (+0x1a584ff1) ExceptionCode: c0000005 (Access violation) ExceptionFlags: 00000000 NumberParameters: 2 Parameter[0]: 00000000 Parameter[1]: 1a584ff2 Attempt to read from address 1a584ff2 PROCESS_NAME: ProcessFiles.exe ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s. EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s. EXCEPTION_PARAMETER1: 00000000 EXCEPTION_PARAMETER2: 1a584ff2 READ_ADDRESS: 1a584ff2 FOLLOWUP_IP: Ed20+1a584ff1 1a584ff2 ?? ??? NTGLOBALFLAG: 2000000 APPLICATION_VERIFIER_FLAGS: 0 IP_MODULE_UNLOADED: Ed20+1a584ff1 1a584ff2 ?? ??? MANAGED_STACK: (TransitionMU) 0EC6F6F4 7B1D8CCE System_Windows_Forms_ni!System.Windows.Forms.Application+ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32, Int32, Int32)+0x24e 0EC6F790 7B1D8937 System_Windows_Forms_ni!System.Windows.Forms.Application+ThreadContext.RunMessageLoopInner(Int32, System.Windows.Forms.ApplicationContext)+0x177 0EC6F7E4 7B1D8781 System_Windows_Forms_ni!System.Windows.Forms.Application+ThreadContext.RunMessageLoop(Int32, System.Windows.Forms.ApplicationContext)+0x61 0EC6F814 7B195911 System_Windows_Forms_ni!System.Windows.Forms.Application.Run(System.Windows.Forms.Form)+0x31 0EC6F828 0969D97A Extract_Utilities_Forms!Extract.Utilities.Forms.VerificationForm`1[[System.__Canon, mscorlib]].A(System.Object)+0x23a 0EC6F8C0 79A00EEE mscorlib_ni!System.Threading.ThreadHelper.ThreadStart_Context(System.Object)+0x72a25e 0EC6F8CC 792E019F mscorlib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)+0x6f 0EC6F8E4 797DB48A mscorlib_ni!System.Threading.ThreadHelper.ThreadStart(System.Object)+0x4a (TransitionUM) LAST_CONTROL_TRANSFER: from 7e418734 to 1a584ff2 FAULTING_THREAD: ffffffff ADDITIONAL_DEBUG_TEXT: Followup set based on attribute [ip_not_executable] from Frame:[0] on thread:[e30] BUGCHECK_STR: APPLICATION_FAULT_BAD_INSTRUCTION_PTR_INVALID_POINTER_READ_WRONG_SYMBOLS_WINDOW_HOOK PRIMARY_PROBLEM_CLASS: BAD_INSTRUCTION_PTR DEFAULT_BUCKET_ID: BAD_INSTRUCTION_PTR STACK_TEXT: 7b1d8cce System_Windows_Forms_ni!System.Windows.Forms.Application+ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop+0xc 7b1d8937 System_Windows_Forms_ni!System.Windows.Forms.Application+ThreadContext.RunMessageLoopInner+0x0 7b1d8781 System_Windows_Forms_ni!System.Windows.Forms.Application+ThreadContext.RunMessageLoop+0x0 7b195911 System_Windows_Forms_ni!System.Windows.Forms.Application.Run+0x31 0969d97a 
Extract_Utilities_Forms!Extract.Utilities.Forms.VerificationForm`1[[System.__Canon, mscorlib]].A+0x23a 79a00eee mscorlib_ni!System.Threading.ThreadHelper.ThreadStart_Context+0x72a25e 792e019f mscorlib_ni!System.Threading.ExecutionContext.Run+0x6f 797db48a mscorlib_ni!System.Threading.ThreadHelper.ThreadStart+0x4a STACK_COMMAND: .ecxr ; ~~[e30] ; .frame 0 ; ** Pseudo Context ** ; kb FAILED_INSTRUCTION_ADDRESS: Ed20+1a584ff1 1a584ff2 ?? ??? SYMBOL_NAME: Ed20 FOLLOWUP_NAME: MachineOwner MODULE_NAME: Ed20 IMAGE_NAME: Ed20 DEBUG_FLR_IMAGE_TIMESTAMP: 0 FAILURE_BUCKET_ID: BAD_INSTRUCTION_PTR_c0000005_Ed20!Unloaded BUCKET_ID: APPLICATION_FAULT_BAD_INSTRUCTION_PTR_INVALID_POINTER_READ_WRONG_SYMBOLS_WINDOW_HOOK_BAD_IP_Ed20 Followup: MachineOwner

    Read the article

  • Which OAuth library do you find works best for Objective-C/iPhone?

    - by Brennan
    I have been looking to switch to OAuth for my Twitter integration code, and now that there is a deadline in less than 7 weeks (see countdown link) it is even more important to make the jump to OAuth. I have been doing Basic Authentication, which is extremely easy. Unfortunately OAuth does not appear to be something that I could whip together in a couple of hours. http://www.countdowntooauth.com/

    So I am looking to use a library. I have put together the following list: MPOAuth, MGTwitterEngine, OAuthConsumer.

    I see that MPOAuth has some great features with a good deal of testing code in place, but there is one big problem: it does not work. The sample iPhone project that is supposed to authenticate with Twitter causes an error which others have identified and logged as a bug. http://code.google.com/p/mpoauthconnection/issues/detail?id=29 The last code change was March 11 and this bug was filed on March 30. It has been over a month and this critical bug has not been fixed yet. So I have moved on to MGTwitterEngine. I pulled down the source code and loaded it up in Xcode. Immediately I find that there are a few dependencies and the README file does not have a clear list of steps to fetch those dependencies and integrate them with the project so that it builds successfully. I see this as a sign that the project is not mature enough for prime time. I see also that the project references 2 libraries for JSON when one should be enough. One is TouchJSON, which has worked well for me, so I am again discouraged from relying on this project for my applications. I did find that MGTwitterEngine makes use of OAuthConsumer, which is one of many OAuth projects hosted by an OAuth project on Google Code. http://code.google.com/p/oauth/ http://code.google.com/p/oauthconsumer/wiki/UsingOAuthConsumer It looks like OAuthConsumer is a good choice at first glance. It is hosted with other OAuth libraries and has some nice documentation with it. I pulled down the code and it builds without errors, but it does have many warnings. And when I run the new Build and Analyze feature in Xcode 3.2 I see 50 analyzer results. Many are marked as potential memory leaks which would likely lead to instability in any app which uses this library. It seems there is no clear winner and I have to go with something before the big Twitter OAuth deadline. Any suggestions?

    Read the article

  • In MySQL, what is the most effective query design for joining large tables with many-to-many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this: +---------+-----------+---------------+---------------------+---------------------+-----------------+ | vehicle | engine | engine_state | state_start_time | state_end_time | engine_variable | +---------+-----------+---------------+---------------------+---------------------+-----------------+ | 080025 | E01 | active | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 | 720 | | 080028 | E02 | inactive | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 | 304 | +---------+-----------+---------------+---------------------+---------------------+-----------------+ For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table: +---------------------+ | production_minute | +---------------------+ | 2009-12-01 00:00 | | 2009-12-01 00:01 | | 2009-12-01 00:02 | | 2009-12-01 00:03 | +---------------------+ The join between the tables would be engineState AS es LEFT JOIN productionMinute AS pm ON es.state_start_time <= pm.production_minute AND pm.production_minute <= es.event_end_time. This join, however, brings up multiple environmental issues: The engineState table has 5 million rows and the productionMinute table has 130,000 rows When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, there are multiple productionMinute table rows that join to a single engineState table row When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row In testing our logic and using only a small table extract (one day rather than 3 months, for the productionMinute table) the query takes over an hour to generate. In researching this item in order to improve performance so that it would be feasible to query three months of data, our thoughts were to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
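
    A minimal sketch of one direction, using the poster's column names (the index name is made up, and AVG is only a stand-in for whatever per-minute statistic is wanted): give the range predicate something to seek on, and keep the join inside the analysis window so MySQL does not have to touch all 5 million engineState rows.

        -- Hypothetical covering index for the range join
        CREATE INDEX idx_engineState_times
            ON engineState (state_start_time, state_end_time);

        -- One row per minute and engine, limited to the three-month window
        SELECT pm.production_minute,
               es.engine,
               AVG(es.engine_variable) AS engine_variable_avg
        FROM productionMinute AS pm
        INNER JOIN engineState AS es
                ON es.state_start_time <= pm.production_minute
               AND es.state_end_time   >= pm.production_minute
        WHERE pm.production_minute BETWEEN '2009-12-01 00:00' AND '2010-02-28 23:59'
        GROUP BY pm.production_minute, es.engine;

    Range joins like this are still hard for MySQL to optimize, so comparing EXPLAIN output for the LEFT JOIN and INNER JOIN variants on the one-day extract is a quick way to see whether the index is actually used before committing to the temporary-table redesign.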

    Read the article

  • Visual Studio Express 2012 debug mode doesn't work

    - by user2350086
    I have a project in Visual Studio that I have been working on for a while, and I have used the debugger extensively. Recently I changed some settings and I have lost the ability to stop the program and step through code. I can't figure out what I had changed that might have affected this. If I put a breakpoint in my code and try to have the program stop there, it doesn't. The break point shows up white with a red outline. If I hover the mouse over it, it says "The breakpoint will not currently be hit. No executable code of the debugger's target code type is associated with this line. Possible causes include: conditional compilation, compiler optimizations, or the target architecture of this line is not supported by the current debugger code type." I know for a fact that the program executes the code where the breakpoint is because I put the breakpoint in the beginning of the InitializeComponent method. The program displays the window fine, but does not stop at the breakpoint. Yes, I am running in debug mode. It seems as though there is a disconnect between the compiled code and the source code displayed. Does anyone know what that would be, or know which compiler settings I should check to re-enable debugging? Here are the compiler options: /GS /analyze- /W3 /Zc:wchar_t /I"D:\dev\libcurl-7.19.3-win32-ssl-msvc\include" /Zi /Od /sdl /Fd"Debug\vc110.pdb" /fp:precise /D "WIN32" /D "_DEBUG" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Oy- /clr /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\mscorlib.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Data.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Drawing.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.DataVisualization.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.dll" /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Xml.dll" /MDd /Fa"Debug\" /EHa /nologo /Fo"Debug\" /Fp"Debug\Prog.pch" The linker options are: /OUT:"D:\dev\Prog\Debug\Prog.exe" /MANIFEST /NXCOMPAT /PDB:"D:\dev\Prog\Debug\Prog.pdb" /DYNAMICBASE "curllib.lib" "winmm.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib" "odbccp32.lib" /FIXED:NO /DEBUG /MACHINE:X86 /ENTRY:"Main" /INCREMENTAL /PGD:"D:\dev\Prog\Debug\Prog.pgd" /SUBSYSTEM:WINDOWS /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /ManifestFile:"Debug\Prog.exe.intermediate.manifest" /ERRORREPORT:PROMPT /NOLOGO /LIBPATH:"D:\dev\libcurl-7.19.3-win32-ssl-msvc\lib\Debug" /ASSEMBLYDEBUG /TLBID:1

    Read the article

  • Why does PostgreSQL query performance drop over time, but recover when the index is rebuilt?

    - by Jim Rush
    According to this page in the manual, indexes don't need to be maintained. However, we are running with a PostgresQL table that has a continuous rate of updates, deletes and inserts that over time (a few days) sees a significant query degradation. If we delete and recreate the index, query performance is restored. We are using out of the box settings. The table in our test is currently starting out empty and grows to half a million rows. It has a fairly large row (lots of text fields). We are search is based of an index, not the primary key (I've confirmed the index is being used, at least under normal conditions) The table is being used as a persistent store for a single process. Using PostgresQL on Windows with a Java client I'm willing to give up insert and update performance to keep up the query performance. We are considering rearchitecting the application so that data is spread across various dynamic tables in a manner that allows us to drop and rebuild indexes periodically without impacting the application. However, as always, there is a time crunch to get this to work and I suspect we are missing something basic in our configuration or usage. We have considered forcing vacuuming and rebuild to run at certain times, but I suspect the locking period for such an action would cause our query to block. This may be an option, but there are some real-time (windows of 3-5 seconds) implications that require other changes in our code. Additional information: Table and index CREATE TABLE icl_contacts ( id bigint NOT NULL, campaignfqname character varying(255) NOT NULL, currentstate character(16) NOT NULL, xmlscheduledtime character(23) NOT NULL, ... 25 or so other fields. Most of them fixed or varying character fiel ... CONSTRAINT icl_contacts_pkey PRIMARY KEY (id) ) WITH (OIDS=FALSE); ALTER TABLE icl_contacts OWNER TO postgres; CREATE INDEX icl_contacts_idx ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname); Analyze: Limit (cost=0.00..3792.10 rows=750 width=32) (actual time=48.922..59.601 rows=750 loops=1) - Index Scan using icl_contacts_idx on icl_contacts (cost=0.00..934580.47 rows=184841 width=32) (actual time=48.909..55.961 rows=750 loops=1) Index Cond: ((xmlscheduledtime < '2010-05-20T13:00:00.000'::bpchar) AND (currentstate = 'SCHEDULED'::bpchar) AND ((campaignfqname)::text = '.main.ee45692a-6113-43cb-9257-7b6bf65f0c3e'::text)) And, yes, I am aware there there are a variety of things we could do to normalize and improve the design of this table. Some of these options may be available to us. My focus in this question is about understanding how PostgresQL is managing the index and query over time (understand why, not just fix). If it were to be done over or significantly refactored, there would be a lot of changes.
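
    A hedged checklist rather than a definitive fix, given the out-of-the-box settings: with a constant stream of updates and deletes, the usual cause is table and index bloat outrunning autovacuum, so measuring the sizes and rebuilding the index without a long lock is a cheaper experiment than dropping it. The threshold values below are illustrative, and the per-table autovacuum settings require a reasonably recent PostgreSQL release.

        -- How large have the table and index grown?
        SELECT pg_size_pretty(pg_relation_size('icl_contacts'))     AS table_size,
               pg_size_pretty(pg_relation_size('icl_contacts_idx')) AS index_size;

        -- Make autovacuum much more aggressive for this one table
        ALTER TABLE icl_contacts SET (
            autovacuum_vacuum_scale_factor  = 0.05,
            autovacuum_analyze_scale_factor = 0.05
        );

        -- Rebuild the index without blocking readers for the duration
        CREATE INDEX CONCURRENTLY icl_contacts_idx_new
            ON icl_contacts USING btree (xmlscheduledtime, currentstate, campaignfqname);
        DROP INDEX icl_contacts_idx;
        ALTER INDEX icl_contacts_idx_new RENAME TO icl_contacts_idx;

    If the concurrent rebuild restores performance the same way the manual drop-and-recreate does, that is strong evidence the problem is bloat rather than planner statistics, and tuning autovacuum (or scheduling periodic concurrent rebuilds) avoids the application-level rearchitecture.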

    Read the article

  • NSZombieEnabled breaking working code?

    - by Gordon Fontenot
    I have the following method in UIImageManipulation.m: +(UIImage *)scaleImage:(UIImage *)source toSize:(CGSize)size { UIImage *scaledImage = nil; if (source != nil) { UIGraphicsBeginImageContext(size); [source drawInRect:CGRectMake(0, 0, size.width, size.height)]; scaledImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); } return scaledImage; } I am calling it in a different view with: imageFromFile = [UIImageManipulator scaleImage:imageFromFile toSize:imageView.frame.size]; (imageView is a UIImageView allocated earlier) This is working great in my code. I resizes the image perfectly, and throws zero errors. I also don't have anything pop up under build - analyze. But the second I turn on NSZombieEnabled to debug a different EXC_BAD_ACCESS issue, the code breaks. Every single time. I can turn NSZombieEnabled off, code runs great. I turn it on, and boom. Broken. I comment out the call, and it works again. Every single time, it gives me an error in the console: -[UIImage release]: message sent to deallocated instance 0x3b1d600. This error doesn't appear if `NSZombieEnabled is turned off. Any ideas? --EDIT-- Ok, This is killing me. I have stuck breakpoints everywhere I can, and I still cannot get a hold of this thing. Here is the full code when I call the scaleImage method: -(void)setupImageButton { UIImage *imageFromFile; if (object.imageAttribute == nil) { imageFromFile = [UIImage imageNamed:@"no-image.png"]; } else { imageFromFile = object.imageAttribute; } UIImage *scaledImage = [UIImageManipulator scaleImage:imageFromFile toSize:imageButton.frame.size]; UIImage *roundedImage = [UIImageManipulator makeRoundCornerImage:scaledImage :10 :10 withBorder:YES]; [imageButton setBackgroundImage:roundedImage forState:UIControlStateNormal]; } The other UIImageManipulator method (makeRoundCornerImage) shouldn't be causing the error, but just in case I'm overlooking something, I threw the entire file up on github here. It's something about this method though. Has to be. If I comment it out, it works great. If I leave it in, Error. But it doesn't throw errors with NSZombieEnabled turned off ever.
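
    NSZombieEnabled rarely breaks working code; more often it exposes an over-release that normally lands on a reused but still-valid object. A hedged thing to audit in the code shown, under manual retain/release: every image in this chain (imageNamed:, the result of UIGraphicsGetImageFromCurrentImageContext(), the scaled and rounded results) comes back autoreleased, so none of them should ever receive an explicit release or autorelease. A sketch of the caller with the ownership spelled out:

        // Sketch only: same flow as setupImageButton, with ownership noted.
        // Any extra [image release] inside scaleImage:toSize:,
        // makeRoundCornerImage:, or on object.imageAttribute would produce the
        // "-[UIImage release]: message sent to deallocated instance" zombie hit.
        UIImage *imageFromFile = object.imageAttribute;           // owned by 'object', not by us
        if (imageFromFile == nil) {
            imageFromFile = [UIImage imageNamed:@"no-image.png"]; // autoreleased/cached
        }

        UIImage *scaledImage  = [UIImageManipulator scaleImage:imageFromFile
                                                        toSize:imageButton.frame.size]; // autoreleased
        UIImage *roundedImage = [UIImageManipulator makeRoundCornerImage:scaledImage :10 :10
                                                              withBorder:YES];          // autoreleased
        [imageButton setBackgroundImage:roundedImage forState:UIControlStateNormal];     // the button retains it

    If the zombie hit persists, a breakpoint on -[UIImage release] in the debugger shows exactly which call site sends the extra release.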

    Read the article

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of an impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.) Unfortunatey, I am discovering, much to my chagrin, that there doesn't seem to be any one widely-used or even "generally accepted" standard for PL-SQL. That kind of puts a crimp on my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options: Add a feature that lets the user customize the standard and then reformat the code according to that standard. —OR— Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched. In my mind, the first option saves the end-users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by. What I'd need to know from you guys is kind of poll-ish, but kind of not. If you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to a lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring." P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL. But I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case, and identifiers to lower case.

    Read the article

  • NHibernate exception on query

    - by Yoav
    I'm getting a mapping exception doing the most basic query. This is my domain class: public class Project { public virtual string PK { get; set; } public virtual string Id { get; set; } public virtual string Name { get; set; } public virtual string Description { get; set; } } And the mapping class: public class ProjectMap :ClassMap<Project> { public ProjectMap() { Table("PROJECTS"); Id(x => x.PK, "PK"); Map(x => x.Id, "ID"); Map(x => x.Name, "NAME"); Map(x => x.Description, "DESCRIPTION"); } } Configuration: public ISessionFactory SessionFactory { return Fluently.Configure() .Database(MsSqlCeConfiguration.Standard.ShowSql().ConnectionString(c => c.Is("data source=" + path))) .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Project>()) .BuildSessionFactory(); } And query: IList project; using (ISession session = SessionFactory.OpenSession()) { IQuery query = session.CreateQuery("from Project"); project = query.List<Project>(); } I'm getting the exception on the query line: NHibernate.Hql.Ast.ANTLR.QuerySyntaxException: Project is not mapped [from Project] at NHibernate.Hql.Ast.ANTLR.SessionFactoryHelperExtensions.RequireClassPersister(String name) at NHibernate.Hql.Ast.ANTLR.Tree.FromElementFactory.AddFromElement() at NHibernate.Hql.Ast.ANTLR.Tree.FromClause.AddFromElement(String path, IASTNode alias) at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromElement() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromElementList() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.fromClause() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.unionedQuery() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.query() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.selectStatement() at NHibernate.Hql.Ast.ANTLR.HqlSqlWalker.statement() at NHibernate.Hql.Ast.ANTLR.HqlSqlTranslator.Translate() at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.Analyze(HqlParseEngine parser, String collectionRole) at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.DoCompile(IDictionary`2 replacements, Boolean shallow, String collectionRole) at NHibernate.Hql.Ast.ANTLR.QueryTranslatorImpl.Compile(IDictionary`2 replacements, Boolean shallow) at NHibernate.Engine.Query.HQLQueryPlan..ctor(String hql, String collectionRole, Boolean shallow, IDictionary`2 enabledFilters, ISessionFactoryImplementor factory) at NHibernate.Engine.Query.QueryPlanCache.GetHQLQueryPlan(String queryString, Boolean shallow, IDictionary`2 enabledFilters) at NHibernate.Impl.AbstractSessionImpl.GetHQLQueryPlan(String query, Boolean shallow) at NHibernate.Impl.AbstractSessionImpl.CreateQuery(String queryString) I assume something is wrong with my query.
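
    A hedged sketch of two quick checks, assuming the fluent configuration shown (the export path is made up): "Project is not mapped" from the HQL parser usually means the session factory never compiled ProjectMap, for example because Project and ProjectMap live in different assemblies and AddFromAssemblyOf<Project>() points at the wrong one. It helps to dump what the factory actually mapped and to query in a way that does not depend on HQL entity-name resolution.

        // Sketch: export the generated mappings for inspection and query via Criteria.
        var sessionFactory = Fluently.Configure()
            .Database(MsSqlCeConfiguration.Standard.ShowSql()
                .ConnectionString(c => c.Is("data source=" + path)))
            .Mappings(m => m.FluentMappings
                .AddFromAssemblyOf<ProjectMap>()      // the assembly that holds the ClassMaps
                .ExportTo(@"C:\temp\hbm"))            // writes the generated .hbm.xml files
            .BuildSessionFactory();

        using (var session = sessionFactory.OpenSession())
        {
            // Criteria bypasses the HQL "from Project" name lookup entirely
            var projects = session.CreateCriteria<Project>().List<Project>();
        }

    It is also worth building the ISessionFactory once and caching it; the property shown rebuilds the whole configuration on every access, which hides mapping problems behind a large startup cost on every query.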

    Read the article

  • Calling/selecting variables (float valued) with user input in Python

    - by Jonathan Straus
    I've been working on a computational physics project (plotting related rates of chemical reactants with respect to eachother to show oscillatory behavior) with a fair amount of success. However, one of my simulations involves more than two active oscillating agents (five, in fact) which would obviously be unsuitable for any single visual plot... My scheme was hence to have the user select which two reactants they wanted plotted on the x-axis and y-axis respectively. I tried (foolishly) to convert string input values into the respective variable names, but I guess I need a radically different approach if any exist? If it helps clarify any, here is part of my code: def coupledBrusselator(A, B, t_trial,display_x,display_y): t = 0 t_step = .01 X = 0 Y = 0 E = 0 U = 0 V = 0 dX = (A) - (B+1)*(X) + (X**2)*(Y) dY = (B)*(X) - (X**2)*(Y) dE = -(E)*(U) - (X) dU = (U**2)*(V) -(E+1)*(U) - (B)*(X) dV = (E)*(U) - (U**2)*(V) array_t = [0] array_X = [0] array_Y = [0] array_U = [0] array_V = [0] while t <= t_trial: X_1 = X + (dX)*(t_step/2) Y_1 = Y + (dY)*(t_step/2) E_1 = E + (dE)*(t_step/2) U_1 = U + (dU)*(t_step/2) V_1 = V + (dV)*(t_step/2) dX_1 = (A) - (B+1)*(X_1) + (X_1**2)*(Y_1) dY_1 = (B)*(X_1) - (X_1**2)*(Y_1) dE_1 = -(E_1)*(U_1) - (X_1) dU_1 = (U_1**2)*(V_1) -(E_1+1)*(U_1) - (B)*(X_1) dV_1 = (E_1)*(U_1) - (U_1**2)*(V_1) X_2 = X + (dX_1)*(t_step/2) Y_2 = Y + (dY_1)*(t_step/2) E_2 = E + (dE_1)*(t_step/2) U_2 = U + (dU_1)*(t_step/2) V_2 = V + (dV_1)*(t_step/2) dX_2 = (A) - (B+1)*(X_2) + (X_2**2)*(Y_2) dY_2 = (B)*(X_2) - (X_2**2)*(Y_2) dE_2 = -(E_2)*(U_2) - (X_2) dU_2 = (U_2**2)*(V_2) -(E_2+1)*(U_2) - (B)*(X_2) dV_2 = (E_2)*(U_2) - (U_2**2)*(V_2) X_3 = X + (dX_2)*(t_step) Y_3 = Y + (dY_2)*(t_step) E_3 = E + (dE_2)*(t_step) U_3 = U + (dU_2)*(t_step) V_3 = V + (dV_2)*(t_step) dX_3 = (A) - (B+1)*(X_3) + (X_3**2)*(Y_3) dY_3 = (B)*(X_3) - (X_3**2)*(Y_3) dE_3 = -(E_3)*(U_3) - (X_3) dU_3 = (U_3**2)*(V_3) -(E_3+1)*(U_3) - (B)*(X_3) dV_3 = (E_3)*(U_3) - (U_3**2)*(V_3) X = X + ((dX + 2*dX_1 + 2*dX_2 + dX_3)/6) * t_step Y = Y + ((dX + 2*dY_1 + 2*dY_2 + dY_3)/6) * t_step E = E + ((dE + 2*dE_1 + 2*dE_2 + dE_3)/6) * t_step U = U + ((dU + 2*dU_1 + 2*dY_2 + dE_3)/6) * t_step V = V + ((dV + 2*dV_1 + 2*dV_2 + dE_3)/6) * t_step dX = (A) - (B+1)*(X) + (X**2)*(Y) dY = (B)*(X) - (X**2)*(Y) t_step = .01 / (1 + dX**2 + dY**2) ** .5 t = t + t_step array_X.append(X) array_Y.append(Y) array_E.append(E) array_U.append(U) array_V.append(V) array_t.append(t) where previously display_x = raw_input("Choose catalyst you wish to analyze in the phase/field diagrams (X, Y, E, U, or V) ") display_y = raw_input("Choose one other catalyst from list you wish to include in phase/field diagrams ") coupledBrusselator(A, B, t_trial, display_x, display_y) Thanks!
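
    One common way to handle the "select a variable by name" part, sketched with two assumptions that are not in the post (the plotting is done with matplotlib, and the per-reactant lists keep the names used above): store the series in a dictionary keyed by the letters the user types, then look the two choices up instead of converting strings into variable names.

        import matplotlib.pyplot as plt

        # Collect the trajectories computed inside coupledBrusselator,
        # keyed by the letters the user is allowed to type.
        series = {
            "X": array_X,
            "Y": array_Y,
            "E": array_E,
            "U": array_U,
            "V": array_V,
        }

        display_x = raw_input("Reactant for the x-axis (X, Y, E, U, or V): ").strip().upper()
        display_y = raw_input("Reactant for the y-axis: ").strip().upper()

        # Dictionary lookup replaces any attempt to eval() a variable name
        plt.plot(series[display_x], series[display_y])
        plt.xlabel(display_x)
        plt.ylabel(display_y)
        plt.show()

    One small fix the loop needs anyway: array_E is appended to but never initialised alongside array_t, array_X, array_Y, array_U and array_V, so add array_E = [0] with the others.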

    Read the article

  • Keeping track of business rules within IT department?

    - by evaldas-alexander
    I am looking for the best way to keep track of the business rules for both developers and everybody else (support staff / management) in a startup enviroment. The challenge is that our business model requires quite a lot of different business rules, which are created pretty much on the fly and evolving organically after that. After running this project for 3+ years, we have so many of such rules that often the only way to be sure about what the application is supposed to do in a certain situation is to go find the module responsible for that process and analyze its code and comments. That is all fine as long as you have one single developer who created the entire application from the scratch, but every new developer needs to go over pretty much entire codebase in order to understand how the application works. Even bigger problem is that non technical employees don't even have that option and therefore are forced to ask me pretty much every day how some certain case would be handled by the application. Quick example - we only start charging for our customer campaigns once they have been active for at least 72 hours, but at the same time we stop creating invoices for campaigns that belong to insolvent accounts and close such accounts within a month of the first failed charge. That does not apply to accounts that are set to "non-chargeable" which most commonly belongs to us since we are using the service ourselves. The invoices are created on the 1st of each month and include charges from the previous month + any current balance that the account might have. However, some customers are charged only 4 days after their invoice has been generated due to issues with their billing department. In addition to that, invoices are also created when customer deactivates his campaign, but that can only be done once the campaign is not longer under mandatory 6 month contract, unless account manager approves early deactivation. I know, that's quite a lot of rules that need to be taken into account when answering a question "when do we bill our customers", but actually I could still append an asterisk at the end of each sentence in order to disclose some rare exceptions. Of course, it would be easiest just to keep the business rules to the minimum, but we need to adapt to changing marketplace - i.e. less than a year ago we had no contracts whatsoever. One idea that I had so far was a simplistic wiki with categories corresponding to areas such as "Account activation", "Invoicing", "Collection procedures" and so on. Another idea would be to have giant interactive flowchart showing the entire customer "life cycle" from prospecting to account deactivation. What are your experiences / suggestions?

    Read the article

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me what index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns: id: a primary key integer world_id: an integer foreign key which acts as a namespace for a subset of tiles tileY: the Y-coordinate integer tileX: the X-coordinate integer value: the contents of this tile, a varchar if it matters. I have the following indexes: "ywot_tile_pkey" PRIMARY KEY, btree (id) "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX") "ywot_tile_world_id" btree (world_id) And this is the query I'm trying to optimize: ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile" WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1 ); QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------- Bitmap Heap Scan on ywot_tile (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1) Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2))) -> Bitmap Index Scan on ywot_tile_world_id_key (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1) Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2))) Total runtime: 80.194 ms So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant: All the tiles for a queried region may or may not be present The height and width of a queried rectangle are typically about 10x10-20x20 For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer. New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent that just reading as in the query above. The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either for either of those. Is there some other kind of index that would be more appropriate?
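
    A hedged alternative to consider, since a btree on (world_id, "tileY", "tileX") can only range-scan the first inequality (tileY) and must recheck the rest: index the coordinates spatially so the rectangle becomes a single containment test. The index name is made up; an expression GiST index over point() works on recent PostgreSQL releases (older ones need a box-valued expression instead), and pulling world_id into the same GiST index requires the btree_gist extension.

        -- Hypothetical expression index: one GiST entry per tile position
        CREATE INDEX ywot_tile_point_idx
            ON ywot_tile USING gist (point("tileX", "tileY"));

        -- The rectangular region becomes one containment predicate
        SELECT *
        FROM ywot_tile
        WHERE world_id = 27685
          AND point("tileX", "tileY") <@ box(point(-2, -1), point(9, 6));

    Given that a queried rectangle is only 10x10 to 20x20 tiles, it is also worth running a plain ANALYZE on the table and re-checking the plan: the estimate of 65,989 rows against an actual 125 suggests the statistics are badly off, which by itself can push the planner toward a poorer choice.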

    Read the article

  • Calling a function within a class

    - by JM4
    I am having trouble calling a specific function within a class. The call is made: case "Mod10": if (!validateCreditCard($fields[$field_name])) $errors[] = $error_message; break; and the class code is: class CreditCardValidationSolution { var $CCVSNumber = ''; var $CCVSNumberLeft = ''; var $CCVSNumberRight = ''; var $CCVSType = ''; var $CCVSError = ''; function validateCreditCard($Number) { $this->CCVSNumber = ''; $this->CCVSNumberLeft = ''; $this->CCVSNumberRight = ''; $this->CCVSType = ''; $this->CCVSError = ''; // Catch malformed input. if (empty($Number) || !is_string($Number)) { $this->CCVSError = $CCVSErrNumberString; return FALSE; } // Ensure number doesn't overrun. $Number = substr($Number, 0, 20); // Remove non-numeric characters. $this->CCVSNumber = preg_replace('/[^0-9]/', '', $Number); // Set up variables. $this->CCVSNumberLeft = substr($this->CCVSNumber, 0, 4); $this->CCVSNumberRight = substr($this->CCVSNumber, -4); $NumberLength = strlen($this->CCVSNumber); $DoChecksum = 'Y'; // Mod10 checksum process... if ($DoChecksum == 'Y') { $Checksum = 0; // Add even digits in even length strings or odd digits in odd length strings. for ($Location = 1 - ($NumberLength % 2); $Location < $NumberLength; $Location += 2) { $Checksum += substr($this->CCVSNumber, $Location, 1); } // Analyze odd digits in even length strings or even digits in odd length strings. for ($Location = ($NumberLength % 2); $Location < $NumberLength; $Location += 2) { $Digit = substr($this->CCVSNumber, $Location, 1) * 2; if ($Digit < 10) { $Checksum += $Digit; } else { $Checksum += $Digit - 9; } } // Checksums not divisible by 10 are bad. if ($Checksum % 10 != 0) { $this->CCVSError = $CCVSErrChecksum; return FALSE; } } return TRUE; } } When I run the application - I get the following message: Fatal error: Call to undefined function validateCreditCard() in C:\xampp\htdocs\validation.php on line 339 any ideas?
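
    The fatal error is PHP reporting that no global function named validateCreditCard() exists; it is only defined as an instance method of CreditCardValidationSolution, so the call site needs an object (the method uses $this, so it cannot simply be called statically). A minimal sketch of the corrected case, keeping the original switch structure:

        case "Mod10":
            // validateCreditCard() is an instance method, so create the object first
            $ccv = new CreditCardValidationSolution();
            if (!$ccv->validateCreditCard($fields[$field_name])) {
                $errors[] = $error_message;   // $ccv->CCVSError carries the failure detail
            }
            break;

    Separately, in the code as posted the failure branches assign $CCVSErrNumberString and $CCVSErrChecksum to CCVSError, but those variables are never defined inside the method, so the error text will be empty until they are passed in or replaced with literal messages.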

    Read the article

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite compiling having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include: files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, but also miss macro definitions. cannot find definitions: Pretty self-explanatory - often times, Netbeans cannot find the definition of something. This is partly a result of the above problem. can't find header files: self-explanatory I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points: I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them. As the title suggest, I would be also happy to know how to set-up Eclipse to do what I need While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, the question is more of a comparison between IDE's, whereas I'm looking for how to set-up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers

    Read the article

  • Deallocated memory in table view: message sent to deallocated instance

    - by Kirn
    I tried looking up other issues but couldn't find anything to match so here goes: I'm trying to display text in the table view so I use this bit of code: // StockData is an object I created and it pulls information from Yahoo APIs based on // a stock ticker stored in NSString *heading NSArray* tickerValues = [heading componentsSeparatedByString:@" "]; StockData *chosenStock = [[StockData alloc] initWithContents:[tickerValues objectAtIndex:0]]; [chosenStock getData]; // Set up the cell... NSDictionary *tempDict = [chosenStock values]; NSArray *tempArr = [tempDict allValues]; cell.textLabel.text = [tempArr objectAtIndex:indexPath.row]; return cell; This is all under cellForRowAtIndexPath When I try to release the chosenStock object though I get this error: [CFDictionary release]: message sent to deallocated instance 0x434d3d0 Ive tried using NSZombieEnabled and Build and Analyze to detect problems but no luck thus far. Ive even gone so far as to comment bits and pieces of the code with NSLog but no luck. I'll post the code for StockData below this. As far as I can figure something is getting deallocated before I do the release but I'm not sure how. The only place I've got release in my code is under dealloc method call. Here's the StockData code: // StockData contains all stock information pulled in through Yahoo! to be displayed @implementation StockData @synthesize ticker, values; - (id) initWithContents: (NSString *)newName { if(self = [super init]){ ticker = newName; } return self; } - (void) getData { NSURL *url = [NSURL URLWithString: [NSString stringWithFormat:@"http://download.finance.yahoo.com/d/quotes.csv?s=%@&f=%@&e=.csv", ticker, @"chgvj1"]]; NSError *error; NSURLResponse *response; NSURLRequest *request = [NSURLRequest requestWithURL:url]; NSData *stockData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]; if(stockData) { NSString *tempStr = [[NSString alloc] initWithData:stockData encoding:NSASCIIStringEncoding]; NSArray *receivedValuesArr = [tempStr componentsSeparatedByString:@","]; [tempStr release]; values = [NSDictionary dictionaryWithObjects:receivedValuesArr forKeys:[@"change, high, low, volume, market" componentsSeparatedByString:@", "]]; } else { NSLog(@"Connection failed: %@", error); } } - (void)dealloc { [ticker release]; [values release]; [super dealloc]; NSLog(@"Release took place fine"); } @end
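
    The zombie message names a CFDictionary, which points straight at the values ivar: dictionaryWithObjects:forKeys: returns an autoreleased dictionary, it is stored without a retain, and dealloc then releases it a second time; ticker = newName has the same problem. A hedged sketch of the ownership fixes under manual retain/release, assuming the ticker and values properties are declared retain/copy rather than assign:

        // In initWithContents: take ownership through the property setter
        - (id)initWithContents:(NSString *)newName {
            if ((self = [super init])) {
                self.ticker = newName;   // was: ticker = newName;  (no ownership taken)
            }
            return self;
        }

        // In getData: assign through the property so the ivar is retained,
        // balancing the [values release] in dealloc
        self.values = [NSDictionary dictionaryWithObjects:receivedValuesArr
                                                  forKeys:[@"change, high, low, volume, market"
                                                              componentsSeparatedByString:@", "]];

    It is also worth moving the alloc and getData work out of cellForRowAtIndexPath; a synchronous NSURLConnection per cell will stall scrolling and makes the over-release much harder to spot.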

    Read the article

< Previous Page | 27 28 29 30 31 32 33 34 35 36  | Next Page >