Search Results

Search found 4460 results on 179 pages for 'ssrs reports'.


  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those already out there. Assuming you know you should be testing, then comes the problem of how to actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows for writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid in certain contexts. Each approach you are skilled at applying is another tool in your tool belt; the more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and your level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into 2 categories: test first or test after. Test first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug (a sketch of such a test appears at the end of this post). The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to Enhancement Requests as well as Bug Reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume it. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite.
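    To illustrate the "examples as tests" idea, here is a minimal sketch in C# using MSTest. The InvoiceCalculator type is a hypothetical stand-in for your own API (it is not from the original post); the point is that the documentation sample compiles and runs with every build instead of rotting in a manual.

      using Microsoft.VisualStudio.TestTools.UnitTesting;

      // Hypothetical production type standing in for "your API".
      public class InvoiceCalculator
      {
          private readonly decimal _taxRate;
          public InvoiceCalculator(decimal taxRate) { _taxRate = taxRate; }
          public decimal TotalWithTax(decimal subTotal) { return subTotal * (1 + _taxRate); }
      }

      [TestClass]
      public class InvoiceCalculatorExamples
      {
          [TestMethod]
          public void Example_CalculatingTheTotalWithTax()
          {
              // The same code a consumer would copy out of the API documentation.
              var calculator = new InvoiceCalculator(taxRate: 0.05m);
              decimal total = calculator.TotalWithTax(subTotal: 100m);

              // The assertion keeps the documented behaviour honest.
              Assert.AreEqual(105m, total);
          }
      }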
    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade the end users will be hosed, and they will be scratching their heads as to how an update could have been released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked: there may be some classes or methods that aren't being tested, which easily could be. The other reaction type is "OMG": this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while refactoring the tests.
    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit it into the standard red-green-refactor cycle. The refactor step applies not only to the production code but also to the tests, just not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.
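    As an illustration of the Bug Reporting workflow described above, here is a small, self-contained sketch in C# (MSTest). Both the "production" method and the defect are hypothetical, invented for this example; the idea is that the reporter's reproduction steps are captured as a failing test rather than as prose in a bug tracker.

      using Microsoft.VisualStudio.TestTools.UnitTesting;

      // Hypothetical production code containing the reported defect:
      // each line is rounded before summing instead of rounding the final total.
      public static class InvoiceMath
      {
          public static decimal Total(decimal[] lines)
          {
              decimal total = 0m;
              foreach (var line in lines)
              {
                  total += decimal.Round(line, 2);   // bug: premature rounding
              }
              return total;
          }
      }

      [TestClass]
      public class Bug4711_InvoiceTotalLosesFractionsOfACent
      {
          [TestMethod]
          public void Total_should_round_once_on_the_final_sum()
          {
              // The reporter's exact figures, captured in code so no step is lost.
              var lines = new[] { 0.004m, 0.004m };

              // Fails today (0.00 instead of 0.01), proving the bug;
              // it will pass once rounding is moved to the final sum.
              Assert.AreEqual(0.01m, InvoiceMath.Total(lines));
          }
      }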

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, "Five Tips to Get Your Organisation Releasing Software Frequently", I looked at how teams can automate processes to speed up release frequency. In this post, I'm looking specifically at automating deployments using the SQL Compare command line.

    SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare's command line comes into its own.

    I've put together a PowerShell script that loops through the Servers table and pulls out the server and database; these are then passed to sqlcompare.exe to be used as target parameters. In the example the source database is a scripts folder, a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

      -- Create a DeploymentTargets database and a Servers table
      CREATE DATABASE DeploymentTargets
      GO
      USE DeploymentTargets
      GO
      CREATE TABLE [dbo].[Servers](
        [id] [int] IDENTITY(1,1) NOT NULL,
        [serverName] [nvarchar](50) NULL,
        [environment] [nvarchar](50) NULL,
        [databaseName] [nvarchar](50) NULL,
        CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
      )
      GO
      -- Now insert your target server and database details
      INSERT INTO dbo.Servers ( serverName , environment , databaseName)
      VALUES ( N'myserverinstance' , N'myenvironment1' , N'mydb1')
      INSERT INTO dbo.Servers ( serverName , environment , databaseName)
      VALUES ( N'myserverinstance' , N'myenvironment2' , N'mydb2')

    Here's the PowerShell script you can adapt for yourself as well.

      # We're holding the server names and database names that we want to deploy to in a database table.
      # We need to connect to that server to read these details
      $serverName = ""
      $databaseName = "DeploymentTargets"
      $authentication = "Integrated Security=SSPI"
      #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

      # Path to the scripts folder we want to deploy to the databases
      $scriptsPath = "SimpleTalk"

      # Path to SQLCompare.exe
      $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

      # Create SQL connection string, and connection
      $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
      $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

      # Create a Dataset to hold the DataTable
      $dataSet = new-object "System.Data.DataSet" "ServerList"

      # Create a query
      $query = "SET NOCOUNT ON;"
      $query += "SELECT serverName, environment, databaseName "
      $query += "FROM dbo.Servers; "

      # Create a DataAdapter to populate the DataSet with the results
      $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
      $dataAdapter.Fill($dataSet) | Out-Null

      # Close the connection
      $ServerConnection.Close()

      # Populate the DataTable
      $dataTable = new-object "System.Data.DataTable" "Servers"
      $dataTable = $dataSet.Tables[0]

      #For every row in the DataTable
      $dataTable | FOREACH-OBJECT {

        "Server Name: $($_.serverName)"
        "Database Name: $($_.databaseName)"
        "Environment: $($_.environment)"

        # Compare the scripts folder to the database and synchronize the database to match
        # NB. Have set SQL Compare to abort on medium level warnings.
        $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety
        write-host $arguments
        & $SQLComparePath $arguments
        "Exit Code: $LASTEXITCODE"

        # Some interesting variations

        # Check that every database matches a folder.
        # For example this might be a pre-deployment step to validate everything is at the same baseline state.
        # Or a post deployment script to validate the deployment worked.
        # An exit code of 0 means the databases are identical.
        #
        # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

        # Generate a report of the difference between the folder and each database. Generate a SQL update script for each database.
        # For example use this after the above to generate upgrade scripts for each database
        # Examine the warnings and the HTML diff report to understand how the script will change objects
        #
        #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical")

      }

    It's worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings. See the commented-out example in the PowerShell:

      #$arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html" , "/reportType:Interactive", "/showWarnings", "/include:Identical")

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn't drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script. See the commented-out example below (and the small illustration at the end of this post):

      # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
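    To make the drift-check gate concrete, here is a hedged sketch in C# (the same logic can just as easily be added to the PowerShell loop above). The folder, server and database names are placeholders, and the exit codes used – 0 for identical schemas, 79 for differences – are the ones quoted in this article; treat it as an illustration of the gating logic rather than a drop-in implementation.

      using System;
      using System.Diagnostics;

      class DriftGate
      {
          static int RunSqlCompare(string arguments)
          {
              var psi = new ProcessStartInfo
              {
                  FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
                  Arguments = arguments,
                  UseShellExecute = false
              };
              using (var process = Process.Start(psi))
              {
                  process.WaitForExit();
                  return process.ExitCode;
              }
          }

          static void Main()
          {
              string target = "/server2:myserverinstance /database2:mydb1";   // placeholder target

              // 1. Drift check: compare the target against the expected baseline
              //    (a scripts folder here; a schema snapshot would work the same way).
              int drift = RunSqlCompare("/scripts1:ExpectedBaseline " + target + " /Assertidentical");

              if (drift == 0)
              {
                  // 2. No drift detected: safe to deploy the new version.
                  Console.WriteLine("No drift detected - deploying.");
                  RunSqlCompare("/scripts1:SimpleTalk " + target + " /sync");
              }
              else if (drift == 79)
              {
                  Console.WriteLine("Schema drift detected - deployment skipped; investigate the differences first.");
              }
              else
              {
                  Console.WriteLine("sqlcompare.exe failed with exit code " + drift);
              }
          }
      }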

    Read the article

  • Can't remove burg theme packages

    - by Lassi
    Today, after trying to install and remove BURG and a few themes, I ran into an issue: now I can't install or remove anything. Here is the output (unfortunately it is partly in Finnish; I couldn't change the language, since that also seems to depend on the package listings):

    lassi@lassi-ubuntu:~$ sudo apt-get autoremove Luetaan pakettiluetteloita... Valmis Muodostetaan riippuvuussuhteiden puu Luetaan tilatietoja... Valmis Seuraavat paketit POISTETAAN: burg-theme-fortune burg-theme-gnome burg-theme-picchio 0 päivitetty, 0 uutta asennusta, 3 poistettavaa ja 0 päivittämätöntä. 3 ei asennettu kokonaan tai poistettiin. Toiminnon jälkeen vapautuu 7 180 k t levytilaa. Haluatko jatkaa [K/e]? k (Luetaan tietokantaa... 166462 files and directories currently installed.) Poistetaan pakettia burg-theme-fortune... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-fortune (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-gnome... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-gnome (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Poistetaan pakettia burg-theme-picchio... sudo: update-burg: command not found dpkg: virhe käsiteltäessä burg-theme-picchio (--remove): aliprosessi installed post-removal script palautti virhetilakoodin 1 Käsittelyssä tapahtui liian monta virhettä: burg-theme-fortune burg-theme-gnome burg-theme-picchio E: Sub-process /usr/bin/dpkg returned an error code (1)

    Basically what seems to happen is this: it creates the package lists, then tries to remove the package burg-theme-fortune. This fails because the update-burg command was not found. dpkg then reports an error while processing the package. The same goes for all 3 packages. In the end it claims that there were too many errors, and the packages stay installed. I also tried installing burg (since the removal scripts try to run update-burg), but it appears that apt tries to remove these packages whenever I install, remove, or do anything else with it. Any ideas how I could solve this issue?

    Edit: Here is the output of apt-get install burg (I tried installing again to get English output):

    lassi@lassi-ubuntu:~$ LC_ALL=C sudo apt-get install burg [sudo] password for lassi: Reading package lists... Done Building dependency tree Reading state information... Done burg is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 3 not fully installed or removed. Need to get 0 B/6169 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 167497 files and directories currently installed.) Preparing to replace burg-theme-fortune 0.5.0-1 (using .../burg-theme-fortune_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-fortune ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'.
No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-gnome 0.5.0-1 (using .../burg-theme-gnome_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-gnome ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Preparing to replace burg-theme-picchio 0.5.0-1 (using .../burg-theme-picchio_0.5.0-1_all.deb) ... Unpacking replacement burg-theme-picchio ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: warning: subprocess old post-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error processing /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb (--unpack): subprocess new post-removal script returned error exit status 1 Generating burg.cfg ... /usr/sbin/burg-probe: error: cannot stat `/boot/burg/locale'. No path or device is specified. Try `/usr/sbin/burg-probe --help' for more information. dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/burg-theme-fortune_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-gnome_0.5.0-1_all.deb /var/cache/apt/archives/burg-theme-picchio_0.5.0-1_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) lassi@lassi-ubuntu:~$

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.
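    For reference, a minimal sketch of what such a baseline loop might look like is shown below. It assumes the 1.x Microsoft.WindowsAzure.StorageClient library mentioned above; the connection string, container name and folder path are placeholders, and the author's actual harness (linked in the post) also randomizes file order and records the results for later analysis.

      using System;
      using System.Diagnostics;
      using System.IO;
      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.StorageClient;

      class BaselineUpload
      {
          static void Main()
          {
              // Placeholder connection string and container name.
              var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
              var container = account.CreateCloudBlobClient().GetContainerReference("uploadtest");
              container.CreateIfNotExist();

              foreach (var path in Directory.GetFiles(@"C:\TestFiles"))
              {
                  var blob = container.GetBlockBlobReference(Path.GetFileName(path));

                  var timer = Stopwatch.StartNew();
                  blob.UploadFile(path);               // single-threaded, whole-file upload
                  timer.Stop();

                  // Record size and duration for each transfer, as in the original tests.
                  Console.WriteLine("{0}\t{1} bytes\t{2} ms",
                      Path.GetFileName(path), new FileInfo(path).Length, timer.ElapsedMilliseconds);
              }
          }
      }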
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal); a simplified code sketch of this approach appears at the end of this post. A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and this is supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image) The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources

    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests: Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks
    - Excel worksheet showing summarizations and comparisons
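    As noted earlier, here is a simplified sketch of the blocked, parallel upload approach, assuming the 1.x StorageClient API (PutBlock/PutBlockList) and the .NET 4 Parallel Extensions. The connection string and names are placeholders, and this is illustrative only; the author's actual test harness (linked in the Related Resources) also records timings, randomizes file order and handles reporting.

      using System;
      using System.IO;
      using System.Security.Cryptography;
      using System.Threading.Tasks;
      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.StorageClient;

      class ParallelBlockUpload
      {
          const int BlockSize = 1 * 1024 * 1024;   // 1 MB blocks, per the post's findings

          static void UploadInBlocks(CloudBlockBlob blob, string path)
          {
              long length = new FileInfo(path).Length;
              int blockCount = (int)Math.Ceiling((double)length / BlockSize);
              var blockIds = new string[blockCount];

              Parallel.For(0, blockCount, i =>
              {
                  // Each block gets a fixed-length, base64-encoded id.
                  blockIds[i] = Convert.ToBase64String(BitConverter.GetBytes((long)i));

                  var buffer = new byte[BlockSize];
                  int read;
                  using (var file = File.OpenRead(path))
                  {
                      file.Seek((long)i * BlockSize, SeekOrigin.Begin);
                      read = file.Read(buffer, 0, BlockSize);
                  }

                  // Send the block with its MD5 so the service can verify the bits that arrived.
                  string md5;
                  using (var hasher = MD5.Create())
                      md5 = Convert.ToBase64String(hasher.ComputeHash(buffer, 0, read));

                  using (var block = new MemoryStream(buffer, 0, read))
                      blob.PutBlock(blockIds[i], block, md5);
              });

              // Commit the uploaded blocks, in order, as the final blob.
              blob.PutBlockList(blockIds);
          }

          static void Main()
          {
              var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...");
              var container = account.CreateCloudBlobClient().GetContainerReference("uploadtest");
              container.CreateIfNotExist();

              var blob = container.GetBlockBlobReference("bigfile.bin");
              UploadInBlocks(blob, @"C:\TestFiles\bigfile.bin");
          }
      }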

    Read the article

  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: A recent customer survey reveals the deleterious effects of data fragmentation, by Trevor Naidoo, December 2010.

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.

    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization.

    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.
    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained, with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

    4. Approximately 50 percent of respondents spend manual effort cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches in one system and entered with a different notation in another. These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses where we don't reside, and through channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts is stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system.
    Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM

    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

    - enterprise-grade MDM performance
    - complete technology that can be rapidly deployed and addresses multiple business issues
    - end-to-end MDM process management with data quality monitoring and assurance
    - pre-built, business-relevant MDM applications with data stores and workflows

    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.

    Read the article

  • Oracle Partner Store (OPS) New Enhancements

    - by Kristin Rose
    Effective June 29th, Oracle Partner Store (OPS) will release the enhancements listed below to improve your overall ordering experience.

    - Online Transactional Oracle Master Agreement (Online TOMA): The Online TOMA enables end users to execute a transactional end user license agreement with Oracle. The new Online TOMA in OPS will replace the need for you to obtain a signed hard copy of the TOMA from the end user. You will now initiate the Online TOMA via OPS. Navigation: OPS Home > Order Tools > Online TOMA Query > Request Online TOMA > End User Contact, click "Select for TOMA" > Select Language > Submit (an automated email is sent immediately to the requestor and the end user). The Online TOMA can also be initiated from the 'My OPS' tab. Under the Online TOMA Query section partners can track Online TOMA request details submitted to end users. The status of the Online TOMA request and the OMA Key generated (once Ts&Cs of the Online TOMA are accepted by an end user) are also displayed in this table. There is also the ability to resend pending Online TOMA requests by clicking 'Resend'. Navigation: OPS Home > Order Tools > Online TOMA Query. For more details on the Transactional OMA, please click here.

    - Convert Deals to Carts: The partner deal registration system within OPS will now allow you to convert approved deals into carts with a simple click of a button. VADs can use Deal to Cart on all of their partners' registrations, regardless of whether they submitted on their partner's behalf, or the partner submitted themselves. Navigation: Login > Deal Registrations > Deal Registration List > Open the approved deal > Click Deal Reg ID number link to open > Click on 'Create Cart' link. You can locate your newly created cart in the Saved Carts section of OPS. Links are also available from within an open deal or from the Deal Registration List. Click on the cart number to proceed.

    - Partner Opportunity Management: Deal Registration on OPS now allows you to see updated information on your opportunities from Oracle's Fusion CRM opportunity management system. Key fields such as close date, sales stage, products and status can be viewed by clicking the opportunity ID associated with the deal registration. This new feature allows you to see regular updates to your opportunities after registrations are approved. Through ongoing communication with Oracle Channel Managers and Sales Reps, you can ensure that Oracle has the latest information on your active registered deals.

    - Product Recommendations: When adding products to the Deal Registrations tab, OPS will now show additional products that you can try to include to maximize your sale and rebate.

    - Advanced Customer Support (ACS) Services (note: this will be available from July 9th): Initiate the purchase of the complete stack (HW/SW/Services) online with one single OPS order. More ACS services are now supported online, with the exception of Start-Up Pack:
      - New SW installation services for Standard Configurations & stand-alone System Software
      - New Pre-production & Go-live services for Standard & Engineered Systems
      - New SW configuration & Platinum Pre-Production & Go-Live services for Engineered Systems
      - New Travel & Expenses Estimate included
      - New Partner & VAD volume discount supported

    - Software as a Service (SaaS) for Independent Software Vendors (ISVs): Oracle SaaS ISVs can now use OPS to submit their monthly usage reports to Oracle within 20 days after the end of every month. Navigation: OPS Home > Cart > Transaction Type: Partner SaaS for ISV's > Add Eligible Products > Check out.

    - Existing Approvals: In an effort to reduce the processing time of discount approvals, we have added a new section in the Request Approval page for you to communicate pre-existing approvals without having to attach the DAT. Just enter the Approval ID and submit your request. In case of existing software approvals, you will be required to submit the DAT with the Contact Information section filled out.

    - Additional data for Shipping Box Labels and Packing Slips: OPS now has additional fields in the Shipping Notes section for you to add PO details. This will help you easily identify shipments as they arrive. Partners will have an End User PO field, whereas VADs will have VAR and End User PO fields.

    - Shipping Notes on OPS: Hardware delivery Shipping Notes will now have multiple options to better suit your requirements.

    - Reminders for Royalty Reporting Partners: If you have not submitted your royalty report online, OPS will now send an automated alert to remind you.

    - Order Tracker Changes:
      - Order Tracker will now have a deal reg flag (Yes/No). You can now clearly distinguish between orders that have registered opportunities.
      - All lines of the order will be visible in the order details list.

    - Changes in Terminology:
      - You will notice textual changes on some of our labels and messages relating to approval requests. "Discount Requests" has been replaced with "Approval Requests" to cater to some of our other offerings.
      - First Line Support (FLS) transaction type has been renamed to Support Provider Partner (SPP).

    OPS Support

    For more details on these enhancements, please request training here. For assistance on the Oracle Partner Store, please contact the OPS support team in your region. NAMER: [email protected] LAD: [email protected] EMEA: [email protected] APAC: [email protected] Japan: [email protected] You can even call us on our Hotline! Find your local number here.

    Thank you,
    Oracle Partner Store Support Team

    Read the article

  • Reviewing Retail Predictions for 2011

    - by David Dorf
    I've been busy thinking about what 2012 and beyond will look like for retail, and I have some interesting predictions to share. But before I go there, let's first review this year's predictions before making new ones for 2012.

    1. Alternate Payments. We've seen several alternate payment schemes emerge over the last two years, and 2011 may be the year one of them takes hold. Any competition that can drive down fees will be good for everyone. I'm betting that Apple will add NFC chips to their next version of the iPhone, then enable payments in stores using iTunes accounts on the backend. Paypal will continue to make inroads, and Isis will announce a pilot. The iPhone 4S did not contain an NFC chip, so we'll have to continue waiting for the iPhone 5. PayPal announced it's moving into in-store payments, and Google launched its wallet in selected cities. Overall I think the payment scene is heating up and that trend will continue.

    2. Engineered Systems. The industry is moving toward purpose-built appliances that are optimized across the entire stack. Oracle calls these "engineered systems" and the first two examples are Exadata and Exalogic, but there are other examples from other vendors. These are particularly important to the retail industry because of the volume of data that must be processed. There should be continued adoption in 2011. Oracle reports that Exadata is its fastest-growing product, and at the recent OpenWorld it announced the SuperCluster and Exalytics products, both continuing the engineered systems trend. SAP's HANA continues to receive attention, and IBM also seems to be moving in this direction.

    3. Social Analytics. There are lots of tools that provide insight into how a brand is perceived across popular internet sites, but as far as I know, these tools are not industry specific. The next step needs to mine the data and determine how it should influence retail operations. The data needs to help retailers determine how they create promotions, which products to stock, and how to keep consumers engaged. Social data alone does not provide the answers, but it's one more data point that will help retailers make better decisions. Look for some vendor consolidation to help make this happen. In March, Salesforce.com acquired leading social monitoring vendor Radian6 and followed up with acquisitions of Heroku and Model Metrics. The notion of Social CRM seems to be going more mainstream now.

    4. 2-D Barcodes. Look for more QRCodes on shelf-tags, in newspaper circulars, and on billboards. It's a great portal from the physical world into the digital one that buys us time until augmented reality matures further. Nobody wants to type "www", backslash, and ".com" on their phones. QRCodes are everywhere. 'Nuff said.
    5. In the words of Microsoft, "To the Cloud!" My favorite "cloud application" is Evernote. If you take notes on your work laptop, you will inevitably need those notes on your home PC. And if you manage to solve that problem, you'll need to access them from your mobile phone. Evernote stores your notes in the cloud and provides easy ways to access them. Being able to access a service from anywhere and not having to worry about backups, upgrades, etc. is great. Retailers will start to rely on cloud services, both public and private, in the coming year. There was no shortage of announcements in this area: Amazon's cloud-based Kindle Fire, Apple's iCloud, Oracle's Public Cloud, etc. I saw an interesting presentation showing how BevMo moved their systems to the cloud. It seems like retailers are starting to consider the cloud for specific uses.

    6. F-Commerce. Move over "E" and "M" so we can introduce "F-Commerce," which should go mainstream in 2011. Already several retailers have created small stores on Facebook, and it won't be long before Facebook becomes a full-fledged channel in the omni-channel world of retail. The battle between Facebook and Google will heat up over retail, where both stand to make lots of money. JCPenney and ASOS both put their entire catalogs on Facebook, and lots of other retailers have connected Facebook to their e-commerce site. I still think selling from the newsfeed is the best approach, and several retailers are trying that approach as well. I just don't see Google+ as a threat to Facebook, so I think that battle is over. I called 2011 The Year of F-Commerce, and that was probably accurate.

    It's good to look back at predictions, but we also have to think about what was missed. I didn't see Amazon entering the tablet business with such a splash, although in hindsight it was obvious. Nor did I think HP would fall so far so fast. Look for my 2012 predictions coming soon.

    Read the article

  • Google Analytics on Android

    - by pjv
    There is a specific and official analytics SDK for native Android apps (note that I'm not talking about webpages in apps on a phone). This library basically sends pages and events to Google Analytics, and you can view your analytics in exactly the same dashboard as for websites. Since my background is apps rather than websites, and since a lot of the Google Analytics terminology seems particularly inapplicable to a native app, I need some pointers. Please discuss my remarks, provide some clarification where you think I'm off-track, and above all share good experiences!

    1. Page Views

    Pages can mostly be matched to different Activities (and Dialogs) being displayed. Activities can be visible behind non-full-screen Activities, however, though only the top-level Activity can be interacted with. This sort of clashes with a "(page) view". You'd also want at least one page view for each visit and therefore put one page view tracker in the Application class. However this does not constitute a window of sorts. Usually an Activity will open at the same time, so the time spent on that page will have been 0. This will influence your "time spent" statistics. How are these counted anyway?

    Moreover, there is a loose coupling between the Activities, by means of Intents. A user can, much like on any website, step in at any Activity, although usually this concerns resuming the application where he left off. This means that the hierarchy of Activities is usually very flat. And since there are no URLs involved, what meaning would using slashes in page titles have, such as "/Home"? All pages would appear on an equal level in the reports, so no content drilldown. Non-unique page views seem to be counted as some kind of indicator of success: how often does the visitor revisit the page? When the user rotates the screen, however, an Activity usually resumes again, thus making it a new page view. This happens a lot. Maybe a well-thought-through placement of the call might solve this, or placing several; I'm not sure. How to deal with Page Views?

    2. Events

    I'd say there are two sorts: a user event, or something that happened, usually as an indirect consequence of the above. The latter particularly is giving me headaches. First of all, many events aren't written in code any more, but pieced logically together by means of Intents. This means that there is no place to put the analytics call. You'd either have to give up this advantage and start doing it the old-fashioned way in favor of good analytics, or just miss some events. Secondly, as a developer you're not so much interested in when a user clicks a button, but whether the action that should have been performed really was performed, and what the result was. There seems to be no clear way to get resulting data into Google Analytics (what's up with the integers? I want to put in Strings!).

    The same that applies to the flat pages hierarchy also goes for the event categories. You could do "vertical" categories (topically, that is), but some code is shared "horizontally" and the tracking will be equally shared. Just as with the Intents mechanism, inheritance makes it hard for you to put the tracking in the right places at all times. And I can't really imagine "horizontal" categories. Unless you start making really small categories, such as all the items from the same menu in one category, I have a hard time grasping the concept. Finally, how do you deal with cancelling?
    Usually you have both an explicit cancel mechanism by way of a button, as well as the implicit cancel when the "back" button is pressed to leave the activity and there were no changes. The latter also applies to "saves", when the back button is pressed and there ARE changes. How are you consequently going to catch all these if not by doing all the "back"-button work yourself? How to deal with events?

    3. Goals

    For goal types I have a choice of: URL Destination, Time on Site, and Pages/Visit. Most apps don't have a funnel that leads the user to some "registration done" or "order placed" page. Apps have either already been bought (in which case you want to stimulate the user to love your app, so that he might bring on new buyers) or are paid for by in-app ads. So URL Destination is not a very important goal. Time on Site also seems troublesome. First, I have some doubts about how this would be measured. Second, I don't necessarily want my user to spend a lot of time in my already paid app, just to be active and content. Equivalently, why not measure how frequently a user uses your app? Regarding Pages/Visit, I already mentioned how screen orientation changes blow up the page view numbers. In an app I'd be most interested in events/visit to measure the user's involvement/activity. If he's intensively using the app then he must be loving it, right? Furthermore, I also have some small funnels (that do not lead to conversion though) that I want to see streamlined. In my mind those funnels would end in events rather than page views, but that seems not to be possible. I could also measure clickthroughs on in-app ads, but then I'd need to track those as Page Views rather than Events, in view of "URL Destination". What are smart goals for apps and how can you fit them on top of Analytics?

    4. Optimisation

    Is there a smart way to manually do what "Website Optimiser" does for websites? Most importantly, how would I track different landing page designs?

    5. Traffic Sources

    Referrals deal with installation-time referrals, if you're smart enough to get them included. But perhaps I'd also want to get some data on which third-party app sends users to my app to perform some actions (this app interoperability is possible via Intents). Many of the terminologies related to "Traffic Sources" seem totally meaningless, and there is no possibility of connecting in AdSense. What are smart uses of this data?

    6. Visitors

    Of the "Browser capabilities", "Network Properties" and "Mobile" tabs, many things are pointless as they have no influence on, or relation with, my mostly offline app that won't use Flash anyway. Only if you drill down far enough can you get to OS versions, which do matter a lot. I even forgot where you could check which exact Android devices visited. What are smart uses of this data? How can you make the relevant info more prominent?

    7. Other

    No in-page analytics. I have to register my app as a web URL (What!?)?

    Read the article

  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback are very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

    - Perform an inventory
    - View the Windows Azure VM Readiness results and report
    - Collect performance data to determine VM sizing
    - View the Windows Azure Capacity results and report

    Perform an inventory:

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:

    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:

    - Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    - Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
    - System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    - Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
    - Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
    - Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.
On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to a domain account, but needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment. 5. On the Credentials Order page, select the order in which want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next. 6. On the Enter Computers Manually page, click Create to pull up at dialog to enter one or more computer names. 7. On the Summary page confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below: Windows Azure Readiness results and report After the inventory progress has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machine with SQL Server that were analyzed, the number of machines that are ready to move without changes and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open up WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the Operating System and SQL Server assessment and provides a recommendation on how to resolve, if there a component is not supported. Collect Performance Data Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for. Windows Azure Capacity results and report After the performance metrics are collected, the Azure VM Capacity title will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.   MAP Toolkit 8.0 Beta is available for download here Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys. Useful References: Windows Azure Homepage How to guides for Windows Azure Virtual Machines Provisioning a SQL Server Virtual Machine on Windows Azure Windows Azure Pricing     Peter Saddow Senior Program Manager – MAP Toolkit Team
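    Since every inventory and assessment step above depends on WMI being reachable on the target machines, it can save time to spot-check connectivity before launching a large scan. Below is a minimal C# sketch (not part of the MAP Toolkit; the machine name is a placeholder) that connects to a remote server's root\cimv2 namespace via System.Management and reads one WMI class:

        using System;
        using System.Management;   // add a reference to System.Management.dll

        class WmiConnectivityCheck
        {
            static void Main(string[] args)
            {
                // Placeholder machine name -- substitute one of your target servers.
                string machine = args.Length > 0 ? args[0] : "SQLSERVER01";

                var options = new ConnectionOptions
                {
                    // Uses the account running the check by default;
                    // set Username/Password here to supply an explicit administrator account.
                    Impersonation = ImpersonationLevel.Impersonate,
                    EnablePrivileges = true
                };

                var scope = new ManagementScope("\\\\" + machine + "\\root\\cimv2", options);

                try
                {
                    // Throws if WMI/DCOM is blocked by the firewall or access is denied.
                    scope.Connect();

                    var query = new ObjectQuery("SELECT Caption, Version FROM Win32_OperatingSystem");
                    using (var searcher = new ManagementObjectSearcher(scope, query))
                    {
                        foreach (ManagementObject os in searcher.Get())
                        {
                            Console.WriteLine("{0}: {1} ({2})", machine, os["Caption"], os["Version"]);
                        }
                    }
                }
                catch (Exception ex)
                {
                    Console.WriteLine("WMI connection to {0} failed: {1}", machine, ex.Message);
                }
            }
        }

    If this fails with an RPC or access-denied error, fix the firewall and WMI configuration described in the NOTE above before re-running the MAP inventory.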

    Read the article

  • The True Cost of a Solution

    - by D'Arcy Lussier
    I had a Twitter chat recently with someone suggesting Oracle and SQL Server were losing out to OSS (Open Source Software) in the enterprise due to their issues with scaling or being too generic (one size fits all). I challenged that a bit, as my experience with enterprise-sized clients has been different – averse to OSS but receptive to an established vendor. The response I got was: Found it easier to influence change by showing how X can't solve our problems or X is extremely costly to scale. Money talks. I think this is definitely the right approach for anyone pitching an alternate or alien technology as part of a solution: identify the issue, identify the solution, then present pros and cons including a cost/benefit analysis. What can happen though is we get tunnel vision and don't present a full view of the costs associated with a solution.

An "Acura"te Example (I'm so clever…)
This is my dream vehicle, a Crystal Black Pearl coloured Acura MDX with the SH-AWD package! We're a family of 4 (5 if my daughters ever get their wish of adding a dog), and I've always wanted a luxury type of vehicle, so this is a perfect replacement in a few years when our Rav 4 has hit the 8 – 10 year mark.

MSRP – $62,890

But as we all know, that's not *really* the cost of the vehicle. There are taxes and fees added on, there's the extended warranty if I choose to purchase it, there's the finance rate that needs to be factored in…

MSRP –            $62,890
Taxes –            $7,546
Warranty –         $2,500
SubTotal –        $72,936
Finance Charge –   $1,094.04
Grand Total –     $74,030

Well! Glad we did that exercise – we discovered an extra $11k added on to the MSRP! Well now we have our true price…or do we?

Lifetime of the Vehicle
I'm expecting to have this vehicle for 7 – 10 years. While the hard cost of the vehicle is known and dealt with, the costs to run and maintain the vehicle are on top of this. I did some research, and here's what I've found:

Fuel and Mileage
Gas prices are high as it is for regular fuel, but getting into an MDX will require that I *only* purchase premium fuel, which comes at a premium price. I need to expect my bill at the pump to be higher. Comparing the MDX to my 2007 Rav4 also shows I'll be gassing up more often. The Rav4 has a city MPG of 21, while the MDX plummets to 16! The MDX does have a bigger fuel tank though, so all in all the number of times I hit the pumps might even out. Still, I estimate I'll be spending approximately $8000 – $10000 more on gas over a 10 year period than with my current Rav4.

Service Options Limited
Although I have options with my Toyota here in Winnipeg (we have 4 Toyota dealerships), I do go to my original dealer for any service work. Still, I like the fact that I have options. However, there's only one Acura dealership in all of Winnipeg! So if, for whatever reason, I'm not satisfied with the level of service, I'm stuck.

Non-Warranty Service Work
Also let's not forget that there's a bulk of work required every year that is *not* covered under warranty – oil changes, tire rotations, brake pads, etc. I expect I'll need to get new tires at the 5-year mark as well, which can easily be $1200 – $1500 (I just paid $1000 for new tires for the Rav4 and we're at the 5-year mark). Now these aren't going to be *new* costs that I'm not used to from our existing vehicles, but they should still be factored in. I'd budget $500/year, or $5000 over the 10 years I'll own the vehicle.

Final Assessment
So let's re-assess the true cost of my dream MDX:

MSRP              $62,890
Taxes              $7,546
Warranty           $2,500
Finance Charge     $1,094
Gas               $10,000
Service Work       $5,000
Grand Total       $89,030

So now I have a better idea of the 10 year cost overall, and I've identified some concerns with local service availability. And there's now much more to consider over the original $62,890 price tag.

Tying This Back to Technology Solutions
The process that we just went through is no different than what organizations do when considering implementing a new system, technology, or technology-based solution within their environments. It's easy to tout the short term cost savings of a particular product/platform/technology in a vacuum. But it's when you consider the wider impact that the true cost comes into play.

Let's create a scenario: A company is not happy with its current data reporting suite. An employee suggests moving to an open source solution. The selling points are:
- Because it's open source, it's free
- The organization would have access to the source code so they could alter it however they wished
- It provides features not available with the current reporting suite

At first this sounds great to the management and executive, but then they start asking some questions and uncover more information:
- The OSS product is built on a technology not used anywhere within the organization
- There are no vendors offering product support for the OSS product
- The OSS product requires a specific server platform to operate on, one that's not standard in the organization

All of a sudden, the true cost of implementing this solution is starting to become clearer. The company might save money on licensing costs, but their training costs would increase significantly – developers would need to learn how to develop in the technology the OSS solution was built on, IT staff must learn how to set up and maintain a new server platform within their existing infrastructure, and if a problem was found there would be no vendor to contact for support. The true cost of implementing a "free" OSS solution is actually spinning up a project to implement it within the organization – no small cost. And that's just the short-term cost. Now the organization must ensure they maintain trained staff who can make changes to the OSS reporting solution and IT staff who will stay knowledgeable in the new server platform. If those skills are very niche, then higher labour costs could be incurred if those people are hard to find or if trained employees use that knowledge as leverage for higher pay. Maybe a vendor exists that will contract out support, but then there are those costs to consider as well. And let's not forget end-user training – in our example, anyone that runs reports will need to be trained on how to use the new system.

Here's the Point
We still tend to look at software in an "off the shelf" kind of way. It's very easy to say "oh, this product is better than vendor X's product – and it's free because it's OSS!" but the reality is that implementing any new technology within an organization has a cost regardless of the retail price of the product. Training, integration, support – these are real costs that impact an organization and span multiple departments. Whether you're pitching an improved business process, a new system, or a new technology, you need to consider the bigger-picture costs of implementation.
What you define as success (in our example, having better reporting functionality) might not be what others define as success if implementing your solution causes them issues. A true enterprise solution needs to consider the entire enterprise.
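    The same "sticker price versus lifetime price" arithmetic applies directly to a technology decision: one-time costs plus recurring costs multiplied by the expected lifetime. Here is a minimal C# sketch of that calculation, reusing the vehicle figures from the post (the cost categories and code are illustrative, not part of the original article):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class TotalCostOfOwnership
        {
            static void Main()
            {
                // One-time costs from the example above.
                var oneTimeCosts = new Dictionary<string, decimal>
                {
                    { "MSRP", 62890m },
                    { "Taxes", 7546m },
                    { "Warranty", 2500m },
                    { "Finance Charge", 1094m }
                };

                // Recurring costs, expressed per year of ownership.
                var annualCosts = new Dictionary<string, decimal>
                {
                    { "Extra fuel", 1000m },              // roughly $10,000 over 10 years
                    { "Non-warranty service", 500m }
                };

                int yearsOfOwnership = 10;

                decimal total = oneTimeCosts.Values.Sum()
                              + annualCosts.Values.Sum() * yearsOfOwnership;

                Console.WriteLine("True {0}-year cost: {1:C0}", yearsOfOwnership, total);   // $89,030
            }
        }

    Swap the categories for licensing, implementation, training, support and staffing, and the same few lines give you the "true cost" conversation for a software solution.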

    Read the article

  • Metrics - A little knowledge can be a dangerous thing (or 'Why you're not clever enough to interpret metrics data')

    - by Jason Crease
    At RedGate Software, I work on a .NET obfuscator called SmartAssembly. Various features of it use a database to store various things (exception reports, name-mappings, etc.). The user is given the option of using either a SQL Server database (which requires them to have Microsoft SQL Server), or a Microsoft Access MDB file (which requires nothing). MDB is the default option, but power-users soon switch to using a SQL Server database because it offers better performance and data-sharing. In the fashionable spirit of optimization and metrics, an obvious product-management question is 'Which is the most popular? SQL Server or MDB?' We've collected data about this, using our 'Feature-Usage-Reporting' technology (available as part of SmartAssembly) and more recently our 'Application Metrics' technology:

Parameter     Number of users   % of total users   Number of sessions   Number of usages
SQL Server    28                19.0               8115                 8115
MDB           114               77.6               1449                 1449

(As a disclaimer, please note that SmartAssembly has far more than 132 users. This data is just a selection from one build.) So, it would appear that SQL Server is used by fewer users, but more often. Great. But here's why these numbers are useless to me:

Only the original developers understand the data
What does a single 'usage' of 'MDB' mean? Does this happen once per run? Once per option change? On clicking the 'Obfuscate Now' button? When running the command-line version or just from the UI version? Each question could skew the data 10-fold either way, and the answers are only known by the developer that instrumented the application in the first place. In other words, only the original developer can interpret the data - product-managers cannot interpret the data unaided.

Most of the data is from uninterested users
About half of the people who download and run a free trial from the internet quit it almost immediately. Only a small fraction use it sufficiently to make informed choices. Since the MDB option is the default one, we don't know how many of those 114 were people CHOOSING to use the MDB, or how many were JUST HAPPENING to use the MDB default for their 20-second trial. This is a problem we see across all our metrics: are people using X because it's the default, or are they using X because they want to use X? We need to segment the data further - asking what percentage of each percentage meets our criteria for an 'established user' or 'informed user'. You end up spending hours writing sophisticated and dubious SQL queries to segment the data further. Not fun.

You can't find out why they used this feature
Metrics can answer the when and what, but not the why. Why did people use feature X? If you're anything like me, you often click on random buttons in unfamiliar applications just to explore the feature-set. If we listened uncritically to metrics at RedGate, we would eliminate the most-important and more-complex features which people actually buy the software for, leaving just big buttons on the main page and the About-Box.

"Ah, that's interesting!" rather than "Ah, that's actionable!"
People do love data. Did you know you eat 1201 chickens in a lifetime? But just 4 cows? Interesting, but useless. Often metrics give you a nice number: '5.8% of users have 3 or more monitors'. But unless the statistic is both SURPRISING and ACTIONABLE, it's useless. Most metrics are collected, reviewed with lots of cooing, and then forgotten. Unless a piece of data could change things, it's useless to collect it.

People get obsessed with significance levels
The first thing that lots of people do with this data is a t-test to get a significance level ("Hey! We know with 99.64% confidence that people prefer SQL Server to MDBs!"). Believe me: other causes of error/misinterpretation in your data are FAR more significant than your t-test could ever comprehend.

Confirmation bias prevents objectivity
If the data appears to match our instinct, we feel satisfied and move on. If it doesn't, we suspect the data and dig deeper, plummeting down a rabbit-hole of segmentation and filtering until we give up and move on. Data is only useful if it can change our preconceptions. Do you trust this dodgy data more than your own understanding, knowledge and intelligence? I don't.

There are always multiple plausible ways to interpret/action any data
Let's say we segment the above data, and get this:

Post-trial users (i.e. those using a paid version after the 14-day free trial is over):

Parameter     Number of users   % of total users   Number of sessions   Number of usages
SQL Server    13                9.0                1115                 1115
MDB           5                 4.2                449                  449

Trial users:

Parameter     Number of users   % of total users   Number of sessions   Number of usages
SQL Server    15                10.0               7000                 7000
MDB           114               77.6               1000                 1000

How do you interpret this data? It's one of:
- Mostly SQL Server users buy our software. People who can't afford SQL Server tend to be unable to afford or unwilling to buy our software. Therefore, ditch MDB support.
- Our MDB support is so poor and buggy that our massive MDB user-base doesn't buy it. Therefore, spend loads of money improving it, and think about ditching SQL Server support.
- People 'graduate' naturally from MDB to SQL Server as they use the software more. Things are fine the way they are.
- We're marketing the tool wrong. The large number of MDB users represents uninformed downloaders. Tell marketing to aggressively target SQL Server users.
To choose an interpretation you need to segment again. And again. And again, and again.

Opting out is correlated with feature usage
Metrics tend to be opt-in. This skews the data even further. Between 5% and 30% of people choose to opt in to metrics (often called a 'customer improvement program' or something like that). Casual trial users who are uninterested in your product or company are less likely to opt in. This group is probably also likely to be MDB users. How much does this skew your data by? Who knows?

It's not all doom and gloom. There are some things metrics can answer well:
- Environment facts. How many people have 3 monitors? Have Windows 7? Have .NET 4 installed? Have Japanese Windows?
- Minor optimizations. Is the text-box big enough for average user input?
- Performance data. How long does our app take to start? How many databases does the average user have on their server?
As you can see, questions about who-the-user-is rather than what-the-user-does are easier to answer and action.

Conclusion
Use SmartAssembly. If not for the metrics (called 'Feature-Usage-Reporting'), then at least for the obfuscation/error-reporting. Data raises more questions than it answers. Questions about environment are the easiest to answer.
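    To make the "segment again and again" problem concrete, here is a minimal C# sketch over invented usage records (the record shape and the paid/trial split are assumptions, not SmartAssembly's actual schema) showing how the same raw events have to be regrouped before any of the four interpretations above can even be tested:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class UsageRecord
        {
            public string UserId;
            public string StorageOption;    // "SQL Server" or "MDB"
            public DateTime FirstSeen;      // could drive the next segmentation (20-second trials, etc.)
            public bool HasPaidLicence;
        }

        class Segmentation
        {
            static void Main()
            {
                // Invented data -- in reality this comes from the metrics back-end.
                var records = new List<UsageRecord>
                {
                    new UsageRecord { UserId = "a", StorageOption = "MDB",        FirstSeen = DateTime.Today.AddDays(-2),  HasPaidLicence = false },
                    new UsageRecord { UserId = "b", StorageOption = "SQL Server", FirstSeen = DateTime.Today.AddDays(-60), HasPaidLicence = true  },
                    new UsageRecord { UserId = "c", StorageOption = "MDB",        FirstSeen = DateTime.Today.AddDays(-30), HasPaidLicence = true  },
                };

                // First segmentation: paying (post-trial) users vs. trial users.
                var segments = records
                    .GroupBy(r => r.HasPaidLicence ? "Post-trial" : "Trial")
                    .Select(seg => new
                    {
                        Segment = seg.Key,
                        ByStorage = seg.GroupBy(r => r.StorageOption)
                                       .Select(g => new { Option = g.Key, Users = g.Count() })
                    });

                foreach (var seg in segments)
                {
                    Console.WriteLine(seg.Segment);
                    foreach (var row in seg.ByStorage)
                        Console.WriteLine("  {0}: {1} user(s)", row.Option, row.Users);
                }
                // ...and each answer just raises the next segmentation question.
            }
        }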

    Read the article

  • Let your Signature Experience drive IT-decision making

    - by Tania Le Voi
    Today's CIO job description: "Align IT infrastructure and solutions with business goals and objectives; AND while doing so reduce costs; BUT ALSO, be innovative, ensure the architectures are adaptable and agile as we need to act today on the changes that we may request tomorrow."

Sound like an unachievable request? The fact is, reality dictates that CIOs are put under this type of pressure to deliver more with less. In a past career phase I spent a few years as an IT Relationship Manager for a large insurance company. This is a role that we see all too infrequently in many of our customers, and it's a shame. The purpose of this role was to build a bridge, a relationship between IT and the business. Key to achieving that goal was to ensure the same language was being spoken, and more importantly that objectives were commonly understood - hence services and projects were delivered on time, on budget and actually solved the business problems.

In reality IT and the business are already married, but the relationship is most often defined as 'supplier' of IT rather than 'trusted partner'. To deliver business value they need to understand how to work together effectively to attain this next level of partnership. The business cannot compete if they do not get a new product to market ahead of the competition, or, for example, act in a timely manner to address a new industry problem such as a legislative change. An even better example is when the application or service fails and the business takes a hit from bad publicity, becomes a trending topic on social media and loses direct revenue from online channels. For this reason alone business and IT need the alignment of their priorities and deliverables now more than ever! Take a look at Forrester's recent study that found 'many IT respondents considering themselves to be trusted partners of the business but their efforts are impaired by the inadequacy of tools and organizations'.

IT Meet the Business; Business Meet IT
So what is going on? We talk about aligning the business with IT but the reality is it's difficult to do. Like any relationship, each side has different goals and needs, and language can be a barrier; business vs. technology jargon! What if we could translate the needs of both sides into actionable information, backed by data both sides understand, presented in a meaningful way? Well now we can, with the Business-Driven Application Management capabilities in Oracle Enterprise Manager 12cR2!

Enterprise Manager's Business-Driven Application Management capabilities provide the information that IT needs to understand the impact of its decisions on business criteria. No longer does IT need to be focused solely on speeds and feeds, performance and throughput – now IT can understand IT's impact on business KPIs like inventory turns, order-to-cash cycle, pipeline-to-forecast, and similar. Similarly, now the line of business can understand which IT services are most critical for the KPIs they care about. There are plenty of resources on Oracle Technology Network that describe the functionality of these products, so I won't rehash them here. What I want to talk about is what you do with these products. What's next after we meet? Where do you start?

Step 1: Identify the Signature Experience.
This is THE business process (or set of processes) that is core to the business, the one that drives the economic engine, the process that a customer recognises the company brand for: reputation, the customer experience, the process that a CEO would state as his number one priority. The crème de la crème of your business! Once you have nailed this, the rest gets easy, as Enterprise Manager 12c makes it easy.

Step 2: Map the Signature Experience to underlying IT.
Taking the signature experience, map out the touch points of the components that play a part in ensuring this business transaction is successful end to end - think of it like mapping out a critical path: the applications, middleware, databases and hardware. Use the wealth of Enterprise Manager features such as Systems, Services, Business Application Targets and Business Transaction Management (BTM) to assist you. Adding Real User Experience Insight (RUEI) into the mix will make the end-to-end customer satisfaction story transparent. Work with the business and define meaningful key performance indicators (KPIs) and thresholds that you can report on and act upon.

Step 3: Observe the data over time.
You now have meaningful insight into every step enabling your signature experience and you understand the implication of that experience on your underlying IT. Watch it for a few months, see what happens, and reconvene with your business stakeholders to set clear and measurable targets which can re-define service levels.

Step 4: Change the information about which you and the business communicate.
It's amazing what happens when you and the business speak the same language. You'll be able to make more informed business and IT decisions. From here IT can identify where and how budget is spent, whether on the level of support, performance, capacity, HA, DR, certification, etc. IT SLAs no longer need to be focused on metrics such as % availability but can be structured around business process requirements.

The power of this way of thinking doesn't end here. IT staff get to see and understand how their own role contributes to the business, making them accountable for the business service. Take a step further and appraise your staff on the business competencies that are linked to the service availability. For the business, the language barrier is removed by producing targeted reports on the signature experience core to the business and therefore key to the CEO. Chargeback or showback becomes easier to justify as the 'cost of a day per outage' can be more easily calculated; the business will be able to translate the cost to the business into the cost/value of the underlying IT that supports it. Used this way, Oracle Enterprise Manager 12c is a key enabler of a harmonious relationship between the end customer, the business and IT to deliver ultimate service and satisfaction. Just engage with the business up front, make the signature experience visible and let Enterprise Manager 12c do the rest. In the next blog entry we will cover some of the Enterprise Manager features mentioned, to enable you to implement this new way of working.
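    As a toy illustration of the 'cost of a day per outage' figure mentioned in Step 4, the C# sketch below uses entirely hypothetical revenue and downtime numbers; Enterprise Manager is not involved here, this is just the back-of-the-envelope arithmetic a chargeback conversation could start from:

        using System;

        class OutageCostSketch
        {
            static void Main()
            {
                // Hypothetical inputs for one signature experience (e.g. online ordering).
                decimal annualRevenueFromExperience = 36500000m;   // revenue flowing through the process per year
                double outageHoursLastQuarter = 6.5;               // measured downtime of the supporting IT services

                // Simple average: revenue at risk per hour of outage.
                decimal revenuePerHour = annualRevenueFromExperience / (365m * 24m);
                decimal estimatedLoss = revenuePerHour * (decimal)outageHoursLastQuarter;

                Console.WriteLine("Revenue at risk per hour: {0:C0}", revenuePerHour);
                Console.WriteLine("Estimated cost of last quarter's outages: {0:C0}", estimatedLoss);
            }
        }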

    Read the article

  • WCF client endpoint identity - configuration question

    - by Roel
    Hi all, I'm having a strange situation here. I got it working, but I don't understand why. Situation is as follows: There is a WCF service which my application (a website) has to call. The WCF service exposes a netTcpBinding and requires Transport Security (Windows). Client and server are in the same domain, but on different servers. So generating a client results in the following config (mostly defaults) <system.serviceModel> <bindings> <netTcpBinding> <binding name="MyTcpEndpoint" ...> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="Transport"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign"/> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://localhost:xxxxx/xxxx/xxx/1.0" binding="netTcpBinding" bindingConfiguration="MyTcpEndpoint" contract="Service.IMyService" name="TcpEndpoint"/> </client> </system.serviceModel> When I run the website and make the call to the service, I get the following error: System.ServiceModel.Security.SecurityNegotiationException: Either the target name is incorrect or the server has rejected the client credentials. ---> System.Security.Authentication.InvalidCredentialException: Either the target name is incorrect or the server has rejected the client credentials. ---> System.ComponentModel.Win32Exception: The logon attempt failed --- End of inner exception stack trace --- at System.Net.Security.NegoState.EndProcessAuthentication(IAsyncResult result) at System.Net.Security.NegotiateStream.EndAuthenticateAsClient(IAsyncResult asyncResult) at System.ServiceModel.Channels.WindowsStreamSecurityUpgradeProvider.WindowsStreamSecurityUpgradeInitiator.InitiateUpgradeAsyncResult.OnCompleteAuthenticateAsClient(IAsyncResult result) at System.ServiceModel.Channels.StreamSecurityUpgradeInitiatorAsyncResult.CompleteAuthenticateAsClient(IAsyncResult result) --- End of inner exception stack trace --- Server stack trace: at System.ServiceModel.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result) at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result) .... Now, if I just alter the configuration of the client like so: <endpoint address="net.tcp://localhost:xxxxx/xxxx/xxx/1.0" binding="netTcpBinding" bindingConfiguration="MyTcpEndpoint" contract="Service.IMyService" name="TcpEndpoint"> <identity> <dns /> </identity> </endpoint> everything works and my server happily reports that it got called by the service account which hosts the AppPool for my website. All good. My question now is: why does this work? What does this do? I got to this solution by mere trial-and-error. To me it seems that all the <dns /> tag does is tell the client to use the default DNS for authentication, but doesn't it do that anyway? Thanks for providing me with some insight.
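    For comparison, the same identity can be supplied in code rather than configuration. The sketch below (placeholder address; Service.IMyService is the contract from the question) uses EndpointIdentity.CreateDnsIdentity to reproduce the effect of the empty <dns /> element. With Windows credentials and the Negotiate provider, the endpoint identity tells the client which service principal to authenticate against; supplying an explicit (empty) DNS identity typically makes the negotiation fall back to NTLM instead of attempting Kerberos against an SPN that doesn't match the service account - which is one plausible explanation for why adding <dns /> makes the call succeed.

        using System;
        using System.ServiceModel;

        class WcfClientIdentitySketch
        {
            static void Main()
            {
                var binding = new NetTcpBinding(SecurityMode.Transport);
                binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.Windows;

                // Placeholder address -- use the real net.tcp address from your config.
                var address = new EndpointAddress(
                    new Uri("net.tcp://remoteserver:9000/MyService/1.0"),
                    EndpointIdentity.CreateDnsIdentity(""));   // code equivalent of <identity><dns /></identity>

                var factory = new ChannelFactory<Service.IMyService>(binding, address);
                Service.IMyService client = factory.CreateChannel();

                // ... call the service here ...

                ((IClientChannel)client).Close();
                factory.Close();
            }
        }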

    Read the article

  • Troubleshooting Windows Authentication problems (no challenge) in IIS 7.5?

    - by Aaronaught
    I know that there are thousands of reports of people having trouble getting Integrated Windows Authentication to work with IIS, but they all seem to lead to web pages that don't apply or solutions that I've already tried. I've deployed dozens of sites like this before, so either there's something bizarre going on with the server/configuration, or I've been looking at this too long and not seeing the obvious. Simply put, everything works perfectly on my local machine, but falls apart on the production server, which as far as I can tell has the exact same configuration. On the local machine: The machine is running Windows 7 Ultimate, Service Pack 1, IIS 7.5. The site has been tested successfully, using both IIS and the VS Web Development Server. The IIS site config has all authentication methods disabled except Windows Authentication. The local machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. All browsers tested (IE, Firefox, Chrome) show the challenge prompt and allow me to log in to the localhost domain with my (local) Windows account. All browsers tested also work using an opaque local IP address - so the browsers themselves don't seem to care whether the site appears "local" or "remote". I've added a display line to the web page which shows the currently-logged-in user and it shows exactly what I would expect (whichever local user I logged in with). On the remote machine: The server is running Windows Server 2008 R2, IIS 7.5. Loading the web page results in an immediate 401.2 error: You are not authorized to view this page due to invalid authentication headers. No challenge prompt ever appears. The IIS site config has all authentication methods disabled except Windows Authentication. The remote machine is not on any domain. The Providers set up are Negotiate and NTLM (not Negotiate:Kerberos). Extended Protection is Off. On the remote machine (remote desktop session), the same error appears in Internet Explorer regardless of whether the domain is localhost or the external IP address. If I try to view the remote web site from my local machine, the error is still 401, but a slightly different 401. No subcode, with the text: Access is denied due to invalid credentials. The Windows Authentication IIS role feature is installed. The WindowsAuthentication Module is added (at the Server level). The exact same error occurs if I turn off Windows Authentication and enable Basic Authentication. The site does load if I turn off Windows Authentication and enable Anonymous (obviously). I've already followed all of the troubleshooting steps on Microsoft Support: Troubleshooting HTTP 401 errors in IIS I've already tried the workaround shown on another Microsoft support page (supposedly to force NTLM as the only method). Last but not least, I tried turning on FREB for 401.2 errors and the results don't seem to tell me anything useful, all I see is the following warning: MODULE_SET_RESPONSE_ERROR_STATUS ModuleName IIS Web Core Notification 2 HttpStatus 401 HttpReason Unauthorized HttpSubStatus 2 ErrorCode 2147942405 ConfigExceptionInfo Notification AUTHENTICATE_REQUEST ErrorCode Access is denied. (0x80070005) ...this seems to just be telling me what I already know (that it's simply rejecting the request instead of negotiating the credentials). 
The trace does indicate that the WindowsAuthentication module is correctly loaded because there is a NOTIFY_MODULE_START line with ModuleName = WindowsAuthentication (and various other ASP.NET follow-up events - [un]fortunately, no interesting errors or warnings here). Can anyone tell me what I might be missing here?

Quick Update: I'm a little uncomfortable sending a whole Wireshark dump as it would reveal IPs, URLs and other stuff, but I did a side-by-side comparison of the HTTP responses from localhost and the remote server in Fiddler, and it seems fairly self-evident what the problem is:

Localhost:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Sat, 17 Dec 2011 23:42:34 GMT
Content-Length: 6399
Proxy-Support: Session-Based-Authentication

Remote:

HTTP/1.1 401 Unauthorized
Content-Type: text/html
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Sat, 17 Dec 2011 23:43:13 GMT
Content-Length: 1293

Aside from a few seemingly-inconsequential differences like cache-control, the main difference is that the remote server is not sending the WWW-Authenticate headers back to the client. So, I guess that narrows the question down to: Why is IIS not sending WWW-Authenticate headers when Windows Authentication appears to be installed, loaded, and exclusively enabled?
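    To reproduce the Fiddler comparison from a console (no proxy needed), a small C# probe like the one below - the URL is a placeholder - sends an anonymous request and prints whatever WWW-Authenticate challenges come back with the 401. Run against both servers, it should show the same difference: the remote box answering 401 with no challenge headers at all.

        using System;
        using System.Net;

        class AuthHeaderProbe
        {
            static void Main()
            {
                // Placeholder URL -- point this at the site being diagnosed.
                var request = (HttpWebRequest)WebRequest.Create("http://remote-server/");
                request.Credentials = null;   // no credentials, so the server should answer with a challenge

                try
                {
                    using (request.GetResponse()) { }
                    Console.WriteLine("Unexpected success - anonymous access is enabled?");
                }
                catch (WebException ex)
                {
                    var response = ex.Response as HttpWebResponse;
                    if (response == null) throw;

                    Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusDescription);

                    string[] challenges = response.Headers.GetValues("WWW-Authenticate");
                    if (challenges == null || challenges.Length == 0)
                    {
                        Console.WriteLine("No WWW-Authenticate headers returned.");
                    }
                    else
                    {
                        foreach (string challenge in challenges)
                            Console.WriteLine("WWW-Authenticate: " + challenge);
                    }
                }
            }
        }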

    Read the article

  • .NET SerialPort DataReceived event not firing

    - by Klay
    I have a WPF test app for evaluating event-based serial port communication (vs. polling the serial port). The problem is that the DataReceived event doesn't seem to be firing at all. I have a very basic WPF form with a TextBox for user input, a TextBlock for output, and a button to write the input to the serial port. Here's the code: public partial class Window1 : Window { SerialPort port; public Window1() { InitializeComponent(); port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); } void port_DataReceived(object sender, SerialDataReceivedEventArgs e) { Debug.Print("receiving!"); string data = port.ReadExisting(); Debug.Print(data); outputText.Text = data; } private void Button_Click(object sender, RoutedEventArgs e) { Debug.Print("sending: " + inputText.Text); port.WriteLine(inputText.Text); } } Now, here are the complicating factors: The laptop I'm working on has no serial ports, so I'm using a piece of software called Virtual Serial Port Emulator to setup a COM2. VSPE has worked admirably in the past, and it's not clear why it would only malfunction with .NET's SerialPort class, but I mention it just in case. When I hit the button on my form to send the data, my Hyperterminal window (connected on COM2) shows that the data is getting through. Yes, I disconnect Hyperterminal when I want to test my form's ability to read the port. I've tried opening the port before wiring up the event. No change. I've read through another post here where someone else is having a similar problem. None of that info has helped me in this case. EDIT: Here's the console version (modified from http://mark.michaelis.net/Blog/TheBasicsOfSystemIOPortsSerialPort.aspx): class Program { static SerialPort port; static void Main(string[] args) { port = new SerialPort("COM2", 9600, Parity.None, 8, StopBits.One); port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived); port.Open(); string text; do { text = Console.ReadLine(); port.Write(text + "\r\n"); } while (text.ToLower() != "q"); } public static void port_DataReceived(object sender, SerialDataReceivedEventArgs args) { string text = port.ReadExisting(); Console.WriteLine("received: " + text); } } This should eliminate any concern that it's a Threading issue (I think). This doesn't work either. Again, Hyperterminal reports the data sent through the port, but the console app doesn't seem to fire the DataReceived event. EDIT #2: I realized that I had two separate apps that should both send and receive from the serial port, so I decided to try running them simultaneously... If I type into the console app, the WPF app DataReceived event fires, with the expected threading error (which I know how to deal with). If I type into the WPF app, the console app DataReceived event fires, and it echoes the data. I'm guessing the issue is somewhere in my use of the VSPE software, which is set up to treat one serial port as both input and output. And through some weirdness of the SerialPort class, one instance of a serial port can't be both the sender and receiver. Anyway, I think it's solved.
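    For anyone who hits the follow-on problem mentioned at the end ("the expected threading error"): DataReceived is raised on a background thread, so any WPF control access has to be marshalled back to the UI thread. A minimal sketch of the handler, assuming the Window1 class and outputText control from the question:

        // Inside the Window1 class from the question.
        void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            // Reading the port is fine on the background thread that raised the event.
            string data = port.ReadExisting();
            Debug.Print(data);

            // Updating the TextBlock must happen on the UI thread.
            Dispatcher.BeginInvoke(new Action(() => outputText.Text = data));
        }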

    Read the article

  • Is there a C pre-processor which eliminates #ifdef blocks based on values defined/undefined?

    - by Jonathan Leffler
    Original Question What I'd like is not a standard C pre-processor, but a variation on it which would accept from somewhere - probably the command line via -DNAME1 and -UNAME2 options - a specification of which macros are defined, and would then eliminate dead code. It may be easier to understand what I'm after with some examples: #ifdef NAME1 #define ALBUQUERQUE "ambidextrous" #else #define PHANTASMAGORIA "ghostly" #endif If the command were run with '-DNAME1', the output would be: #define ALBUQUERQUE "ambidextrous" If the command were run with '-UNAME1', the output would be: #define PHANTASMAGORIA "ghostly" If the command were run with neither option, the output would be the same as the input. This is a simple case - I'd be hoping that the code could handle more complex cases too. To illustrate with a real-world but still simple example: #ifdef USE_VOID #ifdef PLATFORM1 #define VOID void #else #undef VOID typedef void VOID; #endif /* PLATFORM1 */ typedef void * VOIDPTR; #else typedef mint VOID; typedef char * VOIDPTR; #endif /* USE_VOID */ I'd like to run the command with -DUSE_VOID -UPLATFORM1 and get the output: #undef VOID typedef void VOID; typedef void * VOIDPTR; Another example: #ifndef DOUBLEPAD #if (defined NT) || (defined OLDUNIX) #define DOUBLEPAD 8 #else #define DOUBLEPAD 0 #endif /* NT */ #endif /* !DOUBLEPAD */ Ideally, I'd like to run with -UOLDUNIX and get the output: #ifndef DOUBLEPAD #if (defined NT) #define DOUBLEPAD 8 #else #define DOUBLEPAD 0 #endif /* NT */ #endif /* !DOUBLEPAD */ This may be pushing my luck! Motivation: large, ancient code base with lots of conditional code. Many of the conditions no longer apply - the OLDUNIX platform, for example, is no longer made and no longer supported, so there is no need to have references to it in the code. Other conditions are always true. For example, features are added with conditional compilation so that a single version of the code can be used for both older versions of the software where the feature is not available and newer versions where it is available (more or less). Eventually, the old versions without the feature are no longer supported - everything uses the feature - so the condition on whether the feature is present or not should be removed, and the 'when feature is absent' code should be removed too. I'd like to have a tool to do the job automatically because it will be faster and more reliable than doing it manually (which is rather critical when the code base includes 21,500 source files). (A really clever version of the tool might read #include'd files to determine whether the control macros - those specified by -D or -U on the command line - are defined in those files. I'm not sure whether that's truly helpful except as a backup diagnostic. Whatever else it does, though, the pseudo-pre-processor must not expand macros or include files verbatim. The output must be source similar to, but usually simpler than, the input code.) Status Report (one year later) After a year of use, I am very happy with 'sunifdef' recommended by the selected answer. It hasn't made a mistake yet, and I don't expect it to. The only quibble I have with it is stylistic. Given an input such as: #if (defined(A) && defined(B)) || defined(C) || (defined(D) && defined(E)) and run with '-UC' (C is never defined), the output is: #if defined(A) && defined(B) || defined(D) && defined(E) This is technically correct because '&&' binds tighter than '||', but it is an open invitation to confusion. 
I would much prefer it to include parentheses around the sets of '&&' conditions, as in the original: #if (defined(A) && defined(B)) || (defined(D) && defined(E)) However, given the obscurity of some of the code I have to work with, for that to be the biggest nit-pick is a strong compliment; it is a valuable tool to me.

The New Kid on the Block
Having checked the URL for inclusion in the information above, I see that (as predicted) there is a new program called Coan that is the successor to 'sunifdef'. It is available on SourceForge and has been since January 2010. I'll be checking it out...further reports later this year, or maybe next year, or sometime, or never.

    Read the article

  • 403 error after adding javascript to masterpage for sharepoint.

    - by Jeremy
    I am attempting to add highslide-with-html.js from http://highslide.com/ to my masterpage. I am receiving a 403 forbidden error when I use the provided masterpage. I have placed it in C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS\1033. Test javascript files such as pirate.js which consists solely of alert("Arr!"); have loaded from the same directory. I have provided the code for the masterpage. When I do not reference the problem javascript file there is no 403 error. <%@ Master language="C#" %> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <%@ Import Namespace="Microsoft.SharePoint" %> <%@ Register Tagprefix="SPSWC" Namespace="Microsoft.SharePoint.Portal.WebControls" Assembly="Microsoft.SharePoint.Portal, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Register Tagprefix="PublishingWebControls" Namespace="Microsoft.SharePoint.Publishing.WebControls" Assembly="Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Register Tagprefix="PublishingNavigation" Namespace="Microsoft.SharePoint.Publishing.Navigation" Assembly="Microsoft.SharePoint.Publishing, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %> <%@ Register TagPrefix="wssuc" TagName="Welcome" src="~/_controltemplates/Welcome.ascx" %> <%@ Register TagPrefix="wssuc" TagName="DesignModeConsole" src="~/_controltemplates/DesignModeConsole.ascx" %> <%@ Register TagPrefix="PublishingVariations" TagName="VariationsLabelMenu" src="~/_controltemplates/VariationsLabelMenu.ascx" %> <%@ Register Tagprefix="PublishingConsole" TagName="Console" src="~/_controltemplates/PublishingConsole.ascx" %> <%@ Register TagPrefix="PublishingSiteAction" TagName="SiteActionMenu" src="~/_controltemplates/PublishingActionMenu.ascx" %> <html dir="<%$Resources:wss, multipages_direction_dir_value %>" runat="server" __expr-val-dir="ltr"> <head runat="server"> <meta name="GENERATOR" content="Microsoft SharePoint"> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <meta http-equiv="Expires" content="0"> <SharePoint:RobotsMetaTag runat="server" __designer:Preview="" __designer:Values="&lt;P N='InDesign' T='False' /&gt;&lt;P N='ID' T='ctl00' /&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/> <title id="onetidTitle"> <asp:ContentPlaceHolder id="PlaceHolderPageTitle" runat="server"/> </title> <Sharepoint:CssLink runat="server" __designer:Preview="&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/Style%20Library/en-US/Core%20Styles/Band.css&quot;/&gt; &lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/Style%20Library/en-US/Core%20Styles/controls.css&quot;/&gt; &lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/Style%20Library/zz1_blue.css&quot;/&gt; &lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/_layouts/1033/styles/core.css&quot;/&gt; " __designer:Values="&lt;P N='InDesign' T='False' /&gt;&lt;P N='ID' T='ctl01' 
/&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/> <!--Styles used for positioning, font and spacing definitions--> <SharePoint:CssRegistration name="<% $SPUrl:~SiteCollection/Style Library/~language/Core Styles/Band.css%>" runat="server" __designer:Preview="&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/Style%20Library/en-US/Core%20Styles/Band.css&quot;/&gt; " __designer:Values="&lt;P N='Name' Bound='True' T='SPUrl:~SiteCollection/Style Library/~language/Core Styles/Band.css' /&gt;&lt;P N='InDesign' T='False' /&gt;&lt;P N='ID' T='ctl02' /&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/> <SharePoint:CssRegistration name="<% $SPUrl:~sitecollection/Style Library/~language/Core Styles/controls.css %>" runat="server" __designer:Preview="&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/Style%20Library/en-US/Core%20Styles/controls.css&quot;/&gt; " __designer:Values="&lt;P N='Name' Bound='True' T='SPUrl:~sitecollection/Style Library/~language/Core Styles/controls.css' /&gt;&lt;P N='InDesign' T='False' /&gt;&lt;P N='ID' T='ctl03' /&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/> <SharePoint:CssRegistration name="<% $SPUrl:~SiteCollection/Style Library/zz1_blue.css%>" runat="server" __designer:Preview="&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot; href=&quot;/Style%20Library/zz1_blue.css&quot;/&gt; " __designer:Values="&lt;P N='Name' Bound='True' T='SPUrl:~SiteCollection/Style Library/zz1_blue.css' /&gt;&lt;P N='InDesign' T='False' /&gt;&lt;P N='ID' T='ctl04' /&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/> <SharePoint:ScriptLink name="init.js" runat="server" __designer:Preview="&lt;script src=&quot;/_layouts/1033/init.js?rev=VhAxGc3rkK79RM90tibDzw%3D%3D&quot;&gt;&lt;/script&gt; " __designer:Values="&lt;P N='Name' T='init.js' /&gt;&lt;P N='InDesign' T='False' /&gt;&lt;P N='ID' T='ctl05' /&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/> <SharePoint:ScriptLink Name="highslide-with-html.js" runat="server" __designer:Error="Access to the path 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\layouts\1033\highslide-with-html.js' is denied."/> <!--Placeholder for additional overrides--> <asp:ContentPlaceHolder id="PlaceHolderAdditionalPageHead" runat="server"/> </head> <body class="body" onload="javascript:_spBodyOnLoadWrapper();"> <WebPartPages:SPWebPartManager runat="server"/> <form runat="server" onsubmit="return _spFormOnSubmitWrapper();"> <table cellpadding="0" cellspacing="0" class="master"> <tr> <td height="100%" class="shadowLeft"> <div class="spacer"> </div> </td> <td valign="top"> <table cellpadding="0" cellspacing="0" width="100%" class="masterContent"> <tr style="height:0px"><td> <wssuc:DesignModeConsole id="IdDesignModeConsole" runat="server" __designer:Preview="&lt;span __designer:NonVisual=&quot;true&quot;&gt;[ DesignModeConsoleContainer &quot;DesignModeContainer&quot; ]&lt;/span&gt; " __designer:Values="&lt;P N='ID' ID='1' T='IdDesignModeConsole' /&gt;&lt;P N='TemplateControl' R='0' /&gt;"/></td></tr> <tr> <td colspan="2" class="authoringRegion"> <span class="siteActionMenu"> 
<PublishingSiteAction:SiteActionMenu runat="server" __designer:Preview=" &lt;!-- Begin Action Menu Markup --&gt; &lt;table height=100% class=&quot;ms-siteaction&quot; cellpadding=0 cellspacing=0&gt; &lt;tr&gt; &lt;td class=&quot;ms-siteactionsmenu&quot; id=&quot;siteactiontd&quot;&gt; &lt;span style=&quot;display:none&quot;&gt;&lt;menu type='ServerMenu' id=&quot;zz1_SiteActionsMenuMain&quot; largeIconMode=&quot;true&quot;&gt;&lt;ie:menuitem id=&quot;zz2_MenuItem_Create&quot; type=&quot;option&quot; iconSrc=&quot;/_layouts/images/Actionscreate.gif&quot; onMenuClick=&quot;window.location = '/_layouts/create.aspx';&quot; menuGroupId=&quot;100&quot;&gt;&lt;/ie:menuitem&gt;&lt;ie:menuitem id=&quot;zz3_MenuItem_Settings&quot; type=&quot;option&quot; iconSrc=&quot;/_layouts/images/ActionsSettings.gif&quot; onMenuClick=&quot;window.location = '/_layouts/settings.aspx';&quot; menuGroupId=&quot;100&quot;&gt;&lt;/ie:menuitem&gt;&lt;/menu&gt;&lt;/span&gt;&lt;div&gt;&lt;div&gt;&lt;span title=&quot;Open Menu&quot;&gt;&lt;div id=&quot;zz4_SiteActionsMenu_t&quot; class=&quot;&quot; onmouseover=&quot;MMU_PopMenuIfShowing(this);MMU_EcbTableMouseOverOut(this, true)&quot; hoverActive=&quot;ms-siteactionsmenuhover&quot; hoverInactive=&quot;&quot; onclick=&quot; MMU_Open(byid(''), MMU_GetMenuFromClientId('zz4_SiteActionsMenu'),event,false, null, 0);&quot; foa=&quot;MMU_GetMenuFromClientId('zz4_SiteActionsMenu')&quot; oncontextmenu=&quot;this.click(); return false;&quot; nowrap=&quot;nowrap&quot;&gt;&lt;a id=&quot;zz4_SiteActionsMenu&quot; accesskey=&quot;/&quot; href=&quot;#&quot; onclick=&quot;javascript:return false;&quot; style=&quot;cursor:pointer;white-space:nowrap;&quot; onfocus=&quot;MMU_EcbLinkOnFocusBlur(byid(''), this, true);&quot; onkeydown=&quot;MMU_EcbLinkOnKeyDown(byid(''), MMU_GetMenuFromClientId('zz4_SiteActionsMenu'), event);&quot; onclick=&quot; MMU_Open(byid(''), MMU_GetMenuFromClientId('zz4_SiteActionsMenu'),event,false, null, 0);&quot; oncontextmenu=&quot;this.click(); return false;&quot; menuTokenValues=&quot;MENUCLIENTID=zz4_SiteActionsMenu,TEMPLATECLIENTID=zz1_SiteActionsMenuMain&quot; serverclientid=&quot;zz4_SiteActionsMenu&quot;&gt;Site Actions&lt;img src=&quot;/_layouts/images/blank.gif&quot; border=&quot;0&quot; alt=&quot;Use SHIFT+ENTER to open the menu (new window).&quot;/&gt;&lt;/a&gt;&lt;img align=&quot;absbottom&quot; src=&quot;/_layouts/images/whitearrow.gif&quot; alt=&quot;&quot; /&gt;&lt;/div&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt; &lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &lt;!-- End Action Menu Markup --&gt; " __designer:Values="&lt;P N='TemplateControl' R='0' /&gt;"/> </span> <div class="sharepointLogin"> <!--Authentication for Authors only--> <table cellpadding="0" cellspacing="0" > <tr> <td class="ms-globallinks"> <SharePoint:DelegateControl ControlId="GlobalSiteLink1" Scope="Farm" runat="server" __designer:Preview="&lt;span style='padding-left:3px'&gt;&lt;/span&gt; &lt;a id=&quot;ctl00_ctl09_hlMySite&quot; href=&quot;http://litwaredemo:80/MySite/_layouts/MySite.aspx&quot;&gt;My Site&lt;/a&gt; &lt;span style='padding-left:4px;padding-right:3px'&gt;|&lt;/span&gt; " __designer:Values="&lt;P N='ControlId' T='GlobalSiteLink1' /&gt;&lt;P N='Scope' T='Farm' /&gt;&lt;P N='ID' T='ctl08' /&gt;&lt;P N='Page' ID='1' /&gt;&lt;P N='TemplateControl' ID='2' /&gt;&lt;P N='AppRelativeTemplateSourceDirectory' R='-1' /&gt;"/></td> <td class="ms-globallinks"> <SharePoint:DelegateControl ControlId="GlobalSiteLink2" Scope="Farm" runat="server" __designer:Preview="&lt;span 

    Read the article

  • Play a sound based on the button pressed on the iPhone (Xcode).

    - by slickplaid
    I'm trying to play a sound based on which button is pressed using AVAudioPlayer. (This is not a soundboard or fart app.) I have linked all buttons using this code in the header file: @interface appViewController : UIViewController <AVAudioPlayerDelegate> { AVAudioPlayer *player; UIButton *C4; UIButton *Bb4; UIButton *B4; UIButton *A4; UIButton *Ab4; UIButton *As4; UIButton *G3; UIButton *Gb3; UIButton *Gs3; UIButton *F3; UIButton *Fs3; UIButton *E3; UIButton *Eb3; UIButton *D3; UIButton *Db3; UIButton *Ds3; UIButton *C3; UIButton *Cs3; } @property (nonatomic, retain) AVAudioPlayer *player; @property (nonatomic, retain) IBOutlet UIButton *C4; @property (nonatomic, retain) IBOutlet UIButton *B4; @property (nonatomic, retain) IBOutlet UIButton *Bb4; @property (nonatomic, retain) IBOutlet UIButton *A4; @property (nonatomic, retain) IBOutlet UIButton *Ab4; @property (nonatomic, retain) IBOutlet UIButton *As4; @property (nonatomic, retain) IBOutlet UIButton *G3; @property (nonatomic, retain) IBOutlet UIButton *Gb3; @property (nonatomic, retain) IBOutlet UIButton *Gs3; @property (nonatomic, retain) IBOutlet UIButton *F3; @property (nonatomic, retain) IBOutlet UIButton *Fs3; @property (nonatomic, retain) IBOutlet UIButton *E3; @property (nonatomic, retain) IBOutlet UIButton *Eb3; @property (nonatomic, retain) IBOutlet UIButton *D3; @property (nonatomic, retain) IBOutlet UIButton *Db3; @property (nonatomic, retain) IBOutlet UIButton *Ds3; @property (nonatomic, retain) IBOutlet UIButton *C3; @property (nonatomic, retain) IBOutlet UIButton *Cs3; - (IBAction) playNote; @end Buttons are all linked to the event "playNote" in interfaceBuilder and each note is linked to the proper referencing outlet according to note name. All *.mp3 sound files are named after the UIButton name (IE- C3 == C3.mp3). In my implementation file, I have this to play a only one note when the C3 button is pressed: #import "sonicfitViewController.h" @implementation appViewController @synthesize C3, Cs3, D3, Ds3, Db3, E3, Eb3, F3, Fs3, G3, Gs3, A4, Ab4, As4, B4, Bb4, C4; // Implement viewDidLoad to do additional setup after loading the view, typically from a nib. - (void)viewDidLoad { NSString *path = [[NSBundle mainBundle] pathForResource:@"3C" ofType:@"mp3"]; NSLog(@"path: %@", path); NSURL *file = [[NSURL alloc] initFileURLWithPath:path]; AVAudioPlayer *p = [[AVAudioPlayer alloc] initWithContentsOfURL:file error:nil]; [file release]; self.player = p; [p release]; [player prepareToPlay]; [player setDelegate:self]; [super viewDidLoad]; } - (IBAction) playNote { [self.player play]; } Now, with the above I have two issues: First, the NSLog reports NULL and crashes when trying to play the file. I have added the mp3's to the resources folder and they have been copied and not just linked. They are not in an subfolder under the resources folder. Secondly, how can I set it up so that when say button C3 is pressed, it plays C3.mp3 and F3 plays F3.mp3 without writing duplicate lines of code for each different button? playNote should be like NSString *path = [[NSBundle mainBundle] pathForResource:nameOfButton ofType:@"mp3"]; instead of defining it specifically (@"C3"). Is there a better way of doing this and why does the *path report NULL and crash when I load the app? I'm pretty sure it's something as simple as adding additional variable inputs to - (IBAction) playNote:buttonName and putting all the code to call AVAudioPlayer in the playNote function but I'm unsure of the code to do this.
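
    A minimal sketch of the pattern being asked for, one handler that derives the file name from whichever button triggered it, written in Python/Tkinter because the idea is language-agnostic rather than specific to Objective-C. The note names are taken from the question; the playback call is a placeholder. In the Objective-C version the usual move is the one the poster suspects: give the action a sender parameter and build the pathForResource: name from the button's title.

```python
import tkinter as tk

# Note names borrowed from the question; the file names are assumed to follow
# the same "<name>.mp3" convention described there.
NOTES = ["C3", "Cs3", "D3", "Ds3", "E3"]

def play_note(name):
    # Single handler for every button: derive the file name from the button's name.
    path = f"{name}.mp3"
    print("would play", path)  # plug in whatever audio library you actually use

root = tk.Tk()
for note in NOTES:
    # Bind the note name at creation time so one callback serves all buttons.
    tk.Button(root, text=note, command=lambda n=note: play_note(n)).pack(side="left")
root.mainloop()
```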

    Read the article

  • C# OpenNETCF BackgroundWorker - e.Result gives an ObjectDisposedException

    - by ikky
    Hi! I'm new working with background worker in C#. Here is a class, and under it, you will find the instansiation of it, and under there i will define my problem for you: I have the class Drawing: class Drawing { BackgroundWorker bgWorker; ProgressBar progressBar; Panel panelHolder; public Drawing(ref ProgressBar pgbar, ref Panel panelBig) // Progressbar and panelBig as reference { this.panelHolder = panelBig; this.progressBar = pgbar; bgWorker = new BackgroundWorker(); bgWorker.WorkerReportsProgress = true; bgWorker.WorkerSupportsCancellation = true; bgWorker.DoWork += new OpenNETCF.ComponentModel.DoWorkEventHandler(this.bgWorker_DoWork); bgWorker.RunWorkerCompleted += new OpenNETCF.ComponentModel.RunWorkerCompletedEventHandler(this.bgWorker_RunWorkerCompleted); bgWorker.ProgressChanged += new OpenNETCF.ComponentModel.ProgressChangedEventHandler(this.bgWorker_ProgressChanged); } public void createDrawing() { bgWorker.RunWorkerAsync(); } private void bgWorker_DoWork(object sender, DoWorkEventArgs e) { Panel panelContainer = new Panel(); // Adding panels to the panelContainer for(i=0; i<100; i++) { Panel panelSubpanel = new Panel(); // Setting size, color, name etc.... panelContainer.Controls.Add(panelSubpanel); // Adding the subpanel to the panelContainer //Report the progress bgWorker.ReportProgress(0, i); // Reporting number of panels loaded } e.Result = imagePanel; // Send the result(a panel with lots of subpanels) as an argument } private void bgWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) { this.progressBar.Value = (int)e.UserState; this.progressBar.Update(); } private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { if (e.Error == null) { this.panelHolder = (Panel)e.Result; } else { MessageBox.Show("An error occured, please try again"); } } } Instansiating an object of this class: public partial class Draw: Form { public Draw() { ProgressBar progressBarLoading = new ProgressBar(); // Set lots of properties on progressBarLoading Panel panelBigPanelContainer = new Panel(); Drawing drawer = new Drawing(ref progressBarLoading, ref panelBigPanelContainer); drawer.createDrawing(); // this makes the object start a new thread, loading all the panels into a panel container, while also sending the progress to this progressbar. } } Here is my problem: In the private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) i don't get the e.Result as it should be. When i debug and look at the e.Result, the panel's properties have this exception message: '((System.Windows.Forms.Control)(e.Result)).ClientSize' threw an exception of type 'System.ObjectDisposedException' So the object gets disposed, but "why" is my question, and how can i fix this? I hope someone will answer me, this is making me crazy. Another question i have: Is it allowed to use "ref" with arguments? is it bad programming? Thanks in advance. I have also written how i understand the Background worker below here: This is what i think is the "rules" for background workers: bgWorker.RunWorkerAsync(); => starts a new thread. 
bgWorker_DoWork cannot reach the main thread without delegates - private void bgWorker_DoWork(object sender, DoWorkEventArgs e) { // The work happens here, this is a thread that is not reachable by the main thread e.Result => This is an argument which can be reached by bgWorker_RunWorkerCompleted() bgWorker.ReportProgress(progressVar); => Reports the progress to the bgWorker_ProgressChanged() } - private void bgWorker_ProgressChanged(object sender, ProgressChangedEventArgs e) { // I get the progress here, and can do stuff to the main thread from here (e.g update a control) this.ProgressBar.Value = e.ProgressPercentage; } - private void bgWorker_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e) { // This is where the thread is completed. // Here i can get e.Result from the bgWorker thread // From here i can reach controls in my main thread, and use e.Result in my main thread if (e.Error == null) { this.panelTileHolder = (Panel)e.Result; } else { MessageBox.Show("There was an error"); } }
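
    Regardless of framework, the usual way to avoid handing disposed UI objects across threads is to have the worker return plain data and let the UI thread build the controls from it. A rough Python sketch of that separation (the panel "specs" and names are invented for illustration; this is not OpenNETCF or WinForms code):

```python
from concurrent.futures import ThreadPoolExecutor

def build_panel_specs(count):
    # Worker: produce plain data only; never create or touch UI objects here.
    return [{"name": f"subpanel{i}", "size": (32, 32)} for i in range(count)]

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(build_panel_specs, 100)
    specs = future.result()  # in a real GUI this would be a completion callback on the UI thread

# "UI thread": build the actual controls from the returned data.
for spec in specs[:3]:
    print("creating panel", spec["name"], "with size", spec["size"])
```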

    Read the article

  • Heimdal Kerberos in OpenLDAP issue

    - by Brian
    I think I posted this on the wrong 'sister site', so here it is. I'm having a bit of trouble getting Kerberos (Heimdal version) to work nicely with OpenLDAP. The kerberos database is being stored in LDAP itself. The KDC uses SASL EXTERNAL authentication as root to access the container ou. I created the database in LDAP fine using kadmin -l, but it won't let me use kadmin without the -l flag: root@rds0:~# kadmin -l kadmin> list * krbtgt/REALM kadmin/changepw kadmin/admin changepw/kerberos kadmin/hprop WELLKNOWN/ANONYMOUS WELLKNOWN/org.h5l.fast-cookie@WELLKNOWN:ORG.H5L default brian.empson brian.empson/admin host/rds0.example.net ldap/rds0.example.net host/localhost kadmin> exit root@rds0:~# kadmin kadmin> list * brian.empson/admin@REALM's Password: <----- With right password kadmin: kadm5_get_principals: Key table entry not found kadmin> list * brian.empson/admin@REALM's Password: <------ With wrong password kadmin: kadm5_get_principals: Already tried ENC-TS-info, looping kadmin> I can get tickets without a problem: root@rds0:~# klist Credentials cache: FILE:/tmp/krb5cc_0 Principal: brian.empson@REALM Issued Expires Principal Nov 11 14:14:40 2012 Nov 12 00:14:37 2012 krbtgt/REALM@REALM Nov 11 14:40:35 2012 Nov 12 00:14:37 2012 ldap/rds0.example.net@REALM But I can't seem to change my own password without kadmin -l: root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Right password New password: Verify password - New password: Auth error : Authentication failed root@rds0:~# kpasswd brian.empson@REALM's Password: <---- Wrong password kpasswd: krb5_get_init_creds: Already tried ENC-TS-info, looping kadmin's logs are not helpful at all: 2012-11-11T13:48:33 krb5_recvauth: Key table entry not found 2012-11-11T13:51:18 krb5_recvauth: Key table entry not found 2012-11-11T13:53:02 krb5_recvauth: Key table entry not found 2012-11-11T14:16:34 krb5_recvauth: Key table entry not found 2012-11-11T14:20:24 krb5_recvauth: Key table entry not found 2012-11-11T14:20:44 krb5_recvauth: Key table entry not found 2012-11-11T14:21:29 krb5_recvauth: Key table entry not found 2012-11-11T14:21:46 krb5_recvauth: Key table entry not found 2012-11-11T14:23:09 krb5_recvauth: Key table entry not found 2012-11-11T14:45:39 krb5_recvauth: Key table entry not found The KDC reports that both accounts succeed in authenticating: 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:48:03 sending 294 bytes to IPv4:192.168.72.10 2012-11-11T14:48:03 AS-REQ brian.empson@REALM from IPv4:192.168.72.10 for kadmin/changepw@REALM 2012-11-11T14:48:03 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:48:03 Looking for PK-INIT(ietf) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for PK-INIT(win2k) pa-data -- brian.empson@REALM 2012-11-11T14:48:03 Looking for ENC-TS pa-data -- brian.empson@REALM 2012-11-11T14:48:03 ENC-TS Pre-authentication succeeded -- brian.empson@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 ENC-TS pre-authentication succeeded -- brian.empson@REALM 2012-11-11T14:48:03 AS-REQ authtime: 2012-11-11T14:48:03 starttime: unset endtime: 2012-11-11T14:53:00 renew till: unset 2012-11-11T14:48:03 Client 
supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:48:03 sending 704 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Need to use PA-ENC-TIMESTAMP/PA-PK-AS-REQ 2012-11-11T14:45:39 sending 303 bytes to IPv4:192.168.72.10 2012-11-11T14:45:39 AS-REQ brian.empson/admin@REALM from IPv4:192.168.72.10 for kadmin/admin@REALM 2012-11-11T14:45:39 Client sent patypes: ENC-TS, REQ-ENC-PA-REP 2012-11-11T14:45:39 Looking for PK-INIT(ietf) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for PK-INIT(win2k) pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 Looking for ENC-TS pa-data -- brian.empson/admin@REALM 2012-11-11T14:45:39 ENC-TS Pre-authentication succeeded -- brian.empson/admin@REALM using aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 ENC-TS pre-authentication succeeded -- brian.empson/admin@REALM 2012-11-11T14:45:39 AS-REQ authtime: 2012-11-11T14:45:39 starttime: unset endtime: 2012-11-11T15:45:39 renew till: unset 2012-11-11T14:45:39 Client supported enctypes: aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, arcfour-hmac-md5, using aes256-cts-hmac-sha1-96/aes256-cts-hmac-sha1-96 2012-11-11T14:45:39 sending 717 bytes to IPv4:192.168.72.10 I wish I had more detailed logging messages, running kadmind in debug mode seems to almost work but it just kicks me back to the shell when I type in the correct password. GSSAPI via LDAP doesn't work either, but I suspect it's because some parts of kerberos aren't working either: root@rds0:~# ldapsearch -Y GSSAPI -H ldaps:/// -b "o=mybase" o=mybase SASL/GSSAPI authentication started ldap_sasl_interactive_bind_s: Other (e.g., implementation specific) error (80) additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information () root@rds0:~# ldapsearch -Y EXTERNAL -H ldapi:/// -b "o=mybase" o=mybase SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 # extended LDIF <snip> Would anyone be able to point me in the right direction?

    Read the article

  • Refresh QTextEdit in PyQt

    - by Mark Underwood
    Hi all, Im writing a PyQt app that takes some input in one widget, and then processes some text files. What ive got at the moment is when the user clicks the "process" button a seperate window with a QTextEdit in it pops up, and ouputs some logging messages. On Mac OS X this window is refreshed automatically and you cna see the process. On Windows, the window reports (Not Responding) and then once all the proccessing is done, the log output is shown. Im assuming I need to refresh the window after each write into the log, and ive had a look around at using a timer. etc, but havnt had much luck in getting it working. Below is the source code. It has two files, GUI.py which does all the GUI stuff and MOVtoMXF that does all the processing. GUI.py import os import sys import MOVtoMXF from PyQt4.QtCore import * from PyQt4.QtGui import * class Form(QDialog): def process(self): path = str(self.pathBox.displayText()) if(path == ''): QMessageBox.warning(self, "Empty Path", "You didnt fill something out.") return xmlFile = str(self.xmlFileBox.displayText()) if(xmlFile == ''): QMessageBox.warning(self, "No XML file", "You didnt fill something.") return outFileName = str(self.outfileNameBox.displayText()) if(outFileName == ''): QMessageBox.warning(self, "No Output File", "You didnt do something") return print path + " " + xmlFile + " " + outFileName mov1 = MOVtoMXF.MOVtoMXF(path, xmlFile, outFileName, self.log) self.log.show() rc = mov1.ScanFile() if( rc < 0): print "something happened" #self.done(0) def __init__(self, parent=None): super(Form, self).__init__(parent) self.log = Log() self.pathLabel = QLabel("P2 Path:") self.pathBox = QLineEdit("") self.pathBrowseB = QPushButton("Browse") self.pathLayout = QHBoxLayout() self.pathLayout.addStretch() self.pathLayout.addWidget(self.pathLabel) self.pathLayout.addWidget(self.pathBox) self.pathLayout.addWidget(self.pathBrowseB) self.xmlLabel = QLabel("FCP XML File:") self.xmlFileBox = QLineEdit("") self.xmlFileBrowseB = QPushButton("Browse") self.xmlLayout = QHBoxLayout() self.xmlLayout.addStretch() self.xmlLayout.addWidget(self.xmlLabel) self.xmlLayout.addWidget(self.xmlFileBox) self.xmlLayout.addWidget(self.xmlFileBrowseB) self.outFileLabel = QLabel("Save to:") self.outfileNameBox = QLineEdit("") self.outputFileBrowseB = QPushButton("Browse") self.outputLayout = QHBoxLayout() self.outputLayout.addStretch() self.outputLayout.addWidget(self.outFileLabel) self.outputLayout.addWidget(self.outfileNameBox) self.outputLayout.addWidget(self.outputFileBrowseB) self.exitButton = QPushButton("Exit") self.processButton = QPushButton("Process") self.buttonLayout = QHBoxLayout() #self.buttonLayout.addStretch() self.buttonLayout.addWidget(self.exitButton) self.buttonLayout.addWidget(self.processButton) self.layout = QVBoxLayout() self.layout.addLayout(self.pathLayout) self.layout.addLayout(self.xmlLayout) self.layout.addLayout(self.outputLayout) self.layout.addLayout(self.buttonLayout) self.setLayout(self.layout) self.pathBox.setFocus() self.setWindowTitle("MOVtoMXF") self.connect(self.processButton, SIGNAL("clicked()"), self.process) self.connect(self.exitButton, SIGNAL("clicked()"), self, SLOT("reject()")) self.ConnectButtons() class Log(QTextEdit): def __init__(self, parent=None): super(Log, self).__init__(parent) self.timer = QTimer() self.connect(self.timer, SIGNAL("timeout()"), self.updateText()) self.timer.start(2000) def updateText(self): print "update Called" AND MOVtoMXF.py import os import sys import time import string import FileUtils import shutil import 
re class MOVtoMXF: #Class to do the MOVtoMXF stuff. def __init__(self, path, xmlFile, outputFile, edit): self.MXFdict = {} self.MOVDict = {} self.path = path self.xmlFile = xmlFile self.outputFile = outputFile self.outputDirectory = outputFile.rsplit('/',1) self.outputDirectory = self.outputDirectory[0] sys.stdout = OutLog( edit, sys.stdout) class OutLog(): def __init__(self, edit, out=None, color=None): """(edit, out=None, color=None) -> can write stdout, stderr to a QTextEdit. edit = QTextEdit out = alternate stream ( can be the original sys.stdout ) color = alternate color (i.e. color stderr a different color) """ self.edit = edit self.out = None self.color = color def write(self, m): if self.color: tc = self.edit.textColor() self.edit.setTextColor(self.color) #self.edit.moveCursor(QtGui.QTextCursor.End) self.edit.insertPlainText( m ) if self.color: self.edit.setTextColor(tc) if self.out: self.out.write(m) self.edit.show() If any other code is needed (i think this is all that is needed) then just let me know. Any Help would be great. Mark
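
    Two things stand out in the excerpt: the Log class connects the timer to the result of calling updateText() rather than to the method itself, and even with a working timer the window cannot repaint while mov1.ScanFile() runs on the GUI thread. Below is a minimal PyQt4 sketch (matching the excerpt's PyQt4 style, with invented demo text) of a log widget that stays responsive by connecting the bound method and letting the event loop run after each write; the longer-term fix would be moving the processing onto a worker thread such as a QThread and sending log text back through signals.

```python
import sys
from PyQt4.QtCore import QTimer, SIGNAL
from PyQt4.QtGui import QApplication, QTextEdit

class Log(QTextEdit):
    def __init__(self, parent=None):
        super(Log, self).__init__(parent)
        self.timer = QTimer(self)
        # Pass the bound method itself, not the result of calling it.
        self.connect(self.timer, SIGNAL("timeout()"), self.updateText)
        self.timer.start(500)

    def write(self, text):
        self.insertPlainText(text)
        # Give Qt a chance to repaint while long-running work hogs the GUI thread.
        QApplication.processEvents()

    def updateText(self):
        self.write("still alive\n")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    log = Log()
    log.show()
    sys.exit(app.exec_())
```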

    Read the article

  • Constant Memory Leak in SpeechSynthesizer

    - by DudeFX
    I have developed a project which I would like to release which uses c#, WPF and the System.Speech.Synthesizer object. The issue preventing the release of this project is that whenever SpeakAsync is called it leaves a memory leak that grows to the point of eventual failure. I believe I have cleaned up properly after using this object, but cannot find a cure. I have run the program through Ants Memory Profiler and it reports that WAVEHDR and WaveHeader is growing with each call. I have created a sample project to try to pinpoint the cause, but am still at a loss. Any help would be appreciated. The project uses VS2008 and is a c# WPF project that targets .NET 3.5 and Any CPU. You need to manually add a reference to System.Speech. Here is the Code: <Window x:Class="SpeechTest.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="300" Width="300"> <Grid> <StackPanel Orientation="Vertical"> <Button Content="Start Speaking" Click="Start_Click" Margin="10" /> <Button Content="Stop Speaking" Click="Stop_Click" Margin="10" /> <Button Content="Exit" Click="Exit_Click" Margin="10"/> </StackPanel> </Grid> // Start of code behind using System; using System.Windows; using System.Speech.Synthesis; namespace SpeechTest { public partial class Window1 : Window { // speak setting private bool speakingOn = false; private int curLine = 0; private string [] speakLines = { "I am wondering", "Why whenever Speech is called", "A memory leak occurs", "If you run this long enough", "It will eventually crash", "Any help would be appreciated" }; public Window1() { InitializeComponent(); } private void Start_Click(object sender, RoutedEventArgs e) { speakingOn = true; SpeakLine(); } private void Stop_Click(object sender, RoutedEventArgs e) { speakingOn = false; } private void Exit_Click(object sender, RoutedEventArgs e) { App.Current.Shutdown(); } private void SpeakLine() { if (speakingOn) { // Create our speak object SpeechSynthesizer spk = new SpeechSynthesizer(); spk.SpeakCompleted += new EventHandler(spk_Completed); // Speak the line spk.SpeakAsync(speakLines[curLine]); } } public void spk_Completed(object sender, SpeakCompletedEventArgs e) { if (sender is SpeechSynthesizer) { // get access to our Speech object SpeechSynthesizer spk = (SpeechSynthesizer)sender; // Clean up after speaking (thinking the event handler is causing the memory leak) spk.SpeakCompleted -= new EventHandler(spk_Completed); // Dispose the speech object spk.Dispose(); // bump it curLine++; // check validity if (curLine = speakLines.Length) { // back to the beginning curLine = 0; } // Speak line SpeakLine(); } } } } I run this program on Windows 7 64 bit and it will run and eventually halt when attempting to create a new SpeechSynthesizer object. When run on Windows Vista 64 bit the memory will grow from a starting point of 34k to so far about 400k and growing. Can anyone see anything in the code that might be causing this, or is this an issue with the Speech object itself. Any help would be appreciated.

    Read the article

  • Load-balancing between a ProCurve switch and a server

    - by vlad
    Hello I've been searching around the web for this problem i've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work? My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology, there are four switches connected to it via a single gigabit link. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows: sw1 sw2 sw3 sw4 | | | | --------------------- | sw0 | --------------------- || | | srv1 srv2 srv3 The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the lacp protocol, and on the switch I set up a trunk for the two ports using lacp as well. The switch showed that the server was a lacp partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput. There was only one link used between srv1 and sw0. All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios: iperf -s on srv1 and iperf -c on the other servers iperf -s on the other servers and iperf -c on srv1 connected to all the other servers. Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link. In one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput. srv1 is a dell poweredge sc1425 with onboard intel 82541GI nics (em driver on freebsd). After troubleshooting a previous problem with vlan tagging on top of a lagg interface, it turned out that the em could not support this. So I figured that maybe something else is wrong with the em drivers and / or lagg stack, so I started up backtrack 4r2 on this same server. So srv1 now uses linux kernel 2.6.35.8. I set up a bonding interface bond0. The kernel module was loaded with option mode=4 in order to get lacp. The switch was happy with the link, I could ping to and from the server. I could even put vlans on top of the bonding interface. However, only half the problem was solved: if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two nics; if I run the iperf server on srv1 and tried to connect with the other servers, there was no load balancing. I thought that maybe I was out of luck and the hashes for the two mac addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and reenabling one of the links, and all that happened was the traffic switched from one link to the other and back to the first again. 
I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb) but I never saw any traffic distribution. One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4 channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used. The cisco port-channel uses src-mac to balance the load. The HP should use both the source and destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?
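
    One way to reason about why every flow lands on the same physical link is to model the egress-port hash. The sketch below is a deliberately simplified model, not the actual ProCurve or Cisco algorithm (real switches hash vendor-specific combinations of MAC, IP and port fields): with only two links, any set of address pairs that hashes to the same value shares one link, and the made-up MACs here do exactly that.

```python
# Simplified model of egress-link selection on a 2-port trunk: XOR the source
# and destination MAC addresses and take the result modulo the link count.
def pick_link(src_mac, dst_mac, n_links=2):
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (src ^ dst) % n_links

# Hypothetical MACs: if every server/client pair hashes to the same value,
# all traffic rides one physical link no matter how many clients there are.
server = "00:11:22:33:44:10"
clients = ["00:aa:bb:cc:dd:02", "00:aa:bb:cc:dd:04", "00:aa:bb:cc:dd:06"]
for client in clients:
    print(client, "-> link", pick_link(server, client))
```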

    Read the article

  • Debian apt dependency mismatch (libc6)

    - by Sean Gordon
    Earlier, I tried to install package via apt-get (cython), but it failed with the Errors were encountered while processing: message, and since then, apt is refusing to install anything. apt-get check output below: root@dix:~# apt-get check Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libc6 : Depends: libc-bin (= 2.11.3-2) but 2.11.3-4 is installed libc6-dev : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed libc6-i386 : Depends: libc6 (= 2.11.3-4) but 2.11.3-2 is installed E: Unmet dependencies. Try using -f. Apt/aptitude don't seem to be able to fix this dependency issue, and I don't know what to do. Edit: Running apt-get -f install results in no change, and my sources are all squeeze. Running apt-get update then apt-get dist-upgrade show no change either. Edit 2: I went back to try this again in a new terminal and apt-get -f install gives this error: dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) configured to not write apport reports Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Edit 3: Using apt-get clean first, then the previous commands, results in the first error again. Using apt-get -f dist-upgrade gives the below. Reading package lists... Building dependency tree... Reading state information... Correcting dependencies... Done The following packages will be upgraded: apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common at automake base-files bind9 bind9-doc bind9-host bind9utils debian-archive-keyring dnsutils dpkg-dev file host initscripts isc-dhcp-client isc-dhcp-common krb5-multidev libapr1 libbind9-60 libc6 libdns69 libdpkg-perl libexpat1 libexpat1-dev libgc1c2 libgssapi-krb5-2 libgssrpc4 libisc62 libisccc60 libisccfg62 libk5crypto3 libkadm5clnt-mit7 libkadm5srv-mit7 libkdb5-4 libkrb5-3 libkrb5-dev libkrb5support0 liblwres60 libmagic1 libmysqlclient16 libnss3-1d libssl-dev libssl0.9.8 libtiff4 libtiff4-dev libtiffxx0c2 libxi6 libxml2 linux-libc-dev lwresd mysql-client-5.1 mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 openjdk-6-jre openjdk-6-jre-headless openjdk-6-jre-lib openssh-client openssh-server openssl procps python python-crypto python-minimal sudo sysv-rc sysvinit sysvinit-utils tzdata tzdata-java 75 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 5 not fully installed or removed. Need to get 0 B/79.9 MB of archives. After this operation, 1,411 kB of additional disk space will be used. (Reading database ... 52241 files and directories currently installed.) Preparing to replace libc6 2.11.3-2 (using .../libc6_2.11.3-4_amd64.deb) ... 
*** stack smashing detected ***: /usr/bin/perl terminated ======= Backtrace: ========= /lib/libc.so.6(__fortify_fail+0x37)[0x7fdaad9b9f87] /lib/libc.so.6(__fortify_fail+0x0)[0x7fdaad9b9f50] /usr/lib/libperl.so.5.10(Perl_yylex+0x5896)[0x7fdaae343346] [0x8e83a0] ======= Memory map: ======== 00400000-00402000 r-xp 00000000 08:01 525338 /usr/bin/perl 00601000-00602000 rw-p 00001000 08:01 525338 /usr/bin/perl 00602000-0091f000 rw-p 00000000 00:00 0 [heap] 7fdaaca54000-7fdaaca6a000 r-xp 00000000 08:01 393818 /lib/libgcc_s.so.1 7fdaaca6a000-7fdaacc69000 ---p 00016000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc69000-7fdaacc6a000 rw-p 00015000 08:01 393818 /lib/libgcc_s.so.1 7fdaacc6a000-7fdaacc6f000 r-xp 00000000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaacc6f000-7fdaace6e000 ---p 00005000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6e000-7fdaace6f000 rw-p 00004000 08:01 524949 /usr/lib/perl5/auto/Locale/gettext/gettext.so 7fdaace6f000-7fdaace79000 r-xp 00000000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaace79000-7fdaad078000 ---p 0000a000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad078000-7fdaad079000 rw-p 00009000 08:01 532753 /usr/lib/perl/5.10.1/auto/Encode/Encode.so 7fdaad079000-7fdaad07e000 r-xp 00000000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad07e000-7fdaad27d000 ---p 00005000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27d000-7fdaad27e000 rw-p 00004000 08:01 525444 /usr/lib/perl/5.10.1/auto/IO/IO.so 7fdaad27e000-7fdaad299000 r-xp 00000000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad299000-7fdaad498000 ---p 0001b000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad498000-7fdaad49b000 rw-p 0001a000 08:01 525450 /usr/lib/perl/5.10.1/auto/POSIX/POSIX.so 7fdaad49b000-7fdaad49e000 r-xp 00000000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad49e000-7fdaad69e000 ---p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69e000-7fdaad69f000 rw-p 00003000 08:01 525436 /usr/lib/perl/5.10.1/auto/Fcntl/Fcntl.so 7fdaad69f000-7fdaad6a7000 r-xp 00000000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad6a7000-7fdaad8a6000 ---p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a6000-7fdaad8a7000 r--p 00007000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a7000-7fdaad8a8000 rw-p 00008000 08:01 393824 /lib/libcrypt-2.11.3.so 7fdaad8a8000-7fdaad8d6000 rw-p 00000000 00:00 0 7fdaad8d6000-7fdaada2f000 r-xp 00000000 08:01 393822 /lib/libc-2.11.3.so 7fdaada2f000-7fdaadc2e000 ---p 00159000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc2e000-7fdaadc32000 r--p 00158000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc32000-7fdaadc33000 rw-p 0015c000 08:01 393822 /lib/libc-2.11.3.so 7fdaadc33000-7fdaadc38000 rw-p 00000000 00:00 0 7fdaadc38000-7fdaadc4f000 r-xp 00000000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaadc4f000-7fdaade4e000 ---p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4e000-7fdaade4f000 r--p 00016000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade4f000-7fdaade50000 rw-p 00017000 08:01 393248 /lib/libpthread-2.11.3.so 7fdaade50000-7fdaade54000 rw-p 00000000 00:00 0 7fdaade54000-7fdaaded4000 r-xp 00000000 08:01 393826 /lib/libm-2.11.3.so 7fdaaded4000-7fdaae0d4000 ---p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d4000-7fdaae0d5000 r--p 00080000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d5000-7fdaae0d6000 rw-p 00081000 08:01 393826 /lib/libm-2.11.3.so 7fdaae0d6000-7fdaae0d8000 r-xp 00000000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae0d8000-7fdaae2d8000 ---p 00002000 
08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d8000-7fdaae2d9000 r--p 00002000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2d9000-7fdaae2da000 rw-p 00003000 08:01 393825 /lib/libdl-2.11.3.so 7fdaae2da000-7fdaae43f000 r-xp 00000000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae43f000-7fdaae63e000 ---p 00165000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae63e000-7fdaae647000 rw-p 00164000 08:01 525387 /usr/lib/libperl.so.5.10.1 7fdaae647000-7fdaae665000 r-xp 00000000 08:01 393819 /lib/ld-2.11.3.so 7fdaae854000-7fdaae859000 rw-p 00000000 00:00 0 7fdaae862000-7fdaae864000 rw-p 00000000 00:00 0 7fdaae864000-7fdaae865000 r--p 0001d000 08:01 393819 /lib/ld-2.11.3.so 7fdaae865000-7fdaae866000 rw-p 0001e000 08:01 393819 /lib/ld-2.11.3.so 7fdaae866000-7fdaae867000 rw-p 00000000 00:00 0 7fff9616d000-7fff9618e000 rw-p 00000000 00:00 0 [stack] 7fff961ff000-7fff96200000 r-xp 00000000 00:00 0 [vdso] ffffffffff600000-ffffffffff601000 r--p 00000000 00:00 0 [vsyscall] dpkg: error processing /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb (--unpack): subprocess new pre-installation script killed by signal (Aborted) Errors were encountered while processing: /var/cache/apt/archives/libc6_2.11.3-4_amd64.deb

    Read the article

  • How to prevent Mac OS X from using swap when there is still "Inactive" memory?

    - by Motin
    A common phenomena in my day to day usage (and several other's according to various posts throughout the internet) of OS X, the system seems to become slow whenever there is no more "Free" memory available. Supposedly, this is due to swapping, since heavy disk activity is apparent and that vm_stat reports many pageouts. (Correct me from wrong) However, the amount of "Inactive" ram is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends. According to http://support.apple.com/kb/ht1342 : Inactive memory This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk. And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time. So, basically: When a program has quit, it's memory becomes marked as Inactive and should be claimable at any time. Still, OS X will prefer to start swapping out memory to the Swap file instead of just claiming this memory, whenever the "Free" memory gets to low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touch the swap file? Some sources (^2.) indicate that OS X would page out the "Inactive" memory to swap before releasing it, but that doesn't make sense now does it if the memory may be released from memory at any time? Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't include disabling swap/dynamic_pager altogether and restarting...) I do appreciate the purge command, as well as the concept of Repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory than to actually fixing the swap/release decision logic... Btw a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it... ^1. UPDATE 17-mar-2012 Since I first posted this question, I have gone from 4gb to 8gb of installed ram, and the problem remains. The amount of "Inactive" ram was 0.5gb-1.0gb before and is now typically around 1.0-2.0GB when swapping starts/occurs/ends, ie it seems that around 12.5%-25% of the ram is preserved as Inactive by osx kernel logic. ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory. UPDATE 17-mar-2012 Here is a round-up of the methods that have been suggested to help so far: The purge command "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. 
It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc". This is useful to prevent osx to swap-out the disk cache (which is ridiculous that osx actually does so in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one would simply end up with a cold disk buffer cache, probably affecting performance negatively. The FreeMemory app and/or Repairing disk permissions to force some Free memory Doesn't help releasing any memory, only moving some gigabytes of memory contents from ram to the hd. In the end, this causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of its vm is now on swap. Speeding up swap-allocation using dynamicpagerwrapper Seems a good thing to do in order to speed up swap-usage, but does not address the problem of osx swapping in the first place while there is still inactive memory. Disabling swap by disabling dynamicpager and restarting This will force osx not to use swap to the price of the system hanging when all memory is used. Not a viable alternative... Disabling swap using a hacked dynamicpager Similar to disabling dynamicpager above, some excerpts from the comments to the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual". "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang" To sum up, I am still unaware of a way of disabling Mac OS X from using swap when there still is "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why osx prefers to swap out memory that may be released from memory at any time?
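
    For watching the behaviour described above, a small wrapper around vm_stat makes the free/inactive/pageout counters easier to track over time. This is a sketch that assumes vm_stat prints its counters as "Label: N." lines and a 4 KiB page size; both the labels and the page size can differ between OS X releases.

```python
import re
import subprocess

def vm_stat():
    # Parse `vm_stat` output into {label: page count}.
    out = subprocess.check_output(["vm_stat"]).decode()
    stats = {}
    for line in out.splitlines():
        match = re.match(r'"?([^:"]+)"?:\s+(\d+)\.?', line)
        if match:
            stats[match.group(1).strip()] = int(match.group(2))
    return stats

if __name__ == "__main__":
    stats = vm_stat()
    page_size = 4096  # assumed; vm_stat's first output line states the real value
    for key in ("Pages free", "Pages inactive", "Pages active", "Pageouts"):
        if key in stats:
            print(f"{key}: {stats[key]} pages ({stats[key] * page_size // 2**20} MiB)")
```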

    Read the article

< Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >