Search Results

Search found 77599 results on 3104 pages for 'test data'.


  • ADO.NET Data Services Entity Framework request error when property setter is internal

    - by Jim Straatman
    I receive an error message when exposing an ADO.NET Data Service using an Entity Framework data model that contains an entity (called "Case") with an internal setter on a property. If I modify the setter to be public (using the entity designer), the data service works fine. I don’t need the entity "Case" exposed in the data service, so I tried to limit which entities are exposed using SetEntitySetAccessRule. This didn’t work, and the service endpoint fails with the same error.

        public static void InitializeService(IDataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("User", EntitySetRights.AllRead);
        }

    The error message is reported in a browser when the .svc endpoint is called. It is very generic, and reads “Request Error. The server encountered an error processing the request. See server logs for more details.” Unfortunately, there are no entries in the System and Application event logs. I found a Stack Overflow question that shows how to configure tracing on the service. After doing so, the following NullReferenceException was reported in the trace log. Does anyone know how to avoid this exception when including an entity with an internal setter?

        System.NullReferenceException: Object reference not set to an instance of an object.
           at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary`2 entitySets, IDictionary`2 knownTypes)
           at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 entitySets)
           at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata()
           at System.Data.Services.DataService`1.CreateProvider(Type dataServiceType, Object dataSourceInstance, DataServiceConfiguration& configuration)
           at System.Data.Services.DataService`1.EnsureProviderAndConfigForRequest()
           at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody)
           at SyncInvokeProcessRequestForMessage(Object, Object[], Object[])
           at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs)
           at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc)
           at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)
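    For reference, this is roughly how the access rules in question are typically tightened. EntitySetRights.None hides an entity set from clients, but note that, as the post itself found, restricting the set's rights did not prevent the metadata error, because the provider still inspects every entity in the model. The context and entity set names below are assumptions based on the post, so treat this as a sketch rather than a fix:

        // Hedged sketch of the service definition discussed above.
        using System.Data.Services;

        public class CaseDataService : DataService<CaseEntities>   // CaseEntities is an assumed context name
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                // Expose only what is needed; hide the entity set with the internal setter.
                config.SetEntitySetAccessRule("User", EntitySetRights.AllRead);
                config.SetEntitySetAccessRule("Case", EntitySetRights.None);
            }
        }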

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process:

        - Putting your database into a source control system, and
        - Running a continuous integration process each time changes are checked in.

    However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:

        - Putting some actual integration tests into the CI process (otherwise, they don’t really do much, do they!?),
        - Deploying the database changes with a managed, automated approach,
        - Monitoring what you’ve just put live, to make sure you haven’t broken anything.

    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:

        - Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
        - Versioning Databases – Branching and Merging, by Scott Allen

    In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
    First of all I need to add some data into my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”. NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the option to link static data. In the next screen, link the data in the Email table by selecting it from the list and clicking “save and close”.

    We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): the display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t, of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really those of the Fab Four) – but just a very basic check of the format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @Output VARCHAR(MAX);
            SET @Output = '';

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%';

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output;
                EXEC tSQLt.Fail @Output;
            END
        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it’s worth just checking that it works! For a positive test, click on “SQL Test” from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now, re-run the same SQL Test as before and you’ll see the test fail. Great – we now know that our test is really doing something! You’ll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn’t want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we’re continuously checking any changes we make (in this case, to the reference data emails), to ensure we’re not breaking a test that we’ve set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in the step above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I’ve moved to a new server here, hence different address and build number): superb, a broken build!! The error message isn’t great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19 Build FAILED. 
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19 (sqlCI target) ->
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build: This should have fixed the build: It worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  
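    As a quick reference, the tSQLt steps used in this walkthrough can also be driven straight from a query window in SSMS; a minimal sketch (assuming the framework is installed on RedGateApp as described above) looks like this:

        -- Create the test class (schema) used above, then run the tests in it.
        EXEC tSQLt.NewTestClass 'MyChecks';

        -- (create or alter [MyChecks].[test Check Email Addresses] as shown earlier)

        EXEC tSQLt.Run 'MyChecks';   -- run a single test class
        EXEC tSQLt.RunAll;           -- or run every tSQLt test in the database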

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i          (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this, or anything like it, automatically - something that reads all the files and asks me which file(s) I want to test? A GUI with checkboxes would be ultimate, but I'll take anything. I'm working in node.js.

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i          (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. I have to update the batch file every time a new file is added or changed. Is there a piece of software, a script or a tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of the test files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate, but I'll take anything. I'm working in node.js.
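    For what it's worth, a minimal sketch of this idea in Node itself: scan a test directory, prompt for a choice, then drive mocha through its programmatic API. The folder name and prompt wording are assumptions, not part of the original setup:

        // pick-tests.js - list test files, ask which to run, then run them with mocha.
        const fs = require('fs');
        const path = require('path');
        const readline = require('readline');
        const Mocha = require('mocha');

        const dir = 'integration';                       // assumed test folder
        const files = fs.readdirSync(dir).filter(f => f.endsWith('.js'));

        files.forEach((f, i) => console.log(`[${i}] ${f}`));
        const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

        rl.question('Run which tests (numbers, space separated, empty = all)? ', answer => {
          rl.close();
          const picked = answer.trim() === ''
            ? files
            : answer.trim().split(/\s+/).map(n => files[Number(n)]).filter(Boolean);

          const mocha = new Mocha();
          picked.forEach(f => mocha.addFile(path.join(dir, f)));
          mocha.run(failures => { process.exitCode = failures ? 1 : 0; });
        });

    New test files are picked up automatically because the list is read from disk on every run, which is the part the hand-maintained batch file can't do.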

    Read the article

  • Parsing the Content-Disposition header's filename in multipart/form-data

    - by Artyom
    Hello. According to the RFC, in multipart/form-data the Content-Disposition header's filename field takes an HTTP quoted string as its parameter - a string between quotes in which the character '\' can escape any other ASCII character. The problem is that web browsers don't do this. IE6 sends:

        Content-Disposition: form-data; name="file"; filename="z:\tmp\test.txt"

    instead of the expected

        Content-Disposition: form-data; name="file"; filename="z:\\tmp\\test.txt"

    The first form should, according to the rules, be parsed as z:tmptest.txt instead of z:\tmp\test.txt. Firefox, Konqueror and Chrome don't escape " characters either, for example:

        Content-Disposition: form-data; name="file"; filename=""test".txt"

    instead of the expected

        Content-Disposition: form-data; name="file"; filename="\"test\".txt"

    So... how would you suggest dealing with this issue?
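    One pragmatic approach seen in practice is to ignore the quoted-string escaping rules entirely: take everything between the opening quote after filename= and the last quote on the line as the value, then strip any client-side path. A rough C sketch of that idea (the function name, buffer sizes and the assumption that filename is the last parameter on the line are all illustrative, not from the post):

        #include <stdio.h>
        #include <string.h>

        /* Extract the filename value from a Content-Disposition header line,
         * tolerating the unescaped quotes and backslashes that browsers send.
         * Assumes filename= is the last parameter on the line. */
        static int extract_filename(const char *header, char *out, size_t outsz)
        {
            const char *p = strstr(header, "filename=");
            if (!p) return -1;
            p += strlen("filename=");
            if (*p != '"') return -1;
            p++;                                   /* first char of the value */
            const char *end = strrchr(p, '"');     /* last quote on the line  */
            if (!end || end <= p) return -1;

            /* strip client-side path components (IE sends "z:\tmp\test.txt") */
            const char *name = p;
            for (const char *q = p; q < end; q++)
                if (*q == '\\' || *q == '/') name = q + 1;

            size_t len = (size_t)(end - name);
            if (len >= outsz) len = outsz - 1;
            memcpy(out, name, len);
            out[len] = '\0';
            return 0;
        }

        int main(void)
        {
            char buf[256];
            const char *h = "Content-Disposition: form-data; name=\"file\"; "
                            "filename=\"z:\\tmp\\test.txt\"";
            if (extract_filename(h, buf, sizeof buf) == 0)
                printf("%s\n", buf);               /* prints: test.txt */
            return 0;
        }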

    Read the article

  • Import exponential fixed-width format data into Excel

    - by Tom Daniel
    I've received a bunch of text data files consisting of lots of records (30K/file) of 3 fields, each a 5-place number in exponential format: s0.nnnnnEsee (where s is +/-, n is a digit and ee is the exponent, always 2 digits). When I open a file in Notepad, the format is perfectly uniform throughout each file, but when I import it to Excel using Data|Import|Fixed Width, many of the data values get messed up, no matter what format (text, exponential, various custom tries) I assign to the cells. Looking at the Notepad version, it appears that leading + signs were replaced with a space in the data file, but the sign of the exponent is always there. This means that some fields begin with a space, and this appears to confuse the Excel import routine. I get the same result in Excel 2003 and 2007. I'm sure there's a straightforward solution (hopefully without a messy VBA routine), but I can't figure out what to try next. :-) To clarify (hopefully), here are some input records and the corresponding text input to Excel:

        Notepad                                  Excel
        -0.11311E+01 0.10431E-04 0.27018E-03     -0.11311E   1.0431E-05   2.7018E-04
         0.19608E+00-0.81414E-02-0.89553E-02      0.19608E   -8.1414E-03   8.9553E-03
        etc.

    Whoopee! Solved my own problem - in the spirit of Jeopardy, now that I've begun the question, here's the answer: use a different "File Origin" - several other than the default "Unicode UTF..." work fine! What a pain. Hope this helps somebody else avoid a few unpleasant hours! Aloha from Kona, Tom

    Read the article

  • Recover data from quick formatted DVD-R

    - by Andrii Kalytiiuk
    I need to recover data from a quick-formatted DVD-R. Please advise a free-of-charge option (cheap commercial tools will be OK as well). The disk was partially recorded with the Windows built-in disk recorder and the recording most likely was not complete. Afterwards I inserted the partially recorded DVD again and, on the Windows recorder's 'How to use this disk?' message box, selected 'use for CD/DVD player' - and the data was completely lost, as a new recording session was started. Photo files were recorded on the disk. What I have tried so far:

        - DiskInternals CD-DVD Recovery - sees 5 jpg files but can't show a preview. The tool is commercial, and the trial version does not allow recovering files.
        - CDCheck - doesn't see any files and reports errors when attempting to scan the DVD.
        - CD Recovery Toolbox Free - does not even recognize the DVD drive.
        - ISO Buster - recognizes two files: one MP3 file covering 99% of the recorded size and one ARC file of about 100 KB.
        - MiniTool Power Data Recovery (Free Edition) - does not see any files on the DVD.
        - Stellar Phoenix CD DVD Data Recovery - does not see any files.
        - BinaryBiz Virtual Lab - sees the DVD disk but needs a license to browse content.

    Please advise how it is possible to recover files from this DVD.

    Read the article

  • VerifyError When Running jUnit Test on Android 1.6

    - by DKnowles
    Here's what I'm trying to run on Android 1.6: package com.healthlogger.test; public class AllTests extends TestSuite { public static Test suite() { return new TestSuiteBuilder(AllTests.class).includeAllPackagesUnderHere().build(); } } and: package com.healthlogger.test; public class RecordTest extends AndroidTestCase { /** * Ensures that the constructor will not take a null data tag. */ @Test(expected=AssertionFailedError.class) public void testNullDataTagInConstructor() { Record r = new Record(null, Calendar.getInstance(), "Data"); fail("Failed to catch null data tag."); } } The main project is HealthLogger. These are run from a separate test project (HealthLoggerTest). HealthLogger and jUnit4 are in HealthLoggerTest's build path. jUnit4 is also in HealthLogger's build path. The class "Record" is located in com.healthlogger. Commenting out the "@Test..." and "Record r..." lines allows this test to run. When they are uncommented, I get a VerifyError exception. I am severely blocked by this; why is it happening? EDIT: some info from logcat after the crash: E/AndroidRuntime( 3723): Uncaught handler: thread main exiting due to uncaught exception E/AndroidRuntime( 3723): java.lang.VerifyError: com.healthlogger.test.RecordTest E/AndroidRuntime( 3723): at java.lang.Class.getDeclaredConstructors(Native Method) E/AndroidRuntime( 3723): at java.lang.Class.getConstructors(Class.java:507) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.hasValidConstructor(TestGrouping.java:226) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:215) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:211) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.select(TestGrouping.java:170) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.selectTestClasses(TestGrouping.java:160) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.testCaseClassesInPackage(TestGrouping.java:154) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.addPackagesRecursive(TestGrouping.java:115) E/AndroidRuntime( 3723): at android.test.suitebuilder.TestSuiteBuilder.includePackages(TestSuiteBuilder.java:103) E/AndroidRuntime( 3723): at android.test.InstrumentationTestRunner.onCreate(InstrumentationTestRunner.java:321) E/AndroidRuntime( 3723): at android.app.ActivityThread.handleBindApplication(ActivityThread.java:3848) E/AndroidRuntime( 3723): at android.app.ActivityThread.access$2800(ActivityThread.java:116) E/AndroidRuntime( 3723): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1831) E/AndroidRuntime( 3723): at android.os.Handler.dispatchMessage(Handler.java:99) E/AndroidRuntime( 3723): at android.os.Looper.loop(Looper.java:123) E/AndroidRuntime( 3723): at android.app.ActivityThread.main(ActivityThread.java:4203) E/AndroidRuntime( 3723): at java.lang.reflect.Method.invokeNative(Native Method) E/AndroidRuntime( 3723): at java.lang.reflect.Method.invoke(Method.java:521) E/AndroidRuntime( 3723): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791) E/AndroidRuntime( 3723): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549) E/AndroidRuntime( 3723): at dalvik.system.NativeStart.main(Native Method)
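    For context, the instrumentation test runner that ships with Android 1.6 is built on JUnit 3, so pulling JUnit 4 types such as @Test(expected=...) into a test APK is a common cause of this kind of VerifyError on Dalvik. A JUnit 3-style rewrite of the failing test, sketched below under that assumption (the import path for Record is assumed), removes the JUnit 4 dependency altogether:

        package com.healthlogger.test;

        import java.util.Calendar;

        import junit.framework.AssertionFailedError;

        import android.test.AndroidTestCase;

        import com.healthlogger.Record;   // assumed location of Record

        public class RecordTest extends AndroidTestCase {

            // JUnit 3 style: the runner picks this up via the "test" name prefix;
            // no @Test annotation (and no junit4 jar on the classpath) is needed.
            public void testNullDataTagInConstructor() {
                try {
                    new Record(null, Calendar.getInstance(), "Data");
                } catch (AssertionFailedError expected) {
                    return;   // the constructor rejected the null data tag, as hoped
                }
                fail("Failed to catch null data tag.");
            }
        }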

    Read the article

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite. The Oracle Application Testing Suite (OATS) is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite comprises:

        1. Oracle Load Testing, for scalability, performance, and load testing
        2. Oracle Functional Testing, for automated functional and regression testing
        3. Oracle Test Manager, for test process management, test execution, and defect tracking

    Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

    Read the article

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the ‘SubView.’ In a nutshell, a SubView is a subset of your model.

    The Challenge: I’ve just created a model which represents my entire ____________ application. We’ll call it ‘residential lending.’ Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I’ve spent several hours breaking out the tables to one or more SubViews, but I think I may have missed a few. Is there an easy way to see what tables aren’t in at least ONE subview?

    The Answer: Yes, mostly. The "mostly" comes about from the way I’m going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema. So if you don’t have the Reporting Schema set up, you’ll need to do so. Got it? Good, let’s proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema…meta-meta data! You could reverse engineer the data modeler reporting schema to a new data model, or you could just reference the PDFs in the \datamodeler\reports\Reporting Schema diagrams directory. Here’s a hint: it’s THIS one.

    The Query: Well, it’s actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I’m going to get a listing including each version of the model. So I can query based on design and version, or in this case, the timestamp of when it was added to the repository. We’ll get that from the DMRS_DESIGNS table:

        SELECT DISTINCT design_name, design_ovid, date_published
        FROM DMRS_designs

    Then I’m going to feed the design_ovid down to a subquery for my child report:

        select name, count(distinct diagram_id)
        from DMRS_DIAGRAM_ELEMENTS
        where design_ovid = :DESIGN_OVID
          and type = 'Table'
        group by name
        having count(distinct diagram_id) < 2
        order by count(distinct diagram_id) desc

    Each diagram element has an entry in this table, so I need to filter on type = 'Table'. Each design has AT LEAST one diagram, the master diagram. So for any relational table in this table, having only one listing means it’s not in any SubViews. If you have overloaded object names, which is VERY possible, you’ll want to do the report off of OBJECT_ID, but then you’ll need to correlate that to the NAME, as I doubt you’re so intimate with your designs that you recognize the GUIDs. So I’m going to cheat and just stick with names, but I think you get the gist.

    My Model: Of my almost 90 tables, how many have I not added to at least one SubView? Now let’s run my report! Voila! My 'BEER2' table isn’t in any SubView! It says '1' because the main model diagram counts as a view. So if the count came back as '2', that would mean the table was in the main model diagram and in 1 SubView diagram. And I know what you’re thinking: what kind of residential lending program would have a table called 'BEER2'? Let’s just say that my business model has some kinks to work out!
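    Out of curiosity, the two reports can also be collapsed into a single query over the same reporting-schema tables; a sketch that uses only the columns referenced above (adjust to your own repository) might be:

        -- Tables that appear in the master diagram only (i.e. in no SubView), per design.
        SELECT d.design_name,
               e.name AS table_name
        FROM   dmrs_designs d
               JOIN dmrs_diagram_elements e ON e.design_ovid = d.design_ovid
        WHERE  e.type = 'Table'
        GROUP  BY d.design_name, e.name
        HAVING COUNT(DISTINCT e.diagram_id) < 2
        ORDER  BY d.design_name, e.name;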

    Read the article

  • Announcing SO-Aware Test Workbench

    - by gsusx
    Yesterday was a big day for Tellago Studios. After a few months of heads-down work, we announced the release of the SO-Aware Test Workbench, a tool which brings sophisticated performance testing and test visualization capabilities to the WCF world. This work has been the result of the feedback received from many of our SO-Aware and Tellago customers on how to improve WCF testing. More importantly, with the SO-Aware Test Workbench we are trying to address what has been one of the biggest challenges...(read more)

    Read the article

  • How to Test and Deploy Applications Faster

    - by rickramsey
    photo courtesy of mtoleric via Flickr

    If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint.

    Developer Webinar: How to Test and Deploy Applications Faster - April 10. Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT.

    Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express. We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications:

        - How ZFS and Zones create the perfect developer sandbox.
        - What's the best way for a developer to use DTrace.
        - How Crossbow's network bandwidth controls can improve an application's performance.

    To borrow the classic Ed Sullivan accolade, it's a "really good show."

    White Paper: What's New For Application Developers. Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more.

    - Rick

    Read the article

  • Refreshing imported MySQL data with MySQL for Excel

    - by Javier Rivera
    Welcome to another blog post from the MySQL for Excel Team. Today we're going to talk about a new feature included since MySQL for Excel 1.3.0; you can install the latest GA or maintenance version using the MySQL Installer, or optionally you can download any GA or non-GA version directly from the MySQL Developer Zone.

    As some users suggested in our forums, we should be maintaining the link between tables and Excel not only when editing data through the Edit MySQL Data option, but also when importing data via Import MySQL Data. Before 1.3.0 this process only provided you with an offline copy of the table's data in Excel, and you had no way to refresh that information from the DB later on. Now, with this new feature, we'll show you how easy it is to work with the latest available information at all times. This feature is transparent to you (it doesn't require additional steps to work, as long as you had the Create an Excel Table for the imported MySQL table data option enabled. To ensure you have this option checked, click on Advanced Options... after the Import Data dialog is displayed). The current blog post assumes you already know how to import data into Excel; you can always take a look at our previous post How To - Guide to Importing Data from a MySQL Database to Excel using MySQL for Excel if you need further reference on that topic.

    After importing data from a MySQL table into Excel, you can refresh the data in 3 ways:

        1. Simply right-click over the range of the imported data to show the pop-up menu, then click the Refresh button to obtain the latest copy of the data in the table.
        2. Click the Refresh button on the Data ribbon.
        3. Click the Refresh All button in the Data ribbon (beware: this will refresh all Excel tables in the Workbook).

    Please take note of a couple of details here. The first one is about the size of the table: if, by the time you refresh the table, new columns have been added to it and you originally imported all columns, the table will grow to the right. The same applies to rows: if the table has new rows and you did not limit the results, the table will grow to the bottom of the sheet in Excel. The second detail you should take into account is that this operation will overwrite any changes done to the cells after the table was originally imported or previously refreshed.

    Now, with this new feature, imported data remains linked to the data source and is available to be updated at all times. It empowers the user to always be able to work with the latest version of the imported MySQL data. We hope you like this new feature and give it a try! Remember that your feedback is very important to us, so drop us a message with your comments and suggestions for this or other features, and follow us on our social media channels:

        MySQL on Windows (this) Blog: https://blogs.oracle.com/MySqlOnWindows/
        MySQL for Excel forum: http://forums.mysql.com/list.php?172
        Facebook: http://www.facebook.com/mysql
        YouTube channel: https://www.youtube.com/user/MySQLChannel

    Thanks!

    Read the article

  • Analyzing data from the same tables in different DB instances

    - by Oscar Reyes
    Short version: how can I map two columns from tables A and B if they both have a common identifier, which in turn may have two values in column C? Let's say:

        A --- 1 , 2
        B --- ? , 3
        C --- 45, 2
              45, 3

    Using table C I know that ids 2 and 3 belong to the same item (45), and thus "?" in table B should be 1. What query could do something like that?

    EDIT: Long version omitted. It was really boring/confusing.

    EDIT: I'm posting some output here. From this query:

        select distinct(rolein), activityin
        from taskperformance@dm_prod
        where activityin in (
            select activityin from activities@dm_prod
            where activityid in (
                select activityid from activities@dm_prod
                where activityin in (
                    select distinct(activityin) from taskperformance where rolein = 0
                )
            )
        )

    I have the following parts:

        select distinct(activityin) from taskperformance where rolein = 0

    Output: http://question1337216.pastebin.com/f5039557

        select activityin from activities@dm_prod
        where activityid in (
            select activityid from activities@dm_prod
            where activityin in (
                select distinct(activityin) from taskperformance where rolein = 0
            )
        )

    Output: http://question1337216.pastebin.com/f6cef9393

    And finally, the full query above. Output: http://question1337216.pastebin.com/f346057bd

    Take for instance activityin 335 from the first query (from taskperformance in B). It is present in activities from A. But it is not in taskperformance in A (though the related activities 92, 208, 335, 595 are present in the result). The corresponding rolein is: 1
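    For the short version, a self-join through C along these lines might work; the table and column names here (a(value, id), b(value, id), c(item, id)) are invented for illustration, since the post doesn't give them:

        -- Fill in B's missing value with A's value for the same item in C.
        select b.id,
               a.value as mapped_value
        from   c ca
               join a    on a.id    = ca.id      -- A's row for one id of the item
               join c cb on cb.item = ca.item    -- the other id(s) of the same item (e.g. 45)
                        and cb.id  <> ca.id
               join b    on b.id    = cb.id      -- B's row for that id
        where  b.value is null;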

    Read the article

  • Extract wrong data from a frame in C?

    - by ipkiss
    I am writing a program that reads data from the serial port on Linux. The data is sent by another device with the following frame format:

        | start | Command | Data           | CRC | End  |
        | 0x02  | 0x41    | (0-127 octets) |     | 0x03 |

    The Data field contains 127 octets as shown; octets 1,2 contain one type of data and octets 3,4 contain another. I need to get these data. Because in C one byte can only hold one character, and the start field of the frame is 0x02, which means STX, which is 3 characters. So, in order to test my program, on the sender side I construct an array as the frame formatted above, like:

        char frame[254];
        frame[0] = 0x02; // starting field
        frame[1] = 0x41; // command field, which is character 'A'
        // ..so on..

    And then on the receiver side I take out the fields like:

        char result[254];
        // read data
        read(result);
        printf("command = %c", result[1]); // get the command field of the frame
        // get other fields' values

    The command field value (result[1]) is not character 'A'. I think this is because the first field value of the frame is 0x02 (STX), occupying the first 3 places in the array frame and leading to the wrong results on the receiver side. How can I correct the issue, or am I doing something wrong on the sender side? Thanks all.

    Related questions:
    http://stackoverflow.com/questions/2500567/parse-and-read-data-frame-in-c
    http://stackoverflow.com/questions/2531779/clear-data-at-serial-port-in-linux-in-c
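    As a point of reference, 0x02 is a single byte (the ASCII STX control character), so it occupies exactly one element of the array rather than three. A self-contained sketch that builds and reads such a frame in one process (no serial port involved, data values invented) illustrates the indexing:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            unsigned char frame[254];

            frame[0] = 0x02;              /* start: STX, a single byte */
            frame[1] = 0x41;              /* command: 'A'              */
            frame[2] = 0x12;              /* first octet of Data       */
            frame[3] = 0x34;              /* second octet of Data      */
            /* ... remaining data, CRC, then 0x03 (ETX) at the end ... */

            /* pretend this buffer arrived from the serial port */
            unsigned char result[254];
            memcpy(result, frame, sizeof frame);

            printf("start   = 0x%02X\n", result[0]);  /* 0x02 */
            printf("command = %c\n",     result[1]);  /* A    */
            printf("data[0] = 0x%02X\n", result[2]);  /* 0x12 */
            return 0;
        }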

    Read the article

  • asp.net mvc - How to create fake test objects quickly and efficiently

    - by Simon G
    Hi, I'm currently testing the controllers in my MVC app and I'm creating a fake repository for testing. However, I seem to be writing more code and spending more time on the fakes than I do on the actual repositories. Is this right? The code I have is as follows:

    Controller

        public partial class SomethingController : Controller
        {
            IRepository repository;

            public SomethingController(IRepository rep)
            {
                repository = rep;
            }

            public virtual ActionResult Index()
            {
                // Some logic
                var model = repository.GetSomething();
                return View(model);
            }
        }

    IRepository

        public interface IRepository
        {
            List<Something> GetSomething();
        }

    Fake Repository

        public class FakeSomethingRepository : IRepository
        {
            private List<Something> somethingList;

            public FakeSomethingRepository(List<Something> somethings)
            {
                somethingList = somethings;
            }

            public List<Something> GetSomething()
            {
                return somethingList;
            }
        }

    Fake Data class

        class FakeSomethingData
        {
            public static List<Something> CreateSomethingData()
            {
                var somethings = new List<Something>();
                for (int i = 0; i < 100; i++)
                {
                    somethings.Add(new Something
                    {
                        value1 = String.Format("value{0}", i),
                        value2 = String.Format("value{0}", i),
                        value3 = String.Format("value{0}", i)
                    });
                }
                return somethings;
            }
        }

    Actual Test

        [TestClass]
        public class SomethingControllerTest
        {
            SomethingController CreateSomethingController()
            {
                var testData = FakeSomethingData.CreateSomethingData();
                var repository = new FakeSomethingRepository(testData);
                SomethingController controller = new SomethingController(repository);
                return controller;
            }

            [TestMethod]
            public void SomeTest()
            {
                // Arrange
                var controller = CreateSomethingController();

                // Act
                // Some test here

                // Assert
            }
        }

    All this seems to be a lot of extra code, especially as I have more than one repository. Is there a more efficient way of doing this? Maybe using mocks? Thanks
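    On the closing question: yes, a mocking library such as Moq can remove most of this boilerplate by stubbing IRepository inline, so the hand-written fake repository class is no longer needed. A hedged sketch reusing the types above (Moq syntax; the assertion is a placeholder, since what to check depends on the view model):

        using System.Collections.Generic;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;

        [TestClass]
        public class SomethingControllerMoqTest
        {
            [TestMethod]
            public void Index_ReturnsAResult()
            {
                // Arrange: stub the repository without writing a fake class.
                var testData = FakeSomethingData.CreateSomethingData();
                var repository = new Mock<IRepository>();
                repository.Setup(r => r.GetSomething()).Returns(testData);

                var controller = new SomethingController(repository.Object);

                // Act
                var result = controller.Index();

                // Assert
                Assert.IsNotNull(result);
            }
        }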

    Read the article

  • Oracle Data Integrator at Oracle OpenWorld 2012: Demonstrations

    - by Irem Radzik
    By Mike Eisterer

    Oracle OpenWorld is just a few days away, and we look forward to showing Oracle Data Integrator's comprehensive data integration platform, which delivers critical data integration requirements: from high-volume, high-performance batch loads, to event-driven, trickle-feed integration processes, to SOA-enabled data services. Several Oracle Data Integrator demonstrations will be available October 1st through the 3rd:

        - Oracle Data Integrator and Oracle GoldenGate for Oracle Applications, in Moscone South, Right - S-240
        - Oracle Data Integrator and Service Integration, in Moscone South, Right - S-235
        - Oracle Data Integrator for Big Data, in Moscone South, Right - S-236
        - Oracle Data Integrator for Enterprise Data Warehousing, in Moscone South, Right - S-238

    Additional information about OOW 2012 may be found for each of these demonstrations. If you are not able to attend OpenWorld, please check out our latest resources for Data Integration.  

    Read the article

  • Fraud Detection with the SQL Server Suite Part 1

    - by Dejan Sarka
    While working on different fraud detection projects, I developed my own approach to the solution for this problem. In my PASS Summit 2013 session I am introducing this approach. I also wrote a whitepaper on the same topic, which was generously reviewed by my friend Matija Lah. In order to spread this knowledge faster, I am starting a series of blog posts which will at the end make the whole whitepaper. Abstract With the massive usage of credit cards and web applications for banking and payment processing, the number of fraudulent transactions is growing rapidly and on a global scale. Several fraud detection algorithms are available within a variety of different products. In this paper, we focus on using the Microsoft SQL Server suite for this purpose. In addition, we will explain our original approach to solving the problem by introducing a continuous learning procedure. Our preferred type of service is mentoring; it allows us to perform the work and consulting together with transferring the knowledge onto the customer, thus making it possible for a customer to continue to learn independently. This paper is based on practical experience with different projects covering online banking and credit card usage. Introduction A fraud is a criminal or deceptive activity with the intention of achieving financial or some other gain. Fraud can appear in multiple business areas. You can find a detailed overview of the business domains where fraud can take place in Sahin Y., & Duman E. (2011), Detecting Credit Card Fraud by Decision Trees and Support Vector Machines, Proceedings of the International MultiConference of Engineers and Computer Scientists 2011 Vol 1. Hong Kong: IMECS. Dealing with frauds includes fraud prevention and fraud detection. Fraud prevention is a proactive mechanism, which tries to disable frauds by using previous knowledge. Fraud detection is a reactive mechanism with the goal of detecting suspicious behavior when a fraudster surpasses the fraud prevention mechanism. A fraud detection mechanism checks every transaction and assigns a weight in terms of probability between 0 and 1 that represents a score for evaluating whether a transaction is fraudulent or not. A fraud detection mechanism cannot detect frauds with a probability of 100%; therefore, manual transaction checking must also be available. With fraud detection, this manual part can focus on the most suspicious transactions. This way, an unchanged number of supervisors can detect significantly more frauds than could be achieved with traditional methods of selecting which transactions to check, for example with random sampling. There are two principal data mining techniques available both in general data mining as well as in specific fraud detection techniques: supervised or directed and unsupervised or undirected. Supervised techniques or data mining models use previous knowledge. Typically, existing transactions are marked with a flag denoting whether a particular transaction is fraudulent or not. Customers at some point in time do report frauds, and the transactional system should be capable of accepting such a flag. Supervised data mining algorithms try to explain the value of this flag by using different input variables. When the patterns and rules that lead to frauds are learned through the model training process, they can be used for prediction of the fraud flag on new incoming transactions. 
    Unsupervised techniques analyze data without prior knowledge, without the fraud flag; they try to find transactions which do not resemble other transactions, i.e. outliers. In both cases, there should be more frauds in the data set selected for checking by using the data mining knowledge compared to selecting the data set with simpler methods; this is known as the lift of a model. Typically, we compare the lift with random sampling. The supervised methods typically give a much better lift than the unsupervised ones. However, we must use the unsupervised ones when we do not have any previous knowledge. Furthermore, unsupervised methods are useful for controlling whether the supervised models are still efficient. Accuracy of the predictions drops over time. Patterns of credit card usage, for example, change over time. In addition, fraudsters continuously learn as well. Therefore, it is important to check the efficiency of the predictive models with the undirected ones. When the difference between the lift of the supervised models and the lift of the unsupervised models drops, it is time to refine the supervised models. However, the unsupervised models can become obsolete as well. It is also important to measure the overall efficiency of both supervised and unsupervised models over time. We can compare the number of predicted frauds with the total number of frauds, which includes predicted and reported occurrences. For measuring behavior across time, specific analytical databases called data warehouses (DW) and on-line analytical processing (OLAP) systems can be employed. By controlling the supervised models with unsupervised ones and by using an OLAP system or DW reports to control both, a continuous learning infrastructure can be established.

    There are many difficulties in developing a fraud detection system. As has already been mentioned, fraudsters continuously learn, and the patterns change. The exchange of experiences and ideas can be very limited due to privacy concerns. In addition, both data sets and results might be censored, as the companies generally do not want to publicly expose actual fraudulent behaviors. Therefore it can be quite difficult, if not impossible, to cross-evaluate the models using data from different companies and different business areas. This fact stresses the importance of continuous learning even more. Finally, the number of frauds in the total number of transactions is small; typically much less than 1% of transactions are fraudulent. Some predictive data mining algorithms do not give good results when the target state is represented with a very low frequency. Data preparation techniques like oversampling and undersampling can help overcome the shortcomings of many algorithms.

    The SQL Server suite includes all of the software required to create, deploy and maintain a fraud detection infrastructure. The Database Engine is the relational database management system (RDBMS), which supports all activity needed for data preparation and for data warehouses. SQL Server Analysis Services (SSAS) supports OLAP and data mining (in version 2012, you need to install SSAS in multidimensional and data mining mode; this was the only mode in previous versions of SSAS, while SSAS 2012 also supports the tabular mode, which does not include data mining). Additional products from the suite can be useful as well. SQL Server Integration Services (SSIS) is a tool for developing extract-transform-load (ETL) applications. 
    SSIS is typically used for loading a DW, and in addition it can use SSAS data mining models for building intelligent data flows. SQL Server Reporting Services (SSRS) is useful for presenting the results in a variety of reports. Data Quality Services (DQS) eases the occasional data cleansing process by maintaining a knowledge base. Master Data Services is an application that helps companies maintain a central, authoritative source of their master data, i.e. the most important data to any organization.

    For an overview of the SQL Server business intelligence (BI) part of the suite that includes the Database Engine, SSAS and SSRS, please refer to Veerman E., Lachev T., & Sarka D. (2009). MCTS Self-Paced Training Kit (Exam 70-448): Microsoft® SQL Server® 2008 Business Intelligence Development and Maintenance. MS Press. For an overview of the enterprise information management (EIM) part that includes SSIS, DQS and MDS, please refer to Sarka D., Lah M., & Jerkic G. (2012). Training Kit (Exam 70-463): Implementing a Data Warehouse with Microsoft® SQL Server® 2012. O'Reilly. For details about SSAS data mining, please refer to MacLennan J., Tang Z., & Crivat B. (2009). Data Mining with Microsoft SQL Server 2008. Wiley.

    The SQL Server Data Mining Add-ins for Office, a free download for Office versions 2007, 2010 and 2013, bring the power of data mining to Excel, enabling advanced analytics in Excel. PowerPivot for Excel, which is freely downloadable for Excel 2010 and is already included in Excel 2013, brings OLAP functionalities directly into Excel, making it possible for an advanced analyst to build a complete learning infrastructure using a familiar tool. This way, many more people, including employees in subsidiaries, can contribute to the learning process by examining local transactions and quickly identifying new patterns.
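    As a side note on the oversampling technique mentioned above, a minimal T-SQL sketch of the idea (table and column names are invented for illustration) might simply replicate the rare fraudulent rows when building the training set:

        -- Build a training set where the rare fraud target state is better represented.
        SELECT t.*
        FROM   dbo.Transactions AS t
        WHERE  t.IsFraud = 0
        UNION ALL
        SELECT t.*
        FROM   dbo.Transactions AS t
               CROSS JOIN (VALUES (1), (2), (3), (4), (5)) AS v(n)   -- repeat fraud rows 5 times
        WHERE  t.IsFraud = 1;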

    Read the article

  • Which algorithms/data structures should I "recognize" and know by name?

    - by Earlz
    I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point, though, is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I could recognize and know by name? Note: I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them; I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent a locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in use? The solution would be a queue or stack. What I'm looking for are things like "in what situation should a B-tree be used?" or "what search algorithm should be used here?", etc. And maybe a quick introduction to how the more complex (but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms, but I think that's a bit overkill. So I'm looking more for the essential things I should recognize.
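    To make the locker example concrete, here is a minimal, purely illustrative C# sketch of the queue-of-free-lockers idea described above (a stack works just as well):

        using System;
        using System.Collections.Generic;

        class LockerBank
        {
            private readonly Queue<int> free = new Queue<int>();
            private readonly HashSet<int> rented = new HashSet<int>();

            public LockerBank(int count)
            {
                // Lockers are labeled 0..count-1 and all start out free.
                for (int i = 0; i < count; i++) free.Enqueue(i);
            }

            public int Rent()
            {
                if (free.Count == 0) throw new InvalidOperationException("No lockers free");
                int key = free.Dequeue();
                rented.Add(key);
                return key;
            }

            public void Return(int key)
            {
                // Only re-add keys that were actually handed out.
                if (rented.Remove(key)) free.Enqueue(key);
            }
        }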

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need a deeper understanding of data science than Drew Conway's popular Venn diagram model or Josh Wills' tongue-in-cheek characterization - "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician." - can provide, two relatively recent studies are worth reading.

    'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners. The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging. The survey-only nature of the data, the restriction of scope to just skills, and the suggested models of skill-profiles make this feel like the sort of exercise that data scientists undertake as an everyday task: collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise. That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.

    The paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist. As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis. The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general. I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work.

    We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Programming test for ASP.NET C# developer job - Opinions please!

    - by Indy
    Hi all, We are hiring a .NET C# developer and I have developed a technical test for the candidates to complete. They have an hour, and it has two parts: some knowledge-based questions covering ASP.NET, C# and SQL, and a small practical test. I'd appreciate feedback on the test - is it sufficient to test a programmer's ability? What would you change, if anything?

    Part One.

        1. What are the events fired as part of the ASP.NET page lifecycle? What interesting things can you do at each?
        2. How does ViewState work and why is it either useful or bad?
        3. What is a common way to create web services in ASP.NET 2.0?
        4. What is the GAC?
        5. What is boxing?
        6. What is a delegate?
        7. The C# keyword 'int' maps to which .NET type?
        8. Explain the difference between a Stored Procedure and a Trigger.
        9. What is an OUTER JOIN?
        10. What is @@IDENTITY?

    Part Two. You are provided with the Northwind database and the attached DB relationship diagram. Please create a page which provides users with the following functionality (you don't need to be too concerned with the presentation detail of the page):

        1. Select a customer from a list, and see all the orders placed by that customer.
        2. For the same customer, find all their orders which are Beverages and where the quantity is more than 5.

    I was careful about striking the right balance of difficulty, as it is only an hour-long test. I was able to complete the practical test in under 30 minutes using SqlDataSource and the query designer in Visual Studio. For the test questions, I am looking to see how candidates approach them logically and whether they use the tools available. Many thanks!
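    For reference, the practical part boils down to two queries against the standard Northwind schema; a sketch, assuming the usual Northwind table and column names and with @CustomerID supplied by the page's customer list, might look like:

        -- 1. All orders placed by a given customer.
        SELECT o.OrderID, o.OrderDate, o.ShippedDate
        FROM   Orders AS o
        WHERE  o.CustomerID = @CustomerID;

        -- 2. The same customer's orders containing Beverages with quantity > 5.
        SELECT DISTINCT o.OrderID, o.OrderDate, p.ProductName, od.Quantity
        FROM   Orders AS o
               JOIN [Order Details] AS od ON od.OrderID   = o.OrderID
               JOIN Products        AS p  ON p.ProductID  = od.ProductID
               JOIN Categories      AS c  ON c.CategoryID = p.CategoryID
        WHERE  o.CustomerID   = @CustomerID
          AND  c.CategoryName = 'Beverages'
          AND  od.Quantity    > 5;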

    Read the article

  • IOMEGA 500GB hard disk data recovery

    - by Vineeth
    Last November I bought an IOMEGA 500GB Prestige hard disk. Yesterday, unfortunately, the hard disk fell from my table. Since that incident, when I connect the disk, Windows asks me to format it before use, but I haven't formatted it yet. The hard disk holds about 320GB of data. I have tried every way I could think of to access the disk. I tried using DOS; it shows "data error (cyclic redundancy check)". I have a 3-year warranty. Will I be covered under warranty if I report this issue to IOMEGA? And can I get my data back?

    Read the article

  • Mock Object Data

    - by Nissan Fan
    I'd like to mock up object data, not the objects themselves. In other words, I would like to generate a collection of n objects and pass it into a function which generates random data strings and numbers for them. Is there anything that does this? Think of it as a Lorem Ipsum for object data. Constraints around numerical ranges etc. are not necessary, but would be a bonus.
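    Libraries in this space do exist (NBuilder and AutoFixture, for example, generate populated object graphs for .NET), but if you want something dependency-free, a reflection-based sketch along these lines handles simple string and numeric properties; treat the names and type coverage as illustrative rather than a finished tool:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class ObjectFiller
        {
            private static readonly Random Rng = new Random();

            // Create n instances of T and fill their writable properties with random data.
            public static List<T> CreateMany<T>(int count) where T : new()
            {
                return Enumerable.Range(0, count).Select(_ => Fill(new T())).ToList();
            }

            public static T Fill<T>(T instance)
            {
                foreach (var prop in typeof(T).GetProperties().Where(p => p.CanWrite))
                {
                    if (prop.PropertyType == typeof(string))
                        prop.SetValue(instance, "lorem" + Rng.Next(1000), null);
                    else if (prop.PropertyType == typeof(int))
                        prop.SetValue(instance, Rng.Next(1000), null);
                    else if (prop.PropertyType == typeof(decimal))
                        prop.SetValue(instance, (decimal)Rng.NextDouble() * 1000m, null);
                    else if (prop.PropertyType == typeof(DateTime))
                        prop.SetValue(instance, DateTime.Today.AddDays(-Rng.Next(365)), null);
                }
                return instance;
            }
        }

    Usage would then be something like var people = ObjectFiller.CreateMany<Person>(50); for any POCO with a default constructor.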

    Read the article
