Search Results

Search found 77599 results on 3104 pages for 'test data'.


  • ASP.NET MVC 2 from Scratch – Part 1: Listing Data from Database

    - by Max
    Part 1 - Listing Data from Database: Let us now learn ASP.NET MVC 2 from scratch by actually developing a front-end website for the Chinook database, which is an alternative to the traditional Northwind database. You can get the Chinook database from here. As always, the best way to learn something is by working on it and doing something. The Chinook database has the following schema; a quick look will help us implement the application in an efficient way. Let us first implement a grid view table with the list of Employees and some details; this table also has Details, Edit and Delete buttons on it to perform some operations. This series of posts will concentrate on creating a simple CRUD front end for the Chinook DB using ASP.NET MVC 2. In this post, we will look at listing all the Employees in the database in a tabular format, from which we can then edit and delete them as required. So in this post we will concentrate on setting up our environment and then just designing a page to show tabular information from the database.

    We first need to set up the SQL Server database: download the required version and set it up on your localhost. Then we need to add the LINQ to SQL classes required for us to interact with our database. After you do that, just use the Server Explorer in VS 2010 to navigate to the database, expand the tables node, and drag and drop all the tables onto the Object Relational Designer surface - and there you go, the tables are visualized as classes. As simple as that.

    For the purpose of displaying the data from Employee in a table, we will show only the EmployeeID, Firstname and Lastname, so let us create a class to hold this information. Let us add a new class called EmployeeList to the ViewModels. We will send this data model to the View, where it can be displayed in the page.

    public class EmployeeList
    {
        public int EmployeeID { get; set; }
        public string Firstname { get; set; }
        public string Lastname { get; set; }

        public EmployeeList(int empID, string fname, string lname)
        {
            this.EmployeeID = empID;
            this.Firstname = fname;
            this.Lastname = lname;
        }
    }

    OK, now we have the backend ready. Let us now look at the front-end view. We will first create a master page called Site.Master and reuse it across the site. The Site.Master content will be:

    <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Site.Master.cs" Inherits="ChinookMvcSample.Views.Shared.Site" %>
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head id="Head1" runat="server">
        <title></title>
        <style type="text/css">
            html { background-color: gray; }
            .content
            {
                width: 880px;
                position: relative;
                background-color: #ffffff;
                min-width: 880px;
                min-height: 800px;
                float: inherit;
                text-align: justify;
            }
        </style>
        <script src="../../Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
        <asp:ContentPlaceHolder ID="head" runat="server">
        </asp:ContentPlaceHolder>
    </head>
    <body>
        <center>
            <h1>My Website</h1>
            <div class="content">
                <asp:ContentPlaceHolder ID="body" runat="server">
                </asp:ContentPlaceHolder>
            </div>
        </center>
    </body>
    </html>

    The code-behind Site.Master.cs does not contain anything. In the actual Index.aspx view, we add the code to simply iterate through the collection of EmployeeList that was sent to the View via the Controller.
    At the top of the Index.aspx view, we have this Inherits attribute: Inherits="System.Web.Mvc.ViewPage<IEnumerable<ChinookMvcSample.ViewModels.EmployeeList>>". In this line we declare that the page consumes an IEnumerable collection of EmployeeList. Once we specify this and compile the project, our Index.aspx page can consume the EmployeeList object and access all its methods and properties.

    <table class="styled" cellpadding="3" border="0" cellspacing="0">
        <tr>
            <th colspan="3"></th>
            <th>First Name</th>
            <th>Last Name</th>
        </tr>
        <% foreach (var item in Model) { %>
        <tr>
            <td align="center"><%: Html.ActionLink("Edit", "Edit", new { id = item.EmployeeID }, new { id = "links" })%></td>
            <td align="center"><%: Html.ActionLink("Details", "Details", new { id = item.EmployeeID }, new { id = "links" })%></td>
            <td align="center"><%: Html.ActionLink("Delete", "Delete", new { id = item.EmployeeID }, new { id = "links" })%></td>
            <td><%: item.Firstname %></td>
            <td><%: item.Lastname %></td>
        </tr>
        <% } %>
        <tr>
            <td colspan="5"><%: Html.ActionLink("Create New", "Create") %></td>
        </tr>
    </table>

    Html.ActionLink is an HTML helper that creates a hyperlink in the page. In the overload used here, the first parameter is the text for the hyperlink, the second is the action name, the third is the parameter to be passed, and the last is the set of attributes to be added when the hyperlink is rendered; here we add id="links" to each hyperlink created in the page. In the Index.aspx page we also add some jQuery to provide alternate row colours and highlight colours for rows on mouse-over.

    Now for the Controller, which handles the requests and directs them to the right view. For the Index view, the controller action would be:

    public ActionResult Index()
    {
        //var Employees = from e in data.Employees select new EmployeeList(e.EmployeeId, e.FirstName, e.LastName);
        //return View(Employees.ToList());
        return View(_data.Employees.Select(p => new EmployeeList(p.EmployeeId, p.FirstName, p.LastName)));
    }

    Let us also write a unit test using NUnit for the above, just testing EmployeeController's Index:

    DataClasses1DataContext _data;

    public EmployeeControllerTest()
    {
        _data = new DataClasses1DataContext("Data Source=(local);Initial Catalog=Chinook;Integrated Security=True");
    }

    [Test]
    public void TestEmployeeIndex()
    {
        var e = new EmployeeController(_data);
        var result = e.Index() as ViewResult;
        var employeeList = result.ViewData.Model;
        Assert.IsNotNull(employeeList, "Result is null.");
    }

    In the EmployeeControllerTest constructor, we set the data context to be used while running the tests. Then, in the actual test, we just ensure that the view result returned by Index is not null. Here is the zip of the entire solution files up to this point. Let me know if you have any doubts or need clarification. Cheers! Have a nice day.
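    The post never shows how the controller receives _data; judging by the test's new EmployeeController(_data) call, a constructor along these lines is presumably in place (this sketch is an assumption, not code from the original article - the parameterless overload exists only so the default MVC controller factory can create the controller):

    public class EmployeeController : Controller
    {
        private readonly DataClasses1DataContext _data;

        // Used by the default controller factory at runtime.
        public EmployeeController()
            : this(new DataClasses1DataContext()) { }

        // Used by the unit test to inject a specific data context.
        public EmployeeController(DataClasses1DataContext data)
        {
            _data = data;
        }
    }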

    Read the article

  • How to use the unit of work and repository patterns in a service-oriented environment

    - by A. Karimi
    I've created an application framework using the unit of work and repository patterns for its data layer. Data consumer layers such as presentation depend on the data layer design. For example, a CRUD abstract form has a dependency on a repository (IRepository). This architecture works like a charm in client/server environments (e.g. a WPF application and a SQL Server), but I'm looking for a good pattern to change or reuse this architecture in a service-oriented environment. Of course I have some ideas:

    Idea 1: The "Adapter" design pattern. Keep the current architecture and create a new unit of work and repository implementation which can work with a service instead of the ORM. Data layer consumers are loosely coupled to the data layer, so this is possible, but the problem is the unit of work: I have to create a context which tracks object state on the client side and sends the changes to the server side when "Commit" is called (something I believe RIA Services has done for Silverlight). Here is the diagram:

    ----------- CLIENT ----------- | ------------------ SERVER ----------------------
    [ UI ] -> [ UoW/Repository ] ---> [ Web Services ] -> [ UoW/Repository ] -> [DB]

    Idea 2: Add another layer. Add another layer (let's say "local services" or "data provider"), then put it between the data layer (unit of work and repository) and the data consumer layers (like the UI). Then I have to rewrite the consumer classes (CRUD and other classes which depend on IRepository) to depend on another interface. The diagram:

    ----------------- CLIENT ------------------ | ------------------- SERVER ---------------------
    [ UI ] -> [ Local Services/Data Provider ] ---> [ Web Services ] -> [ UoW/Repository ] -> [DB]

    Please note that I already have a local services layer in the current architecture, but it doesn't expose the data layer functionality. In other words, the UI layer can communicate with both the data and local services layers, and the local services layer also uses the data layer:

     ------        ------------------        --------
    |  UI  | ---> | Local Services   | ---> |  Data  |
    |      | -----------------------------> |        |
     ------                                  --------
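    To make Idea 1 concrete, here is a minimal sketch of what the client-side adapter could look like. The names (IRepository<T>, IOrderService, SaveChanges) are placeholders, and real change tracking would also need updates and deletes; only the shape of the adapter is the point:

    using System.Collections.Generic;

    public class ServiceBackedRepository<T> : IRepository<T> where T : class
    {
        private readonly IOrderService _service;              // web service proxy
        private readonly List<T> _added = new List<T>();      // client-side change tracking

        public ServiceBackedRepository(IOrderService service)
        {
            _service = service;
        }

        public void Add(T entity)
        {
            _added.Add(entity);            // tracked locally, nothing sent yet
        }

        public void Commit()               // the unit-of-work boundary
        {
            _service.SaveChanges(_added);  // one round trip pushes all tracked changes
            _added.Clear();
        }
    }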

    Read the article

  • How to create per-vertex normals when reusing vertex data?

    - by Chris Smith
    I am displaying a cube using a vertex buffer object (gl.ELEMENT_ARRAY_BUFFER). This allows me to specify vertex indices, rather than having duplicate vertices. In the case of displaying a simple cube, this means I only need eight vertices total, as opposed to needing three vertices per triangle, times two triangles per face, times six faces. Sound correct so far? My question is, how do I now deal with vertex attribute data such as color, texture coordinates, and normals when reusing vertices in the vertex buffer object? If I am reusing the same vertex data in my indexed vertex buffer, how can I differentiate when vertex X is used as part of the cube's front face versus the cube's left face? In both cases I would like the surface normal and texture coordinates to be different. I understand I could average the surface normals, however I would like to render a cube with flat faces; also, that still doesn't work for texture coordinates. Is there a way to save memory using a vertex buffer object while being able to provide different vertex attribute data based on context? (Per-triangle would be ideal.) Or should I just duplicate each vertex for each context in which it gets rendered, so there is a one-to-one mapping between vertex, normal, color, etc.? Note: I'm using OpenGL ES.
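    For reference, a minimal sketch of the duplicate-per-face approach the question ends on (the usual answer for flat-shaded cubes): 24 vertices, four per face, so each copy of a corner carries the normal of the face it belongs to, while the index buffer stays small. This is illustrative data in WebGL-style JavaScript, shown for two faces only; the other four follow the same pattern:

    // positions: the right face reuses two corner positions of the front face,
    // but as brand-new vertices, so they can carry a different normal
    var positions = new Float32Array([
      // front face (z = +1)
      -1, -1,  1,   1, -1,  1,   1,  1,  1,  -1,  1,  1,
      // right face (x = +1)
       1, -1,  1,   1, -1, -1,   1,  1, -1,   1,  1,  1
    ]);
    var normals = new Float32Array([
      0, 0, 1,   0, 0, 1,   0, 0, 1,   0, 0, 1,   // all four front vertices
      1, 0, 0,   1, 0, 0,   1, 0, 0,   1, 0, 0    // all four right vertices
    ]);
    // still indexed: two triangles per face
    var indices = new Uint16Array([
      0, 1, 2,   0, 2, 3,   // front
      4, 5, 6,   4, 6, 7    // right
    ]);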

    Read the article

  • In the Big Data era: past the 1,000th Oracle Exadata Database Machine sold

    - by user645740
    As has been in the wind for some time now, the BIG DATA era has arrived: more and more data accumulates, and we manage more and more data. A good portion of this huge amount of data is stored in Oracle databases. What could run these Oracle databases better, faster and more efficiently than Oracle's strategic high-end solution, the Oracle Exadata Database Machine? This data comes from countless sources; a few examples where growth is enormous: communications data and CDRs, banking and government transactions, location information (spatial, location, GPS, ...) - as we could recently read in the "affairs" concerning certain phones and operating systems - e-mails, social sites, smart meters, household appliances, and so on. How fast are Exadata sales growing? Well, Exadata was announced in the autumn of 2008. In Oracle's fiscal year-end report we can read that Exadata is an unparalleled success: Oracle customers have already bought more than 1,000 Exadata machines, said Mark Hurd, Oracle's President:   "In addition to record setting software sales, our Exadata and Exalogic systems also made a strong contribution to our growth in Q4," said Oracle President, Mark Hurd. "Today there are more than 1,000 Exadata machines installed worldwide. Our goal is to triple that number in FY12." Larry Ellison, Oracle's CEO, stated that Oracle is growing ever faster both in cloud computing and in in-memory databases:   "In FY11 Oracle's database business experienced its fastest growth in a decade," said Oracle CEO, Larry Ellison. "Over the past few years we added features to the Oracle database for both cloud computing and in-memory databases that led to increased database sales this past year. Lately we've been focused on the big business opportunity presented by Big Data." In the Big Data era we can achieve savings with Exadata; watch the following video, but carefully, because it will make you think:    -   Oracle Exadata: Are You Ready?

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the Web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the value of every field and posts it to the server.
    7. The server reads the posted data and validates it again, to ensure data consistency and to prevent non-validated data (for example, when JavaScript is turned off in the client's browser) from being inserted into a database or passed on to another process.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a fail-over process. Client-side validation allows users to correct any errors before the data is sent to the web server for processing, giving an immediate response about data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server, and over time frees up the server compared to doing only server-side validation. Server-side validation is the last line of defense, because you can check that the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would want only one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
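    As a concrete illustration of the client-side step above (a sketch only - the field names and rules are invented, and the server must always repeat these checks on the posted values):

    // Client-side validation, wired up as <form onsubmit="return validateForm(this);">
    function validateForm(form) {
      var errors = [];
      if (form.email.value.trim() === '') {
        errors.push('Email is required.');
      }
      if (!/^\d+$/.test(form.quantity.value)) {
        errors.push('Quantity must be a whole number.');
      }
      if (errors.length > 0) {
        alert(errors.join('\n'));
        return false;   // stops the browser from posting the data (step 5)
      }
      return true;      // browser URL-encodes the fields and posts them (step 6)
    }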

    Read the article

  • How to expose game data in the game without a singleton?

    - by zardon
    I'm quite new to cocos2d and games programming, and am currently writing a game that is in the prototype stage. Everything is going okay, but I've realized a potentially big problem and I am not sure how to solve it. I am using a singleton to store a bunch of arrays for everything: a global list of planets, a global list of troops, a global list of products, etc. Only now am I realizing that all of this will be in memory, and this is the wrong way to do it. I am not storing files or anything on disk just yet, with the exception of a save/load state, which is a capture of everything. My game makes use of a map which allows you to select a planet, where it will then give you a breakdown of that planet's troops and resources. Let's use this scenario: my game has 20 planets, on each of which you can have 20 troops. Straight away that's an array of 400! And this does not include the NPCs, which are another 10; so 20 x 10 = 200 more. Now we have 600, all in arrays inside a singleton. This is obviously very bad and very wrong, especially as the amount of data in the game scales. But I need to expose pretty much everything, especially on the map page, and I am not sure how else to do it. I've been told that I can use a controller for the map page which holds the information I need for each planet, and other controllers for other items that require global display. I've also thought about storing each planet's data in a save file, using initWithCoder, but then there could be a boatload of files on the user's device. I really don't want to use a database, mainly because I would need to translate NSObjects and non-NSObjects like CGRects and CGPoints and colors into/from SQL. I am open to other ideas on how to store and read game data so as to avoid a singleton that stores everything, everywhere. Thanks for your time.

    Read the article

  • How to manage security of these self-hosted Web APIs, to ensure that requests for data are authenticated?

    - by Husrat Mehmood
    Let's pretend I am going to work on an enterprise application. Say I have 11 modules in the application, and I have to develop dashboards for every role in the organization for which we are building the application. We decided to use ASP.NET Web API and return JSON data from our APIs. We are going to include 11 self-hosted Web API projects in our application, one self-hosted Web API per module. All 11 modules are connected to one SQL Server 2012 database. Once the APIs are ready, we have to create business dashboards (based upon the roles in the organization). So now my Web API client is an ASP.NET MVC application; the ASP.NET MVC application will consume those Web APIs. Here is the part all this explanation has been leading up to: How should I manage security across all 11 self-hosted Web APIs? How should I ensure that only authenticated requests come in? If I authenticate the user by login and password, then redirect the user to the appropriate dashboard designed for that user's role, and load data by consuming the Web APIs - how should I ensure that the requests coming in to access data are authenticated?
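    One common approach (a sketch only, not the only option): have the MVC application obtain a token at login, then have every self-hosted API validate that token in a message handler before any controller runs. TokenStore is a hypothetical class standing in for whatever shared validation you use (database table, shared cache, ...):

    using System.Collections.Generic;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading;
    using System.Threading.Tasks;

    public class TokenAuthHandler : DelegatingHandler
    {
        protected override async Task<HttpResponseMessage> SendAsync(
            HttpRequestMessage request, CancellationToken cancellationToken)
        {
            IEnumerable<string> values;
            // reject any request without a valid token before it reaches a controller
            if (!request.Headers.TryGetValues("X-Auth-Token", out values)
                || !TokenStore.IsValid(values.First()))
            {
                return request.CreateResponse(HttpStatusCode.Unauthorized);
            }
            return await base.SendAsync(request, cancellationToken);
        }
    }

    // Registered once in each of the 11 self-hosted projects:
    // config.MessageHandlers.Add(new TokenAuthHandler());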

    Read the article

  • laptop crashed: why?

    - by sds
    My Linux (Ubuntu 12.04) laptop crashed, and I am trying to figure out why.

    # last
    sds      pts/4        :0               Tue Sep  4 10:01   still logged in
    sds      pts/3        :0               Tue Sep  4 10:00   still logged in
    reboot   system boot  3.2.0-29-generic Tue Sep  4 09:43 - 11:23  (01:40)
    sds      pts/8        :0               Mon Sep  3 14:23 - crash  (19:19)

    This seems to indicate a crash at 09:42 (= 14:23 + 19:19). As per another question, I looked at /var/log.

    auth.log:

    Sep  4 09:17:02 t520sds CRON[32744]: pam_unix(cron:session): session closed for user root
    Sep  4 09:43:17 t520sds lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)

    There is no messages file.

    syslog:

    Sep  4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
    Sep  4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.

    kern.log:

    Sep  4 09:24:19 t520sds kernel: [219104.819969] CPU1: Package power limit normal
    Sep  4 09:24:19 t520sds kernel: [219104.819971] CPU2: Package power limit normal
    Sep  4 09:24:19 t520sds kernel: [219104.819974] CPU3: Package power limit normal
    Sep  4 09:24:19 t520sds kernel: [219104.819975] CPU0: Package power limit normal
    Sep  4 09:43:16 t520sds kernel: imklog 5.8.6, log source = /proc/kmsg started.
    Sep  4 09:43:16 t520sds kernel: [    0.000000] Initializing cgroup subsys cpuset
    Sep  4 09:43:16 t520sds kernel: [    0.000000] Initializing cgroup subsys cpu

    I had a computation running until 9:24, but the system crashed 18 minutes later!

    kern.log has many pages of these:

    Sep  4 09:43:16 t520sds kernel: [    0.000000] total RAM covered: 8086M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 64K   num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 128K  num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 256K  num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 512K  num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 1M    num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 2M    num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 4M    num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 8M    num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 16M   num_reg: 10  lose cover RAM: 38M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] *BAD*gran_size: 64K  chunk_size: 32M   num_reg: 10  lose cover RAM: -16M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] *BAD*gran_size: 64K  chunk_size: 64M   num_reg: 10  lose cover RAM: -16M
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 128M  num_reg: 10  lose cover RAM: 0G
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 256M  num_reg: 10  lose cover RAM: 0G
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 512M  num_reg: 10  lose cover RAM: 0G
    Sep  4 09:43:16 t520sds kernel: [    0.000000] gran_size: 64K  chunk_size: 1G    num_reg: 10  lose cover RAM: 0G
    Sep  4 09:43:16 t520sds kernel: [    0.000000] *BAD*gran_size: 64K  chunk_size: 2G    num_reg: 10  lose cover RAM: -1G

    Does this mean that my RAM is bad?!
    It also says:

    Sep  4 09:43:16 t520sds kernel: [    2.944123] EXT4-fs (sda1): INFO: recovery required on readonly filesystem
    Sep  4 09:43:16 t520sds kernel: [    2.944126] EXT4-fs (sda1): write access will be enabled during recovery
    Sep  4 09:43:16 t520sds kernel: [    3.088001] firewire_core: created device fw0: GUID f0def1ff8fbd7dff, S400
    Sep  4 09:43:16 t520sds kernel: [    8.929243] EXT4-fs (sda1): orphan cleanup on readonly fs
    Sep  4 09:43:16 t520sds kernel: [    8.929249] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 658984
    ...
    Sep  4 09:43:16 t520sds kernel: [    9.343266] EXT4-fs (sda1): ext4_orphan_cleanup: deleting unreferenced inode 525343
    Sep  4 09:43:16 t520sds kernel: [    9.343270] EXT4-fs (sda1): 56 orphan inodes deleted
    Sep  4 09:43:16 t520sds kernel: [    9.343271] EXT4-fs (sda1): recovery complete
    Sep  4 09:43:16 t520sds kernel: [    9.645799] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)

    Does this mean my HD is bad? As per FaultyHardware, I tried smartctl -l selftest, which uncovered no errors:

    smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.2.0-30-generic] (local build)
    Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF INFORMATION SECTION ===
    Model Family:     Seagate Momentus 7200.4
    Device Model:     ST9500420AS
    Serial Number:    5VJE81YK
    LU WWN Device Id: 5 000c50 0440defe3
    Firmware Version: 0003LVM1
    User Capacity:    500,107,862,016 bytes [500 GB]
    Sector Size:      512 bytes logical/physical
    Device is:        In smartctl database [for details use: -P show]
    ATA Version is:   8
    ATA Standard is:  ATA-8-ACS revision 4
    Local Time is:    Mon Sep 10 16:40:04 2012 EDT
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
    See vendor-specific Attribute list for marginal Attributes.

    General SMART Values:
    Offline data collection status:  (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
    Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
    Total time to complete Offline data collection: ( 0) seconds.
    Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
    SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
    Error logging capability:        (0x01) Error logging supported. General Purpose Logging supported.
    Short self-test routine recommended polling time:      (   1) minutes.
    Extended self-test routine recommended polling time:   ( 109) minutes.
    Conveyance self-test routine recommended polling time: (   2) minutes.
    SCT capabilities:              (0x103b) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.
    SMART Attributes Data Structure revision number: 10
    Vendor Specific SMART Attributes with Thresholds:
    ID#  ATTRIBUTE_NAME           FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED  RAW_VALUE
      1  Raw_Read_Error_Rate      0x000f  117   099   034    Pre-fail Always  -            162843537
      3  Spin_Up_Time             0x0003  100   100   000    Pre-fail Always  -            0
      4  Start_Stop_Count         0x0032  100   100   020    Old_age  Always  -            571
      5  Reallocated_Sector_Ct    0x0033  100   100   036    Pre-fail Always  -            0
      7  Seek_Error_Rate          0x000f  069   060   030    Pre-fail Always  -            17210154023
      9  Power_On_Hours           0x0032  095   095   000    Old_age  Always  -            174362787320258
     10  Spin_Retry_Count         0x0013  100   100   097    Pre-fail Always  -            0
     12  Power_Cycle_Count        0x0032  100   100   020    Old_age  Always  -            571
    184  End-to-End_Error         0x0032  100   100   099    Old_age  Always  -            0
    187  Reported_Uncorrect       0x0032  100   100   000    Old_age  Always  -            0
    188  Command_Timeout          0x0032  100   100   000    Old_age  Always  -            1
    189  High_Fly_Writes          0x003a  100   100   000    Old_age  Always  -            0
    190  Airflow_Temperature_Cel  0x0022  061   043   045    Old_age  Always  In_the_past  39 (0 11 44 26)
    191  G-Sense_Error_Rate       0x0032  100   100   000    Old_age  Always  -            84
    192  Power-Off_Retract_Count  0x0032  100   100   000    Old_age  Always  -            20
    193  Load_Cycle_Count         0x0032  099   099   000    Old_age  Always  -            2434
    194  Temperature_Celsius      0x0022  039   057   000    Old_age  Always  -            39 (0 15 0 0)
    195  Hardware_ECC_Recovered   0x001a  041   041   000    Old_age  Always  -            162843537
    196  Reallocated_Event_Count  0x000f  095   095   030    Pre-fail Always  -            4540 (61955, 0)
    197  Current_Pending_Sector   0x0012  100   100   000    Old_age  Always  -            0
    198  Offline_Uncorrectable    0x0010  100   100   000    Old_age  Offline -            0
    199  UDMA_CRC_Error_Count     0x003e  200   200   000    Old_age  Always  -            0
    254  Free_Fall_Sensor         0x0032  100   100   000    Old_age  Always  -            0

    SMART Error Log Version: 1
    No Errors Logged

    SMART Self-test log structure revision number 1
    Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Extended offline  Completed without error  00%        4545             -

    SMART Selective self-test log data structure revision number 1
    SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
       1        0        0  Not_testing
       2        0        0  Not_testing
       3        0        0  Not_testing
       4        0        0  Not_testing
       5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.

    Googling for the messages proved inconclusive; I can't even figure out whether the messages are routine or catastrophic. So, what do I do now?

    Read the article

  • Essbase Analytics Link (EAL) - Performance of some EAL operations can be improved by tuning EAL Data Synchronization Server (DSS) parameters

    - by Ahmed Awan
    Generally, the performance of some EAL (Essbase Analytics Link) operations can be improved by tuning the EAL Data Synchronization Server (DSS) parameters.

    a. It is expected that the DSS machine is a 64-bit machine with 4-8 cores and 5-8 GB of RAM dedicated to DSS.

    b. To change the DSS configuration, open the EAL Configuration Tool on the DSS machine, click Next, and define:

    "Job Units" as <number of cores dedicated to DSS> * 1.5.
    "Max Memory Size" (if this is a 64-bit machine) as ~1 GB for each Job Unit. If the DSS machine is 32-bit, the max memory size is 2600 MB.
    "Data Store Size" depends on the number of bridges and the volume of the HFM applications, but in most cases 50000 MB is enough. This volume should be available on the drive defined as "Data Store Dir".

    Continue with the configuration and finish it. After that, DSS should be restarted to pick up the new definitions.
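    As a worked illustration of these rules (my own arithmetic, not an official sizing): on a 64-bit DSS machine with 4 dedicated cores and 8 GB of dedicated RAM, "Job Units" = 4 * 1.5 = 6, and "Max Memory Size" = 6 * 1 GB = 6 GB (6144 MB), which fits within the 5-8 GB of dedicated RAM recommended in point a.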

    Read the article

  • EDQ Technical Enablement for OPN (Prague - June 17-19)

    - by milomir.vojvodic
    Oracle Enterprise Data Quality (EDQ) Technical Enablement and Partner Training - Trusted Data for Your Enterprise Applications

    Oracle Enterprise Data Quality helps organizations achieve maximum value from their business-critical applications by delivering fit-for-purpose data. These products also enable individuals and collaborative teams to quickly and easily identify and resolve any problems in underlying data. With Oracle Enterprise Data Quality, customers can identify new opportunities, improve operational efficiency, and more efficiently comply with industry or governmental regulation. Oracle Enterprise Data Quality is designed to serve as a very channel-friendly platform for OPN. This means that pre-built extensions, components and even complete business solutions can readily be built and shared. This allows our customers/partners to be highly efficient in how they deploy custom business solutions, but also allows our partners to develop specialized components, domain knowledge and even complete business solutions.

    Training is suitable for:
    · Database administrators
    · Architects
    · Technical staff

    Objectives of the training - after completing this course, participants should:
    · Have an understanding of the core functionality of EDQ across profiling, auditing, transforming, parsing and matching data
    · Be able to describe some of the key capabilities and benefits delivered by EDQ
    · Be able to create and run standalone EDQ processes and jobs
    · Be ready to start working with data from customers and (with practice) be able to demonstrate EDQ to customers

    Agenda

    17th June - Fundamentals for Demoing (Profile, Audit, Transform and More): Profiling; Auditing; Transforming; Writing and exporting data; Jobs and scheduling; Publishing, packaging and copying EDQ processes; Introduction to the Customer Data Extension Pack; Realtime Processing via Web Services; The Server Console; Run Profiles; Data Interfaces; Sampling; Publishing metrics to the Dashboard; Users and security

    18th June - Matching: Matching overview; Basic matching configuration; Matching rule hierarchies; Clustering; Merging; Reviewing possible matches; Outputting Match Data; Case study

    19th June - Address Verification and Parsing: Address Verification Overview; Configuration; Accuracy Flags; Parsing Overview; Phrase profiling; Tailoring a CDEP Parser; Base Tokenization; Classification; Reclassification; Selection; Resolution

    Register Here. Don't miss this FREE event - space is limited.

    Oracle University, V Parku 2294/4, 148 00 Praha 4
    17.6. - 19.6. 2014, 09:00 - 17:30

    Read the article

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based, query-able data store, and despite continued searches and evaluations, I have yet to find a solution that meets my needs, which include:

    · Data is wholly contained within XML nodes, in flat text files.
    · There is a "native" (or at least unobtrusive) method with which to perform Create/Read/Update/Delete (CRUD) operations on the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, BASH, or PERL to be unobtrusive, depending on the complexity of the set of dependencies.
    · Server-side file-system reads and writes.
    · A client-side interface element, accessible in any browser without a plug-in.

    Some extra, preferred (but optional) requirements include:

    · Responds to simple SQL, or similar-syntax, queries.
    · Serves the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.

    A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question, please do not suggest that unless it meets ALL the requirements; Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for primetime, nor even yet a recommendation; however, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point, I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?
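    For what the "unobtrusive read" part could look like with nothing but standard tooling, here is a sketch using libxml2's xmllint (the --xpath option needs a reasonably recent build; the directory layout and element names are invented for illustration):

    # Read: find every <order id="15"> across a tree of flat XML files.
    find ./store -name '*.xml' -print0 \
      | xargs -0 -I{} xmllint --xpath '//order[@id="15"]' {} 2>/dev/null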

    Read the article

  • Why values in my WCF data contract were suddenly wrong...

    - by mipsen
    A WCF service I provided took a very simple data contract as a parameter (containing one string and one int...) and had a very simple task to do. A .NET 3.5 client was created using the VS2008 feature "Add Service Reference". Everything worked as expected. Then a slight change came in: the client was expected to run on machines with .NET 2.0 only. So we set the Target Framework to .NET 2.0, removed the references to System.ServiceModel, System.Runtime.Serialization and the service reference, and created a new reference to the service using the old "Add Web Reference". A matter of 2 minutes. When testing, the int value in the data contract arriving at the WCF service was suddenly 0, instead of the 38 we expected. What happened? When generating an old web reference for a WCF data contract, an additional boolean field is created for each value-type field, called [Fieldname]Specified (e.g. AgeSpecified), which defaults to "false". WCF inspects these boolean fields to determine whether a value was provided for the value-type field. If the "Specified" field is "false", WCF translates that to using the default value of the value-type field; for int this is 0. So we had to set the "Specified" field for the int value to "true", and everything was fine again. That was what we forgot after setting the framework version to 2.0...
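    In code, the fix looks roughly like this (a sketch against a hypothetical proxy: a contract with an int member Age shows up in the "Add Web Reference" proxy as Age plus AgeSpecified; the OrderData and Submit names are invented):

    OrderData data = new OrderData();
    data.Age = 38;
    data.AgeSpecified = true;   // without this line, WCF sees Age as "not provided"
                                // and falls back to default(int), i.e. 0
    service.Submit(data);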

    Read the article

  • Need help with implementing collision detection using the Separating Axis Theorem

    - by Eddie Ringle
    So, after hours of Googling and reading, I've found that the basic process of detecting a collision using SAT is:

    for each edge of poly A
        project A and B onto the normal for this edge
        if intervals do not overlap, return false
    end for
    for each edge of poly B
        project A and B onto the normal for this edge
        if intervals do not overlap, return false
    end for

    However, no matter how I try to implement this in code, I just cannot get it to detect the collision. My current code is as follows:

    for (unsigned int i = 0; i < asteroids.size(); i++) {
        if (asteroids.valid(i)) {
            asteroids[i]->Update();

            // Player-Asteroid collision detection
            bool collision = true;
            SDL_Rect asteroidBox = asteroids[i]->boundingBox;

            // Bullet-Asteroid collision detection
            for (unsigned int j = 0; j < player.bullets.size(); j++) {
                if (player.bullets.valid(j)) {
                    Bullet b = player.bullets[j];
                    collision = true;
                    if (b.x + (b.w / 2.0f) < asteroidBox.x - (asteroidBox.w / 2.0f)) collision = false;
                    if (b.x - (b.w / 2.0f) > asteroidBox.x + (asteroidBox.w / 2.0f)) collision = false;
                    if (b.y - (b.h / 2.0f) > asteroidBox.y + (asteroidBox.h / 2.0f)) collision = false;
                    if (b.y + (b.h / 2.0f) < asteroidBox.y - (asteroidBox.h / 2.0f)) collision = false;
                    if (collision) {
                        bool realCollision = false;
                        float min1, max1, min2, max2;

                        // Create a list of vertices for the bullet
                        CrissCross::Data::LList<Vector2D *> bullVerts;
                        bullVerts.insert(new Vector2D(b.x - b.w / 2.0f, b.y + b.h / 2.0f));
                        bullVerts.insert(new Vector2D(b.x - b.w / 2.0f, b.y - b.h / 2.0f));
                        bullVerts.insert(new Vector2D(b.x + b.w / 2.0f, b.y - b.h / 2.0f));
                        bullVerts.insert(new Vector2D(b.x + b.w / 2.0f, b.y + b.h / 2.0f));

                        // Create a list of vectors of the edges of the bullet and the asteroid
                        CrissCross::Data::LList<Vector2D *> bullEdges;
                        CrissCross::Data::LList<Vector2D *> asteroidEdges;
                        for (int k = 0; k < 4; k++) {
                            int n = (k == 3) ? 0 : k + 1;
                            bullEdges.insert(new Vector2D(bullVerts[k]->x - bullVerts[n]->x,
                                                          bullVerts[k]->y - bullVerts[n]->y));
                            asteroidEdges.insert(new Vector2D(asteroids[i]->vertices[k]->x - asteroids[i]->vertices[n]->x,
                                                              asteroids[i]->vertices[k]->y - asteroids[i]->vertices[n]->y));
                        }

                        for (unsigned int k = 0; k < asteroidEdges.size(); k++) {
                            Vector2D *axis = asteroidEdges[k]->getPerpendicular();
                            min1 = max1 = axis->dotProduct(asteroids[i]->vertices[0]);
                            for (unsigned int l = 1; l < asteroids[i]->vertices.size(); l++) {
                                float test = axis->dotProduct(asteroids[i]->vertices[l]);
                                min1 = (test < min1) ? test : min1;
                                max1 = (test > max1) ? test : max1;
                            }
                            min2 = max2 = axis->dotProduct(bullVerts[0]);
                            for (unsigned int l = 1; l < bullVerts.size(); l++) {
                                float test = axis->dotProduct(bullVerts[l]);
                                min2 = (test < min2) ? test : min2;
                                max2 = (test > max2) ? test : max2;
                            }
                            delete axis;
                            axis = NULL;
                            if ( (min1 - max2) > 0 || (min2 - max1) > 0 ) {
                                realCollision = false;
                                break;
                            } else {
                                realCollision = true;
                            }
                        }

                        if (realCollision == false) {
                            for (unsigned int k = 0; k < bullEdges.size(); k++) {
                                Vector2D *axis = bullEdges[k]->getPerpendicular();
                                min1 = max1 = axis->dotProduct(asteroids[i]->vertices[0]);
                                for (unsigned int l = 1; l < asteroids[i]->vertices.size(); l++) {
                                    float test = axis->dotProduct(asteroids[i]->vertices[l]);
                                    min1 = (test < min1) ? test : min1;
                                    max1 = (test > max1) ? test : max1;
                                }
                                min2 = max2 = axis->dotProduct(bullVerts[0]);
                                for (unsigned int l = 1; l < bullVerts.size(); l++) {
                                    float test = axis->dotProduct(bullVerts[l]);
                                    min2 = (test < min2) ? test : min2;
                                    max2 = (test > max2) ? test : max2;
                                }
                                delete axis;
                                axis = NULL;
                                if ( (min1 - max2) > 0 || (min2 - max1) > 0 ) {
                                    realCollision = false;
                                    break;
                                } else {
                                    realCollision = true;
                                }
                            }
                        }

                        if (realCollision) {
                            player.bullets.remove(j);
                            int numAsteroids;
                            float newDegree;
                            srand ( j + asteroidBox.x );
                            if ( asteroids[i]->degree == 90.0f ) {
                                if ( rand() % 2 == 1 ) {
                                    numAsteroids = 3;
                                    newDegree = 30.0f;
                                } else {
                                    numAsteroids = 2;
                                    newDegree = 45.0f;
                                }
                                for ( int k = 0; k < numAsteroids; k++)
                                    asteroids.insert(new Asteroid(asteroidBox.x + (10 * k), asteroidBox.y + (10 * k), newDegree));
                            }
                            delete asteroids[i];
                            asteroids.remove(i);
                        }

                        while (bullVerts.size()) {
                            delete bullVerts[0];
                            bullVerts.remove(0);
                        }
                        while (bullEdges.size()) {
                            delete bullEdges[0];
                            bullEdges.remove(0);
                        }
                        while (asteroidEdges.size()) {
                            delete asteroidEdges[0];
                            asteroidEdges.remove(0);
                        }
                    }
                }
            }
        }
    }

    bullEdges is a list of vectors of the edges of a bullet, asteroidEdges is similar, and bullVerts and asteroids[i].vertices are, obviously, lists of vectors of each vertex for the respective bullet or asteroid. Honestly, I'm not looking for code corrections, just a fresh set of eyes.

    Read the article

  • How to build android cts? And how to add and run your test case?

    - by Leox
    From 2.0 the CTS is freely downloadable from Android's repository, but there is no documentation for it. Can anyone tell me:

    1. How to build the CTS? Is there a standard procedure?
    2. How to run the CTS?
    3. How to add a customized test case?

    Here, let me share my experience: after a repo sync of all the source, you can't directly run "make" to build everything - you will get some errors. For now, I am trying to first build the Android source without the CTS (move cts out of $SDK_ROOT), and then build the CTS alone (move cts back).

    Also, here are some references for running the CTS:

    http://i-miss-erin.blogspot.com/2010/05/how-to-add-test-plan-package-to-android.html
    www.mentby.com/chenny/how-does-cts-work-where-can-i-get-the-test-streams.html
    www.jxva.com/?act=blog!article&articleId=157

    1st update @ 5-13 18:39 +8:00. I do the following steps: 1. build the Android source without the CTS (move cts out of $SDK_ROOT); 2. build the CTS (move cts back). With both JDK 1.5 and 1.6 I get the following errors:

    1. The 1st time, "make cts" reports: "Caused by: java.io.FileNotFoundException: ...(Too many open files)"
    2. The 2nd time, "make cts" reports: "acp: file 'out/host/linux-x86/obj/EXECUTABLES/vm-tests_intermediates/tests/data' does not exist"
    3. The 3rd time, "make cts" reports: "/bin/bash: line 0: cd: out/host/linux-x86/obj/EXECUTABLES/vm-tests_intermediates/hostjunit_files/classes: No such file or directory"
    4. The last time, "make cts" reports: "zip error: Nothing to do! (try: zip -q -r ../../android.core.vm-tests.jar . -i .)"
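    For reference, the usual AOSP sequence of that era looks roughly like the sketch below - treat the exact output paths and the host-console script name as assumptions to verify against your source tree:

    cd "$ANDROID_BUILD_TOP"
    source build/envsetup.sh
    lunch                                # pick a generic build target
    make cts                             # builds the CTS package under out/host/linux-x86/cts/
    cd out/host/linux-x86/cts/android-cts/tools
    ./startcts                           # 2.x-era CTS host console; later versions use cts-tradefed
    # cts_host > start --plan CTS        # run the default test plan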

    Read the article

  • How can I beta test web Perl modules under Apache/mod_perl on production web server?

    - by DVK
    We have a setup where most code, before being promoted to full production, is deployed in BETA mode - meaning it runs in the full production environment (using the production database, usually production data, and the production web server). We call that stage BETA testing. One of the main requirements is that BETA code promotion to production must be a simple "cp" command from the beta to the production directory - no code/filename changes.

    For non-web Perl code, achieving seamless BETA testing is quite doable (see details here):

    · Perl programs live in a standard location under the production root (/usr/code/scripts), with production Perl modules living under the same root (/usr/code/lib/perl).
    · The BETA code has 100% the same code paths, except under the beta root (/usr/code/beta/).
    · A special module manipulates the @INC of any script, based on whether the script was called from /usr/code/scripts or /usr/code/test/scripts, to include the beta libraries for beta scripts.

    This setup works fine up until we need to beta test our web Perl code (the setup is EmbPerl and Apache/mod_perl). The hang-up is as follows: if both a production Perl module and a BETA Perl module have the same name (e.g. /usr/code/lib/perl/MyLib1.pm and /usr/code/beta/lib/perl/MyLib1.pm), then mod_perl will only be able to load ONE of these modules into memory, and there's no way we are aware of for a particular web page to affect which version of the module is currently loaded, due to concurrency issues.

    Leaving aside the obvious non-programming solution (get a bloody BETA web server), which for political/organizational reasons is not feasible, is there any way we can somehow hack around this problem in either Perl or mod_perl? I played around with various approaches to unloading Perl modules that %INC has listed, but the problem remains that another user might load a beta page at just the right (or rather wrong) moment and have the beta module loaded, which will then be used for my production page.
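    For readers unfamiliar with the @INC trick described for the non-web case, it could look something like this sketch (module name and path test are illustrative; the crux of the question is precisely that this per-process approach does not carry over to a shared mod_perl interpreter):

    package DevLibs;
    use strict;
    use FindBin;

    BEGIN {
        # Scripts run from the beta tree pull in beta libraries first.
        if ($FindBin::Bin =~ m{^/usr/code/beta/}) {
            unshift @INC, '/usr/code/beta/lib/perl';
        } else {
            unshift @INC, '/usr/code/lib/perl';
        }
    }

    1;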

    Read the article

  • Grails Warnings/Errors during run-app

    - by Taylor L
    I'm currently seeing the warnings below when trying to run my Google App Engine/Grails test app in Eclipse:

    Warning, target causing name overwriting of name startLogging
    Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\spring not found.
    Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf not found.
    Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\hibernate not found.

    Here is the output from the console:

    Base Directory: C:\Users\Some Person\workspace\test-grails
    Resolving dependencies...
    Dependencies resolved in 1160ms.
    Running script C:\grails-1.2.0\scripts\RunApp.groovy
    Environment set to development
    Warning, target causing name overwriting of name startLogging
    [groovyc] Compiling 1 source file to C:\Users\Some Person\workspace\test-grails\web-app\WEB-INF\classes
    [copy] Copying 1 file to C:\Users\Some Person\.grails\1.2.0\projects\test-grails
    [copy] Copying 1 file to C:\Users\Some Person\workspace\test-grails\web-app\WEB-INF
    Configuring persistence for AppEngine
    [copy] Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\spring not found.
    [copy] Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf not found.
    [copy] Warning: C:\Users\Some Person\.grails\1.2.0\projects\test-grails\plugins\app-engine-0.8.8\grails-app\conf\hibernate not found.

    I get these warnings after creating a Grails project with Spring Tool Suite (STS) and then installing the App Engine plugin ("grails install-plugin app-engine"). Before I install the App Engine plugin, the Grails project runs correctly. Any ideas how to resolve these warnings?

    Read the article

  • How to programmatically start a WPF application from a unit test?

    - by Lernkurve
    Problem: VS2010 and TFS2010 support creating so-called Coded UI Tests. All the demos I have found either start with the WPF application already running in the background when the Coded UI Test begins, or start the EXE using an absolute path to it. I, however, would like to start my WPF application under test from the unit test code. That way it'll also work on the build server and on my peers' working copies. How do I accomplish that?

    My discoveries so far:

    a) This post shows how to start a XAML window. But that's not what I want. I want to start the App.xaml, because it contains XAML resources and there is application logic in the code-behind file.

    b) The second screenshot on this post shows a line starting with ApplicationUnderTest calculatorWindow = ApplicationUnderTest.Launch(...); which is conceptually pretty much what I am looking for, except that again this example uses an absolute path to the executable file.

    c) A Google search for "Programmatically start WPF" didn't help either.
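    One way to avoid the absolute path in (b) is to resolve the EXE relative to the test assembly, which moves along with the working copy or build drop. A sketch (namespaces System.IO, System.Reflection and Microsoft.VisualStudio.TestTools.UITesting are assumed; "MyWpfApp.exe" is a placeholder, and it assumes the app under test is deployed next to the test binaries, e.g. via a DeploymentItem attribute):

    string exeDir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
    string exePath = Path.Combine(exeDir, "MyWpfApp.exe");
    ApplicationUnderTest app = ApplicationUnderTest.Launch(exePath);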

    Read the article

  • Sentiment analysis with NLTK python for sentences using sample data or webservice?

    - by Ke
    I am embarking upon an NLP project for sentiment analysis. I have successfully installed NLTK for Python (it seems like a great piece of software for this). However, I am having trouble understanding how it can be used to accomplish my task. Here is my task:

    1. I start with one long piece of data (let's say several hundred tweets on the subject of the UK election, from their web service).
    2. I would like to break this up into sentences (or chunks of information no longer than 100 or so characters) (I guess I can just do this in Python?).
    3. Then I search through all the sentences for specific instances within each sentence, e.g. "David Cameron".
    4. Then I would like to check for positive/negative sentiment in each sentence and count them accordingly.

    NB: I am not really worried too much about accuracy, because my data sets are large, and also not worried too much about sarcasm.

    Here are the troubles I am having:

    1. All the data sets I can find, e.g. the movie review corpus data that comes with NLTK, aren't in web service format. It looks like these have had some processing done already. As far as I can see, the processing (by Stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? Here all the data sets have already been organised into positive/negative, e.g. the polarity dataset http://www.cs.cornell.edu/People/pabo/movie-review-data/ How is this done? (To organise the sentences by sentiment, is it definitely WEKA, or something else?)

    2. I am not sure I understand why WEKA and NLTK would be used together. It seems like they do much the same thing. If I'm processing the data with WEKA first to find sentiment, why would I need NLTK? Is it possible to explain why this might be necessary?

    3. I have found a few scripts that get somewhat near this task, but all are using the same pre-processed data. Is it not possible to process this data myself to find sentiment in sentences, rather than using the data samples given in the link?

    Any help is much appreciated and will save me much hair! Cheers, Ke
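    On point 1, NLTK can indeed do the whole loop on its own - the labels in the movie-review corpus come from the folder ('pos'/'neg') each review is stored in, no WEKA required. A minimal sketch (accuracy will be rough, and applying a movie-review model to election tweets is itself an assumption worth questioning):

    import nltk
    from nltk.corpus import movie_reviews

    def features(words):
        # crude bag-of-words feature dict: {'great': True, 'boring': True, ...}
        return dict((w, True) for w in words)

    # label each review in the corpus by the folder it came from ('pos' or 'neg')
    labeled = [(features(movie_reviews.words(fid)), cat)
               for cat in movie_reviews.categories()
               for fid in movie_reviews.fileids(cat)]

    classifier = nltk.NaiveBayesClassifier.train(labeled)

    sentence = "David Cameron gave a surprisingly strong speech"
    print(classifier.classify(features(sentence.split())))   # 'pos' or 'neg'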

    Read the article

  • codeigniter & cjax framework, fatal error class 'CI_Controller' not found

    - by Martin
    I'm having a weird error with CodeIgniter 2.1.3 and the latest Cjax for CodeIgniter. The weird thing is, when I download the latest CodeIgniter and the latest Cjax framework for CodeIgniter, copy them to my friend's server, and call:

    domain.com/ajax.php?test/test2

    to show the Cjax test examples... it works like a breeze. But when I do the same on my server, I get a server error (even though we both have the same PHP version and such). The server then throws this error into the error log file:

    PHP Fatal error: Class 'CI_Controller' not found in /hosting/www/domain.com/www/application/response/test.php on line 3

    Now, I've read through Stack Overflow posts from people having this problem and solving it by changing the constructor to extend CI_Controller instead of Controller. But I already do that - I mean, it's the basic example that is supposed to work without touching the code, and it does, just not on my domain, for some crappy reason.

    ajax.php from the Cjax framework for CodeIgniter should load the controller named test from the folder response and call the function test2. The controller (the actual file named test.php) looks like this:

    class Test extends CI_Controller
    {
        function __construct()
        {
            parent::__construct();
        }

        /**
         * ajax.php?test/test/a/b/c
         *
         * @param unknown_type $a
         * @param unknown_type $b
         * @param unknown_type $c
         */
        function test($a = null, $b = null, $c = null)
        {
            $this->load->view('test', array('data' => $a . ' ' . $b . ' ' . $c));
        }

        /**
         * ajax.php?test/test2
         *
         * Here we are testing out the javascript library.
         *
         * Note: the library is not meant to be included in ajax controllers - but in
         * front-controllers; it is being used here for the sake of simplicity in testing.
         */
        function test2()
        {
            $ajax = ajax();
            $ajax->update('response', 'Cjax Works');
            $ajax->append('#response', '<br /><br />version: ' . $ajax->version);
            $ajax->success('Cjax was successfully installed.', 5);

            //see application/views/test2.php
            $this->load->view('test2');
        }
    }

    I was hoping someone could bring some light into this problem - or maybe someone has already experienced it? Thanks for your time! Mart

    Read the article

  • com0com silent install (test signed com0com.sys shows up as signed in explorer but not in Device Manager)

    - by Andrew
    My goal is to have the com0com serial driver install without popping up the install wizard, on both WinXP and Win2000. I am working on WinXP x86. I have followed the test-signing instructions for the com0com driver, replacing amd64 with i386 at line 60. I have added my test certificate as both a root and a trusted provider using the following commands:

    certmgr /add com0com.cer /r localMachine root
    certmgr /add com0com.cer /r localMachine trustedprovider

    and verified that it is listed under both locations. I then run the newly built setup.exe. This installs the signed com0com.sys file into C:\WINDOWS\system32\DRIVERS and sets up a pair of virtual serial ports and a bus between them. Using Explorer, I go to the DRIVERS directory, right-click on the com0com.sys file and verify that it has the "test" digital signature. I then go into Device Manager, open the "com0com serial port emulators" entry, pick an entry and go to Properties > Driver, and it says "Not digitally signed". I click Details for the driver and can see that it is referring to the com0com.sys driver file that I just confirmed is signed. I found what might be a related issue, but I'm not sure: does WinXP demand a WHQL signature? If so, does that explain why the com0com.sys file is signed but the device driver entries say they aren't signed?

    Read the article

  • How can I run a speed test for Netflix and Hulu?

    - by Brennan Stehling
    Since the FCC rules were struck down I have noticed that Netflix and Hulu often stall on my Time Warner home connection. I've heard others are experiencing the same delays. I use Speedtest.net regularly from my computer and phone to check my connection and typically at home I get 10 to 15Mbps and occasionally higher. Currently it is around 10Mbps yet Hulu is stalling. Is there a way to specifically test my speeds for streaming Netflix and Hulu?

    Read the article

  • Can I include a NSUserDefault password test in AppDelegate to load a loginView?

    - by Michael Robinson
    I have a name and password in NSUserDefaults for login. I want to place a test in my AppDelegate.m class that checks for their presence and loads a login/signup view (LoginView.xib) modally if there is no password or name stored in the app. Here is the pulling of the defaults:

    -(void)refreshFields {
        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
        usernameLabel.text = [defaults objectForKey:kUsernameKey];
        passwordLabel.text = [defaults objectForKey:kPasswordKey];
    }

    Here is the tab bar controller loading part:

    - (void)applicationDidFinishLaunching:(UIApplication *)application {
        firstTab = [[FirstTab alloc] initWithStyle:UITableViewStylePlain];
        UINavigationController *firstNavigationController = [[UINavigationController alloc] initWithRootViewController:firstTab];
        [firstTab release];
        secondTab = // EDITED FOR SPACE
        thirdTab = // EDITED FOR SPACE
        tabBarController = [[UITabBarController alloc] init];
        tabBarController.viewControllers = [NSArray arrayWithObjects:firstNavigationController, secondNavigationController, thirdNavigationController, nil];
        [window addSubview:tabBarController.view];
        [firstNavigationController release];
        [secondNavigationController release];
        [thirdNavigationController release];
        [self logout];
        [window makeKeyAndVisible];
    }

    Here is where the LoginView.xib loads automatically:

    - (void)logout {
        loginViewController = [[LoginViewController alloc] initWithNibName:@"LoginView" bundle:nil];
        UINavigationController *loginNavigationController = [[UINavigationController alloc] initWithRootViewController:loginViewController];
        [loginViewController release];
        [tabBarController presentModalViewController:loginNavigationController animated:YES];
        [loginNavigationController release];
    }

    I want to replace the above auto-load with a test similar to the one below (which works), using IF-ELSE:

    - (void)logout {
        if ([usernameLabel.text length] == 0 || [passwordLabel.text length] == 0) {
            loginViewController = [[LoginViewController alloc] initWithNibName:@"LoginView" bundle:nil];
            UINavigationController *loginNavigationController = [[UINavigationController alloc] initWithRootViewController:loginViewController];
            [loginViewController release];
            [tabBarController presentModalViewController:loginNavigationController animated:YES];
            [loginNavigationController release];
        } else {
            [window addSubview:tabBarController.view];
        }
    }

    Thanks in advance, I'm totally lost on this.

    Read the article

  • What should I do to test EasyMock objects when using generics?

    - by Arthur Ronald F D Garcia
    See the code just below. Our generic interface:

    public interface Repository<INSTANCE_CLASS, INSTANCE_ID_CLASS> {
        void add(INSTANCE_CLASS instance);
        INSTANCE_CLASS getById(INSTANCE_ID_CLASS id);
    }

    And a single class:

    public class Order {
        private Integer id;
        private Integer orderNumber;

        // getters and setters

        public boolean equals(Object o) {
            if (o == null) return false;
            if (!(o instanceof Order)) return false;
            // business key
            if (getOrderNumber() == null) return false;
            final Order other = (Order) o;
            if (!(getOrderNumber().equals(other.getOrderNumber()))) return false;
            return true;
        }

        // hashCode
    }

    And when I do the following test:

    private Repository<Order, Integer> repository;

    @Before
    public void setUp() {
        repository = EasyMock.createMock(Repository.class);
        Order order = new Order();
        order.setOrderNumber(new Integer(1));
        repository.add(order);
        EasyMock.expectLastCall().once();
        EasyMock.replay(repository);
    }

    @Test
    public void addOrder() {
        Order order = new Order();
        order.setOrderNumber(new Integer(1));
        repository.add(order);
        EasyMock.verify(repository);
    }

    I get:

    Unexpected method call add(br.com.smac.model.domain.Order@ac66b62): add(br.com.smac.model.domain.Order@ac66b62): expected: 1, actual: 0

    Why does it not work as expected? What should I do to make the test pass?

    Read the article

  • How can I easily test connectivity to external sources with WinHTTP?

    - by Mike B
    I've got a server application that uses WinHTTP to fetch information from an external source. Occasionally I'll need to troubleshoot connectivity issues, and I'd like an easy way to test connections through WinHTTP (on the off-chance that there's something specifically impeding WinHTTP and not other, unrelated connectivity commands like telnet). Does IE use WinHTTP? If not, are there any tools (preferably already integrated into Windows) that I can use? Occasionally I'll use IE, but I'm not sure if that's quite the same.
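    For what it's worth, IE uses WinINet rather than WinHTTP, so an IE test does not exercise the same stack. One quick built-in way to hit WinHTTP specifically is the WinHttp.WinHttpRequest COM object from PowerShell (a sketch; substitute the URL your application actually fetches):

    $req = New-Object -ComObject "WinHttp.WinHttpRequest.5.1"
    $req.Open("GET", "http://www.example.com/", $false)   # $false = synchronous
    $req.Send()
    "{0} {1}" -f $req.Status, $req.StatusText             # e.g. "200 OK"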

    Read the article

  • Jquery getJSON to external PHP page

    - by Pmarcoen
    I've been trying to make an AJAX request to an external server. I've learned so far that I need to use getJSON to do this, for security reasons. Now, I can't seem to make a simple call to an external page. I've tried to simplify it down as much as I can, but it's still not working. I have 2 files, test.html and test.php. My test.html makes a call like this (to localhost, for testing):

    $.getJSON("http://localhost/OutVoice/services/test.php", function(json){
        alert("JSON Data: " + json);
    });

    and I want my test.php to return a simple 'test':

    $results = "test";
    echo json_encode($results);

    I'm probably making some incredible rookie mistake, but I can't seem to figure it out. Also, if this works, how can I send data to my test.php page, as you would do with test.php?id=15? The test.html page is calling the test.php page on localhost, same directory. I don't get any errors, just no alert.
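    On the last sub-question: jQuery's getJSON takes an optional data object that it appends to the URL as a query string, so the call below is equivalent to requesting test.php?id=15 (a sketch based on the code above):

    $.getJSON("http://localhost/OutVoice/services/test.php", { id: 15 },
        function (json) {
            alert("JSON Data: " + json);
        });
    // In test.php the value then arrives as $_GET['id'].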

    Read the article
