Search Results

Search found 63182 results on 2528 pages for 'data driven tests'.

Page 112/2528 | < Previous Page | 108 109 110 111 112 113 114 115 116 117 118 119  | Next Page >

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
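    A minimal sketch of the run-length idea for a single row of tile ids (class and method names below are illustrative, not from the post); one common compromise is to keep only the map currently being edited fully decoded and re-encode its rows when saving:

    import java.util.ArrayList;
    import java.util.List;

    class RunLength {
        // Encode one row of tile ids as (count, tileId) pairs.
        static List<int[]> encodeRow(int[] row) {
            List<int[]> runs = new ArrayList<>();
            int i = 0;
            while (i < row.length) {
                int j = i;
                while (j < row.length && row[j] == row[i]) j++;   // extend the current run
                runs.add(new int[] { j - i, row[i] });            // {count, tileId}
                i = j;
            }
            return runs;
        }

        // Expand the runs back into a plain row, e.g. before editing a tile.
        static int[] decodeRow(List<int[]> runs, int width) {
            int[] row = new int[width];
            int pos = 0;
            for (int[] run : runs) {
                for (int k = 0; k < run[0]; k++) row[pos++] = run[1];
            }
            return row;
        }
    }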

    Read the article

  • Non use of persisted data – Part deux

    - by Dave Ballantyne
    In my last blog I showed how persisted data may not be used if you have used the base data in an include on an index. That wasn't the only problem I've had that showed the same symptom. Using the same code as before, I was executing something similar to the below:

    select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
    from sales.salesorderheader SOH
    join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, due to a distribution error in statistics, I found it necessary to use a table hint. In this case, I wanted to force a loop join:

    select BillToAddressID, SOD.SalesOrderDetailID, SOH.CleanedGuid
    from sales.salesorderheader SOH
    inner loop join Sales.SalesOrderDetail SOD on SOH.SalesOrderID = SOD.SalesOrderID

    But, being the diligent T-SQL developer that I am, looking at the execution plan I noticed that the ‘compute scalar’ operator was again calling the function. Again, Profiler is a more graphic way to view this… All very odd: just because I've forced a join - which has NOTHING to do with my persisted data - something is causing the data to be re-evaluated. I'm not sure if there is any easy fix you can do to the TSQL here, but again it's a lesson learned (or rather reinforced): examine the execution plan of every query you write to ensure that it is operating as you thought it would.

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to display to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those who have a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout this series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!

    Read the article

  • Generate a Word document from list data

    - by PeterBrunone
    This came up on a discussion list lately, so I threw together some code to meet the need. In short, a colleague needed to take the results of an InfoPath form survey and give them to the user in Word format. The form data was already in a list item, so it was a simple matter of using the SharePoint API to get the list item, formatting the data appropriately, and using response headers to make the client machine treat the response as MS Word content. The following rudimentary code can be run in an ASPX (or an assembly) in the 12 hive. When you link to the page, send the list name and item ID in the querystring and use them to grab the appropriate data.

    // Clear the current response headers and set them up to look like a Word doc.
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.Charset = "";
    HttpContext.Current.Response.ContentType = "application/msword";
    string strFileName = "ThatWordFileYouWanted" + ".doc";
    HttpContext.Current.Response.AddHeader("Content-Disposition", "inline;filename=" + strFileName);

    // Using the current site, get the List by name and then the Item by ID (both from the querystring).
    string myListName = HttpContext.Current.Request.QueryString["listName"];
    int myID = Convert.ToInt32(HttpContext.Current.Request.QueryString["itemID"]);
    SPSite oSite = SPContext.Current.Site;
    SPWeb oWeb = oSite.OpenWeb();
    SPList oList = oWeb.Lists[myListName];
    SPListItem oListItem = oList.Items.GetItemById(myID);

    // Build a string with the data -- format it with HTML if you like.
    StringBuilder strHTMLContent = new StringBuilder();
    // *
    // Here's where you pull individual fields out of the list item.
    // *

    // Once everything is ready, spit it out to the client machine.
    HttpContext.Current.Response.Write(strHTMLContent);
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.End();

    Read the article

  • PASS Summit Preconference and Sessions

    - by Davide Mauri
    I’m very pleased to announce that I’ll be delivering a Pre-Conference at PASS Summit 2012. I’ll speak about Business Intelligence again (as I did in 2010), but this time I’ll focus only on the Data Warehouse, since it’s a big topic all on its own. I’ll discuss not only what a Data Warehouse is and how it can be modeled and built, but also how its development can be approached using an Agile approach, bringing the experience I’ve gathered in this field. Building the Agile Data Warehouse with SQL Server 2012 http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=2821 I’m sure you’ll like it, especially if you’re starting to create a BI solution and you’re wondering what a Data Warehouse is, whether it is still useful nowadays that everyone talks about Self-Service BI and In-Memory databases, and what’s the correct path to follow in order to have a successful project up and running. Besides this Preconference, I’ll also deliver a regular session, this time related to database administration, monitoring and tuning: DMVs: Power in Your Hands http://www.sqlpass.org/summit/2012/Sessions/SessionDetails.aspx?sid=3204 Here we’ll dive into the most useful DMVs, so that you’ll see how they can help in everyday management to discover, understand and optimize your SQL Server installation, from the server itself down to the single query. See you there!

    Read the article

  • Is the design notion of layers contrived?

    - by Bruce
    Hi all I'm reading through Eric Evans' awesome work, Domain-Driven Design. However, I can't help feeling that the 'layers' model is contrived. To expand on that statement, it seems as if it tries to shoe-horn various concepts into a specific, neat model, that of layers talking to each other. It seems to me that the layers model is too simplified to actually capture the way that (good) software works. To expand further: Evans says: "Partition a complex program into layers. Develop a design within each layer that is cohesive and that depends only on the layers below. Follow standard architectural patterns to provide loose coupling to the layers above." Maybe I'm misunderstanding what 'depends' means, but as far as I can see, it can either mean a) Class X (in the UI for example) has a reference to a concrete class Y (in the main application) or b) Class X has a reference to a class Y-ish object providing class Y-ish services (ie a reference held as an interface). If it means (a), then this is clearly a bad thing, since it defeats re-using the UI as a front-end to some other application that provides Y-ish functionality. But if it means (b), then how is the UI any more dependent on the application, than the application is dependent on the UI? Both are decoupled from each other as much as they can be while still talking to each other. Evans' layer model of dependencies going one way seems too neat. First, isn't it more accurate to say that each area of the design provides a module that is pretty much an island to itself, and that ideally all communication is through interfaces, in a contract-driven/responsibility-driven paradigm? (ie, the 'dependency only on lower layers' is contrived). Likewise with the domain layer talking to the database - the domain layer is as decoupled (through DAO etc) from the database as the database is from the domain layer. Neither is dependent on the other, both can be swapped out. Second, the idea of a conceptual straight line (as in from one layer to the next) is artificial - isn't there more a network of intercommunicating but separate modules, including external services, utility services and so on, branching off at different angles? Thanks all - hoping that your responses can clarify my understanding on this..
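    To make option (b) concrete, here is a small illustrative sketch (type names are invented, not from the book): the UI holds only an interface reference, and the concrete implementation is supplied from outside, so swapping either side does not touch the other.

    // Defined alongside the UI (or in a shared contracts module).
    interface OrderService {
        void placeOrder(String productId, int quantity);
    }

    // The UI only ever compiles against the interface.
    class OrderScreen {
        private final OrderService service;

        OrderScreen(OrderService service) {
            this.service = service;
        }

        void onBuyClicked(String productId) {
            service.placeOrder(productId, 1);
        }
    }

    // The application layer supplies one concrete implementation;
    // it could be replaced without touching OrderScreen.
    class DefaultOrderService implements OrderService {
        public void placeOrder(String productId, int quantity) {
            // domain logic / persistence goes here
        }
    }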

    Read the article

  • Winforms Which Design Pattern / Agile Methodology to choose

    - by ZedBee
    I have developed desktop (WinForms) applications without following any proper design pattern or agile methodology. Now I have been given the task to re-write an existing ERP application in C# (WinForms). I have been reading about Domain-Driven Design, Scrum, Extreme Programming, layered architecture, etc. It's quite confusing, and really hard (because of time limitations) to go and try each and every method and then decide which way to go. It's very hard for me to understand the bigger picture and see which pattern and agile methodology to follow. To be more specific, what I want to know is: Is it possible to follow Domain-Driven Design and still be agile? Should I choose Extreme Programming or Scrum in this specific scenario? Where do MVP and MVVM fit, and which one would be a better option for me?
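    On the MVP part of the question, a minimal sketch of the passive-view style of the pattern (all type names are invented; the real thing would be WinForms/C#, but the shape is the same): the presenter owns the logic and talks to the form only through an interface, which also makes it testable without showing any UI.

    // The form implements this; the presenter never sees a concrete form class.
    interface CustomerView {
        void showCustomerName(String name);
        void showError(String message);
    }

    interface CustomerRepository {
        Customer findById(int id);
    }

    class Customer {
        private final String name;
        Customer(String name) { this.name = name; }
        String getName() { return name; }
    }

    class CustomerPresenter {
        private final CustomerView view;
        private final CustomerRepository repository;

        CustomerPresenter(CustomerView view, CustomerRepository repository) {
            this.view = view;
            this.repository = repository;
        }

        void loadCustomer(int id) {
            Customer c = repository.findById(id);
            if (c == null) {
                view.showError("Customer " + id + " not found");
            } else {
                view.showCustomerName(c.getName());
            }
        }
    }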

    Read the article

  • How do I configure integration tests using rspec 2?

    - by Jamie Monserrate
    I need to have different settings for my unit tests and different settings for my integration tests. Example For unit tests, I would like to do WebMock.disable_net_connect!(:allow_localhost => true) And for integration tests, I would like to do WebMock.allow_net_connect! Also, before the start of an integration test, I would like to make sure that solr is started. Hence I want to be able to call config.before(:suite) do SunspotStarter.start end BUT, only for integration tests. I do not want to start my solr if its a unit test. How do I keep their configurations separate? Right now, I have solved this by keeping my integration tests in a folder outside the spec folder, which has its own spec_helper. Is there any better way?

    Read the article

  • How can data not stored in a DB be accessed from any activity in Android?

    - by jul
    hi, I'm passing data to a ListView to display some restaurant names. Now when clicking on an item I'd like to start another activity to display more restaurant data. I'm not sure about how to do it. Shall I pass all the restaurant data in a bundle through the intent object? Or shall I just pass the restaurant id and get the data in the other activity? In that case, how can I access my restaurantList from the other activity? In any case, how can I get data from the restaurant I clicked on (the view only contains the name)? Any help, pointers welcome! Thanks Jul

    ListView lv = (ListView) findViewById(R.id.listview);
    lv.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, restaurantList.getRestaurantNames()));
    lv.setOnItemClickListener(new OnItemClickListener() {
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            Intent i = new Intent(Atable.this, RestaurantEdit.class);
            Bundle b = new Bundle();
            //b.putInt("id", ? );
            startActivityForResult(i, ACTIVITY_EDIT);
        }
    });

    RestaurantList.java

    package org.digitalfarm.atable;

    import java.util.ArrayList;
    import java.util.List;

    public class RestaurantList {
        private List<Restaurant> restaurants = new ArrayList<Restaurant>();

        public List<Restaurant> getRestaurants() {
            return this.restaurants;
        }

        public void setRestaurants(List<Restaurant> restaurants) {
            this.restaurants = restaurants;
        }

        public List<String> getRestaurantNames() {
            List<String> restaurantNames = new ArrayList<String>();
            for (int i = 0; i < this.restaurants.size(); i++) {
                restaurantNames.add(this.restaurants.get(i).getName());
            }
            return restaurantNames;
        }
    }

    Restaurant.java

    package org.digitalfarm.atable;

    public class Restaurant {
        private int id;
        private String name;
        private float latitude;
        private float longitude;

        public int getId() { return this.id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return this.name; }
        public void setName(String name) { this.name = name; }
        public float getLatitude() { return this.latitude; }
        public void setLatitude(float latitude) { this.latitude = latitude; }
        public float getLongitude() { return this.longitude; }
        public void setLongitude(float longitude) { this.longitude = longitude; }
    }
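    One common approach, sketched below with a made-up extra key, is to pass only the restaurant id through the Intent and let the second activity look the rest up; the clicked row's position indexes into the same list that backs the adapter.

    lv.setOnItemClickListener(new OnItemClickListener() {
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            // The adapter was built from restaurantList in the same order,
            // so 'position' identifies the restaurant that was clicked.
            Restaurant clicked = restaurantList.getRestaurants().get(position);

            Intent i = new Intent(Atable.this, RestaurantEdit.class);
            i.putExtra("restaurantId", clicked.getId());   // "restaurantId" is an arbitrary key
            startActivityForResult(i, ACTIVITY_EDIT);
        }
    });

    // In RestaurantEdit.onCreate():
    // int restaurantId = getIntent().getIntExtra("restaurantId", -1);
    // ...then fetch the Restaurant from a shared store (e.g. the Application object or a database).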

    Read the article

  • What are best practices for collecting, maintaining and ensuring accuracy of a huge data set?

    - by Kyle West
    I am posing this question looking for practical advice on how to design a system. Sites like amazon.com and pandora have and maintain huge data sets to run their core business. For example, amazon (and every other major e-commerce site) has millions of products for sale, images of those products, pricing, specifications, etc. etc. etc. Ignoring the data coming in from 3rd party sellers and the user generated content, all that "stuff" had to come from somewhere and is maintained by someone. It's also incredibly detailed and accurate. How? How do they do it? Is there just an army of data-entry clerks, or have they devised systems to handle the grunt work? My company is in a similar situation. We maintain a huge (tens of millions of records) catalog of automotive parts and the cars they fit. We've been at it for a while now and have come up with a number of programs and processes to keep our catalog growing and accurate; however, it seems like to grow the catalog to x items we need to grow the team to y. I need to figure out some ways to increase the efficiency of the data team, and hopefully I can learn from the work of others. Any suggestions are appreciated; even better would be links to content I could spend some serious time reading. THANKS! Kyle

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs of a database project using Hibernate. Design #1. (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation. A data provider implementation could access data in a database, or an XML file, or a service, or something else. The user of a data provider does not need to know about it. (2) Create a database library with Hibernate. This library implements the data provider interface in (1). The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes. One in the general data provider interface - let's call them DPI-Objects - the other set is used in the database library, exclusively for entity/attribute mapping in Hibernate - let's call them H-Objects. In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert H-Objects into DPI-Objects. Design #2. Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib. So the user of the database library needs to be aware of Hibernate. I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!
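    For what it's worth, a minimal sketch of what Design #1 tends to look like in code (all names are invented for illustration, Hibernate 3-style API): the provider interface exposes plain DPI-style objects, and the Hibernate-backed DAO converts its mapped H-Objects into them, so callers never see Hibernate types.

    // General data-provider side: no Hibernate types leak out.
    class CustomerDto {
        final long id;
        final String name;
        CustomerDto(long id, String name) { this.id = id; this.name = name; }
    }

    interface CustomerDao {
        CustomerDto findById(long id);
    }

    // Hibernate-mapped entity kept internal to the database library.
    class CustomerEntity {
        private long id;
        private String name;
        public long getId() { return id; }
        public String getName() { return name; }
    }

    // Hibernate-backed implementation: H-Object -> DPI-Object conversion happens here.
    class HibernateCustomerDao implements CustomerDao {
        private final org.hibernate.SessionFactory sessionFactory;

        HibernateCustomerDao(org.hibernate.SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public CustomerDto findById(long id) {
            CustomerEntity e = (CustomerEntity) sessionFactory.getCurrentSession()
                    .get(CustomerEntity.class, id);
            return e == null ? null : new CustomerDto(e.getId(), e.getName());
        }
    }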

    Read the article

  • How do I bind an iTunes style source list to an NSTableView using Core Data?

    - by Austin
    I have an iTunes-style interface in my application: a source list (NSOutlineView) on the left that contains different libraries and playlists, with an NSTableView on the right side of the interface displaying information for "Presentations". Similar to iTunes, I am showing the same type of information in the table view whether a library or a playlist is selected (title, author, date created, etc). I currently have an NSArrayController connected to my NSTableView and was setting the fetch predicate based on what was selected in the source list. This works fine when selecting a library, because I can just set the fetch predicate to filter by the "type" field in my Presentation Core Data entity. When I try to adjust the fetch predicate for a playlist, however, it doesn't look like there is any way to set it, because I've got a table in between Playlists and Presentations to keep up with the order within the Playlist. According to the Apple docs, these types of predicates are not doable with Core Data (it basically doesn't support multiple inner joins). Below is the relevant portion of my data model. Is my data model set up incorrectly? Should I drop the NSArrayController and handle connecting the NSTableView up by hand? I'm trying to figure out if there is a simple fix, or really a design flaw.

    Read the article

  • Handling missing/incomplete data in R--is there function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well. For instance: Many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs: mean(c(5,6,12,87,9,NA,43,67), na.rm=T) But if you want to deal with NAs before the function call, you need to do something like this: to remove each 'NA' from a vector: vx = vx[!is.na(vx)] to remove each 'NA' from a vector and replace it w/ a '0': ifelse(is.na(vx), 0, vx) to remove each entire row that contains an 'NA' from a data frame: dfx = dfx[complete.cases(dfx),] All of these functions permanently remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete.cases', yet that column has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?

    Read the article

  • How to write these two queries for a simple data warehouse, using ANSI SQL?

    - by morpheous
    I am writing a simple data warehouse that will allow me to query the table to observe periodic (say weekly) changes in data, as well as changes in the change of the data (e.g. the week-to-week change in the weekly sale amount). For the purposes of simplicity, I will present very simplified (almost trivialized) versions of the tables I am using here. The sales data table is a view and has the following structure:

    CREATE TABLE sales_data (
        sales_time date NOT NULL,
        sales_amt double NOT NULL
    )

    For the purpose of this question, I have left out other fields you would expect to see - like product_id, sales_person_id, etc. - as they have no direct relevance to this question. AFAICT, the only fields that will be used in the query are the sales_time and the sales_amt fields (unless I am mistaken). I also have a date dimension table with the following structure:

    CREATE TABLE date_dimension (
        id integer NOT NULL,
        datestamp date NOT NULL,
        day_part integer NOT NULL,
        week_part integer NOT NULL,
        month_part integer NOT NULL,
        qtr_part integer NOT NULL,
        year_part integer NOT NULL
    );

    which partitions dates into reporting ranges. I need to write queries that will allow me to do the following: Return the change in week-on-week sales_amt for a specified period. For example, the change between sales today and sales N days ago - where N is a positive integer (N == 7 in this case). Return the change in the change of sales_amt for a specified period. For example, in (1) we calculated the week-on-week change. Now we want to know how that change differs from the (week-on-week) change calculated last week. I am stuck at this point, however, as SQL is my weakest skill. I would be grateful if an SQL master could explain how I can write these queries in a DB-agnostic way (i.e. using ANSI SQL).

    Read the article

  • [C#] How to receive uncrackable data or so ? ;P

    - by Prix
    Hi, I am working on a C# application that communicates with my website and retrieves some information from it, using SSL, which is working just fine. Now what I want/need is a way to receive encrypted or codified or obfuscated data such that if someone cracks my application they will not be able to decrypt the data, because it needs something from the server (api, website) - but the application still needs to decrypt it in order to use it... Initially I was thinking of an internal RSA key pair to send and receive the encrypted data, but let's consider that someone has cracked the application: they could just replace those keys with keys they have made. So I was looking into some methods but haven't found, or been able to think of, any way to harden this... I was learning about RSA, encryption and such, started developing this as a self-learning exercise and got involved with it, and now I am trying to figure out a way to receive data like that... I have considered obfuscating and compiling my code with packers etc., but this is not about packing... I am more interested in knowing a better way to secure what I described. I know it may be (or is) impossible, but I am still looking for some approach. I would appreciate advice, suggestions and C# code samples; if you need more information or anything, please let me know.
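    The question is about C#, but the RSA key-pair idea mentioned above is language-neutral; below is a minimal sketch in Java of encrypting a small payload with the public key so that only the private-key holder can read it. It is illustrative only (key handling is simplified, names are made up) and it does not solve the problem the poster raises, namely that a cracked client can swap the embedded keys.

    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import javax.crypto.Cipher;

    public class RsaSketch {
        public static void main(String[] args) throws Exception {
            // In a real setup the key pair lives on the server; only the public key ships with the client.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair pair = gen.generateKeyPair();

            byte[] message = "small payload".getBytes(StandardCharsets.UTF_8);

            // One side encrypts with the public key...
            Cipher enc = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
            byte[] ciphertext = enc.doFinal(message);

            // ...and only the holder of the private key can decrypt.
            Cipher dec = Cipher.getInstance("RSA/ECB/PKCS1Padding");
            dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
            byte[] plaintext = dec.doFinal(ciphertext);

            System.out.println(new String(plaintext, StandardCharsets.UTF_8));
        }
    }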

    Read the article

  • How do I authenticate an ADO.NET Data Service?

    - by lsb
    Hi! I've created an ADO.Net Data Service hosted in an Azure worker role. I want to pass credentials from a simple console client to the service, then validate them using a QueryInterceptor. Unfortunately, the credentials don't seem to be making it over the wire. The following is a simplified version of the code I'm using, starting with the DataService on the server:

    using System;
    using System.Data.Services;
    using System.Linq.Expressions;
    using System.ServiceModel;
    using System.Web;

    namespace Oslo.Worker
    {
        [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
        public class AdminService : DataService<OsloEntities>
        {
            public static void InitializeService(IDataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.All);
                config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            }

            [QueryInterceptor("Pairs")]
            public Expression<Func<Pair, bool>> OnQueryPairs()
            {
                // This doesn't work!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
                if (HttpContext.Current.User.Identity.Name != "ADMIN")
                    throw new Exception("Ooops!");
                return p => true;
            }
        }
    }

    Here's the AdminHost I'm using to instantiate the AdminService in my Azure worker role:

    using System;
    using System.Data.Services;

    namespace Oslo.Worker
    {
        public class AdminHost : DataServiceHost
        {
            public AdminHost(Uri baseAddress)
                : base(typeof(AdminService), new Uri[] { baseAddress })
            {
            }
        }
    }

    And finally, here's the client code.

    using System;
    using System.Data.Services.Client;
    using System.Net;
    using Oslo.Shared;

    namespace Oslo.ClientTest
    {
        public class AdminContext : DataServiceContext
        {
            public AdminContext(Uri serviceRoot, string userName, string password)
                : base(serviceRoot)
            {
                Credentials = new NetworkCredential(userName, password);
            }

            public DataServiceQuery<Order> Orders
            {
                get { return base.CreateQuery<Pair>("Orders"); }
            }
        }
    }

    I should mention that the code works great, with the single exception that the credentials are not being passed over the wire. Any help in this regard would be greatly appreciated! Thanks....

    Read the article

  • Setup database for Unit tests with Spring, Hibernate and Spring Transaction Support

    - by Michael Bulla
    I want to test integration of dao-layer with my service-layer in a unit-test. So I need to setup some data in my database (hsql). For this setup I need an own transaction at the begining of my testcase to ensure that all my setup is really commited to database before starting my testcase. So here's what I want to achieve: // NotTranactional public void doTest { // transaction begins setup database // commit transaction service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED) } Here is my not working code: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"/asynchUnit.xml"}) @DirtiesContext(classMode=ClassMode.AFTER_EACH_TEST_METHOD) public class ReceiptServiceTest implements ApplicationContextAware { @Autowired(required=true) private UserHome userHome; private ApplicationContext context; @Before @Transactional(propagation=Propagation.REQUIRED) public void init() throws Exception { User user = InitialCreator.createUser(); userHome.persist(user); } @Test public void testDoSomething() { ... } } Leading to this exception: org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:687) at de.diandan.asynch.modell.GenericHome.getSession(GenericHome.java:40) at de.diandan.asynch.modell.GenericHome.persist(GenericHome.java:53) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:318) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:196) at $Proxy28.persist(Unknown Source) at de.diandan.asynch.service.ReceiptServiceTest.init(ReceiptServiceTest.java:63) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) at org.springframework.test.context.junit4.statements.RunBeforeTestMethodCallbacks.evaluate(RunBeforeTestMethodCallbacks.java:74) at org.springframework.test.context.junit4.statements.RunAfterTestMethodCallbacks.evaluate(RunAfterTestMethodCallbacks.java:83) at org.springframework.test.context.junit4.statements.SpringRepeat.evaluate(SpringRepeat.java:72) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:231) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191) at 
    org.junit.runners.ParentRunner.access$000(ParentRunner.java:42) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184) at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:71) at org.junit.runners.ParentRunner.run(ParentRunner.java:236) at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:174) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

    I don't know what the right way is to get the transaction around the database setup. What I tried:

    @Before
    @Transactional(propagation=Propagation.REQUIRED)
    public void setup() { setup database }

    - Spring does not seem to start a transaction in @Before-annotated methods. Beyond that, that's not what I really want, because there are a lot of methods in my test class which need a slightly different setup, so I would need several of those init methods.

    @Transactional(propagation=Propagation.REQUIRED)
    public void setup() { setup database }

    public void doTest {
        init();
        service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED)
    }

    -- init does not seem to get started in a transaction. What I don't want to do:

    public void doTest {
        // doing my own transaction-handling
        setup database
        // doing my own transaction-handling
        service.doStuff() // doStuff is annotated @Transactional(propagation=Propagation.REQUIRED)
    }

    -- mixing Spring's transaction handling and my own seems like a recipe for pain.

    @Transactional(propagation=Propagation.REQUIRED)
    public void doTest {
        setup database
        service.doStuff()
    }

    -- I want to test a situation as close to the real one as possible, so my service should start with a clean session and no open transaction. So what's the right way to set up the database for my test case?
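    One possible approach (a sketch only, assuming a PlatformTransactionManager bean is available in asynchUnit.xml) is to commit the setup data explicitly with Spring's TransactionTemplate inside the @Before method, so it is already in the database before the @Transactional service call opens its own transaction:

    import org.junit.Before;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.transaction.PlatformTransactionManager;
    import org.springframework.transaction.TransactionStatus;
    import org.springframework.transaction.support.TransactionCallbackWithoutResult;
    import org.springframework.transaction.support.TransactionTemplate;

    @Autowired
    private PlatformTransactionManager transactionManager;

    @Before
    public void init() {
        // Runs the setup in its own committed transaction, independent of the test method.
        new TransactionTemplate(transactionManager).execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                User user = InitialCreator.createUser();
                userHome.persist(user);
            }
        });
    }

    Since the setup really commits, the test has to clean the data up afterwards; @DirtiesContext only rebuilds the Spring context, not the database.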

    Read the article

  • faster way to compare rows in a data frame

    - by aguiar
    Consider the data frame below. I want to compare each row with the rows below it and then take the rows that are equal in more than 3 values. I wrote the code below, but it is very slow if you have a large data frame. How could I do that faster?

    data <- as.data.frame(matrix(c(10,11,10,13,9,10,11,10,14,9,10,10,8,12,9,10,11,10,13,9,13,13,10,13,9), nrow=5, byrow=T))
    rownames(data) <- c("sample_1","sample_2","sample_3","sample_4","sample_5")

    > data
             V1 V2 V3 V4 V5
    sample_1 10 11 10 13  9
    sample_2 10 11 10 14  9
    sample_3 10 10  8 12  9
    sample_4 10 11 10 13  9
    sample_5 13 13 10 13  9

    tab <- data.frame(sample = NA, duplicate = NA, matches = NA)
    dfrow <- 1
    for(i in 1:nrow(data)) {
      sample <- data[i, ]
      for(j in (i+1):nrow(data)) if(i+1 <= nrow(data)) {
        matches <- 0
        for(V in 1:ncol(data)) {
          if(data[j,V] == sample[,V]) {
            matches <- matches + 1
          }
        }
        if(matches > 3) {
          duplicate <- data[j, ]
          pair <- cbind(rownames(sample), rownames(duplicate), matches)
          tab[dfrow, ] <- pair
          dfrow <- dfrow + 1
        }
      }
    }

    > tab
        sample duplicate matches
    1 sample_1  sample_2       4
    2 sample_1  sample_4       5
    3 sample_2  sample_4       4

    Read the article

  • CRM@Oracle Series: Showcasing Innovation with Oracle Customer Hub

    - by tony.berk
    When is having too many customers a challenge? It is not something too many people would complain about. But from a data perspective, one challenge is to keep each customer's data consistent across multiple enterprise systems such as CRM, ERP, and all of your other related applications. Buckle your seat belts, we are going a bit technical today... If you have ever tried it, you know it isn't easy. If you haven't, don't go there alone! Customer data integration projects are challenging and, depending on the environment, require sharp, innovative people to succeed. Want to hear from some guys who have done it and succeeded? Here is an interview with Dan Lanir and Afzal Asif from Oracle's Applications IT CRM Systems group on implementing Oracle Customer Hub and innovation. For more interesting discussions on innovation, check out the Oracle Innovation Showcase.

    Read the article

  • Copying Columns from Grid to Clipboard in SQL Developer

    - by thatjeffsmith
    There are several ways to get data from a query or a table|view to the clipboard. You know the tried and true, copy and paste. But what if you only want one or more columns, not every column? There are several ways to do this, let’s see if we can’t identify all of them. Write your query to only include the data you want Obvious? Yes. Needed to be said? Definitely. The best tuning tip is to only ask for the data you need, only when you absolutely need it. But let’s look at a few more practical ways to do this. Hide the unwanted columns Mouse right click on an column header. In the context menu, select ‘Columns.’ Hide the columns you don’t want. Copy and paste. WYSIWYG Grids, Hide Columns and Filter Rows Mouse select the columns Obvious, but a bit painful. For a very large dataset, you’ll be holding down the Shift and PageDown buttons – but it works. Remember to use Ctrl+Shift+C to get the column headers with the data. Use the Export Wizard This used to be called ‘Unload’ – agreed, not a great name. So, we changed it. In a grid, right mouse click on the data, and on the context menu, select ‘Export…’ Select your format – I suggest ‘delimited’ or ‘fixed’ for copying data to the clipboard. You can export to the clipboard, yes you can! Click ‘Next.’ Click in the Columns dialog, and choose the columns you want copied. Trim the columns you don't want copied Click ‘Finish.’ Alt or Ctrl tab to your window or application of choice. And Paste! "FIRST_NAME" "LAST_NAME" "Donald" "OConnell" "Douglas" "Grant" "Jennifer" "Whalen" "Pat" "Fay" "Susan" "Mavris" "William" "Gietz" "Alexander" "Hunold" "Bruce" "Ernst" "David" "Austin" "Valli" "Pataballa" "Diana" "Lorentz" "Daniel" "Faviet" "John" "Chen" "Ismael" "Sciarra" "Jose Manuel" "Urman" "Luis" "Popp" "Alexander" "Khoo" "Shelli" "Baida" "Sigal" "Tobias" "Guy" "Himuro" "Karen" "Colmenares" "Matthew" "Weiss" "Adam" "Fripp" "Payam" "Kaufling" "Shanta" "Vollman" "Kevin" "Mourgos" "Julia" "Nayer" "Irene" "Mikkilineni" ... There’s probably at least 2 or 3 more ways, but… But, try these and let me know how we can improve things. I’ve already gotten a request to be able to include the SQL text used to populate the dataset on the the copy to clipboard, and it’s now on our to-do list

    Read the article

  • And the Winners of Fusion Middleware Innovation Awards in Data Integration are…

    - by Irem Radzik
    At OpenWorld, we announced the winners of Fusion Middleware Innovation Awards 2012. Raymond James and Morrison Supermarkets were selected for the data integration category for their innovative use of Oracle’s data integration products and the great results they have achieved. In this blog I would like to briefly introduce you to these award winning projects. Raymond James is a diversified financial services company, which provides financial planning, wealth management, investment banking, and asset management. They are using Oracle GoldenGate and Oracle Data Integrator to feed their operational data store (ODS), which supports application services across the enterprise. A major requirement for their project was low data latency, as key decisions are made based on the data in the ODS. They were able to fulfill this requirement due to Oracle Data Integrator’s integrated solution with Oracle GoldenGate. Oracle GoldenGate captures changed data from different systems including Oracle Database, HP NonStop and Microsoft SQL Server into a single data store on SQL Server 2008. Oracle Data Integrator provides data transformations for the ODS. Leveraging ODI’s integration with GoldenGate, Raymond James now sees a 9 second median latency (from source commit to ODS target commit). The ODS solution delivers high quality, accurate data for consuming applications such as Raymond James’ next generation client and portfolio management systems as well as real-time operational reporting. It enables timely information for making better decisions. There are more benefits Raymond James achieved with this implementation of Oracle’s data integration solution. The software developers and architects of this solution, Tim Garrod and Ryan Fonnett, have told us during their presentation at OpenWorld that they also reduced application complexity significantly while improving developer productivity through trusted operational services. They were able to utilize CDC to generate alerts for business users, and for applications (for example for cache hydration mechanisms). One cool innovation example among many in this project is that using ODI's flexible architecture, Tim and Ryan could build 24/7 self-healing processes. And these processes have hardly failed; the integration processes fix errors themselves. Pretty amazing, and a great solution for environments that need such reliability and availability. (You can see Tim and Ryan’s photo with the Innovation Award above.) The other winner this year in the data integration category, Morrison Supermarkets, is the UK’s 4th largest grocery retailer. The company has been migrating all their legacy applications on to a new-world application set based on Oracle and consolidating all BI on to a single Oracle platform.
The company recently implemented Oracle Exadata as the data warehouse engine and uses Oracle Business Intelligence EE. Their goal with deploying GoldenGate and ODI was to provide BI data to the enterprise in a way that it also supports operational decision making requirements from a wide range of Oracle based ERP applications such as E-Business Suite, PeopleSoft, Oracle Retail Suite. They use GoldenGate’s log-based change data capture capabilities and Oracle Data Integrator to populate the Oracle Retail Data Model. The electronic point of sale (EPOS) integration solution they built processes over 80 million transactions/day at busy periods in near real time (15 mins). It provides valuable insight to Retail and Commercial teams for both intra-day and historical trend analysis. As I mentioned in yesterday’s blog, the right data integration platform can transform the business. Here is another example: The point-of-sale integration enabled the grocery chain to optimize its stock management, leading to another award: Morrisons won the Grocer 33 award in 2012 - beating all other major UK supermarkets in product availability. Congratulations, Morrisons,on another award! Celebrating the innovation and the success of our customers with Oracle’s data integration products was definitely a highlight of Oracle OpenWorld for me. I look forward to hearing more from Raymond James, Morrisons, and the other customers that presented their data integration projects at OpenWorld, on how they are creating more value for their organizations.

    Read the article

  • Announcing Sesame Data Browser

    On the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite. Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just revamped dedicated website: http://odata.org. There you'll find out about the OData protocol, which allows you...

    Read the article

  • WCF Error when using “Match Data” function in MDS Excel AddIn

    - by Davide Mauri
    If you’re using MDS and DQS with the Excel integration, you may get an error when trying to use the “Match Data” feature, which uses DQS to help identify duplicate data in your data set. The error is quite obscure: you have to enable WCF error reporting to see the error details, and you’ll discover that they are related to some missing permissions in the MDS and DQS_STAGING_DATA databases. To fix the problem you just have to grant the needed permissions, as the following script does:

    use MDS
    go
    GRANT SELECT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    GRANT INSERT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    GRANT DELETE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    GRANT UPDATE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]

    USE [DQS_STAGING_DATA]
    GO
    ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [VMSRV02\mdsweb]
    ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [VMSRV02\mdsweb]
    ALTER AUTHORIZATION ON SCHEMA::[db_ddladmin] TO [VMSRV02\mdsweb]
    GO

    Here “VMSRV02\mdsweb” is the user you configured for MDS Service execution. If you don’t remember it, you can just check which account has been assigned to the IIS application pool that your MDS website is using.

    Read the article

  • New Exadata Book Available Soon

    - by Rob Reynolds
    Oracle Press is set to release the first book on data warehouse performance and Exadata on March 14th. Achieving Extreme Performance with Oracle Exadata, by my colleagues Rick Greenwald, Robert Stackowiak, Maqsood Alam, and Mans Bhuller, will be available at your favorite booksellers next week. I've seen a sneak peek of the content in this book and it's a great way to fully grasp the power of Exadata and how to best apply it to achieve extreme data warehouse performance. From the publisher's description: Achieving Extreme Performance with Oracle Exadata and the Sun Oracle Database Machine is filled with best practices for deployments, hardware sizing, architecting the database machine environments for maximum availability, and backup and recovery. Oracle Database 11gR2 features used within these offerings, as well as migration options and paths for Oracle and non-Oracle databases to Oracle Exadata are covered. This Oracle Press guide also discusses architecture, administration, maintenance, monitoring, and tuning of Oracle Exadata Storage Servers and the Sun Oracle Database Machine. If your company is considering Exadata, or if you need more horsepower out of your data warehouse, I highly recommend grabbing a copy of this book next week.

    Read the article
