Search Results

Search found 58823 results on 2353 pages for 'data profiling'.

  • Linux C++: how to profile time wasted due to cache misses?

    - by anon
    I know that I can use gprof to benchmark my code. However, I have this problem -- I have a smart pointer that has an extra level of indirection (think of it as a proxy object). As a result, I have this extra layer that affects pretty much all functions, and screws with caching. Is there a way to measure the time my CPU wastes due to cache misses? Thanks!
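
    On Linux, one way to answer this directly is to count hardware cache misses around the suspect code path with the perf_event_open(2) interface -- the same counters that "perf stat -e cache-misses" reads, and the events that valgrind's cachegrind tool simulates. Below is a minimal sketch, assuming Linux with a reasonably recent kernel; the workload comment marks where the smart-pointer-heavy code would go:

        #include <linux/perf_event.h>
        #include <sys/ioctl.h>
        #include <sys/syscall.h>
        #include <unistd.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Thin wrapper: glibc does not provide a perf_event_open symbol. */
        static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                                    int cpu, int group_fd, unsigned long flags)
        {
            return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
        }

        int main(void)
        {
            struct perf_event_attr attr;
            memset(&attr, 0, sizeof(attr));
            attr.type = PERF_TYPE_HARDWARE;
            attr.size = sizeof(attr);
            attr.config = PERF_COUNT_HW_CACHE_MISSES;  /* last-level misses */
            attr.disabled = 1;
            attr.exclude_kernel = 1;

            int fd = (int)perf_event_open(&attr, 0, -1, -1, 0);  /* this process */
            if (fd == -1) { perror("perf_event_open"); return 1; }

            ioctl(fd, PERF_EVENT_IOC_RESET, 0);
            ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

            /* ... run the smart-pointer-heavy workload here ... */

            ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
            uint64_t misses = 0;
            read(fd, &misses, sizeof(misses));
            printf("cache misses: %llu\n", (unsigned long long)misses);
            close(fd);
            return 0;
        }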

    Read the article

  • How to structure a Visual Studio project for the data access layer

    - by Akk
    I currently have a project that uses various DB access technologies, mainly for showcasing or demos. Currently we have:

    Namespace App.Data (App.Data.dll)
      Folder NHibernate
      Folder EntityFramework
      Folder LinqToSql

    The above structure is fine while we only use SQL Server as the DB, but going forward we will be including Oracle, MySql, etc. So what would be a better structure with this in mind? I thought about:

    Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
      Folder NHibernate
      Folder EntityFramework
      Folder LinqToSql

    Or would it just be better to have separate assemblies for each database and access technology?

    Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
    Namespace App.Data.SqlServer.EntityFramework (App.Data.SqlServer.EntityFramework.dll)
    Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
    Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)

    Read the article

  • Using WPFPerf to profile a WPF 4.0 application doesn't show me any information

    - by Adrian
    I am trying to use WPFPerf to profile a WPF 4.0 application (I have the latest WPFPerf, which should work on WPF 4.0 apps). I start the Visual Profiler tool from WPFPerf, then start my application, but after that nothing happens and the element tree in the Visual Profiler stays empty. No error message is shown. Can anyone tell me what I am not doing right? As additional information, when I try to analyze my .exe assembly, or any other assembly from my application, I get a BadImageFormatException saying that the assembly was built with a newer version of .NET. From the download page http://go.microsoft.com/fwlink/?LinkID=191420 I see that this version of WPFPerf should be fine for my app.

    Read the article

  • Are there any frameworks for data subscription and update?

    - by Timothy Pratley
    There is one server with multiple clients. The clients are viewing subsets of the server's entire data. If the data that a client is viewing changes, the client should be informed of the changes so that it displays the current data. Example: two clients are viewing a list of users in an administration screen. One client adds a new user to the list and modifies the permissions of another user. The other client sees the changes propagated to their view.

    Read the article

  • How can I profile a subroutine without using modules?

    - by Zaid
    I'm tempted to relabel this question 'Look at this brick. What type of house does it belong to?' Here's the situation: I've effectively been asked to profile some subroutines with access to neither profilers (not even Devel::DProf) nor Time::HiRes. The purpose of this exercise is to 'locate' bottlenecks. At the moment, I'm sprinkling print statements at the beginning and end of each sub that log entries and exits to a file, along with the result of the time function. Not ideal, but it's the best I can do given the circumstances. At the very least it'll allow me to see how many times each sub is called. The code is running under Unix. The closest thing I can find to my need is perlfaq8, but that doesn't seem to help (I don't know how to make a syscall, and I'm wondering whether it would affect the code's timing unpredictably). Not your typical everyday SO question...

    Read the article

  • Using gprof with sockets

    - by Chris
    I have a program I want to profile with gprof. The problem (seemingly) is that it uses sockets. So I get things like this: ::select(): Interrupted system call I hit this problem a while back, gave up, and moved on. But I would really like to be able to profile my code, using gprof if possible. What can I do? Is there a gprof option I'm missing? A socket option? Is gprof totally useless in the presence of these types of system calls? If so, is there a viable alternative? EDIT: Platform: Linux 2.6 (x64) GCC 4.4.1 gprof 2.19
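
    For what it's worth, that error is a side effect rather than a failure: gprof's sampling is driven by a profiling timer, and each SIGPROF tick can interrupt a blocking call such as select() with EINTR. Retrying interrupted calls lets the program behave normally under the profiler. A minimal sketch, assuming a POSIX system (the wrapper name is invented here):

        #include <errno.h>
        #include <sys/select.h>

        /* Retry select() when a signal -- e.g. gprof's SIGPROF sampling
           tick -- interrupts it with EINTR. Strictly speaking, the fd
           sets and timeout should be rebuilt before each retry, since
           select() may modify them. */
        static int select_eintr(int nfds, fd_set *rd, fd_set *wr, fd_set *ex,
                                struct timeval *tv)
        {
            int rc;
            do {
                rc = select(nfds, rd, wr, ex, tv);
            } while (rc == -1 && errno == EINTR);
            return rc;
        }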

    Read the article

  • Best way to migrate servers without losing any data and with no downtime?

    - by ina
    This is a methodology question from a freelancer, with a corollary on MySQL. Is there a way to migrate from an old dedicated server to a new one without losing any data in between, and with no downtime? In the past, I've had to lose MySQL data between the time the new server goes up (i.e., all files transferred, system up and ready) and the time I take the old server down (data is still being written to the old server until the new one takes over). There is also a short period where both are down while DNS, etc., refreshes. Is there a way for MySQL/root to easily transfer all data that was updated/inserted within a certain time frame?

    Read the article

  • Is there a difference between transient properties defined in the data model and those defined in the custom subclass?

    - by mystify
    I was reading that setting the value of a transient property always results in marking the managed object as "dirty". However, what I don't get is this: if I make a subclass of NSManagedObject and use some extra properties which I don't need to be persisted, how does Core Data know about them, and how can it mark the object as dirty when I access them? Again, they're not defined in the data model, so Core Data has no real hint that they are there. Or does Core Data use some kind of introspection to analyze my custom class and figure out what properties I have in there?

    Read the article

  • Timer to find elapsed time in a function call in C

    - by Mohit Nanda
    I want to calculate the time elapsed during a function call in C, to a precision of 1 nanosecond. Is there a timer function available in C to do this? If yes, please provide a sample code snippet.

    Pseudo code:
    Timer.Start()
    foo();
    Timer.Stop()
    Display time elapsed in execution of foo()

    Environment details: using the gcc 3.4 compiler on a RHEL machine
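
    On POSIX systems, clock_gettime with CLOCK_MONOTONIC reports elapsed time in nanosecond units (the actual resolution is hardware- and kernel-dependent, so treat 1 ns precision as optimistic). A minimal sketch, assuming Linux/RHEL; on older glibc, link with -lrt:

        #include <stdio.h>
        #include <time.h>

        static void foo(void) { /* stand-in for the function under test */ }

        int main(void)
        {
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);
            foo();
            clock_gettime(CLOCK_MONOTONIC, &end);

            long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                         + (end.tv_nsec - start.tv_nsec);
            printf("foo() took %lld ns\n", ns);
            return 0;
        }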

    Read the article

  • R: How to write out a data.frame so that I can paste it into SO for others to read?

    - by John
    I have a large data.frame displaying some weird properties when plotted, and I'd like to ask a question about it on Stack Overflow. To do that, I'd like to write the data.frame out in a form that I can paste into SO so that somebody else can easily run it and get it back into a data.frame object again. Is there an easy way to accomplish this? Also, if it is really long, should I use a pastebin instead of pasting it here directly?

    Read the article

  • Mass data storage with SQL Server

    - by Leo
    We need to manage 10,000 GPS devices. Each GPS device uploads a GPS record every 30 seconds, and these records need to be stored in the database (MS SQL Server 2005).

    Each GPS device's daily record count is: 24 * 60 * 2 = 2,880
    The daily record count for 10,000 GPS devices is: 10,000 * 2,880 = 28,800,000
    Each GPS record is approximately 160 bytes, so the amount of data per day is: 28,800,000 * 160 bytes = approx. 4.29 GB

    We need to hold at least 3 months of GPS data in the database. My questions are:
    1. Can SQL Server 2005 support such a large amount of data?
    2. How should the data tables be planned? (All GPS records in one table? A table per day? A table per GPS device?)

    The GPS record fields are: GPSID varchar(21), RecvTime datetime, GPSTime datetime, IsValid bit, IsNavi bit, Lng float, Lat float, Alt float, Spd smallint, Head smallint, PulseValue bigint, Oil float, TSW1 bigint, TSW1Mask bigint, TSW2 bigint, TSW2Mask bigint, BSW bigint, StateText varchar(200), PosText varchar(200), UploadType tinyint

    Read the article

  • perf events documentation

    - by Thanatos
    I've searched for an exhaustive explanation of the meaning of each event monitored by the perf stat command. I found a tutorial which explains quite well how to use the different features of the perf tool, but it doesn't explain the meaning of the many events that can be observed (and there are a lot!). Does anyone know of reasonably simple and complete documentation of the events listed by the perf list command? In particular, I'm interested in finding out the percentage of CPU used by an application I wrote. Can I measure it directly through cpu-clock or task-clock? What is the meaning of these two events? Thanks in advance.

    Read the article

  • C# or Windows equivalent of OS X's Core Data?

    - by Nektarios
    I'm late to the party and have only just now started using Core Data in OS X / Cocoa - it's incredible and is really changing the way I look at things. Is there an equivalent technology in C# or the modern Windows frameworks? I.e., managed data types where you get saving, data management, deleting, and searching all for free? Also wondering if there's anything like this on Linux.

    Read the article

  • What is the most efficient way to use Core Data?

    - by Eric
    I'm developing an iPad application using Core Data, and was hoping someone could clarify something about Core Data. Right now, I populate my table by making a fetch request for all of my data in viewDidLoad. I'd rather make individual fetch requests in my tableView:cellForRowAtIndexPath:. Can anyone tell me which is more efficient, and why? In other words, is it much less efficient to make lots of small requests as opposed to one big request?

    Read the article

  • Algorithm performance

    - by william007
    I am testing an algorithm with different parameters on a computer, and I notice that the performance fluctuates from run to run. Say I run it the first time and get 20 ms, the second time 5 ms, the third time 4 ms, but the algorithm should do the same work all three times. I am using the Stopwatch class from the C# library to count the time; is there a better way to measure the performance without being subject to those fluctuations?
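
    The 20 ms first run is typically paying one-off costs: JIT compilation in .NET, cold CPU caches, lazy initialization. The usual remedy, in any language, is a warm-up run followed by many timed repetitions, reporting the minimum or median rather than a single sample. With C#'s Stopwatch the structure is the same as in this C sketch of the pattern (all names here are placeholders):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define RUNS 31

        static void algorithm(void) { /* stand-in for the code being measured */ }

        /* Time one invocation in nanoseconds. */
        static long long elapsed_ns(void)
        {
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            algorithm();
            clock_gettime(CLOCK_MONOTONIC, &t1);
            return (t1.tv_sec - t0.tv_sec) * 1000000000LL
                 + (t1.tv_nsec - t0.tv_nsec);
        }

        static int cmp(const void *a, const void *b)
        {
            long long x = *(const long long *)a, y = *(const long long *)b;
            return (x > y) - (x < y);
        }

        int main(void)
        {
            long long samples[RUNS];
            algorithm();                    /* warm-up run, not timed */
            for (int i = 0; i < RUNS; i++)
                samples[i] = elapsed_ns();
            qsort(samples, RUNS, sizeof samples[0], cmp);
            printf("min %lld ns, median %lld ns\n",
                   samples[0], samples[RUNS / 2]);
            return 0;
        }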

    Read the article

  • What is the most efficient method to find x contiguous values of y in an array?

    - by Alec
    Running my app through callgrind revealed that this line dwarfed everything else by a factor of about 10,000. I'm probably going to redesign around it, but it got me wondering: is there a better way to do it? Here's what I'm doing at the moment: int i = 1; while ( ( (*(buffer++) == 0xffffffff && ++i) || (i = 1) ) && i < desiredLength + 1 && buffer < bufferEnd ); It's looking for the offset of the first chunk of desiredLength 0xffffffff values in a 32-bit unsigned int array. It's significantly faster than any implementation I could come up with involving an inner loop, but it's still too damn slow.
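
    One classic improvement is to probe only every desiredLength-th word: any run of the desired length must contain one of those probe positions, so stretches without a match are skipped instead of scanned word by word. A hedged sketch of the idea; the function name and signature are invented here, not taken from the original code:

        #include <stddef.h>
        #include <stdint.h>

        /* Index of the first run of runLen consecutive words equal to value,
           or -1 if there is none. Only every runLen-th word is probed, so
           regions without a match cost O(len / runLen) probes. */
        ptrdiff_t find_run(const uint32_t *buf, size_t len,
                           uint32_t value, size_t runLen)
        {
            if (runLen == 0 || len < runLen)
                return -1;

            size_t i = runLen - 1;       /* earliest index a run could end on */
            while (i < len) {
                if (buf[i] != value) {   /* no run of runLen can contain i */
                    i += runLen;
                    continue;
                }
                size_t lo = i, hi = i;   /* expand the match in both directions */
                while (lo > 0 && buf[lo - 1] == value)
                    lo--;
                while (hi + 1 < len && buf[hi + 1] == value)
                    hi++;
                if (hi - lo + 1 >= runLen)
                    return (ptrdiff_t)lo;
                i = hi + runLen;         /* buf[hi+1] != value: resume past it */
            }
            return -1;
        }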

    Read the article

  • What's a very easy C++ profiler (VC++)?

    - by John
    I've used a few profilers in the past and never found them particularly easy. Maybe I picked bad ones, maybe I didn't really know what I was expecting! But I'd like to know if there are any 'standard' profilers which simply drop in and work. I don't believe I need massively fine-detailed reports, just to pick up the major black spots. Ease of use is more important to me at this point. We're using VC++ 2008 (I run Standard Edition personally). I don't suppose there are any tools in the IDE for this; I can't see any from looking at the main menus.

    Read the article

  • How can I figure out where all these extra sqlite3 selects are being generated in my Rails app?

    - by radixhound
    I'm trying to figure out where a whole pile of extra queries are being generated by my Rails app, and I need some ideas on how to tackle it. Or, if someone can give me some hints, I'd be grateful. I get these:

    SQL (1.0ms) SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'
    SQL (0.8ms) SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'
    SQL (0.8ms) SELECT name FROM sqlite_master WHERE type = 'table' AND NOT name = 'sqlite_sequence'

    repeated over and over on every request to the DB (as many as 70 times for a single request). I tried installing a plugin that traced the source of the queries, but it really didn't help at all. I'm using the hobofields gem; I don't know whether that is what's doing it, but I'm somewhat wedded to it at the moment. Any tips on hunting down the source of these extra queries?

    Read the article

  • What happens if a user jumps over 10 versions before updating, and every version had a new data model?

    - by dontWatchMyProfile
    Example: a user installs app v1.0 and adds data. Then the dev submits 10 updates in 10 weeks. After 11 weeks, the user wants v11.0 and grabs a copy from the App Store. Assuming that the app has 11 .xcdatamodel versions inside, where 11.xcdatamodel is the current one, what would happen now, since the user's persistent store is ages old? Would the migration happen 10 times, step by step through every migration iteration? Or does the actual migration of data (let's assume gigabytes of data) happen exactly once, after Core Data (or the persistent store coordinator) has figured out precisely what to do to go from v1.0 to v11.0?

    Read the article

  • In synchronous query calls, why does one query cause the other query to run slower?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it. I was involved in the optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them some value mappings need to be performed first, using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run.

    I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls and almost 20% on SELECT calls (the rest was distributed among other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to drop to around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else) - each SELECT took longer this time!

    Everything was running synchronously; there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special about them; they are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (an application profiler, not SQL Profiler) reported the change in method call times, and by removing the INSERT statements the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (It can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and that is far from affecting the cache this much.) I should note that the bottleneck was SQL Server, which was using most of the CPU time.

    Read the article
