Search Results

Search found 1973 results on 79 pages for 'orm profiler'.

  • Alternatives to Pessimistic Locking in Cluster Applications

    - by amphibient
    I am researching alternatives to database-level pessimistic locking to achieve transaction isolation in a cluster of Java applications running against the same database. Synchronizing concurrent access in the application tier is clearly not a solution in the present configuration, because the same database transaction can be invoked from multiple JVMs concurrently. Currently, we are subject to occasional race conditions which, due to the optimistic locking we have in place via Hibernate, cause a StaleObjectStateException and data loss. I have a moderately large transaction within the scope of my refactoring project. Let's describe it as updating one top-level table row and then making various related inserts and/or updates to several of its child entities. I would like to ensure exclusive access to the top-level table row and all of the children to be affected, but mostly for performance reasons I would like to stay away from pessimistic locking at the database level. We use Hibernate for ORM. Does it make sense to route this method through a single (perhaps synchronous) message queue application to ensure serialized access, as opposed to each cluster node using its own, which is a clear race-condition hazard? I am mentioning this approach even though I am not confident in it, because both the top-level table row and its children could also be updated from other system calls, not just the mentioned transaction. So I am seeking to design a solution where the top-level table row and its children will all somehow be pseudo-locked (exclusive transaction isolation), but at the application and not the database level. I am open to ideas and suggestions; I understand this is not a very cut-and-dried challenge.
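
    For context, the failure mode described above is commonly handled with a retry loop around the optimistic conflict: reload fresh state and reapply the changes when another node wins the race. A minimal sketch, not a fix for the serialization requirement itself; Parent and applyChanges are hypothetical names, the Hibernate calls are standard:

        import org.hibernate.Session;
        import org.hibernate.SessionFactory;
        import org.hibernate.StaleObjectStateException;
        import org.hibernate.Transaction;

        // retry the whole transaction when the version check fails
        void updateWithRetry(SessionFactory factory, long parentId, int maxAttempts) {
            for (int attempt = 1; ; attempt++) {
                Session session = factory.openSession();
                Transaction tx = session.beginTransaction();
                try {
                    Parent parent = (Parent) session.get(Parent.class, parentId); // hypothetical entity
                    applyChanges(parent);  // hypothetical: update the row and its children
                    tx.commit();
                    return;
                } catch (StaleObjectStateException e) {
                    tx.rollback();
                    if (attempt >= maxAttempts) throw e; // give up after N conflicts
                    // otherwise loop: reload fresh state and try again
                } finally {
                    session.close();
                }
            }
        }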

  • Database and query to store and retrieve friend list [migrated]

    - by amr Kamboj
    I am developing a module in a website to save and retrieve a friend list. I am using Zend Framework, and for DB handling I am using Doctrine (ORM). There are two models: 1) users, which stores all the users; 2) my_friends, which stores the friend list (a reference table with an M:M relation on user). The structure of my_friends is the following:

        id | user_id | friend_id | approved
        10 |   20    |    25     |    1
        10 |   21    |    25     |    1
        10 |   22    |    30     |    1
        10 |   25    |    30     |    1

    The Doctrine query to retrieve the friend list is the following:

        $friends = Doctrine_Query::create()->from('my_friends as mf')
            ->leftJoin('mf.users as friend')
            ->where("mf.user_id = 25")
            ->andWhere("mf.approved = 1");

    Suppose I am viewing user no. 25. With this query I am only getting user no. 30, whereas user no. 25 is also an approved friend of users no. 20 and 21. Please guide me: what should the query be to find all friends, and is there any need to change the DB structure?
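
    One hedged reading of the problem: a friendship row can hold 25 in either column, so the query has to match both directions. A sketch under that assumption, using Doctrine 1's positional parameters:

        // match rows where the viewed user appears on either side of the relation
        $userId = 25;
        $friends = Doctrine_Query::create()
            ->from('my_friends as mf')
            ->leftJoin('mf.users as friend')
            ->where('(mf.user_id = ? OR mf.friend_id = ?)', array($userId, $userId))
            ->andWhere('mf.approved = 1');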

  • Java creation of new set too slow

    - by Mgccl
    I have a program with a recursive function similar to this:

        void lambda(HashSet<Integer> s) {
            for (int i = 0; i < w; i++) {
                HashSet<Integer> p = (HashSet<Integer>) s.clone();
                p.addAll(get_next_set());
                lambda(p);
            }
        }

    What I'm doing is unioning every set with the set s, and running lambda on each of the unions. I ran a profiler and found that the s.clone() operation took 100% of the time of my code. Is there any way to speed this up considerably?
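
    One common way to avoid the clone entirely is backtracking: mutate a single shared set and undo the additions after the recursive call returns. A hedged sketch reusing the names above (w and get_next_set() are from the original pseudocode):

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;

        // no clone: add, recurse, then remove exactly what was added
        void lambda(HashSet<Integer> s) {
            for (int i = 0; i < w; i++) {
                List<Integer> added = new ArrayList<Integer>();
                for (Integer x : get_next_set()) {
                    if (s.add(x)) added.add(x); // track only genuinely new elements
                }
                lambda(s);
                s.removeAll(added);             // restore s for the next iteration
            }
        }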

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup doesn't create all the tables and procedures necessary to run it. The software lacks a logging system itself, and it made me wonder: could I trace and monitor failing SQL statements from MySQL? This serves more than just solving my issue with Sitefinity: I often wonder what's sent to the MySQL server without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know a profiler, tracer or monitoring tool, commercial or free, that can show me this information?
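
    Not a full answer, but worth noting: MySQL itself can capture every incoming statement (including ones that subsequently fail) via the general query log, and failed connections are counted in a status variable and logged to the error log. A hedged sketch, assuming MySQL 5.1+ where the log can be toggled at runtime:

        -- turn on the general query log at runtime (path is an example)
        SET GLOBAL general_log_file = '/var/log/mysql/general.log';
        SET GLOBAL general_log = 'ON';

        -- failed/aborted connection attempts show up in this counter
        SHOW GLOBAL STATUS LIKE 'Aborted_connects';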

  • Problem calling stored procedure with a fixed length binary parameter using Entity Framework

    - by Dave
    I have a problem calling stored procedures with a fixed-length binary parameter using Entity Framework. The stored procedure ends up being called with 8000 bytes of data no matter what size byte array I use to call the function import. To give an example, this is the code I am using:

        byte[] cookie = new byte[32];
        byte[] data = new byte[2];
        entities.Insert("param1", "param2", cookie, data);

    The parameters are nvarchar(50), nvarchar(50), binary(32), varbinary(2000). When I run the code through SQL Profiler, I get this result:

        exec [dbo].[Insert] @param1=N'param1',@param2=N'param2',
            @cookie=0x00000000000000000000000000000000 [SNIP because of 16000 zeros] ,
            @data=0x0000

    All parameters went through OK other than the binary(32) cookie. The varbinary(2000) seemed to work fine and the correct length was maintained. Is there a way to prevent the extra data being sent to SQL Server? This seems like a big waste of network resources.
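
    For comparison, plain ADO.NET lets the parameter size be pinned explicitly, which should keep the payload at 32 bytes. A hedged sketch, with the procedure and parameter names taken from the question:

        using System.Data;
        using System.Data.SqlClient;

        using (var conn = new SqlConnection(connString))
        using (var cmd = new SqlCommand("dbo.Insert", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("@param1", SqlDbType.NVarChar, 50).Value = "param1";
            cmd.Parameters.Add("@param2", SqlDbType.NVarChar, 50).Value = "param2";
            cmd.Parameters.Add("@cookie", SqlDbType.Binary, 32).Value = cookie;   // size pinned to 32
            cmd.Parameters.Add("@data",   SqlDbType.VarBinary, 2000).Value = data;
            conn.Open();
            cmd.ExecuteNonQuery();
        }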

  • Using Xcode and Instruments to improve iPhone app performance

    - by MrDatabase
    I've been experimenting with Instruments off and on for a while, and I still can't do the following with any sensible results: determine or estimate the average runtime of a function that's called many times. For example, if I'm driving my game loop at 60 Hz with a CADisplayLink, I'd like to see how long the loop takes to run on average... 10 ms? 30 ms? etc. I've come close with the "CPU activity" instrument, but the results are inconsistent or don't make sense. The Time Profiler seems promising, but all I can get is "% of runtime"... and I'd like an actual runtime.
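
    As a hedged fallback when the instruments won't cooperate, the loop can be timed by hand around the CADisplayLink callback; a sketch (the method names are hypothetical, CACurrentMediaTime comes from QuartzCore):

        #import <QuartzCore/QuartzCore.h>

        - (void)displayLinkFired:(CADisplayLink *)link  // hypothetical callback
        {
            CFTimeInterval start = CACurrentMediaTime();
            [self runGameLoop];                          // hypothetical game loop
            CFTimeInterval elapsedMs = (CACurrentMediaTime() - start) * 1000.0;
            NSLog(@"gameLoop: %.2f ms", elapsedMs);      // average these over many frames
        }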

  • Is there a canonical source supporting "all-surrogates"?

    - by user61852
    Background: the "all-PKs-must-be-surrogates" approach is not present in Codd's relational model or in any SQL standard (ANSI, ISO or other), and the canonical books seem to avoid this restriction too. Oracle's own data dictionary schema uses natural keys in some tables and surrogate keys in other tables. I mention this because these people must know a thing or two about RDBMS design. PPDM (Professional Petroleum Data Management Association) recommends the same thing the canonical books do: use surrogate keys as primary keys when:

    - There are no natural or business keys
    - Natural or business keys are bad (they change often)
    - The value of the natural or business key is not known at the time of inserting the record
    - Multicolumn natural keys (usually several FKs) exceed three columns, which makes joins too verbose

    Also, I have not found a canonical source that says natural keys need to be immutable. All I find is that they need to be very stable, i.e. they need to change only on very rare occasions, if ever. I mention PPDM because these people must know a thing or two about RDBMS design too. The origins of the "all-surrogates" approach seem to come from recommendations of some ORM frameworks. It's true that the approach allows for rapid database modeling by not having to do much business analysis, but at the expense of maintainability and readability of the SQL code. Much provision is made for something that may or may not happen in the future (the natural PK changes, so we will have to use the RDBMS's cascading-update functionality) at the expense of day-to-day tasks like having to join more tables in every query and having to write code for importing data between databases, an otherwise very straightforward procedure (due to the need to avoid PK collisions and to create stage/equivalence tables beforehand). Another argument is that indexes based on integers are faster, but that has to be supported with benchmarks; obviously long, varying varchars are not good for PKs, but indexes based on short, fixed-length varchars are almost as fast as integers.

    The questions: is there any canonical source that supports the "all-PKs-must-be-surrogates" approach? Has Codd's relational model been superseded by a newer relational model?
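
    As an aside, to make the two styles being debated concrete, a hedged illustration with a hypothetical table (not taken from any of the sources above):

        -- natural-key style: the business identifier is the primary key
        CREATE TABLE country (
            iso_code CHAR(2) PRIMARY KEY,
            name     VARCHAR(100) NOT NULL
        );

        -- all-surrogates style for the same table: a meaningless id becomes
        -- the PK and the business identifier is demoted to a UNIQUE constraint
        CREATE TABLE country_surrogate (
            id       INTEGER PRIMARY KEY,
            iso_code CHAR(2) NOT NULL UNIQUE,
            name     VARCHAR(100) NOT NULL
        );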

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as:

        {
            "type": "Bike",
            "color": "Red",
            "commentIds": [3193, 3194],
            "attachmentIds": [123, 413]
        }

    My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a 'service fetching helper' seems wrong, and this should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would create tighter coupling, rather than leaving it up to the right service object.
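
    A hedged sketch of the constructor-injection option, using the class names from the question (findByIds is a hypothetical helper, not an existing API):

        class Service_LineItem_Bike
        {
            private $comments;
            private $uploads;

            // collaborators are injected, never looked up from a locator
            public function __construct(Service_Comment $comments,
                                        Service_FileUpload $uploads)
            {
                $this->comments = $comments;
                $this->uploads = $uploads;
            }

            public function store(array $json)
            {
                $comments    = $this->comments->findByIds($json['commentIds']);   // hypothetical
                $attachments = $this->uploads->findByIds($json['attachmentIds']); // hypothetical
                // ... build and persist the Bike line item with its associations
            }
        }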

  • I need an approach to the problem of preventing the insertion of duplicate records into the database

    - by Maurice
    Apologies if this question is asked on the incorrect "stack". A webservice that I call returns a list of data. The data from the webservice is updated periodically, so a call to the webservice made now could return the same data as a call made an hour later. Also, the data is returned based on a start and end date. We have multiple users that can run the webservice search, and duplicate data is very likely to be returned (especially for historical data). However, I don't want to insert this duplicate data in the database. I've created a DB table in which the data is stored (the most important columns are):

        Id         int  autoincrement PK
        Date       date not null    -- the date to which the data set belongs
        LastUpdate date not null    -- the date the data set was last updated
        UserName   varchar(50)      -- the name of the user doing the search

    I use SQL Server 2008 Express with C# 4.0 and Visual Studio 2010. Entity Framework is used as the ORM. If stored procedures could be avoided in the proposed solution, then that would be a plus. Another way of interpreting what I'm asking a solution for is as follows: I have a million unique records in my table. A user does a new search. The search results from the user contain around 300k rows that are already in the DB. An efficient solution to finding and inserting only the unique records is needed.
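
    One hedged set-based approach: bulk-load the incoming rows into a staging table, then insert only the rows not already present. The table and key columns below are assumptions; substitute whatever combination actually identifies a data set:

        -- stage the webservice results, then keep only the new ones
        INSERT INTO DataSets (Date, LastUpdate, UserName)   -- hypothetical target table
        SELECT s.Date, s.LastUpdate, s.UserName
        FROM Staging_DataSets AS s                          -- hypothetical staging table
        WHERE NOT EXISTS (
            SELECT 1
            FROM DataSets AS d
            WHERE d.Date = s.Date
              AND d.UserName = s.UserName                   -- assumed uniqueness key
        );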

  • Leak caused by fread

    - by Jack
    I'm profiling code of a game I wrote, and I'm wondering how it is possible that the following snippet causes a heap increase of 4 KB (I'm profiling with the Heapshot analysis of Xcode) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);    // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);
            return version;
        }

    According to the profiler, the highlighted line allocates 4.00 KB of memory with a malloc every time the function is called, memory which is never released. This seems to happen with other calls to fread around the code, but this was the most striking one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling it on an iPhone, and it's compiled as release (-O2).
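
    A hedged observation: a lazily allocated 4 KB block on the first read is consistent with stdio's per-stream buffer, which is normally freed only at fclose(). One way to test that theory:

        #include <stdio.h>

        /* disable stdio buffering on this stream, after fopen() but before
           the first read; if the 4 KB allocation disappears, it was the
           internal buffer, not a genuine leak */
        setvbuf(file, NULL, _IONBF, 0);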

  • Raw SQL sent to SQL Server from .NET on stored procedure call

    - by Jeff Meatball Yang
    Is there a way to get the raw text that is sent to SQL Server, as seen in SQL Profiler, from the ADO.NET call?

        using (SqlConnection conn = new SqlConnection(connString))
        {
            SqlCommand cmd = conn.CreateCommand();
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandText = "GetSomeData";
            cmd.Parameters.AddWithValue("@id", someId);
            cmd.Parameters.AddWithValue("@someOtherParam", "hello");
            conn.Open();
            SqlDataReader dr = cmd.ExecuteReader();

            // this sends up the call: exec GetSomeData @id=24, @someOtherParam='hello'
            // how can I capture that and write it to debug?
            Debug.Write("exec GetSomeData @id=24, @someOtherParam='hello'");
        }
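
    ADO.NET does not expose the final wire text, but a close approximation can be rebuilt from the command object itself; a hedged sketch:

        using System.Data.SqlClient;
        using System.Linq;

        // approximate the profiler-style text from the command's own parameters
        static string ApproximateSql(SqlCommand cmd)
        {
            var args = cmd.Parameters.Cast<SqlParameter>()
                .Select(p => p.ParameterName + "=" +
                    (p.Value is string ? "'" + p.Value + "'" : p.Value));
            return "exec " + cmd.CommandText + " " + string.Join(", ", args);
        }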

  • How best to debug Delphi using the IDE and/or FOSS?

    - by LeonixSolutions
    I am currently using Delphi 7 and am unsure whether to upgrade. I see the following means of debugging and wonder if there are others, or which FOSS tools a small company can use (we don't do much Windows programming):

    1. Debug in the IDE, by setting breakpoints, using watches, etc.
    2. Debug in the IDE, by using the Event Log. I got some good info from this page and tweaked it to add timestamps and indent/outdent on procedure call/return, so that I can see nested calls more quickly. Does anyone know of anything better?
    3. Use a profiler.
    4. Any others, such as MadExcept, etc.?

  • Strategies for managing use of types in Python

    - by dave
    I'm a long-time programmer in C#, but have been coding in Python for the past year. One of the big hurdles for me was the lack of type definitions for variables and parameters. Whereas I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see the edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type, but this goes against the whole duck-typing thing. On some methods, I'll document the expected type of parameters (e.g. list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python? Edit: an example of the parameter-naming issue: in our code base we have a task object (an ORM object) and a task_obj object (a higher-level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task, or a task_obj, or some other construct such as a dictionary of task properties - it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.
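
    A hedged sketch of the docstring-plus-assert convention mentioned above, using the task/task_obj example (the Task and User classes are the hypothetical ORM objects from the question):

        def assign(task, user):
            """Assign a task to a user.

            :param task: a Task ORM object (not a task_obj wrapper,
                         not a dict of task properties)
            :param user: a User object
            """
            assert isinstance(task, Task), \
                "expected a Task ORM object, got %r" % type(task)
            task.owner = user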

  • Why is display:inline killing IE 8.0 performance?

    - by monstermensch
    I have an image gallery based on this jQuery plugin: http://jqueryfordesigners.com/demo/slider-gallery.html This works really well in Firefox, Chrome and even IE 7.0, but when I try it with more than 50 images in IE 8.0, the performance is incredibly slow. Just hovering over a thumbnail brings the CPU load to 100%. At first I thought it was a JavaScript problem, so I used the IE profiler, but the results were normal. Next I checked the CSS and finally found the cause:

        .sliderGallery ul li {
            display: inline;
        }

    This gets the thumbnails to align horizontally. If I change it to display: block, performance is fine and the scroller still works, but obviously it looks funny, because the thumbs are aligned vertically. My questions: why does IE 8 have this problem with many display:inline elements, and what can I do to solve it? I'll gladly provide more information if necessary.
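
    A hedged workaround worth trying: floats (or inline-block) also produce a horizontal layout without making every list item participate in IE8's inline layout pass:

        /* option 1: float the items */
        .sliderGallery ul li {
            float: left;
        }

        /* option 2: inline-block (IE8 supports it on block elements) */
        .sliderGallery ul li {
            display: inline-block;
        }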

  • Python profiling in Windows: how do you ignore built-in functions?

    - by Tim McJilton
    I have not been able to find this anywhere online. I was looking to find out, using a profiler, how to better optimize my code; when sorting by which functions use up the most time cumulatively, things like str(), print, and other similarly widely used functions eat up much of the profile. What is the best way to profile a Python program so that only user-defined functions show up, so I can see what areas of my code I can optimize? I hope that makes sense; any light you can shed on this subject would be very appreciated.
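
    A hedged sketch with the standard library's cProfile/pstats: the report can be restricted by a path pattern, so only functions defined under your own source tree show up ('main()' and 'myproject' are placeholders):

        import cProfile
        import pstats

        cProfile.run('main()', 'profile.out')  # 'main()' is a placeholder entry point

        stats = pstats.Stats('profile.out')
        stats.sort_stats('cumulative')
        stats.print_stats('myproject')         # regex filter: only paths matching this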

  • Performance testing on .xap files...

    - by Radhi
    Hi all, I want to know whether I can use a profiler to do performance testing of .xap files. If you have any articles on this topic, please share them, and if there are any other tools available to do this, please tell me. In my project we have to check why, when we log into the Silverlight 4.0 application, the screen takes 5 seconds to load, so I have to check which method is taking the time. In our project there are services which call other services too, and we have used CAL, so I need to identify the bottleneck. Please help.

  • How to handle monetary values in PHP and MySQL?

    - by Songo
    I've inherited a huge pile of legacy code written in PHP on top of a MySQL database. The thing I noticed is that the application uses doubles for storage and manipulation of data. Now I've come across numerous posts mentioning how doubles are not suited for monetary operations because of rounding errors. However, I have yet to come across a complete solution for how monetary values should be handled in PHP code and stored in a MySQL database. Is there a best practice when it comes to handling money specifically in PHP? Things I'm looking for are:

    - How should the data be stored in the database? Column type? Size?
    - How should the data be handled in normal addition, subtraction, multiplication or division?
    - When should I round the values? How much rounding is acceptable, if any?
    - Is there a difference between handling large monetary values and low ones?

    Note: a VERY simplified sample of how I might encounter money values in everyday code:

        $a = $_POST['price_in_dollars'];    // (ex: 25.06) will be read as a string; should it be cast to double?
        $b = $_POST['discount_rate'];       // (ex: 0.35) value will always be less than 1
        $valueToBeStored = $a * $b;         // --> any hint here is welcomed
        $valueFromDatabase = $row['price']; // --> price column in database could be double, decimal, ...etc.
        $priceToPrint = $valueFromDatabase * 0.25; // again, cast needed or not?

    I hope you use this sample code as a means to bring out more use cases, and not take it literally, of course. Bonus question: if I'm to use an ORM such as Doctrine or Propel, how different will it be to use money in my code?
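
    Not an authoritative answer, but a sketch of one widely used convention: exact fixed-point storage in MySQL, string-based bcmath arithmetic in PHP, rounding only at the boundaries (the column precision and rounding rule are examples, not a standard):

        <?php
        // MySQL side: store as exact fixed point, e.g. price DECIMAL(13,4) NOT NULL
        // PHP side: keep amounts as strings and do arithmetic with bcmath, not floats

        $price = $_POST['price_in_dollars'];      // e.g. '25.06', left as a string
        $rate  = $_POST['discount_rate'];         // e.g. '0.35'

        $discount = bcmul($price, $rate, 4);      // '8.7710': keep extra scale mid-calculation
        $rounded  = bcadd($discount, '0.005', 2); // '8.77': crude half-up rounding to cents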

  • Memory leak with ContextMenuStrip

    - by Dave
    I'm creating a lot of custom controls and adding them to a FlowLayoutPanel. There is also a ContextMenuStrip created and populated at design time. Every time a control is added to the panel it has its ContextMenuStrip property assigned to this menu, so that all controls "share" the same menu. But I noticed when the controls are removed from the panel and disposed of, the memory in use in Task Manager doesn't drop. It rises around 50kB every time a control is created and added to the layout panel. I downloaded the trial of .NET Memory Profiler and it showed there were references to the menu strip hanging around after the controls were disposed. I changed the code to explicitly set the ContextMenuStrip property to null before disposing of the control, and yep, the memory is now released. Why is this? Shouldn't the GC clean up this type of thing?
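
    For readers hitting the same thing, a hedged sketch of the detach-before-dispose pattern described above (the control and panel names are placeholders):

        // detach the shared menu before disposing, so the ContextMenuStrip's
        // internal bookkeeping can't keep the control reachable
        private void RemoveGalleryControl(Control ctrl)
        {
            flowLayoutPanel1.Controls.Remove(ctrl);
            ctrl.ContextMenuStrip = null;   // the step that releases the reference
            ctrl.Dispose();
        }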

  • Need help with a DB query on SQL Server 2005

    - by Avinash
    We're seeing strange behavior when running two versions of a query on SQL Server 2005.

    Version A:

        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = 1234
        ORDER BY name ASC

    Version B:

        DECLARE @Id AS INT;
        SET @Id = 1234;
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @Id
        ORDER BY name ASC

    Both queries return 1000 rows; version A takes on average 15 s, version B on average 4 s. Could anyone help us understand the difference in execution times of these two versions of SQL? If we invoke this query via named parameters using NHibernate, we see the following query via SQL Server Profiler:

        EXEC sp_executesql N'SELECT otherattributes.* FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = @id ORDER BY name ASC', N'@id INT', @id=1234;

    ...and this tends to perform as badly as version A. Thanks in advance.
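
    Not a diagnosis, but a cheap experiment consistent with a cached-plan/parameter-sniffing difference: force a fresh plan for the slow form and compare timings:

        -- hedged experiment: recompile per execution to rule out a cached-plan mismatch
        SELECT otherattributes.*
        FROM listcontacts
        JOIN otherattributes ON listcontacts.contactId = otherattributes.contactId
        WHERE listcontacts.listid = 1234
        ORDER BY name ASC
        OPTION (RECOMPILE);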

  • Calling a stored procedure from VB.NET: timeout error

    - by Jim
    When calling a stored procedure from VB.NET, is there a default SQL timeout if no timeout is specified in the connection string? I am unsure whether a CommandTimeout is specified anywhere, but I am going through all the possibilities. For example, if there are no results after 30 seconds (or more), it throws:

        System.Data.SqlClient.SqlException: Timeout expired. The timeout period
        elapsed prior to completion of the operation or the server is not responding.

    SQL Profiler says that the script runs and ends in 30 seconds, when the program times out. The script runs without error in about 1 minute 45 seconds by itself in SQL Server.
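
    For what it's worth, the 30-second figure matches the default of SqlCommand.CommandTimeout, which is a per-command property, not a connection-string setting. A hedged sketch of raising it (the procedure name is a placeholder):

        Imports System.Data
        Imports System.Data.SqlClient

        Sub RunProcedure(connString As String)
            Using conn As New SqlConnection(connString)
                Using cmd As New SqlCommand("dbo.MyProcedure", conn) ' placeholder name
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.CommandTimeout = 180 ' seconds; the default is 30
                    conn.Open()
                    cmd.ExecuteNonQuery()
                End Using
            End Using
        End Sub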

  • Updating Many-to-Many relationship with LinqToSQL

    - by Noffie
    If I had, for example, a many-to-many mapping table called "RolesToUsers" between a Users and a Roles table, here is how I do it:

        // DataContext is db, usr is a User entity.
        // newUserRolesMappings is a collection with the desired new mappings, probably
        // derived by looking at selections in a checkbox list of Roles on a User Edit page.
        db.RolesToUsers.DeleteAllOnSubmit(usr.RolesToUsers);
        usr.RolesToUsers.Clear();
        usr.RolesToUsers.AddRange(newUserRolesMappings);

    I used the SQL Profiler once, and this seems to generate very intelligent SQL: it will only drop the rows which are no longer in the mapping relationship, and only add rows which did not already exist in the relationship. It doesn't blindly do a complete clearing and reconstruction of the relationship, as I thought it would. The internet is surprisingly quiet on the subject, and the query "LinqToSQL many-to-many" mostly just turns up articles about how the LinqToSQL data mapper doesn't "support" it very well. How does everyone else update many-to-many with LinqToSQL?

  • Magento cache wrong read permissions?

    - by Lucasmus
    There seems to be a problem in Magento's reading of the var/cache directory. I've disabled full-page caching for testing. When I execute the bash command chmod -R 777 var/cache/ before loading the page, it loads ~3 seconds quicker (the time it takes before 'mage::dispatch::routers_match' is reached in the Profiler is reduced from ~4 seconds to ~1 second). This speed-up remains for a while but then is lost until the chmod is called again. I'm guessing this has to do with write permissions somehow? The odd thing is, the cache contents are, afaik, owned by the process that is executing Magento (the web user). Does anyone have any clues what the problem could be, or what could be changed to prevent this?
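
    A hedged troubleshooting sketch: rather than repeating the blanket chmod, check who actually owns the cache entries and give the web user durable ownership (the www-data user/group name is an example):

        # check ownership of the cache entries (files created by cron jobs are
        # a common culprit, since cron often runs as a different user)
        ls -l var/cache

        # hand the whole tree to the web user and make permissions stick
        chown -R www-data:www-data var/cache
        chmod -R u+rwX,g+rwX var/cache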

  • Two components offering the same functionality, required by different dependencies

    - by kander
    I'm building an application in PHP, using Zend Framework 1 and Doctrine2 as the ORM layer. All is going well. Now, I happened to notice that both ZF1 and Doctrine2 come with, and rely on, their own caching implementation. I've evaluated both, and while each has its own pros and cons, neither stands out as superior to the other for my simple needs. Both libraries also seem to be written against their respective interfaces, not their implementations. The reason I feel this is an issue is that during the bootstrapping of my application I have to configure two caching drivers, each with its own syntax. A mismatch is easily created this way, and it feels inefficient to set up two connections to the caching backend because of this. I'm trying to determine the best way forward, and would welcome any insights you may be able to offer. What I've thought up so far are four options:

    1. Do nothing; accept that two classes offering caching functionality are present.
    2. Create a Facade class to stick Zend's interface onto Doctrine's caching implementation.
    3. Option 2, the other way around: create a Facade to map Doctrine's interface onto a Zend Framework backend (see the sketch below).
    4. Use multiple-interface inheritance to create one interface to rule them all, and pray that there aren't any overlaps (i.e. if both have a "save" method, they'll need to accept params in the same order, due to PHP's lack of proper polymorphism).

    Which option is best, or is there a "none of the above" variant that I'm not aware of?
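
    A hedged sketch of option 3: satisfy Doctrine2's cache contract by delegating to a Zend_Cache_Core backend. The method names follow Doctrine\Common\Cache\CacheProvider in the 2.x line, which is an assumption worth checking against the installed version:

        class ZendCacheAdapter extends \Doctrine\Common\Cache\CacheProvider
        {
            private $zendCache;

            public function __construct(Zend_Cache_Core $zendCache)
            {
                $this->zendCache = $zendCache;
            }

            protected function doFetch($id)
            {
                return $this->zendCache->load($id);
            }

            protected function doContains($id)
            {
                return $this->zendCache->test($id) !== false;
            }

            protected function doSave($id, $data, $lifeTime = 0)
            {
                // Zend expects false for "use the configured default lifetime"
                return $this->zendCache->save($data, $id, array(), $lifeTime ? $lifeTime : false);
            }

            protected function doDelete($id)
            {
                return $this->zendCache->remove($id);
            }

            protected function doFlush()
            {
                return $this->zendCache->clean();
            }

            protected function doGetStats()
            {
                return null; // Zend_Cache exposes no comparable stats
            }
        }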

  • LINQ saving images to varbinary

    - by m4rc
    I'm having issues saving images to a varbinary(max) field using LINQ. I can save files in the region of 10 KB to the database with no problems, but when it comes to bigger files, it's as though it doesn't even try. I've had a look in the SQL Server Profiler, and when the file is around 10 KB I can see the full INSERT statement in the detail pane. However, when the file is a bit bigger, the detail pane doesn't show anything, although any data besides the varbinary field is written to the database. The data is in the data object just before SubmitChanges, so I can't figure out what's happening between now and then!
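
    One hedged thing to check: that the LINQ to SQL column mapping really is varbinary(MAX) and not a capped size. In the .dbml, the mapping looks roughly like this (the column name is a placeholder):

        <!-- the DbType should be VarBinary(MAX), not e.g. VarBinary(8000) -->
        <Column Name="ImageData" Type="System.Data.Linq.Binary"
                DbType="VarBinary(MAX)" CanBeNull="true" />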

  • dotTrace can't create deploy folder on Windows Azure Web Sites (WAWS)

    - by Orhan Maden
    I get the error message 'Can't create deploy folder.' when I try to profile a remote website on WAWS. Actions taken:

    - Downloaded and installed the dotTrace Profiler 5.5 from the JetBrains website
    - Downloaded dotTrace.Performance.Remote version 5.5.0 from NuGet
    - Published the website to WAWS via Visual Studio 2013
    - Started the dotTrace application as Administrator
    - Connected to the remote https://subdomain.azurwebsites.net/AgentService.asmx (see image: http://1drv.ms/1nF5Cyh)
    - Selected the w3wp process and pressed Run
    - Got the error message 'Can't create deploy folder' (see image: http://1drv.ms/U5h35A)

    I'm running dotTrace in trial mode at the moment. Swift help is much appreciated. Orhan
