Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

  • Should we denormalize database to improve performance?

    - by Groo
    We have a requirement to store 500 measurements per second, coming from several devices. Each measurement consists of a timestamp, a quantity type, and several vector values. Right now there are 8 vector values per measurement, and we may consider this number to be constant for the needs of our prototype project. We are using NHibernate. Tests are done in SQLite (disk file db, not in-memory), but production will probably be MsSQL. Our Measurement entity class is the one that holds a single measurement, and looks like this: public class Measurement { public virtual Guid Id { get; private set; } public virtual Device Device { get; private set; } public virtual Timestamp Timestamp { get; private set; } public virtual IList<VectorValue> Vectors { get; private set; } } Vector values are stored in a separate table, so that each of them references its parent measurement through a foreign key. We have done a couple of things to ensure that the generated SQL is (reasonably) efficient: we are using Guid.Comb for generating IDs, we are flushing around 500 items in a single transaction, and the ADO.NET batch size is set to 100 (I think SQLite does not support batch updates? But it might be useful later). The problem: right now we can insert 150-200 measurements per second (which is not fast enough, although this is SQLite we are talking about). Looking at the generated SQL, we can see that in a single transaction we insert (as expected) 1 timestamp, 1 measurement, and 8 vector values, which means that we are actually doing 10x more single-table inserts: 1500-2000 per second. If we placed everything (all 8 vector values and the timestamp) into the measurement table (adding 9 dedicated columns), it seems that we could increase our insert speed up to 10 times. Switching to SQL Server will improve performance, but we would like to know if there might be a way to avoid the unnecessary performance costs related to the way the database is organized right now. [Edit] With in-memory SQLite I get around 350 items/sec (3500 single-table inserts), which I believe is about as good as it gets with NHibernate (taking this post for reference: http://ayende.com/Blog/archive/2009/08/22/nhibernate-perf-tricks.aspx). But I might as well switch to SQL Server and stop assuming things, right? I will update my post as soon as I test it.
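
    To make the trade-off concrete, here is a hedged sketch of the denormalized entity the question weighs (column and property names are assumptions; Device is the question's own type): one row per measurement, with the timestamp and all 8 vector values inlined, so each measurement becomes a single INSERT instead of ten.

    ```csharp
    using System;

    // Flattened entity: the timestamp and vectors live on the measurement row itself.
    public class FlatMeasurement
    {
        public virtual Guid Id { get; private set; }           // still Guid.Comb-friendly
        public virtual Device Device { get; private set; }
        public virtual DateTime TakenAt { get; private set; }  // inlined timestamp
        public virtual double V0 { get; private set; }
        public virtual double V1 { get; private set; }
        public virtual double V2 { get; private set; }
        public virtual double V3 { get; private set; }
        public virtual double V4 { get; private set; }
        public virtual double V5 { get; private set; }
        public virtual double V6 { get; private set; }
        public virtual double V7 { get; private set; }
    }
    ```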

    Read the article

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example: class LinqProvider<T> //where T is a L2S entity class { DataContext context; public virtual IEnumerable<T> GetAll() { return context.GetTable<T>(); } public virtual T Single(Func<T, bool> condition) { return context.GetTable<T>().SingleOrDefault(condition); } } From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows then filtering out all but one? Is it possible in this context? Any help appreciated, Lee
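
    A minimal sketch of the usual fix, assuming T is a LINQ to SQL entity class: accept an Expression<Func<T, bool>> instead of a Func<T, bool>. Table<T> implements IQueryable<T>, so an expression-tree predicate binds to the Queryable.SingleOrDefault overload and gets translated into the SQL WHERE clause, whereas a plain delegate binds to the Enumerable overload and filters client-side after reading the whole table.

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Data.Linq;
    using System.Linq;
    using System.Linq.Expressions;

    class LinqProvider<T> where T : class // T is an L2S entity class
    {
        private readonly DataContext context;

        public LinqProvider(DataContext context) { this.context = context; }

        public virtual IEnumerable<T> GetAll()
        {
            return context.GetTable<T>();
        }

        // Expression<> keeps the predicate as data, so it reaches the SQL translator.
        public virtual T Single(Expression<Func<T, bool>> condition)
        {
            return context.GetTable<T>().SingleOrDefault(condition);
        }
    }
    ```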

    Read the article

  • DataView vs DataTable.Select()

    - by Aseem Gautam
    Considering the code below: DataView someView = new DataView(sometable); someView.RowFilter = someFilter; if(someView.Count > 0) { ... } Quite a number of articles say DataTable.Select() is better than using DataViews, but these predate VS2008: "Solved: The Mystery of DataView's Poor Performance with Large Recordsets" and "Array of DataRecord vs. DataView: A Dramatic Difference in Performance". So in a situation where I just want a subset of DataRows based on some filter criteria (a single query), which is better: DataView or DataTable.Select()?
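
    For the single-query case described here, a small sketch of the Select() path (sometable and someFilter as in the snippet above): Select applies the filter directly and hands back only the matching rows, with no DataView to build.

    ```csharp
    // One-shot filter: no view, no persistent index.
    DataRow[] matches = sometable.Select(someFilter);   // e.g. "Qty > 10"
    if (matches.Length > 0)
    {
        // work with the matching subset only
    }
    ```

    As a rough rule, a DataView pays an up-front sort/index cost that only pays off when the same view is filtered repeatedly, so for a one-off subset Select() is typically the lighter option.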

    Read the article

  • MSSQL dilemma, performance

    - by Woland
    Hello, I am creating an app where users can save options. Which one is better? 1) Save the options into a varchar field in the user table, something like ('1,23,4354,34,3'); the query for this is: select * from data where CHARINDEX('L', Providers, 0) > 0 2) Create another table where the user options are stored, and just add rows: select * from data where Providers in (select Providers from userdata where userid=100) thanks for help

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, then I get 32 bytes of data output. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of data. If I use Zeros padding for 16 bytes of data, I get out 16 bytes of data. So for Zeros padding, if the data matches the block size then it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
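
    This is actually the specified behavior: PKCS #7 padding (RFC 5652, section 6.3) always appends between 1 and blocksize bytes, each byte holding the pad length, so a full extra block is appended when the plaintext is already block-aligned — otherwise the unpadder could not tell padding from data. Zeros padding appends nothing when aligned, which is also why it is not unambiguously removable. A small repro sketch:

    ```csharp
    using System;
    using System.Security.Cryptography;

    class PaddingDemo
    {
        // Encrypts 'data' with the given padding mode and reports the ciphertext length.
        static int EncryptedLength(PaddingMode padding, byte[] data)
        {
            using (var aes = new RijndaelManaged())
            {
                aes.Padding = padding;
                aes.GenerateKey();
                aes.GenerateIV();
                using (var enc = aes.CreateEncryptor())
                    return enc.TransformFinalBlock(data, 0, data.Length).Length;
            }
        }

        static void Main()
        {
            byte[] oneBlock = new byte[16];  // exactly one 128-bit block
            Console.WriteLine(EncryptedLength(PaddingMode.PKCS7, oneBlock)); // 32
            Console.WriteLine(EncryptedLength(PaddingMode.Zeros, oneBlock)); // 16
        }
    }
    ```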

    Read the article

  • Java - Collections.sort() performance

    - by msr
    Hello, I'm using Collections.sort() to sort a LinkedList whose elements implement the Comparable interface, so they are sorted in natural order. The Javadoc says this method uses a mergesort algorithm, which has n*log(n) performance. My question is whether there is a more efficient algorithm to sort my LinkedList? The size of that list could be very high, and sorting will also be very frequent. Thanks!

    Read the article

  • Specify which xml file to load when link is clicked

    - by Jason
    Good morning, I would like it so that when a link is clicked on the homepage, it loads a particular XML file into the next page (the page is called category-list.aspx). This category list page uses the Repeater control method to display the XML details on the page. I used the example shown here: http://www.w3schools.com/aspnet/aspnet_repeater.asp So at the moment the repeater script looks like: <script runat="server"> sub Page_Load if Not Page.IsPostBack then dim mycategories=New DataSet mycategories.ReadXml(MapPath("categories.xml")) categories.DataSource=mycategories categories.DataBind() end if end sub </script> After doing some research I found someone with the same problem, and the solution was to insert #tags as part of the link on the homepage (i.e. category-list.aspx#company1results) and then some script on the list page to pick up the correct XML file: <script type="text/javascript"> var old_onload = window.onload; // Play it safe by respecting onload handlers set by other scripts. window.onload=function() { var categories = document.location.href.substring(document.location.href.indexOf("#")+1); loadXMLDoc('XML/'+categories+'.xml'); old_onload(); } </script> This was from the following link: http://www.hotscripts.com/forums/javascript/45641-solved-specify-xml-file-load-when-click-link.html How can I get these two scripts to connect with each other? Thank you for your time
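
    Worth noting: everything after '#' is a fragment identifier and is never sent to the server, which is why the workaround above has to read it client-side with JavaScript. If the homepage links pass a query-string parameter instead (e.g. category-list.aspx?cat=company1 — a hypothetical link format), the XML file can be chosen server-side in Page_Load. A sketch of the code-behind in C# (the VB translation is mechanical; the XML/ folder and fallback file name are assumptions):

    ```csharp
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!Page.IsPostBack)
        {
            // Read the category from the query string; fall back to the default file.
            string cat = Request.QueryString["cat"] ?? "categories";
            // In real code, validate 'cat' against a whitelist before using it in a path.
            var mycategories = new DataSet();
            mycategories.ReadXml(MapPath("XML/" + cat + ".xml"));
            categories.DataSource = mycategories;   // the Repeater from the markup above
            categories.DataBind();
        }
    }
    ```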

    Read the article

  • Recommendation for tuning 100s of SQL databases

    - by wayne
    Hi, I'm running several SQL Servers, each hosting a few hundred multi-gig databases for customers. They are all set up homogeneously as far as the schemas are concerned; however, customer usage of the data differs quite a lot from database to database. What would be the best way to auto-index / profile / tune this large number of databases? As there are at least 600 or more catalogs, I can't have someone manually profile and index each one as required by its usage patterns. I'm currently running SQL 2005 but will be moving to 2008, so solutions that work with either are fine!

    Read the article

  • Speeding up ROW_NUMBER in SQL Server

    - by BlueRaja
    We have a number of machines which record data into a database at sporadic intervals. For each record, I'd like to obtain the time period between this recording and the previous recording. I can do this using ROW_NUMBER as follows: WITH TempTable AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY Machine_ID ORDER BY Date_Time) AS Ordering FROM dbo.DataTable ) SELECT [Current].*, Previous.Date_Time AS PreviousDateTime FROM TempTable AS [Current] INNER JOIN TempTable AS Previous ON [Current].Machine_ID = Previous.Machine_ID AND Previous.Ordering = [Current].Ordering + 1 The problem is, it runs really slowly (several minutes on a table with about 10k entries) - I tried creating separate indices on Machine_ID and Date_Time, and a single combined index, but nothing helps. Is there any way to rewrite this query to go faster?

    Read the article

  • jQuery load help

    - by mtwallet
    Hi. I am trying to use load() to place some HTML into a div on a page. I have a bunch of links like this: <div id="slideshow"> <div id="slides"> <div class="projects"> <a href="work/mobus.html" title="Mobus Fabrics Website"> <img src="images/work/mobus.jpg" alt="Mobus Fabrics Website" width="280" height="100" /> </a> <a href="work/eglin.html" title="Eglin Ltd Website"> <img src="images/work/eglin.jpg" alt="Eglin Ltd Website" width="280" height="100" /> </a> <a href="work/first-brands.html" title="First Brands Website"> <img src="images/work/first-brands.jpg" alt="First Brands Website" width="280" height="100" /> </a> </div> <a id="prev"></a> <a id="next"></a> </div> and my jQuery code looks like this: $('.projects a').click(function() { $('#work').load(this.href); }); The problem is that when a link is clicked, instead of the HTML being placed into the #work div, the browser navigates to the linked page. Please can anyone help?

    Read the article

  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring+Hibernate+MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes: @Entity public class Picture implements java.io.Serializable{ long id; byte[] data; ... // getters and setters } And here's my controller that saves the file on submit: public class PictureUploadFormController extends AbstractBaseFormController{ ... protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response, Object command, BindException errors) throws Exception{ MultipartFile file; // getting MultipartFile from the command object ... // beginning hibernate transaction ... Picture p=new Picture(); p.setData(file.getBytes()); pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p) // committing hibernate transaction ... } ... } Obviously a bad piece of code. Is there any way I could use InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book: java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you’ll be restricted in how instances of the class may be used. In particular, you won’t be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don’t feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn’t a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms. Now apparently Blob cannot be used, as it is not a good idea anyway; what else could be used to improve performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks

    Read the article

  • Caching MySQL database for better performance

    - by kobey
    Hi, I'm using the Amazon cloud and I have a performance issue, since the HDD is not located on my machine. My database is small (~500MB) and I can afford to keep it all in my RAM. I do not want to keep just queries in my RAM; I need all the tables there. How can I do it? Thanks, Koby P.S. I'm using Ubuntu Server...

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <=HTTP=> [ RESTlet <=HTTP=> CouchDB ] I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.

    Read the article

  • Index on column with only 2 distinct values

    - by Will
    I am wondering about the performance of this index: I have an "Invalid" varchar(1) column that has 2 values: NULL or 'Y'. I have an index on (invalid), as well as on (invalid, last_validated). Last_validated is a datetime (this is used for an unrelated SELECT query). I am flagging a small number of rows in the table (1-5%) as 'to be deleted', so that when I DELETE FROM items WHERE invalid='Y' it does not perform a full table scan for the invalid items. The problem is that the actual DELETE is now quite slow, possibly because all the index entries have to be removed as the rows are deleted. Would a bitmap index provide better performance for this? Or perhaps no index at all?

    Read the article

  • master-slave-slave replication: master will become bottleneck for writes

    - by JMW
    Hi, the MySQL database has around 2TB of data. I have master-slave-slave replication running. The application that uses the database runs read (SELECT) queries on just one of the 2 slaves, and write (DELETE/INSERT/UPDATE) queries on the master. The application does way more reads than writes. If we have a problem with the read (SELECT) queries, we can just add another slave database and tell the application that there is another slave, so it scales well... Currently the master is running at around 40% disk I/O due to the writes, so I'm thinking about how to scale the database in the future, because one day the master will be overloaded. What could be a solution there? Maybe MySQL Cluster? If so, are there any pitfalls or limitations in switching the database to NDB? Thanks a lot in advance... :)

    Read the article

  • JDO, GAE: Load object group by child's key

    - by tiex
    I have an owned one-to-many relationship between two objects: @PersistenceCapable(identityType = IdentityType.APPLICATION) public class AccessInfo { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private com.google.appengine.api.datastore.Key keyInternal; ... @Persistent private PollInfo currentState; public AccessInfo(){} public AccessInfo(String key, String voter, PollInfo currentState) { this.voter = voter; this.currentState = currentState; setKey(key); // this key is unique in the whole system } public void setKey(String key) { this.keyInternal = KeyFactory.createKey( AccessInfo.class.getSimpleName(), key); } public String getKey() { return this.keyInternal.getName(); } and @PersistenceCapable(identityType = IdentityType.APPLICATION) public class PollInfo { @PrimaryKey @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY) private Key key; @Persistent(mappedBy = "currentState") private List<AccessInfo> accesses; ... I created an instance of the PollInfo class and made it persistent. That works. But then I want to load this entity group by an AccessInfo key, and I am getting a NucleusObjectNotFoundException. Is it possible to load a group by a child's key?

    Read the article

  • How to correctly load dependent JavaScript files

    - by Vaibhav Garg
    I am trying to extend a website page that displays Google Maps with the LabeledMarker. The Google Maps API defines a class called GMarker which is extended by LabeledMarker. The problem is, I can't seem to load the LabeledMarker script properly, i.e. after the Google API loads, and I get the 'GMarker not defined' error. What is the correct way to specify the scripts in such cases? I am using ASP.NET's ClientScript.RegisterClientScriptInclude() first for the Google API URL and then immediately after for the LabeledMarker script file. The initial Google API loader writes further script links that load the actual GMarker class. Shouldn't all those scripts be executed before the next script block (the LabeledMarker script) is processed? I have checked the generated HTML and the script blocks are emitted in the right order. <script src="google api url" type="text/javascript"></script> ... (the above script uses document.write() etc. to append further script blocks/sources) ... <script src="Scripts/LabeledMarker.js" type="text/javascript"></script> Once again, LabeledMarker.js seems to get executed before the Google API finishes loading.

    Read the article

  • jQuery load returns empty, possible MVC 2 problem?

    - by Max Fraser
    I have a site that needs to get some data from a different site that is using ASP.NET MVC. The data to be loaded comes from these pages: http://charity.hondaclassic.com/home/totaldonations http://charity.hondaclassic.com/Home/CharityList This should be a no-brainer, but for some reason I get an empty response. Here is my JS: <script> jQuery.noConflict(); jQuery(document).ready(function($){ $('.totalDonations').load('http://charity.hondaclassic.com/home/totaldonations'); $('#charityList').load('http://charity.hondaclassic.com/home/CharityList'); }); </script> In Firebug I see the request is made and comes back with a response of 200 OK, but the response is empty; if you browse to these pages they work fine! What the heck? Here are the controller actions from the MVC site: public ActionResult TotalDonations() { var total = "$" + repo.All<Customer>().Sum(x => x.AmountPaid).ToString(); return Content(total); } public ActionResult CharityList() { var charities = repo.All<Company>(); return View(charities); } Someone please point out the stupid little thing I am missing - this should have taken me 5 minutes and it's been hours!

    Read the article

  • AIR: sync GUI with database?

    - by John Isaacks
    I am going to be building an AIR application that shows a list (about 1-25 rows of data) from a database. The database is on the web. I want the list to be as accurate as possible, meaning that as soon as the database data changes, the list displayed in the app should update ASAP. I do not know of any way the AIR application could be notified when there is a change, so I am thinking I will have to poll the database at certain intervals to keep an up-to-date list. So my question is: first, is there any way to NOT have to keep checking the database? And if I do have to keep checking the database, what is a reasonable interval to do that at? Thanks.

    Read the article

  • Are bit operations quick?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as unsigned int. I know that the real values do not exceed some limit, say 1000. That means that I can use unsigned short to store them. One benefit is that it'll use less space. Do I have to pay for it by losing performance? Another assumption: I decided to store the data as short, but all calling functions use int, so I need to convert between these datatypes when storing/extracting values. Will the performance loss be dramatic? Third assumption: due to a great wish to economize on memory, I decided to use not short but just 10 bits packed into an array of unsigned int. What will happen in this case compared with the previous ones?
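
    For the third scenario, a sketch of what a 10-bit packed read looks like (written in C#, but the shifts and masks are identical in any language): every access costs a few extra ALU operations, which is the price paid for the roughly 3x memory saving versus 32-bit storage; whether that wins overall usually comes down to whether the smaller footprint improves cache behavior.

    ```csharp
    // Read element 'index' from an array of 10-bit values packed into uints.
    static uint Get10(uint[] packed, int index)
    {
        int bit = index * 10;
        int word = bit >> 5;            // bit / 32
        int offset = bit & 31;          // bit % 32
        // Build a 64-bit window so values straddling a word boundary still work.
        ulong next = (word + 1 < packed.Length) ? packed[word + 1] : 0;
        ulong window = packed[word] | (next << 32);
        return (uint)((window >> offset) & 0x3FF);  // keep the low 10 bits
    }
    ```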

    Read the article

  • Load collections eagerly in NHibernate using Criteria API

    - by Zuber
    I have an entity A which HasMany entities B and entities C. All entities A, B and C have some references x, y and z which should be loaded eagerly. I want to read all entities A from the database, and load the collections of B and C eagerly using the Criteria API. So far, I am able to fetch the references on A eagerly, but when the collections are loaded, the references within them are loaded lazily. Here is how I do it: AllEntities_A = _session.CreateCriteria(typeof(A)) .SetFetchMode("x", FetchMode.Eager) .SetFetchMode("y", FetchMode.Eager) .List<A>().AsQueryable(); The mapping of entity A using Fluent is shown below. _B and _C are private ILists for B and C respectively in A. Id(c => c.SystemId); Version(c => c.Version); References(c => c.x).Cascade.All(); References(c => c.y).Cascade.All(); HasMany<B>(Reveal.Property<A>("_B")) .AsBag() .Cascade.AllDeleteOrphan() .Not.LazyLoad() .Inverse() .Cache.ReadWrite().IncludeAll(); HasMany<C>(Reveal.Property<A>("_C")) .AsBag() .Cascade.AllDeleteOrphan() .LazyLoad() .Inverse() .Cache.ReadWrite().IncludeAll(); I don't want to make changes to the mapping file, and would like to load the entire entity A eagerly, i.e. I should get a list of A's containing lists of B's and C's whose reference properties are also loaded eagerly.
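
    A hedged sketch of one way this is often attempted with the Criteria API (the "_B"/"_C" paths mirror the mapping above and must match whatever path names NHibernate actually derives from the Fluent mapping): nested fetch paths request the collections and the references inside them in one query, and a distinct-root-entity transformer collapses the row duplication that the collection joins produce.

    ```csharp
    using System.Linq;
    using NHibernate;
    using NHibernate.Transform;

    AllEntities_A = _session.CreateCriteria(typeof(A))
        .SetFetchMode("x", FetchMode.Eager)
        .SetFetchMode("y", FetchMode.Eager)
        .SetFetchMode("_B", FetchMode.Eager)      // the B collection itself
        .SetFetchMode("_B.x", FetchMode.Eager)    // references inside each B
        .SetFetchMode("_C", FetchMode.Eager)
        .SetFetchMode("_C.x", FetchMode.Eager)
        .SetResultTransformer(Transformers.DistinctRootEntity)
        .List<A>()
        .AsQueryable();
    ```

    Be aware that eagerly joining two bag collections in one statement multiplies rows, and NHibernate may refuse to fetch multiple bags simultaneously; splitting the work into two criteria queries over the same session (one fetching _B, one fetching _C) is a common fallback.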

    Read the article

  • Lack of ImageList in MenuStrip and performance issues

    - by Ivan
    MenuStrip doesn't support using ImageList images. What are the performance implications of this? Is there a risk of using too many GDI resources and causing slowdowns? How many items should be considered acceptable before one should implement a custom control that draws images from an ImageList?
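
    One common workaround, sketched below under the assumption that the ImageList keys match the menu item names: copy the Image references out of the ImageList onto the items, so the bitmaps stay centrally managed while each menu item still gets its picture.

    ```csharp
    // menuStrip and imageList are assumed to be existing controls on the form.
    foreach (ToolStripItem item in menuStrip.Items)
    {
        if (imageList.Images.ContainsKey(item.Name))
            item.Image = imageList.Images[item.Name];
    }
    ```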

    Read the article

  • Capturing Drupal7 DOM content before page load for comparison

    - by ehime
    We have an MU (Multisite) installation of Drupal 7 here at work, and are trying to temporarily hold back the swarm of bots we receive until we get a chance to load our content. I wrote a quick and dirty script to send 503 headers if we find certain criteria via XPath (this can ALSO be done with strpos/preg_match if the DOM is not well formed). In order to get the ball rolling, I need to figure out how to either A) hijack the Drupal 7 bootstrap and pull all content through the filter below, or B) ob_flush content through the filter before the content is loaded. The issue I am having is figuring out exactly where I can catch the content. I thought that index.php in Drupal 7 would be the place, but I'm a little confused as to where or how I should capture the contents. Here's the script; hopefully someone can point me in the right direction. //error_reporting(-1); /* start query */ $dom = new DOMDocument; $dom->preserveWhiteSpace = false; $dom->Load($_SERVER['PHP_SELF']); $xpath = new DOMXPath($dom); //if this exists we aren't ready to be read by bots $query = $xpath->query(".//*[@id='block-views-about-this-site-block']/div/div/div"); //or $query = 'klat-badge'; //if this is a string not DOM /* end query */ if($query->length > 0) { //require banlist require('botlist.php'); $str = strtolower('/'.implode('|', array_unique($list)).'/i'); if(preg_match($str, strtolower($_SERVER['HTTP_USER_AGENT']))) { //so tell bots we're broken header('HTTP/1.1 503 Service Temporarily Unavailable'); header('Status: 503 Service Temporarily Unavailable'); exit; } }

    Read the article

  • HLSL How can one pass data between shaders / read existing colour value?

    - by RJFalconer
    Hello all, I have 2 HLSL ps2.0 shaders. Simplified, they are: Shader 1 Reads texture Outputs colour value based on this texture Shader 2 Needs to read in existing colour (or have it passed in/read from a register) Outputs the final colour which is a function of the previous colour (They need to be different shaders as I've reached the maximum vertex-shader outputs for 1 shader) My problem is I cannot work out how Shader 2 can access the existing fragment/pixel colour. Is the only way for shaders to interact really just the alpha blending options? These aren't sufficient if I want to use the colour as input to my function.

    Read the article

  • Beneficial in terms of performance

    - by Usama Khalil
    Hi, is it better to declare web service class object instances as static, given that the .asmx web service classes have only static methods? What I want is to declare and instantiate the .asmx web service class as static in an .aspx page code-behind class, so that on every event call on that page I can perform operations against the web service methods. Is it beneficial in terms of performance? Thanks Usama
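
    For context, a sketch of the conventional pattern (OrdersService and PlaceOrder are hypothetical names): generated .asmx proxy classes derive from SoapHttpClientProtocol and are cheap to construct, so a per-call instance is the usual choice; a static instance saves very little and shares mutable state (Url, Credentials, CookieContainer) across every request hitting the page, which can misbehave under concurrent load.

    ```csharp
    public partial class OrdersPage : System.Web.UI.Page
    {
        protected void SubmitButton_Click(object sender, System.EventArgs e)
        {
            // Construct per call: the proxy object is lightweight, and the
            // underlying HTTP connections are pooled by the runtime anyway.
            using (var svc = new OrdersService())   // hypothetical generated proxy
            {
                svc.PlaceOrder(42);                 // hypothetical web method
            }
        }
    }
    ```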

    Read the article
