Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

Page 506 of 3203

  • SSC Clinic: Can Implementing "Optimize for Ad Hoc Queries" Boost Performance for the SQLServerCentral.com and Simple-Talk.Com SQL Servers?

    With the introduction of the instance-level option “optimize for ad hoc workloads” in SQL Server 2008, DBAs have a tool to deal with a problem known as plan cache pollution, or plan cache bloat. It’s often caused when one-time-use ad hoc queries are sent to SQL Server from Object-Relational Mapping (ORM) solutions, such as LINQ, NHibernate, or Entity Framework. The problem can prevent SQL Server from using its available memory optimally, potentially hurting performance.
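
    For context, the option the article discusses is an instance-level setting toggled through sp_configure. Here is a minimal C# sketch of enabling it (the connection string is a placeholder, not from the article):

        using System.Data.SqlClient;

        class EnableOptimizeForAdHoc
        {
            static void Main()
            {
                // Placeholder connection string -- point it at your own instance.
                using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
                {
                    conn.Open();
                    // "optimize for ad hoc workloads" is an advanced option, so expose it first.
                    // Once on, a first-time ad hoc plan is cached only as a small stub; the full
                    // plan is cached on the second execution, which curbs plan cache bloat.
                    var sql = "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; " +
                              "EXEC sp_configure 'optimize for ad hoc workloads', 1; RECONFIGURE;";
                    using (var cmd = new SqlCommand(sql, conn))
                        cmd.ExecuteNonQuery();
                }
            }
        }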

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (e.g. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object. On the other hand, this approach ends up leaking implementation details everywhere. Nine times out of ten I'm happy just to read a JSON structure out of the DB and send it to the client, but one time in ten I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solving it.) There is a complicating factor in that our data schema is constantly and rapidly changing, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used in, or whether it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.
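
    To make the trade-off concrete, here is a hypothetical C# sketch (the PlayerProfile name, the field names, and the Newtonsoft.Json usage are illustrative assumptions, not from the question). The pass-through style forwards the stored JSON untouched, while the business-object style concentrates schema knowledge in one mapping:

        using Newtonsoft.Json.Linq;

        static class ProfileAccess
        {
            // Style 1: pass-through -- the stored JSON goes to the client verbatim.
            // Cheap, but any server code that peeks inside is coupled to the implicit schema.
            public static string PassThrough(string storedJson)
            {
                return storedJson;
            }
        }

        // Style 2: an intermediate business object -- schema knowledge lives in one
        // place, so server-side logic survives renames in the stored document.
        class PlayerProfile
        {
            public string Name { get; set; }
            public int Score { get; set; }

            public static PlayerProfile FromStored(string storedJson)
            {
                var doc = JObject.Parse(storedJson);
                // Only this mapping changes if the stored layout changes.
                return new PlayerProfile
                {
                    Name = (string)doc["name"],
                    Score = (int?)doc["score"] ?? 0
                };
            }
        }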

    Read the article

  • Asynchronously returning hierarchical data using the .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S, searching each subdirectory on a hard drive for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found ("better" meaning closer to the root of c:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report). Question: how should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | not started | complete), and I might base it on System.Collections.Concurrent. Another idea I'm considering is the consumer/producer pattern (which the concurrent collections can handle), but I'm not sure what the objects should "look" like; see the sketch below. Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. Someone literally doing a DIR /S as described above would need to account for more than one matching file per subdirectory. More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy relate to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion, but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node; it is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found), but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want a report of ALL the nodes that contain this RowKey.
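
    One possible shape for the API, as a producer/consumer sketch over a plain file-system walk (the Azure Tables hierarchy from the question is left out, and all names here are hypothetical): traversal tasks add matches tagged with their depth, so the consumer can take the first hit fast and still prefer a shallower one if it arrives later.

        using System.Collections.Concurrent;
        using System.IO;
        using System.Threading.Tasks;

        class Match
        {
            public string Path;  // where the word was found
            public int Depth;    // 0 = root; lets the consumer prefer hits closer to c:\
        }

        class TreeSearch
        {
            // Producer/consumer: the caller drains Results as matches arrive
            // instead of waiting for the whole walk to finish.
            public BlockingCollection<Match> Results = new BlockingCollection<Match>();

            public Task Start(string root, string word)
            {
                return Task.Factory.StartNew(() =>
                {
                    Walk(root, word, 0);
                    Results.CompleteAdding();  // signals "traversal complete"
                });
            }

            void Walk(string dir, string word, int depth)
            {
                foreach (var file in Directory.EnumerateFiles(dir))
                    if (File.ReadAllText(file).Contains(word))
                        Results.Add(new Match { Path = file, Depth = depth });

                // Fan out over subdirectories; this returns only when the subtree is done.
                Parallel.ForEach(Directory.EnumerateDirectories(dir),
                    sub => Walk(sub, word, depth + 1));
            }
        }

    A consumer then iterates over search.Results.GetConsumingEnumerable(), keeping the lowest-Depth match seen so far, or collecting all of them for the report case.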

    Read the article

  • My 7 is Slow: A Guide to Upgrading Your XP Machine for Optimum Performance with Windows 7

    When the Windows Vista operating system came out, you decided that you were better off with what you had. The odds are that you made a very smart move. When Windows 7 came out you were also prudent: you waited to see whether the newest operating system would be worth the expense of upgrading. Now that you have decided to upgrade to Windows 7, you will have some performance issues to deal with....

    Read the article

  • How to calculate the maximum number of requests a 128 MB VPS can handle?

    - by ifdion
    I am a newbie here, so please let me know if I'm using the wrong webmaster terms. I am currently setting up a VPS for a multi-site WordPress installation. The VPS uses a Debian 6 LNMP setup, and DNS is handled by another service. Currently the VPS is running a non-multisite WordPress using about 83 MB of its 128 MB of RAM. As far as I know, performance is relative to the number of requests, not the number of sites in the multi-site setup. The question: how do I calculate the maximum number of requests with that setup? If that information is not enough, what other factors do I need to know? Thank you in advance.
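
    One common back-of-the-envelope method (every number below is an illustrative assumption, not a measurement from the question): concurrency is bounded by how many PHP workers fit in the RAM left over after fixed services, and throughput follows from the average request time.

        class VpsCapacity
        {
            static void Main()
            {
                // Assumed figures -- substitute values measured on the actual VPS.
                int totalRamMb = 128;
                int baselineMb = 48;   // OS + nginx + MySQL residents (assumed)
                int perWorkerMb = 20;  // one PHP worker serving WordPress (assumed)

                int workers = (totalRamMb - baselineMb) / perWorkerMb;  // = 4 here
                System.Console.WriteLine("Max concurrent PHP workers: " + workers);

                // With 4 workers and an assumed ~200 ms per request, peak throughput
                // is roughly 4 / 0.2 = 20 requests/second before requests queue up.
            }
        }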

    Read the article

  • Is there any performance difference between Ubuntu Unity and Classic/Fallback?

    - by user48949
    Is there any difference between using Ubuntu Unity and Ubuntu Classic/Fallback? Just to be clear, I'm not talking about the Launcher or the Dash. Of course Ubuntu Classic/Fallback doesn't have the Launcher/Dash, but that is not the difference I'm talking about. I mean differences related to performance, features, functionality, compatibility, and so on. I'm asking because I've heard the Fallback Mode is somewhat "incomplete" compared to GNOME Shell or Ubuntu Unity, so I wanted to know whether or not that's true; if it is, I don't think using Fallback Mode is worth it.

    Read the article

  • Do I lose/gain performance for discarding pixels even if I don't use depth testing?

    - by Gajoo
    When I first searched for the discard instruction, I found experts saying that using discard will result in a performance drain: discarding pixels breaks the GPU's ability to use the z-buffer properly, because the GPU has to first run the fragment shader for both objects to check whether the one nearer to the camera is discarded or not. For the 2D game I'm currently working on, I've disabled both depth-test and depth-write. I'm drawing all objects sorted by their depth and that's all; there's no need for the GPU to do anything fancy. Now I'm wondering: is it still bad to discard pixels in my fragment shader?

    Read the article

  • How Can I Disable Windows 7's Aero Performance Warnings?

    - by Jason Fitzpatrick
    You know your computer isn’t cutting edge, but there’s no need for Windows 7 to constantly remind you. Read on to see how you can disable its constant nagging to adjust your color scheme to improve performance. Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • A hotfix is available that improves the performance of the CLR when a .NET Framework 3.5 SP1-based application runs in a virtualized environment

    981619 ... A hotfix is available that improves the performance of the CLR when a .NET Framework 3.5 SP1-based application runs in a virtualized environment.

    Read the article

  • Why is Windows 7's overall performance better than Ubuntu 11.10's?

    - by user37805
    I have an i7 2600 processor, 8 GB of DDR3 RAM, and an nVidia GTX 570, and Ubuntu takes 45-50 seconds to boot and 32-35 seconds to power off, while Windows 7 boots in 20-25 seconds and shuts down in 10 seconds. Both OSes have autologin enabled, in a dual-boot setup. Ubuntu is slow even with preload, doesn't show any boot splash after installing drivers, and didn't recognize my nVidia graphics card in Jockey GTK; I had to add the x-swat repository, and that didn't work either. I installed the proprietary drivers through the terminal (nvidia-common, nvidia-settings) in order to have 3D acceleration, but it doesn't make any difference to the speed. I also have a Pentium 4 PC, and there Ubuntu 11.10 is way faster than Windows 7 or XP, also with an nVidia graphics card and preload. My boot script is at http://paste.ubuntu.com/924890/ ; sorry, some words are in Spanish because my Ubuntu is in Spanish. I'm not using Wubi; Ubuntu has its own 64-bit partition, and Matlab 2011 has very low performance compared to the Windows version.

    Read the article

  • What performance enhancement can I expect from using an SSD as the main disk?

    - by motumboe
    I'm planning to buy a new PC and am thinking about using an SSD as the main disk. I'd also use a standard spinning disk, mounted at /home. To the people already using such a setup: does this give a practical, noticeable improvement in performance? I think that access times, rather than transfer rates, are the more useful feature of SSDs. I would like to know whether they have a noticeable effect on a desktop installation of Ubuntu. Thanks in advance!

    Read the article

  • OVH opens a cloud for developers who want to move to SaaS, and enters on-demand high-performance computing

    OVH opens a cloud for developers who want to move to SaaS, and enters on-demand high-performance computing. "To support the software market's shift toward SaaS, OVH.com is evolving its Private Cloud offering based on VMware's vSphere" - the northern-France hosting provider's new offering, summed up in one sentence. What sets the offering apart is that it does not pool server resources: each physical server is dedicated to a single customer. The goal is to give software vendors and developers an environment into which they can migrate their software and offer it in SaaS mode. "Instead of installing the software..."

    Read the article

  • .NET Compact Framework 3.9: Visual Studio 2012 compatibility, performance gains, and multi-core support

    .NET Compact Framework 3.9 will be compatible with Visual Studio 2012, bringing performance gains and multi-core support to the embedded version of .NET. Last week Microsoft unveiled the roadmap for its embedded operating systems. The company plans to release Windows Embedded Compact 2013, its OS for lightweight touch devices, in the second quarter of next year. That release will include .NET Compact Framework (NETCF) 3.9, the next update of the development platform for embedded systems. As a reminder, the .NET Compact Framework is a version of the .NET Framework for embedded devices. It...

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series examining the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine. All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used, and what challenges are faced across various enterprises? In January 2014, DBTA, in partnership with Oracle, released the results of its “Oracle Database Management and Data Protection Survey”. Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets. Here are some of the key findings from the survey:
    - The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB, and some up to 5 PB or more).
    - About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises.
    - Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, used by 17%.
    - Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.
    A few key backup and recovery challenges resonated across many of the respondents:
    - Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows; in a similar vein, 23% complained that backups degrade the performance of production systems.
    - Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real time.
    - Management complexity: 25% stated that recovery operations are too complex (see Figure 1); 31% reported that backups need constant management (see Figure 1); 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.
    Figure 1: Current Challenges with Database Backup and Recovery. Figure 2: Percentage of Organization’s Data Backed Up in Real-Time or Near Real-Time.
    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

    Read the article

  • Filter data between two dates in a Crystal Report (VB.NET)

    - by irienaoki0407
    I need some help creating Crystal Reports in VB 2005. I want to filter data between two dates (a from date and a to date) selected with DateTimePickers. I'm using SQL Server 2000 for the connection. Update: thanks for the link, but I'm trying to use a record selection formula. Here's my code:

        ' Requires: Imports CrystalDecisions.CrystalReports.Engine
        Try
            Dim cryRpt As New ReportDocument
            With cryRpt
                .FileName = "C:\Documents and Settings\Ratna Ayu\My Documents\Visual Studio 2005\Projects\Denda\Denda\CrystalReport1.rpt"
                ' The original formula used "=<", which is not a valid operator; it must be "<=".
                ' Note also that comparing dates formatted as "dd/MM/yyyy" strings compares them
                ' textually rather than chronologically.
                .RecordSelectionFormula = _
                    "{pinjam.tglkembali} >= '" & DateTimePicker1.Value.ToString("dd/MM/yyyy") & "'" & _
                    " And {pinjam.tglkembali} <= '" & DateTimePicker2.Value.ToString("dd/MM/yyyy") & "'"
            End With
            CrystalReportViewer1.ReportSource = cryRpt
            CrystalReportViewer1.Refresh()
        Catch ex As Exception
            MsgBox("tdk ada data", , "")
        End Try

    Read the article

  • jqGrid using a servlet to store the data: how to access the data from a different Java class

    - by Sam
    I'm using jqGrid for table input, setting its url to a servlet that handles the GET and POST requests and saves the rows to a Java object. I'm using the WebWork web framework, and I was wondering how I can get access to the object the servlet is saving the data to. One way I have thought of is to call the GET method from the Java action class, and the servlet will return a JSON string with the object data. Is there a better design for doing this? This is probably not entirely clear, so ask questions and I'll help get my point across. Thanks

    Read the article

  • Dynamic localization with Data Annotations possible?

    - by devries48
    Hi, I'm trying to dynamically update the language of a Silverlight application. I tried the example provided by Tim Heuer ("Silverlight and localizing string data") and it was exactly what I needed. Now I'm experimenting with Data Annotations and would like to have the same behaviour, but with no luck... Can someone point me in the right direction? Data annotation on a property:

        [Display(Name = "UserNameLabel", ResourceType = typeof(Resources.Strings.StringResources))]
        [Required]
        public string Username ...

    My XAML:

        <dataInput:Label Target="{Binding ElementName=tbUserName}" PropertyPath="UserName"/>

    Thanks, Ron

    Read the article

  • Updating a progress bar while WPF data binding is taking place (in C#)

    - by evan
    I bind a large dataset to a WPF ListBox, which can take a long time (more than ten seconds). While the data is being bound I'd like to display a circular progress bar, but I can't get the progress bar to show while the data binding is occurring, even though I am trying to do the binding in a BackgroundWorker. I tested it by making the first line of the BackgroundWorker's DoWork handler a Thread.Sleep(5000), and sure enough the progress bar spun for that duration, only to freeze when the binding started. Is this because both the data binding and the UI updating have to occur on the same thread? Any ideas on how to work around it? Thanks for your help!!
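
    The usual split (a sketch; the listBox, progressBar, and LoadItems names are hypothetical) is to do only the slow data retrieval in DoWork, then assign ItemsSource in RunWorkerCompleted, which runs back on the UI thread. The binding and rendering themselves cannot leave the UI thread, so if the assignment is the slow part, UI virtualization on the ListBox is usually the real fix.

        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Windows;

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();

                var worker = new BackgroundWorker();
                // Runs on a thread-pool thread: do only the expensive fetch here.
                worker.DoWork += (s, e) => e.Result = LoadItems();
                // Runs back on the UI thread: safe to touch controls here.
                worker.RunWorkerCompleted += (s, e) =>
                {
                    listBox.ItemsSource = (List<string>)e.Result;
                    progressBar.Visibility = Visibility.Collapsed;  // hide the spinner
                };
                worker.RunWorkerAsync();
            }

            // Placeholder for the slow query/materialization step.
            static List<string> LoadItems()
            {
                var items = new List<string>();
                for (int i = 0; i < 100000; i++) items.Add("Item " + i);
                return items;
            }
        }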

    Read the article

  • Putting data from local SQL database to remote SQL database without remote SQL access enabled (PHP)

    - by Shyam
    Hi, I have a local database and all the tables are defined. Eventually I need to publish my data remotely, which I can do easily with phpMyAdmin. The problem, however, is that my remote host doesn't allow remote SQL connections at all, so writing a script that does a mysqldump and runs it through a client (which would have been ideal) won't help me here. Since the schema won't change, but the data will, I need some kind of PHP client that works in "reverse". My question is whether such a client exists, and what would be recommended to use (based on experience). I just need a one-way trip here, from my local database (Rails) to the remote database (supports PHP), preferably as simple and slick as possible. Thank you for your replies, comments and feedback!

    Read the article

  • How to update data in the user information list when using FBA

    - by Flo
    I've got to support a SharePoint web application that uses FBA with a custom membership provider and a custom role provider to authenticate users against two different LDAPs. The user data is stored only in the user information lists; the SSP user profiles are not used. Now one of the users got married, and her surname changed in the LDAP (the one where her information is stored), but this change doesn't get provisioned into the user information list. I'm wondering what options I have to provision changes of user data into the user information list. I've already tried to update the user's last name manually, but it seems that certain fields, such as surname and first name, are not editable in the user information list, even as a site administrator. So what options do I have to solve this problem? Being able to edit the information by hand would also be a solution, though of course not the most preferred one.
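
    For what it's worth, a minimal server-object-model sketch of touching a user's entry in the user information list (the URL, login, and new name are placeholders; which fields accept an update can vary by farm configuration, which matches the behaviour described above):

        using Microsoft.SharePoint;

        class SyncUserInfo
        {
            static void Main()
            {
                // Placeholder site URL and FBA login.
                using (var site = new SPSite("http://server/sites/app"))
                using (var web = site.OpenWeb())
                {
                    SPUser user = web.EnsureUser("membershipprovider:jane.doe");

                    // The user information list is a hidden SPList; each user is one item.
                    SPListItem entry = web.SiteUserInfoList.GetItemById(user.ID);
                    web.AllowUnsafeUpdates = true;
                    entry["Title"] = "Jane NewSurname";  // display name field
                    entry.Update();
                    web.AllowUnsafeUpdates = false;
                }
            }
        }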

    Read the article

  • jqGrid adding new row with XML Data

    - by mahr.g.mohyuddin
    Hi everyone, I am using jqGrid in my ASP.NET MVC application with the add-row feature. My problem is that I have to save XML data in some of the row fields, but when I post a new row whose fields contain raw XML to the server, I get an internal server error. By default jqGrid perhaps expects JSON or text data to be posted to the server when adding a new row. As a workaround I used the beforeSubmit method to convert the XML to JSON before sending it to the server, but when converting the JSON back to XML I lose the original structure of the XML. (FYI: I used JsonReaderWriterFactory on the server to convert it back.) Your help will be highly appreciated. Thanks.
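
    One frequent cause of 500 errors when posting raw markup to ASP.NET MVC is request validation rejecting the field (an assumption here, since the error details aren't shown). A sketch of letting XML through for a single action, with hypothetical names:

        using System.Web.Mvc;

        public class GridController : Controller
        {
            // Request validation blocks markup in form fields by default. Turning it
            // off for just this action lets the XML arrive intact -- at the cost of
            // having to validate/encode the payload yourself before persisting it.
            // (On .NET 4, requestValidationMode="2.0" in web.config may also be needed.)
            [HttpPost]
            [ValidateInput(false)]
            public ActionResult AddRow(string id, string xmlPayload)  // names are hypothetical
            {
                // Persist xmlPayload here after validating it manually.
                return Json(new { success = true });
            }
        }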

    Read the article

  • Importing Excel to SQLCE

    - by ohadsc
    I have a table in Excel format (2007, but I can save as anything below that, naturally), and I have a SQL Compact Edition 3.5 SP1 database table with corresponding columns. I simply want to populate the SQLCE table with the data from the Excel file. The data consists of strings and integers only. I tried this utility to no avail; I also tried this SQL script, but it won't work since BULK INSERT is not supported in SQLCE. I also found this Microsoft tutorial, but I am basically clueless when it comes to SQL, providers and the like... Thanks!
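
    In the absence of BULK INSERT, one workable route (a sketch; the file paths, sheet name, table name, and column layout are all assumptions) is to read the worksheet through the ACE OLE DB provider and insert row by row with the SQL CE provider:

        using System;
        using System.Data;
        using System.Data.OleDb;
        using System.Data.SqlServerCe;  // reference System.Data.SqlServerCe.dll

        class ExcelToSqlCe
        {
            static void Main()
            {
                // Assumed paths, sheet name, and a two-column (text, number) layout.
                string excelConn = "Provider=Microsoft.ACE.OLEDB.12.0;" +
                                   @"Data Source=C:\data\input.xlsx;" +
                                   "Extended Properties='Excel 12.0;HDR=YES'";

                using (var src = new OleDbConnection(excelConn))
                using (var dst = new SqlCeConnection(@"Data Source=C:\data\target.sdf"))
                {
                    src.Open();
                    dst.Open();

                    var read = new OleDbCommand("SELECT Name, Quantity FROM [Sheet1$]", src);
                    var insert = new SqlCeCommand(
                        "INSERT INTO Items (Name, Quantity) VALUES (@name, @qty)", dst);
                    insert.Parameters.Add("@name", SqlDbType.NVarChar, 100);
                    insert.Parameters.Add("@qty", SqlDbType.Int);

                    using (var reader = read.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            insert.Parameters["@name"].Value = reader.GetString(0);
                            // Excel numeric cells usually come back as double.
                            insert.Parameters["@qty"].Value = Convert.ToInt32(reader.GetValue(1));
                            insert.ExecuteNonQuery();
                        }
                    }
                }
            }
        }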

    Read the article

  • ASP.NET MVC2: Getting textbox data from a view to a controller

    - by mr_plumley
    Hi, I'm having difficulty getting data from a textbox into a controller. I've read about a few ways to accomplish this in Sanderson's book, Pro ASP.NET MVC Framework, but haven't had any success. I've also run across a few similar questions online, but haven't had any success there either. It seems like I'm missing something rather fundamental. Currently, I'm trying to use the action-method-parameters approach. Can someone point out where I'm going wrong, or provide a simple example? Thanks in advance! Using Visual Studio 2008, ASP.NET MVC2 and C#. What I would like to do is take the data entered in the "Investigator" textbox and use it to filter investigators in the controller. I plan on doing this in the List method (which is already functional), but I'm using the SearchResults method for debugging. Here's the textbox code from my view, SearchDetails:

        <h2>Search Details</h2>
        <% using (Html.BeginForm()) { %>
            <fieldset>
                <%= Html.ValidationSummary() %>
                <h4>Investigator</h4>
                <p>
                    <%= Html.TextBox("Investigator") %>
                    <%= Html.ActionLink("Search", "SearchResults") %>
                </p>
            </fieldset>
        <% } %>

    Here is the code from my controller, InvestigatorsController:

        private IInvestigatorsRepository investigatorsRepository;

        public InvestigatorsController(IInvestigatorsRepository investigatorsRepository)
        {
            // IoC:
            this.investigatorsRepository = investigatorsRepository;
        }

        public ActionResult List()
        {
            return View(investigatorsRepository.Investigators.ToList());
        }

        public ActionResult SearchDetails()
        {
            return View();
        }

        public ActionResult SearchResults(SearchCriteria search)
        {
            string test = search.Investigator;
            return View();
        }

    I have an Investigator class:

        [Table(Name = "INVESTIGATOR")]
        public class Investigator
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = false, AutoSync = AutoSync.OnInsert)]
            public string INVESTID { get; set; }
            [Column]
            public string INVEST_FNAME { get; set; }
            [Column]
            public string INVEST_MNAME { get; set; }
            [Column]
            public string INVEST_LNAME { get; set; }
        }

    and I created a SearchCriteria class to see if I could get MVC to push the search criteria data into it and grab it in the controller:

        public class SearchCriteria
        {
            public string Investigator { get; set; }
        }

    I'm not sure if project layout has anything to do with this, but I'm using the three-project approach suggested by Sanderson: DomainModel, Tests, and WebUI. The Investigator and SearchCriteria classes are in the DomainModel project, and the other items mentioned here are in the WebUI project. Thanks again for any hints, tips, or simple examples! Mike
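
    A likely culprit, for what it's worth (an assumption, since only fragments are shown): Html.ActionLink renders a plain hyperlink, which navigates to SearchResults without posting the form, so the textbox value never leaves the page. Replacing the link with <input type="submit" value="Search" /> inside the Html.BeginForm() block makes the browser POST the field, and an action like the following sketch receives it:

        using System.Web.Mvc;

        public class SearchController : Controller  // hypothetical stand-in controller
        {
            // Html.BeginForm() with no arguments posts back to the same URL, so the
            // [HttpPost] overload of the rendering action receives the form fields;
            // the default model binder fills SearchCriteria.Investigator from the
            // textbox named "Investigator".
            [HttpPost]
            public ActionResult SearchDetails(SearchCriteria search)
            {
                string term = search.Investigator;  // the text the user typed
                return View("SearchResults");
            }
        }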

    Read the article

  • SQL query to return data where both criteria exist in one table

    - by Ali
    Dear all, I have two tables of data with the following fields:

        table1 = (ITTAG, ITCODE, ITDESC, SUPCODE)
        table2 = (ACCODE, ACNAME, ROUTE, SALMAN)

    table2 is my customer master table, containing customer data such as customer code, customer name, and so on. Every route has a supervisor (table1.SUPCODE), and I need to get the supervisor name in my result; both the supervisor name and code exist in one table. table1 contains all names, distinguished by ITTAG: for example, a supervisor's ITTAG = 'K', while a salesman's ITTAG = 'S'.

        ITTAG  ITCODE  ITDESC      SUPCODE
        -----  ------  ----------  -------
        S      JT      JOHN TOMAS  TF
        K      WK      VIKI KOO    NULL

    Now this is the result I want:

        ACCODE   ACNAME     ROUTE  SALEMANNAME  SUPERVISORNAME
        -------  ---------  -----  -----------  --------------
        IMC1010  ABC HOTEL  01     JOHN TOMAS   VIKI KOO

    I hope this information is sufficient to get the query. Thanks, Ali

    Read the article

  • Extracting, then passing raw data into another class - how to avoid copying twice while maintaining encapsulation?

    - by Kache4
    Consider a class Book with an STL container of class Page; each Page holds a screenshot, like page10.jpg, in raw vector<char> form. A Book is opened with a path to a zip, rar, or directory containing these screenshots, and uses the respective method of extracting the raw data, like ifstream's inFile.read(buffer, size), or unzReadCurrentFile(zipFile, buffer, size). It then calls the Page(const char* stream, int filesize) constructor. Right now, the raw data is clearly being copied twice: once when extracted into Book's local buffer, and a second time in the Page constructor, into the Page's vector<char>. Is there a way to get rid of the middleman buffer while maintaining encapsulation?

    Read the article
