Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.


  • Why is Tokyo Tyrant so slow?

    - by Tantra
    I have the following situation: a Tyrant server launched on a FreeBSD host like this:

        ttserver -uas -log /data/tyrant/1.log -sid 1 -thnum 8 -tout 5 /data/tyrant/data/1.tct

    I try to talk to this server from Windows using Python and pyrant-0.3.5, like this:

        import pyrant
        import time

        t = pyrant.Tyrant(host="192.168.0.220", port=1978)
        tbegin = time.time()
        for i in xrange(4000000):
            if i and ((i % 10000) == 0):
                print time.time() - tbegin
                tbegin = time.time()
            t[i] = {"text": "ruslan text", "value": i}

    I get what I think is very slow performance: about 5-6 seconds per 10,000 records. But if I run the same code on the same machine as the server (ttserver), performance is good: about 0.5 seconds per 10,000 records. What can I do to work around this problem?

    Read the article

  • Popularity Algorithm - SQL / Django

    - by RadiantHex
    Hi folks, I've been looking into popularity algorithms used on sites such as Reddit, Digg and even Stack Overflow. The Reddit algorithm:

        t = (time of entry post) - (Dec 8, 2005)
        x = upvotes - downvotes
        y = 1 if x > 0; 0 if x = 0; -1 if x < 0
        z = 1 if x < 0; otherwise x
        score = log(z) + (y * t) / 45000

    I have always performed simple ordering within SQL, and I'm wondering how I should deal with ordering like this. Should the score be precomputed and stored in a table column, or could I build the ordering formula into the SQL query itself (without hindering performance)? I am also wondering whether it is possible to use different ordering algorithms on different occasions without running into performance problems. I'm using Django and PostgreSQL. Help would be much appreciated! ^^

    Read the article

  • Push data to Flex clients

    - by KensoDev
    Howdy, I want to push data to Flex clients: anywhere between 5,000 and 15,000 concurrent users, each of whom needs to get data every time a currency changes, which means lots of changes for lots of users. I have been looking into WebORB.NET, but the performance seems very poor (around 100 concurrent users) for a product so pricey (we purchased a license). So I have to look into alternatives. I know there's FluorineFx, but it seems no one is really using it in shipping products, and it is lacking in examples and documentation. My question is: what products can answer my needs (.NET backend), and what performance can I expect out of them? Thanks

    Read the article

  • Visual Artifacts in Visual Studio 2010

    - by Simon Chadwick
    I'm using VS 2010 on Windows Server 2003, running on a Dell Inspiron 9400 laptop. VS 2010 runs fine, except for persistent and random screen-redrawing issues. Samples of these are here. The artifacts occur as the mouse moves over items that highlight on a mouse-over event, while scrolling, and when switching tabs. VS 2008 has none of these issues, so I assume it is related to VS 2010's use of WPF. Could it be that my video card or driver is not up to the task of rendering WPF? Some other WPF applications (not Silverlight) also have some of these screen-repainting problems. I have tried a variety of settings in System Properties > Advanced > Performance Options > Visual Effects, and in the related "Advanced" tab, processor scheduling is adjusted for best performance of programs. Many thanks for any suggestions!

    Read the article

  • Recordset manipulation in SSIS

    - by JSacksteder
    In my SSIS job, I need to accumulate a set of rows and commit them all transactionally when processing has completed successfully. If this were pure SQL, I would use a temp table inside a transaction. In SSIS there are a number of issues complicating this: it is difficult to have multiple components share the same transaction, and temp tables that do not exist at design time are a pain. If I use recordsets inside SSIS for this purpose, there are other issues. My understanding is that an 'Execute SQL' component will re-initialize the recordset when it runs, so I can't use it to append an additional row. Is there a way to create an OLE DB connection that references an in-memory recordset? Is there a better way to achieve this result?

    Read the article

  • Database design for a bug tracking system in SQL

    - by Yasser
    I am an ASP.NET developer and I really don't do much database work. But for my open source project on CodePlex I am required to set up a database schema for the project, so reading from here and there I have managed to do the following. Being new to database schema design, I wanted someone with a better grasp of the topic to help me identify any issues with this design. Most of the relationships are self-explanatory, I think, but I will still jot each one down. The two keys between UserProfile and Issues are for the relationships between UserId and IssueCreatedBy/IssueClosedBy. Thanks

    Read the article

  • error 2016: Condition cannot be specified for Column member

    - by IP
    I am having some issues with Entity Framework in VS 2010. The problem I'm getting is described very well here: http://social.msdn.microsoft.com/Forums/en/adonetefx/thread/cacf6a76-09a8-4c90-9502-d8b87c2f6bea. It basically happens when a foreign key points at the primary key of another table... but if I take off StoreGeneratedPattern="Identity", then it tries to insert a value into the identity field.

    EDIT: What this seems to be is that EF4 can't handle a nullable relationship when the primary key is set to StoreGeneratedPattern="Identity". If I create a FK pointing to this primary key and make it nullable (effectively creating a 0..M relationship), then it throws this compilation error. Removing StoreGeneratedPattern="Identity" fixes the issue, but causes issues elsewhere. It works if the foreign key is set to not nullable.

    Read the article

  • My headset doesn't work [closed]

    - by Kristian Flatheim Jensen
    Hello! I am not sure if this question fits on this site; if it doesn't, please let me know and I'll remove it. Here is my problem: I just got a SteelSeries headset for Christmas (yay!) and quickly plugged it into my MacBook Pro. The audio output works fine, but I am having issues with the audio input. I had these kinds of problems before, and I think my Mac may be broken. But then I saw this post: http://www.biloca.com/blog/?p=25 where they talked about some power issues with the microphone on the Mac Mini. I did not quite get what they were discussing, so I have a simple question: does anyone have an idea why my microphone doesn't work? Please help! Best Regards, Kristian

    Read the article

  • Insert row numbers repeatedly into records in T-SQL

    - by jeff
    Hi, I want to insert a row number into records, counting rows within a repeating range. Example output:

        RowNumber  ID  Name
        1          20  a
        2          21  b
        3          22  c
        1          23  d
        2          24  e
        3          25  f
        1          26  g
        2          27  h
        3          28  i
        1          29  j
        2          30  k

    I would rather use ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...), but my real records do not contain a column that groups them into runs of 1-3. I already tried looping over each record to insert the 1-3 count, but the loop hurts the performance of the query. The query will be used for an RDL report, which is why its performance must be as good as possible. Any suggestions are welcome. Thanks
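
    A common workaround, rather than looping, is to derive the repeating 1-3 counter from ROW_NUMBER() with modulo arithmetic, so no partitioning column is needed. A minimal sketch of the idea from a .NET client; the table and column names are the ones in the example above, and the connection string is a placeholder:

        using System;
        using System.Data.SqlClient;

        class RepeatingRowNumber
        {
            static void Main()
            {
                // ((ROW_NUMBER() - 1) % 3) + 1 yields 1, 2, 3, 1, 2, 3, ...
                // across the whole result set, no PARTITION BY column required.
                const string sql = @"
                    SELECT ((ROW_NUMBER() OVER (ORDER BY ID) - 1) % 3) + 1 AS RowNumber,
                           ID, Name
                    FROM MyTable
                    ORDER BY ID;";

                using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}  {1}  {2}", reader[0], reader[1], reader[2]);
                }
            }
        }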

    Read the article

  • Assert parameters in a table-valued UDF

    - by Clay Lenhart
    Is there a way to create "asserts" on the parameters of a table-valued UDF? I'd like to use a table-valued UDF for performance reasons; however, I know that certain parameter combinations (like start and end dates that are more than a month apart) will cause performance issues on the server for all users. End users query the database via Excel using UDFs. UDFs (and table-valued UDFs in particular) are useful when the data is too large for Excel. Users write simple SQL queries that categorize the data into groups to reduce the number of rows. For example, a user who is interested in weekly aggregates rather than hourly ones writes a GROUP BY SELECT statement that reduces the row count by a factor of 24x7 = 168. I know I can write RAISERROR statements in multistatement UDFs, but table-valued UDFs are integrated into the query optimizer, so queries are more efficient with table-valued UDFs. So, can I define assertions on the parameters passed to a table-valued UDF?
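
    One trick often suggested for parameter "asserts" in an inline TVF is to make a failed check force a CAST conversion error at run time, which aborts the query with a readable message while keeping the function inline (and thus optimizer-friendly). A hedged sketch; the function, table, and column names are hypothetical, and the caveats in the comments matter:

        using System;
        using System.Data.SqlClient;

        class DeployAssertingUdf
        {
            static void Main()
            {
                // Caveats: the check only fires for rows the engine actually
                // evaluates (an empty result set raises nothing), and constant
                // folding can occasionally evaluate the ELSE branch early, so
                // test this on your server before relying on it.
                const string createUdf = @"
                    CREATE FUNCTION dbo.GetReadings (@start datetime, @end datetime)
                    RETURNS TABLE
                    AS RETURN
                        SELECT r.ReadingTime, r.Value
                        FROM dbo.Readings AS r
                        WHERE r.ReadingTime >= @start
                          AND r.ReadingTime < @end
                          AND 1 = CASE WHEN DATEDIFF(day, @start, @end) <= 31 THEN 1
                                       ELSE CAST('Date range must be a month or less' AS int)
                                  END;";

                using (var conn = new SqlConnection(@"Server=.;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand(createUdf, conn))
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }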

    Read the article

  • How to build Lucene/Solr from source code in a Windows environment in order to add patches

    - by Simon
    I have successfully implemented Apache's Solr for free-text searching on a database-driven web site built for Windows platforms using Visual Studio in C#. I am trying to get a version of Solr working with field collapsing (which is not in the release version). There are patches available from Apache, and discussions on the web of people successfully applying them for the version I am using, but my problem is that I cannot get the build to work. I am a C# coder on Windows platforms, so Java development is new to me. I understand I need to get the correct source code (and revision) from SVN, apply the appropriate patches, then build the war file to deploy to my system. I cannot seem to get the source to build and produce the deployment code, including jar (and subsequent war) files.

    My system is: Windows 7 Ultimate for development; Visual Studio 2010 for C#/JavaScript development; MyEclipse 8.6 / Eclipse 3.5 for the Java build from source; the Subclipse 1.6.x SVN plugin to get the source from Apache's SVN; Apache Solr 1.4.1.

    So far I have found the right patches for the function I need: https://issues.apache.org/jira/browse/SOLR-236. Specifically I need to apply field_collapsing_1.1.0.patch (https://issues.apache.org/jira/secure/attachment/12357681/field_collapsing_1.1.0.patch) and SOLR-236-1_4_1.patch (https://issues.apache.org/jira/secure/attachment/12448216/SOLR-236-1_4_1.patch). I downloaded the Lucene trunk from the day before the patch was released (revision 958303 from 28/6/10) via Subclipse into a Java package in MyEclipse from https://svn.apache.org/repos/asf/lucene/dev/trunk (Solr is the web implementation of Lucene and lives in the subfolder solr/).

    I can apply patches to the solr directory once it has downloaded, but the parent Lucene project doesn't build the war files or copy the jar and other files into the bin folder (it stays empty). The build process starts but doesn't do anything apart from creating the folders bin and src. I am building the whole Lucene project, which contains Solr. I have tried building the source without patching, and the same happens. If I copy the Solr directory out into a new project, it runs the build and copies all the related files, tests, etc., but fails with 4,500 errors and does not produce the jar files or war file, which I assume is because it can't find the Lucene trunk files it depends on.

    I have two interrelated problems: 1) I can't get the downloaded Lucene trunk to build; 2) the jar, war and associated files are not created. Can anyone help with what I am missing to build the war file? I have spent two days getting this far, as the help online is extremely patchy and I can't find a walkthrough tutorial on building a Java war file from source in a Windows environment. Any help will be much appreciated. Simon

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why start now? The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today. It also sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores sharing a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring code modifications to take full advantage of it?

    Limitations: our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.
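
    For a concrete sense of the adoption cost, the simplest entry point is Parallel.For over an embarrassingly parallel loop: iterations must be independent, and the runtime handles partitioning across however many cores (and whatever cache topology) the machine has. A minimal sketch, with ProcessItem standing in as a placeholder for real per-item work:

        using System;
        using System.Threading.Tasks;

        class TplSketch
        {
            // Placeholder for the engine's per-item work (hypothetical).
            static double ProcessItem(int i)
            {
                return Math.Sqrt(i) * Math.Sin(i);
            }

            static void Main()
            {
                double[] results = new double[1000000];

                // Sequential baseline:
                // for (int i = 0; i < results.Length; i++) results[i] = ProcessItem(i);

                // TPL version: the runtime partitions the range across the
                // available cores; each iteration must be independent.
                Parallel.For(0, results.Length, i =>
                {
                    results[i] = ProcessItem(i);
                });

                Console.WriteLine("Done: {0} items", results.Length);
            }
        }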

    Read the article

  • Entity Framework Update Entity along with child entities (add/update as necessary)

    - by Jorin
    I have a many-to-many relationship between Issues and Scopes in my EF context. In ASP.NET MVC, I bring up an Edit form that allows the user to edit a particular Issue. At the bottom of the form is a list of checkboxes that lets them select which scopes apply to this issue. When editing an issue, it will likely already have some scopes associated with it, and those boxes will already be checked. However, the user has the opportunity to check more scopes or remove some of the currently checked ones. My code looked something like this to save just the Issue:

        using (var edmx = new MayflyEntities())
        {
            Issue issue = new Issue { IssueID = id, TSColumn = formIssue.TSColumn };
            edmx.Issues.Attach(issue);
            UpdateModel(issue);
            if (ModelState.IsValid)
            {
                //if (edmx.SaveChanges() != 1) throw new Exception("Unknown error. Please try again.");
                edmx.SaveChanges();
                TempData["message"] = string.Format("Issue #{0} successfully modified.", id);
            }
        }

    When I tried to add the logic to save the associated scopes, I tried several things, but ultimately this is what made the most sense to me:

        using (var edmx = new MayflyEntities())
        {
            Issue issue = new Issue { IssueID = id, TSColumn = formIssue.TSColumn };
            edmx.Issues.Attach(issue);
            UpdateModel(issue);
            foreach (int scopeID in formIssue.ScopeIDs)
            {
                var thisScope = new Scope { ID = scopeID };
                edmx.Scopes.Attach(thisScope);
                thisScope.ProjectID = formIssue.ProjectID;
                if (issue.Scopes.Contains(thisScope))
                {
                    issue.Scopes.Attach(thisScope); // the scope already exists
                }
                else
                {
                    issue.Scopes.Add(thisScope); // the scope needs to be added
                }
            }
            if (ModelState.IsValid)
            {
                //if (edmx.SaveChanges() != 1) throw new Exception("Unknown error. Please try again.");
                edmx.SaveChanges();
                TempData["message"] = string.Format("Issue #{0} successfully modified.", id);
            }
        }

    Unfortunately, that just throws the following exception: "An object with the same key already exists in the ObjectStateManager. The ObjectStateManager cannot track multiple objects with the same key." What am I doing wrong?
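
    That exception is typically a sign that the same Scope key ends up tracked twice (attached stubs colliding with entities the context already knows about). One way around it is to let the context load the issue with its current scopes, then reconcile that collection against the posted IDs, so each Scope is attached at most once. A sketch against an EF4 ObjectContext, meant as the body of the controller action with System.Linq in scope; the model and property names follow the question, and the rest is assumed:

        using (var edmx = new MayflyEntities())
        {
            // Load the issue together with its existing scope links so the
            // context tracks each Scope exactly once.
            var issue = edmx.Issues.Include("Scopes").First(i => i.IssueID == id);
            UpdateModel(issue);

            // Drop links for scopes that were unchecked in the form.
            foreach (var scope in issue.Scopes
                                       .Where(s => !formIssue.ScopeIDs.Contains(s.ID))
                                       .ToList())
                issue.Scopes.Remove(scope);

            // Add links for newly checked scopes, attaching stubs for rows
            // that already exist in the database.
            var currentIds = issue.Scopes.Select(s => s.ID).ToList();
            foreach (int scopeID in formIssue.ScopeIDs.Except(currentIds))
            {
                var stub = new Scope { ID = scopeID };
                edmx.Scopes.Attach(stub); // existing row: attach, don't add
                issue.Scopes.Add(stub);   // creates only the join-table link
            }

            if (ModelState.IsValid)
            {
                edmx.SaveChanges();
                TempData["message"] = string.Format("Issue #{0} successfully modified.", id);
            }
        }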

    Read the article

  • SQL Query to delete oldest rows over a certain row count?

    - by Casey
    I have a table that contains log entries for a program I'm writing. I'm looking for ideas for an SQL query (I'm using SQL Server Express 2005) that will keep the newest X records and delete the rest. I have a datetime column that is a timestamp for the log entry. I figure something like the following would work, but I'm not sure of the performance of the IN clause for larger numbers of records. Performance isn't critical, but I might as well do the best I can the first time.

        DELETE FROM MyTable
        WHERE PrimaryKey NOT IN
            (SELECT TOP 10000 PrimaryKey
             FROM MyTable
             ORDER BY TimeStamp DESC)
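
    If the NOT IN version turns out to be slow, an alternative that SQL Server 2005 supports is a ROW_NUMBER() CTE, which you can delete through directly and which avoids the subquery entirely. A sketch from the client; the table and column names are the ones in the question, and the connection string is a placeholder:

        using System;
        using System.Data.SqlClient;

        class TrimLogTable
        {
            static void Main()
            {
                // Rank rows newest-first; everything ranked past 10000 is deleted.
                const string sql = @"
                    WITH Ranked AS
                    (
                        SELECT ROW_NUMBER() OVER (ORDER BY TimeStamp DESC) AS rn
                        FROM MyTable
                    )
                    DELETE FROM Ranked WHERE rn > 10000;";

                using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=MyDb;Integrated Security=true"))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    conn.Open();
                    int removed = cmd.ExecuteNonQuery();
                    Console.WriteLine("Deleted {0} old log rows.", removed);
                }
            }
        }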

    Read the article

  • Weaknesses of Hibernate

    - by Sinuhe
    I would like to know the weak points of Hibernate 3. This is not intended to be a thread against Hibernate; I think this knowledge will be very useful when deciding whether Hibernate is the best option for a project, or when estimating its time. A weakness can be: a bug; a case where JDBC or PL/SQL is better; performance issues; etc. It would also be useful to know solutions to those problems, better ORMs or techniques, or whether the issue will be corrected in Hibernate 4. For example, AFAIK, Hibernate will have very bad performance compared to JDBC when updating 10,000 rows with a query like:

        update A set state = 3 where state = 2

    Read the article

  • Best way to encrypt certain fields in SQL Server 2008?

    - by Josh
    I'm writing a .NET web app that will read and write information to a SQL Server 2008 backend database. Some of this information will be highly confidential in nature, so I want to encrypt certain data elements. I don't want to use TDE or any full-database encryption, for performance reasons. My main concern is protecting this sensitive data as a last resort against a SQL injection or even a database server compromise. My question is: what is the best way to do this while preserving performance? Is it faster to use the SQL Server 2008 encryption functions such as EncryptByKey, or would it be faster to encrypt and decrypt the data in the .NET web app itself, using a symmetric key stored in the secured web.config, and store the encrypted values in the DB?
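
    As a data point for the app-side option, the symmetric primitives in System.Security.Cryptography are straightforward to wrap, and SQL Server then only ever sees ciphertext (a varbinary column). A minimal sketch; in a real app the key and IV would come from the protected web.config section rather than being generated per run, and key management is the genuinely hard part:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        static class FieldCrypto
        {
            static byte[] Encrypt(string plaintext, byte[] key, byte[] iv)
            {
                using (var aes = Aes.Create())
                using (var enc = aes.CreateEncryptor(key, iv))
                using (var ms = new MemoryStream())
                {
                    using (var cs = new CryptoStream(ms, enc, CryptoStreamMode.Write))
                    using (var sw = new StreamWriter(cs))
                        sw.Write(plaintext);
                    return ms.ToArray(); // store this in a varbinary column
                }
            }

            static string Decrypt(byte[] ciphertext, byte[] key, byte[] iv)
            {
                using (var aes = Aes.Create())
                using (var dec = aes.CreateDecryptor(key, iv))
                using (var ms = new MemoryStream(ciphertext))
                using (var cs = new CryptoStream(ms, dec, CryptoStreamMode.Read))
                using (var sr = new StreamReader(cs))
                    return sr.ReadToEnd();
            }

            static void Main()
            {
                // Placeholder key/IV, generated per run for the demo only;
                // in the web app, load them from the encrypted config section.
                using (var aes = Aes.Create())
                {
                    byte[] blob = Encrypt("123-45-6789", aes.Key, aes.IV);
                    Console.WriteLine(Decrypt(blob, aes.Key, aes.IV));
                }
            }
        }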

    Read the article

  • ASMX Still slow after 'Generate serialization assembly'

    - by Buzzer
    This question is related to: http://stackoverflow.com/questions/784918/asmx-web-service-slow-first-request. I inherited a proxy to a legacy ASMX service. Basically, as the post above states, the first call is literally 10 times slower than subsequent calls. I went ahead and turned on 'Generate serialization assembly' on the project that contains the proxy, and the 'serializers' assembly is actually generated. However, I haven't seen any performance increase at all. Do I need to do anything else, other than making sure the 'serializers' assembly is in the client's bin directory? Do I have to 'link' the proxy to the 'serializers' assembly during proxy generation (wsdl.exe)? I guess I'm stuck at this point. John Saunders, where are you at? :)

    Read the article

  • fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files, and I need the best possible performance from XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import only works with files). Alternatively, I could cache many XslCompiledTransform instances, one per template. Is that reasonable? Are there other ways? Which is best? And what other tips are there for improving the performance of the MS XSLT processor?
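
    A common pattern is a lazily filled static dictionary of XslCompiledTransform instances keyed by stylesheet path: an XslCompiledTransform is thread-safe for Transform calls once Load has completed, so one instance per stylesheet can serve all requests. Loading from the file path with an XmlUrlResolver (rather than from a pre-parsed IXPathNavigable) also gives the processor a base URI, so xsl:import and xsl:include can resolve relative hrefs. A sketch:

        using System;
        using System.Collections.Generic;
        using System.Xml;
        using System.Xml.Xsl;

        static class XsltCache
        {
            static readonly Dictionary<string, XslCompiledTransform> cache =
                new Dictionary<string, XslCompiledTransform>(StringComparer.OrdinalIgnoreCase);
            static readonly object sync = new object();

            public static XslCompiledTransform Get(string stylesheetPath)
            {
                lock (sync)
                {
                    XslCompiledTransform xslt;
                    if (!cache.TryGetValue(stylesheetPath, out xslt))
                    {
                        xslt = new XslCompiledTransform();
                        // Loading by path supplies a base URI, so imports and
                        // includes with relative hrefs resolve correctly.
                        xslt.Load(stylesheetPath, XsltSettings.Default, new XmlUrlResolver());
                        cache[stylesheetPath] = xslt;
                    }
                    return xslt;
                }
            }
        }

        // Usage (paths are placeholders):
        // XsltCache.Get(@"C:\styles\report.xsl").Transform(inputXmlPath, outputHtmlPath);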

    Read the article

  • Simplified iPhone in-app store implementation for built-in product features

    - by Joey
    This question is for those familiar with implementing the iPhone in-app purchase functionality. The app I'm building has only built-in features that are unlocked when purchased. Further, any modifications or additions to store items will require an app update. Also, it is only in English, so the items have no localized languages. Given those assumptions, is it feasible to skip the step of retrieving the product info with SKProductsRequest and simply use hardcoded data within the app? While I may want to extend my app to greater complexity in the future, I'd like to know whether this simplification would introduce serious issues. One issue might be, for instance, that a few of the items could occasionally be unavailable due to issues on Apple's side, and simply attempting the purchase and letting it fail would not be a workable option in that case (especially if it is uncommon). Thanks.

    Read the article

  • What are some best practices and "rules of thumb" for creating database indexes?

    - by Ash
    I have an app which cycles through a huge number of records in a database table and performs a number of SQL and .NET operations on records within that database (currently I am using Castle ActiveRecord on PostgreSQL). I added some basic B-tree indexes on a couple of the fields and, as you would expect, the performance of the SQL operations increased substantially. Wanting to make the most of DBMS performance, I want to make better-educated choices about what I should index on all my projects. I understand that there is a detriment to performance when doing inserts (as the database needs to update the index as well as the data), but what suggestions and best practices should I consider when creating database indexes? How do I best select the fields, or combinations of fields, for a set of database indexes (rules of thumb)? Also, how do I best select which index to use as a clustered index? And when it comes to the access method, under what conditions should I use a B-tree over a hash, a GiST, or a GIN index (and what are they, anyway)?

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java, they say don't concatenate Strings; instead you should create a StringBuffer, keep adding to that, and then when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings creates lots of temporary objects. But if the goal were performance, you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than it is to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is: where is the flaw in the logic that says you either use a faster chip or you spend extra time writing more efficient software?

    Read the article

  • Requirements of an issue/bug tracker

    - by James Brooks
    I've been looking at various issue/bug trackers available on the net. There are some very good ones, but I'm unable to use them, as my server does not support Perl/Ruby (for example). I'm not too bothered, however, because I can write code in PHP and as such would prefer something in that language. So I've taken it upon myself to write a custom issue tracker system. As of now it's in the early planning stages, and before I continue, I'd like to find out what people need from such an application. My current list of things to add includes:

    - Creating/editing/deleting issues, at both user and admin level
    - Related issues (similar to Stack Overflow)
    - Admins able to create builds/milestones and version control of projects
    - Admins able to assign users/groups to a project
    - Roadmap of projects
    - Possible SVN integration; with Git?

    What do you think? There are a couple more things I'd like to add, but I'm sure you'll think of a better way of adding such features. What would you like to see from an issue tracker?

    Read the article
