Search Results

Search found 19555 results on 783 pages for 'job performance'.

Page 304 of 783

  • PyOpenGL: Could it replace C++?

    - by Tom
    Hi everyone. I'm starting a computer graphics course, and I have to choose a language. The choices are C++ and Python. I have no problem with C++; Python is a work in progress for me. So I was thinking of going down the Python road, using PyOpenGL for the graphics part. I have heard, though, that performance is an issue. Is Python/PyOpenGL mature enough to challenge C++ on performance? I realize it's a long shot, but I'd like to hear your thoughts on and experiences with PyOpenGL. Thanks in advance.

    Read the article

  • Popularity Algorithm - SQL / Django

    - by RadiantHex
    Hi folks, I've been looking into popularity algorithms used on sites such as Reddit, Digg and even Stack Overflow. The Reddit algorithm:

        t = (time of entry post) - (Dec 8, 2005)
        x = upvotes - downvotes
        y = 1 if x > 0, 0 if x = 0, -1 if x < 0
        z = 1 if x < 0, otherwise x
        score = log(z) + (y * t) / 45000

    I have always performed simple ordering within SQL, so I'm wondering how I should deal with this kind of ordering. Should it be used to define a table, or could I build the formula into the SQL ordering itself (without hindering performance)? I am also wondering whether it is possible to use multiple ordering algorithms on different occasions without running into performance problems. I'm using Django and PostgreSQL. Help would be much appreciated! ^^
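
    For illustration, here is a minimal Java sketch of the score as written above. It is only a sketch: the class, method and parameter names are made up, and z is guarded with max(|x|, 1) so the logarithm is always defined, which is how this formula is usually quoted.

        import java.time.Duration;
        import java.time.Instant;

        public class HotScore {
            // Epoch used by the formula above (Dec 8, 2005, assumed UTC midnight).
            private static final Instant EPOCH = Instant.parse("2005-12-08T00:00:00Z");

            /** Computes log(z) + (y * t) / 45000 for a post created at the given instant. */
            public static double score(Instant created, long upvotes, long downvotes) {
                long t = Duration.between(EPOCH, created).getSeconds(); // seconds since the epoch above
                long x = upvotes - downvotes;
                int y = Long.signum(x);                // 1, 0 or -1
                long z = Math.max(Math.abs(x), 1);     // guard so log() is defined
                return Math.log10(z) + (y * t) / 45000.0;
            }

            public static void main(String[] args) {
                System.out.println(score(Instant.now(), 120, 25));
            }
        }

    In practice the same expression can be pushed into an ORDER BY clause or kept in a precomputed score column that is refreshed periodically, so the database does the ordering rather than application code.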

    Read the article

  • Insert a repeating row number into records in T-SQL

    - by jeff
    Hi, I want to insert a row number into my records that counts rows in a repeating range. Example output:

        RowNumber  ID  Name
        1          20  a
        2          21  b
        3          22  c
        1          23  d
        2          24  e
        3          25  f
        1          26  g
        2          27  h
        3          28  i
        1          29  j
        2          30  k

    I would rather use ROW_NUMBER() OVER (PARTITION BY ... ORDER BY column name), but my real records don't contain a column that partitions them into groups of three. I already tried looping over each record to insert a 1-3 row count, but the loop hurts the performance of the query. The query will be used for an RDL report, which is why its performance must be as good as possible. Any suggestions are welcome. Thanks
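
    Setting T-SQL aside for a moment, the numbering being asked for is just modular arithmetic over an already-ordered sequence. A minimal Java sketch with hypothetical data (in T-SQL the same mapping is typically built from ROW_NUMBER() and the modulo operator):

        import java.util.List;

        public class CyclicNumbering {
            public static void main(String[] args) {
                // Hypothetical IDs, already in the order the report needs.
                List<Integer> ids = List.of(20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30);
                int cycle = 3; // restart the row number every 3 rows
                for (int i = 0; i < ids.size(); i++) {
                    int rowNumber = (i % cycle) + 1; // 1, 2, 3, 1, 2, 3, ...
                    System.out.println(rowNumber + "  " + ids.get(i));
                }
            }
        }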

    Read the article

  • Assert parameters in a table-valued UDF

    - by Clay Lenhart
    Is there a way to create "asserts" on the parameters of a table-valued UDF? I'd like to use a table-valued UDF for performance reasons; however, I know that certain parameter combinations (like start and end dates that are more than a month apart) will cause performance issues on the server for all users. End users query the database via Excel using UDFs. UDFs (and table-valued UDFs in particular) are useful when the data is too large for Excel. Users write simple SQL queries that categorize the data into groups to reduce the number of rows. For example, a user may be interested in weekly aggregates rather than hourly ones, and writes a GROUP BY SELECT statement that reduces the row count by a factor of 24x7 = 168. I know I can write RAISERROR statements in multi-statement UDFs, but table-valued UDFs are integrated into the query optimizer, so queries are more efficient with table-valued UDFs. So, can I define assertions on the parameters passed to a table-valued UDF?

    Read the article

  • How to properly use Gearman with my PHP application?

    - by luckytaxi
    For now I just want to use Gearman for background processing. For example, I need to email a recipient that they have a private message waiting for them once the sender submits the message into the DB. I assume I can run the worker/client and the server on my primary server, but I have no problem offloading some of the tasks to a different web server. Anyway, my question is: how do I handle multiple "functions"? Let's say I need a job that handles the email portion and a job that handles image manipulation. Can I have multiple functions in one worker? I've followed a couple of examples I found online, but each example only shows one function being registered. Do I have to start up multiple workers to handle multiple functions?

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer.

    Why Start Now? The .NET 4.0 Task Parallel Library appears to be well designed, and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad-processor Dell PowerEdge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage: some of our customers can never get enough performance, and there is no doubt that we can build a faster product using TPL today. And it sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance.

    Why Wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6-core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA), which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it?

    Limitations: Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • What is a programmer's life like?

    - by Zee JollyRoger
    Imagine an 8-hour-long video of any "typical/average" programming job. What is it like? Before I get myself involved in that path, what can I expect? I am interested in gathering first-hand information and accounts of the typical life of a programmer. My goal is to grasp the fundamental concepts of working in the professional field of programming. I just want to "see" what it means to come into an entry-level programming job and program, and what kind of skills, mentality, expectations, and so on are required.

    Read the article

  • JDBC CommunicationsException with MySQL Database

    - by Dominik Siebel
    I'm having a little trouble with my MySQL connection pooling. This is the case: different jobs are scheduled via Quartz, and all jobs connect to different databases. This works fine the whole day, while the nightly scheduled jobs fail with a CommunicationsException.

    Quartz jobs:

        Job1 runs 0 0 6,10,14,18 * * ?
        Job2 runs 0 30 10,18 * * ?
        Job3 runs 0 0 5 * * ?

    As you can see, the last job runs at 18, taking about 1 hour to run. The first job, at 5 am, is the one that fails. I already tried all kinds of parameter combinations in my resource config; this is the one I am running right now:

        <!-- Database 1 (MySQL) -->
        <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
            maxActive="100" maxIdle="30" maxWait="10000"
            removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
            type="javax.sql.DataSource" name="jdbc/appDbProd"
            username="****" password="****"
            url="jdbc:mysql://127.0.0.1:3306/appDbProd?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8"
            testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
            validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" />

        <!-- Database 2 (MySQL) -->
        <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver"
            maxActive="100" maxIdle="30" maxWait="10000"
            removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
            type="javax.sql.DataSource" name="jdbc/prodDbCopy"
            username="****" password="****"
            url="jdbc:mysql://127.0.0.1:3306/prodDbCopy?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8"
            testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
            validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" />

        <!-- Database 3 (MSSQL) -->
        <Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver"
            maxActive="30" maxIdle="30" maxWait="100"
            removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true"
            name="jdbc/catalogDb" username="****" password="****"
            type="javax.sql.DataSource"
            url="jdbc:jtds:sqlserver://127.0.0.1:1433;databaseName=catalog;useNdTLMv2=false"
            testWhileIdle="true" testOnBorrow="true" testOnReturn="true"
            validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" />

    For obvious reasons I changed the IPs, usernames and passwords, but they can be assumed to be correct, seeing that the application runs successfully the whole day. The most annoying thing is: the first job that runs queries Database 2 successfully but fails to query Database 1 for some reason (CommunicationsException):

        Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 39,376,539 milliseconds ago. The last packet sent successfully to the server was 39,376,539 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem.

    Any ideas? Thanks!
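
    As an aside, beyond pool-level validation, a connection can also be checked in the job code itself right before use. A minimal sketch, assuming a Tomcat-style JNDI lookup of the first resource above (the helper name and the simple re-borrow strategy are made up for illustration):

        import java.sql.Connection;
        import javax.naming.InitialContext;
        import javax.sql.DataSource;

        public class JobDb {
            /** Borrows a connection from the pool and re-borrows once if it is no longer valid. */
            public static Connection getValidConnection() throws Exception {
                DataSource ds = (DataSource) new InitialContext()
                        .lookup("java:comp/env/jdbc/appDbProd");
                Connection con = ds.getConnection();
                if (!con.isValid(5)) {         // JDBC 4 liveness check with a 5 second timeout
                    con.close();               // give the stale connection back to the pool
                    con = ds.getConnection();  // and borrow a fresh one
                }
                return con;
            }
        }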

    Read the article

  • Weaknesses of Hibernate

    - by Sinuhe
    I would like to know the weak points of Hibernate 3. This is not intended to be a thread against Hibernate; I think it will be very useful knowledge for deciding whether Hibernate is the best option for a project, or for estimating its time. A weakness can be:

        A bug
        A case where JDBC or PL/SQL is better
        Performance issues
        ...

    It can also be useful to know some solutions for those problems, better ORMs or techniques, or whether they will be corrected in Hibernate 4. For example, AFAIK, Hibernate will have very bad performance compared to JDBC when updating 10,000 rows with this query:

        update A set state = 3 where state = 2
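
    On the bulk-update example, Hibernate 3 does offer HQL bulk updates that issue a single UPDATE instead of loading and dirty-checking every row; whether that closes the gap with plain JDBC here is an open question. A minimal sketch, assuming an entity named A with a mapped state property and an externally managed Session:

        import org.hibernate.Session;
        import org.hibernate.Transaction;

        public class BulkUpdateExample {
            /** Runs the update above as a single bulk HQL statement. */
            public static int resetState(Session session) {
                Transaction tx = session.beginTransaction();
                int updated = session.createQuery("update A set state = 3 where state = 2")
                                     .executeUpdate();
                tx.commit();
                return updated; // number of rows affected
            }
        }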

    Read the article

  • SQL Query to delete oldest rows over a certain row count?

    - by Casey
    I have a table that contains log entries for a program I'm writing. I'm looking for ideas for an SQL query (I'm using SQL Server Express 2005) that will keep the newest X records and delete the rest. I have a datetime column that is a timestamp for the log entry. I figure something like the following would work, but I'm not sure of the performance of the IN clause for larger numbers of records. Performance isn't critical, but I might as well do the best I can the first time.

        DELETE FROM MyTable
        WHERE PrimaryKey NOT IN
            (SELECT TOP 10000 PrimaryKey FROM MyTable ORDER BY TimeStamp DESC)

    Read the article

  • DB Design Question

    - by hazimdikenli
    I am designing an org chart. The model is almost ready, and it is simplified a bit for clarity here:

        OrgUnit (OrgUnitId, Name, ReportsToOrgUnitId, ...)
        OrgUnitJobs (OrgUnitJobId, OrgUnitId, JobName, ReportsToOrgUnitJobId, ..., IsJobGroup)
        Employee (EmployeeId, ...)
        OrgUnitJobEmployee (OrgUnitJobId, EmployeeId, AssignedDate, ...)

    I want to know every OrgUnit's manager employee (there should be one), and employees can have more than one job, but one of them has to be the main job, so I know who their manager is and other things. This is going to support a little workflow behind the scenes, which is why it is not a very simple org chart model. So what would you do: would you add a property like IsManager to the OrgUnitJobs model, or add ManagerOrgUnitJobId to the OrgUnit model, and why? Likewise, for employees, would you add an IsPrimaryJob property to the OrgUnitJobEmployee model, or add PrimaryJobId to the Employee model?

    Read the article

  • Best way to encrypt certain fields in SQL Server 2008?

    - by Josh
    I'm writing a .NET web app that will read and write information to a SQL Server 2008 backend database. Some of this information will be highly confidential in nature, so I want to encrypt certain data elements. I don't want to use TDE or any full-database encryption, for performance reasons. My main concern is protecting this sensitive data as a last resort against a SQL injection or even a database server compromise. My question is: what is the best way to do this while preserving performance? Is it faster to use the SQL 2008 encryption functions such as EncryptByKey, or would it be faster to encrypt and decrypt the data in the .NET web app itself, using a symmetric key stored in the secure web.config, and store the encrypted values in the DB?
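
    For a concrete picture of the second option (encrypting in the application tier and storing only ciphertext), here is a minimal sketch of application-side symmetric encryption. It is written in Java rather than .NET purely as a language-agnostic illustration, and the key handling is simplified: in the scenario above the key would come from protected configuration such as an encrypted web.config section, not be generated per run, and the sample value is made up.

        import java.nio.charset.StandardCharsets;
        import java.security.SecureRandom;
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.GCMParameterSpec;

        public class FieldCrypto {
            public static void main(String[] args) throws Exception {
                KeyGenerator kg = KeyGenerator.getInstance("AES");
                kg.init(256);
                SecretKey key = kg.generateKey();          // illustration only, see note above

                byte[] iv = new byte[12];                  // fresh random nonce for every value
                new SecureRandom().nextBytes(iv);

                Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
                enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
                byte[] ciphertext = enc.doFinal("123-45-6789".getBytes(StandardCharsets.UTF_8));
                // Store iv + ciphertext in the column instead of the plaintext value.

                Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
                dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
                System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
            }
        }

    Whichever tier does the work, the trade-off in the question stands: EncryptByKey keeps the logic in the database, while application-side encryption keeps the key off the database server entirely.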

    Read the article

  • fastest in-memory cache for XslCompiledTransform

    - by rudnev
    I have a set of XSLT stylesheet files. I need to get the fastest performance out of XslCompiledTransform, so I want to keep an in-memory representation of these stylesheets. I can load them into an in-memory collection as IXPathNavigable on application start, and then load each IXPathNavigable into a singleton XslCompiledTransform on each request. But this works only for stylesheets without xsl:import or xsl:include (xsl:import only works with files). Alternatively, I can cache many instances of XslCompiledTransform, one for each template. Is that reasonable? Are there other ways? Which is best? And what other tips are there for improving the performance of the MS XSLT processor?
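
    The general pattern here (compile each stylesheet once, cache the compiled object, and supply your own resolver so xsl:import does not have to hit the file system) also exists outside .NET. As a point of comparison only, not an XslCompiledTransform answer, a minimal Java/JAXP sketch, where a URIResolver plays roughly the role a stylesheet resolver plays in .NET:

        import java.io.StringReader;
        import java.io.StringWriter;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.xml.transform.Templates;
        import javax.xml.transform.TransformerConfigurationException;
        import javax.xml.transform.TransformerException;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class StylesheetCache {
            private static final Map<String, Templates> CACHE = new ConcurrentHashMap<>();

            /** Compiles each stylesheet once; the Templates object is thread-safe and reused. */
            public static Templates get(String path) {
                return CACHE.computeIfAbsent(path, p -> {
                    try {
                        TransformerFactory tf = TransformerFactory.newInstance();
                        // tf.setURIResolver(...) could serve xsl:import / xsl:include
                        // targets from memory instead of the file system.
                        return tf.newTemplates(new StreamSource(p));
                    } catch (TransformerConfigurationException e) {
                        throw new IllegalStateException(e);
                    }
                });
            }

            public static String transform(String stylesheetPath, String xml) throws TransformerException {
                StringWriter out = new StringWriter();
                get(stylesheetPath).newTransformer()
                        .transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
                return out.toString();
            }
        }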

    Read the article

  • ASMX Still slow after 'Generate serialization assembly'

    - by Buzzer
    This question is related to http://stackoverflow.com/questions/784918/asmx-web-service-slow-first-request. I inherited a proxy to a legacy ASMX service. Basically, as the post above states, the first call is literally 10 times slower than subsequent calls. I went ahead and turned on 'Generate serialization assembly' on the project that contains the proxy, and the 'serializers' assembly is actually generated. However, I haven't seen any performance increase at all. Do I need to do anything else other than make sure the 'serializers' assembly is in the client's bin directory? Do I have to 'link' the proxy to the 'serializers' assembly during proxy generation (wsdl.exe)? I'm stuck at this point. J Saunders, where u at? :)

    Read the article

  • What are some best practices and "rules of thumb" for creating database indexes?

    - by Ash
    I have an app which cycles through a huge number of records in a database table and performs a number of SQL and .NET operations on records within that database (currently I am using Castle ActiveRecord on PostgreSQL). I added some basic B-tree indexes on a couple of the fields and, as you would expect, the performance of the SQL operations increased substantially. Wanting to make the most of DBMS performance, I want to make better-educated choices about what I should index on all my projects. I understand that there is a detriment to performance when doing inserts (as the database needs to update the index as well as the data), but what suggestions and best practices should I consider when creating database indexes? How do I best select the fields, or combinations of fields, for a set of database indexes (rules of thumb)? Also, how do I best select which index to use as a clustered index? And when it comes to the access method, under what conditions should I use a B-tree over a hash, a GiST or a GIN index (and what are they, anyway)?

    Read the article

  • Execute the Linux 'at' command via PHP

    - by ahmad Rabie
    When I run this command via SSH

        echo wget http://domain.com/send_me_email.php | at 12:54

    it runs correctly and sends me an email at that time. But if I run PHP like this

        exec("echo wget http://domain.com/send_me_email.php | at 12:54");
        exec("atq", $arr);
        print_r($arr);

    the result of that code is something like this:

        job 63 at 2011-11-27 12:54

    As you can see, the job was created successfully, but I don't receive any email at that time?! I tested this line in PHP

        exec("wget http://domain.com/send_me_email.php");

    and it sends me an email, which means that I have permission to run exec and wget via PHP. But what is the problem? I can't understand what my problem is. Please help me. Thanks

    Read the article

  • Change write-host output color based on foreach if elseif outcome in Powershell

    - by Emo
    I'm trying to change the color of Write-Host output based on the LastRunOutcome property of SQL Server jobs in PowerShell... as in: if a job was successful, the output of LastRunOutcome is "Success" in green; if it failed, then "Failed" in red. I have the script working to get the desired job status; I just don't know how to change the colors. Here's what I have so far:

        # Check for failed SQL jobs on multiple servers
        [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null
        foreach ($svr in get-content "C:\serverlist2.txt")
        {
            $a = get-date
            $BegDate = (Get-Date $a.AddDays(-1) -f d) + " 12:00:00 AM"
            $BegDateTrans = [system.datetime]$BegDate
            write-host $svr
            $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" "$svr"
            $srv.jobserver.jobs |
                where-object {$_.lastrundate -ge $BegDateTrans -and $_.Name -notlike "????????-????-????-????-????????????"} |
                format-table name,lastrunoutcome,lastrundate -autosize
            foreach ($_.lastrunoutcome in $srv.jobserver.jobs)
            {
                if ($_.lastrunoutcome = 0)
                {
                    -forgroundcolor red
                }
                else {}
            }
        }

    This seems to be the closest I've gotten, but it's giving me an error of ""LastRunOutcome" is a ReadOnly property." Any help would be greatly appreciated! Thanks! Emo

    Read the article

  • Avoid implicit conversion from date to timestamp for selects with Oracle using Hibernate

    - by sapporo
    I'm using Hibernate 3.2.7.GA criteria queries to select rows from an Oracle Enterprise Edition 10.2.0.4.0 database, filtering by a timestamp field. The field in question is of type java.util.Date in Java and DATE in Oracle. It turns out that the field gets mapped to java.sql.Timestamp, and Oracle converts all rows to TIMESTAMP before comparing them to the passed-in value, bypassing the index and thereby ruining performance. One solution would be to use Hibernate's sqlRestriction() along with Oracle's TO_DATE function. That would fix performance, but it requires rewriting the application code (lots of queries). So is there a more elegant solution? Since Hibernate already does type mapping, could it be configured to do the right thing?

    Update: The problem occurs in a variety of configurations, but here's one specific example:

        Oracle Enterprise Edition 10.2.0.4.0
        Oracle JDBC Driver 11.1.0.7.0
        Hibernate 3.2.7.GA
        Hibernate's Oracle10gDialect
        Java 1.6.0_16
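
    For reference, a minimal sketch of the sqlRestriction() workaround mentioned above, with a hypothetical entity name and column (the question is looking for something more elegant than repeating this in every query):

        import java.util.List;
        import org.hibernate.Criteria;
        import org.hibernate.Hibernate;
        import org.hibernate.Session;
        import org.hibernate.criterion.Restrictions;

        public class DateCriteriaExample {
            /** Compares DATE to DATE via TO_DATE so Oracle can keep using the index. */
            public static List<?> eventsSince(Session session, String isoDate) {
                Criteria c = session.createCriteria("com.example.Event"); // hypothetical entity name
                c.add(Restrictions.sqlRestriction(
                        "{alias}.event_date >= TO_DATE(?, 'YYYY-MM-DD')", // hypothetical column
                        isoDate, Hibernate.STRING));
                return c.list();
            }
        }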

    Read the article

  • What's wrong with my logic here?

    - by stu
    In Java, they say don't concatenate Strings; instead you should create a StringBuffer, keep appending to that, and then when you're all done, use toString() to get a String object out of it. Here's what I don't get. They say to do this for performance reasons, because concatenating strings creates lots of temporary objects. But if the goal were performance, then you'd use a language like C/C++ or assembly. The argument for using Java is that it is a lot cheaper to buy a faster processor than it is to pay a senior programmer to write fast, efficient code. So on the one hand, you're supposed to let the hardware take care of the inefficiencies, but on the other hand, you're supposed to use StringBuffers to make Java more efficient. While I see that you can do both, use Java and StringBuffers, my question is where the flaw is in the logic that says you either use a faster chip or you spend extra time writing more efficient software.
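
    For concreteness, the two styles being contrasted look like this; a minimal sketch (StringBuilder is the unsynchronized drop-in replacement for StringBuffer and is what is usually used today):

        public class ConcatDemo {
            public static void main(String[] args) {
                int n = 10_000;

                // Repeated concatenation: each += copies everything built so far,
                // creating a new String every iteration (roughly O(n^2) work overall).
                String s = "";
                for (int i = 0; i < n; i++) {
                    s += i;
                }

                // StringBuilder: appends into a growable buffer and creates
                // a single String at the end.
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < n; i++) {
                    sb.append(i);
                }
                String t = sb.toString();

                System.out.println(s.equals(t)); // true
            }
        }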

    Read the article

  • How to set that compiler flag?

    - by mystify
    Shark told me this:

        This instruction is the start of a loop that is not aligned to a 16-byte address boundary. For optimal performance, you should align the start of a hot loop using a compiler directive. With gcc 3.3 or later, use the -falign-loops=16 compiler flag.

        for (int i = 0; i < 4; i++) {   // line with the info
            // ...code
        }

    How would I set that flag, and does it really improve performance?

    Read the article

  • Is SSIS able to query flat files from another Windows Server?

    - by atricapilla
    I'm a pretty new SQL Server Integration Services (SSIS) user. Is SSIS able to query data from text files located on another Windows server? I mean: when SSIS is installed on Windows Server A, can SSIS query data from, e.g., a folder containing text files on Windows Server B (in the same domain)? I have only used the SAP BO Data Integrator ETL tool, and it cannot query flat files from another server: during execution, all files must be located on the Job Server machine that executes the job.

    Read the article
