Search Results

Search found 20275 results on 811 pages for 'general performance'.

Page 584/811

  • Load Balancing and Failover for Read-Only PostgreSQL Database

    - by Eric J.
    Scenario: Multiple application servers host web services written in Java, running in SpringSource dm Server. To implement a new requirement, they will need to query a read-only PostgreSQL database.

    Issue: To support redundancy, at least two PostgreSQL instances will be running. Access to PostgreSQL must be load balanced and must automatically fail over to the remaining instances if an instance goes down. Auto-discovery of newly started instances is desirable but not required.

    Research: I have reviewed the official PostgreSQL documentation on this issue. However, it focuses on the more general case of read/write access to the database. Top Google results tend to lead to older newsgroup messages or dead projects such as Sequoia or DB Balancer, as well as one active project, PG Pool II.

    Question: What are your real-world experiences with PG Pool II? What other simple and reliable alternatives are available?
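
    For the client side of this, a minimal sketch of multi-host failover using the PostgreSQL JDBC driver is shown below. It assumes a pgjdbc version that supports the targetServerType and loadBalanceHosts connection properties; the host names, database, table, and credentials are hypothetical.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class ReadOnlyPgClient {
            // Both hosts serve the same read-only data set.
            private static final String URL =
                "jdbc:postgresql://pg1.example.com:5432,pg2.example.com:5432/reports"
                + "?targetServerType=any&loadBalanceHosts=true";

            public static void main(String[] args) throws Exception {
                // The driver tries the hosts in randomized order and skips any
                // instance that is down, giving simple load balancing plus failover.
                try (Connection conn = DriverManager.getConnection(URL, "reader", "secret");
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT count(*) FROM some_table")) {
                    rs.next();
                    System.out.println("rows: " + rs.getLong(1));
                }
            }
        }

    This handles failover inside each application server; middleware such as PG Pool II would additionally centralize connection pooling and health checks.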

    Read the article

  • What is the difference between MVC model 1 and model 2?

    - by Alex Ciminian
    I've recently discovered that MVC is supposed to have two different flavors, model one and model two. I'm supposed to give a presentation on MVC1, and I was instructed that "it's not the web-based version; that is referred to as MVC2". As the presentations are about design patterns in general, I doubt that this separation is related to Java (I found some info on Sun's site, but it seemed far off) or ASP. I have a pretty good understanding of what MVC is and I've used several (web) frameworks that enforce it, but this terminology is new to me. How is the web-based version different from other MVC (I'm guessing GUI) implementations? Does it have something to do with the stateless nature of HTTP? Thanks, Alex

    Read the article

  • .Net Logger (Write your own vs log4net/enterprise logger/nlog etc.)

    - by Jack
    I work for an IT department with about 50+ developers. It used to be about 100+ developers but was cut because of the recession. When our department was bigger, an ambitious effort was made to set up a special architecture group. One thing this group decided to do was create our own internal logger. They thought it was such a simple task that we could spend resources and do it ourselves. Now we are having issues with performance and difficulty viewing the logs generated, and some employees are frustrated that we are spending resources on infrastructure like this instead of focusing on serving our business and using things that already exist, like log4net or Enterprise Logger. Can you help me list reasons why you should not create your own .NET logger? Reasons why you should are also welcome, to get a fair point of view :)

    Read the article

  • How can I prevent users from taking screenshots of my application window?

    - by Midday
    What are some methods to prevent screenshots from being taken, if any? I've considered setting the "Print Screen" button as a hotkey, which makes the window fuzzy. However, there would still be the problem of other 3rd-party screenshot tools. How can I prevent their use? Why would I want such a thing? The idea is to create a chat client whose conversations can't be shared with others, neither by copy & paste nor by print screen... Looking for general ideas or suggestions rather than actual code.

    Read the article

  • Function Object in Java

    - by Tony
    I want to implement a JavaScript-like method in Java. Is this possible? Say I have a Person class:

        public class Person {
            private String name;
            private int age;
            // constructor and accessors are omitted
        }

    and a list of Person objects:

        Person p1 = new Person("Jenny", 20);
        Person p2 = new Person("Kate", 22);
        List<Person> pList = Arrays.asList(new Person[] {p1, p2});

    I want to implement a method like this (pseudocode):

        modList(pList, new Operation (Person p) {
            incrementAge(Person p) { p.setAge(p.getAge() + 1); };
        });

    modList receives two params: one is a list, the other is the "function object". It loops over the list and applies the function to every element. In a functional programming language this is easy; how can I do this in Java? Maybe it could be done through a dynamic proxy, but does that have a performance trade-off?
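
    A minimal sketch of one way to get this without dynamic proxies, using a small callback interface and an anonymous class (the Operation and modList names are hypothetical, chosen to match the pseudocode above):

        import java.util.Arrays;
        import java.util.List;

        // Minimal Person, as sketched in the question.
        class Person {
            private final String name;
            private int age;
            Person(String name, int age) { this.name = name; this.age = age; }
            int getAge() { return age; }
            void setAge(int age) { this.age = age; }
        }

        // Callback interface playing the role of the "function object".
        interface Operation<T> {
            void apply(T element);
        }

        public class ModListDemo {
            // Applies the operation to every element of the list.
            static <T> void modList(List<T> list, Operation<T> op) {
                for (T element : list) {
                    op.apply(element);
                }
            }

            public static void main(String[] args) {
                List<Person> pList = Arrays.asList(new Person("Jenny", 20), new Person("Kate", 22));
                modList(pList, new Operation<Person>() {
                    @Override
                    public void apply(Person p) {
                        p.setAge(p.getAge() + 1); // increment each person's age
                    }
                });
                System.out.println(pList.get(0).getAge()); // prints 21
            }
        }

    An anonymous class costs one small object per call site, which is usually negligible; a dynamic proxy adds reflective dispatch on every invocation, so it is generally the more expensive of the two.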

    Read the article

  • How to call a C# server-side method through JavaScript

    - by Nimesh
    Hi, I need to call a C# server-side method through JavaScript. I have a GridView in which one column contains a dropdown list. When the dropdown's value changes, I need to call a server-side method through JavaScript and change the value of another text box in the GridView. I am able to do this on the selected index change, but I am slightly worried about the performance. I am using ASP.NET with C#. Please let me know how to do this.

    Read the article

  • Android EditText and addTextChangedListener

    - by Alex
    I'm currently porting a database manager to Android, and for performance reasons I would like to update only properties that have been modified. I'm trying to do this with addTextChangedListener in order to add modified entries to a list, but my program never enters any of its methods.

        EditText Et = (EditText) Editors.get(Prop.Name);
        Et.addTextChangedListener(new TextWatcher() {
            @Override
            public void afterTextChanged(Editable s) {
                // TODO Auto-generated method stub
            }

            @Override
            public void beforeTextChanged(CharSequence s, int start, int count, int after) {
                // TODO Auto-generated method stub
            }

            @Override
            public void onTextChanged(CharSequence s, int start, int before, int count) {
                if (Prop.GetType() == Property.PROPTYPE.num) {
                    float f = Float.parseFloat(s.toString());
                    Prop.FromString(f);
                } else {
                    Prop.FromString(s.toString());
                }
                propertiesToUpdate.add(Prop);
            }
        });
        Et.setText(Prop.ToString());

    Read the article

  • Optimize C# Code Fragment

    - by Eric J.
    I'm profiling some C# code. The method below is one of the most expensive ones. For the purpose of this question, assume that micro-optimization is the right thing to do. Is there an approach to improve the performance of this method? Changing the type of the input parameter p to ulong[] would create a macro-level inefficiency.

        static ulong Fetch64(byte[] p, int ofs = 0)
        {
            unchecked
            {
                ulong result = p[0 + ofs]
                    + ((ulong)p[1 + ofs] << 8)
                    + ((ulong)p[2 + ofs] << 16)
                    + ((ulong)p[3 + ofs] << 24)
                    + ((ulong)p[4 + ofs] << 32)
                    + ((ulong)p[5 + ofs] << 40)
                    + ((ulong)p[6 + ofs] << 48)
                    + ((ulong)p[7 + ofs] << 56);
                return result;
            }
        }

    Read the article

  • Which testing method to go with? [Rails]

    - by yuval
    I am starting a new project for a client today. I have done some Rails projects before but never bothered writing tests for them. I'd like to change that, starting with this new project. I am aware there are several testing tools, but I am a bit confused as to which I should be using. I've heard of RSpec, Mocha, Webrat, and Cucumber. Please keep in mind I never really wrote any tests before, so my knowledge of testing in general is quite limited. How would you suggest I get started? Thanks!

    Read the article

  • Backup AWS Dynamodb to S3

    - by Ali
    It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic MapReduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples, or corrections are appreciated.

    Read the article

  • Is it possible to partition more than one way at a time in SQL Server?

    - by meeting_overload
    I'm considering various ways to partition my data in SQL Server. One approach I'm looking at is to partition a particular huge table into 8 partitions, then within each of these partitions to partition on a different partition column. Is this even possible in SQL Server, or am I limited to defining one partition column + function + scheme per table? I'm interested in the more general answer, but this strategy is the one I'm considering for a Distributed Partitioned View, where I'd partition the data under the first scheme, using the DPV to distribute the huge amount of data over 8 machines, and then on each machine partition that portion of the full table on another partition key in order to be able to drop (for example) sub-partitions as required.

    Read the article

  • Find valid assignments of integers in arrays (permutations with given order)

    - by evident
    Hi everybody! I am having a general problem finding a good algorithm for generating each possible assignment of some integers to different arrays. Let's say I have n arrays and m numbers (I can have more arrays than numbers, more numbers than arrays, or as many arrays as numbers). As an example, I have the numbers 1, 2, 3 and three arrays: { }, { }, { }. Now I would like to find each of these solutions:

        {1,2,3}, { }, { }
        { }, {1,2,3}, { }
        { }, { }, {1,2,3}
        {1,2}, {3}, { }
        {1,2}, { }, {3}
        { }, {1,2}, {3}
        {1}, {2,3}, { }
        {1}, { }, {2,3}
        { }, {1}, {2,3}
        {1}, {2}, {3}

    So basically I would like to find each possible way of assigning the numbers to the different arrays while keeping their order. As in the example, the 1 always needs to come before the others, and so on... I want to write an algorithm in C++/Qt to find all these valid combinations. Does anybody have an approach for me on how to handle this problem? How would I generate these permutations?
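
    If the examples are read literally, keeping the order means the array index chosen for each number may never be smaller than the index chosen for the previous number. A recursive backtracking sketch of that idea follows (written in Java here for illustration; the recursion ports directly to C++ and Qt containers):

        import java.util.ArrayList;
        import java.util.List;

        public class OrderedAssignments {
            // Place numbers[pos..] into arrays with index >= minArray, so that the
            // array index chosen for each number never decreases. This preserves
            // the original order of the numbers across the arrays.
            static void assign(int[] numbers, int pos, int minArray,
                               List<List<Integer>> arrays, List<List<List<Integer>>> results) {
                if (pos == numbers.length) {
                    // Deep-copy the current assignment into the result set.
                    List<List<Integer>> copy = new ArrayList<>();
                    for (List<Integer> a : arrays) {
                        copy.add(new ArrayList<>(a));
                    }
                    results.add(copy);
                    return;
                }
                for (int i = minArray; i < arrays.size(); i++) {
                    arrays.get(i).add(numbers[pos]);
                    assign(numbers, pos + 1, i, arrays, results);
                    arrays.get(i).remove(arrays.get(i).size() - 1); // backtrack
                }
            }

            public static void main(String[] args) {
                int[] numbers = {1, 2, 3};
                int arrayCount = 3;
                List<List<Integer>> arrays = new ArrayList<>();
                for (int i = 0; i < arrayCount; i++) {
                    arrays.add(new ArrayList<Integer>());
                }
                List<List<List<Integer>>> results = new ArrayList<>();
                assign(numbers, 0, 0, arrays, results);
                for (List<List<Integer>> r : results) {
                    System.out.println(r); // prints the 10 assignments from the question
                }
            }
        }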

    Read the article

  • rewrite URLs in CSS files

    - by Don
    Hi, I'm writing a Maven plugin that merges CSS files together, so all the CSS files that match /foo/bar/*.css might get merged into /foo/merged.css. A concern is that a file such as /foo/bar/baz.css might contain a property such as:

        background: url("images/pic.jpg")

    So when the file is merged into /foo/merged.css, this will need to be changed to:

        background: url("bar/images/pic.jpg")

    The recalculated URL obviously depends on 3 factors: the original URL, the original CSS file location, and the merged CSS file location. Assuming that the original and merged CSS files are both on the same filesystem, is there a general formula (or Java library) that can be used to calculate the new URL given these 3 inputs? Thanks, Don
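
    A minimal sketch of that formula using java.nio.file: resolve the reference against the original CSS file's directory, then relativize it against the merged file's directory. This assumes Java 7+ and plain relative path references (absolute paths, http://, and data: URLs would need to be passed through untouched):

        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class CssUrlRewriter {
            // Rewrites a relative url(...) reference so it still resolves correctly
            // from the merged CSS file's location.
            static String rewrite(String originalUrl, String originalCssFile, String mergedCssFile) {
                Path originalDir = Paths.get(originalCssFile).getParent(); // e.g. /foo/bar
                Path mergedDir = Paths.get(mergedCssFile).getParent();     // e.g. /foo
                Path absolute = originalDir.resolve(originalUrl).normalize();
                return mergedDir.relativize(absolute).toString().replace('\\', '/');
            }

            public static void main(String[] args) {
                System.out.println(rewrite("images/pic.jpg", "/foo/bar/baz.css", "/foo/merged.css"));
                // prints: bar/images/pic.jpg
            }
        }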

    Read the article

  • Reselling Open Source Code licensed under GPL, MIT

    - by Tempe
    I want to use some open source code that is licensed under the following: "GNU General Public License (GPL), MIT License". I want to include this code in a product that I will sell. Here is the code in particular. What do I have to do to not get sued? :) I don't mind distributing the source code that I have modified, but I don't want the whole application to be open source. If I build the open source code into a library and open source the library, can I link to it and not open the rest of my source?

    Read the article

  • Checking whether an object exists in StructureMap container

    - by Kevin Pang
    I'm using StructureMap to handle the creation of NHibernate's ISessionFactory and ISession. I've scoped ISessionFactory as a singleton so that it's only created once for my web app, and I've scoped ISession as a hybrid so that it will only be opened once per web request. I want to make sure that at the end of each web request, I properly dispose of the ISession if it was created for that web request. I figured I could put some code in my Application_EndRequest routine to first check if an ISession was created, and if so, call ISession.Dispose. My current workaround is to just open an ISession in Application_BeginRequest and then dispose of it in Application_EndRequest, but that seems somewhat wasteful in that static file requests for images, CSS files and whatnot will create an ISession without ever using it. I know that the overall performance hit is negligible since ISessions are very lightweight, but it's getting annoying seeing all those ISessions being created inside NHProf.

    Read the article

  • Checking if records exist in the DB: in a single step or 2 steps?

    - by Sinan
    Suppose you want to get a record from the database, and the query returns a large amount of data and requires multiple joins. My question is whether it is better to use a single query that checks if the data exists and gets the result if it does, or to run a simpler query to check if the data exists first and then, if the record exists, query once again to get the result, knowing that it exists. Example: 3 tables a, b and ab (junction table): select * from a, b, ab where condition and condition and condition and condition etc... or: select id from a, b, ab where condition, then if it exists, do the query above. So I don't know if there is any reason to do the second. Any ideas how this affects DB performance, or does it matter at all?

    Read the article

  • Recommend an algorithms exercise book?

    - by Parappa
    I have a little book called Problems on Algorithms by Ian Parberry which is chock full of exercises related to the study of algorithms. Can anybody recommend similar books? What I am not looking for are recommendations of good books related to algorithms or the theory of computation. Introduction to Algorithms is a good one, and of course there's the Knuth stuff. Ideally I want to know of any books that are light on instructional material and heavy on sample problems. In a nutshell, exercise books. Preferably dedicated to algorithms rather than general logic or other math problems. By the way, the Parberry book does not seem to be in print, but it is available as a PDF download.

    Read the article

  • logging in scala

    - by IttayD
    In Java, the standard idiom for logging is to create a static variable for a logger object and use that in the various methods. In Scala, it looks like the idiom is to create a Logging trait with a logger member and mix the trait into concrete classes. This means that each time an object is created it calls the logging framework to get a logger, and the object is also bigger due to the additional reference. Is there an alternative that keeps the ease of use of "with Logging" while still using a per-class logger instance? EDIT: My question is not about how one can write a logging framework in Scala, but rather how to use an existing one (log4j) without incurring a performance overhead (getting a reference for each instance) or extra code complexity. Also, yes, I want to use log4j, simply because I'll use 3rd-party libraries written in Java that are likely to use log4j.
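
    For reference, a minimal example of the Java idiom being contrasted here, using log4j's standard Logger.getLogger call (the class name is hypothetical):

        import org.apache.log4j.Logger;

        public class OrderService {
            // One logger per class, resolved once when the class is initialized
            // and shared by every instance.
            private static final Logger LOG = Logger.getLogger(OrderService.class);

            public void placeOrder(String id) {
                LOG.debug("placing order " + id);
            }
        }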

    Read the article

  • How can I calculate data for a boxplot (quartiles, median) in a Rails app on Heroku? (Heroku uses PostgreSQL)

    - by hadees
    I'm trying to calculate the data needed to generate a box plot, which means I need to figure out the 1st and 3rd quartiles along with the median. I have found some solutions for doing it in PostgreSQL; however, they seem to depend on either PL/Python or PL/R, neither of which Heroku appears to have enabled for their PostgreSQL databases. In fact I ran "select lanname from pg_language;" and only got back "internal". I also found some code to do it in pure Ruby, but that seems somewhat inefficient to me. I'm rather new to box plots, PostgreSQL, and Ruby on Rails, so I'm open to suggestions on how I should handle this. There is a possibility of having a lot of data, which is why I'm concerned with performance; however, if the solution ends up being too complex I may just do it in Ruby, and if my application gets big enough to warrant it, get my own PostgreSQL instance hosted somewhere else. *note: since I was only able to post one link, because I'm new, I decided to share a pastie with some relevant information

    Read the article

  • What is the best possible technology for pulling huge amounts of data from 4 remote servers

    - by Habib Ullah Bahar
    Hello, for one of our projects we need to pull a huge amount of real-time stock data from 4 remote servers across two countries. The trivial approach is to check the sources at a regular interval and save the updates to the database. But as this is real-time stock data for more than 1000 companies, I have to pull every second, which I think isn't good in terms of memory and bandwidth. Please give me suggestions on which technology/platform we should choose so that this can be achieved easily and with good performance [we are flexible here: PHP, Python, Java, Perl - any one of them will be OK for us].

    Read the article

  • Replication - synchronizing most of the data some of the time

    - by uncle brad
    I have some data that isn't properly "partitioned" (for lack of a better word). All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days). I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate. How bad an idea is this? It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.

    Read the article

  • Alternative databases to use when putting IIS Logs into a database using LogParser

    - by Robin Day
    We have run some scripts that use LogParser to dump our IIS logs into a SQL Server database. We can then query this to get simple stats on hits, usage etc. It's also useful when linking it to error log databases and performance counter databases to compare usage with errors, etc. Having implemented this for just one system, and only for the last 2-3 weeks, we already have a 5GB database with around 10 million records. This is making any queries to this database quite slow and will no doubt cause storage issues if we continue to log as we are. Can anyone suggest any alternative databases that we could use for this data that would be more efficient for such logs? I'd be particularly interested in any experience of Google's BigTable or Amazon's SimpleDB. Are either of these suitable for reporting queries? COUNTs, GROUP BYs, PIVOTs?

    Read the article

  • What messaging technologies in Windows CE for guaranteed message delivery?

    - by Aidanapword
    All, we are building a Windows CE (6.0 R3) based device that requires guaranteed and audit-ready message delivery (including store & forward) up to and down from the cloud. I have been looking for choices beyond: MSMQ; a proprietary solution (what our prototype device is using); and AMQP (research on using this in our context is now starting). Are there any others? We will be transporting sensitive data (who isn't?!?!) over a public network, and large-scale options are required. Anything running on an embedded device will be performance sensitive too. Thanks! Aidanapword

    Read the article

  • How to engineer features for machine learning

    - by Ivo Danihelka
    Do you have any advice or reading material on how to engineer features for a machine learning task? Good input features are important even for a neural network. The chosen features will affect the needed number of hidden neurons and the needed number of training examples. The following is an example problem, but I'm interested in feature engineering in general. A motivating example: What would be a good input representation when looking at a puzzle (e.g., 15-puzzle or Sokoban)? Would it be possible to recognize which of two states is closer to the goal?

    Read the article

  • Deleting orphans with JPA

    - by homaxto
    I have a one-to-one relation where I use CascadeType.PERSIST. This has over time built up a huge number of child records that have not been deleted, to such an extent that it is reflected in the performance. Now I wish to add some code that cleans up the database, removing all the child records that are not referenced by a parent. At the moment we are talking 400K+ records, and I need to run the code on all customer installations just to be sure they do not run into the same problem. I think the best solution would be to run a named query (because we support two databases) that deletes the necessary records, and this is where I get into problems, because how should I write it in JPQL? The result I want can be defined by the following SQL statement, which unfortunately does not run on MySQL:

        DELETE FROM child c1
        WHERE c1.pk NOT IN (SELECT DISTINCT c2.pk
                            FROM child c2
                            JOIN parent p ON p.child = c2.pk);
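
    A minimal sketch of how that cleanup might be expressed as a portable JPQL bulk delete run through the EntityManager (the entity names Child and Parent and the child property are hypothetical stand-ins for the real mappings):

        import javax.persistence.EntityManager;
        import javax.persistence.EntityTransaction;

        public class OrphanCleanup {
            // Deletes every Child that no Parent references. A bulk JPQL delete
            // bypasses cascades and the persistence context, so run it in its own
            // transaction and clear the EntityManager afterwards if it stays open.
            public static int deleteOrphans(EntityManager em) {
                EntityTransaction tx = em.getTransaction();
                tx.begin();
                int deleted = em.createQuery(
                        "DELETE FROM Child c WHERE NOT EXISTS "
                        + "(SELECT p FROM Parent p WHERE p.child = c)")
                    .executeUpdate();
                tx.commit();
                em.clear();
                return deleted;
            }
        }

    The same JPQL string can be registered as a named query so it stays database-independent, which fits the named-query constraint mentioned above.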

    Read the article
