Search Results

Search found 16639 results on 666 pages for 'task engine'.


  • DB2 Child Table Not Working - Create Table

    - by gamerzfuse
    I have a bit of a task before me (DB2 database). I need to create a table that will be a child table (is that what it is called in SQL?). I need it to have a foreign key constraint on my other table, so that when the parent table is modified (a record deleted), the child table also loses that record. Once I have the table, I also need to populate it with the data from the other table (if there is an easy way to do this in one statement). If you could point me in the right direction it would help a lot, as I do not even know what syntax to look for. Thanks in advance.

    The table I have in place:

        create table titleauthors (
            au_id        char(11),
            title_id     char(6),
            au_ord       integer,
            royaltyshare decimal(5,2));

    The table I am creating:

        create table titles (
            title_id  char(6),
            title     varchar(80),
            type      varchar(12),
            pub_id    char(4),
            price     decimal(9,2),
            advance   decimal(9,2),
            ytd_sales integer,
            contract  integer,
            notes     varchar(200),
            pubdate   date);

    I need title_id to be matched with title_id from the parent table AND to use the ON DELETE CASCADE syntax so rows are deleted when the parent row is. My attempt:

        CREATE TABLE BookTitles (
            title_id  char(6) NOT NULL
                CONSTRAINT BookTitles_title_id_pk
                REFERENCES titleauthors(title_id) ON DELETE CASCADE,
            title     varchar(80) NOT NULL,
            type      varchar(12),
            pub_id    char(4),
            price     decimal(9,2),
            advance   decimal(9,2),
            ytd_sales integer,
            contract  integer,
            notes     varchar(200),
            pubdate   date);

    Thanks in advance!
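
    A hedged sketch of the usual DB2 pattern (the constraint names are invented, and it assumes title_id can be made NOT NULL and unique on the parent -- DB2 only lets a foreign key reference a primary key or unique constraint):

        -- the parent must expose a unique key for the FK to reference
        ALTER TABLE titleauthors
            ADD CONSTRAINT titleauthors_uk UNIQUE (title_id);

        CREATE TABLE BookTitles (
            title_id char(6)     NOT NULL,
            title    varchar(80) NOT NULL,
            pubdate  date,
            -- remaining columns as in the question
            CONSTRAINT booktitles_fk FOREIGN KEY (title_id)
                REFERENCES titleauthors (title_id)
                ON DELETE CASCADE);

        -- populating the new table from an existing one is a single statement
        INSERT INTO BookTitles (title_id, title, pubdate)
            SELECT title_id, title, pubdate FROM titles;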


  • What should a self-taught programmer with no degree learn/read?

    - by sjbotha
    I am a self-taught programmer and I do not have any degrees. I started pretty young and I've got about 7 years of actual programming work experience. I believe I'm a pretty good programmer, but I admit that I have not played much with algorithms or delved into any really low-level aspects of programming, such as how compilers work.

    I have worked with other programmers with and without degrees. Some were good and some not; having a degree didn't seem to make any difference as to which camp they fell into. Since then I've come to realize that it does depend on the school where the degree is obtained.

    Some people suggest that you really should get a degree, that there are things you'll learn in the process that you won't learn in the real world. Of course there is personal growth and discipline learned from completing a task of that magnitude, but let's just concentrate on the technical knowledge. What would I have been taught in a GOOD CS course that would aid me today, and what can I read to fill the gap? I've heard the book "Algorithms" mentioned and I plan on reading it. What other books would you recommend?

    Edit - clarification on "actual work experience": I have worked for 2 small companies on teams with fewer than 5 people. About 2 years of experience with Perl, Python, PHP, C, C++; about 5 years with Java, applets, RMI, T-SQL, PL/SQL, VB6; and 7 years with HTML, JavaScript, bash, SQL. Most recently, in Java, I designed and helped build an N-tier Java app with a web frontend and RMI.


  • jQuery server ping slowly but surely filling memory?

    - by danspants
    I use the following piece of code to test whether our server is running while the user is on a page. I've also started adding other functions that grab small amounts of constantly changing data to be relayed to the user (files waiting for download, messages, reports etc.).

    I've noticed recently that if I leave any page open (all pages contain the same function), the browser takes up more and more system memory, which I can only attribute to this regular task (overnight it reached 1.6 GB). Is there some way of clearing out the data that is being accumulated? Is this normal behaviour? As far as I can tell, every time I call the function it should overwrite the previously retrieved data.

        function testServer() {
            jQuery.ajax({
                type: "HEAD",
                url: "/media/d_arrow_blue.png",
                error: function(msg) {
                    jQuery.jGrowl("Server Disconnected");
                }
            });
            // retrieves count of files awaiting download - move to separate function
            jQuery.get("/get_files/", {"type": "count"}, function(data) {
                jQuery("#downloadList").children("div").text(data);
            });
        }

        jQuery().doTimeout(6000, function() {
            testServer();
            return true;
        });
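
    For comparison, a leak-conscious version of the same poll might look like the sketch below. This is not a diagnosis -- it just addresses two usual suspects in long-running pollers: a new jGrowl notice piling up on every failed poll, and the browser caching each GET response (the serverDown flag is an invented name):

        var serverDown = false;

        function testServer() {
            jQuery.ajax({
                type: "HEAD",
                url: "/media/d_arrow_blue.png",
                cache: false,                      // ask jQuery to bust the cache
                error: function() {
                    if (!serverDown) {             // one sticky notice, not one per poll
                        serverDown = true;
                        jQuery.jGrowl("Server Disconnected", {
                            sticky: true,
                            close: function() { serverDown = false; }
                        });
                    }
                }
            });
            jQuery.get("/get_files/", { type: "count" }, function(data) {
                jQuery("#downloadList").children("div").text(data);
            });
        }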


  • How to design Models the correct way: Object-oriented or "Package"-oriented?

    - by ajsie
    I know that in OOP you want every object (from a class) to be a "thing", e.g. user, validator etc. I know the basics of MVC and how the different parts interact with each other. However, I wonder if the models in MVC should be designed according to traditional OOP design, that is to say, should every model be a database/table/row (solution 2)? Or is the intention more to collect the methods that affect the same table or a bunch of related tables (solution 1)?

    The example is an Address book module in CodeIgniter, where I want to be able to CRUD a Contact and add/remove it to/from a CRUD-able Contact Group.

    Models solution 1: bunching all related methods together (not a real object, rather a "package"):

        class Contacts extends Model {
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    Models solution 2: the OOP way (one class per file):

        class Contact extends Model {
            private $name = '';
            private $id = '';
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
        }

        class ContactGroup extends Model {
            private $name = '';
            private $id = '';
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    I don't know how to think when I create the models, and the above examples are my real tasks for the Address book. Should I just bunch all the functions together in one class? Then the class contains different logic (contact and group), so it cannot hold properties that are specific to either one of them. Solution 2 works according to OOP, but I don't know why I should make such a division, or what the benefits of having a Contact object would be. It's surely not a User object, so why should a Contact "live" with its own state (properties and methods)?

    You experienced guys with OOP/MVC, please shed some light on how one should think here in this very concrete task.


  • Simplest way to automatically alter "const" value in Java during compile time

    - by Michael Mao
    Hi all: This question corresponds to my uni assignment, so I am very sorry to say I cannot adopt any of the following best practices in a short time -- you know -- the assignment is due tomorrow :(

    Link: Best way to alter const in Java on StackOverflow

    Basically the only task (I hope so) left for me is performance tuning. I've got a bunch of predefined "const" values in my single-class agent source code like this:

        // static final values
        private static final long FT_THRESHOLD = 400;
        private static final long FT_THRESHOLD_MARGIN = 50;
        private static final long FT_SMOOTH_PRICE_IDICATOR = 20;
        private static final long FT_BUY_PRICE_MODIFIER = 0;
        private static final long FT_LAST_ROUNDS_STARTTIME = 90;
        private static final long FT_AMOUNT_ADJUST_MODIFIER = 5;
        private static final long FT_HISTORY_PIRCES_LENGTH = 10;
        private static final long FT_TRACK_DURATION = 5;
        private static final int MAX_BED_BID_NUM_PER_AUC = 12;

    I can certainly alter the values manually and then compile the code to give it another go, but a thorough "statistical analysis" usually requires over 2000 executions, which typically lasts more than half an hour on my own laptop... So I hope there is a way to alter the values without digging into the source code to change the "const" values there, so I can distribute the compiled code to other people's PCs and let them run the statistical analysis instead.

    One other reason for automatic value adjustment is that I can try using my own agent to defeat itself by choosing different "const" values. Although my values are derived from previous history and statistical results, they are far from the most optimized values.

    I hope there is an easy way I can quickly adopt, so I can have a good sleep tonight while the computer does everything for me... :) Any hints on this sort of stuff? Any suggestion is welcomed and much appreciated.
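
    One low-effort approach is to load the tunables from a properties file at startup and fall back to the current values when the file is absent. A hedged sketch (the file name, keys, and helper are invented, not part of the assignment's required solution):

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.Properties;

        public class AgentConfig {
            private static final Properties PROPS = new Properties();
            static {
                try (FileInputStream in = new FileInputStream("agent.properties")) {
                    PROPS.load(in);
                } catch (IOException e) {
                    // no file present: every get() below falls back to its default
                }
            }
            private static long get(String key, long def) {
                return Long.parseLong(PROPS.getProperty(key, Long.toString(def)));
            }
            // the former compile-time constants become run-time tunables:
            static final long FT_THRESHOLD        = get("ft.threshold", 400);
            static final long FT_THRESHOLD_MARGIN = get("ft.threshold.margin", 50);
            // ... and so on for the rest
        }

    Each PC then gets the same jar plus its own agent.properties, so no recompilation is needed between runs.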


  • Help, my CentOS servers keep going down: No route to host after a random uptime [closed]

    - by user249071
    Hello, I have a couple of CentOS Linux servers with a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They receive RPC commands from a main server to start download processes with wget -- nothing fancy -- but their behavior is very unstable: they simply go down. We have monitored RAM, processor usage, even network connections, and the load never gets high: 250 network connections max, 15% processor usage, and memory never fills up (2.5 GB used of 8 GB).

    I have no idea why a Linux server would go down like that. They aren't even public servers -- no domain names installed, no public sites served. The only thing I've discovered is that if I don't restart the network service every couple of hours or so, the servers become very slow and start apps very slowly, without reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right?

    I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least give me some leads on what I can do. I haven't installed snort or other tools to check whether we are under DoS attack; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't... Thank you in advance.


  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.


  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against the development of an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality.

    I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding -- oh, the benefits of hindsight), and is organised so that controllers invoke business logic classes, which in turn access the database via DAOs/transaction scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but also embody some of the "behaviour" of that domain concept (for example, a user -- since it's easy to isolate).

    Looking at some of the code, and thinking about refactoring parts of it into a Rich Domain Model, it occurred to me that were I to wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities of these components?
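
    For concreteness, one common way to split those responsibilities, sketched in PHP (class and method names are invented; this is one possible arrangement, not the canonical answer):

        class Company {                      // domain object: state + behaviour
            private $name;
            public function rename($newName) { $this->name = $newName; }
        }

        class CompanyMapper {                // persistence only: the CRUD wrapper
            public function find($id) { /* SELECT ... and build a Company */ }
            public function save(Company $c) { /* INSERT or UPDATE */ }
        }

        class CompanyService {               // use-case logic the controller calls
            public function renameCompany($id, $newName) {
                // load via the mapper, call rename(), save via the mapper
            }
        }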


  • MySQL InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The first database has a table 'pages' that stores various information about web pages, including the URL of each one. The column 'URL' is a varchar(512) and has no index. The second database has a table 'urlHops' defined as:

        CREATE TABLE urlHops (
            dest varchar(512) NOT NULL,
            src varchar(512) DEFAULT NULL,
            timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY dest_key (dest),
            KEY src_key (src)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now, I basically need to issue (efficiently) queries like this:

        select p.id, p.URL
        from db1.pages p, db2.urlHops u
        where u.src = p.URL and u.dest = ?

    At first I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on that table (way more than the number of SELECTs I would do using this index).

    Other possible solutions I thought of are:

    - adding a column to pages that stores the md5 hash of the URL, and indexing it; this way I could query using the md5 of the URL, with the advantage of an index on a smaller column;
    - adding another table that contains only page id and page URL, indexing both columns; but this is maybe a waste of space, with the only advantage of not slowing down the inserts and updates I execute on 'pages'.

    I don't want to slow down the inserts and updates, but at the same time I want to query on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards, Davide
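
    For the record, a hedged sketch of the first idea (column and index names are invented, and the application -- or a trigger -- has to keep url_md5 in sync on every INSERT/UPDATE of URL):

        ALTER TABLE pages
            ADD COLUMN url_md5 CHAR(32) NOT NULL DEFAULT '',
            ADD INDEX idx_url_md5 (url_md5);

        UPDATE pages SET url_md5 = MD5(URL);   -- backfill existing rows

        -- the join then probes a 32-byte index instead of comparing varchar(512)s
        SELECT p.id, p.URL
        FROM   db2.urlHops u
        JOIN   db1.pages p ON p.url_md5 = MD5(u.src)
        WHERE  u.dest = ?;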


  • How can I print a web page on a server?

    - by Gavin Schultz
    Suppose I develop a web page using the cool Google Visualization API, and it does everything the user wants. They can change the parameters, look at the graphs, and print the page to get a reasonable-looking report. All good.

    Now suppose I want to do the same thing server-side. For example, say we need a set of reports generated at a specific time of day, printed to a PDF and emailed to a manager. It's not a user-initiated action, so we don't have a user's browser or their printer. We have a URL that would render the report if we had a browser, and that's it.

    Is there a good way to do this server-side? Is this just foolish? Has anyone done anything like this before? Do any of the major browsers have APIs that might provide such functionality? Keep in mind too that it's not just static HTML; JavaScript will probably run first to shift the DOM around.

    I know we could implement a whole different reporting engine on the server side to do this, but that would (a) generate reports that look a bit different, and (b) require me to build and maintain two sets of functionality. Instead, I'd be happy if I could just render the page or pages I want in an invisible server-side browser and print them to a PDF (let's mostly ignore that step -- I know any number of PDF printer drivers that could do this). I don't really want to do it in an ugly way either, i.e. by starting a browser process and then sending keystrokes directly to the window -- that's just bound to fall apart with a slight nudge. The only related question I found had an answer like that. Any advice appreciated!


  • How do you read a file line by line in your language of choice?

    - by Jon Ericson
    I got inspired to try out Haskell again based on a recent answer. My big block is that reading a file line by line (a task made simple in languages such as Perl) seems complicated in a functional language. How do you read a file line by line in your favorite language?

    So that we are comparing apples to other types of apples, please write a program that numbers the lines of the input file. So if your input is:

        Line the first.
        Next line.
        End of communication.

    the output would look like:

        1 Line the first.
        2 Next line.
        3 End of communication.

    I will post my Haskell program as an example.

    Ken commented that this question does not specify how errors should be handled. I'm not overly concerned about it because most answers did the obvious thing and read from stdin and wrote to stdout. The nice thing is that it puts the onus on the user to redirect those streams the way they want; so if stdin is redirected from a non-existent file, the shell will take care of reporting the error, for instance. The question is more aimed at how a language does IO than how it handles exceptions. But if necessary error handling is missing in an answer, feel free to either edit the code to fix it or make a note in the comments.
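
    By way of illustration, a minimal Java answer (like most answers, it reads stdin and writes stdout, leaving error handling to stream redirection):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class NumberLines {
            public static void main(String[] args) throws Exception {
                BufferedReader in =
                    new BufferedReader(new InputStreamReader(System.in));
                int n = 0;
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(++n + " " + line);   // "1 Line the first."
                }
            }
        }

    Run as: java NumberLines < input.txt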


  • Learning Clojure - What should I know about Java and more.

    - by kartan
    Hello, I have started learning Clojure recently. My main programming language is Ruby and I have no Java experience whatsoever.

    1. Which Java standard classes are a must-know when working with Clojure? Obviously Clojure doesn't come with a wrapper for everything, and a lot of functionality is provided by Java's libraries. There are like a gazillion of them in the javadocs -- which ones should I explore?

    2. Where do I look for and how do I install third-party libraries? In Ruby I'd visit Rubyforge or Rubytoolbox and then just gem install the package I found interesting.

    3. Which editor/IDE for Clojure would you recommend (with the lowest learning curve)? I am trying to use NetBeans with Enclojure and paradoxically it's my biggest obstacle so far: each generated project has some XML files, folders, library dependencies etc. whose purpose I have no clue about. I am doing some labrepl exercises and wanted to try out some of the bundled libraries separately in a new project, but I cannot accomplish this simple task :/

    4. Are there any community-driven Clojure blogs with news, code tips etc.?


  • How to check the backtrace of a "USER process" in the Linux Kernel Crash Dump

    - by Biswajit
    I was trying to debug a user process in a Linux crash dump. The normal steps to get into the crash dump are:

    1. Go to the path where the dump is located.
    2. Use the command: crash kernel_link dump.201104181135 (where kernel_link is a soft link I have created for the vmlinux image). Now you will be in the crash prompt.
    3. Run the command: foreach <PID of the process> bt

    For example:

        crash> foreach 6920 bt
        PID: 6920   TASK: ffff88013caaa800  CPU: 1   COMMAND: "climmon"
         #0 [ffff88012d2cd9c8] schedule at ffffffff8130b76a
         #1 [ffff88012d2cdab0] schedule_timeout at ffffffff8130bbe7
         #2 [ffff88012d2cdb50] schedule_timeout_uninterruptible at ffffffff8130bc2a
         #3 [ffff88012d2cdb60] __alloc_pages_nodemask at ffffffff810b9e45
         #4 [ffff88012d2cdc60] alloc_pages_current at ffffffff810e1c8c
         #5 [ffff88012d2cdc90] __page_cache_alloc at ffffffff810b395a
         #6 [ffff88012d2cdcb0] __do_page_cache_readahead at ffffffff810bb592
         #7 [ffff88012d2cdd30] ra_submit at ffffffff810bb6ba
         #8 [ffff88012d2cdd40] filemap_fault at ffffffff810b3e4e
         #9 [ffff88012d2cdda0] __do_fault at ffffffff810caa5f
        #10 [ffff88012d2cde50] handle_mm_fault at ffffffff810cce69
        #11 [ffff88012d2cdf00] do_page_fault at ffffffff8130f560
        #12 [ffff88012d2cdf50] page_fault at ffffffff8130d3f5
            RIP: 00007fd02b7e9071  RSP: 0000000040e86ea0  RFLAGS: 00010202
            RAX: 0000000000000000  RBX: 0000000000000000  RCX: 00007fd02b7e9071
            RDX: 0000000000000000  RSI: 0000000000000000  RDI: 0000000040e86ec0
            RBP: 0000000040e87140  R8:  0000000000000800  R9:  0000000000000000
            R10: 0000000000000000  R11: 0000000000000202  R12: 00007fff16ec43d0
            R13: 00007fd02bcadf00  R14: 0000000040e87950  R15: 0000000000001000
            ORIG_RAX: ffffffffffffffff  CS: 0033  SS: 002b

    If you check the above backtrace, it shows the kernel functions used for scheduling/handling the page fault, but not the functions that were executed in the user process (here e.g. climmon). So I am not able to debug this process, as I cannot see the functions executed in it. Can anyone help me with this case?


  • How to change default locale in BIRT

    - by zlajo
    Hi everybody, I am working with the BIRT reporting engine and my current task is to implement internationalization for reports. We are using the web viewer to generate and download PDF reports. There is a parameter (__locale) which allows me to specify the locale to be used when generating a report. So far everything works fine.

    There is an additional requirement I have not been able to implement, though: besides the locale set by the HTTP parameter, there should also be the possibility to specify some kind of fallback locale. Take the following example. There are two property files, common_en_US.properties and common_en_GB.properties. The first locale to be used should be en_GB (__locale=en_GB). Everything works fine if the common_en_GB.properties file exists, but I would also like to tell BIRT to use common_en_US.properties if the en_GB file cannot be found, which does not work as expected.

    I tried to solve this by manually setting the Java default locale before executing BIRT, because I thought BIRT would use the Java mechanism to resolve localized strings. Unfortunately this attempt did not work. Is there a different way to do what I would like to do? Is it possible to do something like that at all? Thanks a lot! Johannes
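
    For reference, this is how plain java.util.ResourceBundle resolves the fallback (a hedged sketch -- whether BIRT's resource lookup goes through this mechanism is exactly the open question; the key name is invented):

        import java.util.Locale;
        import java.util.ResourceBundle;

        public class FallbackDemo {
            public static void main(String[] args) {
                // lookup order for getBundle("common", en_GB):
                //   common_en_GB -> common_en -> default-locale candidates -> common
                Locale.setDefault(new Locale("en", "US"));  // makes en_US the fallback
                ResourceBundle rb =
                    ResourceBundle.getBundle("common", new Locale("en", "GB"));
                System.out.println(rb.getString("report.title"));
            }
        }

    On Java 6 and later, a custom ResourceBundle.Control whose getFallbackLocale() returns en_US can pin the fallback explicitly, independent of the JVM default.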


  • ASP.NET application - Error when trying to connect to a SQL Server 2008 instance

    - by Pablo Dami
    Hi everyone! Although I'm a regular reader of this great forum, this is my first post on it. I believe this community can help me with the following problem.

    I'm trying to publish an ASP.NET website on IIS 6.0 (Windows 2003 Server), and I have trouble connecting to the database. Curiously, I have installed another ASP.NET website on the same IIS 6.0 with the same properties and security parameters, and it connects without problems to the same database. The application that works fine is almost the same as the one that can't connect to SQL Server (actually it is the same, but with several modifications).

    Some information related to the problem:

    - OS: Windows 2003 Server
    - SQL Server engine: SQL Server 2008
    - Does SQL Server accept remote connections? Yes.
    - ASP.NET version: 2.0.50727
    - Are TCP/IP connections to the SQL Server instance enabled? Yes.
    - Does the user in the connection string exist in the database with the "owner" role? Yes.
    - ORM tool used: NHibernate

    I get the following error when I try to run the application in the browser:

        Error while establishing a connection to the server. When connecting
        to SQL Server 2005, this failure may occur because the default
        settings of SQL Server do not allow remote connections. (provider:
        Shared Memory Provider, error: 40 - Could not open a connection to
        SQL Server)

    In order to isolate the problem, I made some tests. For example, using the web app that works fine I can connect without any problem to the database used by the web app that can't. With this evidence I concluded that the problem is within the web app and not in the SQL Server instance. I also googled my problem, but sadly couldn't find anything useful to solve it. If someone can help me, I'll appreciate it. Thank you so much for your time!
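
    Given that another application on the same IIS instance reaches the same database fine, diffing the two connection strings is the cheapest first step. A hedged illustration of the shape to compare (server, instance, database, and credentials below are invented placeholders):

        <!-- error 40 with the Shared Memory provider usually means the client
             never reached the server at all, so check Data Source first:
             machine name, named instance, and (if TCP) the port -->
        <connectionStrings>
          <add name="MyAppDb"
               connectionString="Data Source=DBSERVER\SQL2008;Initial Catalog=MyDatabase;User ID=appuser;Password=secret"
               providerName="System.Data.SqlClient" />
        </connectionStrings>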


  • Strangely structured XML: finding the last value of a certain type using Java

    - by Damien.Bell
    The structure is something like this:

        OasisReport > MessagePayload > RTO > REPORT_ITEM > REPORT_DATA

    Under report data it's broken into categories: Zone, Type, Value, Interval.

    What I need to do is: get the value if the type is equal to 'myType' and the interval value is the LARGEST. So an example of the XML might be (under REPORT_DATA):

        <zone>myZone1</zone>      <!-- the same in all reports, since I only get them for 1 zone -->
        <type>myType</type>       <!-- this can change from line to line -->
        <value>12345</value>      <!-- this changes every interval -->
        <Interval>122</Interval>  <!-- essentially how many 5-minute intervals have taken place
                                       since the beginning of the day; the max is the newest data -->

    Thereby I want to find entries of "myType" for the max interval and pull the value into a string (or a double; if not, I can convert from string).

    Note: I've used XPath to handle things like this in the past, but it seems outlandish for this, as it's SO complex (since not all the reports live in the same REPORT_ITEM, and not all the types are the same in each report). Can someone help me with this task? Thanks!
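
    Given that, iterating the DOM in Java and tracking the maximum may be simpler than a clever XPath. A hedged sketch (element names taken from the question; it assumes doc is an already-parsed org.w3c.dom.Document and every REPORT_DATA has the three child elements):

        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class LatestValue {
            // returns the <value> of the wanted type with the largest <Interval>
            public static String latestValue(Document doc, String wantedType) {
                NodeList data = doc.getElementsByTagName("REPORT_DATA");
                int maxInterval = -1;
                String result = null;
                for (int i = 0; i < data.getLength(); i++) {
                    Element d = (Element) data.item(i);
                    if (!wantedType.equals(text(d, "type"))) continue;
                    int interval = Integer.parseInt(text(d, "Interval"));
                    if (interval > maxInterval) {
                        maxInterval = interval;
                        result = text(d, "value");
                    }
                }
                return result;
            }

            private static String text(Element parent, String tag) {
                return parent.getElementsByTagName(tag).item(0).getTextContent();
            }
        }

    Because getElementsByTagName("REPORT_DATA") walks the whole document, it does not matter which REPORT_ITEM each block lives under.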


  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store BLOBs (each containing a JSON object) and an ID for each JSON object. A JSON object contains a lot of different information -- say, "city:Los Angeles" and "state:California". There are about 500k such records for now, but they are growing, and each JSON object is quite big.

    My goal is to do searches in the MySQL database in real time. Say I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco".

    I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns the IDs of those that qualify. Pros/cons?

    I understand that one might think I should just utilize plain SQL power for this, but the JSON object structure is pretty "heavy": put into SQL schemas, there would be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. And even then, every SQL query has to be analyzed to make sure it utilizes the indexes; otherwise, with a full scan, it is literally painful. With such a structure the only way "up" is vertical scaling. But I am not sure that's the best option for me, as I see how the JSON objects (the data structure) will grow, and I see that their number will grow too. :-)

    Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.


  • MySQL Server Optimization

    - by Ish Kumar
    Hi geeks, we are having serious MySQL (InnoDB) performance issues. Every few (10-15) minutes we do:

    - (10-20) insertions on TABLE1;
    - (10-20) updates on TABLE2.

    (Both of the above happen within a fraction of a second.) Meanwhile, all online users (approx. 400-600) run a read operation on a join of TABLE1 and TABLE2 every second.

    Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44

    Issues:

    - Lots of queries wait and expire later (seen in phpMyAdmin / processes).
    - My poor MySQL server crashes sometimes.

    Questions:

    Q1: Any suggestions to optimize at the MySQL level?
    Q2: I am thinking of using persistent connections at the application level -- is that right?

    Info added later: the database engine is InnoDB; TABLE1 has 400,000 rows (inserting 8,000 daily) and TABLE2 has 8,000 rows. The 1-second query is:

        SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price,
               u.username, u.email, u.mobile
        FROM TABLE1 b, TABLE2 u
        WHERE b.credit = 0 AND b.user_id = u.id AND b.auction_id = "12345"
        ORDER BY b.id DESC LIMIT 10;
        -- there are a few more, but they are not so critical

    Indexing is good; we are using indexes wisely. In the above query all ids are indexed. TABLE1 has frequent insertions and TABLE2 has frequent updates.
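
    For what it's worth, a hedged suggestion for the 1-second query (the index name is invented): a composite index matching the two equality predicates plus the ORDER BY column lets InnoDB read the ten newest qualifying rows straight off the index, instead of filtering and filesorting:

        ALTER TABLE TABLE1
            ADD INDEX idx_auction_credit_id (auction_id, credit, id);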


  • Which DHT algorithm to use (if I want to join two separate DHTs)?

    - by webdreamer
    I've been looking into some DHT systems, especially Pastry and Chord. I've read some concerns about Chord's reaction to churn, though I believe that won't be a problem for the task I have at hand.

    I'm implementing a sort of social network service that doesn't rely on any central servers, for a course project. I need the DHT for lookups. I don't know all the servers in the network at the beginning -- as stated, there's no main tracker server. It works this way: each client has three dedicated servers. The three servers hold the client's profile, wall and personal info, replicated. I only learn about another group of servers when the user adds a friend (inputting that client's address).

    So I would create two separate DHTs on the two groups of three servers, and when the clients friend each other I would like to join the DHTs -- consistently. I haven't had much time to get familiar with the protocols, so I would like to know: which one is better if I want to join two separate DHTs?


  • Improve Application Performance

    - by Gtest
    Hello, I want to improve the performance of a C#.NET application. In my application I am using a third-party interop/DLL to process .doc files. It's a simple operation: I pass input/output file paths to the interop DLL, and the DLL extracts the text from the input file.

    To improve performance I have tried:

    - executing 2 threads to process 32 files (each thread processes 16 files);
    - executing the application code in 2 new AppDomains (each AppDomain's code processes 16 files);
    - executing the code using the TPL (Task Parallel Library).

    But all options take around the same time (32 seconds) to process 32 files -- the same as processing them manually in sequence.

    Then I tried one more thing: I created a sample exe that processes 16 files, with the input and output paths given in a TextBox, and opened 2 instances of it:

    - exe instance 1 has one set of 16 input files, with output created next to each input file path;
    - exe instance 2 has a different set of 16 input files, with output created next to each input file path.

    When I click the start button on both exes, the CPU goes to 100%, both cores are utilized significantly, and processing completes within 16 seconds for all 32 files.

    Can I achieve this kind of explicit parallelism within my application to improve its performance? Thanks.
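
    Since two independent exe instances already push the CPU to 100%, the behaviour suggests (hedged: a guess, not a certainty) that the third-party DLL serializes its work in-process, e.g. via a single-threaded COM apartment. One pragmatic route is to keep the interop single-threaded per process and parallelize across processes instead. A minimal C# sketch ("worker.exe" and the batch-file arguments are invented; the worker stands for a build of the existing processing code):

        using System.Diagnostics;

        // launch one worker process per batch of files, then wait for both
        var first  = Process.Start("worker.exe", "batch-1.txt");
        var second = Process.Start("worker.exe", "batch-2.txt");
        first.WaitForExit();
        second.WaitForExit();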


  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users -- I am aiming at 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time, and "work" means not only database reads but writes as well. The application is not very typical: it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We currently use two HP DL380s with two 4-core processors each, which handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware?

    I am particularly curious about:

    - how many and how powerful servers are needed (number of processors/cores, size of RAM);
    - what network equipment should be used (what kind of switches, network cards);
    - any other hardware, like particular disc storage solutions, that is needed.

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people complain about MySQL Cluster, even here on Stack Overflow).


  • How to know if all the thread pool's threads are already done with their tasks?

    - by mcxiand
    I have an application that recurses all folders in a given directory and looks for PDFs. If a PDF file is found, the application counts its pages using iTextSharp. I did this by using a thread to recursively scan all the folders for PDFs; when a PDF is found, it is queued to the thread pool. The code looks like this:

        // spawn a thread to handle the processing of PDFs in each folder
        var th = new Thread(() =>
        {
            pdfDirectories = Directory.GetDirectories(pdfPath);
            processDir(pdfDirectories);
        });
        th.Start();

        private void processDir(string[] dirs)
        {
            foreach (var dir in dirs)
            {
                pdfFiles = Directory.GetFiles(dir, "*.pdf");
                processFiles(pdfFiles);
                string[] newdir = Directory.GetDirectories(dir);
                processDir(newdir);
            }
        }

        private void processFiles(string[] files)
        {
            foreach (var pdf in files)
            {
                ThreadPoolHelper.QueueUserWorkItem(
                    new { path = pdf },
                    (data) => { processPDF(data.path); }
                );
            }
        }

    My problem is: how do I know that the thread pool's threads have finished processing all the queued items, so I can tell the user that the application is done with its intended task?
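
    One classic pattern for this (a hedged sketch -- names are invented, and it assumes using System.Threading): keep a live count of outstanding work items, seeded with 1 so the "all done" signal cannot fire while the directory scan is still queueing:

        private int pending = 1;   // the extra 1 represents the scan itself
        private readonly ManualResetEvent allDone = new ManualResetEvent(false);

        private void QueuePdf(string path)
        {
            Interlocked.Increment(ref pending);
            ThreadPool.QueueUserWorkItem(_ =>
            {
                try { processPDF(path); }
                finally { ItemDone(); }
            });
        }

        private void ItemDone()
        {
            if (Interlocked.Decrement(ref pending) == 0)
                allDone.Set();
        }

        // after processDir(...) returns: release the seed count, then wait
        //     ItemDone();
        //     allDone.WaitOne();   // safe to tell the user we're finished

    On .NET 4 and later, System.Threading.CountdownEvent packages this same idea.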


  • How to HTML-encode the output of an NHaml view (or any MVC view)?

    - by jessegavin
    I have several views written in NHaml that I would like to render as encoded HTML. Here's one:

        %table.data
          %thead
            %tr
              %th Country Name
              %th ISO 2
              %th ISO 3
              %th ISO #
          %tbody
            - foreach(var c in ViewData.Model.Countries)
              %tr
                %td =c.Name
                %td =c.Alpha2
                %td =c.Alpha3
                %td =c.Number

    I know that NHaml provides syntax to HTML-encode the output for given lines using &=. However, in order to encode the entire view, I would essentially lose the benefit of writing my view in NHaml, since it would have to look like this:

        &= "<table class='data'>
        &= "  <thead>

    So I was wondering if there was any cool way to be able to capture the rendered view as a string, then to HTML-encode that string. Maybe something like the following?

        public ContentResult HtmlTable(string format)
        {
            var m = new CountryViewModel();
            m.Countries = _countryService.GetAll();

            // Somehow render the view and store it as a string?
            // Not sure how to achieve this.
            var viewHtml = View("HtmlTable", m);

            // ???
            return Content(viewHtml);
        }

    This question may actually have no particular relevance to the view engine that I am using, I guess. Any help or thoughts would be appreciated.
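
    A hedged sketch of the usual "render a view to a string" helper for ASP.NET MVC 2 (view-engine agnostic, so NHaml should work too; the method name is invented):

        protected string RenderViewToString(string viewName, object model)
        {
            ViewData.Model = model;
            using (var sw = new System.IO.StringWriter())
            {
                var found = ViewEngines.Engines.FindView(ControllerContext, viewName, null);
                var ctx = new ViewContext(ControllerContext, found.View, ViewData, TempData, sw);
                found.View.Render(ctx, sw);
                found.ViewEngine.ReleaseView(ControllerContext, found.View);
                return sw.ToString();
            }
        }

        // the action then becomes:
        //     var html = RenderViewToString("HtmlTable", m);
        //     return Content(HttpUtility.HtmlEncode(html));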


  • OpenSL ES decode 24bit FLAC

    - by yano
    I am trying to decode a FLAC file with a 24-bit sample format using OpenSL ES on Android. Originally, I had my SLDataFormat_PCM for the SLDataSink set up like this:

        _pcm.formatType    = SL_DATAFORMAT_PCM;
        _pcm.numChannels   = 2;
        _pcm.samplesPerSec = SL_SAMPLINGRATE_44_1;
        _pcm.bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.containerSize = SL_PCMSAMPLEFORMAT_FIXED_16;
        _pcm.channelMask   = SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT;
        _pcm.endianness    = SL_BYTEORDER_LITTLEENDIAN;

    This works well for basically any data format. Luckily samplesPerSec is not respected (I don't want resampling). Now I want to support the full bit depth of a FLAC file with 24-bit samples. With the format above, a bit-depth conversion apparently takes place, because once I load the file and then check the ANDROID_KEY_PCMFORMAT_BITSPERSAMPLE info, it is 16.

    When I put bitsPerSample = SL_PCMSAMPLEFORMAT_FIXED_24 or SL_PCMSAMPLEFORMAT_FIXED_32, OpenSL ES rejects it:

        E/libOpenSLES(22706): pAudioSnk: bitsPerSample=32
        W/libOpenSLES(22706): Leaving Engine::CreateAudioPlayer (SL_RESULT_CONTENT_UNSUPPORTED)

    Any idea how this is meant to work? Is Android currently restricted to 16-bit int only? I would also accept 32-bit float, but I don't suppose that will work either.


  • Packaging reference documentation with jar file

    - by soren.enemaerke
    We are porting our .NET library to a Java equivalent and are now looking at how to distribute this port. Packaging the classes into a jar file seems like best practice, and we would ship this jar file in a zip along with some license terms. But what about the documentation?

    In .NET land it seems to be best practice to distribute the XML file that can be consumed by tooling (Visual Studio), but we can't find such best practices for Java. We have javadoc comments on our public classes and interfaces, so we are just looking for a way to generate and distribute these comments in a way that is developer-friendly (we're thinking easily consumed from various IDEs). What are developers expecting, and how do you best deliver this? We would really prefer to bundle the documentation along with the jar file and not have to host the documentation on our website.

    EDIT: We would like our documentation to appear inside the Java IDEs, so we want to provide it in a way that integrates into the IDEs as gracefully as possible. In .NET land this is an XML file placed next to the .dll file; is there a similar concept for jar files that enables integration into tooling?

    PS: We are developing in Eclipse and have an ant task doing the building and jar-file packaging in our automated build.
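
    Since the build already runs through ant, a hedged sketch of the packaging step (target, path, and jar names are invented; the "-javadoc.jar" / "-sources.jar" naming is the convention IDEs and Maven repositories understand):

        <target name="docs-jar">
            <javadoc sourcepath="src" destdir="build/apidocs" access="public"/>
            <jar destfile="dist/mylib-javadoc.jar" basedir="build/apidocs"/>
            <jar destfile="dist/mylib-sources.jar" basedir="src" includes="**/*.java"/>
        </target>

    Eclipse, IntelliJ and NetBeans all let a developer attach such a sources/javadoc jar to a library, after which the javadoc comments show up in code completion and hover help.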

