Search Results

Search found 11543 results on 462 pages for 'partition wise join'.


  • WCF Service method synchronous/async

    - by Rafal
    Hi, I have a problem calling WCF service methods from Silverlight 3.

        private bool usr_OK = false;

        clientService.CheckUserMailAsync(this.mailTF.Text);
        if (usr_OK == true) {
            isValidationOK = true;
        } else {
            isValidationOK = false;
            MessageBox.Show("User already exists.", "User registered succes!", MessageBoxButton.OK);
        }

    CheckUserMail should change the usr_OK field, but it runs on another thread and has not changed usr_OK by the time the if block starts. I've tried Thread.Join, but the application froze and I don't know what else to do. Please help me: how can I wait for the WCF method to return usr_OK?
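
    Silverlight proxies are asynchronous only, so the usual fix is to move the validation into the operation's Completed callback instead of trying to block. A minimal sketch, assuming the generated proxy follows the standard convention and raises a CheckUserMailCompleted event whose e.Result is true when the address is free:

        // Sketch only: CheckUserMailCompleted / e.Result follow the usual
        // generated-proxy naming convention and are assumptions here.
        clientService.CheckUserMailCompleted += (s, e) =>
        {
            if (e.Error != null)
            {
                MessageBox.Show("Service call failed: " + e.Error.Message);
                return;
            }

            if (e.Result)                    // mail is free
            {
                isValidationOK = true;
            }
            else
            {
                isValidationOK = false;
                MessageBox.Show("User already exists.", "Registration", MessageBoxButton.OK);
            }
        };
        clientService.CheckUserMailAsync(this.mailTF.Text);

    The key point is that everything depending on usr_OK has to run inside (or be triggered from) that handler.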

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to apply the modifications to the server.

    However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parameterised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or the DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out, and it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serialisation constructor. That is fine by me.

    I spoke to a friend of mine about this, and he advocates a Data Transfer Object-only route, where I'd maintain a separate DataContract object graph for transport and re-implement my server objects on the client. Another friend said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile skipping WCF entirely and implementing my own server that uses sockets.

    So my questions are: Has anyone faced a similar problem before and, if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests the part of the object graph it needs and sends back only the bits that have changed (thus reducing concurrency-related risks)? If sending the whole object graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?
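
    If the DTO route wins, the transport types can stay simple and mutable while the rich, immutable domain objects remain server-side only. A minimal sketch of what such a contract might look like (all type and member names here are illustrative, not from the original design):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class ServerConfigurationDto          // hypothetical transport root
        {
            [DataMember] public List<DnsZoneDto> DnsZones { get; set; }
            [DataMember] public List<string> IisSites { get; set; }
        }

        [DataContract]
        public class DnsZoneDto
        {
            [DataMember] public string Name { get; set; }
            [DataMember] public List<string> Records { get; set; }
        }

        [ServiceContract]
        public interface IServerConfigurationService
        {
            [OperationContract] ServerConfigurationDto GetServerConfiguration();
            [OperationContract] void PutServerConfiguration(ServerConfigurationDto config);
        }

    The mapping between the DTO graph and the real domain objects then lives in ordinary code on each side, which is tedious but keeps the serializer's constraints away from the domain model.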

    Read the article

  • Using SalesForce's Web Service to create and set the type of a Task

    - by Alan Williamson
    I am successfully creating a Task using the SalesForce SOAP API through Java. However, my problem is that I can't seem to set the Type of it. They all default to "Call", but I really want them to be "Email". Can someone point me in the right direction? I think it has to do with RecordTypeMapping, but I am somewhat confused as to how to use this in my Java code to look up the particular one for the Task type. I feel I have got so close with this. I have the correct WSDL that is giving me the extra method on the Task.java class, but no matter what I pass in, it dies. This doesn't seem like a huge ask, yet I am perplexed as to which dots to join to get it to work. Any help would be appreciated. Thanks.
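
    The activity type on Task is normally just the Type picklist field rather than a record type, so a sketch like the following may be enough (assuming the enterprise-WSDL stub generates a setType(String) setter, the "Email" picklist value is active in the org, and binding is an already-connected SOAP stub):

        // Hypothetical sketch: set the Type picklist before calling create()
        Task task = new Task();
        task.setSubject("Follow up");
        task.setType("Email");                       // picklist value, not a RecordType

        SaveResult[] results = binding.create(new SObject[] { task });
        if (!results[0].isSuccess()) {
            System.err.println(results[0].getErrors()[0].getMessage());
        }

    If the generated Task class has no setType at all, the field may not be exposed in that WSDL, in which case regenerating the enterprise WSDL from an org where the field is visible is the usual first step.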

    Read the article

  • Is using os.path.abspath to validate an untrusted filename's location secure?

    - by mcmt
    I don't think I'm missing anything. Then again I'm kind of a newbie.

        def GET(self, filename):
            name = urllib.unquote(filename)
            full = path.abspath(path.join(STATIC_PATH, filename))
            #Make sure request is not tricksy and tries to get out of
            #the directory, e.g. filename = "../.ssh/id_rsa". GET OUTTA HERE
            assert full[:len(STATIC_PATH)] == STATIC_PATH, "bad path"
            return open(full).read()

    Edit: I realize this will return the wrong HTTP error code if the file doesn't exist (at least under web.py). I will fix this.
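
    The prefix check is close, but a sibling directory can slip through when STATIC_PATH does not end with a separator (e.g. "/srv/static" also matches "/srv/static_evil/..."), and symlinks are not resolved. A slightly more defensive sketch of the same idea:

        import os
        import urllib
        from os import path

        def GET(self, filename):
            name = urllib.unquote(filename)
            root = path.realpath(STATIC_PATH)
            # realpath resolves ".." and symlinks; then require true containment
            full = path.realpath(path.join(root, name))
            if not full.startswith(root + os.sep):
                raise ValueError("bad path")   # or raise web.notfound() under web.py
            return open(full).read()

    Raising a proper 404/403 instead of an AssertionError also covers the follow-up point about the wrong HTTP error code.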

    Read the article

  • Concurrency issues with scheduling app

    - by Sazug
    Our application needs a simple scheduling mechanism - we can schedule only one visit per room for the same time interval (but one visit can use one or more rooms). Using SQL Server 2005, a sample procedure could look like this:

        CREATE PROCEDURE CreateVisit
            @start datetime,
            @end datetime,
            @roomID int
        AS
        BEGIN
            DECLARE @isFreeRoom INT

            BEGIN TRANSACTION

            SELECT @isFreeRoom = COUNT(*)
            FROM visits V
            INNER JOIN visits_rooms VR ON VR.VisitID = V.ID
            WHERE @start = start AND @end = [end] AND VR.RoomID = @roomID

            IF (@isFreeRoom = 0)
            BEGIN
                INSERT INTO visits (start, [end]) VALUES (@start, @end)
                INSERT INTO visits_rooms (visitID, roomID) VALUES (SCOPE_IDENTITY(), @roomID)
            END

            COMMIT TRANSACTION
        END

    In order not to have the same room scheduled for two visits at the same time, how should we handle this in the procedure? Should we use the SERIALIZABLE transaction isolation level, or maybe use table hints (locks)? Which one is better?
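
    One common approach under the default READ COMMITTED level is to take update/range locks on the rows being checked, so that two concurrent calls for the same room and slot serialize on each other instead of both seeing @isFreeRoom = 0. A sketch of the existence check with locking hints (held until the transaction commits):

        SELECT @isFreeRoom = COUNT(*)
        FROM visits V WITH (UPDLOCK, HOLDLOCK)
        INNER JOIN visits_rooms VR WITH (UPDLOCK, HOLDLOCK) ON VR.VisitID = V.ID
        WHERE @start = V.start AND @end = V.[end] AND VR.RoomID = @roomID

    Setting the whole transaction to SERIALIZABLE gives equivalent protection (HOLDLOCK is the per-table form of it) at the cost of locking more broadly; either way, be prepared to retry on deadlock.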

    Read the article

  • Can this MySQL subquery be optimised?

    - by Dan
    I have two tables, news and news_views. Every time an article is viewed, the news id, IP address and date are recorded in news_views. I'm using a query with a subquery to fetch the most viewed titles from news, by getting the total count of views in the last 24 hours for each one. It works fine, except that it takes 5-10 seconds to run, presumably because there are hundreds of thousands of rows in news_views and it has to go through the entire table before it can finish. The query is as follows; is there any way at all it can be improved?

        SELECT n.title, nv.views
        FROM news n
        LEFT JOIN (
            SELECT news_id, COUNT(DISTINCT ip) AS views
            FROM news_views
            WHERE datetime >= SUBDATE(NOW(), INTERVAL 24 HOUR)
            GROUP BY news_id
        ) AS nv ON nv.news_id = n.id
        ORDER BY views DESC
        LIMIT 15
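
    Before rewriting the query, it is worth checking EXPLAIN and making sure news_views has a composite index that covers the date filter, the grouping and the counted column, so the subquery can be answered from the index instead of a full table scan. A sketch (the index name is illustrative):

        -- range scan on datetime, then group by news_id and count distinct ip
        -- without touching the table rows
        ALTER TABLE news_views
            ADD INDEX idx_news_views_recent (`datetime`, news_id, ip);

    If the table keeps growing, pruning or pre-aggregating rows older than 24 hours is the next step; the subquery itself is already about as simple as it can be.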

    Read the article

  • How do I find the top N batters per year?

    - by Drew Stephens
    I'm playing around with the Lahman Baseball Database in a MySQL instance. I want to find the players who topped home runs (HR) for each year. The Batting table has the following (relevant parts) of its schema:

        +-----------+----------------------+------+-----+---------+-------+
        | Field     | Type                 | Null | Key | Default | Extra |
        +-----------+----------------------+------+-----+---------+-------+
        | playerID  | varchar(9)           | NO   | PRI |         |       |
        | yearID    | smallint(4) unsigned | NO   | PRI | 0       |       |
        | HR        | smallint(3) unsigned | YES  |     | NULL    |       |
        +-----------+----------------------+------+-----+---------+-------+

    For each year, every player has an entry (between hundreds and 12k per year, going back to 1871). Getting the top N hitters for a single year is easy:

        SELECT playerID, yearID, HR
        FROM Batting
        WHERE yearID = 2009
        ORDER BY HR DESC
        LIMIT 3;

        +-----------+--------+------+
        | playerID  | yearID | HR   |
        +-----------+--------+------+
        | pujolal01 | 2009   |   47 |
        | fieldpr01 | 2009   |   46 |
        | howarry01 | 2009   |   45 |
        +-----------+--------+------+

    But I'm interested in finding the top 3 from every year. I've found solutions like this, describing how to select the top from a category, and I've tried to apply it to my problem, only to end up with a query that never returns:

        SELECT b.yearID, b.playerID, b.HR
        FROM Batting AS b
        LEFT JOIN Batting b2 ON (b.yearID = b2.yearID AND b.HR <= b2.HR)
        GROUP BY b.yearID
        HAVING COUNT(*) <= 3;

    Where have I gone wrong?
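
    The self-join compares every row against every other row for the same year before grouping, which is why it never comes back on a table this size. A correlated-count variant of the same trick returns each year's top 3 (plus ties) and only needs an index on (yearID, HR) to stay fast:

        SELECT b.yearID, b.playerID, b.HR
        FROM Batting AS b
        WHERE (
            SELECT COUNT(*)
            FROM Batting AS b2
            WHERE b2.yearID = b.yearID
              AND b2.HR > b.HR
        ) < 3
        ORDER BY b.yearID, b.HR DESC;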

    Read the article

  • Should Java try blocks be scoped as tightly as possible?

    - by isme
    I've been told that there is some overhead in using the Java try-catch mechanism. So, while it is necessary to put methods that throw checked exceptions within a try block to handle the possible exception, it is supposedly good practice, performance-wise, to limit the size of the try block to contain only those operations that could throw exceptions. I'm not so sure that this is a sensible conclusion.

    Consider the two implementations below of a function that processes a specified text file. Even if it is true that the first one incurs some unnecessary overhead, I find it much easier to follow. It is less clear where exactly the exceptions come from just from looking at the statements, but the comments clearly show which statements are responsible. The second one is much longer and more complicated than the first. In particular, the nice line-reading idiom of the first has to be mangled to fit the readLine call into a try block.

    What is the best practice for handling exceptions in a function where multiple exceptions could be thrown in its definition?

    This one contains all the processing code within the try block:

        void processFile(File f) {
            try {
                // construction of FileReader can throw FileNotFoundException
                BufferedReader in = new BufferedReader(new FileReader(f));
                // call of readLine can throw IOException
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }

    This one contains only the methods that throw exceptions within try blocks:

        void processFile(File f) {
            FileReader reader;
            try {
                reader = new FileReader(f);
            } catch (FileNotFoundException ex) {
                handle(ex);
                return;
            }
            BufferedReader in = new BufferedReader(reader);
            String line;
            while (true) {
                try {
                    line = in.readLine();
                } catch (IOException ex) {
                    handle(ex);
                    break;
                }
                if (line == null) {
                    break;
                }
                process(line);
            }
        }
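
    On Java 7 and later, the first (tight, readable) shape can also close the reader reliably via try-with-resources, which neither original version does; a sketch:

        void processFile(File f) {
            // the reader is closed automatically, exception or not
            try (BufferedReader in = new BufferedReader(new FileReader(f))) {
                String line;
                while ((line = in.readLine()) != null) {
                    process(line);
                }
            } catch (FileNotFoundException ex) {
                handle(ex);
            } catch (IOException ex) {
                handle(ex);
            }
        }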

    Read the article

  • I have many processes in my use case, is this normal?

    - by BugKiller
    Hi, I'm working on a big system that consists of many subsystems, each depending on the others. I wrote a use case for this system, but I notice that it has many processes (more than 40!). It looks like this:

    Group subsystem:
        add Group
        remove Group
        join Group
        upload file
        create poll
        remove file
        remove poll
        write post/topic
        close post
        edit post
        ...

    Messages Center:
        send message
        view inbox
        read message

    and so on. Each user interacts with these processes. How can I reduce the number of these processes? Is it possible to divide the use-case processes into many pages?

    Read the article

  • Is there an implementation of rapid concurrent syntactical sugar in Scala? e.g. map-reduce

    - by TiansHUo
    Passing messages around with actors is great. But I would like to have even easier code. Examples (pseudo-code):

        val splicedList: List[List[Int]] = biglist.partition(100)
        val sum: Int = ActorPool.numberOfActors(5).getAllResults(splicedList, foldLeft(_ + _))

    where spliceIntoParts turns one big list into 100 small lists, the numberOfActors part creates a pool which uses 5 actors and receives new jobs after a job is finished, and getAllResults uses a method on a list, all of this done with message passing in the background. And maybe getFirstResult, which calculates the first result and stops all other threads (like cracking a password).
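
    Depending on the Scala version, much of this sugar already exists in the standard library: parallel collections cover the split/map/reduce case, and futures cover the "first result wins" case. A rough sketch, assuming a 2.10+ standard library (scala.concurrent):

        import scala.concurrent._
        import scala.concurrent.duration._
        import ExecutionContext.Implicits.global

        object Sketch extends App {
          val biglist = (1 to 10000).toList

          // map-reduce style: the library splits the work across a thread pool
          val parSum: Int = biglist.par.sum

          // or explicitly over 100-element chunks, futures doing the messaging
          val chunkedSum: Future[Int] =
            Future.traverse(biglist.grouped(100).toList)(chunk => Future(chunk.sum))
                  .map(_.sum)

          // "first result wins" (password-cracking style)
          def search(seed: Int): Int = { Thread.sleep(50L * seed); seed }
          val first: Future[Int] =
            Future.firstCompletedOf(Seq(Future(search(3)), Future(search(1))))

          println(Await.result(chunkedSum, 1.minute))
          println(Await.result(first, 1.minute))
        }

    An actor pool (Akka routers, for instance) is still the better fit when workers need state or supervision; for pure data parallelism the collection-level operations are usually enough.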

    Read the article

  • Stream classes ... design, pattern for creating views over streams

    - by ToxicAvenger
    A question regarding the design of stream classes: I need a pattern for creating independent views over a single stream instance (in my case for reading). A view would be a consecutive part of the stream. The problem I have with the stream classes is that the state (reading or writing) is coupled with the underlying data/storage. So if I need to partition a stream into different segments (whether the segments overlap or not doesn't matter), I cannot easily create views over the stream; the views would store a start and end position.

    Reading from a view - which would translate to reading from the underlying stream adjusted by the view's start/end positions - changes the state of the underlying stream instance. So what I could do, on a read from a view instance, is adjust the Position of the stream and read the chunks I need; but I cannot do that concurrently. Why is it designed in such a way, and what kind of pattern could I implement to create independent views over a single stream instance that allow reading/writing independently (and concurrently)?
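
    One workable pattern is to give each view its own position and translate every read into a positioned read on the shared stream under a common lock, so no view ever depends on the underlying stream's Position between calls. A minimal read-only C# sketch (error handling and argument checks omitted):

        using System;
        using System.IO;

        public sealed class StreamView : Stream
        {
            private readonly Stream _source;    // shared, must be seekable
            private readonly object _gate;      // one gate per underlying stream
            private readonly long _start;
            private readonly long _length;
            private long _position;             // view-local position

            public StreamView(Stream source, object gate, long start, long length)
            {
                _source = source; _gate = gate; _start = start; _length = length;
            }

            public override int Read(byte[] buffer, int offset, int count)
            {
                lock (_gate)                    // serialize access to the shared stream
                {
                    long remaining = _length - _position;
                    if (remaining <= 0) return 0;
                    _source.Position = _start + _position;
                    int read = _source.Read(buffer, offset, (int)Math.Min(count, remaining));
                    _position += read;
                    return read;
                }
            }

            public override long Seek(long offset, SeekOrigin origin)
            {
                if (origin == SeekOrigin.Begin) _position = offset;
                else if (origin == SeekOrigin.Current) _position += offset;
                else _position = _length + offset;
                return _position;
            }

            public override bool CanRead { get { return true; } }
            public override bool CanSeek { get { return true; } }
            public override bool CanWrite { get { return false; } }
            public override long Length { get { return _length; } }
            public override long Position { get { return _position; } set { _position = value; } }
            public override void Flush() { }
            public override void SetLength(long value) { throw new NotSupportedException(); }
            public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
        }

    Views created this way can be read from multiple threads (the lock only covers the seek+read pair); with memory-mapped files (MemoryMappedFile, .NET 4+) even the lock can go away, since each view gets its own accessor stream.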

    Read the article

  • What scenarios are possible where the VS C# compiler would not compile a reference of a reference?

    - by SuperKing
    Hello, I'm probably asking this question wrong (and that may be why Google isn't helping), but here goes: In Visual Studio I am compiling a C# project (let's call it Project A, the startup project) which has a reference to Project B. Project B has a reference to Project C, so when A gets built, the DLL for B gets placed in the bin directory of A, as does the DLL for C (because B requires C, and A requires B). However, I have apparently made some change recently so that the DLL for Project C does not go into the bin directory of Project A when rebuilding the solution. I have no idea what I've done to make this happen.

    I have not modified the setup of the solution itself, and I have only added additional references to the project files. Code-wise, I have commented out most of the actual code in Project B that references classes in Project C, but did not remove the reference from the project itself (I don't think this matters). I was told that perhaps the C# compiler was optimizing somehow so that it was not building Project C, but really I'm out of ideas. I would think someone has run into something similar before. Any thoughts? Thanks!

    Read the article

  • Do we need to differentiate anything for this in IE8?

    - by kumar
    I have this code in my application:

        var checked = $('#fieldset input[type=checkbox]:checked');
        var ids = checked.map(function() {
            return $(this).val();
        }).get().join(',');

    In Firefox I get all of the checked IDs, something like 123,234,443, but the same code in IE8 shows only the first ID, not all of the checked ones, even though they are checked. And if I uncheck the first checkbox and check the second one, the second checkbox's value shows up as null. Can anybody help me out? Thanks.
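
    Hard to diagnose without the markup, but a purely illustrative debugging version that logs what IE8 actually sees for each checkbox can narrow down whether the selector or the value attribute is at fault:

        // logs each checked box's raw attribute and jQuery's view of its value
        var ids = [];
        $('#fieldset input:checkbox:checked').each(function () {
            if (window.console) {
                console.log(this.id, this.getAttribute('value'), $(this).val());
            }
            ids.push($(this).val());
        });
        var idString = ids.join(',');

    If getAttribute('value') is already empty here, the problem is in how the checkboxes are rendered (or in values set after page load), not in the map/join call. Note that IE8's console only exists while the developer tools are open.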

    Read the article

  • Which are the RDBMS that minimize the server roundtrips? Which RDBMS are better (in this area) than

    - by user193655
    When latency is high ("when pinging the server takes time"), the server roundtrips make the difference. Now I don't want to focus on the roundtrips created in programming, but on the roundtrips that occur "under the hood" in the DB engine, i.e. the roundtrips that are 100% dependent on how the RDBMS itself is written. I have been told that Firebird has more roundtrips than MySQL, but this is the only information I have. I am currently supporting MS SQL but I'd like to change RDBMS (because I use the Express Editions and in my scenario they are quite limiting from a performance point of view), so to make a wise choice I would like to include this point in "my RDBMS comparison feature matrix" to understand which is the best RDBMS to choose as an alternative to MS SQL.

    So the statement above would make me prefer MySQL to Firebird (for the roundtrips concept, not in general), but can anyone add information? And where does MS SQL sit? Is someone able to "rank" the roundtrip performance of the main RDBMSs, or at least MS SQL, MySQL, PostgreSQL and Firebird? (I am not interested in Oracle since it is not free, and if I have to change I would change to a free RDBMS.) Anyway, MySQL (as mentioned several times on Stack Overflow) has an unclear future and a not-100%-free license, so my final choice will probably fall on PostgreSQL or Firebird. Additional info: you could answer my question by making a simple list like MSSQL:3; MySQL:1; Firebird:2; PostgreSQL:2 (where 1 is good, 2 average, 3 bad). Of course, if you can post some links where roundtrips per RDBMS are compared, that would be great.

    Read the article

  • MySQL GROUP_CONCAT + IN() = missing data :-(

    - by Andrew Heath
    Example:

    Table: box

        boxID  color
        01     red
        02     blue
        03     green

    Table: boxHas

        boxID  has
        01     apple
        01     pear
        01     grapes
        01     banana
        02     lime
        02     apple
        02     pear
        03     chihuahua
        03     nachos
        03     baby crocodile

    I want to query on the contents of each box, and return a table with each ID, color, and a column that concatenates the contents of each box, so I use:

        SELECT box.boxID, box.color,
               GROUP_CONCAT(DISTINCT boxHas.has SEPARATOR ", ") AS contents
        FROM box
        LEFT JOIN boxHas ON box.boxID = boxHas.boxID
        WHERE boxHas.has IN ('apple','pear')
        GROUP BY box.boxID
        ORDER BY box.boxID

    and I get the following table of results:

        boxID  color  contents
        01     red    apple, pear
        02     blue   apple, pear

    My question to you is: why isn't it listing ALL the has values in the contents column? Why is my WHERE statement also cropping my GROUP_CONCAT? The table I thought I was going to get is:

        boxID  color  contents
        01     red    apple, banana, grapes, pear
        02     blue   apple, lime, pear

    Although I want to limit my boxID results based upon the WHERE statement, I do not want to limit the contents field for valid boxes. :-/ Help?
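
    The WHERE clause filters the joined rows before GROUP_CONCAT ever sees them, so only the apple/pear rows survive into each group. One way to keep the full contents is to pick the qualifying boxes in a subquery and leave the join itself unfiltered:

        SELECT box.boxID, box.color,
               GROUP_CONCAT(DISTINCT boxHas.has ORDER BY boxHas.has SEPARATOR ', ') AS contents
        FROM box
        LEFT JOIN boxHas ON box.boxID = boxHas.boxID
        WHERE box.boxID IN (SELECT boxID FROM boxHas WHERE has IN ('apple', 'pear'))
        GROUP BY box.boxID
        ORDER BY box.boxID;

    The subquery decides which boxes appear; the outer join then aggregates everything those boxes contain.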

    Read the article

  • Not able to pass multiple override parameters using nose-testconfig 0.6 plugin in nosetests

    - by Jaikit
    Hi, I am able to override multiple config parameters using the nose-testconfig plugin only if I pass the overriding parameters on the command line, e.g.:

        nosetests -c nose.cfg -s --tc=jack.env1:asl --tc=server2.env2:abc

    But when I define the same thing inside nose.cfg, only the value for the last parameter is applied, e.g.:

        tc = server2.env2:abc
        tc = jack.env1:asl

    I checked the plugin code and it looks fine to me. I am pasting the relevant part of the plugin code below:

        parser.add_option(
            "--tc",
            action="append",
            dest="overrides",
            default=[],
            help="Option:Value specific overrides.")

    configure:

        if options.overrides:
            self.overrides = []
            overrides = tolist(options.overrides)
            for override in overrides:
                keys, val = override.split(":")
                if options.exact:
                    config[keys] = val
                else:
                    ns = ''.join(['["%s"]' % i for i in keys.split(".")])
                    # BUG: Breaks if the config value you're overriding is not
                    # defined in the configuration file already. TBD
                    exec('config%s = "%s"' % (ns, val))

    Let me know if anyone has any clue.
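
    A possible explanation rather than a confirmed fix: the command line works because action="append" accumulates every --tc occurrence, but a .cfg file is read through ConfigParser first, and in Python 2 a key that appears twice in the same section silently keeps only the last value, so the plugin only ever receives one override. A quick way to confirm the collapsing behaviour:

        # Python 2 sketch: duplicate options in one section collapse to the last one
        import ConfigParser
        from StringIO import StringIO

        cfg = StringIO("[nosetests]\ntc = server2.env2:abc\ntc = jack.env1:asl\n")
        parser = ConfigParser.ConfigParser()
        parser.readfp(cfg)
        print parser.get("nosetests", "tc")   # prints only 'jack.env1:asl'

    If that is the cause, the overrides have to stay on the command line (or be combined into whatever single-key form the plugin version supports); the plugin code itself never sees the second value.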

    Read the article

  • Using a "vo" for joined data?

    - by keithjgrant
    I'm building a small financial system. Because of double-entry accounting, transactions always come in batches of two or more, so I've got a batch table and a transaction table. (The transaction table has batch_id, account_id, and amount fields, and shared data like date and description are relegated to the batch table.) I've been using basic vo-type models for each table so far. Because of this table structure, though, transactions will almost always be selected with a join on the batch table. So should I take the selected records and splice them into two separate vo objects, or should I create a "shared" vo that contains both batch and transaction data? There are a few cases in which batch records and/or transaction records are used on their own. Are there possible pitfalls down the road if I have "overlapping" vo classes?

    Read the article

  • Automatically create table on MySQL server based on date?

    - by Anthony
    Is there an equivalent to cron for MySQL? I have a PHP script that queries a table based on the month and year, like:

        SELECT * FROM data_2010_1

    What I have been doing until now is: every time the script executes, it checks for the table and, if it exists, does the work; if it doesn't, it creates the table. I was wondering if I can just set something up on the MySQL server itself that will create the table (based on a default table) at the stroke of midnight on the first of the month.

    Update: Based on the comments I've gotten, I'm thinking this isn't the best way to achieve my goal. So here are two more questions:

    1. If I have a table with thousands of rows added monthly, is this potentially a drag on resources? If so, what is the best way to partition this table, since the above is verboten?
    2. What are the potential problems with the home-grown method I originally thought up?
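
    On the partitioning question: native range partitioning (MySQL 5.1+) keeps everything in one logical table, so the PHP side never has to create anything and the optimizer only reads the months a query touches. A sketch with illustrative column names:

        CREATE TABLE data (
            id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
            created DATE NOT NULL,
            payload VARCHAR(255),
            PRIMARY KEY (id, created)        -- the partition column must be part of the key
        )
        PARTITION BY RANGE (TO_DAYS(created)) (
            PARTITION p201001 VALUES LESS THAN (TO_DAYS('2010-02-01')),
            PARTITION p201002 VALUES LESS THAN (TO_DAYS('2010-03-01')),
            PARTITION pmax    VALUES LESS THAN MAXVALUE
        );

    New monthly partitions still have to be added ahead of time (the EVENT scheduler in 5.1+ can do that), but queries and application code stay the same. That said, a table gaining only thousands of rows per month is small by MySQL standards and usually needs nothing more than sensible indexes.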

    Read the article

  • Deleting orphans with JPA

    - by homaxto
    I have a one-to-one relation where I use CascadeType.PERSIST. This has over time built up a huge number of child records that have not been deleted, to such an extent that it is reflected in the performance. Now I wish to add some code that cleans up the database, removing all the child records that are not referenced by a parent. At the moment we are talking 400K+ records, and I need to run the code on all customer installations just to be sure they do not run into the same problem. I think the best solution would be to run a named query (because we support two databases) that deletes the necessary records, and this is where I get into problems, because how should I write it in JPQL? The result I want can be defined by the following SQL statement, which unfortunately does not run on MySQL:

        DELETE FROM child c1
        WHERE c1.pk NOT IN (
            SELECT DISTINCT p.pk
            FROM child c2
            JOIN parent p ON p.child = c2.pk
        );
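
    On the MySQL failure: MySQL rejects a DELETE whose subquery reads from the table being deleted ("You can't specify target table ... for update in FROM clause"). The usual native workaround is an anti-join, which could be registered as a named native query per supported database:

        -- MySQL multi-table DELETE: remove children no parent points at
        DELETE c
        FROM child AS c
        LEFT JOIN parent AS p ON p.child = c.pk
        WHERE p.pk IS NULL;

    Going forward, mapping the relation with orphan removal (Hibernate's delete-orphan cascade, or orphanRemoval=true in JPA 2.0) keeps the table from filling up again.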

    Read the article

  • Help with writing PHP code that repeats itself per array value

    - by Mohammad
    Hi, I'm using Closure Compiler to compress and join a few JavaScript files. The syntax is something like this:

        $c = new PhpClosure();
        $c->add("JavaScriptA.js")
          ->add("JavaScriptB.js")
          ->write();

    How could I make it systematically add more files from an array? Let's say that for each array element in

        $file = array('JavaScriptA.js', 'JavaScriptB.js', 'JavaScriptC.js', ...)

    it would execute the following code:

        $c = new PhpClosure();
        $c->add("JavaScriptA.js")
          ->add("JavaScriptB.js")
          ->add("JavaScriptC.js")
          ->add ...
          ->write();

    Thank you so much in advance!
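
    Since the chained calls imply that add() returns the PhpClosure instance itself, a plain loop over the array does the same thing as the long chain; a sketch using the array from the question:

        <?php
        $file = array('JavaScriptA.js', 'JavaScriptB.js', 'JavaScriptC.js');

        $c = new PhpClosure();
        foreach ($file as $js) {
            $c->add($js);   // chaining is optional when calling add() one at a time
        }
        $c->write();

    Even if add() happened not to return $this, the loop would still work, because each call operates on $c directly.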

    Read the article

  • Is it valid for Hibernate list() to return duplicates?

    - by skaffman
    Is anyone aware of the validity of Hibernate's Criteria.list() and Query.list() methods returning multiple occurrences of the same entity? Occasionally I find, when using the Criteria API, that changing the default fetch strategy in my class mapping definition (from "select" to "join") can affect how many references to the same entity appear in the resulting output of list(), and I'm unsure whether to treat this as a bug or not. The javadoc does not define it; it simply says "The list of matched query results." (thanks guys). If this is expected and normal behaviour, then I can de-dup the list myself, that's not a problem, but if it's a bug, then I would prefer to avoid it rather than de-dup the results and try to ignore it. Has anyone got any experience of this?
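
    This is generally expected behaviour with join fetching rather than a bug: the SQL outer join returns one row per fetched child, and list() hands back one root-entity reference per row. The stock in-memory fix is the distinct-root-entity transformer (entity and association names below are illustrative):

        List<Order> orders = session.createCriteria(Order.class)
                .setFetchMode("lineItems", FetchMode.JOIN)
                .setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY) // de-dup per root
                .list();

    Wrapping the result in a LinkedHashSet achieves the same thing while preserving order.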

    Read the article

  • Querying Two Tables At Once

    - by John
    Hello, I am trying to do what I believe is called a join query. First, in a MySQL table called "login," I want to look up what "loginid" is in the record where "username" equals $profile. (This will be just one record / row in the MySQL table.) Then, I want to take that "loginid" and look up all rows / records in a different MySQL table called "submission," and pull data that have that "loginid." This could possibly be more than one record / row. How do I do this? The code below doesn't seem to work. Thanks in advance, John

        $profile = mysql_real_escape_string($_GET['profile']);

        $sqlStr = "SELECT l.username, l.loginid, s.loginid, s.submissionid, s.title,
                          s.url, s.datesubmitted, s.displayurl
                   FROM submission AS s, login AS l
                   WHERE l.username = '$profile', s.loginid = l.loginid
                   ORDER BY s.datesubmitted DESC";
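
    The immediate problem is the comma between the two WHERE conditions: SQL expects AND there (or, more readably, an explicit JOIN). A corrected version of the query itself:

        SELECT l.username, l.loginid, s.loginid, s.submissionid, s.title,
               s.url, s.datesubmitted, s.displayurl
        FROM submission AS s
        INNER JOIN login AS l ON s.loginid = l.loginid
        WHERE l.username = '$profile'
        ORDER BY s.datesubmitted DESC

    The join already covers the "look up loginid first" step, so a single query returns all of that user's submissions.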

    Read the article

  • Problem with returning values from a helper method in Rails

    - by True Soft
    I want to print some objects in a table, with 2 rows per object, like this:

        <tr class="title">
          <td>Name</td><td>Price</td>
        </tr>
        <tr class="content">
          <td>Content</td><td>123</td>
        </tr>

    I wrote a helper method in products_helper.rb, based on the answer to this question:

        def write_products(products)
          products.map { |product|
            content_tag :tr, :class => "title" do
              content_tag :td do
                link_to h(product.name), product, :title => product.name
              end
              content_tag :td do
                product.price
              end
            end
            content_tag :tr, :class => "content" do
              content_tag :td, h(product.content)
              content_tag :td, product.count
            end
          }.join
        end

    But this does not work as expected. It only returns the last node - the last <td>123</td>. What should I do to make it work?
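
    The underlying issue is that only the last expression of each block is returned: the first content_tag :tr is evaluated and discarded, and inside the blocks the intermediate content_tag :td calls are discarded too. Each iteration has to combine its pieces explicitly; a sketch of one way to do that (on Rails 3 the joined string may also need html_safe):

        def write_products(products)
          products.map { |product|
            title_row = content_tag(:tr, :class => "title") do
              content_tag(:td, link_to(h(product.name), product, :title => product.name)) +
                content_tag(:td, product.price)
            end
            content_row = content_tag(:tr, :class => "content") do
              content_tag(:td, h(product.content)) + content_tag(:td, product.count)
            end
            title_row + content_row   # hand both rows back from this iteration
          }.join
        end

    The same idea works with an explicit output buffer (concat) if the helper is meant to write directly into the template.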

    Read the article

  • Subqueries in LINQ

    - by user297378
    Hey all, I am trying to do a subquery in LINQ, but the subquery is meant to be a single value and it does not seem to be working; can anyone help out? I am using the Entity Framework and I keep getting an int-to-string error, and I'm not sure why.

        from lrp in remit.log_record_product
        join lr in remit.log_record on lrp.log_record_id equals lr.log_record_id
        where (lrp.que_submit_date >= RadDatePickerStartDate.SelectedDate)
           && (lrp.que_submit_date <= RadDatePickerEndDate.SelectedDate)
        select new
        {
            lrp.que_submit_date,
            lr.officer_name,
            lr.c_fname,
            lr.c_lname,
            lrp.price_sold,
            lrp.product_cost,
            gap_account_number = (from gap in remit.gap_contracts
                                  where gap.log_record_product_id == lrp.log_record_product_id
                                  select gap.account_number),
            iui_account_number = (from iui in remit.iui_contracts
                                  where iui.log_record_product_id == lrp.log_record_product_id
                                  select iui.account_number),
            dp_account_number = (from dp in remit.dp_contracts
                                 where dp.log_record_product_id == lrp.log_record_product_id
                                 select dp.account_number),
            mpd_account_number = (from mpd in remit.mbp_contracts
                                  where mpd.log_record_product_id == lrp.log_record_product_id
                                  select mpd.product_account_number)
        }
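
    Each of those nested queries is a sequence, not a scalar, which is typically what breaks the projection. If one matching contract row per table is expected, appending FirstOrDefault() turns the sequence into a single value; the same change applies to the other three sub-selects:

        gap_account_number = (from gap in remit.gap_contracts
                              where gap.log_record_product_id == lrp.log_record_product_id
                              select gap.account_number).FirstOrDefault(),

    If several rows can legitimately match, keep the sequence instead and flatten or display it after materializing the results.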

    Read the article

  • Surrogate key for date dimension?

    - by Navin
    There are two schools of thought:

    1. Use a surrogate key, preferably in the format YYYYMMDD, as this will always be sequential.
    2. Eliminate the date dimension surrogate key and use the actual date instead.

    My questions to experts on dimensional modeling are:

    1. Which design would you prefer, and why?
    2. How should we handle unknown values in each of the cases? Can we simply place NULL in the fact table for unknown dates, since a foreign key can be NULL (and if not, why)?
    3. If we need to partition the fact table on the date column, how would we achieve that in case 1?

    I am inclined towards using the actual date and using NULL to represent unknown dates in the fact table, as date-related validation on the fact can be done without needing to look into the dimension table.

    Read the article
