Search Results

Search found 19412 results on 777 pages for 'multiple resultsets'.

Page 356/777 | < Previous Page | 352 353 354 355 356 357 358 359 360 361 362 363  | Next Page >

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible during peak hours (around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with fopen(file,'a') and then running a cron job to dump the "needed" data into MySQL, etc. I've also heard you need multiple servers and "load balancers", which I've never heard of, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable.

    The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes, which will normalize it and insert it into another MySQL table.

    How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if there are multiple intuitive ways of going at it, you smart people can think of... please let me know :)
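
    A minimal sketch of the flat-file buffering approach mentioned above (paths, table layout, and connection details are assumptions for illustration): the tracker appends one line per impression, and a cron job rotates the file and bulk-loads it in a single statement, so the database sees one big insert instead of thousands of small ones.

        <?php
        // tracker.php -- append one impression per request; no DB connection needed here.
        $line = sprintf("%d\t%s\t%s\n", time(), $_GET['site_id'], $_SERVER['REMOTE_ADDR']);
        file_put_contents('/var/spool/tracker/impressions.log', $line, FILE_APPEND | LOCK_EX);

        // loader.php -- run from cron every 15-30 minutes.
        $spool = '/var/spool/tracker/impressions.log';
        $batch = $spool . '.' . time();
        rename($spool, $batch); // atomic on the same filesystem; new hits start a fresh file
        $db = new mysqli('localhost', 'user', 'pass', 'tracker');
        // LOCAL INFILE must be enabled on the server/connection for this to work.
        $db->query("LOAD DATA LOCAL INFILE '" . $db->real_escape_string($batch) . "'
                    INTO TABLE impressions_raw (hit_time, site_id, ip)");
        unlink($batch);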

    Read the article

  • Is there an ORM that supports composition w/o Joins

    - by Ken Downs
    EDIT: Changed title from "inheritance" to "composition". Left body of question unchanged.

    I'm curious whether there is an ORM tool that supports inheritance without creating separate tables that have to be joined. Simple example: assume a table of customers with a bill-to address, and a table of vendors with a remit-to address. Keep it simple and assume one address each, not a child table of addresses for each. These addresses will have a handful of values in common: address 1, address 2, city, state/province, postal code. So let's say I'd have a class "addressBlock", and I want the customers and vendors to inherit from this class, and possibly from other classes. But I do not want separate tables that have to be joined; I want the columns in the customer and vendor tables respectively. Is there an ORM that supports this?

    The closest question I have found on StackOverflow that might be the same question is linked below, but I can't quite tell whether the OP is asking what I am asking. He seems to be asking about forgoing inheritance precisely because there will be multiple tables. I'm looking for the case where you can use inheritance without generating the multiple tables. Model inheritance approach with Django's ORM
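
    Several ORMs call this pattern "embedded values" or "components": the value object's fields are mapped into the owning entity's own table, with no join. As one sketch (not the only option), Doctrine 2's embeddables express it like this -- class and column names here are invented for illustration:

        <?php
        use Doctrine\ORM\Mapping as ORM;

        /** @ORM\Embeddable */
        class AddressBlock
        {
            /** @ORM\Column(type="string") */ private $address1;
            /** @ORM\Column(type="string") */ private $address2;
            /** @ORM\Column(type="string") */ private $city;
            /** @ORM\Column(type="string") */ private $postalCode;
        }

        /** @ORM\Entity */
        class Customer
        {
            /** @ORM\Id @ORM\Column(type="integer") @ORM\GeneratedValue */
            private $id;

            // These columns land in the customer table itself (billTo_address1,
            // billTo_city, ...) -- composition with no separate address table, no join.
            /** @ORM\Embedded(class="AddressBlock") */
            private $billTo;
        }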

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
          <Element>Value</Element>
          <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
          <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not multiple ones like above. The log files are also somewhat large, hitting 500-600 megs a log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
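
    For what it's worth, a PHP script doesn't have to read the entire file: reading line by line keeps memory constant regardless of log size. A sketch of that approach (the file names and the exact marker patterns are assumptions based on the sample above):

        <?php
        $in  = fopen('application.log', 'r');
        $out = null;
        $n   = 0;
        while (($line = fgets($in)) !== false) {
            if (preg_match('/(Request|Response) XML:\s*$/', $line)) {
                $out = fopen(sprintf('extract_%04d.xml', ++$n), 'w'); // start a new dump
                continue;
            }
            if ($out !== null) {
                // A leading "YYYY-MM-DD hh:mm:ss" means the XML block has ended.
                if (preg_match('/^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} /', $line)) {
                    fclose($out);
                    $out = null;
                } else {
                    fwrite($out, $line);
                }
            }
        }
        fclose($in);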

    Read the article

  • How can I optimize retrieving lowest edit distance from a large table in SQL?

    - by Matt
    Hey, I'm having trouble optimizing this Levenshtein distance calculation I'm doing. I need to do the following:

    1. Get the record with the minimum distance for the source string, as well as for a trimmed version of the source string
    2. Pick the record with the minimum distance
    3. If the min distances are equal (original vs. trimmed), choose the trimmed one with the lowest distance
    4. If there are still multiple records that fall under the above two categories, pick the one with the highest frequency

    Here's my working version:

        DECLARE @Results TABLE
        (
            ID int,
            [Name] nvarchar(200),
            Distance int,
            Frequency int,
            Trimmed bit
        )

        INSERT INTO @Results
        SELECT ID, [Name], (dbo.Levenshtein(@Source, [Name])) As Distance,
               Frequency, 'False' As Trimmed
        FROM MyTable

        INSERT INTO @Results
        SELECT ID, [Name], (dbo.Levenshtein(@SourceTrimmed, [Name])) As Distance,
               Frequency, 'True' As Trimmed
        FROM MyTable

        SET @ResultID = (SELECT TOP 1 ID FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @Result = (SELECT TOP 1 [Name] FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultDist = (SELECT TOP 1 Distance FROM @Results ORDER BY Distance, Trimmed, Frequency)
        SET @ResultTrimmed = (SELECT TOP 1 Trimmed FROM @Results ORDER BY Distance, Trimmed, Frequency)

    I believe what I need to do here is:

    - Not dump the results to a temporary table
    - Do only one SELECT from MyTable
    - Set the results right in that initial SELECT statement (since a SELECT can assign multiple variables in one statement)

    I know there has to be a good implementation to this but I can't figure it out... this is as far as I got:

        SELECT TOP 1 @ResultID = ID,
               @Result = [Name],
               (dbo.Levenshtein(@Source, [Name])) As distOrig,
               (dbo.Levenshtein(@SourceTrimmed, [Name])) As distTrimmed,
               Frequency
        FROM MyTable
        WHERE /* ... yeah I'm lost */
        ORDER BY distOrig, distTrimmed, Frequency

    Any ideas?

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated with each query?

    I have basically 5-6 instances of a PHP script running in parallel, having been launched via cron. Each script gets all the records in the items table, then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > '$currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the script,
            // and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. This makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get the latest currentItemId, even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
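
    This reads more like a race condition than lag: two scripts can both read the same currentItemId before either writes the new one. A sketch of a race-free alternative -- claim each row with a conditional UPDATE and check the affected-row count, so only one worker can win a given item (the mysqli usage and the status values are assumptions for illustration):

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'mydb'); // connection details assumed
        $items = $db->query("SELECT id FROM items WHERE status = 'pending' ORDER BY id");
        while ($row = $items->fetch_assoc()) {
            $id = (int) $row['id'];
            // Atomic claim: only one process can flip pending -> processing for this row.
            $db->query("UPDATE items SET status = 'processing'
                         WHERE id = $id AND status = 'pending'");
            if ($db->affected_rows === 1) {
                // We own this item: process it, then mark it done.
                // ... process the item ...
                $db->query("UPDATE items SET status = 'done' WHERE id = $id");
            }
            // affected_rows === 0 means another worker claimed it first; skip it.
        }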

    Read the article

  • Real-time embeddable HTTP server library required

    - by Howard May
    Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements. I need a library which presents an API which is 'pipelined'. Pipelining is used to describe an HTTP feature where multiple HTTP requests can be sent across a TCP link at a time without waiting for a response. I want a similar feature on the library API, where my application can receive all of those requests without having to send a response (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow:

    1. HTTP Client transmits http request 1
    2. HTTP Client transmits http request 2
    3. Web Server Library receives request 1 and passes it to My Web Server App
    4. My Web Server App receives request 1 and dispatches it to My System
    5. Web Server receives request 2 and passes it to My Web Server App
    6. My Web Server App receives request 2 and dispatches it to My System
    7. My Web Server App receives response to request 1 from My System and passes it to Web Server
    8. Web Server transmits HTTP response 1 to HTTP Client
    9. My Web Server App receives response to request 2 from My System and passes it to Web Server
    10. Web Server transmits HTTP response 2 to HTTP Client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding. Additional requirements are:

    - Embeddable into an existing 'C' application
    - Small footprint; I don't need all the functionality available in Apache etc.
    - Efficient; will need to support thousands of requests a second
    - Allows asynchronous responses to requests; there is a small latency to responses, and given the required request throughput a synchronous architecture is not going to work for me
    - Support for persistent TCP connections
    - Support for use with Server-Push / Comet connections
    - Open Source / GPL
    - Support for HTTPS
    - Portable across Linux and Windows; preferably more

    I will be very grateful for any recommendation. Best Regards

    Read the article

  • Drupal: How to place an image in Panels 3 panels and mini-panels, w/o views or nodes?

    - by msumme
    Is it possible, through any modular functionality, to insert an image into a (mini-)panel, either through token replacement, or through an upload dialog, or through a file selection menu? Do I have to use Views? Do I have to create nodes? Would the best way be to make a panel node and then embed it in a mini-panel, if I want a block-like panel that can be placed on multiple pages?

    I want to build a site with images in a particular layout as a small block, and make it very easy for my client to change those images in the future. I can think of some other ways to make this work, but it's driving me crazy that there seems to be no way to simply PUT an image in a mini-panel without having to upload it and hard-code an image tag. And since my client knows no HTML, coding it this way makes it unhelpful for him. And this mini-panel block is going to be used on a number of pages, and needs to be easily modified. I have been googling for about 45 minutes and come up with nothing useful.

    EDIT: OR EVEN just put ONLY one image from an image field w/ multiple values in a panel region on a panel node?

    Read the article

  • Creating a blocking Queue<T> in .NET?

    - by spoon16
    I have a scenario where I have multiple threads adding to a queue and multiple threads reading from the same queue. If the queue reaches a specific size, all threads that are filling the queue will be blocked on add until an item is removed from the queue. The solution below is what I am using right now and my question is: How can this be improved? Is there an object that already enables this behavior in the BCL that I should be using?

        internal class BlockingCollection<T> : CollectionBase, IEnumerable
        {
            //todo: might be worth changing this into a proper QUEUE

            private AutoResetEvent _FullEvent = new AutoResetEvent(false);

            internal T this[int i]
            {
                get { return (T) List[i]; }
            }

            private int _MaxSize;
            internal int MaxSize
            {
                get { return _MaxSize; }
                set
                {
                    _MaxSize = value;
                    checkSize();
                }
            }

            internal BlockingCollection(int maxSize)
            {
                MaxSize = maxSize;
            }

            internal void Add(T item)
            {
                Trace.WriteLine(string.Format("BlockingCollection add waiting: {0}", Thread.CurrentThread.ManagedThreadId));
                _FullEvent.WaitOne();

                List.Add(item);
                Trace.WriteLine(string.Format("BlockingCollection item added: {0}", Thread.CurrentThread.ManagedThreadId));

                checkSize();
            }

            internal void Remove(T item)
            {
                lock (List)
                {
                    List.Remove(item);
                }
                Trace.WriteLine(string.Format("BlockingCollection item removed: {0}", Thread.CurrentThread.ManagedThreadId));
            }

            protected override void OnRemoveComplete(int index, object value)
            {
                checkSize();
                base.OnRemoveComplete(index, value);
            }

            internal new IEnumerator GetEnumerator()
            {
                return List.GetEnumerator();
            }

            private void checkSize()
            {
                if (Count < MaxSize)
                {
                    Trace.WriteLine(string.Format("BlockingCollection FullEvent set: {0}", Thread.CurrentThread.ManagedThreadId));
                    _FullEvent.Set();
                }
                else
                {
                    Trace.WriteLine(string.Format("BlockingCollection FullEvent reset: {0}", Thread.CurrentThread.ManagedThreadId));
                    _FullEvent.Reset();
                }
            }
        }

    Read the article

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.)

    I have the relatively common scenario of a job table which has one row for each thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:

        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...

    I'm looking for a standard practice/pattern (or code -- C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action, and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:

    - I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought: I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times. See the sketch after this list for one shape that can take.)
    - Each item in the table should only be executed once.

    Some other thoughts:

    - I'm not particularly concerned with the implementation being done in the database (e.g. in T-SQL or PL/SQL code) vs. some external program code (e.g. a standalone executable or some action triggered by a web page) which is executed against the database.
    - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
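
    One widely used shape for the scan-and-claim step is an atomic conditional update that marks a row in-progress before working it, so concurrent workers never grab the same row. A sketch in PHP against MySQL, purely to illustrate the pattern (table and column names are invented; in the asker's C#/SQL Server stack the same idea can be expressed with UPDATE TOP (1) ... OUTPUT):

        <?php
        $db = new mysqli('localhost', 'user', 'pass', 'jobs'); // connection details assumed
        $worker = uniqid('worker_', true); // identifies this process
        while (true) {
            // Claim exactly one unclaimed, not-completed row for this worker.
            $db->query("UPDATE job SET claimed_by = '$worker'
                         WHERE completed = 0 AND claimed_by IS NULL
                         LIMIT 1");
            if ($db->affected_rows === 0) break; // nothing left to claim
            $row = $db->query("SELECT * FROM job
                                WHERE claimed_by = '$worker' AND completed = 0")
                      ->fetch_assoc();
            // ... do the action (send the email, etc.) ...
            $db->query("UPDATE job SET completed = 1, time_completed = NOW()
                         WHERE id = {$row['id']}");
        }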

    Read the article

  • How to know when the application is going to be destroyed/removed?

    - by Bhavesh Jethani
    My requirement is: I want to cancel my alarm manager or services when the application is destroyed/closed. Say I am in activity A and started activity B by clicking a button, and suddenly the user presses the home button: if the application keeps running in the background, don't cancel the service; but when the user removes it from the recent applications list, cancel the service at that point. The service/alarm manager should keep working even when the user has sent the application to the background by pressing the home button.

    There are a number of ways an application can be destroyed or removed:

    1. By clicking the back button multiple times, if the stack holds multiple activities.
    2. By pressing the home button and then removing the app from the recent list.

    I am facing an issue with the second option: when the user presses the home button, the service or alarm manager should keep working, but when the user removes the application from the recent list, I want to cancel the service/alarm manager.

    Second issue: my service is called at a fixed interval of time. If I write code in onCreate() and onDestroy() in each activity, then the timer will restart from the beginning. Is there any listener that tells me the application is going to be closed?

    Read the article

  • Logic for family tree program

    - by david robers
    Hi All, I am creating a family tree program in Java, or at least trying to. I have developed several classes:

    - Person: getters and setters for name, gender, age, etc.
    - FamilyMember: extends Person; getters and setters for setting parents and children
    - Family: consists of multiple family members, with methods for adding/removing members
    - FamilyTree: the main class for setting relationships

    I have two main problems.

    1) I need to set the relationships between people. Currently, for a mother-child relationship between FamilyMember A and FamilyMember B, I am doing:

        B.setMother(A);
        A.setChild(B);

    This seems very clunky, and it's getting very long-winded to implement all relationships. Any ideas on how to implement multiple relationships in a less procedural way?

    2) I have to be able to display the family tree. How can I do this? Are there any custom classes out there to make life easier?

    Thanks for your time...

    Read the article

  • XSLT: how to merge some lists of parameters

    - by buggy1985
    Hi, I have a URL structure like this:

        http://my.domain.com/generated.xml?param1=foo&param2=bar&xsl=path/to/my.xsl

    The generated XML will be transformed using the given XSL stylesheet. The two other parameters are integrated too, like this:

        <root>
          <params>
            <param name="param1">foo</param>
            <param name="param2">bar</param>
          </params>
          ...
        </root>

    Now I want to use XSLT to create a link with a new URI that keeps the existing parameters and adds one or multiple new parameters, like page=3 or sort=DESC. If a given parameter already exists, it should be replaced. I'm not sure how to do this: how to pass multiple (optional) parameters to a template, and how to merge two lists of parameters. Any ideas? Thanks ;)
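
    The merge rule itself -- keep existing parameters, let new ones replace or extend them -- is the key step. A sketch of the intended semantics in PHP (values are hypothetical; in array_merge, later arrays win on key collisions, which is exactly the "replace if present" behavior wanted here):

        <?php
        // Parameters already on the page, e.g. parsed from the request.
        $existing = array('param1' => 'foo', 'param2' => 'bar', 'sort' => 'ASC');
        // New or overriding parameters for the link being generated.
        $new = array('page' => 3, 'sort' => 'DESC');
        $merged = array_merge($existing, $new);
        echo 'generated.xml?' . http_build_query($merged);
        // => generated.xml?param1=foo&param2=bar&sort=DESC&page=3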

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start a PHP process per user, as there is no good way to send the chat messages between these PHP children. So I thought about writing my own socket server in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk nor in MySQL, but in RAM as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; on each incoming message, write to all socket connections). The problem is that there will most likely be a lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure if it can process multiple events at the same time (wouldn't that need multi-threading?) or if there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but as far as I know there are single-threaded IRC servers (written in C++ or whatever) which are capable of handling thousands of users, so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
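
    For what it's worth, a single PHP process can multiplex many sockets without threads via stream_select(), which blocks until some sockets are ready -- the same event-loop idea node.js is built on. A rough sketch of a broadcast loop (the port, buffer size, and broadcast-to-everyone policy are arbitrary choices for illustration):

        <?php
        $server  = stream_socket_server('tcp://0.0.0.0:8000', $errno, $errstr);
        $clients = array();
        while (true) {
            $read = $clients;
            $read[] = $server;
            $write = null;
            $except = null;
            stream_select($read, $write, $except, null); // block until something is ready
            foreach ($read as $sock) {
                if ($sock === $server) {
                    $clients[] = stream_socket_accept($server); // new chat user
                    continue;
                }
                $msg = fread($sock, 4096);
                if ($msg === '' || $msg === false) {            // user disconnected
                    unset($clients[array_search($sock, $clients, true)]);
                    fclose($sock);
                    continue;
                }
                foreach ($clients as $c) {
                    fwrite($c, $msg);                           // broadcast to the room
                }
            }
        }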

    Read the article

  • How do I put an ASP.NET website project and class library projects in one .sln file on Subversion

    - by JustinP8
    My company has several class libraries we use in multiple website projects (not web application projects). Website projects don't have .sln files, but I'm sure I've read in my past research that you can make a blank solution and put your website and class library projects in it. After answers to my previous questions, this is the direction that I'm going (based slightly on http://amadiere.com/blog/2009/06/multiple-subversion-projects-in-one-visual-studio-solution-using-svnexternals/):

        /websites
            /website1
                /trunk
                    /website1
        /libraries
            /library1
                /trunk
                    /library1
            /library2
                /trunk
                    /library2
            /etc...

    Then I planned on using svn:externals to copy /library1, /library2, and so on into the working_copy/websites/website1/ folder. I want my team members to be able to check out the /trunk folder for website1 and get a .sln file, a /library1 external, a /library2 external, etc. I want that .sln file to contain the website1 website project and all of the library external projects. Hopefully that would look something like:

        /working_copy
            /websites
                /website1
                    /trunk
                        /website1
                            /library1 (svn:external of libraries/library1/trunk/library1)
                            /library2 (svn:external of libraries/library2/trunk/library2)
                            /etc.
                        website1.sln

    So, at the end of all of this, the goal is that my teammates check out the trunk, open the solution, and everyone has the exact same solution. When we commit, everything is committed appropriately to Subversion (the website code and the libraries are committed to their appropriate places on the repo).

    How have others solved these issues? How can I make a .sln file that my team members and I can share in this manner?

    Read the article

  • MS Access (Jet) transactions, workspaces & scope

    - by Eric G
    I am having trouble with committing a transaction (using Access 2003 DAO). It's acting as if I never had called BeginTrans -- I get error 3034 on CommitTrans, "You tried to commit or rollback a transaction without first beginning a transaction", and the changes are written to the database (presumably because they were never wrapped in a transaction). However, BeginTrans is run, if you step through it.

    I am running it within the Access environment using the DBEngine(0) workspace. The tables I'm updating are all opened via a Jet database connection (to the same database) and updated using DAO.Recordset.Update. The connection is opened before BeginTrans is called. I'm not doing anything weird in the middle of the transaction like closing/opening connections or using multiple workspaces. There is one nested transaction level (basically it's wrapping multiple transacted updates in an outer transaction, so if any fail they all fail). The inner transactions run without errors; it's the outer transaction that doesn't work.

    Here are a few things I've looked into and ruled out:

    - The transaction is spread across several methods, and BeginTrans, CommitTrans and Rollback are all in different places. But when I tried a simple test of running a transaction this way, it doesn't seem like this should matter.
    - I thought maybe the database connection gets closed when it goes out of local scope, even though I have another 'global' reference to it (I'm never sure what DAO does with database connections, to be honest). But this seems not to be the case: right before the commit, the connection and its recordsets are alive (I can check their properties, EOF = False, etc.).
    - My CommitTrans and Rollback are done within event callbacks. (Very basically, a parser program throws an 'onLoad' or 'onLoadFail' event at the end of parsing, which I handle by either committing or rolling back the inserts I made during processing.) However, again, trying a simple test, it doesn't seem like this should matter.

    Any ideas why this isn't working for me? Thanks.

    Read the article

  • SELECT query with mysql_num_rows

    - by Andi Nugroho
    I want to create a search with multiple WHERE conditions, where $where_search holds conditions built up from a POSTed form. But I still get an error when I use ".$where_search." in the WHERE clause with mysql_num_rows for paging:

        $tampil2 = mysql_query("SELECT * FROM bb where ".$where_search."
                                and kd_kelompok='2' and kd_komoditi='11'
                                and nm_sebutan IS NOT NULL");

    This is the complete code:

        $where_search = "kd_pok='2' and kd_komoditi='11' ";

        if (isset($_POST['lakpus'])) {
            if (!empty($_POST['lakpus'])) {
                if (empty($where_search)) {
                    $where_search .= "lakpus = '$lakpus' ";
                } else {
                    $where_search .= "AND lakpus = '$lakpus' ";
                }
            }
        }

        if (isset($_POST['kd_por'])) {
            $kd_por = $_POST['kd_por'];
            if (!empty($_POST['kd_por'])) {
                if (empty($where_search)) {
                    $where_search .= "kd_por = '$kd_por' ";
                } else {
                    $where_search .= "AND tab1.kd_por = '$kd_por' ";
                }
            }
        }

        $max = 15;
        $tampil2 = mysql_query("SELECT * FROM bb where ".$where_search."
                                and kd_kelompok='2' and kd_komoditi='11'
                                and nm_sebutan IS NOT NULL");
        $jml = mysql_num_rows($tampil2);
        $jmlhal = ceil($jml/$max);
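
    A common way to keep this kind of query builder manageable -- and avoid dangling ANDs entirely -- is to collect the conditions in an array and implode them. A sketch using the question's field names (with escaping added, since raw $_POST values in SQL are an injection risk):

        <?php
        $conditions = array(
            "kd_kelompok = '2'",
            "kd_komoditi = '11'",
            "nm_sebutan IS NOT NULL",
        );
        if (!empty($_POST['lakpus'])) {
            $conditions[] = "lakpus = '" . mysql_real_escape_string($_POST['lakpus']) . "'";
        }
        if (!empty($_POST['kd_por'])) {
            $conditions[] = "kd_por = '" . mysql_real_escape_string($_POST['kd_por']) . "'";
        }
        $sql     = "SELECT * FROM bb WHERE " . implode(' AND ', $conditions);
        $tampil2 = mysql_query($sql);
        $jml     = mysql_num_rows($tampil2); // row count for the paging math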

    Read the article

  • Is it possible to redirect non-HTML files with HTTP? And chaining redirects?

    - by Earlz
    Hello, I have been thinking about a neat way of load balancing, and one thing that would be required is the ability to load an image on an HTML page from multiple locations without rewriting the URL (on each load). So what I need is one URL which is the "static" URL, such as http://example.com/myimage.png. The image is not actually contained on example.com, though; instead, example.com sends a 302, 301 or 307 HTTP response to cause a redirect to 2.example.com. How do browsers handle this with images in this situation?

    Also, how do browsers handle multiple redirections, for instance if 2.example.com also didn't contain the image and redirected on to 3.example.com? (I ask because I've never seen a 301 redirect on anything but an HTML page.)

    Also, which status code would be best to use? 301 means "moved permanently", and this "move" isn't permanent, so I don't want it cached. Should I use 307? Is that supported by search engines and modern browsers?
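
    A sketch of what the redirecting endpoint could look like in PHP (the mirror list and the random selection are invented for illustration; 302 is used so browsers don't cache the mapping the way they would a 301):

        <?php
        // example.com/myimage.png routes here (e.g. via a rewrite rule) and
        // bounces each request to whichever mirror should take the load.
        $mirrors = array('http://2.example.com', 'http://3.example.com');
        $target  = $mirrors[array_rand($mirrors)];
        header('Location: ' . $target . $_SERVER['REQUEST_URI'], true, 302);
        exit;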

    Read the article

  • Why would certain browsers request all pages on my ASP.Net Web site twice?

    - by Deane
    Firefox is issuing duplicate requests to my ASP.Net web site. It will request a page, get the response, then immediately issue the same request again (well, almost the same -- see below). This happens on every page of this particular web site (but not any others). IE does not do this, but Chrome also does this. I have confirmed that there is no Location header in the response, and no Javascript or meta tag in the page which would cause the page to be re-requested (if any of these were true, IE would be re-requesting pages as well). I have confirmed this behavior on multiple Firefox installs on multiple machines. Versions vary, but all are 3.x.

    The only difference between the two requests is the Accept header. For the first request, it looks like this:

        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

    For the second request, it looks like this:

        Accept: */*

    The Content-Type response header in all cases is:

        Content-Type: text/html; charset=utf-8

    Something else odd -- even though Firefox requests the page twice, it uses the first response and discards the second. I put a counter on a page that increments with every request, and I can watch the responses come back (via the Charles proxy). Firefox will get a "1" the first time and a "2" the second time, yet it will display the "1," for some reason. Chrome exhibits this exact same behavior. I suspect it's a protocol-level issue, given the difference in Accept headers, but I've never seen this before.

    Read the article

  • Problem with linking in gcc

    - by chitra
    I am compiling a program in which a header file is defined in multiple places. The contents of each header file are different: the variable names are the same, but the internal members of the structures differ. At link time, the linker picks up a library file which belongs to a different header, not the one used during compilation, and because of this I get a link-time error. Since there are so many libraries with the same name, I don't know which library is being picked up. I have a lot of OEM and other customized libraries which are part of this build.

    I checked the options in gcc which talk about selecting different library files to be included, but nowhere do I see an option that reports which libraries are being picked up by the linker. If the linker is able to find more than one library with the same name, which one it picks up is something I am not able to understand. I don't want to specify any path; rather, I want to understand how the linker resolves the multiple libraries that it is able to locate. I tried the -v option, but that doesn't list the path from which gcc picks up the library. I am using gcc on Linux.

    Any help in this regard is highly appreciated. Regards, Chitra

    Read the article

  • How might I wrap the FindXFile-style APIs to the STL-style Iterator Pattern in C++?

    - by BillyONeal
    Hello everyone :) I'm working on wrapping up the ugly innards of the FindFirstFile/FindNextFile loop (though my question applies to other similar APIs, such as RegEnumKeyEx or RegEnumValue, etc.) inside iterators that work in a manner similar to the Standard Template Library's istream_iterators. I have two problems here.

    The first is with the termination condition of most "foreach" style loops. STL-style iterators typically use operator!= inside the exit condition of the for, i.e.

        std::vector<int> test;
        for(std::vector<int>::iterator it = test.begin(); it != test.end(); it++)
        {
            //Do stuff
        }

    My problem is that I'm unsure how to implement operator!= with such a directory enumeration, because I do not know when the enumeration is complete until I've actually finished with it. I have a sort of hacked-together solution in place now that enumerates the entire directory at once, where each iterator simply tracks a reference-counted vector, but this seems like a kludge which can be done a better way.

    The second problem I have is that there are multiple pieces of data returned by the FindXFile APIs. For that reason, there's no obvious way to overload operator* as required for iterator semantics. When I overload that item, do I return the file name? The size? The modified date? How might I convey the multiple pieces of data to which such an iterator must refer in an idiomatic way? I've tried ripping off the C#-style MoveNext design, but I'm concerned about not following the standard idioms here.

        class SomeIterator
        {
        public:
            bool next(); //Advances the iterator and returns true if successful,
                         //false if the iterator is at the end.
            std::wstring fileName() const;
            //other kinds of data....
        };

    EDIT: And the caller would look like:

        SomeIterator x = ??; //Construct somehow
        while(x.next())
        {
            //Do stuff
        }

    Thanks! Billy3

    Read the article

  • Mercurial CLI is slow in C#?

    - by pATCheS
    I'm writing a utility in C# that will make managing multiple Mercurial repositories easier for the way my team is using it. However, it seems that there is always about a 300 to 400 millisecond delay before I get anything back from hg.exe. I'm using the code below to run hg.exe and hgtk.exe (TortoiseHg's GUI). The code currently includes a Stopwatch and some variables for timing purposes. The delay is roughly the same on multiple runs within the same session. I have also tried specifying the exact path of hg.exe, and got the same result. static string RunCommand(string executable, string path, string arguments) { var psi = new ProcessStartInfo() { FileName = executable, Arguments = arguments, WorkingDirectory = path, UseShellExecute = false, RedirectStandardError = true, RedirectStandardInput = true, RedirectStandardOutput = true, WindowStyle = ProcessWindowStyle.Maximized, CreateNoWindow = true }; var sbOut = new StringBuilder(); var sbErr = new StringBuilder(); var sw = new Stopwatch(); sw.Start(); var process = Process.Start(psi); TimeSpan firstRead = TimeSpan.Zero; process.OutputDataReceived += (s, e) => { if (firstRead == TimeSpan.Zero) { firstRead = sw.Elapsed; } sbOut.Append(e.Data); }; process.ErrorDataReceived += (s, e) => sbErr.Append(e.Data); process.BeginOutputReadLine(); process.BeginErrorReadLine(); var eventsStarted = sw.Elapsed; process.WaitForExit(); var processExited = sw.Elapsed; sw.Reset(); if (process.ExitCode != 0 || sbErr.Length > 0) { Error.Mercurial(process.ExitCode, sbOut.ToString(), sbErr.ToString()); } return sbOut.ToString(); } Any ideas on how I can speed things up? As it is, I'm going to have to do a lot of caching in addition to threading to keep the UI snappy.

    Read the article

  • Using NULLs in matchup table

    - by TomWilsonFL
    I am working on the accounting portion of a reservation system (think limo company). In the system there are multiple objects that can either be paid or submit a payment. I am tracking all of these "transactions" in three tables called tx, tx_cc, and tx_ch. tx generates a new tx_id (transaction ID) and keeps the information about amount, validity, etc. tx_cc and tx_ch keep the information about the credit card or check used, respectively, which link to other tables (credit_card and bank_account among others). This seems fairly normalized to me, no?

    Now here is my problem: the payment transaction can take place for a myriad of reasons. Either a reservation is being paid for, a travel agent that booked a reservation is being paid, a driver is being paid, etc. This results in multiple tables, one for each of the entities: agent_tx, driver_tx, reservation_tx, etc. They look like this:

        CREATE TABLE IF NOT EXISTS `driver_tx` (
          `tx_id` int(10) unsigned zerofill NOT NULL,
          `driver_id` int(11) NOT NULL,
          `reservation_id` int(11) default NULL,
          `reservation_item_id` int(11) default NULL,
          PRIMARY KEY (`tx_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

    Now this transaction is for a driver, but it could be applied to an individual item on the reservation or to the entire reservation overall. Therefore I require either reservation_id OR reservation_item_id to be NULL. In the future there may be other things a driver is paid for, which I would also add to this table, defaulting to NULL. What is the rule on this? Opinion? Obviously I could break this out into MANY three-column tables, but the amount of OUTER JOINing needed seems outrageous. Your input is appreciated. Peace, Tom

    Read the article

  • Instanced drawing with OpenGL ES 2.0

    - by Mårten Wikström
    In short: Is it possible to use the gl_InstanceID built-in variable in OpenGL ES 2.0? And, if so, how?

    Some more info: I want to draw multiple instances of an object using glDrawArraysInstanced and gl_InstanceID, and I want my application to run on multiple platforms, including iOS. The specification clearly says that these features require ES 3.0. According to the iOS Device Compatibility Reference, ES 3.0 is only available on a few devices (those based on the A7 GPU; so iPhone 5s, but not iPhone 5 or earlier). So my first assumption was that I needed to avoid using instanced drawing on older iOS devices.

    However, further down in the compatibility reference document it says that the EXT_draw_instanced extension is supported for all SGX Series 5 processors (that includes iPhone 5 and 4s). This makes me think that I could indeed use instanced drawing on older iOS devices too, by looking up and using the appropriate extension function (EXT or ARB) for glDrawArraysInstanced.

    I'm currently just running some test code using SDL and GLEW on Windows, so I haven't tested anything on iOS yet. However, in my current setup I'm having trouble using the gl_InstanceID built-in variable in a vertex shader. I'm getting the following error message:

        'gl_InstanceID' : variable is not available in current GLSL version

    Enabling the "draw_instanced" extension in GLSL has no effect:

        #extension GL_ARB_draw_instanced : enable
        #extension GL_EXT_draw_instanced : enable

    The error goes away when I specifically declare that I need ES 3.0 (GLSL 300 ES):

        #version 300 es

    Although that seems to work fine on my Windows desktop machine in an ES 2.0 context, I doubt that this would work on an iPhone 5. So, shall I abandon the idea of being able to use instanced drawing on older iOS devices?

    Read the article

  • How to keep form items selected after post request?

    - by Ole Jak
    I have a simple HTML form on a PHP page, with a multi-select list placed on the form. I submit this form (the selected list items) to the same page, so it refreshes on submit. I want the items which were POSTed to still be selected after the form was submitted. How can I do that? For my form I use this code:

        <form action="FormPage.php" method="post">
          <select id="Streams" class="multiselect ui-widget-content ui-corner-all"
                  multiple="multiple" name="Streams[]">
            <?php
            $query = " SELECT s.streamId, s.userId, u.username
                       FROM streams AS s
                       JOIN user AS u ON s.userId = u.id
                       LIMIT 0, 30 ";
            $streams_set = mysql_query($query, $connection);
            confirm_query($streams_set);
            $streams_count = mysql_num_rows($streams_set);
            while ($row = mysql_fetch_array($streams_set)) {
                echo '<option value="' , $row['streamId'] , '"> ' ,
                     $row['username'] , ' (' , $row['streamId'] , ')' , '</option> ';
            }
            ?>
          </select>
          <br/>
          <input type="submit" class="ui-state-default ui-corner-all"
                 name="submitForm" id="submitForm"
                 value="Play Stream from selected URL's!"/>
        </form>
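
    A sketch of one way to do it: read the POSTed selection back (the field is Streams[], so PHP exposes it as an array) and emit a selected attribute on the matching options. The in_array check below would slot into the existing while loop:

        <?php
        // Selections from the previous submit, or an empty array on first load.
        $selected = isset($_POST['Streams']) ? (array) $_POST['Streams'] : array();

        while ($row = mysql_fetch_array($streams_set)) {
            // Re-mark the option if its id was in the POSTed selection.
            $attr = in_array($row['streamId'], $selected) ? ' selected="selected"' : '';
            echo '<option value="' . $row['streamId'] . '"' . $attr . '>'
               . $row['username'] . ' (' . $row['streamId'] . ')'
               . '</option>';
        }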

    Read the article

  • How do you verify that 2 copies of a VB 6 executable came from the same code base?

    - by Tim Visher
    I have a program under version control that has gone through multiple releases. A situation came up today where someone had somehow managed to point to an old copy of the program and thus was encountering bugs that have since been fixed. I'd like to go back and just delete all the old copies of the program (keeping them around is a company policy that dates from before version control was common and should no longer be necessary), but I need a way of verifying that I can generate the exact same executable that is better than saying "The old one came out of this commit, so this one should be the same."

    My initial thought was to simply MD5-hash the executable, store the hash file in source control, and be done with it, but I've come up against a problem which I can't even parse. It seems that every time the executable is generated (method: Open Project, File > Make X.exe) it hashes differently. I've noticed that Visual Basic messes with files every time the project is opened, in seemingly random ways, but I didn't think that would make it into the executable, nor do I have any evidence that that is indeed what's happening. To try to guard against that, I tried generating the executable multiple times within the same IDE session and checking the hashes, but they continued to be different every time. So that's:

    1. Generate executable
    2. Generate MD5 checksum: md5sum X.exe > X.md5
    3. Verify MD5 for current executable: md5sum -c X.md5
    4. Generate new executable
    5. Verify MD5 for new executable: md5sum -c X.md5
    6. Verification fails because the computed checksum doesn't match

    I'm not understanding something about either MD5 or the way VB 6 is generating the executable, but I'm also not married to the idea of using MD5. If there is a better way to verify that two executables are indeed the same, then I'm all ears. Thanks in advance for your help!

    Read the article
