Search Results

Search found 6634 results on 266 pages for 'fast fashion'.

  • Is there any way to run "dir" directly?

    - by Mason Wheeler
    In my answer to this question, where the asker needed a fast way to get a directory listing of a folder on a network drive, I suggested using the DOS "dir" command. Unfortunately, it's an internal shell command, not a standalone program, so you can't execute it with CreateProcess, and I had to put it in a batch file instead. I don't really like that solution; it feels like a hack to me. Does anyone know a way to run dir from Delphi instead of from an external batch file?

  • Something like PPerl for Ruby?

    - by sal
    I've used PPerl for daemon-like processes. This program turns ordinary Perl scripts into long-running daemons, making subsequent executions extremely fast. It forks several processes for each script, allowing many processes to call the script at once. Does anyone know of something like this for Ruby? Right now I am planning on using a wrapper around curl to call a REST web service written in Sinatra running on JRuby. I'm hoping there is a simpler option.

  • How does Lucene work?

    - by Midhat
    I am trying to find out how Lucene search works so fast. I can't find any useful docs on the web. If you have anything (short of the Lucene source code) to read, let me know. A text search query using MySQL 5 full-text search with an index takes about 18 minutes in my case; a Lucene search for the same query takes less than a second.

  • Use one socket to send and receive data

    - by volody
    What makes more sense?

    - use one socket to send and receive data to/from an embedded hardware device
    - use one socket to send data and a separate socket to read data

    Communication is not very intensive, but the important point is to receive data as fast as possible. The application side runs Windows XP and up.

  • PHP CMS with ability to create custom tables

    - by Cracker
    I am building a website. I have created the database in MySQL. I need to build the web pages really fast! Is there a PHP CMS with which I can easily create web pages with forms that can modify my database tables? The point is that I don't want to code it in plain PHP or an MVC framework either. I looked at other CMSes like Drupal and Joomla, but it looks like it's difficult to make them use custom tables.

  • A graph problem

    - by copperhead
    I am struggling to solve the following problem: http://uva.onlinejudge.org/external/1/193.html However, I'm not able to find a fast solution, and judging by other solvers' times, there should be a solution of at most O(n^2) complexity: http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&category=3&page=show_problem&problemid=129&page=problem_stats Can I get some help?

  • Flex, Popups: rollout event, not always working

    - by Patrick
    Hi, I'm using PopupManager to create popups. I noticed that sometimes the popups are not removed on the mouseOut event, for some reason. I tried rolling over/out fast, but the popups work perfectly then, so I can't say why they sometimes remain visible instead of disappearing. Thanks

  • Maximum number of rows in a DBMS

    - by Am1rr3zA
    Is there any limit to the maximum number of rows in a table in a DBMS (specifically MySQL)? I want to create a table for saving a log file, and its row count increases very fast. What should I do to prevent any problems?
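
    One common way to keep a fast-growing log table manageable (a sketch only; it assumes MySQL 5.1+, where table partitioning is available, and the table and column names are hypothetical) is to partition by date, so old rows can be dropped per partition instead of with slow DELETEs:

        -- Hypothetical log table, partitioned by month. Old months can be
        -- removed cheaply with ALTER TABLE app_log DROP PARTITION p2010_05.
        CREATE TABLE app_log (
            id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
            logged_at DATETIME        NOT NULL,
            message   TEXT,
            PRIMARY KEY (id, logged_at)  -- the partition column must appear in every unique key
        ) ENGINE=InnoDB
        PARTITION BY RANGE (TO_DAYS(logged_at)) (
            PARTITION p2010_05 VALUES LESS THAN (TO_DAYS('2010-06-01')),
            PARTITION p2010_06 VALUES LESS THAN (TO_DAYS('2010-07-01')),
            PARTITION pmax     VALUES LESS THAN MAXVALUE
        );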

  • uefa.com is a great website

    - by olst
    Hi all. I was wondering if anyone knows what technology/web platform the uefa.com website was built with. Its page suffix is ".html", but I don't see how it could be built with just HTML, since it probably has a lot of dynamic content. Anyway, it's a great website with fast-loading pages and a nice design. Does anyone know who built it? Thanks.

  • Cocoa - does CGDataProviderCopyData() actually copy the bytes? Or just the pointer?

    - by jtrim
    I'm calling that function in quick succession, as fast as I can, and the faster the better, so obviously if CGDataProviderCopyData() is actually copying the data byte-for-byte, then I think there must be a faster way to directly access that data... it's just bytes in memory. Anyone know for sure if CGDataProviderCopyData() actually copies the data? Or does it just create a new pointer to the existing data?

  • Do non-clustered indexes slow down inserts?

    - by mikeinmadison
    I'm working in SQL Server 2005. I have an event log table that tracks user actions, and I want to make sure that inserts into the table are as fast as possible. Currently the table doesn't have any indexes. Does adding a single non-clustered index slow down inserts at all? Or is it only clustered indexes that slow down inserts? Or should I just add a clustered index and not worry about it?
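
    For reference, a minimal sketch of the usual pattern (table and index names here are hypothetical): a clustered index on an ever-increasing IDENTITY column makes inserts append to the end of the table, and each non-clustered index then adds roughly one extra write per insert:

        -- Hypothetical event-log table for SQL Server 2005. New rows are
        -- appended at the end because the clustered key always increases.
        CREATE TABLE dbo.EventLog (
            EventId  INT IDENTITY(1,1) NOT NULL,
            UserId   INT          NOT NULL,
            Action   VARCHAR(100) NOT NULL,
            LoggedAt DATETIME     NOT NULL DEFAULT GETDATE(),
            CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventId)
        );

        -- Each non-clustered index costs one additional write per insert,
        -- so add them only where reads need them.
        CREATE NONCLUSTERED INDEX IX_EventLog_UserId ON dbo.EventLog (UserId);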

  • C# System.Diagnostics.Process redirecting Standard Out for large amounts of data

    - by Matt
    I'm running an exe from a .NET app and trying to redirect standard out to a StreamReader. The problem is that when I do "myprocess.exe > out.txt", out.txt is close to 14 MB. The command-line version is very fast, but when I run the process from my C# app it is excruciatingly slow, because I believe the default stream reader flushes every 4096 bytes. Is there a way to change the default stream reader for the Process object?

  • Contains performs MUCH slower with variable vs constant string in MS SQL Server

    - by Greg R
    For some unknown reason, passing a variable to a full-text search in a stored procedure performs many times slower than executing the same statement with a constant value. Any idea why, and how can that be avoided?

    This executes very fast:

        SELECT * FROM table WHERE CONTAINS (comments, '123')

    This executes very slowly and times out:

        DECLARE @SearchTerm nvarchar(30)
        SET @SearchTerm = '123'
        SET @SearchTerm = '"' + @SearchTerm + '"'

        SELECT * FROM table WHERE CONTAINS (comments, @SearchTerm)

    Does this make any sense???
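
    One commonly suggested workaround (a sketch only; it assumes the slowdown comes from the optimizer building a generic plan for the variable rather than for the specific value) is to ask for a fresh plan on each execution:

        DECLARE @SearchTerm nvarchar(30)
        SET @SearchTerm = '"123"'

        SELECT *
        FROM table
        WHERE CONTAINS (comments, @SearchTerm)
        OPTION (RECOMPILE)  -- compile the plan for this specific value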

  • Report generation in PHP (formats required: PDF, XLS, DOC, CSV)

    - by Ish Kumar
    I need to generate reports on my PHP website (Zend Framework). Formats required:

    - PDF (with tables & images); presently using Zend_Pdf
    - XLS (with tables & images)
    - DOC (with tables & images)
    - CSV (only tables)

    Please recommend a robust and fast solution for generating reports in PHP. Platform: Zend Framework on LAMP. I know there are some tricky solutions for creating such reports; I wonder whether there is any open-source report generation utility that can be used in a LAMP environment.

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one-by-one and expects each record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple, since the early Jurassic era. An example simple XML is:

        <root>
            <rec><f1>v1</f1><f2>v2</f2></rec>
            <rec><f1>v1b</f1><f2>v2b</f2></rec>
            <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog, due to being a DOM parser that needs to build/store 100% of the data in memory. So it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code base (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple.

    I'm looking for an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can produce $record hashrefs one by one, based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is, e.g. whether I need to call next_record() or give it a callback coderef accepting a record.

  • SQL optimization: deletes taking a long time

    - by Will
    I have an Oracle SQL query as part of a stored proc:

        DELETE FROM item i
        WHERE NOT EXISTS (SELECT 1 FROM item_queue q WHERE q.n = i.n)
          AND NOT EXISTS (SELECT 1 FROM tool_queue t WHERE t.n = i.n);

    A bit about the tables:

    - item contains about 10k rows, with an index on the n column
    - item_queue contains about 1 million rows, also with an index on n
    - tool_queue contains about 5 million rows, indexed as well

    I am wondering if the query/subqueries can be optimized somehow to make them run faster; I thought that deletes were generally fairly fast.
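
    Before rewriting anything, a minimal first step (a sketch, assuming a version of Oracle where DBMS_XPLAN is available) is to check whether the optimizer probes the two big indexes once per item row or uses hash anti-joins:

        EXPLAIN PLAN FOR
        DELETE FROM item i
        WHERE NOT EXISTS (SELECT 1 FROM item_queue q WHERE q.n = i.n)
          AND NOT EXISTS (SELECT 1 FROM tool_queue t WHERE t.n = i.n);

        -- Look for HASH JOIN ANTI versus nested-loop index probes
        -- against item_queue and tool_queue in the output.
        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);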

  • UIScrollView Infinite Scrolling

    - by Ben Robinson
    I'm attempting to set up a scroll view with infinite (horizontal) scrolling.

    Scrolling forward is easy: I have implemented scrollViewDidScroll, and when the contentOffset gets near the end I make the scroll view's contentSize bigger and add more data into the space (I'll have to deal with the crippling effect this will have later!).

    My problem is scrolling back. The plan is to detect when I get near the beginning of the scroll view; when I do, I make the contentSize bigger, move the existing content along, add the new data to the beginning and then, importantly, adjust the contentOffset so the data under the viewport stays the same. This works perfectly if I scroll slowly (or enable paging), but if I go fast (not even very fast!) it goes mad! Here's the code:

        - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
            float pageNumber = scrollView.contentOffset.x / 320;
            float pageCount = scrollView.contentSize.width / 320;

            if (pageNumber > pageCount - 4) {
                // Add 10 new pages to end
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200,
                                                        mainScrollView.contentSize.height);
                // add new data here at (320 * pageCount, 0);
            }

            // *** the problem is here - I use updatingScrollingContent to make
            // sure it's only called once (for accurate testing!)
            if (pageNumber < 4 && !updatingScrollingContent) {
                updatingScrollingContent = YES;
                mainScrollView.contentSize = CGSizeMake(mainScrollView.contentSize.width + 3200,
                                                        mainScrollView.contentSize.height);
                mainScrollView.contentOffset = CGPointMake(mainScrollView.contentOffset.x + 3200, 0);
                for (UIView *view in [mainContainerView subviews]) {
                    view.frame = CGRectMake(view.frame.origin.x + 3200, view.frame.origin.y,
                                            view.frame.size.width, view.frame.size.height);
                }
                // add new data here at (0, 0);
            }

            // ** MY CHECK!
            NSLog(@"%f", mainScrollView.contentOffset.x);
        }

    As the scrolling happens, the log reads:

        1286.500000
        1285.500000
        1284.500000
        1283.500000
        1282.500000
        1281.500000
        1280.500000

    Then, when pageNumber < 4 (we're getting near the beginning):

        4479.500000
        4479.500000

    Great! But the numbers should continue to go down in the 4,000s; instead, the next log entries read:

        1278.000000
        1277.000000
        1276.500000
        1275.500000
        etc....

    Continuing from where it left off! Just for the record, if scrolled slowly the log reads:

        1294.500000
        1290.000000
        1284.500000
        1280.500000
        4476.000000
        4476.000000
        4473.000000
        4470.000000
        4467.500000
        4464.000000
        4460.500000
        4457.500000
        etc....

    Any ideas???? Thanks, Ben.

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer; I recompress it in a format that's suitable for a latent internet connection, receive it on the other end, and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or the like, no slow connections over phone lines and such. The involved computers will also be rather modern (dual-core Intel CPUs at 2 GHz or more).

    I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are:

    - I'd like to get good audio quality across the line.
    - The stream should be received without drops. The stream may, however, be received with a little delay (a one-second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops.
    - If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take action (e.g. abort the stream) and eventually start a new transmission.

    What are my options if I want to use ready-made software for the compression and transmission? I have no intention of writing my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s.

    I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16-bit stereo), along with a timing code, over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed).

    BTW, I'll first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.
