Search Results

Search found 6078 results on 244 pages for 'processing'.


  • A very basic issue with routes in ruby

    - by Haris
    I am new to Ruby, and while creating a sample application I found that whenever I go to http://127.0.0.1:3000/people/index, the show action is executed by default and "index" is taken as a parameter. This is the server log: Started GET "/people/index" for 127.0.0.1 at 2010-12-23 18:43:01 +0500 Processing by PeopleController#show as HTML Parameters: {"id"=>"index"} I have this in my routes file: root :to => "people#index" resources :people match ':controller(/:action(/:id(.:format)))' What is going on here, and how can I fix the issue?
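
    A likely explanation, sketched below: resources :people generates GET /people/:id for the show action, so /people/index matches show with id "index" before the catch-all route is ever consulted. The usual fix is to link to /people (which resources already maps to people#index) and keep, or drop, the legacy catch-all at the very end. A minimal routes.rb sketch, assuming Rails 3 routing as in the log; the application module name is an assumption:

      # config/routes.rb -- minimal sketch
      YourApp::Application.routes.draw do
        root :to => "people#index"
        resources :people          # GET /people      => people#index
                                   # GET /people/:id  => people#show
        # match ':controller(/:action(/:id(.:format)))'  # legacy catch-all, keep last if needed
      end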

    Read the article

  • Kill a Perl system call after a timeout

    - by Fergal
    I've got a Perl script I'm using to run a file-processing tool, which is started using backticks. The problem is that occasionally the tool hangs and needs to be killed so the rest of the files can be processed. What's the best way to apply a timeout after which the parent script will kill the hung process? At the moment I'm using: foreach $file (@FILES) { $runResult = `mytool $file >> $file.log`; } When mytool hangs, I'd like to be able to kill it after n seconds and continue to the next file.
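
    One common approach, sketched below under an assumed 60-second limit: fork the tool yourself so you hold its PID, wrap waitpid in alarm, and kill the child if the alarm fires. (A plain alarm around backticks only interrupts the parent; the hung tool itself would keep running.)

      foreach my $file (@FILES) {
          my $pid = fork();
          die "fork failed: $!" unless defined $pid;
          if ($pid == 0) {                       # child: run the tool via the shell
              exec("mytool $file >> $file.log") or exit 1;
          }
          eval {
              local $SIG{ALRM} = sub { die "timeout\n" };
              alarm(60);                         # assumed 60-second limit
              waitpid($pid, 0);
              alarm(0);
          };
          if ($@ && $@ eq "timeout\n") {
              kill 'KILL', $pid;                 # stop the hung tool
              waitpid($pid, 0);                  # reap it, then move on to the next file
          }
      }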

    Read the article

  • Problem with Sphinx resultset larger than 16 MB in MySQL

    - by gmemon
    Hello all, I am accessing a large indexed text dataset using SphinxSE via MySQL. The size of the result set is on the order of gigabytes. However, I have noticed that MySQL stops the query with the following error whenever the result set is larger than 16 MB: 1430 (HY000): There was a problem processing the query on the foreign data source. Data source error: bad searchd response length (length=16777523). The length shows the size of the result set that offended MySQL. I have tried the same query with Sphinx's standalone search program and it works fine. I have tried all possible variables in both MySQL and Sphinx, but nothing is helping. I am using Sphinx 0.9.9-rc2 and MySQL 5.1.46. Thanks

    Read the article

  • Build a JavaScript wrapper for a rails-generated XML API?

    - by Thor Thurn
    I am working with a large website written in Ruby on Rails. Thanks to the support for REST in Rails 2, the site's business logic is all accessible via a consistent XML API. Now I want to be able to easily write one or more JavaScript frontends to the site that interact with the generated Rails XML API. Ideally, an automated wrapper for the API could be created in JavaScript, since this would minimize the effort required in writing XML processing code for the more than 500 API functions. How, then, can I automatically generate a wrapper around a given XML API in JavaScript so that it's more pleasant to work with? I've worked with solutions of this nature for Java that generate classes and methods to wrap an API, so my current thinking is that I want something of that nature for JavaScript. I'd be open to an alternative take on the problem, though.
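
    One way to avoid hand-writing hundreds of wrappers, sketched below: since Rails resources follow a uniform URL convention, a small factory can generate the same handful of methods for every resource name. The resource names, the parse helper, and the use of fetch/DOMParser are assumptions for illustration, not part of the site's actual API.

      // Minimal sketch: one generated wrapper per Rails resource, relying on the
      // conventional /<resource>.xml and /<resource>/<id>.xml routes.
      function makeResource(name) {
        var parse = function (text) {
          return new DOMParser().parseFromString(text, "application/xml");
        };
        return {
          index: function () {
            return fetch("/" + name + ".xml").then(function (r) { return r.text(); }).then(parse);
          },
          show: function (id) {
            return fetch("/" + name + "/" + id + ".xml").then(function (r) { return r.text(); }).then(parse);
          },
          create: function (xmlBody) {
            return fetch("/" + name + ".xml", {
              method: "POST",
              headers: { "Content-Type": "application/xml" },
              body: xmlBody
            }).then(function (r) { return r.text(); }).then(parse);
          },
          destroy: function (id) {
            return fetch("/" + name + "/" + id + ".xml", { method: "DELETE" });
          }
        };
      }

      var api = {};
      ["people", "orders"].forEach(function (n) { api[n] = makeResource(n); });  // hypothetical resources
      // api.people.show(42).then(function (doc) { console.log(doc.documentElement.nodeName); });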

    Read the article

  • Which design pattern should I be using?

    - by Gabriel
    Here's briefly what I'm trying to do. The user supplies me with a link to a photo from one of several photo-sharing websites (such as Flickr, Zooomr, et al.). I then do some processing on the photo using their respective APIs. Right now I'm only implementing one service, but I will most likely add more in the near future. I don't want a bunch of if/else or switch statements to define the logic for the different websites (but maybe that's necessary?). I'd rather just call GetImage(url) and have it get me the image from whatever service the url's domain belongs to. I'm confused about how the GetImage function and classes should be designed. Maybe I need the strategy pattern? I'm still reading and trying to understand the various design patterns and how I could make one fit in this case. I'm doing this in C#, but this question is language-agnostic.
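
    One shape this often takes, sketched below: a strategy interface per photo service plus a small registry keyed on the URL's host, so GetImage(url) stays a single call and new services are added by registering one more entry. The interface members and the service names here are assumptions for illustration.

      // Minimal sketch of strategy-style dispatch keyed on the URL's host.
      using System;
      using System.Collections.Generic;

      public interface IPhotoService
      {
          byte[] GetImage(Uri url);
      }

      public class FlickrService : IPhotoService
      {
          public byte[] GetImage(Uri url) { /* call the Flickr API here */ return new byte[0]; }
      }

      public static class PhotoServiceFactory
      {
          private static readonly Dictionary<string, IPhotoService> Services =
              new Dictionary<string, IPhotoService>(StringComparer.OrdinalIgnoreCase)
              {
                  { "www.flickr.com", new FlickrService() },
                  // register more services here as they are implemented
              };

          public static byte[] GetImage(string url)
          {
              var uri = new Uri(url);
              IPhotoService service;
              if (!Services.TryGetValue(uri.Host, out service))
                  throw new NotSupportedException("No service registered for " + uri.Host);
              return service.GetImage(uri);
          }
      }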

    Read the article

  • URLs with query stripped of ampersands appearing in error logs

    - by Jeremy DeGroot
    I've noticed a curious phenomenon popping up in my error logs recently. If, as the result of processing a form, I redirect my users to the URL http://www.example.com/index.php?foo=bar&bar=baz, I will see the following two URLs in my log: http://www.example.com/index.php?foo=barbar=baz http://www.example.com/index.php?foo=bar&bar=baz The first one is obviously incorrect and causes my application to redirect to a 404. It always appears first, usually a second before the second one. The 404 page is not doing the redirection, so it appears that the browser is trying both versions. At first, looking at my server logs made me believe it affected only Firefox 3.6.3, but I've found an example of Safari being afflicted as well. It happens fairly intermittently, though it can occur multiple times in a user's session. I've never been able to get it to happen to me. Any thoughts as to the nature of the problem or a solution?

    Read the article

  • Javascript: replacing newlines with <br/> working in FF and SAFARI and not working in IE

    - by Daniel
    I was thinking that replacing \n with <br/> using JavaScript was quite a simple task, but it seems not to be so. Posts on Ask Ben or StackOverflow suggest that something as simple as: dst = dst.replace(/\n/g, "<br/>"); $("div.descr").html(dst); will get the job done. Indeed, this works in FF and Safari but not in IE. The text was created in a textarea, stored in a database, and then retrieved without further processing. It works using FF on Windows and Safari on Mac; IE on Windows, nada. Is it a major bug in my head? Is it a jQuery issue? Any ideas about how to solve this, and a possible reason for it? Many thanks
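
    A likely culprit, sketched below: text saved from a textarea on Windows usually contains \r\n pairs, and older IE keeps the carriage returns, so a bare /\n/g replace leaves strays behind. Normalising all three newline forms is a common fix; the variable name is assumed to match the snippet above.

      // Minimal sketch, assuming dst holds the text retrieved from the database.
      dst = dst.replace(/\r\n|\r|\n/g, "<br/>");   // handle \r\n (Windows), \r, and \n
      $("div.descr").html(dst);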

    Read the article

  • Need help with this Regex + UrlRewriter.NET please :)

    - by Pure.Krome
    Previously, on StackOverflow (summarized): I need to capture all requests for a particular subdomain and rewrite their destination. The trick to determining the host via regex was solved. Now I need to make sure all requests to the root index page are rewritten, but I can't figure out the correct regex to match the 'homepage' / website root. This is what I have: <if header="HTTP_HOST" match="^foo\.mydomain\.com\.au(?::\d+)?/?$"> <!-- snip some other rewrites, e.g. /buying/product -> ~/Pages/Foo/Bar.aspx --> <rewrite url="^/$" to="~/Pages/SomeWeirdFolder/Home.aspx" processing="stop"/> </if> If one of the rewrites is not found, it falls through and continues. So, can anyone please help?

    Read the article

  • How to generate makefile targets from variables?

    - by Ketil
    I currently have a makefile to process some data. The makefile gets the inputs to the data processing by sourcing a CONFIG file, which defines the input data in a variable. Currently, I symlink the input files to a local directory, i.e. the makefile contains: tmp/%.txt: tmp ln -fs $(shell echo $(INPUTS) | tr ' ' '\n' | grep $(patsubst tmp/%,%,$@)) $@ This is not terribly elegant, but appears to work. Is there a better way? Basically, given INPUTS = /foo/bar.txt /zot/snarf.txt I would like to be able to have e.g. %.out: %.txt some command, as well as targets to merge results that depend on all $(INPUTS) files. Also, apart from the kludgosity, the makefile doesn't work correctly with -j, something that is crucial for the analysis to complete in reasonable time. I guess that's a bug in GNU make, but any hints welcome.
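
    One symlink-free approach, sketched below: derive the output names from INPUTS with notdir/basename and let vpath tell make where each source lives, so pattern rules and -j both work off real prerequisites. The merged.out target name is an assumption, and "some command" is the placeholder from the question; recipe lines must start with a tab.

      # Minimal sketch (GNU make)
      INPUTS  = /foo/bar.txt /zot/snarf.txt
      OUTPUTS = $(addsuffix .out,$(basename $(notdir $(INPUTS))))

      vpath %.txt $(sort $(dir $(INPUTS)))      # find each source in its own directory

      all: merged.out

      %.out: %.txt
      	some command $< > $@

      merged.out: $(OUTPUTS)                    # assumed merge target over all results
      	cat $^ > $@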

    Read the article

  • How do I create a downscaled copy of an FBO in OpenGL?

    - by Jasper Bekkers
    Hi, in order to speed up some post-processing shaders I'm using, I need to perform these operations on a framebuffer that is smaller than the actual window (about 1/4th or more). Most of the effects I want to optimize are simple blurring operations that could be replaced (for a large part) by a smaller kernel and bilinear filtering. Thus, I need to copy the current FBO into another one. However, I couldn't find anything that works on how to do this. I've tried using glBlitframebufferEXT and rendering a fullscreen quad into the other framebuffer, but both paths result in a black texture as output. How do I go about solving this problem?
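
    For reference, a minimal blit sketch, assuming GL 3.0 / EXT_framebuffer_blit, two complete FBOs named srcFbo and dstFbo, and a destination roughly a quarter of the source size; a black result often means the read/draw bindings are swapped or the source attachment is not the one being read.

      /* Minimal sketch: downscale-copy one FBO into another with a bilinear blit. */
      glBindFramebuffer(GL_READ_FRAMEBUFFER, srcFbo);    /* source of the copy        */
      glBindFramebuffer(GL_DRAW_FRAMEBUFFER, dstFbo);    /* smaller destination       */
      glBlitFramebuffer(0, 0, srcWidth, srcHeight,       /* full source rectangle     */
                        0, 0, dstWidth, dstHeight,       /* shrunken destination rect */
                        GL_COLOR_BUFFER_BIT, GL_LINEAR); /* bilinear downsample       */
      glBindFramebuffer(GL_FRAMEBUFFER, 0);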

    Read the article

  • ASP.NET MVC: post-redirect-get pattern, with only two overloaded action methods

    - by Rafi
    Is it possible to implement the post-redirect-get pattern with two overloaded action methods (one for the GET action and the other for the POST action) in ASP.NET MVC? In all of the MVC post-redirect-get samples I have seen, there are three different action methods for the post-redirect-get process, each with a different name. Is this really required? For example, does the code shown below follow the Post-Redirect-Get pattern? public class SalaryTransferController : Controller { // // GET: /SalaryTransfer/ [HttpGet] public ActionResult Index(int id) { SalaryTransferIndexViewModel vm = new SalaryTransferIndexViewModel(id) { SelectedDivision = DivisionEnum.Contracting }; //Do some processing here return View(vm); } // // POST: /SalaryTransfer/ [HttpPost] public ActionResult Index(SalaryTransferIndexViewModel vm) { bool validationsuccess = false; //validate if (validationsuccess) return RedirectToAction("Index", new {id=1234 }); else return View(vm); } } Thank you for your responses.

    Read the article

  • How to open a large text file in C#

    - by desmati
    I have a text file that contains about 100,000 articles. The structure of the file is: BEGIN OF FILE .Document ID 42944-YEAR:5 .Date 03\08\11 .Cat political Article Content 1 .Document ID 42945-YEAR:5 .Date 03\08\11 .Cat political Article Content 2 END OF FILE I want to open this file in C# and process it line by line. I tried this code: String[] FileLines = File.ReadAllText(TB_SourceFile.Text).Split(Environment.NewLine.ToCharArray()); but it throws: Exception of type 'System.OutOfMemoryException' was thrown. The question is: how can I open this file and read it line by line? File size: 564 MB (591,886,626 bytes). File encoding: UTF-8. The file contains Unicode characters.
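
    A minimal sketch of the streaming alternative: File.ReadLines (available from .NET 4) yields one line at a time instead of materialising the whole 564 MB string, so memory stays flat. The hard-coded path and the .Document ID check are assumptions for illustration; in the question's code the path would come from TB_SourceFile.Text.

      // Minimal sketch: stream the file line by line rather than loading it at once.
      using System;
      using System.IO;

      class ArticleReader
      {
          static void Main()
          {
              string path = @"C:\data\articles.txt";          // e.g. TB_SourceFile.Text
              foreach (string line in File.ReadLines(path))    // reads UTF-8 by default
              {
                  if (line.StartsWith(".Document ID"))
                  {
                      // a new article record starts here
                  }
                  // process the current line
              }
          }
      }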

    Read the article

  • postgres hstore with composite value type, or a better way of attributing an inverted index

    - by Hassan Syed
    I can't seem to figure out the syntax for populating an hstore with a value of composite type -- note: I do not want to convert a record to an hstore. select hstore('hello => ROW(1,2)'); I know it's something simple; however, Google is not my friend today. Use case: a custom inverted index. The data models an inverted index of lexemes; the composite data types are various probabilities related to the lexemes, which I will use to implement document clustering. Does anyone know a better way of doing this? I'm open to using an external system if it allows attaching attributes to key-posting pairs in the inverted index. I'd use something external if it had solid support for what I am trying to do; I suspect that sticking 3-10k lexemes per tuple and then doing batch processing on them is gonna be nasty, as the whole hstore will have to be parsed and converted.
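
    For reference, a minimal sketch of the syntax side: hstore values are plain text, so the composite has to be cast to text going in and back to the row type coming out. The lexeme_stats type name and its fields are assumptions for illustration.

      -- Minimal sketch (PostgreSQL with the hstore extension installed).
      CREATE TYPE lexeme_stats AS (tf real, idf real);   -- assumed composite type

      -- populate: cast the composite to text
      SELECT hstore('hello', ROW(0.1, 2.5)::lexeme_stats::text);
      --  "hello"=>"(0.1,2.5)"

      -- read back: cast the text value to the composite and pick a field
      SELECT (('"hello"=>"(0.1,2.5)"'::hstore -> 'hello')::lexeme_stats).tf;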

    Read the article

  • Design Pattern for Server Emulator

    - by adisembiring
    I want to build a server socket emulator, and I want to apply some design patterns to it. Here is my case study, simplified: the server socket always listens for client sockets. When a request message comes in from a client socket, the server emulator responds to the client through the socket with a response code: '00' means the request message was processed successfully, and any response code other than '00' means there was an error while processing the request. The server also has a UI containing response parameters such as the response code and a timeout interval. When the server responds to a client message, the response code is taken from the UI input, and the timeout interval from the UI is used to put the thread to sleep before responding. I have implemented this, but everything is in one class and it feels sloppy. Can you suggest which classes / interfaces I should create to refactor my code?

    Read the article

  • How do I watch a file for changes using Python?

    - by Jon Cage
    I have a log file being written by another process which I want to watch for changes. Each time a change occurs, I'd like to read the new data in and do some processing on it. What's the best way to do this? I was hoping there'd be some sort of hook in the PyWin32 library. I've found the win32file.FindNextChangeNotification function but have no idea how to ask it to watch a specific file. If anyone's done anything like this I'd be really grateful to hear how... [Edit] I should have mentioned that I was after a solution that doesn't require polling. [Edit] Curses! It seems this doesn't work over a mapped network drive. I'm guessing Windows doesn't 'hear' any updates to the file the way it does on a local disk.
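
    One non-polling approach on a local disk, sketched below: watch the file's directory with win32file.ReadDirectoryChangesW and filter the notifications to the log's name (directory change notifications report which entry changed, unlike FindNextChangeNotification). The directory, file name, and buffer size are assumptions; as the edit above notes, this generally will not fire for files on a mapped network drive.

      # Minimal sketch (pywin32), assuming a local C:\logs\app.log.
      import os
      import win32con
      import win32file

      FILE_LIST_DIRECTORY = 0x0001          # access right needed to watch a directory
      watch_dir = r"C:\logs"                # assumed
      target = "app.log"                    # assumed

      dir_handle = win32file.CreateFile(
          watch_dir,
          FILE_LIST_DIRECTORY,
          win32con.FILE_SHARE_READ | win32con.FILE_SHARE_WRITE | win32con.FILE_SHARE_DELETE,
          None,
          win32con.OPEN_EXISTING,
          win32con.FILE_FLAG_BACKUP_SEMANTICS,
          None,
      )

      with open(os.path.join(watch_dir, target)) as log:
          log.seek(0, os.SEEK_END)          # only new data is interesting
          while True:
              # blocks until something in the directory changes
              changes = win32file.ReadDirectoryChangesW(
                  dir_handle, 1024, False,
                  win32con.FILE_NOTIFY_CHANGE_LAST_WRITE | win32con.FILE_NOTIFY_CHANGE_SIZE,
                  None, None)
              if any(name == target for _, name in changes):
                  new_data = log.read()
                  # ... process new_data here ...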

    Read the article

  • bash: flushing stdin (standard input)

    - by rahul
    I have a bash script that gets some input on stdin. After processing, I copy a file using "-i" (interactive). However, the interactive prompt never executes since (I guess) standard input has not been flushed. To simplify with an example: #!/bin/bash while read line do echo $line done # the next line does not execute read -p "y/n" x echo "got $x" Place this in t.sh and execute with: ls | ./t.sh The read is not executed. I need to flush stdin before the read. How can I do this?
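
    One common fix, sketched below: the pipe from ls is the script's stdin, so by the time the prompt runs it sees end-of-file rather than a terminal; reading the interactive answer explicitly from /dev/tty sidesteps that.

      #!/bin/bash
      # Minimal sketch: same loop as above, but the prompt reads from the
      # controlling terminal instead of the (exhausted) pipe on stdin.
      while read -r line
      do
          echo "$line"
      done

      read -p "y/n " x < /dev/tty
      echo "got $x"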

    Read the article

  • Any tips of how to handle hierarchical trees in relational model?

    - by George
    Hello all. I have a tree structure that can be n levels deep, without restriction; each node can have another n nodes. What is the best way to retrieve a tree like that without issuing thousands of queries to the database? I looked at a few other models, such as the flat table model, the preorder tree traversal algorithm, and so on. Do you have any tips or suggestions on how to implement an efficient tree model? My end goal is to have one or two queries that would return the whole tree for me. With enough processing I can display the tree in .NET, but that would be on the client machine, so not much of a big deal. Thanks for the attention.
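
    If the database supports recursive common table expressions, a single query can return the whole tree from a plain adjacency list; a minimal sketch follows (table and column names are assumptions, and the RECURSIVE keyword is omitted on SQL Server).

      -- Minimal sketch: adjacency list plus one recursive query for the whole tree.
      CREATE TABLE nodes (
          id        INT PRIMARY KEY,
          parent_id INT NULL REFERENCES nodes(id),
          name      VARCHAR(100)
      );

      WITH RECURSIVE tree AS (
          SELECT id, parent_id, name, 0 AS depth
          FROM   nodes
          WHERE  parent_id IS NULL                -- the roots
          UNION ALL
          SELECT n.id, n.parent_id, n.name, t.depth + 1
          FROM   nodes n
          JOIN   tree t ON n.parent_id = t.id     -- walk down one level at a time
      )
      SELECT * FROM tree;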

    Read the article

  • Processes sharing cores on Ubuntu system

    - by muckabout
    My coworkers and I share an 8-core server running Ubuntu for our batch processes. I tend to run 4 processes at a time, each of which consumes 100% of a core when nothing else is running. When a coworker runs his processes (typically about 4 at a time), his also get 100% per core. However, when both of us run ours (he always goes first), his still get 100% while mine seem to divide the remaining processing power and linger in the 10-40% range. I even reniced his process to a lower value and it did not change. What are the issues that may cause this?

    Read the article

  • c# - pull records from database without timeout

    - by BhejaFry
    Hi folks, I have a SQL query with multiple joins that pulls data from a database for processing. It is supposed to run on a schedule, so on day 1 it might pull 500 records and on day 2 say 400. Now, if the service is stopped for some reason and the data is not processed, then on day 3 there could be as many as 1000 records to process. This is causing a timeout on the SQL query. What is the best way to handle this situation without hitting the timeout, gradually working through the backlog? TIA
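
    One way this is often handled, sketched below in C#: pull the backlog in fixed-size batches and mark rows as processed, so the query cost stays roughly constant no matter how many days were missed. The table and column names, batch size, and connection string are all assumptions for illustration.

      // Minimal sketch: drain the backlog in small batches instead of one big pull.
      using System;
      using System.Data.SqlClient;

      class BacklogProcessor
      {
          const int BatchSize = 500;                         // assumed batch size
          static readonly string ConnectionString = "...";   // assumed connection string

          static void Main()
          {
              while (true)
              {
                  int handled = 0;
                  using (var conn = new SqlConnection(ConnectionString))
                  using (var cmd = new SqlCommand(
                      "SELECT TOP (@batch) Id, Payload FROM Records " +
                      "WHERE Processed = 0 ORDER BY Id", conn))
                  {
                      cmd.Parameters.AddWithValue("@batch", BatchSize);
                      cmd.CommandTimeout = 120;              // seconds; a safety net, not the fix
                      conn.Open();
                      using (var reader = cmd.ExecuteReader())
                      {
                          while (reader.Read())
                          {
                              // process the row, then UPDATE ... SET Processed = 1 (not shown)
                              handled++;
                          }
                      }
                  }
                  if (handled == 0) break;                   // backlog drained
              }
          }
      }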

    Read the article

  • call multiple c++ functions in python using threads

    - by wiso
    Suppose I have a C(++) function taking an integer, and it is bound to (C)Python with the Python API so I can call it from Python: import c_module c_module.f(10) Now I want to parallelize it. The problem is: how does the GIL work in this case? Suppose I have a queue of numbers to be processed and some workers (threading.Thread) running in parallel, each of them calling c_module.f(number), where number is taken from the queue. The difference from the usual case, where the GIL locks the interpreter, is that you don't need the interpreter to evaluate c_module.f because it is compiled. So the question is: is the processing really parallel in this case?
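
    The short answer depends on the extension itself: the threads only run the compiled code in parallel if the binding releases the GIL around it. A minimal sketch of the C side, where do_expensive_work is a hypothetical pure-C routine that makes no Python API calls:

      /* Minimal sketch of the binding behind c_module.f. */
      #include <Python.h>

      static void do_expensive_work(int n);    /* hypothetical pure-C work */

      static PyObject *c_module_f(PyObject *self, PyObject *args)
      {
          int n;
          if (!PyArg_ParseTuple(args, "i", &n))
              return NULL;

          Py_BEGIN_ALLOW_THREADS          /* release the GIL: other threads may run f too */
          do_expensive_work(n);
          Py_END_ALLOW_THREADS            /* re-acquire before touching Python objects    */

          Py_RETURN_NONE;
      }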

    Read the article

  • PDF text search and split library

    - by Horace Ho
    I am looking for a server-side PDF library (or command-line tool) which can split a multi-page PDF file into individual PDF files, based on a search of the PDF file's content. Example: search for the "Page ???" pattern in the text and split the big PDF into 001.pdf, 002.pdf, ... ???.pdf. A server program will scan the PDF, look for the search pattern, save the page(s) which match the pattern, and write the file to disk. Integration with PHP / Ruby would be nice; a command-line tool is also acceptable. It will be a server-side (Linux or Win32) batch-processing tool, with no GUI or login involved. i18n support would be nice but is not required. Thanks~
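
    For the command-line route, a minimal sketch with poppler-utils (pdfinfo, pdftotext) and pdftk, assuming the "Page ???" marker appears on the first page of each logical document; it writes one numbered PDF per matching page, and grouping consecutive pages into ranges is left out for brevity. The file names are assumptions.

      #!/bin/bash
      # Minimal sketch: scan page by page, emit one output per page whose text
      # matches the "Page NNN" marker.
      in=big.pdf                                        # assumed input file
      pages=$(pdfinfo "$in" | awk '/^Pages:/ {print $2}')

      n=1
      for ((p = 1; p <= pages; p++)); do
          if pdftotext -f "$p" -l "$p" "$in" - | grep -qE 'Page [0-9]{3}'; then
              printf -v out '%03d.pdf' "$n"
              pdftk "$in" cat "$p" output "$out"        # cut that page out
              n=$((n + 1))
          fi
      done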

    Read the article

  • How to leverage Spring Integration in a real-world JMS distributed architecture?

    - by ngeek
    For the following scenario I am looking for your advice and tips on best practices. In a distributed (mainly Java-based) system with: many (different) client applications (web app, command-line tools, REST API); a central JMS message broker (currently leaning towards ActiveMQ); and multiple stand-alone processing nodes (running on multiple remote machines, computing expensive operations of different types as specified by the JMS message payload), how would one best apply the JMS support provided by the Spring Integration framework to decouple the clients from the worker nodes? After reading through the reference documentation and running some first experiments, it looks like the configuration of a JMS inbound adapter inherently requires a subscriber, which does not exist in a decoupled scenario. Small side note: communication should happen via JMS text messages (using a JSON data structure for future extensibility).
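
    One configuration shape this often takes, sketched below with the Spring Integration JMS namespace: clients talk to a plain gateway interface backed by a jms:outbound-gateway, workers pick requests up through a jms:inbound-gateway on the same queue, and neither side knows about the other. The destination names, channel ids, and the WorkGateway interface are assumptions for illustration.

      <!-- client side: application code calls WorkGateway, requests go to the broker -->
      <int:channel id="toBroker"/>
      <int:gateway id="workGateway"
                   service-interface="com.example.WorkGateway"
                   default-request-channel="toBroker"/>
      <int-jms:outbound-gateway request-channel="toBroker"
                                request-destination-name="work.requests"
                                reply-destination-name="work.replies"
                                connection-factory="connectionFactory"/>

      <!-- worker side: consume requests from the same queue, hand off to a bean -->
      <int:channel id="fromBroker"/>
      <int-jms:inbound-gateway request-destination-name="work.requests"
                               request-channel="fromBroker"
                               connection-factory="connectionFactory"
                               concurrent-consumers="4"/>
      <int:service-activator input-channel="fromBroker" ref="workProcessor"/>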

    Read the article

  • How to implement buffering with timeout in RX

    - by Gaspar Nagy
    I need to implement event processing that is deferred until no new events have arrived for a certain period. (I have to queue up a parsing task when the text buffer changes, but I don't want to start parsing while the user is still typing.) I'm new to Rx, but as far as I can see, I would need a combination of the BufferWithTime and Timeout methods. I imagine it working like this: events are buffered as long as they keep arriving within a specified time of the previous event. If there is a gap in the event flow longer than that timespan, it should propagate the events buffered so far. Having looked at how Buffer and Timeout are implemented, I could probably write my own BufferWithTimeout method (if anyone has one, please share it with me), but I wonder if this can be achieved just by combining the existing methods. Any ideas?
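
    In current Rx naming (Buffer and Throttle rather than the older BufferWithTime), the quiet-gap behaviour falls out of using Throttle as the buffer-closing signal; a minimal sketch, where the 500 ms gap, the Subject source, and the console output are assumptions for illustration:

      // Minimal sketch: each buffer closes once the source has been quiet for the
      // given gap, so nothing is emitted while the user is still typing.
      using System;
      using System.Collections.Generic;
      using System.Reactive.Linq;
      using System.Reactive.Subjects;

      var textChanges = new Subject<string>();            // push buffer-changed events here
      var quietGap = TimeSpan.FromMilliseconds(500);       // assumed idle period

      IObservable<IList<string>> batches =
          textChanges.Buffer(() => textChanges.Throttle(quietGap));

      batches.Where(b => b.Count > 0)
             .Subscribe(b => Console.WriteLine($"queue parse for {b.Count} change(s)"));

      // textChanges.OnNext(currentText);  // call from the editor's TextChanged handler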

    Read the article

  • Adding/removing session variables on Page OnInit/OnLoad in C#

    - by MKS
    Hi guys, I am using C# and have the code below: protected override void OnInit(EventArgs e) { try { if (Session["boolSignOn"].ToString() == "true".ToString()) { lblPanelOpen.Text = Session["panelOpen"].ToString(); } else { lblPanelOpen.Text = Session["panelOpen"].ToString(); } } catch (Exception ex) { Logger.Error("Error processing request:" + ex.Message); } } protected override void OnLoad(EventArgs e) { try { if (!string.IsNullOrEmpty(Session["panelOpen"].ToString())) { lblPanelOpen.Text = string.Empty; Session.Remove("panelOpen"); } } catch (Exception ex) { Logger.Error("Unable to remove the session variable:" + ex.Message); } } In the code above, the Session["panelOpen"] variable is created in another user control. When my page renders, I store Session["panelOpen"] in my hidden lblPanelOpen.Text in the page's OnInit() method; then, once the page has loaded completely, I try to remove the session variable. Please suggest!

    Read the article

  • Best way to use SFTP folder as concurrent work queue

    - by Gabe Moothart
    I am writing a C# Windows service that will poll an SFTP folder for new files (one file = one job) and process them. Multiple instances of the service may be running at the same time, so it is important that they do not step on each other. I realize that an SFTP folder does not make an ideal queue, but that's what I have to work with. What do I need to do to either use this SFTP folder as a concurrent message queue, or safely represent it in a way that can be used concurrently?
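
    One pattern that keeps multiple pollers from stepping on each other, sketched below: each instance atomically renames a file to a per-instance "claimed" name before touching it, and whoever loses the rename race simply skips the file. ISftpClient here is a hypothetical wrapper over whatever SFTP library is in use, not a real API; only the operations shown are assumed, and the rename must be atomic on the server.

      // Minimal sketch of the claim-by-rename pattern for a shared SFTP inbox.
      using System;
      using System.Collections.Generic;

      public interface ISftpClient
      {
          IEnumerable<string> ListFiles(string directory);
          void Rename(string oldPath, string newPath);   // assumed atomic on the server
          byte[] Download(string path);
          void Delete(string path);
      }

      public class SftpJobPoller
      {
          private readonly ISftpClient client;
          private readonly string instanceId = Guid.NewGuid().ToString("N");

          public SftpJobPoller(ISftpClient client) { this.client = client; }

          public void PollOnce(string inbox)
          {
              foreach (var file in client.ListFiles(inbox))
              {
                  if (file.Contains(".claimed.")) continue;        // another instance owns it
                  var claimed = file + ".claimed." + instanceId;
                  try { client.Rename(file, claimed); }
                  catch { continue; }                              // lost the race; skip
                  var payload = client.Download(claimed);
                  // ... process payload ...
                  client.Delete(claimed);
              }
          }
      }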

    Read the article
