Search Results

Search found 6679 results on 268 pages for 'cube processing'.

Page 25 of 268

  • Mono and Apache are serving files with no ASP.NET processing

    - by dnord
    On a new Rackspace Cloud Server box (Ubuntu 9.10), I've installed apache2, libapache2-mod-mono, and mod-mono-server2. I've disabled mod_mono and enabled mod_mono_auto, but whatever I do, requests for Default.aspx return the actual contents of Default.aspx (in this case, "This is a marker file generated by the precompilation tool, and should not be deleted!"). I've installed XSP, and it looks like it works okay, but I'd like to use Apache with mod_mono (it seems a more common configuration) if I can get it running. However, there are no error messages and no hints, and Google has not been terribly helpful. What else can I look for to make sure I'm configured correctly? How can I test further?
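
    An aside, not from the original question: when .aspx files come back verbatim it usually means Apache has no handler mapped for them, so the first thing to check is the mod_mono_auto configuration that the package ships. A minimal sketch of the relevant directives, from memory (treat the exact names as an assumption and verify them against the mod_mono documentation):

        LoadModule mono_module /usr/lib/apache2/modules/mod_mono.so
        # Hand ASP.NET extensions to Mono instead of serving them as static files
        AddType application/x-asp-net .aspx
        AddHandler mono .aspx .asmx .ashx .axd
        MonoAutoApplication enabled

    Apache's error log (/var/log/apache2/error.log on Ubuntu) and the mod-mono-server log are the quickest way to see whether the module is being invoked at all.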

    Read the article

  • ASP.NET MVC Controller parameter processing

    - by Leonardo
    In my application I have a string parameter called "shop" that is required in all controllers, but it needs to be transformed using code like this:

        shop = shop.Replace("-", " ").ToLower();

    How can I do this globally for all controllers without repeating this line over and over? Thanks, Leo
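
    One way to keep that line in a single place (a sketch, not from the original thread; the attribute name is invented) is an ASP.NET MVC action filter that rewrites the "shop" action parameter before the action method runs:

        using System.Web.Mvc;

        // Normalizes the "shop" parameter in one place for every action it is applied to.
        public class NormalizeShopAttribute : ActionFilterAttribute
        {
            public override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                object value;
                if (filterContext.ActionParameters.TryGetValue("shop", out value) && value is string)
                {
                    var shop = (string)value;
                    // Same transformation the question describes, applied once.
                    filterContext.ActionParameters["shop"] = shop.Replace("-", " ").ToLower();
                }
                base.OnActionExecuting(filterContext);
            }
        }

    Applied to a shared base controller, or registered via GlobalFilters.Filters on MVC 3 and later, this removes the Replace/ToLower call from the individual actions; a custom model binder for the parameter would be another route.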

    Read the article

  • Processing files with C# in folders whose names contain spaces

    - by Nigel Ainscoe
    There are plenty of C# samples that show how to manipulate files and directories, but they inevitably use folder paths that contain no spaces. In the real world I need to be able to process files in folders with names that contain spaces. I have written the code below which shows how I have solved the problem. However it doesn't seem to be very elegant and I wonder if anyone has a better way.

        class Program {
            static void Main(string[] args) {
                var dirPath = @args[0] + "\\";
                string[] myFiles = Directory.GetFiles(dirPath, "*txt");
                foreach (var oldFile in myFiles) {
                    string newFile = dirPath + "New " + Path.GetFileName(oldFile);
                    File.Move(oldFile, newFile);
                }
                Console.ReadKey();
            }
        }

    Regards, Nigel Ainscoe
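
    A side note, not from the thread: spaces in the folder name need no special handling as long as the whole path reaches args[0] as a single (quoted) command-line argument, so a slightly tidier sketch of the same rename could look like this:

        using System;
        using System.IO;

        class Program
        {
            static void Main(string[] args)
            {
                // Quote the path on the command line so "C:\My Folder" arrives as one argument.
                string dirPath = args[0];

                foreach (string oldFile in Directory.GetFiles(dirPath, "*.txt"))
                {
                    // Path.Combine inserts the directory separator, spaces and all.
                    string newFile = Path.Combine(dirPath, "New " + Path.GetFileName(oldFile));
                    File.Move(oldFile, newFile);
                }
            }
        }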

    Read the article

  • Jekyll - How to approach asset processing (minification, spriting...)

    - by Gromix
    I recently switched to Jekyll and I find the conversion pipeline works really well. However I'm stuck on which approach to take when the process is many inputs to one output (e.g. concatenating CSS files, creating image sprites...). I know several tools that can do it, callable either from the command line or directly from Ruby code, for example Jammit, CSS sprites, and Compass sprites. My current solution is a few Jekyll plugins that call these tools. However, it has the following problems:

    1. SASS files should be processed, then concatenated/minified. SASS-to-CSS is a Converter, and the concatenation is a Generator run on the output. Unfortunately generators are run first, which means the concatenation is always a step behind (I have to run the build twice).

    2. Jekyll does not know about the source/output relationship. With converters, when I run Jekyll in server mode, a change to a SASS file automatically triggers the conversion to CSS. For concatenation/spriting I haven't found a way to do the same, so I end up running a "normal" Jekyll build (not server auto) to update the concatenated files and sprites.

    Thanks for any ideas!

    Read the article

  • Out of the box approach to upload XML file to the BIRT Server for Processing

    - by Paul
    Hello, I have the BIRT Report Server configured in Tomcat and it works fine when running reports that require an XML datasource, but that XML file has to be available on the network in order for the server to find it and run. Is there an out-of-the-box configuration in the BIRT server that will prompt the user to upload the XML file directly to the server when they try to run a report that requires an XML data source? This would be handy for users who have the XML datasource stored locally on their C drive, so they would not have to move it to a network server in order for it to be read by BIRT. Thanks in advance. Paul

    Read the article

  • Pipeline For Downloading and Processing Files In Unix/Linux Environment With Perl

    - by neversaint
    I have a list of file URLs that I want to download:

        http://somedomain.com/foo1.gz
        http://somedomain.com/foo2.gz
        http://somedomain.com/foo3.gz

    What I want to do for each file is: download foo1, foo2, ... in parallel with wget and nohup, and every time a download completes, process the file with myscript.sh. What I have is this:

        #! /usr/bin/perl
        @files = glob("foo*.gz");
        foreach $file (@files) {
            my $downurls = "http://somedomain.com/".$file;
            system("nohup wget $file &");
            system("./myscript.sh $file >> output.txt");
        }

    The problem is that I can't tell in this pipeline when a file has finished downloading, so myscript.sh doesn't get executed properly. What's the right way to achieve this?

    Read the article

  • Complex Event Processing with C#

    - by Soham
    Hi all, can you suggest a possible way to get started with CEP in C#? By "get started" I mean: a good book on CEP and C#; a library that deals with event clouds; some sample code using that library; some good-quality code in general, to get a feel for the problems; and good blogs. Anything else you feel is worth adding for someone getting started with CEP and C# will be helpful. Thanks, Soham

    Read the article

  • Learn mp3 format and audio signal processing

    - by Shankhoneer Chakrovarty
    I am trying to learn the following things: (1) What does an mp3 file look like internally? I found http://mpgedit.org/mpgedit/mpeg_format/MP3Format.html but it seems old. Have there been any recent changes to the format? I couldn't find any. (2) How do I open an mp3 file in Java and look at the bytes? I tried using audiostream but I am getting a lot of zeros and signed short integers that don't resemble the header/body format described in the link above. Am I interpreting the bytes wrongly? (3) How do I get the amplitude, frequency and pitch of an mp3 file? I have no idea here; can you suggest a book or tutorial? Can you please help me with the questions above? I am sorry if some of them appear naive, I have only just begun to learn about mp3. Thanks

    Read the article

  • Processing image data in java me

    - by Trimack
    Hi, I want to process the raw data of a picture taken in Java ME with byte[] snap = videoControl.getSnapshot(encoding); My question is whether I should work directly with snap, or first create the image from this array with Image im = Image.createImage(snap, 0, snap.length); and work with that. Also, is there better documentation of the two methods (getSnapshot and createImage) than the Java API reference?

    Read the article

  • Issue with XSLT Processing on PHP

    - by monksy
    I'm getting a few errors from XSLTProcessor:

        XSLTProcessor::transformToDoc() [<a href='function.XSLTProcessor-transformToDoc'>function.XSLTProcessor-transformToDoc</a>]: Invalid or inclomplete context
        XSLTProcessor::transformToDoc() [<a href='function.XSLTProcessor-transformToDoc'>function.XSLTProcessor-transformToDoc</a>]:
        XSLTProcessor::transformToDoc() [<a href='function.XSLTProcessor-transformToDoc'>function.XSLTProcessor-transformToDoc</a>]: xsltValueOf: text copy failed in

    which occur while processing this XSLT line:

        <xsl:apply-templates select="page/sections/section" mode="subset"/>

    The matching template is:

        <xsl:template match="page/sections/section" mode="subset">
          <a href="#{shorttitle}">
            <xsl:value-of select="title"/>
          </a>
          <xsl:if test="position() != last()"> | </xsl:if>
        </xsl:template>

    The XML the template is applied to is:

        <shorttitle>About</shorttitle>
        <title>#~ About</title>

    The PHP XSLT code is:

        $xslt = new XSLTProcessor();
        $XSL = new DOMDocument();
        $XSL->load( $xsltFile, LIBXML_NOCDATA);
        $xslt->importStylesheet( $XSL );
        print $xslt->transformToXML( $XML );

    My suspicion is that the errors are due to the content. I'm not getting these errors with Firefox's XSLT rendering, nor am I getting an invalid XML document on the backend. I'm not getting errors on the load, only on the transformToXML call. Does anyone have a clue how to solve this? This is with PHP 5.

    Read the article

  • WPF calls not working during long method processing

    - by Colin Rouse
    Hi, the following method does not apply the WPF change (background = red) until the second method (DoWork) exits:

        private void change()
        {
            Background = Brushes.Red;
            Dispatcher.BeginInvoke((Action) DoWork);
        }

    DoWork() takes several seconds to run and I don't really want to put it on a thread, as this code will be used in several places and will probably interact with the Dispatcher thread at various intervals. I've tried calling the Invalidate...() methods, but to no avail. The BeginInvoke() was added to see if the delay would allow the background change to be applied before the logic was called; typically, the logic would be part of this method. By the way, most of the logic is performed on a different thread and shouldn't block the Dispatcher thread?! Can someone please help? Thanks
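
    A common workaround (a sketch, not from the original thread): queue DoWork at a dispatcher priority below rendering, so WPF gets to paint the red background before the long-running call ties up the UI thread.

        using System;
        using System.Windows;
        using System.Windows.Media;
        using System.Windows.Threading;

        public class DemoWindow : Window
        {
            private void Change()
            {
                Background = Brushes.Red;

                // Render work runs at DispatcherPriority.Render; queuing DoWork at
                // ApplicationIdle lets the brush change reach the screen first.
                Dispatcher.BeginInvoke(DispatcherPriority.ApplicationIdle, (Action)DoWork);
            }

            private void DoWork()
            {
                // Stand-in for the long-running logic from the question.
                System.Threading.Thread.Sleep(5000);
            }
        }

    If DoWork really must run on the dispatcher thread this only postpones the freeze; moving the work to a background thread or Task remains the more robust fix.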

    Read the article

  • PHP or Javascript or other - Draw simple shapes onto images?

    - by Tommo
    I basically have an image of a world map and I would like to place a pin image at a specified pixel coordinate on top of this world map image. It's for a website, so ideally the solution should be in PHP or JavaScript (I'm avoiding Java and Flash as I want it to be as simple as possible). I had a look at the processing.js library, but it is way too big and bloated for performing just this simple task. Is there a pre-existing JavaScript function which will allow me to do this, or a simpler JavaScript library that I can use? (processing.js was a bit too advanced for me; I couldn't get it working.) As for a PHP solution, I would prefer to push the work off the server and onto the client for this task, but I would still like to hear PHP methods if they are suitable. Thanks!

    Read the article

  • How to create processing events for an array of TextBox [closed]

    - by ScarX
    I create an array:

        TextBox[] textarray = new TextBox[100];

    Then, in a loop, I set its parameters; all the array items are placed in uniformGrid1:

        textarray[i] = new TextBox();
        textarray[i].Height = 30;
        textarray[i].Width = 50;
        uniformGrid1.Children.Add(textarray[i]);

    How can I create Click or DoubleClick events for all the items of the array? Sorry for my English.
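
    A sketch of one way to do it (assuming WPF, since a UniformGrid is used; the class and handler names are invented): wire a shared handler to each box inside the same loop and use sender to tell the boxes apart. WPF's TextBox has no Click event, so PreviewMouseDown and MouseDoubleClick stand in for single and double clicks.

        using System.Windows.Controls;
        using System.Windows.Controls.Primitives;
        using System.Windows.Input;

        public class BoxGridDemo
        {
            // Stand-in for the UniformGrid declared in the poster's XAML.
            private readonly UniformGrid uniformGrid1 = new UniformGrid();
            private readonly TextBox[] textarray = new TextBox[100];

            public void Build()
            {
                for (int i = 0; i < textarray.Length; i++)
                {
                    textarray[i] = new TextBox { Height = 30, Width = 50 };

                    // One shared handler serves all 100 boxes.
                    textarray[i].PreviewMouseDown += Box_PreviewMouseDown;
                    textarray[i].MouseDoubleClick += Box_MouseDoubleClick;

                    uniformGrid1.Children.Add(textarray[i]);
                }
            }

            private void Box_PreviewMouseDown(object sender, MouseButtonEventArgs e)
            {
                var box = (TextBox)sender;   // the box that was clicked
                // ... react to the click ...
            }

            private void Box_MouseDoubleClick(object sender, MouseButtonEventArgs e)
            {
                var box = (TextBox)sender;
                // ... react to the double-click ...
            }
        }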

    Read the article

  • How to intercept processing when Session.IsNewSession is true

    - by Cen
    I have a small four-page application for my customers to work through. They fill out information. If they let it sit too long and the Session times out, I want to pop up a JavaScript alert that their session has expired and that they need to start over, and then redirect them to the first page of the application. I'm getting some strange behavior. I'm stepping through the code, forcing Session.IsNewSession to be true. At this point, I write out a call to JavaScript into a Literal control placed at the bottom of the page. The JavaScript is called, and the redirection occurs. However, what is happening is this: I press a button which is more or less a "Next Page" button, triggering this code. The next page is displayed, and then the alert and redirection occur. The result I was expecting was to stay on the page where I received the timeout, with the alert popping up over it, followed by the redirection. I'm checking Session.IsNewSession in a base class for these pages, overriding the OnInit event. Any ideas why I am getting this behavior? Thanks!
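
    For reference, a sketch of the usual shape of that base-class check (not the poster's code; the redirect target is a hypothetical page). Ending the response inside OnInit is what stops the "Next Page" target from rendering before the expiry handling runs:

        using System;
        using System.Web.UI;

        public class BasePage : Page
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);

                if (Context.Session != null && Session.IsNewSession)
                {
                    // A session cookie arrived with the request, yet the server created a
                    // new session: the previous one expired, as opposed to a first visit.
                    string cookieHeader = Request.Headers["Cookie"];
                    if (cookieHeader != null && cookieHeader.IndexOf("ASP.NET_SessionId") >= 0)
                    {
                        // Passing true ends the response here, so the button's target page
                        // never renders; the expired-session page can show the alert and
                        // send the user back to the start.
                        Response.Redirect("~/SessionExpired.aspx", true);
                    }
                }
            }
        }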

    Read the article

  • What is the idea behind scaling an image using Lanczos?

    - by banister
    Hi, I'm interested in image scaling algorithms and have implemented the bilinear and bicubic methods. However, I have heard of Lanczos and other more sophisticated methods for even higher quality image scaling, and I am very curious how they work. Could someone here explain the basic idea behind scaling an image using Lanczos (both upscaling and downscaling) and why it results in higher quality? I do have a background in Fourier analysis and have done some signal processing work in the past, but not in relation to image processing, so don't be afraid to use terms like "frequency response" and such in your answer :) EDIT: I guess what I really want to know is the concept and theory behind using a convolution filter for interpolation. (Note: I have already read the Wikipedia article on Lanczos resampling but it didn't have nearly enough detail for me.) Thanks a lot!
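
    As an aside (not part of the original question), the kernel the question asks about is a windowed sinc. With window size a (typically 2 or 3), in LaTeX notation:

        \operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x},
        \qquad
        L(x) = \begin{cases}
                 \operatorname{sinc}(x)\,\operatorname{sinc}(x/a) & \text{if } |x| < a \\
                 0 & \text{otherwise}
               \end{cases}

    and the resampled value at position x is a convolution of the kernel with the nearby samples s_i:

        S(x) = \sum_{i = \lfloor x \rfloor - a + 1}^{\lfloor x \rfloor + a} s_i \, L(x - i)

    Since sinc is the ideal low-pass filter, the Lanczos kernel is a practical finite-support approximation of it, which is why it keeps more detail than bilinear or bicubic; when downscaling, the kernel is additionally stretched by the scale factor so that it also acts as the anti-aliasing filter.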

    Read the article

  • Processing XML form input in ASP

    - by Omar Kooheji
    I'm maintaining a legacy application which consists of some ASP.NET pages with C# code-behinds and some classic ASP pages. I need to change the way the application accepts its input: instead of reading a set of parameters from several form fields, it should read one form field that contains some XML and parse that to get the parameters out. I've written a C# class that takes the NameValueCollection from the HttpRequest's Form property, like so:

        NameValueCollection form = Request.Form;
        Dictionary<string, string> fieldDictionary = RequestDataExtractor.BuildFieldDictionary(form);

    The code in the class looks for a particular parameter; if it's there it processes the XML and outputs a Dictionary, and if it's not there it just cycles through the form parameters and puts them all into the dictionary (allowing the old method to still work). How would I do this in ASP? Can I use the same class, or a modified version of it, or do I have to write some new code to get this working? If I have to write ASP code, what's the best way to process the XML in ASP? Sorry if this seems like a stupid question, but I know next to nothing about ASP and VB.

    Read the article

  • How to perform undirected graph processing from SQL data

    - by recipriversexclusion
    I ran into the following problem while dynamically creating topics for our ActiveMQ system. I have a number of processes (M_1, ..., M_n), where n is not large, typically 5-10. Some of the processes listen to the output of others through a message queue; these edges are specified in an XML file, e.g.

        <link from="M1" to="M3"></link>
        <link from="M2" to="M4"></link>
        <link from="M3" to="M4"></link>

    The edges are sparse, so there won't be many of them. I will parse this XML and store the information in an SQL DB, one table for nodes and another for edges. Now, I need to dynamically create strings of the form

        M1.exe --output_topic=T1
        M2.exe --output_topic=T2
        M3.exe --input_topic=T1 --output_topic=T3
        M4.exe --input_topic=T2 --input_topic=T3

    where the topic tags are sequentially generated. What is the best way to go about querying SQL to obtain these relationships? Are there any tools or tutorials you can point me to? I've never done graphs with SQL. Using SQL is imperative, because we use it for other stuff too. Thanks!
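
    Given how small the edge list is, one option (a sketch with hypothetical table and column names, not the poster's schema) is to fetch the edges with a plain SELECT and do the topic numbering and string building in application code; note that the desired output above amounts to one topic per edge.

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;
        using System.Linq;

        class TopicWiring
        {
            static void Main()
            {
                // Hypothetical schema: Edges(FromNode, ToNode).
                var edges = new List<(string From, string To)>();

                using (var conn = new SqlConnection("...connection string..."))
                using (var cmd = new SqlCommand("SELECT FromNode, ToNode FROM Edges", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            edges.Add((reader.GetString(0), reader.GetString(1)));
                        }
                    }
                }

                // One topic per edge, numbered in the order the edges were read.
                var topics = edges.Select((e, i) => (Edge: e, Topic: "T" + (i + 1))).ToList();

                foreach (var node in edges.SelectMany(e => new[] { e.From, e.To }).Distinct())
                {
                    string inputs  = string.Concat(topics.Where(t => t.Edge.To == node)
                                                         .Select(t => " --input_topic=" + t.Topic));
                    string outputs = string.Concat(topics.Where(t => t.Edge.From == node)
                                                         .Select(t => " --output_topic=" + t.Topic));
                    Console.WriteLine(node + ".exe" + inputs + outputs);
                }
            }
        }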

    Read the article

  • help merging perl code routines together for file processing

    - by jdamae
    I need some Perl help in putting these two processes/pieces of code to work together. I was able to get them working individually to test, but I need help bringing them together, especially with the loop constructs; I'm not sure if I should go with foreach. Anyway, the code is below. Any best practices would be great too, as I'm learning this language. Thanks for your help. Here's the process flow I am looking for:
    - read a directory
    - look for a particular file
    - use the file name to strip out some key information to create a newly processed file
    - process the input file
    - create the newly processed file for each input file read (if I read in 10, I create 10 new files)

    Sample records:

        col1,col2,col3,col4,col5
        [email protected],[email protected],8,2009-09-24 21:00:46,1
        [email protected],[email protected],16,2007-08-18 22:53:12,33
        [email protected],[email protected],16,2007-08-18 23:41:23,33

    Here's my test code. Target file type: `/backups/test/foo101.name.aue-foo_p002.20110124.csv`

    Part 1:

        my $target_dir = "/backups/test/";
        opendir my $dh, $target_dir or die "can't opendir $target_dir: $!";
        while (defined(my $file = readdir($dh))) {
            next if ($file =~ /^\.+$/);
            # Get filename attributes
            if ($file =~ /^foo(\d{3})\.name\.(\w{3})-foo_p(\d{1,4})\.\d+.csv$/) {
                print "$1\n";
                print "$2\n";
                print "$3\n";
            }
            print "$file\n";
        }

    Part 2:

        use strict;
        use Digest::MD5 qw(md5_hex);

        # Create new file
        open (NEWFILE, ">/backups/processed/foo$1.name.$2-foo_p$3.out") || die "cannot create file";

        my $data = '';
        my $line1 = <>;
        chomp $line1;

        my @heading = split /,/, $line1;
        my ($sep1, $sep2, $eorec) = ( "^A", "^E", "^D");

        while (<>) {
            my $digest = md5_hex($data);
            chomp;
            my (@values) = split /,/;
            my $extra = "__mykey__$sep1$digest$sep2" ;
            $extra .= "$heading[$_]$sep1$values[$_]$sep2" for (0..scalar(@values));
            $data .= "$extra$eorec";
            print NEWFILE "$data";
        }
        #print $data;
        close (NEWFILE);

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to an array in which to store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating and managing array space. The algorithms will frequently need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that only one thread could run the algorithm at any one time.
    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
    5. Allocate extremely large arrays (e.g. double[10000000]) and also give the algorithm offsets into the array, so that different threads use different areas of the array independently. This will obviously require some code to manage the offsets and the allocation of array ranges.

    Any thoughts on which approach would be best (and why)?

    Read the article

  • PNG image processing in .NET

    - by Oybek
    I have the following task: take a base image and overlay another one on it. The base image is an 8-bit PNG, as is the overlay. (The original post includes the base and overlay images, plus the actual result and how it should look.) The picture on the left is a screenshot where one picture is simply positioned on top of the other with HTML, and the second is the result of programmatic merging. As you can see in the screenshot, the borders of the text are darker. Here are the sizes of the images:

        Base image     14.9 KB
        Overlay image   6.87 KB
        Result image   34.8 KB

    So the size of the resulting image is also huge. Here is the code that I use to merge the pictures:

        /*...*/
        public Stream Concatinate(Stream baseStream, params Stream[] overlayStreams)
        {
            var @base = Image.FromStream(baseStream);
            var canvas = new Bitmap(@base.Width, @base.Height);
            using (var g = canvas.ToGraphics())
            {
                g.DrawImage(@base, 0, 0);
                foreach (var item in overlayStreams)
                {
                    using (var overlayImage = Image.FromStream(item))
                    {
                        try { Overlay(@base, overlayImage, g); } catch { }
                    }
                }
            }
            var ms = new MemoryStream();
            canvas.Save(ms, ImageFormat.Png);
            canvas.Dispose();
            @base.Dispose();
            return ms;
        }
        /*...*/

        /* ToGraphics extension */
        public static Graphics ToGraphics(this Image image,
            CompositingQuality compositingQuality = CompositingQuality.HighQuality,
            SmoothingMode smoothingMode = SmoothingMode.HighQuality,
            InterpolationMode interpolationMode = InterpolationMode.HighQualityBicubic)
        {
            var g = Graphics.FromImage(image);
            g.CompositingQuality = compositingQuality;
            g.SmoothingMode = smoothingMode;
            g.InterpolationMode = interpolationMode;
            return g;
        }

    My questions are: What should I do to merge the images so the result matches the screenshot? How can I lower the size of the result image? And is System.Drawing a suitable tool for this, or is there a better tool for working with PNG in .NET?

    Read the article

  • When using delegates, need better way to do sequential processing

    - by Padawan
    I have a class WebServiceCaller that uses NSURLConnection to make asynchronous calls to a web service. The class provides a delegate property, and when a web service call is done it calls a method webServiceDoneWithXXX on the delegate. There are several web service methods that can be called, two of which are, say, GetSummary and GetList. The classes that use WebServiceCaller initially need both the summary and the list, so they are written like this:

        -(void)getAllData {
            [webServiceCaller getSummary];
        }

        -(void)webServiceDoneWithGetSummary {
            [webServiceCaller getList];
        }

        -(void)webServiceDoneWithGetList {
            ...
        }

    This works, but there are at least two problems: (1) The calls are split across delegate methods, so it's hard to see the sequence at a glance, but more importantly it's hard to control or modify the sequence. (2) Sometimes I want to call just GetSummary and not also GetList, so I would then have to use an ugly class-level state variable that tells webServiceDoneWithGetSummary whether to call GetList or not. Assume that GetList cannot run until GetSummary completes and returns some data which is used as input to GetList. Is there a better way to handle this and still get asynchronous calls?

    Update based on Matt Long's answer: Using notifications instead of a delegate, it looks like I can solve problem 2 by setting a different selector depending on whether I want the full sequence (GetSummary+GetList) or just GetSummary. Both observers would still use the same notification name when calling GetSummary. I would have to write two separate methods to handle GetSummaryDone instead of using a single delegate method (where I would have needed some class-level variable to tell whether to then call GetList).

        -(void)getAllData {
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getSummaryDoneAndCallGetList:)
                         name:kGetSummaryDidFinish object:nil];
            [webServiceCaller getSummary];
        }

        -(void)getSummaryDoneAndCallGetList {
            [NSNotificationCenter removeObserver]
            //process summary data
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getListDone:)
                         name:kGetListDidFinish object:nil];
            [webServiceCaller getList];
        }

        -(void)getListDone {
            [NSNotificationCenter removeObserver]
            //process list data
        }

        -(void)getJustSummaryData {
            [[NSNotificationCenter defaultCenter] addObserver:self
                     selector:@selector(getJustSummaryDone:) //different selector but
                         name:kGetSummaryDidFinish object:nil]; //same notification name
            [webServiceCaller getSummary];
        }

        -(void)getJustSummaryDone {
            [NSNotificationCenter removeObserver]
            //process summary data
        }

    I haven't actually tried this yet. It seems better than having state variables and if-then statements, but you have to write more methods. I still don't see a solution for problem 1.

    Read the article

  • CTE and last known date processing

    - by stackoverflowuser
    Input:

        @StartDate = '01/25/2010'
        @EndDate   = '02/06/2010'

    I have two CTEs in a stored procedure, as follows:

        with CTE_A as
        (
            [gives output A, shown below]
        ),
        CTE_B as
        (
            -- a query using @StartDate and @EndDate goes here
        )

    Inside CTE_B I want to check whether @StartDate is NOT in output A, and if so replace it with the last known date; in this case, since @StartDate is earlier than any date in output A, @StartDate should become 02/01/2010. Likewise, if @EndDate is NOT in output A, it should be replaced with the last known date; in this case, since @EndDate is 02/06/2010, it will be replaced with 02/05/2010.

    Output A:

        Name    Date
        A       02/01/2010
        B       02/01/2010
        C       02/05/2010
        D       02/10/2010

    Read the article

  • Processing command-line arguments in prefix notation in Python

    - by ejm
    I'm trying to parse a command line in Python which looks like the following:

        $ ./command -o option1 arg1 -o option2 arg2 arg3

    In other words, the command takes an unlimited number of arguments, and each argument may optionally be preceded by an -o option which relates specifically to that argument. I think this is called "prefix notation". In the Bourne shell I would do something like the following:

        while test -n "$1"
        do
            if test "$1" = '-o'
            then
                option="$2"
                shift 2
            fi
            # Work with $1 (the argument) and $option (the option)
            # ...
            shift
        done

    Looking around at the Bash tutorials, etc., this seems to be the accepted idiom, so I'm guessing Bash is optimized to work with command-line arguments this way. Trying to implement this pattern in Python, my first guess was to use pop(), as this is basically a stack operation. But I'm guessing this won't work as well in Python, because the list of arguments in sys.argv is in the wrong order and would have to be processed like a queue (i.e. pop from the left), and I've read that lists are not optimized for use as queues in Python. So my ideas are: convert argv to a collections.deque and use popleft(); reverse argv using reverse() and use pop(); or maybe just work with the list indices themselves. Does anyone know of a better way to do this, or, failing that, which of my ideas would be best practice in Python?

    Read the article

  • Client-side or server-side processing?

    - by Nick
    So, I'm new to dynamic web design (my sites have been mostly static with some PHP), and I'm trying to learn the latest technologies in web development (which seems to mean AJAX). I was wondering: if you're transferring a lot of data, is it better to construct the page on the server and "push" it to the user, or to "pull" just the data needed and build the HTML around it on the client side using JavaScript? More specifically, I'm using CodeIgniter as my PHP framework and jQuery for JavaScript. If I wanted to display a table of data to the user (dynamically), would it be better to format the HTML with CodeIgniter (create the tables, add CSS classes to elements, etc.), or to serve the raw data as JSON and build it into a table with jQuery? My intuition says to do it client-side, as it would save bandwidth and the page would probably load quicker with the JavaScript optimizations modern browsers have; however, the site would then break for anyone not using JavaScript... Thanks for the help

    Read the article
