Search Results

  • How to completely clean a select2 control?

    - by Candil
    I'm working with the awesome select2 control. I'm trying to clean and disable the select2 control along with its content, so I do this:

        $("#select2id").empty();
        $("#select2id").select2("disable");

    OK, it works, but if I had a value selected, all the items are removed and the control is disabled, yet the selected value is still displayed. I want to clear all content so that the placeholder is shown. Here is an example I made where you can see the issue: http://jsfiddle.net/BSEXM/

    HTML:

        <select id="sel" data-placeholder="This is my placeholder">
            <option></option>
            <option value="a">hello</option>
            <option value="b">all</option>
            <option value="c">stack</option>
            <option value="c">overflow</option>
        </select>
        <br>
        <button id="pres">Disable and clear</button>
        <button id="ena">Enable</button>

    Code:

        $(document).ready(function () {
            $("#sel").select2();
            $("#pres").click(function () {
                $("#sel").empty();
                $("#sel").select2("disable");
            });
            $("#ena").click(function () {
                $("#sel").select2("enable");
            });
        });

    CSS:

        #sel { margin: 20px; }

    Do you have any idea or advice about this?

  • Setting Class-Level Variable to Use Between Event Handlers

    - by lush
    I'm having a hard time understanding why the following code doesn't work. I'm sure it's something remedial that I'm missing or not understanding. I currently have a page that asks for user input. If, based on the input and the logged-in user, I find data from this page already in the database, I need to update the existing records rather than creating new ones, so I set a class-level bool to true. The problem is, when MyNextButton is clicked, PreviouslySubmitted is still false. So, I'm not sure how to make the value of this variable persist. Any advice is appreciated, thanks.

        public partial class MyForm : System.Web.UI.Page
        {
            private bool previouslySubmitted;

            protected void Page_Load(object sender, EventArgs e)
            {
                MyButton.Click += (o, i) =>
                {
                    q = from a in db.TableA
                        where (a.SomeField == SomeValue)
                        select a;
                    if (q.Any())
                    {
                        PreviouslySubmitted = true;
                        //populate the form's fields with values from database for user to revise
                    }
                }

                MyNextButton.Click += (o, i) =>
                {
                    if (PreviouslySubmitted)
                    {
                        //update database
                    }
                    else
                    {
                        //insert into database
                    }
                }

  • Is there a better tool than postcat for viewing postfix mail queue files?

    - by Geekman
    So I got a call early this morning about a client needing to see what email they have waiting to be delivered, sitting in our secondary mail server. Their link to the main server had been (and still is) down for two days and they needed to see their email. So I wrote up a quick Perl script to use mailq in combination with postcat to dump each email for their address into separate files, tar'd it up and sent it off. Horrible code, I know, but it was urgent. My solution works OK in that it at least gives a raw view, but I thought tonight it would be nice if I had a solution where I could provide their email attachments and maybe remove some "garbage" header text as well. Most of the important emails seem to have a PDF or similar attached. I've been looking around, but the only method of viewing queue files I can see is the postcat command, and I really don't want to write my own parser - so I was wondering if any of you have already done so, or know of a better command to use? Here's the code for my current solution:

        #!/usr/bin/perl
        $qCmd="mailq | grep -B 2 \"someemailaddress@isp\" | cut -d \" \" -f 1";
        @data = split(/\n/, `$qCmd`);
        $i = 0;
        foreach $line (@data) {
            $i++;
            $remainder = $i % 2;
            if ($remainder == 0) {
                next;
            }
            if ($line =~ /\(/ || $line =~ /\n/ || $line eq "") {
                next;
            }
            print "Processing: " . $line . "\n";
            `postcat -q $line > $line.email.txt`;
            $subject=`cat $line.email.txt | grep "Subject:"`;
            #print "SUB" . $subject;
            #`cat $line.email.txt > \"$subject.$line.email.txt\"`;
        }

    Any advice appreciated.

  • Managing HTML rich text selections

    - by swami
    Hi, I am writing a component for a web app which will display some HTML and let me capture and manipulate the selection boundaries (of the text selected by the user). I have done this successfully (for Mozilla) with a simple div element using window.getSelection(). However, the browser selection API is different for IE. If I were to use a textarea instead (for interacting with the selection API), is there a uniform API across the browsers? Then I would need to overlay a DIV on top of this to display the styled text, and presumably I'd need to manage the cursor etc... Basically I want a rich text editor - but without editing. Does anyone have any advice on the best way to go about this that is quick, simple and cross-browser compatible? I don't want to spend ages reinventing the wheel... (If anyone's interested - this is for an online XML editor. I capture the user's selection on an HTML version of the XML doc and then send the selection offset info to the server, where the real XML doc gets marked up.) Kind regards, Swami

  • WordPress plugin to output a certain category on a content page - tweak help

    - by talkingD0G
    I'm using the WordPress plugin Category Page to display the most recent 5 posts from a certain category on a regular content page (not the blog page) of a website. Right now the plugin is limited to displaying the post title linked to the post page. This is a video-blog type site and I need the plugin to display the post title (as it does now) with the video as well. Probably just telling the script to show the content would work, but I don't know how to tweak it. This is the section of the script that is outputting the post title:

        function page2cat_content_catlist($content){
            global $post;
            if ( stristr( $content, '[catlist' )) {
                $search = "@(?:<p>)*\s*\[catlist\s*=\s*(\w+|^\+)\]\s*(?:</p>)*@i";
                if (preg_match_all($search, $content, $matches)) {
                    if (is_array($matches)) {
                        $title = get_option('p2c_catlist_title');
                        if($title != "")
                            $output = "<h4>".$title."</h4>";
                        else
                            $output = "";
                        $output .= "<ul class='p2c_catlist'>";
                        $limit = get_option('p2c_catlist_limit');
                        foreach ($matches[1] as $key =>$v0) {
                            $catposts = get_posts('category='.$v0."&numberposts=".$limit);
                            foreach($catposts as $single):
                                $output .= "<li><a href='".get_permalink($single->ID)." '>".$single->post_title."</a></li>";
                            endforeach;
                            $search = $matches[0][$key];
                            $replace= $output;
                            $content= str_replace ($search, $replace, $content);
                        }
                        $output .= "</ul>";
                    }
                }
            }
            return $content;
        }

    If anyone has any advice or knows how to help, thanks in advance!

  • C# string interning

    - by CodingThunder
    I am trying to understand string interning and why it doesn't seem to work in my example. The point of the example is to show that Example 1 uses less memory (a lot less), as it should only have 10 strings in memory. However, in the code below both examples use roughly the same amount of memory (virtual size and working set). Please advise why Example 1 isn't using a lot less memory. Thanks

    Example 1:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++)
        {
            for (int k = 0; k < 10; k++)
            {
                list.Add(string.Intern(k.ToString()));
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();

    Example 2:

        IList<string> list = new List<string>(10000);
        for (int i = 0; i < 10000; i++)
        {
            for (int k = 0; k < 10; k++)
            {
                list.Add(k.ToString());
            }
        }
        Console.WriteLine("intern Done");
        Console.ReadLine();

  • Noob - Cycle through stored names and skip blanks

    - by ActiveJimBob
    NOOB trying to make my code more efficient. On a scroll button push, the function 'SetName' stores a number in the integer 'iName', which is an index against 5 names stored in memory. If a name is not set in memory, it skips to the next. The code works, but takes up a lot of room. Any advice appreciated. Code:

        #include <string.h>

        int iName = 0;
        int iNewName = 0;

        BYTE GetName ()
        {
            return iName;
        }

        void SetName (int iNewName)
        {
            while (iName != iNewName)
            {
                switch (byNewName)
                {
                    case 1:
                        if (strlen (memory.m_nameA) == 0) new_name++;
                        else iName = iNewName;
                        break;
                    case 2:
                        if (strlen (memory.m_nameB) == 0) new_name++;
                        else iName = iNewName;
                        break;
                    case 3:
                        if (strlen (memory.m_nameC) == 0) new_name++;
                        else iName = iNewName;
                        break;
                    case 4:
                        if (strlen (memory.m_nameD) == 0) new_name++;
                        else iName = iNewName;
                        break;
                    case 5:
                        if (strlen (memory.m_nameE) == 0) new_name++;
                        else iName = iNewName;
                        break;
                    default:
                        iNewName = 1;
                        break;
                } // end of case
            } // end of loop
        } // end of SetName function

        void main ()
        {
            while(1)
            {
                if (Button_pushed)
                    SetName(GetName+1);
            } // end of infinite loop
        } // end of main
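    One possible direction is a lookup table of pointers into the name slots plus a single loop that skips empty entries, which removes the repeated switch arms. The sketch below is only an illustration: the NameMemory struct, its field sizes, and the demo main are assumptions standing in for the question's real memory layout.

        #include <stdio.h>
        #include <string.h>

        #define NAME_COUNT 5

        /* Hypothetical layout standing in for the question's memory.m_nameA..m_nameE. */
        struct NameMemory {
            char m_nameA[16], m_nameB[16], m_nameC[16], m_nameD[16], m_nameE[16];
        };

        static struct NameMemory memory;
        static int iName = 0;                       /* 1..5 once a stored name is found */

        /* Move to the requested slot, or the next non-empty one, wrapping around. */
        void SetName(int iNewName)
        {
            const char *names[NAME_COUNT] = {
                memory.m_nameA, memory.m_nameB, memory.m_nameC,
                memory.m_nameD, memory.m_nameE
            };

            for (int tries = 0; tries < NAME_COUNT; tries++) {
                int slot = ((iNewName - 1 + tries) % NAME_COUNT) + 1;   /* 1..5 */
                if (strlen(names[slot - 1]) > 0) {
                    iName = slot;
                    return;
                }
            }
            /* All slots empty: leave iName unchanged instead of looping forever. */
        }

        int main(void)
        {
            strcpy(memory.m_nameB, "Bob");
            strcpy(memory.m_nameE, "Eve");

            SetName(iName + 1);     /* skips empty slot 1, lands on 2 ("Bob") */
            printf("iName = %d\n", iName);

            SetName(iName + 1);     /* skips 3 and 4, lands on 5 ("Eve") */
            printf("iName = %d\n", iName);

            return 0;
        }

    The modulo gives the wrap-around from slot 5 back to slot 1 that the original default case was emulating.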

  • Sprint velocity calculations

    - by jase
    Need some advice on working out the team velocity for a sprint. Our team normally consists of about 4 developers and 2 testers. The scrum master insists that every team member should contribute equally to the velocity calculation, i.e. we should not distinguish between developers and testers when working out how much we can do in a sprint. This is correct according to Scrum, but here's the problem. Despite suggestions to the contrary, testers never help with non-test tasks and developers never help with non-dev tasks, so we are not cross-functional team members at all. Also, despite various suggestions, testers normally spend the first few days of each sprint waiting for something to test. The end result is that typically we take on far more dev work than we actually have capacity for in the sprint. For example, the developers might contribute 20 days to the velocity calculation and the testers 10 days. If you add up the tasks after sprint planning though, dev tasks add up to 25 days and test tasks add up to 5 days. How do you guys deal with this sort of situation?

  • jQuery post and get requests different on local intranet and live server

    - by nccsbim071
    Hi, I have been developing an ASP.NET MVC application where I need to make a large number of jQuery post and get requests to call controller methods and get back JSON results. Everything is working fine. The problem is that I had to write different jQuery post and get request URLs on the local intranet (deployed by making a virtual directory) and the live server. The current jQuery request URL is given as below:

        $.post("/ProjectsChat/GetMessages", { roomId: 24 },..........

    Now this format of URL for a jQuery request works fine for the live server but not for the local intranet, since on the local intranet I have made a virtual directory. It only works when I append the name of the virtual directory, like this: "$.post("MyProjectVirutalDirName/ProjectsChat..................."

    I am sure most of you must have come across the same problem. Now I have made a full project and there are a large number of jQuery requests made; I want to test the application by deploying it on the local intranet and fix the bugs. Changing all the jQuery requests for the local intranet doesn't seem a feasible solution to me. I am really in a big problem; I can't deploy the same project on the live server just like that and test it there, the client will kill me. I need some expert advice. Please help. Thanks

  • Upgraded to Xcode 4 -- Endless stream of duplicate symbol errors causing build errors

    - by D-Nice
    Everything was working perfectly fine in Xcode 3 yesterday before I upgraded. So I completed the upgrade, restarted my computer, and opened my old project. I had to reconfigure a few settings like the header paths so that I could begin to compile. I'm using AdWhirl for ad mediation, and at this point my errors begin to read something like

        duplicate symbol _OBJC_METACLASS_$_SBJSON in
        /Users/Admin/Desktop/TMapLiteAdwhirl/AdWhirl/MMSDK/libMMSDK.a(SBJSON.o) and
        /Users/Admin/Library/Developer/Xcode/DerivedData/TruxMapLite-bgpylibztethnlhkfkdumpvrjvgy/Build/Intermediates/TruxMapLite.build/Debug-iphoneos/TruxMapLite.build/Objects-normal/armv6/SBJSON.o
        for architecture armv6

    The library it's referring to is the SDK for one of the ad networks I'm including in AdWhirl. Both of the 'duplicate symbols' refer to the SAME FILE, but they use different paths. If I still had Xcode 3, I would simply try excluding these libraries from the build path, but I have no idea how that can be done in Xcode 4. I've tried everything, all the way down to deleting the library and all associated files from my project, but when I do this, I will simply get the same type of error for a different library in the AdWhirl directory. This is incredibly frustrating because before the upgrade everything was working smoothly and I was prepared to submit my binary. If anyone has any advice, I'd be more than happy to give it a try. Thanks!

  • ASP.NET MVC stand-alone ascx controls: how do I link CSS and JS most efficiently?

    - by Julian
    Hi, I need some advice. I have developed some ASP.NET MVC web pages. Each page has a master and some ascx controls (between 2 and 6) embedded into it, plus a JS and a CSS file. Up to now everything was fine. In order to improve modularity, flexibility and testability, the ascx's are now expected to be able to work as stand-alone controls. (Each ascx also has its own CSS and JS files; in some cases it has another control inside it.) In order to meet this requirement we call the controller with the relevant parameters and it returns the ascx (partial) directly to the browser, without all of the other parts of the original page. In order to get it to display correctly (CSS) and act correctly (JS/jQuery), all of the relevant files need to be added (as links or scripts, e.g. href="<%= ResolveUrl(styleSheet)%>") to the user control. This contradicts the concept of positioning the files at the most logical place (which could be the master page, for example). How can I overcome this problem? Keep in mind that this is relevant for each "control" ascx file. Any thoughts will be appreciated.

  • Should I use MEF or Prism for my Silverlight project?

    - by Daniel
    Hi! My team (3 developers) will be building a Silverlight LOB application. This is the first Silverlight project for us; we've been doing mostly WinForms. We'll be using Silverlight 4 / VS2010 / possibly WCF RIA Services, and an ASP.NET web application to handle authentication and host the Silverlight pages. We need a way to:

    1. Modularize the Silverlight project so we can work on different parts of the application, then integrate them.
    2. Dynamically load different parts of the application, so the initial download size of the xap file wouldn't be too large.

    After some research, I found out that Prism and MEF are possible solutions to these goals. Can you give me advice on which framework to use, or possibly another solution? We don't have much experience with Silverlight and the project needs to be finished in 3 months, so the learning curves for the frameworks should be considered. Thank you for reading! Any input will be much appreciated.

  • How to call a function from another class file

    - by Guy Parker
    I am very familiar with writing VB-based applications but am new to Xcode (and Objective-C). I have gone through numerous tutorials on the web and understand the basics and how to interact with Interface Builder etc. However, I am really struggling with some basic concepts of the C language and would be grateful for any help you can offer. Here's my problem: I have a simple iPhone app which has a view controller (FirstViewController) and a subview (SecondViewController) with associated header and class files. In FirstViewController.m I have a function defined:

        @implementation FirstViewController

        - (void) writeToServer:(const uint8_t *) buf {
            [oStream write:buf maxLength:strlen((char *)buf)];
        }

    It doesn't really matter what the function is. I want to use this function in my SecondViewController, so in SecondViewController.m I import FirstViewController.h:

        #import "SecondViewController.h"
        #import "FirstViewController.h"

        @implementation SecondViewController

        -(IBAction) SetButton: (id) sender {
            NSString *s = [@"Fill:" stringByAppendingString: FillLevelValue.text];
            NSString *strToSend = [s stringByAppendingString: @":"];
            const uint8_t *str = (uint8_t *) [strToSend cStringUsingEncoding:NSASCIIStringEncoding];
            FillLevelValue.text = strToSend;
            [FirstViewController writeToServer:str];
        }

    This last line is where my problem is. Xcode tells me that FirstViewController may not respond to writeToServer. And when I try to run the application it crashes when this function is called. I guess I don't fully understand how to share functions and, more importantly, the relationship between classes. In an ideal world I would create a global class to place my functions in and call them as required. Any advice gratefully received.

  • Finding Local IP via Socket Creation / getsockname

    - by BSchlinker
    I need to get the IP address of a system within C++. I followed the logic and advice of another comment on here and created a socket, then utilized getsockname to determine the IP address which the socket is bound to. However, this doesn't appear to work (code below). I'm receiving an invalid IP address (58.etc) when I should be receiving a 128.etc. Any ideas?

        string Routes::systemIP(){
            // basic setup
            int sockfd;
            char str[INET_ADDRSTRLEN];
            sockaddr* sa;
            socklen_t* sl;
            struct addrinfo hints, *servinfo, *p;
            int rv;

            memset(&hints, 0, sizeof hints);
            hints.ai_family = AF_UNSPEC;
            hints.ai_socktype = SOCK_DGRAM;

            if ((rv = getaddrinfo("4.2.2.1", "80", &hints, &servinfo)) != 0) {
                fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv));
                return "1";
            }

            // loop through all the results and make a socket
            for(p = servinfo; p != NULL; p = p->ai_next) {
                if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) {
                    perror("talker: socket");
                    continue;
                }
                break;
            }

            if (p == NULL) {
                fprintf(stderr, "talker: failed to bind socket\n");
                return "2";
            }

            // get information on the local IP from the socket we created
            getsockname(sockfd, sa, sl);

            // convert the sockaddr to a sockaddr_in via casting
            struct sockaddr_in *sa_ipv4 = (struct sockaddr_in *)sa;

            // get the IP from the sockaddr_in and print it
            inet_ntop(AF_INET, &(sa_ipv4->sin_addr.s_addr), str, INET_ADDRSTRLEN);
            printf("%s\n", str);

            // return the IP
            return str;
        }
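    For comparison, here is a minimal stand-alone C sketch of the connected-UDP-socket approach (IPv4 only, short error handling, reusing the 4.2.2.1:80 target from the question). Two details matter: getsockname needs real storage and an in/out length rather than uninitialized pointers, and connect() on a UDP socket sends no packets; it only makes the kernel pick the local address it would route from.

        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0) { perror("socket"); return 1; }

            /* "Connect" to a public address; for UDP this only selects a route. */
            struct sockaddr_in remote;
            memset(&remote, 0, sizeof remote);
            remote.sin_family = AF_INET;
            remote.sin_port = htons(80);
            inet_pton(AF_INET, "4.2.2.1", &remote.sin_addr);

            if (connect(fd, (struct sockaddr *)&remote, sizeof remote) < 0) {
                perror("connect");
                close(fd);
                return 1;
            }

            /* getsockname needs real storage and an in/out length. */
            struct sockaddr_in local;
            socklen_t len = sizeof local;
            if (getsockname(fd, (struct sockaddr *)&local, &len) < 0) {
                perror("getsockname");
                close(fd);
                return 1;
            }

            char str[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &local.sin_addr, str, sizeof str);
            printf("local IP: %s\n", str);

            close(fd);
            return 0;
        }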

  • Project Euler, Problem 10: Java solution not working

    - by Dennis S
    Hi, I'm trying to find the sum of the prime numbers < 2,000,000. This is my solution in Java but I can't seem to get the correct answer. Please give some input on what could be wrong; general advice on the code is appreciated. Printing 'sum' gives 1308111344, which is incorrect.

        /*
        The sum of the primes below 10 is 2 + 3 + 5 + 7 = 17.
        Find the sum of all the primes below two million.
        */
        class Helper{
            public void run(){
                Integer sum = 0;
                for(int i = 2; i < 2000000; i++){
                    if(isPrime(i))
                        sum += i;
                }
                System.out.println(sum);
            }

            private boolean isPrime(int nr){
                if(nr == 2)
                    return true;
                else if(nr == 1)
                    return false;
                if(nr % 2 == 0)
                    return false;
                for(int i = 3; i < Math.sqrt(nr); i += 2){
                    if(nr % i == 0)
                        return false;
                }
                return true;
            }
        }

        class Problem{
            public static void main(String[] args){
                Helper p = new Helper();
                p.run();
            }
        }
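    An independent computation is a handy way to sanity-check a result like this. Below is a small C sketch using a sieve of Eratosthenes, not a fix for the Java code above, just a cross-check; it keeps the running total in a 64-bit integer, since the sum of the primes below two million is far too large for a 32-bit int.

        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define LIMIT 2000000

        int main(void)
        {
            /* composite[n] != 0 means n has been crossed out by the sieve */
            unsigned char *composite = calloc(LIMIT, 1);
            if (composite == NULL)
                return 1;

            uint64_t sum = 0;   /* a 32-bit int overflows for this total */

            for (int i = 2; i < LIMIT; i++) {
                if (!composite[i]) {
                    sum += i;
                    for (long long j = (long long)i * i; j < LIMIT; j += i)
                        composite[j] = 1;
                }
            }

            printf("sum of primes below %d: %llu\n", LIMIT, (unsigned long long)sum);
            free(composite);
            return 0;
        }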

  • Attempting to Convert Byte[] into Image... but are there platform issues involved?

    - by user305535
    Greetings, currently I'm attempting to develop an application that takes a byte array that is streamed to us from a Linux C-language program across a TCPClient (stream) and reassembles it back into an image/jpg. The "sending" application was developed by an off-site developer who claims that the image reassembles back into an image without any problems or errors in his test environment (all Linux)... However, we are not so fortunate. I believe we successfully get all of the data sent, storing it as a string (which lets us append the stream until it is complete), and then we convert it back into a byte[]. This appears to be working fine... But when we take the byte[] we get from the streaming (and our string assembly) and try to convert it into an image using System.Drawing.Image.FromStream(), we get errors.... Anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP and C# .NET, but the off-site developer did his work in C and Linux... perhaps there's some difference as to how each operating system converts images into byte arrays? Anyway, here's the code for converting our received byte array (from the TCPClient stream) into an image. This code works when we send an image from a test machine we built that RUNS on XP, but not from the Linux box...

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        // Convert byte[] to Image
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false);  // <-- DIES here, throws a {System.ArgumentException: Parameter is not valid.} error

    Any advice, suggestions, theories, or HELP would be GREATLY appreciated! Please let me know??? Best wishes all! Thanks in advance!

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to some tips on structuring the cluster so that it is possible to submit jobs from remote machines. Currently I have 5 machines. 4 are basically the Hadoop cluster with the NameNodes, YARN etc. One machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking process on the setup, and if anyone can chime in on the points I am not clear about, that would be great. I was thinking about what the best setup for a small cluster would be, so I decided to expose only one manager machine and probably use that to submit all the jobs through it. The other machines will see each other etc, but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it; if anyone could point me in the right direction that would be great. Also, another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on the machine in order to use the normal hadoop commands, and to write/submit jobs, say, from Eclipse or something similar? So to sum it up my questions are:

    1. Is this an OK setup for a small test cluster?
    2. How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it?
    3. How do I set up a client machine to submit jobs to a remote cluster, with an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?

    Thanks, I would greatly appreciate any advice or help on this.

  • How to make increasing numbers after filenames in C?

    - by zaplec
    Hi, I have a little problem. I need to do some little operations on quite a lot of files in one little program. So far I have decided to operate on them in a single loop where I just change the number after the name. The files are all named TFxx.txt, where xx is an increasing number from 1 to 80. So how can I open them all in a single loop, one after another? I have tried this:

        for(i=0; i<=80; i++) {
            char name[8] = "TF"+i+".txt";
            FILE = open(name, r);
            /* Do something */
        }

    As you can see, the second line would work in Python but not in C. I have tried to do similar running numbering in C for this program, but I haven't found out yet how to do that. The format doesn't need to be as it is on the second line, but I'd like to have some advice on how I can solve this problem. All I need to do is just be able to open many files and do the same operations to them.
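    The usual C pattern for this is to build each name with snprintf and open it with fopen inside the loop. A minimal sketch, assuming the files TF1.txt through TF80.txt sit in the current directory:

        #include <stdio.h>

        int main(void)
        {
            char name[32];                       /* room for "TF80.txt" and then some */

            for (int i = 1; i <= 80; i++) {
                snprintf(name, sizeof name, "TF%d.txt", i);  /* builds "TF1.txt" ... "TF80.txt" */

                FILE *fp = fopen(name, "r");
                if (fp == NULL) {
                    fprintf(stderr, "could not open %s\n", name);
                    continue;                    /* skip missing files */
                }

                /* ... read and process the file here ... */

                fclose(fp);
            }
            return 0;
        }

    Unlike sprintf, snprintf also guards against overflowing the buffer if the name pattern ever grows.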

  • Loading an XML configuration file BEFORE the flex application loads

    - by Shahar
    Hi, we are using an XML file as an external configuration file for several parameters in our application (including default values for UI components and property values of some service-layer objects). The idea is to be able to load the XML configuration file before the Flex application initializes any of its components. This is crucial because XML loading is processed asynchronously in Flex, which can potentially cause race conditions in the application. For example: the configuration file holds the endpoint URL of a web service used to obtain data from the server. The URL resides in the XML because we want to allow our users to alter the endpoint URL according to their environment. Now, because the endpoint URL is retrieved only after the XML has been completely loaded, some of the application's components might invoke operations on this web service before it is initialized with the correct endpoint. The trivial solution would have been to suspend the initialization of the application until the complete event is dispatched by the loader. But it appears that this solution is far from trivial. I haven't found a single solution that allows me to load the XML before any other object in the application. Can anyone advise or comment on this matter? Regards, Shahar

  • Pros and Cons of using SqlCommand Prepare in C#?

    - by MadBoy
    When I was reading books to learn C# (they might have been some old Visual Studio 2005 books) I encountered advice to always use SqlCommand.Prepare every time I execute an SQL call (whether it's a SELECT/UPDATE or INSERT on SQL Server 2005/2008) and pass parameters to it. But is it really so? Should it be done every time? Or just sometimes? Does it matter whether it's one parameter being passed or five or twenty? What boost should it give, if any? Would it be noticeable at all? (I've been using SqlCommand.Prepare here and skipped it there and never had any problems or noticeable differences.) For the sake of the question this is my usual code, but this is more of a general question.

        public static decimal pobierzBenchmarkKolejny(string varPortfelID, DateTime data, decimal varBenchmarkPoprzedni, decimal varStopaOdniesienia)
        {
            const string preparedCommand = @"SELECT [dbo].[ufn_BenchmarkKolejny](@varPortfelID, @data, @varBenchmarkPoprzedni, @varStopaOdniesienia) AS 'Benchmark'";

            using (var varConnection = Locale.sqlConnectOneTime(Locale.sqlDataConnectionDetailsDZP))
            //if (varConnection != null)
            {
                using (var sqlQuery = new SqlCommand(preparedCommand, varConnection))
                {
                    sqlQuery.Prepare();
                    sqlQuery.Parameters.AddWithValue("@varPortfelID", varPortfelID);
                    sqlQuery.Parameters.AddWithValue("@varStopaOdniesienia", varStopaOdniesienia);
                    sqlQuery.Parameters.AddWithValue("@data", data);
                    sqlQuery.Parameters.AddWithValue("@varBenchmarkPoprzedni", varBenchmarkPoprzedni);

                    using (var sqlQueryResult = sqlQuery.ExecuteReader())
                        if (sqlQueryResult != null)
                        {
                            while (sqlQueryResult.Read())
                            {
                                //sqlQueryResult["Benchmark"];
                            }
                        }
                }
            }

  • Script to install and compile Python, Django, Virtualenv, Mercurial, Git, LessCSS, etc... on Dreamhost

    - by tmslnz
    The Story

    After cleaning up my Dreamhost shared server's home folder from all the cruft accumulated over time, I decided to start afresh and compile/reinstall Python. All the tutorials and snippets I found seemed overly simplistic, assuming (or ignoring) a bunch of dependencies needed by Python to compile all modules correctly. So, starting from http://andrew.io/weblog/2010/02/installing-python-2-6-virtualenv-and-virtualenvwrapper-on-dreamhost/ (so far the best guide I found), I decided to write a set-and-forget Bash script to automate this painful process, including along the way a bunch of other things I am planning to use.

    The Script

    I am hosting the script on http://bitbucket.org/tmslnz/python-dreamhost-batch/src/

    The TODOs

    So far it runs fine, and does all it needs to do in about 900 seconds, giving me at the end of the process a fully functional Python / Mercurial / etc... setup without even needing to log out and back in. I thought this might be of use to others too, but there are a few things that I think it's missing, and I am not quite sure how to go about them, what's the best way to do them, or if they just don't make any sense at all:

    - Check for errors and break
    - Check for minor version bumps of the packages and give warnings
    - Check for known dependencies
    - Use arguments to install only some of the packages instead of commenting out lines
    - Organise the code in a manner that's easy to update
    - Optionally make the installers and compiling silent, with error logging to file
    - Failproof .bashrc modification to prevent breaking ssh logins and having to log back in via FTP to fix it

    EDIT: The implied question is: can anyone, more bashful than me, offer general advice on the worthiness of the above points or highlight any problems they see with this approach? (See my answer to Ry4an's comment below.)

    The Gist

    I am no UNIX or Bash or compiler expert, and this has been built iteratively, by trial and error. It is somehow going towards apt-get (well, 1% of it...), but since Dreamhost and others obviously cannot give root access on shared servers, this looks to me like a potentially very useful workaround; particularly so with some community work involved.

  • How to optimize Conway's game of life for CUDA?

    - by nlight
    I've written this CUDA kernel for Conway's game of life:

        __global__ void gameOfLife(float* returnBuffer, int width, int height) {
            unsigned int x = blockIdx.x*blockDim.x + threadIdx.x;
            unsigned int y = blockIdx.y*blockDim.y + threadIdx.y;
            float p = tex2D(inputTex, x, y);
            float neighbors = 0;
            neighbors += tex2D(inputTex, x+1, y);
            neighbors += tex2D(inputTex, x-1, y);
            neighbors += tex2D(inputTex, x, y+1);
            neighbors += tex2D(inputTex, x, y-1);
            neighbors += tex2D(inputTex, x+1, y+1);
            neighbors += tex2D(inputTex, x-1, y-1);
            neighbors += tex2D(inputTex, x-1, y+1);
            neighbors += tex2D(inputTex, x+1, y-1);
            __syncthreads();
            float final = 0;
            if(neighbors < 2)
                final = 0;
            else if(neighbors > 3)
                final = 0;
            else if(p != 0)
                final = 1;
            else if(neighbors == 3)
                final = 1;
            __syncthreads();
            returnBuffer[x + y*width] = final;
        }

    I am looking for errors/optimizations. Parallel programming is quite new to me and I am not sure if I get how to do it right. The rest of the app is: memcpy the input array to a 2D texture inputTex stored in a CUDA array; the output is memcpy-ed from global memory to the host and then dealt with. As you can see, a thread deals with a single pixel. I am unsure if that is the fastest way, as some sources suggest doing a row or more per thread. If I understand correctly, NVIDIA themselves say that the more threads, the better. I would love advice on this from someone with practical experience.
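    One low-tech way to hunt for logic errors in a kernel like this is to diff its output against a plain CPU reference. Here is a minimal C sketch of a single Game of Life step; it treats cells outside the grid as dead, which may differ from the texture's addressing mode in the question, so it is a comparison aid rather than a drop-in equivalent.

        #include <stdio.h>

        #define W 5
        #define H 5

        /* One Game of Life step on the CPU, for checking GPU output on small grids.
         * Cells outside the grid count as dead. */
        static void stepLife(const float *in, float *out, int width, int height)
        {
            for (int y = 0; y < height; y++) {
                for (int x = 0; x < width; x++) {
                    int neighbors = 0;
                    for (int dy = -1; dy <= 1; dy++) {
                        for (int dx = -1; dx <= 1; dx++) {
                            if (dx == 0 && dy == 0) continue;
                            int nx = x + dx, ny = y + dy;
                            if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                            if (in[nx + ny * width] != 0.0f) neighbors++;
                        }
                    }
                    int alive = in[x + y * width] != 0.0f;
                    out[x + y * width] =
                        (neighbors == 3 || (alive && neighbors == 2)) ? 1.0f : 0.0f;
                }
            }
        }

        int main(void)
        {
            float grid[W * H] = {0}, next[W * H] = {0};

            /* A vertical blinker: it should become horizontal after one step. */
            grid[1 + 1 * W] = grid[1 + 2 * W] = grid[1 + 3 * W] = 1.0f;

            stepLife(grid, next, W, H);

            for (int y = 0; y < H; y++) {
                for (int x = 0; x < W; x++)
                    putchar(next[x + y * W] != 0.0f ? '#' : '.');
                putchar('\n');
            }
            return 0;
        }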

  • Why are my event listeners firing more than once?

    - by Arms
    In my Flash project I have a movieclip that has 2 keyframes. Both frames contain 1 movieclip each:

    - frame 1 - Landing
    - frame 2 - Game

    The flow of the application is simple:

    1. User arrives on landing page (frame 1)
    2. User clicks "start game" button
    3. User is brought to the game page (frame 2)
    4. When the game is over, the user can press a "play again" button which brings them back to step 1

    Both Landing and Game movieclips are linked to separate classes that define event listeners. The problem is that when I end up back at step 1 after playing the game, the Game event listeners fire twice for their respective event. And if I go through the process a third time, the event listeners fire three times for every event. This keeps happening, so if I loop through the application flow 7 times, the event listeners fire seven times. I don't understand why this is happening because on frame 1, the Game movieclip (and I would assume its related class instance) does not exist - but I'm clearly missing something here. I've run into this problem in other projects too, and tried fixing it by first checking if the event listeners existed and only defining them if they didn't, but I ended up with unexpected results that didn't really solve the problem. I need to ensure that the event listeners only fire once. Any advice & insight would be greatly appreciated, thanks!

  • Rexml - Parsing Data

    - by Paddy
    I have an XML file in the following format:

        <?xml version='1.0' encoding='UTF-8'?>
        <entry xmlns='http://www.w3.org/2005/Atom'
               xmlns:gwo='http://schemas.google.com/analytics/websiteoptimizer/2009'
               xmlns:app='http://www.w3.org/2007/app'
               xmlns:gd='http://schemas.google.com/g/2005'
               gd:etag='W/&quot;DUYGRX85fCp7I2A9WxFWEkQ.&quot;'>
          <id>https://www.google.com/analytics/feeds/websiteoptimizer/experiments/1025910</id>
          <updated>2010-05-31T02:12:04.124-07:00</updated>
          <app:edited>2010-05-31T02:12:04.124-07:00</app:edited>
          <title>Flow Experiment</title>
          <link rel='gwo:goalUrl' type='text/html' href='http://cart.personallifemedia.com/dlg/download.php'/>
          <link rel='alternate' type='text/html' href='https://www.google.com/websiteoptimizer'/>
          <link rel='self' type='application/atom+xml' href='https://www.google.com/analytics/feeds/websiteoptimizer/experiments/1025910'/>
          <gwo:analyticsAccountId>16334726</gwo:analyticsAccountId>
          <gwo:autoPruneMode>None</gwo:autoPruneMode>
          <gwo:controlScript>.....

    I have to parse it and get the data for gd:etag; how do I do it? I was able to get the value using SimpleXML, but I wanted to achieve it in REXML. Please advise.

  • How to write a flexible modular program with good interaction possibilities between modules?

    - by PeterK
    I went through answers on similar topics here on SO but couldn't find a satisfying answer. Since I know this is a rather large topic, I will try to be more specific. I want to write a program which processes files. The processing is nontrivial, so the best way is to split the different phases into standalone modules which would then be used as necessary (since sometimes I will only be interested in the output of module A, sometimes I would need the output of five other modules, etc). The thing is that I need the modules to cooperate, because the output of one might be the input of another. And I need it to be FAST. Moreover, I want to avoid doing certain processing more than once (if module A creates some data which then need to be processed by modules B and C, I don't want to run module A twice to create the input for modules B and C). The information the modules need to share would mostly be blocks of binary data and/or offsets into the processed files. The task of the main program would be quite simple - just parse arguments, run the required modules (and perhaps give some output, or should this be the task of the modules?). I don't need the modules to be loaded at runtime. It's perfectly fine to have libs with a .h file and recompile the program every time there is a new module or some module is updated. The idea of modules is here mainly because of code readability and maintainability, and to be able to have more people working on different modules without the need to have some predefined interface or whatever (on the other hand, some "guidelines" on how to write the modules would probably be required, I know that). We can assume that the file processing is a read-only operation, the original file is not changed. Could someone point me in a good direction on how to do this in C++? Any advice is welcome (links, tutorials, pdf books...).
