Search Results

Search found 19408 results on 777 pages for 'output formats'.

  • rails foreman does not load all my services on start

    - by Rubytastic
    Rails foreman does not load all my services defined in the Procfile. Procfile.rb:

        redis: redis-server
        resque: bundle exec rake resque:start &&> log/resque_worker_queue.log
        privpub: bundle exec rackup private_pub.ru -s thin -E production & &> log/private_pub.log
        sunspot: bundle exec rake sunspot:solr:run

    I always have to start all of them manually by pasting the commands into a terminal; foreman start does not work. What am I missing? This is the foreman output:

        12:35:40 privpub.1 | process terminated
        12:35:40 system    | sending SIGTERM to all processes
        12:35:40 system    | sending SIGTERM to pid 4375
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 # Received SIGTERM, scheduling shutdown...
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 # User requested shutdown...
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 * Saving the final RDB snapshot before exiting.
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 * DB saved on disk
        12:35:40 redis.1   | [4375] 02 Jun 12:35:40 # Redis is now ready to exit, bye bye...
        12:35:40 system    | sending SIGTERM to pid 4376
        12:35:40 resque.1  | rake aborted!
        12:35:40 resque.1  | SIGTERM
        12:35:40 resque.1  |
        12:35:40 resque.1  | (See full trace by running task with --trace)
        12:35:40 system    | sending SIGTERM to pid 4378
        12:35:40 sunspot.1 | rake aborted!
        12:35:40 sunspot.1 | SIGTERM
        12:35:40 sunspot.1 |
        12:35:40 sunspot.1 | (See full trace by running task with --trace)
        12:35:40 sunspot.1 | process terminated
        12:35:40 resque.1  | process terminated
        12:35:40 redis.1   | process terminated
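
    A likely culprit (a guess from the log, not something the poster confirmed): foreman runs each Procfile entry through a shell, so the trailing & backgrounds the command, the shelled process exits immediately, and foreman, seeing one process die, sends SIGTERM to all the others, which is exactly the cascade shown above. A sketch of the same Procfile without backgrounding, using plain redirections (the log paths are the poster's own):

        redis: redis-server
        resque: bundle exec rake resque:start >> log/resque_worker_queue.log 2>&1
        privpub: bundle exec rackup private_pub.ru -s thin -E production >> log/private_pub.log 2>&1
        sunspot: bundle exec rake sunspot:solr:run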

  • Facebook failing on XFBML call using Yii Framework

    - by Wenzi
    I have used this same call in other IFRAME Facebook apps, but here it gives me nothing at all in terms of output. I am trying it on Yii and getting nothing.

        <script type="text/javascript">
        window.onload = function() {
          FB_RequireFeatures(["XFBML"], function() {
            FB.init('xxxxxx', 'xd_receiver.htm');
            FB.XFBML.Host.get_areElementsReady().waitUnitlReady(function() {
              document.getElementById("container").style.visibility = "visible";
            });
          });
        };
        </script>

        <script type="text/javascript">
        function publish() {
          FB_RequireFeatures(["Connect"], function() {
            FB.init('xxxxxx', 'xd_receiver.htm');
            FB.ensureInit(function() {
              FB.Connect.streamPublish();
            });
          });
        }
        </script>

        <fb:serverFbml style="width: 755px;">
          <script type="text/fbml">
            <fb:fbml>
              <fb:request-form
                action="http://apps.facebook.com/ixxxx"
                method="POST"
                invite="true"
                type="rrrrr"
                content="rrrrr <?php echo htmlentities("<fb:req-choice url=\"http://apps.facebook.com/XXXX\" label=\"Authorize My Application\"") ?>">
                <fb:multi-friend-selector showborder="false" actiontext="Invite your friends to use SuperThief.">
              </fb:request-form>
            </fb:fbml>
          </script>
        </fb:serverFbml>

  • BroadcastReceiver not receiving an alarm's broadcast

    - by juanjux
    I have code that sets a new repeating alarm (in production I'll use an inexact repeating one), but the BroadcastReceiver I've registered for handling it is not being called. Here is the code where I set the alarm:

        newAlarmPeriod = 5000; // For debugging
        Intent alarmIntent = new Intent(this, GroupsCheckAlarmReceiver.class);
        PendingIntent sender = PendingIntent.getBroadcast(this, Constants.CHECK_ALARM_CODE, alarmIntent, 0);
        AlarmManager am = (AlarmManager) getSystemService(ALARM_SERVICE);
        am.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + newAlarmPeriod, newAlarmPeriod, sender);

    It seems to work and triggers an alarm every five seconds, as seen in the output of "adb shell dumpsys alarm":

        DUMP OF SERVICE alarm:
        Current Alarm Manager state:
          Realtime wakeup (now=1269941046923):
          RTC_WAKEUP #1: Alarm{43cbac58 type 0 android}
            type=0 when=1269997200000 repeatInterval=0 count=0
            operation=PendingIntent{43bb1738: PendingIntentRecord{43bb1248 android broadcastIntent}}
          RTC_WAKEUP #0: Alarm{43ce30e0 type 0 com.almarsoft.GroundhogReader}
            type=0 when=1269941049555 repeatInterval=5000 count=1
            operation=PendingIntent{43d990c8: PendingIntentRecord{43d49108 com.almarsoft.GroundhogReader broadcastIntent}}
          RTC #1: Alarm{43bfc250 type 1 android}
            type=1 when=1269993600000 repeatInterval=0 count=0
            operation=PendingIntent{43c5a618: PendingIntentRecord{43c4f048 android broadcastIntent}}
          RTC #0: Alarm{43d67dd8 type 1 android}
            type=1 when=1269941100000 repeatInterval=0 count=0
            operation=PendingIntent{43c4e0f0: PendingIntentRecord{43c4f6c8 android broadcastIntent}}
          Broadcast ref count: 0

          Alarm Stats:
          android
            24390ms running, 0 wakeups
            80 alarms: act=android.intent.action.TIME_TICK flg=0x40000004
          com.almarsoft.GroundhogReader
            26ms running, 2 wakeups
            2 alarms: flg=0x4 cmp=com.almarsoft.GroundhogReader/.GroupsCheckAlarmReceiver

    But for some reason my BroadcastReceiver is not being called when the alarm is triggered. I've declared it in the manifest:

        <receiver android:name=".GroupsCheckAlarmReceiver" />

    And this is the abbreviated code:

        public class GroupsCheckAlarmReceiver extends BroadcastReceiver {
            @Override
            public void onReceive(Context context, Intent intent) {
                Toast.makeText(context, "XXX Alarm worked.", Toast.LENGTH_LONG).show();
                Log.d("XXX", "GroupsCheckAlarmReceiver.onReceive");
            }
        }

  • Workaround for GNU Make 3.80 eval bug

    - by bengineerd
    I'm trying to create a generic build template for my Makefiles, along the lines of what the eval documentation discusses. I've run into a known bug in GNU Make 3.80: when $(eval) evaluates a line longer than 193 characters, Make crashes with a "virtual memory exhausted" error. The code that causes the issue looks like this:

        SRC_DIR = ./src/
        PROG_NAME = test

        define PROGRAM_template
        $(1)_SRC_DIR = $$(SRC_DIR)$(1)/
        $(1)_SRC_FILES = $$(wildcard $$($(1)_SRC_DIR)*.c)
        $(1)_OBJ_FILES = $$($(1)_SRC_FILES:.c=.o)

        $$($(1)_OBJ_FILES) : $$($(1)_SRC_FILES)  # This is the problem line
        endef

        $(eval $(call PROGRAM_template,$(PROG_NAME)))

    When I run this Makefile, I get:

        gmake: *** virtual memory exhausted.  Stop.

    The expected behaviour is that all .c files in ./src/test/ get compiled into .o files (via an implicit rule). The problem is that $$($(1)_SRC_FILES) and $$($(1)_OBJ_FILES) together expand to more than 193 characters (if there are enough source files). I have tried running the Makefile on a directory with only two .c files, and it works fine; the error only appears when there are many .c files in the source directory.

    I know that GNU Make 3.81 fixes this bug. Unfortunately I do not have the authority or the ability to install the newer version on the system I'm working on; I'm stuck with 3.80. So, is there some workaround? Maybe split $$($(1)_SRC_FILES) up and declare each dependency individually within the eval?
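
    One workaround along the lines the question itself suggests (a sketch, written from the bug description rather than tested against 3.80): eval one short fragment per source file, so that no single evaluated string comes anywhere near the 193-character limit. The template name here is illustrative:

        define OBJ_template
        $(1:.c=.o) : $(1)
        endef

        $(foreach src,$(wildcard $(SRC_DIR)$(PROG_NAME)/*.c),$(eval $(call OBJ_template,$(src))))

    Each $(eval) now sees a single short dependency line of the form foo.o : foo.c, and the implicit %.o rule still does the actual compiling.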

  • Jetty startup delay

    - by Tauren
    I'm trying to figure out what is causing a one-minute delay in the startup of Jetty. Is it a configuration problem, my application, or something else?

    I have Jetty 7 (jetty-7.0.1.v20091125, 25 November 2009) installed on a server and I deploy a 45MB ROOT.war file into the webapps directory. This is the only webapp configured in Jetty. I then start Jetty with the command:

        java -DSTOP.PORT=8079 -DSTOP.KEY=mystopkey -Denv=stage -jar start.jar etc/jetty-logging.xml etc/jetty.xml &

    I get two lines of output right after doing this:

        2010-03-07 14:20:06.642:INFO::Logging to StdErrLog::DEBUG=false via org.eclipse.jetty.util.log.StdErrLog
        2010-03-07 14:20:06.710:INFO::Redirecting stderr/stdout to /home/zing/jetty-distribution-7.0.1.v20091125/logs/2010_03_07.stderrout.log

    When I press the enter key, I get my command prompt back. Looking at the log file (logs/2010_03_07.stderrout.log), I see the following at the beginning:

        2010-03-07 14:08:50.396:INFO::jetty-7.0.1.v20091125
        2010-03-07 14:08:50.495:INFO::Extract jar:file:/home/zing/jetty-distribution-7.0.1.v20091125/webapps/ROOT.war!/ to /tmp/Jetty_0_0_0_0_8080_ROOT.war___.8te0nm/webapp
        2010-03-07 14:08:52.599:INFO::NO JSP Support for , did not find org.apache.jasper.servlet.JspServlet
        2010-03-07 14:09:51.379:INFO::Set web app root system property: 'webapp.root' = [/tmp/Jetty_0_0_0_0_8080_ROOT.war___.8te0nm/webapp]
        2010-03-07 14:09:51.585:INFO::Initializing Spring root WebApplicationContext
        INFO  - ContextLoader - Root WebApplicationContext: initialization started
        INFO  - XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Sun Mar 07 14:09:51 PST 2010]; root of context hierarchy
        ...

    Notice the one-minute pause between the third and fourth lines. What is Jetty doing at this point? What else could be going on? It doesn't even look like it has started my Spring initialization yet. Note that I checked my /tmp directory to see if the delay was simply the time taken to unpack my war file, but the file had been completely unpacked even at the start of this one-minute delay.
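
    A classic cause of an almost exactly one-minute stall at this point in webapp startup (a guess; nothing in the log above confirms it) is the JVM blocking on /dev/random while seeding SecureRandom for the session-ID generator on an entropy-starved server. If that is what is happening here, pointing the JVM at the non-blocking device removes the pause:

        java -Djava.security.egd=file:/dev/./urandom -DSTOP.PORT=8079 -DSTOP.KEY=mystopkey -Denv=stage -jar start.jar etc/jetty-logging.xml etc/jetty.xml &

    (The extra "." in file:/dev/./urandom is deliberate; it defeats the JDK's special-casing of the plain /dev/urandom path.)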

  • Running job in the background from Perl WITHOUT waiting for return

    - by Rafael Almeida
    The disclaimer: first of all, I know this question (or close variations of it) has been asked a thousand times. I really spent a few hours looking in the obvious and the not-so-obvious places, but there may be something small I'm missing.

    The context: let me define the problem more clearly. I'm writing a newsletter app in which I want the actual sending process to be asynchronous. As in, the user clicks "send", the request returns immediately, and then they can check the progress on a specific page (via AJAX, for example). It's written on your traditional LAMP stack. On the particular host I'm using, PHP's exec() and system() are disabled for security reasons, but Perl's system functions (exec, system and backticks) aren't. So my workaround was to create a "trigger" script in Perl that calls the actual sender via the PHP CLI, and redirects to the progress page.

    Where I'm stuck: the very line that calls the sender is, as of now:

        system("php -q sender.php &");

    The problem is that it is not returning immediately but waiting for the script to finish. I want it to run in the background while the system call itself returns right away. I also tried running a similar script in my Linux terminal, and in fact the prompt doesn't show until after the script has finished, even though my test output doesn't print, indicating it really is running in the background.

    What I already tried: Perl's exec() function (same result as system()), and changing the command to "php -q sender.php | at now", hoping that the "at" daemon would take over so that the PHP process wouldn't stay attached to Perl. What should I try now?
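
    A commonly suggested fix for this exact symptom (a sketch, not verified on the poster's host): the backgrounded child inherits the parent's stdout and stderr, and whoever is reading those streams, the terminal or the web server, keeps waiting until the child closes them. Detaching the child's streams lets system() return at once:

        system("php -q sender.php > /dev/null 2>&1 &");

    If the sender's output is needed later, redirect it to a log file instead of /dev/null.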

  • Retrieve parent node from selection (range) in Gecko and Webkit

    - by Jason
    I am trying to add an attribute when using a WYSIWYG editor that uses the "createLink" command. I thought it would be trivial to get back the node that is created after the browser executes that command. It turns out I am only able to grab this newly created node in IE. Any ideas? The following code demonstrates the issue (the debug logs at the bottom show different output in each browser):

        var getSelectedHTML = function() {
          if ($.browser.msie) {
            return this.getRange().htmlText;
          } else {
            var elem = this.getRange().cloneContents();
            return $("<p/>").append($(elem)).html();
          }
        };

        var getSelection = function() {
          if ($.browser.msie) {
            return this.editor.selection;
          } else {
            return this.iframe[0].contentDocument.defaultView.getSelection();
          }
        };

        var getRange = function() {
          var s = this.getSelection();
          return (s.getRangeAt) ? s.getRangeAt(0) : s.createRange();
        };

        var getSelectedNode = function() {
          var range = this.getRange();
          var parent = range.commonAncestorContainer ? range.commonAncestorContainer
                     : range.parentElement ? range.parentElement()
                     : range.item(0);
          return parent;
        };

        // **** INSIDE SOME EVENT HANDLER ****
        if ($.browser.msie) {
          this.ec("createLink", true);
        } else {
          this.ec("createLink", false, prompt("Link URL:", "http://"));
        }

        var linkNode = $(this.getSelectedNode());
        linkNode.attr("rel", "external");

        $.log(linkNode.get(0).tagName);
        // Gecko:  "body"
        // IE:     "a"
        // Webkit: "undefined"

        $.log(this.getSelectedHTML());
        // Gecko:  "<a href="http://site.com">foo</a>"
        // IE:     "<A href="http://site.com" rel=external>foo</A>"
        // Webkit: "foo"

        $.log(this.getSelection());
        // Gecko:  "foo"
        // IE:     [object Selection]
        // Webkit: "foo"

    Thanks for any help on this; I've scoured related questions on SO with no success!

  • Code Golf: Connect 4

    - by Matthieu M.
    If you don't know the Connect 4 game, follow the link :) I used to play it a lot when I was a child. At least until my little sister got bored with me winning...

    Anyway, I was reading Code Golf: Tic Tac Toe the other day, and I thought that solving the Tic Tac Toe problem was simpler than solving Connect 4... and wondered how much this would reflect on the number of characters a solution would require. I thus propose a similar challenge: find the winner.

    The grid is given in the form of a string meant to be passed as a parameter to a function. The goal of the code golf is to write the body of the function; the parameter will be b, of string type. The image in the Wikipedia article leads to the following representation:

        "....... ..RY... ..YYYR. ..RRYY. ..RYRY. .YRRRYR"

    (6 rows of 7 elements), but it is obviously incomplete (Yellow has not won yet). There is a winner in the grid passed, so there is no need to do error checking. Remember that the winning run might not be exactly 4. The expected output is the letter representing the winner (either R or Y).

    I expect Perl mongers to produce the most unreadable script (along with Ook and Whitespace, of course), but I am most interested in reading innovative solutions. I must admit the magic-square solution for Tic Tac Toe was my personal favourite, and I wonder if there is a way to build a similar one for this. Well, happy Easter weekend :) Now I just have a few days to come up with a solution of my own!
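
    For reference, an ungolfed baseline (a Python sketch; it assumes rows are separated by single spaces as in the example, and scans every cell in the four line directions; a run longer than four still contains a four-in-a-row, so it is detected too):

        def winner(b):
            rows = b.split()              # 6 strings of 7 characters each
            h, w = len(rows), len(rows[0])
            for y in range(h):
                for x in range(w):
                    c = rows[y][x]
                    if c == '.':
                        continue
                    # directions: right, down, down-right, down-left
                    for dy, dx in ((0, 1), (1, 0), (1, 1), (1, -1)):
                        if (0 <= y + 3 * dy < h and 0 <= x + 3 * dx < w and
                                all(rows[y + k * dy][x + k * dx] == c for k in range(4))):
                            return c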

  • Parsing unicode XML with Python SAX on App Engine

    - by Derek Dahmer
    I'm using xml.sax with unicode strings of XML as input, originally entered from a web form. On my local machine (Python 2.5, using the default expat xmlreader, running through App Engine), it works fine. However, the exact same code and input strings fail on the production App Engine servers with "not well-formed". For example, it happens with the code below:

        from xml import sax

        class MyHandler(sax.ContentHandler):
            pass

        handler = MyHandler()

        # Both of these unicode strings return 'not well-formed'
        # on App Engine, but work locally
        sax.parseString(u"<a>b</a>", handler)
        sax.parseString(u"<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

        # Both of these work, but output unicode
        sax.parseString("<a>b</a>", handler)
        sax.parseString("<!DOCTYPE a[<!ELEMENT a (#PCDATA)> ]><a>b</a>", handler)

    resulting in the error:

          File "<string>", line 1, in <module>
          File "/base/python_dist/lib/python2.5/xml/sax/__init__.py", line 49, in parseString
            parser.parse(inpsrc)
          File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 107, in parse
            xmlreader.IncrementalParser.parse(self, source)
          File "/base/python_dist/lib/python2.5/xml/sax/xmlreader.py", line 123, in parse
            self.feed(buffer)
          File "/base/python_dist/lib/python2.5/xml/sax/expatreader.py", line 211, in feed
            self._err_handler.fatalError(exc)
          File "/base/python_dist/lib/python2.5/xml/sax/handler.py", line 38, in fatalError
            raise exception
        SAXParseException: <unknown>:1:1: not well-formed (invalid token)

    Any reason why App Engine's parser, which also uses Python 2.5 and expat, would fail when given unicode input?
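
    Whatever the difference between the two expat builds turns out to be, a workaround that sidesteps the question entirely is to hand parseString an explicit UTF-8 byte string instead of a unicode object (a sketch; the byte-string calls are the ones shown working above, and parseString is documented to take a byte stream):

        from xml import sax

        sax.parseString(u"<a>b</a>".encode("utf-8"), handler)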

  • .NET socket timeout - blocking on Close method

    - by Mark
    I'm having trouble implementing a connect timeout using asynchronous socket calls. The idea being that I call BeginConnect on a Socket object, then use a timer to call Close() on the socket after a timeout period has elapsed. This works fine as long as the socket is created on the GUI thread: the Close method returns immediately, and the callback method is executed. However, if the socket is created on any other thread, the Close method blocks until the default IP timeout occurs. Code to reproduce:

        private Socket client;

        private void button1_Click(object sender, EventArgs e)
        {
            // Creating the socket on a threadpool thread causes Close to block.
            ThreadPool.QueueUserWorkItem((object state) =>
            {
                client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
                IAsyncResult result = client.BeginConnect(IPAddress.Parse("144.1.1.1"), 23,
                    new AsyncCallback(CallbackMethod), client);

                // Wait for 2 seconds before closing the socket.
                if (result.AsyncWaitHandle.WaitOne(2000))
                {
                    MessageBox.Show("Connected.");
                }
                else
                {
                    MessageBox.Show("Timed out. Closing socket...");
                    client.Close();
                    MessageBox.Show("Socket closed.");
                }
            });
        }

        private void CallbackMethod(IAsyncResult result)
        {
            MessageBox.Show("Callback started.");
            Socket client = result.AsyncState as Socket;
            try
            {
                client.EndConnect(result);
            }
            catch (ObjectDisposedException) { }
            MessageBox.Show("Callback finished.");
        }

    If you remove the QueueUserWorkItem line, creating the socket on the GUI thread, the socket closes instantly without blocking. Can anyone shed some light on what's going on? Thanks.

    Edit: the System.Net trace output seems to be different depending on whether the connection is made on the GUI thread or a different thread (one trace from the non-blocking close when using the GUI thread, one from the blocking close when using a non-GUI thread).

  • Illegal start of expression?

    - by Fraser
    I'm trying to build a simple Android app that increments a displayed number every time a button is pressed, but I can't work out how to fix the "illegal start of expression" error I keep getting. My code:

        package com.clicker;

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.View;
        import android.widget.Button;
        import android.widget.TextView;

        public class Clicker extends Activity {

            private int clickerNumber = 0;
            private TextView clickerText;
            private Button clickerButton;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);

                clickerText = (TextView) findViewById(R.id.clickerText);
                final Button clickerButton = (Button) findViewById(R.id.clickerButton);

                clickerButton.setOnClickListener(new View.OnClickListener());
                {
                    public void onClick();
                    {
                        clickerNumber = clickerNumber++;
                        clickerText.setText(Integer.toString(clickerNumber));
                    }
                }
            }
        }

    And the compiler output:

        compile:
            [javac] Compiling 2 source files to /home/fraser/Applications/Android/Code/Clicker/bin/classes
            [javac] /home/fraser/Applications/Android/Code/Clicker/src/com/clicker/Clicker.java:24: ')' expected
            [javac]         clickerButton.setOnClickListener(new View.OnClickListener();
            [javac]                                                                     ^
            [javac] /home/fraser/Applications/Android/Code/Clicker/src/com/clicker/Clicker.java:26: illegal start of expression
            [javac]         public void onClick();
            [javac]         ^
            [javac] /home/fraser/Applications/Android/Code/Clicker/src/com/clicker/Clicker.java:26: illegal start of expression
            [javac]         public void onClick();
            [javac]                ^
            [javac] /home/fraser/Applications/Android/Code/Clicker/src/com/clicker/Clicker.java:26: ';' expected
            [javac]         public void onClick();
            [javac]                            ^
            [javac] /home/fraser/Applications/Android/Code/Clicker/src/com/clicker/Clicker.java:29: ';' expected
            [javac]         clickerText.setText(Integer.toString(clickerNumber)));
            [javac]                                                             ^
            [javac] 5 errors
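
    For reference, the shape the compiler is asking for (a sketch of the likely intent, not the poster's final code): the anonymous class body belongs inside the setOnClickListener(...) call, onClick takes a View parameter, and clickerNumber = clickerNumber++ leaves the field unchanged, so a plain increment is used here:

        clickerButton.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                clickerNumber++;
                clickerText.setText(Integer.toString(clickerNumber));
            }
        });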

  • How do I test OpenCL on GPU when logged in remotely on Mac?

    - by Christopher Bruns
    My OpenCL program can find the GPU device when I am logged in at the console, but not when I am logged in remotely with ssh. Furthermore, if I run the program as root in the ssh session, the program can find the GPU. The computer is a Snow Leopard Mac with a GeForce 9400 GPU.

    If I run the program (see below) from the console or as root, the output is as follows (notice the "GeForce 9400" line):

        2 devices found
        Device #0 name = GeForce 9400
        Device #1 name = Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz

    But if it is just me, over ssh, there is no GeForce 9400 entry:

        1 devices found
        Device #0 name = Intel(R) Core(TM)2 Duo CPU P8700 @ 2.53GHz

    I would like to test my code on the GPU without having to be root. Is that possible? The simplified GPU-finding program is below:

        #include <stdio.h>
        #include <OpenCL/opencl.h>

        int main(int argc, char** argv)
        {
            char dname[500];
            size_t namesize;
            cl_device_id devices[10];
            cl_uint num_devices;
            int d;

            clGetDeviceIDs(0, CL_DEVICE_TYPE_ALL, 10, devices, &num_devices);
            printf("%d devices found\n", num_devices);
            for (d = 0; d < num_devices; ++d) {
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, 500, dname, &namesize);
                printf("Device #%d name = %s\n", d, dname);
            }
            return 0;
        }

    EDIT: I found essentially the same question asked on NVIDIA's forums. Unfortunately, the only answer was of the form "this is the wrong forum".

  • cache_money only writing to memcached on creates and updates, and seemingly never looking in the cache

    - by Shane Liebling
    I seem to be seeing some extremely odd cache_money interactions. When I am on the console and I create and save a new instance of a class, I see the cache misses and cache stores in my memcached console output. Then, when the create finishes, I see a bunch of cache deletions. If I then do any kind of find for the newly created object (or any other object, for that matter), I never see any cache access.

    This is highly confusing. I could almost understand it if finds never hit the cache (though that in itself would be an issue requiring investigation), but finds do seem to hit the cache while the object is being created (checking for associations and such). Has anyone run into this before? Any thoughts? As far as I know there isn't much in the way of configuration options for cache_money, and there certainly don't seem to be any that are on by default and would create these kinds of symptoms. My cache_money config is basically straight out of the docs. Any help would be greatly appreciated.

  • slicing a 2d numpy array

    - by MedicalMath
    The following code:

        import numpy as p

        myarr = [[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6]]
        copy = p.array(myarr)
        p.mean(copy)[:,1]

    generates the following error message:

        Traceback (most recent call last):
          File "<pyshell#3>", line 1, in <module>
            p.mean(copy)[:,1]
        IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index

    I looked up the syntax at this link and I seem to be using the correct syntax to slice. However, when I type copy[:,1] into the Python shell, it gives me the following output, which is clearly wrong, and is probably what is throwing the error:

        array([1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6])

    Can anyone show me how to fix my code so that I can extract the second column and then take the mean of that column, as intended in the original code above?

    EDIT: Thank you for your solutions. However, my posting was an oversimplification of my real problem. I used your solutions in my real code and got a new error. Here is my real code with one of your solutions applied:

        filteredSignalArray = p.array(filteredSignalArray)
        logical = p.logical_and(EndTime - 10.0 <= matchingTimeArray, matchingTimeArray <= EndTime)
        finalStageTime = matchingTimeArray.compress(logical)
        finalStageFiltered = filteredSignalArray.compress(logical)
        for j in range(len(finalStageTime)):
            if j == 0:
                outputArray = [[finalStageTime[j], finalStageFiltered[j]]]
            else:
                outputArray += [[finalStageTime[j], finalStageFiltered[j]]]
        print 'outputArray[:,1].mean() is: ', outputArray[:,1].mean()

    And here is the error message generated by the new code:

          File "mypath\myscript.py", line 1545, in WriteToOutput10SecondsBeforeTimeMarker
            print 'outputArray[:,1].mean() is: ', outputArray[:,1].mean()
        TypeError: list indices must be integers, not tuple

    SECOND EDIT: This is solved now that I added outputArray = p.array(outputArray) above my code. I have been at this too many hours and need to take a break for a while if I am making these kinds of mistakes.
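
    Pulling the two edits together, the working pattern is: make the container an ndarray first, then slice, then average (a short sketch using the poster's alias p):

        import numpy as p

        copy = p.array(myarr)   # must be an ndarray, not a nested list
        col = copy[:, 1]        # the second column
        print p.mean(col)       # equivalently: col.mean()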

  • How to retain XML string as a string field during XML deserialization

    - by detale
    I get an XML input string and want to deserialize it to an object that partially retains the raw XML:

        <SetProfile>
          <sessionId>A81D83BC-09A0-4E32-B440-0000033D7AAD</sessionId>
          <profileDataXml>
            <ArrayOfProfileItem>
              <ProfileItem>
                <Name>Pulse</Name>
                <Value>80</Value>
              </ProfileItem>
              <ProfileItem>
                <Name>BloodPresure</Name>
                <Value>120</Value>
              </ProfileItem>
            </ArrayOfProfileItem>
          </profileDataXml>
        </SetProfile>

    The class definition:

        public class SetProfile
        {
            public Guid sessionId;
            public string profileDataXml;
        }

    I hope the deserialization syntax looks like:

        string inputXML = "..."; // the above XML
        XmlSerializer xs = new XmlSerializer(typeof(SetProfile));
        using (TextReader reader = new StringReader(inputXML))
        {
            SetProfile obj = (SetProfile)xs.Deserialize(reader);
            // use obj ....
        }

    But XmlSerializer throws an exception and won't put <profileDataXml>'s descendants into the "profileDataXml" field as a raw XML string. Is there any way to implement deserialization like that?
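
    XmlSerializer will not map element content onto a string field, but it can hand the subtree back as a node. One commonly used shape (a sketch, not verified against the poster's full schema) is XmlAnyElement on an XmlElement member, with a convenience property for the string view:

        using System;
        using System.Xml;
        using System.Xml.Serialization;

        public class SetProfile
        {
            public Guid sessionId;

            [XmlAnyElement("profileDataXml")]
            public XmlElement profileDataXml;   // subtree kept as XML, not bound to a class

            [XmlIgnore]
            public string profileDataRaw
            {
                get { return profileDataXml == null ? null : profileDataXml.InnerXml; }
            }
        }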

  • Grails - Simple hasMany Problem - How does 'save' work?

    - by gav
    My problem is this: I want to create a Grails domain instance, defining the 'many' instances of another domain that it has. I have the actual source in a Google Code project, but the following should illustrate the problem:

        class Person {
            String name
            static hasMany = [skills: Skill]
            static constraints = {
                id (visible: false)
                skills (nullable: false, blank: false)
            }
        }

        class Skill {
            String name
            String description
            static constraints = {
                id (visible: false)
                name (nullable: false, blank: false)
                description (nullable: false, blank: false)
            }
        }

    If you use this model and def scaffold for the two controllers, you end up with a form that doesn't work. My own attempt enumerates the Skills as checkboxes, but when I save the Volunteer, the skills are null! This is the code for my save method:

        def save = {
            log.info "Saving: " + params.toString()
            def skills = params.skills
            log.info "Skills: " + skills
            def volunteerInstance = new Volunteer(params)
            log.info volunteerInstance
            if (volunteerInstance.save(flush: true)) {
                flash.message = "${message(code: 'default.created.message', args: [message(code: 'volunteer.label', default: 'Volunteer'), volunteerInstance.id])}"
                redirect(action: "show", id: volunteerInstance.id)
                log.info volunteerInstance
            } else {
                render(view: "create", model: [volunteerInstance: volunteerInstance])
            }
        }

    This is my log output (I have custom toString() methods):

        2010-05-10 21:06:41,494 [http-8080-3] INFO bumbumtrain.VolunteerController - Saving: ["skills":["1", "2"], "name":"Ian", "_skills":["", ""], "create":"Create", "action":"save", "controller":"volunteer"]
        2010-05-10 21:06:41,495 [http-8080-3] INFO bumbumtrain.VolunteerController - Skills: [1, 2]
        2010-05-10 21:06:41,508 [http-8080-3] INFO bumbumtrain.VolunteerController - Volunteer[ id: null | Name: Ian | Skills [Skill[ id: 1 | Name: Carpenter ] , Skill[ id: 2 | Name: Sound Engineer ] ]]

    Note that in the final log line the right Skills have been picked up and are part of the object instance. But when the volunteer is saved, the skills are ignored and not committed to the database, despite the in-memory version clearly having the items. Is it not possible to pass the Skills at construction time? There must be a way round this! I need a single form to allow a person to register, but I want to normalise the data so that I can add more skills at a later time. If you think this should 'just work', then a link to a working example would be great. Hope this makes sense; thanks in advance! Gav

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, with minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts on Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below.

    - Is there a solution which would be reasonably fast and would not require compilation? This would seem to be the easiest way to maintain platform independence.
    - If I use something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on the detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python?
    - Does one method lend itself particularly well to multi-processor optimisation?

    (If you're interested, the loop calculates the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.)

        # calculate_dipole
        # -------------------------
        # calculate_dipole works out the dipole field at a given point within the crystal unit cell
        # ---
        # INPUT
        # mu = position at which to calculate the dipole field
        # r_i = array of atomic positions
        # mom_i = corresponding array of magnetic moments
        # ---
        # OUTPUT
        # B = the B-field at this point
        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            # 4pi / mu0 (at the front of the dipole eqn)
            A = 1e-7
            # initialise dipole field
            B = zeros(3, float)

            for i in range(len(relative)):
                # work out the dipole field and add it to the estimate so far
                B += A * (3 * dot(mom_i[i], r_unit[i]) * r_unit[i] - mom_i[i]) / sqrt(dot(relative[i], relative[i]))**3

            return B
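
    Before reaching for a compiler at all, it's worth noting that this particular loop vectorises completely in NumPy, which is already a dependency and needs no compilation step. A sketch (assuming mu has shape (3,) and r_i, mom_i have shape (N, 3), matching the loop above):

        import numpy as np

        def calculate_dipole_vec(mu, r_i, mom_i):
            A = 1e-7
            relative = mu - r_i                             # (N, 3)
            r2 = np.sum(relative * relative, axis=1)        # squared distances, (N,)
            r_unit = relative / np.sqrt(r2)[:, np.newaxis]  # unit vectors, (N, 3)
            mdotr = np.sum(mom_i * r_unit, axis=1)          # dot(mom_i[i], r_unit[i]) for all i, (N,)
            contrib = (3 * mdotr[:, np.newaxis] * r_unit - mom_i) / (r2 ** 1.5)[:, np.newaxis]
            return A * contrib.sum(axis=0)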

  • php claims my defined variable is undefined

    - by tedders
    My PHP is a little rusty, but this is boggling my mind right now. I googled this and read all the Stack Overflow questions I could find that looked related, but those all seemed to have legitimately undefined variables in them. That leads me to believe that mine is the same problem, but no amount of staring at the simple bit of code I have reduced this to gets me anywhere. Please, someone give me my dunce cap and tell me what I did wrong!

        <?php
        // test for damn undefined variable error
        $msgs = "";

        function add_msg($msg) {
            $msgs .= "<div>$msg</div>";
        }

        function print_msgs() {
            print $msgs;
        }

        add_msg("test");
        add_msg("test2");
        print_msgs();
        ?>

    This gives me the following, maddening output:

        Notice: Undefined variable: msgs in C:\wamp\www\fgwl\php-lib\fgwlshared.php on line 7
        Notice: Undefined variable: msgs in C:\wamp\www\fgwl\php-lib\fgwlshared.php on line 7
        Notice: Undefined variable: msgs in C:\wamp\www\fgwl\php-lib\fgwlshared.php on line 10

    Yes, this is supposed to be a shared file, but at the moment I have stripped it down to just what I pasted. Any ideas?
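
    The cause here is certain: PHP functions do not see file-scope variables, so $msgs inside add_msg() is a fresh, undefined local. The usual fix is a global declaration (or passing the state around explicitly):

        <?php
        $msgs = "";

        function add_msg($msg) {
            global $msgs;              // bind to the file-scope $msgs
            $msgs .= "<div>$msg</div>";
        }

        function print_msgs() {
            global $msgs;
            print $msgs;
        }

        add_msg("test");
        add_msg("test2");
        print_msgs();                  // prints <div>test</div><div>test2</div>
        ?>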

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page documents the MSG_MORE flag, which is said to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;
            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);
            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }
            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK: enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that requires an additional system call. Thus, using MSG_MORE seems more appropriate to me. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets will be flushed automatically once they are large enough:

        If an application sets that option on a socket, the kernel will not send out
        short packets. Instead, it will wait until enough data has shown up to fill a
        maximum-size packet, then send it. When TCP_CORK is turned off, any remaining
        data will go out on the wire.

    But that passage only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

    1. Call send() with an empty buffer and without MSG_MORE being set
    2. Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected. Obviously running the server through `strace' is not an option, so would the simplest way be to use `netcat' and then look at its `strace' output? Or will the kernel handle traffic transmitted over a loopback interface differently?
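
    One pattern that sidesteps the flushing question (a sketch; it relies on the behaviour that data sent without MSG_MORE, together with anything already queued, becomes eligible for transmission immediately) is to let the caller state whether more data follows and simply omit the flag on the last chunk of a logical message:

        #include <sys/socket.h>

        /* Pass more = 1 for every chunk except the last one of a message. */
        static ssize_t write_chunk(int fd, const void *buf, size_t len, int more)
        {
            int flags = MSG_NOSIGNAL | (more ? MSG_MORE : 0);
            return send(fd, buf, len, flags);
        }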

  • Using HAML with custom filters

    - by Guard
    Hi everybody. I feel quite excited about HAML and CoffeeScript, and I am working on a tutorial showing how to use them in a non-Rails environment.

    HAML has an easy-to-use command-line utility:

        haml input.haml output.html

    And, what is great, there exists a project (one of many forks: https://github.com/aussiegeek/coffee-haml-filter) aimed at providing a custom filter that converts CoffeeScript into JavaScript inside HAML files. Unfortunately (or am I missing something?) haml doesn't allow specifying custom filters on the command line or with a configuration file. I (not being a Ruby fan or even knowing it enough) managed to solve it (based on a clever suggestion somewhere on SO) with this helper script:

        # haml.rb
        require 'rubygems'
        require 'active_support/core_ext/object/blank'
        require 'haml'
        require 'haml/filters/coffee'

        template = ARGV.length > 0 ? File.read(ARGV.shift) : STDIN.read
        haml_engine = Haml::Engine.new(template)
        file = ARGV.length > 0 ? File.open(ARGV.shift, 'w') : STDOUT
        file.write(haml_engine.render)
        file.close

    This is quite straightforward, except for the requires at the beginning. Now, the questions are:

    1. Should I really use it, or is there another way to have on-demand HAML-to-HTML compilation with custom filters?
    2. What about HAML's watch mode? It's great and convenient. I can, of course, write a polling script in Python that watches the directory for changes and calls this .rb script, but that looks like a dirty solution.

  • ASP, sorting database with conditions using multiple columns...

    - by Mitch
    First of all, I'm still working in classic ASP (VBScript) with an MS Access database. And yes, I know it's archaic, but I'm still hopeful I can do this! So now to my problem. Take the following table as an example:

        PROJECTS
        ContactName  StartDate   EndDate     Complete
        Mitch        2009-02-13  2011-04-23  No
        Eric         2006-10-01  2008-11-15  Yes
        Mike         2007-05-04  2009-03-30  Yes
        Kyle         2009-03-07  2012-07-08  No

    Using ASP (with VBScript) and an MS Access database as the backend, I'd like to be able to sort this table by date. However, depending on whether a given project is complete or not, I would like to use either the StartDate or the EndDate as the reference for a particular row. So, to break it down further, this is what I'm hoping to achieve:

    - For projects where Complete = "Yes", reference EndDate for the purpose of sorting.
    - For projects where Complete = "No", reference StartDate for the purpose of sorting.

    So, if I were to sort the above table following these rules, the output would be (the date used for the sort is starred):

        PROJECTS
           ContactName  StartDate    EndDate      Complete
        1  Eric         2006-10-01   2008-11-15*  Yes
        2  Mitch        2009-02-13*  2011-04-23   No
        3  Kyle         2009-03-07*  2012-07-08   No
        4  Mike         2007-05-04   2009-03-30*  Yes

    NOTE: This is actually a simplified version of what I really need to do, but I think that if I could just figure this out, I'll be able to do the rest on my own. Any help is greatly appreciated; I've been struggling with this for far too long now! Thank you!
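
    Access's Jet SQL can express this conditional sort directly with IIf in the ORDER BY clause (a sketch using the table and column names from the example, and assuming Complete is stored as the text "Yes"/"No"):

        SELECT ContactName, StartDate, EndDate, Complete
        FROM PROJECTS
        ORDER BY IIf(Complete = 'Yes', EndDate, StartDate);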

  • VS2010 compiles solution without errors, msbuild fails: "fatal error CS0002: Unable to load message string from resources"

    - by Nathan Ridley
    I'm having a lot of trouble trying to track down the cause of this error message. I have a large Visual Studio 2010 solution which compiles without error on my local machine, but on the build server msbuild fails on one of the projects with the error:

        fatal error CS0002: Unable to load message string from resources

    Here's the red error section at the end:

        Build FAILED.

        "C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj" (default target) (9) ->
        (CoreCompile target) ->
          CSC : fatal error CS0002: Unable to load message string from resources. [C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj]

            0 Warning(s)
            1 Error(s)

    The entire msbuild output from the build server is here: http://pastie.org/3660842

    What does the error generally refer to, and what would cause the solution to build locally but not on the build server?

    UPDATE: I have just run msbuild /version on both machines, and it turns out the .NET framework versions are very slightly different. The local machine is 4.0.30319.488 and the build server is 4.0.30319.1. I'm about to run Windows Update on the server to let it install some updates, as several seem to be .NET framework-related, so I'll see if that makes a difference.

    UPDATE: Installing the updates didn't help. I just remembered I copied up csc.exe from the async preview a little while ago in order to facilitate async compilation (the actual async preview had failed to install on the server due to Visual Studio not being there, but installing visual studio team viewer seems to have fixed that), so I've just run the proper async CTP3 installer to see if that makes a difference.

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I need to parse a small subset of English for one of my projects, described as a context-free grammar with (1-level) feature structures (example), and I need to do it efficiently.

    Right now I'm using NLTK's parser, which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending, it seems, on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25MB) grammar+lexicon at the beginning can take up to a minute.

    From what I can find in the literature, the running time of the algorithms used to parse such a grammar (Earley or CKY) should be linear in the size of the grammar and cubic in the size of the input token list. My experience with NLTK indicates that ambiguity is what hurts performance most, not the absolute size of the grammar.

    So now I'm looking for a CFG parser to replace NLTK. I've been considering PLY, but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to do a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structures and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.

  • Can I trigger PHP garbage collection to happen automatically if I have circular references?

    - by Beau Simensen
    I seem to recall a way to set up the __destruct for a class in such a way that it would ensure that circular references are cleaned up as soon as the outside object falls out of scope. However, the simple test I built seems to indicate that this is not behaving as I had expected/hoped. Is there a way to set up my classes so that PHP cleans them up correctly when the outermost object falls out of scope? I am not looking for alternate ways to write this code; I am looking for whether or not this can be done, and if so, how. (I generally try to avoid these types of circular references where possible.)

        class Bar {
            private $foo;
            public function __construct($foo) {
                $this->foo = $foo;
            }
            public function __destruct() {
                print "[destroying bar]\n";
                unset($this->foo);
            }
        }

        class Foo {
            private $bar;
            public function __construct() {
                $this->bar = new Bar($this);
            }
            public function __destruct() {
                print "[destroying foo]\n";
                unset($this->bar);
            }
        }

        function testGarbageCollection() {
            $foo = new Foo();
        }

        for ($i = 0; $i < 25; $i++) {
            echo memory_get_usage() . "\n";
            testGarbageCollection();
        }

    The output looks like this:

        60440
        61504
        62036
        62564
        63092
        63620
        [destroying foo]
        [destroying bar]
        [destroying foo]
        [destroying bar]
        [destroying foo]
        [destroying bar]
        [destroying foo]
        [destroying bar]
        [destroying foo]
        [destroying bar]

    What I had hoped for:

        60440
        [destroying foo]
        [destroying bar]
        60440
        [destroying foo]
        [destroying bar]
        60440
        [destroying foo]
        [destroying bar]
        60440
        [destroying foo]
        [destroying bar]
        60440
        [destroying foo]
        [destroying bar]
        60440
        [destroying foo]
        [destroying bar]
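
    For context on why the test behaves this way: PHP's reference-counting collector (through 5.2) cannot reclaim reference cycles, so each Foo/Bar pair survives until script shutdown, which is exactly the output shown. PHP 5.3 added a cycle collector, so on 5.3+ the hoped-for pattern can be approximated by forcing collections (a sketch reusing the test function above):

        <?php
        gc_enable();                       // on by default in PHP 5.3+, shown for clarity
        for ($i = 0; $i < 25; $i++) {
            echo memory_get_usage() . "\n";
            testGarbageCollection();
            gc_collect_cycles();           // force collection of unreachable cycles
        }
        ?>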

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time, so performance is important. Right now I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons, all done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler does indeed generate SSE instructions.)

    Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, but that might take days, so I'd like to know the odds of success beforehand.

    Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays devoted to floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?
