Search Results

Search found 20140 results on 806 pages for 'output formatting'.


  • cache_money only writing to memcached on creates and updates, and seemingly never looking in the cache

    - by Shane Liebling
    I seem to be having some extremely odd cache_money interactions. When I am on the console, and I create a new instance of a class and save it I see the cache misses and cache stores on my memcached console output. Then when the create finishes I see a bunch of cache deletions. If I then try to do any kind of find for the newly created object (or any other objects for that matter) I never see any cache access. This is highly confusing. I could kind of understand if all finds never hit the cache (though that in and of itself would be an issue requiring investigation), but finds do seem to hit the cache when the object is being created (checking for associations and such). Anyone have this experience in the past at all? Any thoughts? AFAIK there isn't really much in the way of configuration options for cache_money, and it certainly doesn't seem like there are any that would be on by default and be creating these kinds of symptoms. My cache_money config is basically straight out of the docs. Any help would be greatly appreciated.

  • slicing a 2d numpy array

    - by MedicalMath
    The following code:

        import numpy as p

        myarr=[[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6]]
        copy=p.array(myarr)
        p.mean(copy)[:,1]

    is generating the following error message:

        Traceback (most recent call last):
          File "<pyshell#3>", line 1, in <module>
            p.mean(copy)[:,1]
        IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index

    I looked up the syntax at this link and I seem to be using the correct syntax to slice. However, when I type copy[:,1] into the Python shell, it gives me the following output, which is clearly wrong, and is probably what is throwing the error:

        array([1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6])

    Can anyone show me how to fix my code so that I can extract the second column and then take the mean of the second column as intended in the original code above?

    EDIT: Thank you for your solutions. However, my posting was an oversimplification of my real problem. I used your solutions in my real code, and got a new error. Here is my real code with one of your solutions that I tried:

        filteredSignalArray=p.array(filteredSignalArray)
        logical=p.logical_and(EndTime-10.0<=matchingTimeArray,matchingTimeArray<=EndTime)
        finalStageTime=matchingTimeArray.compress(logical)
        finalStageFiltered=filteredSignalArray.compress(logical)
        for j in range(len(finalStageTime)):
            if j == 0:
                outputArray=[[finalStageTime[j],finalStageFiltered[j]]]
            else:
                outputArray+=[[finalStageTime[j],finalStageFiltered[j]]]
        print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean()

    And here is the error message that is now being generated by the new code:

        File "mypath\myscript.py", line 1545, in WriteToOutput10SecondsBeforeTimeMarker
          print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean()
        TypeError: list indices must be integers, not tuple

    Second EDIT: This is solved now that I added:

        outputArray=p.array(outputArray)

    above my code. I have been at this too many hours and need to take a break for a while if I am making these kinds of mistakes.
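
    A minimal sketch of the approach the edits above converge on (mine, not from the original thread): build the NumPy array first, then slice out the second column and take its mean. The data below is illustrative.

        # Sketch only: convert to an array, then slice column 1 and average it.
        import numpy as p

        myarr = [[0, 1], [0, 6]] * 9        # same shape of data as in the question
        copy = p.array(myarr)               # 2-D array, shape (18, 2)

        second_column = copy[:, 1]          # slicing works once it is an ndarray
        print(second_column.mean())         # 3.5
        # equivalent: p.mean(copy, axis=0)[1]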

  • How to retain XML string as a string field during XML deserialization

    - by detale
    I got an XML input string and want to deserialize it to an object which partially retains the raw XML.

        <SetProfile>
          <sessionId>A81D83BC-09A0-4E32-B440-0000033D7AAD</sessionId>
          <profileDataXml>
            <ArrayOfProfileItem>
              <ProfileItem>
                <Name>Pulse</Name>
                <Value>80</Value>
              </ProfileItem>
              <ProfileItem>
                <Name>BloodPresure</Name>
                <Value>120</Value>
              </ProfileItem>
            </ArrayOfProfileItem>
          </profileDataXml>
        </SetProfile>

    The class definition:

        public class SetProfile
        {
            public Guid sessionId;
            public string profileDataXml;
        }

    I hope the deserialization syntax looks like:

        string inputXML = "..."; // the above XML

        XmlSerializer xs = new XmlSerializer(typeof(SetProfile));
        using (TextReader reader = new StringReader(inputXML))
        {
            SetProfile obj = (SetProfile)xs.Deserialize(reader);
            // use obj ....
        }

    but XmlSerializer will throw an exception and won't put <profileDataXml>'s descendants into the "profileDataXml" field as a raw XML string. Is there any way to implement the deserialization like that?

  • What is the most platform- and Python-version-independent way to make a fast loop for use in Python?

    - by Statto
    I'm writing a scientific application in Python with a very processor-intensive loop at its core. I would like to optimise this as far as possible, at minimum inconvenience to end users, who will probably use it as an uncompiled collection of Python scripts, and will be using Windows, Mac, and (mainly Ubuntu) Linux. It is currently written in Python with a dash of NumPy, and I've included the code below.

    Is there a solution which would be reasonably fast which would not require compilation? This would seem to be the easiest way to maintain platform-independence.

    If using something like Pyrex, which does require compilation, is there an easy way to bundle many modules and have Python choose between them depending on detected OS and Python version? Is there an easy way to build the collection of modules without needing access to every system with every version of Python?

    Does one method lend itself particularly to multi-processor optimisation?

    (If you're interested, the loop is to calculate the magnetic field at a given point inside a crystal by adding together the contributions of a large number of nearby magnetic ions, treated as tiny bar magnets. Basically, a massive sum of these.)

        # calculate_dipole
        # -------------------------
        # calculate_dipole works out the dipole field at a given point within the crystal unit cell
        # ---
        # INPUT
        # mu = position at which to calculate the dipole field
        # r_i = array of atomic positions
        # mom_i = corresponding array of magnetic moments
        # ---
        # OUTPUT
        # B = the B-field at this point
        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            # 4pi / mu0 (at the front of the dipole eqn)
            A = 1e-7
            # initialise dipole field
            B = zeros(3, float)
            for i in range(len(relative)):
                # work out the dipole field and add it to the estimate so far
                B += A*(3*dot(mom_i[i], r_unit[i])*r_unit[i] - mom_i[i]) / sqrt(dot(relative[i], relative[i]))**3
            return B
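
    For what it's worth, one common direction for this kind of loop is to vectorise it with NumPy so the per-ion arithmetic happens in compiled code. The sketch below is mine, not the author's; it assumes r_i and mom_i are (N, 3) float arrays and mu is a length-3 array, and it normalises the displacement vectors inline instead of calling unit_vectors.

        # Hypothetical vectorised rewrite of calculate_dipole (a sketch, not the
        # author's method); shapes assumed: mu (3,), r_i (N, 3), mom_i (N, 3).
        import numpy as np

        def calculate_dipole_vectorised(mu, r_i, mom_i):
            A = 1e-7                                    # same constant as the original
            relative = mu - r_i                         # (N, 3) displacement vectors
            r2 = np.sum(relative * relative, axis=1)    # squared distances, shape (N,)
            r_unit = relative / np.sqrt(r2)[:, None]    # row-normalised unit vectors
            proj = np.sum(mom_i * r_unit, axis=1)       # dot(mom_i[i], r_unit[i]) for all i
            contrib = A * (3.0 * proj[:, None] * r_unit - mom_i) / (r2 ** 1.5)[:, None]
            return contrib.sum(axis=0)                  # total B-field, shape (3,)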

  • Grails - Simple hasMany Problem - How does 'save' work?

    - by gav
    My problem is this: I want to create a grails domain instance, defining the 'Many' instances of another domain that it has. I have the actual source in a Google Code Project but the following should illustrate the problem.

        class Person {
            String name
            static hasMany = [skills:Skill]
            static constraints = {
                id (visible:false)
                skills (nullable:false, blank:false)
            }
        }

        class Skill {
            String name
            String description
            static constraints = {
                id (visible:false)
                name (nullable:false, blank:false)
                description (nullable:false, blank:false)
            }
        }

    If you use this model and def scaffold for the two Controllers then you end up with a form like this that doesn't work. My own attempt to get this to work enumerates the Skills as checkboxes and looks like this. But when I save the Volunteer the skills are null! This is the code for my save method:

        def save = {
            log.info "Saving: " + params.toString()
            def skills = params.skills
            log.info "Skills: " + skills
            def volunteerInstance = new Volunteer(params)
            log.info volunteerInstance
            if (volunteerInstance.save(flush: true)) {
                flash.message = "${message(code: 'default.created.message', args: [message(code: 'volunteer.label', default: 'Volunteer'), volunteerInstance.id])}"
                redirect(action: "show", id: volunteerInstance.id)
                log.info volunteerInstance
            }
            else {
                render(view: "create", model: [volunteerInstance: volunteerInstance])
            }
        }

    This is my log output (I have custom toString() methods):

        2010-05-10 21:06:41,494 [http-8080-3] INFO bumbumtrain.VolunteerController - Saving: ["skills":["1", "2"], "name":"Ian", "_skills":["", ""], "create":"Create", "action":"save", "controller":"volunteer"]
        2010-05-10 21:06:41,495 [http-8080-3] INFO bumbumtrain.VolunteerController - Skills: [1, 2]
        2010-05-10 21:06:41,508 [http-8080-3] INFO bumbumtrain.VolunteerController - Volunteer[ id: null | Name: Ian | Skills [Skill[ id: 1 | Name: Carpenter ] , Skill[ id: 2 | Name: Sound Engineer ] ]]

    Note that in the final log line the right Skills have been picked up and are part of the object instance. When the volunteer is saved, the Skills are ignored and not committed to the database, despite the fact that the in-memory version clearly does have the items. Is it not possible to pass the Skills at construction time? There must be a way round this? I need a single form to allow a person to register but I want to normalise the data so that I can add more skills at a later time. If you think this should 'just work' then a link to a working example would be great. Hope this makes sense, thanks in advance! Gav

  • ASP, sorting database with conditions using multiple columns...

    - by Mitch
    First of all, I'm still working in classic ASP (VBScript) with an MS Access database. And, yes, I know it's archaic, but I'm still hopeful I can do this! So now to my problem. Take the following table as an example:

        PROJECTS
        ContactName   StartDate    EndDate      Complete
        Mitch         2009-02-13   2011-04-23   No
        Eric          2006-10-01   2008-11-15   Yes
        Mike          2007-05-04   2009-03-30   Yes
        Kyle          2009-03-07   2012-07-08   No

    Using ASP (with VBScript), and an MS Access database as the backend, I'd like to be able to sort this table with the following logic: I would like to sort this table by date, however, depending on whether a given project is complete or not, I would like it to use either the "StartDate" or "EndDate" as the reference for a particular row. So to break it down further, this is what I'm hoping to achieve:

    For PROJECTS where Complete = "Yes", reference "EndDate" for the purpose of sorting.
    For PROJECTS where Complete = "No", reference "StartDate" for the purpose of sorting.

    So, if I were to sort the above table following these rules, the output would be:

        PROJECTS
           ContactName   StartDate     EndDate       Complete
        1  Eric          2006-10-01    2008-11-15*   Yes
        2  Mitch         2009-02-13*   2011-04-23    No
        3  Kyle          2009-03-07*   2012-07-08    No
        4  Mike          2007-05-04    2009-03-30*   Yes

    *I've put a star next to the date that should be used for the sort in the table above.

    NOTE: This is actually a simplified version of what I really need to do, but I think that if I could just figure this out, I'll be able to do the rest on my own. ANY HELP IS GREATLY APPRECIATED; I'VE BEEN STRUGGLING WITH THIS FOR FAR TOO LONG NOW! Thank you!

  • Flush kernel's TCP buffer with `MSG_MORE`-flagged packets

    - by timn
    send()'s man page reveals the MSG_MORE flag which is asserted to act like TCP_CORK. I have a wrapper function around send():

        int SocketConnection_Write(SocketConnection *this, void *buf, int len) {
            errno = 0;

            int sent = send(this->fd, buf, len, MSG_NOSIGNAL);

            if (errno == EPIPE || errno == ENOTCONN) {
                throw(exc, &SocketConnection_NotConnectedException);
            } else if (errno == ECONNRESET) {
                throw(exc, &SocketConnection_ConnectionResetException);
            } else if (sent != len) {
                throw(exc, &SocketConnection_LengthMismatchException);
            }

            return sent;
        }

    Assuming I want to use the kernel buffer, I could go with TCP_CORK, enable it whenever necessary and then disable it to flush the buffer. But on the other hand, that creates the need for an additional system call. Thus, the usage of MSG_MORE seems more appropriate to me. I'd simply change the above send() line to:

        int sent = send(this->fd, buf, len, MSG_NOSIGNAL | MSG_MORE);

    According to lwn.net, packets will be flushed automatically if they are large enough:

        If an application sets that option on a socket, the kernel will not send out
        short packets. Instead, it will wait until enough data has shown up to fill a
        maximum-size packet, then send it. When TCP_CORK is turned off, any remaining
        data will go out on the wire.

    But this section only refers to TCP_CORK. Now, what is the proper way to flush MSG_MORE packets? I can only think of two possibilities:

    1. Call send() with an empty buffer and without MSG_MORE being set
    2. Re-apply the TCP_CORK option as described on this page

    Unfortunately the whole topic is very poorly documented and I couldn't find much on the Internet. I am also wondering how to check that everything works as expected? Obviously running the server through 'strace' is not an option. So the simplest way would be to use 'netcat' and then look at its 'strace' output? Or will the kernel handle traffic transmitted over a loopback interface differently?

  • php claims my defined variable is undefined

    - by tedders
    My php is a little rusty but this is boggling my mind right now. I googled this and read all the stackoverflow questions I could find that looked related, but those all seemed to have legitimate undefined variables in them. That leads me to believe that mine is the same problem, but no amount of staring at the simple bit of code I have reduced this to seems to get me anywhere. Please someone give me my dunce cap and tell me what I did wrong!

        <?php
        //test for damn undefined variable error
        $msgs = "";

        function add_msg($msg){
            $msgs .= "<div>$msg</div>";
        }

        function print_msgs(){
            print $msgs;
        }

        add_msg("test");
        add_msg("test2");
        print_msgs();
        ?>

    This gives me the following, maddening output:

        Notice: Undefined variable: msgs in C:\wamp\www\fgwl\php-lib\fgwlshared.php on line 7
        Notice: Undefined variable: msgs in C:\wamp\www\fgwl\php-lib\fgwlshared.php on line 7
        Notice: Undefined variable: msgs in C:\wamp\www\fgwl\php-lib\fgwlshared.php on line 10

    Yes, this is supposed to be a shared file, but at the moment I have stripped it down to just what I pasted. Any ideas?

  • Using HAML with custom filters

    - by Guard
    Hi everybody. I feel quite excited about HAML and CoffeeScript and am working on a tutorial showing how to use them in a non-Rails environment. So, haml has an easy-to-use command-line utility: haml input.haml output.html. And, what is great, there exists a project (one of many forks: https://github.com/aussiegeek/coffee-haml-filter) aimed at providing a custom filter that converts CoffeeScript into JS inside of HAML files. Unfortunately (or am I missing something?) haml doesn't allow specifying custom filters on the command line or with some configuration file. I (not being a Ruby fan or even knowing it enough) managed to solve it (based on some clever suggestion somewhere on SO) with this helper script:

    haml.rb

        require 'rubygems'
        require 'active_support/core_ext/object/blank'
        require 'haml'
        require 'haml/filters/coffee'

        template = ARGV.length > 0 ? File.read(ARGV.shift) : STDIN.read
        haml_engine = Haml::Engine.new(template)
        file = ARGV.length > 0 ? File.open(ARGV.shift, 'w') : STDOUT
        file.write(haml_engine.render)
        file.close

    Which is quite straightforward, except for the requires at the beginning. Now, the questions are:

    1) Should I really use it, or is there another way to have on-demand HAML to HTML compilation with custom filters?

    2) What about HAML watch mode? It's great and convenient. I can, of course, create a polling script in Python that will watch the directory for changes and call this .rb script, but it looks like a dirty solution.
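
    For the record, the Python watcher mentioned in question 2 can be a few lines of polling; the sketch below is my own illustration (not from the question) and assumes the haml.rb helper above plus .haml sources sitting in the current directory.

        # Hypothetical polling watcher: re-run haml.rb whenever a .haml file changes.
        # The file layout and the ruby invocation are assumptions for illustration.
        import glob
        import os
        import subprocess
        import time

        def mtimes():
            return {path: os.path.getmtime(path) for path in glob.glob("*.haml")}

        seen = mtimes()
        while True:
            time.sleep(1)
            current = mtimes()
            for path, stamp in current.items():
                if seen.get(path) != stamp:
                    output = os.path.splitext(path)[0] + ".html"
                    subprocess.call(["ruby", "haml.rb", path, output])
            seen = current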

  • VS2010 compiles solution without errors, msbuild fails: "fatal error CS0002: Unable to load message string from resources"

    - by Nathan Ridley
    I'm having a lot of trouble trying to track down the cause of this error message. I have a large Visual Studio 2010 solution which compiles without error on my local machine, but on the build server msbuild fails on one of the projects with the error:

        fatal error CS0002: Unable to load message string from resources

    Here's the red error section at the end:

        Build FAILED.

        "C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj" (default target) (9) ->
        (CoreCompile target) ->
          CSC : fatal error CS0002: Unable to load message string from resources. [C:\TeamCity\buildAgent\work\85eff164854b9e67\Libraries\Domainface.Proxy.Common\Domainface.Proxy.Common.csproj]

            0 Warning(s)
            1 Error(s)

    The entire msbuild output from the build server is here: http://pastie.org/3660842

    What does the error generally refer to, and what would cause it to build locally but not on the build server?

    UPDATE: I have just run msbuild /version on both machines and it turns out the .NET Framework versions are very slightly different. The local machine is 4.0.30319.488 and the build server is 4.0.30319.1. I'm about to run Windows Update on the server to allow it to install some updates, as several seem to be .NET Framework-related, so I'll see if that makes a difference.

    UPDATE: Installing the updates didn't help. I just remembered I copied up csc.exe from the async preview a little while ago in order to facilitate async compilation (the actual async preview had failed to install on the server due to Visual Studio not being there, but installing Visual Studio Team Viewer seems to have fixed that), so I've just run the proper async CTP3 installer to see if that makes a difference.

  • Efficient Context-Free Grammar parser, preferably Python-friendly

    - by Max Shawabkeh
    I am in need of parsing a small subset of English for one of my projects, described as a context-free grammar with (1-level) feature structures (example), and I need to do it efficiently. Right now I'm using NLTK's parser, which produces the right output but is very slow. For my grammar of ~450 fairly ambiguous non-lexicon rules and half a million lexical entries, parsing simple sentences can take anywhere from 2 to 30 seconds, depending, it seems, on the number of resulting trees. Lexical entries have little to no effect on performance. Another problem is that loading the (25MB) grammar+lexicon at the beginning can take up to a minute. From what I can find in the literature, the running time of the algorithm used to parse such a grammar (Earley or CKY) should be linear in the size of the grammar and cubic in the size of the input token list. My experience with NLTK indicates that ambiguity is what hurts performance most, not the absolute size of the grammar. So now I'm looking for a CFG parser to replace NLTK. I've been considering PLY, but I can't tell whether it supports feature structures in CFGs, which are required in my case, and the examples I've seen seem to be doing a lot of procedural parsing rather than just specifying a grammar. Can anybody show me an example of PLY both supporting feature structs and using a declarative grammar? I'm also fine with any other parser that can do what I need efficiently. A Python interface is preferable but not absolutely necessary.
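
    As a concrete illustration of the complexity claim above (cubic in sentence length, roughly linear in grammar size), here is a toy CKY recognizer for a grammar in Chomsky normal form. It is my own sketch, not tied to NLTK, PLY, or feature structures; the tiny grammar at the bottom is invented for the usage example.

        # Toy CKY recognizer (illustration only). Binary rules map (B, C) -> parent
        # nonterminals; lexical rules map a word -> nonterminals that can yield it.
        from collections import defaultdict

        def cky_recognize(words, binary, lexical, start="S"):
            n = len(words)
            chart = defaultdict(set)          # chart[(i, j)] = nonterminals over words[i:j]
            for i, w in enumerate(words):
                chart[(i, i + 1)] = set(lexical.get(w, ()))
            for span in range(2, n + 1):              # O(n) span lengths
                for i in range(n - span + 1):         # O(n) start positions
                    j = i + span
                    for k in range(i + 1, j):         # O(n) split points
                        for B in chart[(i, k)]:
                            for C in chart[(k, j)]:
                                chart[(i, j)] |= binary.get((B, C), set())
            return start in chart[(0, n)]

        # Invented mini-grammar for the usage example.
        binary = {("NP", "VP"): {"S"}, ("Det", "N"): {"NP"}}
        lexical = {"the": {"Det"}, "dog": {"N"}, "barks": {"VP"}}
        print(cky_recognize("the dog barks".split(), binary, lexical))   # True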

  • Can I trigger PHP garbage collection to happen automatically if I have circular references?

    - by Beau Simensen
    I seem to recall a way to set up the __destruct for a class in such a way that it would ensure that circular references would be cleaned up as soon as the outside object falls out of scope. However, the simple test I built seems to indicate that this is not behaving as I had expected/hoped. Is there a way to set up my classes in such a way that PHP would clean them up correctly when the outermost object falls out of scope? I am not looking for alternate ways to write this code, I am looking for whether or not this can be done, and if so, how? I generally try to avoid these types of circular references where possible.

        class Bar {
            private $foo;

            public function __construct($foo) {
                $this->foo = $foo;
            }

            public function __destruct() {
                print "[destroying bar]\n";
                unset($this->foo);
            }
        }

        class Foo {
            private $bar;

            public function __construct() {
                $this->bar = new Bar($this);
            }

            public function __destruct() {
                print "[destroying foo]\n";
                unset($this->bar);
            }
        }

        function testGarbageCollection() {
            $foo = new Foo();
        }

        for ( $i = 0; $i < 25; $i++ ) {
            echo memory_get_usage() . "\n";
            testGarbageCollection();
        }

    The output looks like this:

        60440
        61504
        62036
        62564
        63092
        63620
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]
        [ destroying foo ]
        [ destroying bar ]

    What I had hoped for:

        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]
        60440
        [ destroying foo ]
        [ destroying bar ]

  • Poll multiple desktops/servers on a network remotely to determine the IP Type: Static or DHCP

    - by Charles Laird
    Had a gentleman answer 90% of my original question, which is to say I now have the ability to poll a device that I am running the below script on. The end goal is to obtain the IP type (Static or DHCP) for all desktops/servers on a network I support. I have the list of servers that I will input in a batch file; I'm just looking for the code to actually poll the other devices on the network from one location.

    Output to be viewed:

        Device name:                                                       IP Address:     MAC Address:        Type:
        Marvell Yukon 88E8001/8003/8010 PCI Gigabit Ethernet Controller    NULL            00:00:F3:44:C6:00   DHCP
        Generic Marvell Yukon 88E8056 based Ethernet Controller            192.168.1.102   00:00:F3:44:D0:00   DHCP

        ManagementClass objMC = new ManagementClass("Win32_NetworkAdapterConfiguration");
        ManagementObjectCollection objMOC = objMC.GetInstances();

        txtLaunch.Text = ("Name\tIP Address\tMAC Address\tType" + "\r\n");

        foreach (ManagementObject objMO in objMOC)
        {
            StringBuilder builder = new StringBuilder();
            object o = objMO.GetPropertyValue("IPAddress");
            object m = objMO.GetPropertyValue("MACAddress");

            if (o != null || m != null)
            {
                builder.Append(objMO["Description"].ToString());
                builder.Append("\t");

                if (o != null)
                    builder.Append(((string[])(objMO["IPAddress"]))[0].ToString());
                else
                    builder.Append("NULL");

                builder.Append("\t");
                builder.Append(m.ToString());
                builder.Append("\t");
                builder.Append(Convert.ToBoolean(objMO["DHCPEnabled"]) ? "DHCP" : "Static");
                builder.Append("\r\n");
            }

            txtLaunch.Text = txtLaunch.Text + (builder.ToString());
        }

    I'm open to recommendations here.

  • Will fixed-point arithmetic be worth my trouble?

    - by Thomas
    I'm working on a fluid dynamics Navier-Stokes solver that should run in real time. Hence, performance is important. Right now, I'm looking at a number of tight loops that each account for a significant fraction of the execution time: there is no single bottleneck. Most of these loops do some floating-point arithmetic, but there's a lot of branching in between. The floating-point operations are mostly limited to additions, subtractions, multiplications, divisions and comparisons. All this is done using 32-bit floats. My target platform is x86 with at least SSE1 instructions. (I've verified in the assembler output that the compiler indeed generates SSE instructions.) Most of the floating-point values that I'm working with have a reasonably small upper bound, and precision for near-zero values isn't very important. So the thought occurred to me: maybe switching to fixed-point arithmetic could speed things up? I know the only way to be really sure is to measure it, that might take days, so I'd like to know the odds of success beforehand. Fixed-point was all the rage back in the days of Doom, but I'm not sure where it stands anno 2010. Considering how much silicon is nowadays pumped into floating-point performance, is there a chance that fixed-point arithmetic will still give me a significant speed boost? Does anyone have any real-world experience that may apply to my situation?

  • FasterCSV Parsing issue?

    - by Schroedinger
    G'day guys, I'm currently using FasterCSV to construct ActiveRecord elements and I can't for the life of me see this bug (tired). For some reason, when it creates the record, if in the rake file I output the column I want to save as the element value, it prints out correctly, as either a Trade or a Quote, but when I try to save it into the ActiveRecord, it won't work.

        FasterCSV.foreach("input.csv", :headers => true) do |row|
          d = DateTime.parse(row[1]+" "+row[2])
          offset = Rational(row[3].to_i,24)
          o = d.new_offset(offset)
          t = Trade.create(
            :name => row[0],
            :type => row[4],
            :time => o,
            :price => row[6].to_f,
            :volume => row[7].to_i,
            :bidprice => row[10].to_f,
            :bidsize => row[11].to_i,
            :askprice => row[14].to_f,
            :asksize => row[15].to_i
          )
        end

    Ideas? Name and Type are both strings; every other value works except for type. Have I missed something really simple?

  • Pentaho Reporting Tool - Does the .prpt file (report template file) contain datasource information?

    - by Yatendra Goel
    I am new to the Pentaho Reporting Tool and I have the following question: When I created a report using Pentaho Report Designer, it output a report file with a .prpt extension. After that, I found an example on the internet where the following code was used to display the report in HTML format:

        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            ResourceManager manager = new ResourceManager();
            manager.registerDefaults();

            String reportPath = "file:" + this.getServletContext().getRealPath("sampleReport.prpt");
            try {
                Resource res = manager.createDirectly(new URL(reportPath), MasterReport.class);
                MasterReport report = (MasterReport) res.getResource();
                HtmlReportUtil.createStreamHTML(report, response.getOutputStream());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    And the report got printed successfully. Since we haven't specified any datasource information here, I think that the .prpt file contains that information in it. If that's true, then isn't Jasper a better reporting tool than Pentaho? Because when we display Jasper reports, we also have to provide datasource details, so in that way our report is flexible and is not bound to any particular database.

  • Why does my REST request return garbage data?

    - by Alienfluid
    I am trying to use LWP::Simple to make a GET request to a REST service. Here's the simple code:

        use LWP::Simple;

        $uri = "http://api.stackoverflow.com/0.8/questions/tagged/php";
        $jsonresponse = get $uri;
        print $jsonresponse;

    On my local machine, running Ubuntu 10.4 and Perl version 5.10.1:

        farhan@farhan-lnx:~$ perl --version
        This is perl, v5.10.1 (*) built for x86_64-linux-gnu-thread-multi

    I can get the correct response and have it printed on the screen. E.g.:

        farhan@farhan-lnx:~$ head -10 output.txt
        {
          "total": 1000,
          "page": 1,
          "pagesize": 30,
          "questions": [
            {
              "tags": [
                "php",
                "arrays",
                "coding-style"
        (... snipped ...)

    But on my host's machine, which I SSH into, I get garbage printed on the screen for the same exact code. I am assuming it has something to do with the encoding, but the REST service does not return the character set type in the response, so how do I force LWP::Simple to use the correct encoding? Any ideas what may be going on here? Here's the version of Perl on my host's machine:

        [dredd]$ perl --version
        This is perl, v5.8.8 built for x86_64-linux-gnu-thread-multi

  • Automatic Adjusting Range Table

    - by Bradford
    I have a table with a start date range, an end date range, and a few other additional columns. On input of a new record, I want to automatically adjust any overlapping date ranges (shrinking them to allow for the new input). I also want to ensure that no overlapping records can accidentally be inserted into this table. I'm using Oracle and Java for my application code. How should I enforce the prevention of overlapping date ranges and also allow for automatically adjusting overlapping ranges? Should I create an AFTER INSERT trigger, with a dbms_lock to serialize access, to prevent the overlapping data, and then apply the logic in Java to auto-adjust everything? Or should that part be in a PL/SQL stored procedure call? This is something that we need for a couple of other tables so it'd be nice to abstract. If anyone has something like this already written, please share :)

    I did find this reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:474221407101

    Here's an example of how each of the 4 overlapping cases should be handled for adjustment on insert:

        = Example 1 =

        In DB (Start, End, Value):
          (0, 10, 'X')
        **(30, 100, 'Z')
          (200, 500, 'Y')

        Input (20, 50, 'A')

        Gives
          (0, 10, 'X')
        **(20, 50, 'A')
        **(51, 100, 'Z')
          (200, 500, 'Y')

        = Example 2 =

        In DB (Start, End, Value):
          (0, 10, 'X')
        **(30, 100, 'Z')
          (200, 500, 'Y')

        Input (40, 80, 'A')

        Gives
          (0, 10, 'X')
        **(30, 39, 'Z')
        **(40, 80, 'A')
        **(81, 100, 'Z')
          (200, 500, 'Y')

        = Example 3 =

        In DB (Start, End, Value):
          (0, 10, 'X')
        **(30, 100, 'Z')
          (200, 500, 'Y')

        Input (50, 120, 'A')

        Gives
          (0, 10, 'X')
        **(30, 49, 'Z')
        **(50, 120, 'A')
          (200, 500, 'Y')

        = Example 4 =

        In DB (Start, End, Value):
          (0, 10, 'X')
        **(30, 100, 'Z')
          (200, 500, 'Y')

        Input (20, 120, 'A')

        Gives
          (0, 10, 'X')
        **(20, 120, 'A')
          (200, 500, 'Y')

    The algorithm is as follows:

        given range = g; input range = i; output range set = o

        if i.start <= g.start
            if i.end >= g.end
                o_1 = i
            else
                o_1 = i
                o_2 = (i.end + 1, g.end)
        else
            if i.end >= g.end
                o_1 = (g.start, i.start - 1)
                o_2 = i
            else
                o_1 = (g.start, i.start - 1)
                o_2 = i
                o_3 = (i.end + 1, g.end)
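
    A small sketch (mine, not from the question) of the per-range splitting rule above as a pure function, which makes it easy to check the four examples; endpoints are assumed to be inclusive integers as in the examples.

        # Hypothetical helper: split one existing range g around a new input range i,
        # following the rule stated above (inclusive integer endpoints assumed).
        def split_range(g, i):
            g_start, g_end = g
            i_start, i_end = i
            pieces = []
            if i_start > g_start:
                pieces.append((g_start, i_start - 1))   # surviving left part of g
            pieces.append(i)                            # the new range keeps the overlap
            if i_end < g_end:
                pieces.append((i_end + 1, g_end))       # surviving right part of g
            return pieces

        # Example 2 from above: g = (30, 100), input = (40, 80)
        print(split_range((30, 100), (40, 80)))         # [(30, 39), (40, 80), (81, 100)]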

  • NULL value when using NSDateFormatter after setting NSDate property via XML parsing

    - by David A Gibson
    Hello, I am using the following code to try and display a time in a table cell.

        TimeSlot *timeSlot = [timeSlots objectAtIndex:indexPath.row];

        NSDateFormatter *timeFormat = [[NSDateFormatter alloc] init];
        [timeFormat setDateFormat:@"HH:mm:ss"];

        NSLog(@"Time: %@", timeSlot.time);

        NSDate *mydate = timeSlot.time;
        NSLog(@"Time: %@", mydate);

        NSString *theTime = [timeFormat stringFromDate:mydate];
        NSLog(@"Time: %@", theTime);

    The log output is this:

        2010-04-14 10:23:54.626 MyApp[1080:207] Time: 2010-04-14T10:23:54
        2010-04-14 10:23:54.627 MyApp[1080:207] Time: 2010-04-14T10:23:54
        2010-04-14 10:23:54.627 MyApp[1080:207] Time: (null)

    I am new to developing for the iPhone, and as it all compiles with no errors or warnings I am at a loss as to why I am getting NULL in the log. Is there anything wrong with this code? Thanks

    Further Info: I used the code exactly from your answer lugte098, just to check, and I was getting dates, which leads me to believe that my TimeSlot class can't have a date correctly set in its NSDate property. So my question becomes: how, from XML, do I set an NSDate property? I have this code (abbreviated):

        -(void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string {
            if ([currentElement isEqualToString:@"Time"]) {
                currentTimeSlot.time = string;
            }
        }

    Thanks

  • How to debug browser crash when running Silverlight app

    - by onedozenbagels
    I am on a team of three people who are developing a Silverlight application. On two of our developers' machines the app seems to randomly crash. It never crashes on the third developer's machine. The nature of the crash is that Internet Explorer just dies with an "Internet Explorer has stopped working" message. The problem details look like this:

        Problem Event Name:        BEX
        Application Name:          IEXPLORE.EXE
        Application Version:       8.0.6001.18882
        Application Timestamp:     4b3ed243
        Fault Module Name:         StackHash_2cd8
        Fault Module Version:      0.0.0.0
        Fault Module Timestamp:    00000000
        Exception Offset:          0024df00
        Exception Code:            c0000005
        Exception Data:            00000008
        OS Version:                6.0.6002.2.2.0.256.6
        Locale ID:                 1033
        Additional Information 1:  2cd8
        Additional Information 2:  0c337fa6c2057a9dbce1860c5e2d8315
        Additional Information 3:  e13b
        Additional Information 4:  5da012709e52526a1af19795dc4a33fd

    Then Windows displays this message: "To help protect your computer, Data Execution Prevention has closed Internet Explorer."

    If I am attached to the app with the Visual Studio debugger, the only information I get is this line in the output window:

        The program '[2140] iexplore.exe: Silverlight' has exited with code -1073741819 (0xc0000005).

    How should I go about debugging this problem? I'm not really sure where to start.

  • uncompressing .zip file in linux [closed]

    - by Suren
    Hi, I have a .zip file (it contains multiple files, e.g. file1.txt, file2.txt, file3.txt and so on) in a directory, and my query is: how to extract the files from the .zip archive to the very same directory, and how to create a list of all the files extracted from the .zip archive. The extracted file names should be printed like this in a file named file_list:

        file1.txt
        file2.txt
        file3.txt
        filen.txt

    I have tried the following command, assuming that my .zip file name is "data.zip":

        unzip -qoj data.zip | unzip -ql data.zip > file_list

    I have used unzip -qoj data.zip to extract all the files into the same directory (quietly, overwrite, junk paths). When I try to add -l to the first unzip command, the command doesn't extract the files into the current directory and only lists them, which is why I used unzip again after the first pipe (if I am making a mistake here, please let me know). I get the following output:

          Length     Date    Time    Name
         --------    ----    ----    ----
                0  12-21-09  14:25   data/
             6148  12-21-09  14:25   data/.DS_Store
                0  12-21-09  14:25   __MACOSX/
                0  12-21-09  14:25   __MACOSX/data/
               82  12-21-09  14:25   __MACOSX/data/._.DS_Store
               82  12-11-09  13:59   data/file1.txt
              120  12-11-09  13:59   data/file2.txt
              166  12-11-09  13:59   data/file3.txt
         --------                    -------
             6598                    8 files

    How do I extract only file1.txt, file2.txt, and file3.txt from this stdout? Is it possible to do this with a Linux command, or do I have to write a perl script for this? Thank you.
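
    For comparison, the same job can also be done from a short script instead of a shell pipeline; the Python sketch below is my own (not from the question) and assumes data.zip sits in the current directory, skipping the __MACOSX/ and dot-file entries shown in the listing above.

        # Hypothetical alternative: extract data.zip into the current directory,
        # junking directory paths like unzip -j, and write the names to file_list.
        import os
        import zipfile

        extracted = []
        with zipfile.ZipFile("data.zip") as archive:
            for member in archive.namelist():
                base = os.path.basename(member)
                # skip directory entries and Mac metadata (__MACOSX/, .DS_Store, ._*)
                if not base or member.startswith("__MACOSX/") or base.startswith("."):
                    continue
                with archive.open(member) as src, open(base, "wb") as dst:
                    dst.write(src.read())
                extracted.append(base)

        with open("file_list", "w") as listing:
            listing.write("\n".join(extracted) + "\n")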

  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent NHibernate to generate mappings so I can take a look at the files and the SQL. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies

    I am using the latest code from git. Here's my config code:

        Configuration cfg = new Configuration();
        var ft = Fluently.Configure(cfg);

        //DbConnection by fluent
        ft.Database(
            MsSqlConfiguration
                .MsSql2008
                .ConnectionString("……")
                .ShowSql()
                .UseReflectionOptimizer()
        );

        //get mapping files.
        ft.Mappings(m =>
        {
            //set up the mapping locations
            m.FluentMappings.AddFromAssemblyOf<Entity>()
                .ExportTo(@"C:\temp");
            m.Apply(cfg);
        });

    I also tried:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration
                .MsSql2008
                .ShowSql()
                .ConnectionString("……"))
            .Mappings(p => p.FluentMappings
                .AddFromAssemblyOf<Entity>()
                .ExportTo(@"c:\temp\"))
            .BuildSessionFactory();

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder and no SQL code shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick

  • Newly installed Ruby gems not showing up in $LOAD_PATH

    - by randombits
    I'm using MacPorts in order to manage my Ruby/Rails/Gems installations. Recently, after doing a gem install wirble, Wirble fails to load when I start an instance of irb. Here's the output:

        $ irb --simple-prompt
        Couldn't load Wirble: no such file to load -- wirble

    The Wirble gem doesn't show up in my $LOAD_PATH:

        >> puts $:
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionmailer-2.3.5/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/actionpack-2.3.5/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.3.5/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/activeresource-2.3.5/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/activesupport-2.3.5/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/mysql-2.8.1/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/mysql-2.8.1/ext
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/mysql-2.8.1/bin
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rack-1.0.1/bin
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rack-1.0.1/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rails-2.3.5/bin
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rails-2.3.5/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rake-0.8.7/bin
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rake-0.8.7/lib
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rubygems-update-1.3.7/hide_lib_for_update
        /opt/local/lib/ruby1.9/gems/1.9.1/gems/rubygems-update-1.3.7/bin
        /opt/local/lib/ruby1.9/site_ruby/1.9.1
        /opt/local/lib/ruby1.9/site_ruby/1.9.1/i386-darwin10
        /opt/local/lib/ruby1.9/site_ruby
        /opt/local/lib/ruby1.9/vendor_ruby/1.9.1
        /opt/local/lib/ruby1.9/vendor_ruby/1.9.1/i386-darwin10
        /opt/local/lib/ruby1.9/vendor_ruby
        /opt/local/lib/ruby1.9/1.9.1
        /opt/local/lib/ruby1.9/1.9.1/i386-darwin10
        .
        => nil
        >>

    The gem is definitely installed:

        $ gem list |grep -i wirble
        wirble (0.1.3)

    It is located in /opt/local/lib/ruby/gems/1.9.1/gems/wirble-0.1.3/

    How do I get this and future gems I install appended to my $LOAD_PATH?

  • Help with alignment in HTML2pdf

    - by piemesons
    I have an HTML page with no style attributes. The HTML tags I am using are center, line break, and bold. The HTML page does not contain any table; it's a simple document. I need help with:

    1. Adding a margin of 1 inch on all sides of the PDF file.
    2. Starting every paragraph with a space of two tabs. ("&nbsp;" generates a space in the HTML file but not in the PDF file.)

    The code I am using:

        ob_start(); // start buffering and displaying page
        echo 'All the content i m fetching according my requirements';

        $file_name_string = substr($guid, 0, 8);
        $file_name = $file_name_string.".htm";
        file_put_contents($file_name, ob_get_contents());

        // end buffering and displaying page
        ob_end_flush();

        $output_file = $file_name_string.".pdf";

        require('html2fpdf.php');
        $pdf = new HTML2FPDF();
        $pdf->SetFont('Arial','B',12);
        $pdf->AddPage();

        $fp = fopen($file_name, "r");
        $strContent = fread($fp, filesize($file_name));
        fclose($fp);

        $pdf->WriteHTML($strContent);
        $pdf->Output($output_file);

    EDIT: Anybody who can help me....

  • Custom template for Django's comments application does not display fields

    - by Jannis
    Hi, I want to use django.contrib.comments in a blogging application and customize the way the form is displayed. My problem is that I can't get the fields to display, although displaying the hidden fields works just fine. I had a look at the docs and compared it with the regular way of displaying forms, but honestly I don't know why the following doesn't work out:

        {% get_comment_form for comments_object as form %}
        <form action="{% comment_form_target %}" method="POST">
          […]
          {% for hidden in form.hidden_fields %}
            {{ hidden }}
          {% endfor %}
          {% for field in form.fields %}
            {{field}}
          {% endfor %}
          […]
        </form>

    The output looks like this:

        <form action="/comments/post/" method="POST">
          <input type="hidden" name="content_type" value="flatpages.flatpage" id="id_content_type" />
          <input type="hidden" name="object_pk" value="1" id="id_object_pk" />
          <input type="hidden" name="timestamp" value="1269522506" id="id_timestamp" />
          <input type="hidden" name="security_hash" value="ec4…0fd" id="id_security_hash" />
          content_type object_pk timestamp security_hash name email url comment honeypot
          […]
        </form>
        </div>

    Can you tell me what I'm doing wrong? Thanks in advance
