Search Results

Search found 33382 results on 1336 pages for 'end tag'.


  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword: This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely and correctly solve the problem with calculations only.

    Problem: Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    - If any of the end points are outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon. One point of particular interest is the difference between 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How to distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which it would fail. Point 7 was also pretty tricky and I had to treat it as a special case, which checks if the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases. Is there any well-known solution to this problem?
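
    A sketch of the midpoint-sampling idea, generalised so it also distinguishes cases like 1 and 9: collect the parameter values where A-B meets the boundary, then require the midpoint of every sub-interval to lie inside. This is only an illustration in Python (the question names no language), and segment_params is an assumed helper returning the parameter values (possibly two, for a collinear overlap) where A-B meets edge p-q.

        EPS = 1e-9  # tolerance; the value is an assumption

        def point_in_polygon(p, poly):
            # Standard even-odd ray cast.
            x, y = p
            inside = False
            for i in range(len(poly)):
                (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
                if (y1 > y) != (y2 > y):
                    if x < x1 + (y - y1) / (y2 - y1) * (x2 - x1):
                        inside = not inside
            return inside

        def line_of_sight(a, b, poly):
            if not (point_in_polygon(a, poly) and point_in_polygon(b, poly)):
                return False
            ts = {0.0, 1.0}
            for i in range(len(poly)):
                p, q = poly[i], poly[(i + 1) % len(poly)]
                for t in segment_params(a, b, p, q):   # assumed helper
                    ts.add(t)                          # A-B touches the boundary here
            ts = sorted(ts)
            for t0, t1 in zip(ts, ts[1:]):
                tm = (t0 + t1) / 2.0                   # midpoint of each sub-interval
                m = (a[0] + tm * (b[0] - a[0]), a[1] + tm * (b[1] - a[1]))
                if t1 - t0 > EPS and not point_in_polygon(m, poly):
                    return False
            return True

    Checking every sub-interval midpoint, rather than a single midpoint of the whole segment, is what lets the collinear and vertex-to-vertex cases fall out without special-casing adjacency.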


  • Perl syntax error [closed]

    - by Linny
    I am a beginner taking a Perl programming course. We are trying to write a basic program for counting nucleotides in a DNA string. I'm getting syntax errors on the lines that have a single bracket (lines 28 & 70 of the file) and don't know why. It also reports compilation errors, and I have no idea where to start figuring that out.

        # The purpose of this program is to count the number of nucleotides
        # in a strand. Each protein is counted separately.
        #
        print "/n NOTE: Nucleotide counting /n";
        #
        use strict;      # enforce variable declarations
        use warnings;    # enable compiler warnings

        # Display number of A,a,T,t,G,g,C,c nucleotides in a word or
        # sequence of letters.
        #
        my ($base) = '';             # an extracted letter from a string
        my ($nuceotide_count) = 0;   # the current position within the word
        my ($position) = 0;          # number of vowels in user-supplied word
        my ($word) = '';             # word to be processed
        my ($A_count) = 0;   # of A nucleotides in the user-supplied sequence
        my ($a_count) = 0;   # of A nucleotides in the user-supplied sequence
        my ($C_count) = 0;   # of C nucleotides in the user-supplied sequence
        my ($c_count) = 0;   # of C nucleotides in the user-supplied sequence
        my ($G_count) = 0;   # of G nucleotides in the user-supplied sequence
        my ($g_count) = 0;   # of G nucleotides in the user-supplied sequence
        my ($T_count) = 0;   # of T nucleotides in the user-supplied sequence
        my ($t_count) = 0;   # of T nucleotides in the user-supplied sequence

        word = (STDIN)
        for ($position = 0);($position
            if (($base eq 'a') or ($base eq 'A')) { ++$A_count; }   # end if
            ++$position;
            if (($base eq 'T') or ($base eq 't')) { ++$T_count; }   end if
            ++$position;
            if (($base eq 'G') or ($base eq 'g')) { ++$G_count; }   # end if
            ++$position;
            if (($base eq 'C') or ($base eq 'c')) { ++$C_count; }   # end if
            ++$position;
        }   # end for

        # Display final results.
        #
        print " \n The number of A or a neucleotides is: $A_count";
        print " \n The number of T or t neucleotides is: $T_count";
        print " \n The number of G or g neucleotides is: $G_count";
        print " \n The number of C or c neucleotides is: $C_count";
        print " \n\n Program completed successfully. \n";
        exit;
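
    The angle-bracket input operator and the loop condition were evidently eaten by the forum's HTML filter, which is why the read and for lines above look truncated; note also that the bare end if after the T branch is missing its #, which alone is a syntax error. A corrected sketch of that section, assuming the intent was to read one line and examine it character by character (my reconstruction, not the poster's code):

        chomp(my $word = <STDIN>);                    # read the sequence
        for (my $position = 0; $position < length($word); ++$position) {
            my $base = substr($word, $position, 1);   # extract one letter
            if    (($base eq 'a') or ($base eq 'A')) { ++$A_count; }
            elsif (($base eq 't') or ($base eq 'T')) { ++$T_count; }
            elsif (($base eq 'g') or ($base eq 'G')) { ++$G_count; }
            elsif (($base eq 'c') or ($base eq 'C')) { ++$C_count; }
        }   # end for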


  • Power Distribution amongst connected nodes

    - by Perky
    In my game the map is represented by connected nodes, and each node has a number of connected nodes. The nodes represent a system in which players can build structures and move units about. If you're familiar with Sins of a Solar Empire, the game map is very similar. I want each node to be able to produce power and share it with all connected nodes. For example, if A, B, C & D are all connected and each produces 100 power units, then each system should have 400 power units available. If node B builds a structure that consumes 100 power units, then A, B, C & D should then have 300 power units available. I've been working on this system all day and haven't been able to get it working quite the way I want. My current implementation is to first recurse through each node's connected nodes adding up the power; I keep a list of closed nodes so it doesn't loop — it's quite similar to A* actually. Pseudo code (all nodes start with these properties):

        node.power = 0
        node.basePower = 100   -- could be different for each node
        node.initialPower = node.basePower

        function propagatePower( node, initialPower, closedNodes )
            node.power += initialPower
            add( closedNodes, node )
            connectedNodes = connected_nodes_except_from( closedNodes )
            foreach node in connectedNodes do
                propagatePower( node, initialPower, closedNodes )
            end
        end

    After this I iterate through all power consumers:

        foreach consumer in consumers do
            node = consumer.parentNode
            if node.power >= consumer.powerConsumption then
                consumer.powerConsumed += consumer.powerConsumption
                node.producedPower -= consumer.powerConsumption
            end
        end

    Then I adjust the initial power for the next propagation cycle:

        foreach node in nodes do
            node.initialPower = node.basePower - node.producedPower
            node.displayPower = node.power   -- for rendering the power
            node.power = 0
        end

    This seemed to work at first, but then I came into a problem. Say two nodes A & B produce 100Pu each; it's shared, so both A & B have 200Pu. I then make two structures that consume 80Pu each on A (160Pu). Then the node's power is adjusted to basePower - producedPower (100-160 = -60). Nodes are propagated, and both nodes now have 40Pu (A: -60 + B: 100 = 40). Which is correct, because they started with 200Pu - 160Pu = 40Pu. However, now node.power >= consumer.powerConsumption is false. What's worse is that it's false for any structure that uses more than 40Pu, so the whole system goes down. I could deduct from consumer.powerConsumption, but what do I do if power is reduced elsewhere? I don't have the correct data to perform the necessary checks. It's late so I'm probably not thinking straight, but I thought to ask on here to see if anyone has any other implementations; better or worse, I'd be interested to know.
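
    For comparison, one implementation that sidesteps the feedback problem entirely (a suggestion of mine, not from the post) is to stop propagating deltas and instead recompute each connected group as a single pool every cycle. A Python sketch, where the attribute names are assumptions:

        def update_power(nodes, consumers):
            # Flood-fill connected components, then settle each as one pool.
            seen = set()
            for start in nodes:
                if start in seen:
                    continue
                group, stack = [], [start]
                while stack:
                    n = stack.pop()
                    if n in seen:
                        continue
                    seen.add(n)
                    group.append(n)
                    stack.extend(n.connected)
                # Total production for the group, then pay consumers from it.
                available = sum(n.base_power for n in group)
                for c in (c for c in consumers if c.parent_node in group):
                    if available >= c.power_consumption:
                        available -= c.power_consumption
                        c.powered = True
                    else:
                        c.powered = False
                for n in group:
                    n.display_power = available   # what's left over, shared

    Because nothing carries over between cycles, there is no negative initialPower to feed back in, and a structure only fails to power when the pool genuinely cannot cover it.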


  • Start Time & Calculated Column Wonkiness in a SharePoint Event Calendar

    - by _zekeMouseOver
    I was creating some custom rollups on some of our event calendars and came across a very odd bug when trying to grab only the date component of the built-in Start Time field. One's first inclination will be to create a calculated column and give it the formula

        =[Start Time]

    ...and then assign its output type to be "Date Only." This works well until a user adds an All Day Event. For reasons unexplainable, the All Day Event flag causes your =[Start Time] to display the date minus one day. Here is an example of this in action: Start Date and Time, Duration, Start Date Value and Start Day are all calculated fields. Notice how the Start Date and Time (=[Start Time]) is reporting 6:00 PM of the previous day. The Start Date Value (=[Start Time], Output Type: Number) confirms this (.75 = 6:00 PM). Curiously enough, the Duration (=[End Time]-[Start Time]) is properly reporting the duration between 12:00 AM and 11:59 PM. Why? I don't know. Perhaps it's somehow bound to the regional settings on the site, but I'm not interested in changing a global site setting for the sake of one calculated field.

    With this information at our disposal, our calculated column to display the date part of the start date needs to be modified to add one day to the [Start Time] field if an All Day Event is selected. To determine this, we use the Duration above to assume the item is an all-day event and change our formula to be:

        =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",[Start Time]+1,[Start Time])

    This will work, but what happens when the user de-selects the "All Day Event" checkbox? The duration stays the same, but all other values begin reporting the correct time. Since our formula above is strictly based on an expected duration, it will add one to the correct date, causing the date 5/11/2010 to appear. Notice, though, that the raw value of the start time (in this case) is a non-fractional number (40,308), whereas the all-day event was being represented as 6:00 PM (.75) of the previous day. We can use this to add one more nested branch of logic to our calculation:

        =IF(TEXT(([End Time]-[Start Time])-TRUNC(([End Time]-[Start Time]),0),"0.000000000")="0.999305556",IF([Start Time]=ROUND([Start Time],0),[Start Time],[Start Time]+1),[Start Time])

    I feel somewhat... dirty about having to resort to this kind of calculation in what SHOULD have been a simple =[Start Time] to extract the date part of the Start Time field, but there you have it. Make sure to shower extra long after having used it.


  • Issues with LVM partition size in Server 13.04

    - by Michael
    I am new to Ubuntu and a little confused about how hard drive partitions and LVM work. I remember setting up Ubuntu Server 13.04 and telling it to use 1TB of a 3TB server. Well, I have maxed that out with Blu-ray rips and want the rest of the drive for space. On log-in it says:

        System load: 2.24              Processes: 179
        Usage of /: 88.7% of 912.89GB  Users logged in: 0
        Memory usage: 6%               IP address for p5p1: 192.168.0.100
        Swap usage: 0%
        => / is using 88.7% of 912.89GB

    lvdisplay outputs:

        --- Logical volume ---
        LV Path                 /dev/DeathStar-vg/root
        LV Name                 root
        VG Name                 DeathStar-vg
        LV Write Access         read/write
        LV Creation host, time  DeathStar, 2013-05-18 22:21:11 -0400
        LV Status               available
        # open                  1
        LV Size                 2.70 TiB
        Current LE              707789
        Segments                2
        Allocation              inherit
        Read ahead sectors      auto
        - currently set to      256
        Block device            252:0

        --- Logical volume ---
        LV Path                 /dev/DeathStar-vg/swap_1
        LV Name                 swap_1
        VG Name                 DeathStar-vg
        LV Write Access         read/write
        LV Creation host, time  DeathStar, 2013-05-18 22:21:11 -0400
        LV Status               available
        # open                  2
        LV Size                 3.75 GiB
        Current LE              959
        Segments                1
        Allocation              inherit
        Read ahead sectors      auto
        - currently set to      256
        Block device            252:1

    vgdisplay outputs:

        VG Name               DeathStar-vg
        System ID
        Format                lvm2
        Metadata Areas        1
        Metadata Sequence No  4
        VG Access             read/write
        VG Status             resizable
        MAX LV                0
        Cur LV                2
        Open LV               2
        Max PV                0
        Cur PV                1
        Act PV                1
        VG Size               2.73 TiB
        PE Size               4.00 MiB
        Total PE              715335
        Alloc PE / Size       708748 / 2.70 TiB
        Free  PE / Size       6587 / 25.73 GiB

    df outputs:

        Filesystem                     1K-blocks      Used Available Use% Mounted on
        /dev/mapper/DeathStar--vg-root 957238932 848972636  59634696  94% /
        none                                   4         0         4   0% /sys/fs/cgroup
        udev                             1864716         4   1864712   1% /dev
        tmpfs                             374968      1060    373908   1% /run
        none                                5120         4      5116   1% /run/lock
        none                             1874824       148   1874676   1% /run/shm
        none                              102400        24    102376   1% /run/user
        /dev/sda2                         234153     56477    165184  26% /boot

    And fdisk /dev/sda -l outputs:

        Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
        255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x00000000

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1  4294967295  2147483647+  ee  GPT
        Partition 1 does not start on physical sector boundary.

    I just don't know what to make of all this and am not sure how I can make it use all 2.73TB. Thanks in advance for any help.

    EDIT — Yes, I did make changes to the LVM config, but it didn't do anything. As requested, output of parted -l /dev/sda:

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sda: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2097kB  1049kB                     bios_grub
         2      2097kB  258MB   256MB   ext2
         3      258MB   3001GB  3000GB                     lvm

        Model: ATA WDC WD30EFRX-68A (scsi)
        Disk /dev/sdb: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: msdos

        Number  Start  End  Size  Type  File system  Flags

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-swap_1: 4022MB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system     Flags
         1      0.00B  4022MB  4022MB  linux-swap(v1)

        Model: Linux device-mapper (linear) (dm)
        Disk /dev/mapper/DeathStar--vg-root: 2969GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: loop

        Number  Start  End     Size    File system  Flags
         1      0.00B  2969GB  2969GB  ext4
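
    Reading the output above: the logical volume is already 2.70 TiB, but the ext4 filesystem inside it is still ~913 GB, so it's the filesystem, not the LV, that was never grown. The usual sequence (a suggestion, not from the thread — verify against your own setup before running) would be:

        # grow the LV into the remaining free extents (optional; ~25 GiB left)
        sudo lvextend -l +100%FREE /dev/DeathStar-vg/root
        # grow the ext4 filesystem to fill the LV (ext4 supports this online)
        sudo resize2fs /dev/mapper/DeathStar--vg-root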


  • Plans for Java 7 and E-Business Suite Certification

    - by Steven Chan (Oracle Development)
    As of June 2012, Java 7 has not been certified yet with Oracle E-Business Suite. EBS customers should continue to run JRE 6 on their Windows end-user desktops, and JDK 6 on their EBS servers. If a search engine has brought you to this article, please check the Certifications summary for our latest certified Java release.

    Our plans for certifying Java 7 for the E-Business Suite: we plan on releasing the Java 7 certification for E-Business Suite customers in two phases:

    - Phase 1: Certify JRE 7 for Windows end-user desktops
    - Phase 2: Certify JDK 7 for server-based components

    When will Java 7 be certified with EBS? We're working on the first phase now. As usual, I cannot discuss release dates here, but you can monitor or subscribe to this blog for updates.

    Current known issues with JRE 7 in EBS environments: our current testing shows that there are known incompatibilities between JRE 7 and the Forms-invocation process in EBS environments. We have been working directly with the Java division on this for a while now. In the meantime, EBS customers should not deploy JRE 7 to their end-user Windows desktop clients. You should stick with JRE 1.6 for now.

    But wait, you previously said... Older JRE certification announcements stated:

        Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and higher. We test all new JRE releases in parallel with the JRE development process, so all JRE releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE releases to your EBS users' desktops.

    Yes, this is true. This standard boilerplate text was written before JRE 7 was released, so there was no possibility of misunderstanding. With the availability of JRE 7, that boilerplate needs to be revised to read:

        Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops.

    References:
    - Recommended Browsers for Oracle Applications 11i (MetaLink Note 285218.1)
    - Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (MetaLink Note 290807.1)
    - Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1)
    - Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1)

    Related Articles:
    - Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23
    - Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009


  • Inherit one instance variable from the global scope

    - by Julian
    I'm using Curses to create a command line GUI with Ruby. Everything's going well, but I have hit a slight snag. I don't think Curses knowledge (esoteric to be fair) is required to answer this question, just Ruby concepts such as objects and inheritance. I'm going to explain my problem now, but if I'm banging on, just look at the example below. Basically, every Window instance needs to have .close called on it in order to close it. Some Window instances have other Windows associated with them. When closing a Window instance, I want to be able to close all of the other Window instances associated with it at the same time. Because associated Windows are generated in a logical fashion (I append the name with a number: instance_variable_set(self + integer, Window.new(10,10,10,10))), it's easy to target generated windows, because methods can anticipate what associated windows will be called (I can recreate the instance variable name from scratch and almost query it: instance_variable_get(self + integer)). I have a delete method that handles this. If the delete method is just a normal, global method (called like this: delete_window(@win543)), then everything works perfectly. However, if the delete method is an instance method, which it needs to be in order to use the self keyword, it doesn't work for a very clear reason: it can 'query' the correct instance variable perfectly well (instance_variable_get(self + integer)), however, because it's an instance method, the global instances aren't scoped to it! Now, one way around this would obviously be to simply make a global method like this: delete_window(@win543). But I have attributes associated with my window instances, and it all works very elegantly. This is very simplified, but it literally translates the problem exactly:

        class Dog
          def speak
            woof
          end
        end

        def woof
          if @dog_generic == nil
            puts "@dog_generic isn't scoped when .woof is called from a class method!\n"
          else
            puts "@dog_generic is scoped when .woof is called from the global scope. See:\n" + @dog_generic
          end
        end

        @dog_generic = "Woof!"
        lassie = Dog.new
        lassie.speak #=> @dog_generic isn't scoped when .woof is called from an instance method!\n
        woof         #=> @dog_generic is scoped when .woof is called from the global scope. See:\nWoof!

    TL/DR: I need lassie.speak to return this string: "@dog_generic is scoped when .woof is called from the global scope. See:\nWoof!" @dog_generic must remain as an instance variable. The use of Globals or Constants is not acceptable. Could woof inherit from the global scope? Maybe some sort of keyword:

        def woof < global
          # This 'code' is just to conceptualise what I want to do, don't take offence!
        end

    Is there some way the .woof method could 'pull in' @dog_generic from the global scope? Will @dog_generic have to be passed in as a parameter?
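
    There is no inherit-from-global keyword, but for what the example is driving at, a block-defined method can close over the scope it was written in. A sketch of mine, not from the post (top-level @dog_generic belongs to main, so a closure over a local is the nearest legal equivalent):

        generic = "Woof!"   # a local in the defining scope, not a global

        Dog.send(:define_method, :speak) do
          # the block closes over `generic` from the outer scope
          puts "captured from the defining scope: " + generic
        end

        Dog.new.speak   #=> captured from the defining scope: Woof!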


  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World. In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post and have some end-to-end OSCH external tables working, and now you want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading. OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */
          INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively you could embed additional parallel hints directly into the INSERT and SELECT clause respectively:

        /*+ parallel(my_oracle_table,8) */  /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP. It might feel natural to build your datasets in Hadoop, then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system's maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously on a production system where resources need to be shared 24x7, this can't be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be in situations when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files. Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave. So the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files, and assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.
    Determining the Number of HDFS Files. Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load it will be forced to put the singleton 900GB into one location file, and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime, while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600 GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where each workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10 GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps. So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story.
    They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.


  • VHDL - Problem with std_logic_vector

    - by wretrOvian
    Hi, I'm coding a 4-bit binary adder with accumulator:

        library ieee;
        use ieee.std_logic_1164.all;

        entity binadder is
            port(n,clk,sh: in bit;
                 x,y: inout std_logic_vector(3 downto 0);
                 co: inout bit;
                 done: out bit);
        end binadder;

        architecture binadder of binadder is
            signal state: integer range 0 to 3;
            signal sum,cin: bit;
        begin
            sum<= (x(0) xor y(0)) xor cin;
            co<= (x(0) and y(0)) or (y(0) and cin) or (x(0) and cin);

            process
            begin
                wait until clk='0';
                case state is
                    when 0=>
                        if(n='1') then
                            state<=1;
                        end if;
                    when 1|2|3=>
                        if(sh='1') then
                            x<= sum & x(3 downto 1);
                            y<= y(0) & y(3 downto 1);
                            cin<=co;
                        end if;
                        if(state=3) then
                            state<=0;
                        end if;
                end case;
            end process;

            done<='1' when state=3 else '0';
        end binadder;

    The output:

        -- Compiling architecture binadder of binadder
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): No feasible entries for infix operator "xor".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(15): Type error resolving infix expression "xor" as type std.standard.bit.
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): No feasible entries for infix operator "and".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in left operand of infix expression "or".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Bad expression in right operand of infix expression "or".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(16): Type error resolving infix expression "or" as type std.standard.bit.
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): No feasible entries for infix operator "&".
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(28): Type error resolving infix expression "&" as type ieee.std_logic_1164.std_logic_vector.
        ** Error: C:/Modeltech_pe_edu_6.5a/examples/binadder.vhdl(39): VHDL Compiler exiting

    I believe I'm not handling std_logic_vector's correctly. Please tell me how? :(
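
    The errors all stem from mixing the type bit (sum, cin, co) with the std_logic elements of x and y: xor/and/or and & have no overloads that span the two types. A minimal sketch of the fix (my suggestion, not from the post) is to keep the whole datapath in std_logic:

        -- assumed fix: declare the arithmetic signals as std_logic
        signal sum, cin : std_logic;
        -- ...and in the port, so co matches the expression that drives it:
        --   co : inout std_logic;

        sum <= (x(0) xor y(0)) xor cin;    -- both operands now std_logic
        x   <= sum & x(3 downto 1);        -- "&" now yields std_logic_vector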


  • How to Resolve Jason issue

    - by Mohammad Nezhad
    I am working on a web scheduling application which uses the DayPilot component. I started by reading the documentation and using the calendar control, but whenever I do something which fires an event on the DayPilot control, I am faced with the following error:

        An exception was thrown in the server-side event handler:
        System.InvalidCastException: Instance Of JasonData doesn't hold a double
        at DayPilot.Jason.JasonData.op_Explicit(JasonData data)
        at DayPilot.Web.Ui.Events.EventMoveEventArgs..ctor(JasonData parameters, string[] fields, JasonData data)
        at DayPilot.Web.Ui.DayPilotCalendar.ExecuteEventJASON(String ea)
        at DayPilot.Web.Ui.DayPilotCalendar.System.Web.UI.ICallbackEventHandler.RaiseCallbackEvent(string ea)
        at System.Web.UI.Page.PrepareCallback(String callbackControlID)

    Here is the complete code in the ASPX page:

        <DayPilot:DayPilotCalendar ID="DayPilotCalendar1" runat="server"
            Direction="RTL" Days="7"
            DataStartField="eventstart" DataEndField="eventend"
            DataTextField="name" DataValueField="id" DataTagFields="State"
            EventMoveHandling="CallBack" OnEventMove="DayPilotCalendar1_EventMove"
            EventEditHandling="CallBack" OnEventEdit="DayPilotCalendar1_EventEdit"
            EventClickHandling="Edit"
            OnBeforeEventRender="DayPilotCalendar1_BeforeEventRender"
            EventResizeHandling="CallBack" OnEventResize="DayPilotCalendar1_EventResize"
            EventDoubleClickHandling="CallBack"
            OnEventDoubleClick="DayPilotCalendar1_EventDoubleClick"
            oncommand="DayPilotCalendar1_Command" />

    And here is the complete code-behind:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
                Initialization();
        }

        private void dbUpdateEvent(string id, DateTime start, DateTime end)
        {
            // update for move
            using (SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["db"].ConnectionString))
            {
                con.Open();
                SqlCommand cmd = new SqlCommand("UPDATE [event] SET [eventstart] = @start, [eventend] = @end WHERE [id] = @id", con);
                cmd.Parameters.AddWithValue("id", id);
                cmd.Parameters.AddWithValue("start", start);
                cmd.Parameters.AddWithValue("end", end);
                cmd.ExecuteNonQuery();
            }
        }

        private DataTable dbGetEvents(DateTime start, int days)
        {
            // get data
            SqlDataAdapter da = new SqlDataAdapter("SELECT [id], [name], [eventstart], [eventend], [state], [other] FROM [event] WHERE NOT (([eventend] <= @start) OR ([eventstart] >= @end))", ConfigurationManager.ConnectionStrings["DayPilotTest"].ConnectionString);
            da.SelectCommand.Parameters.AddWithValue("start", start);
            da.SelectCommand.Parameters.AddWithValue("end", start.AddDays(days));
            DataTable dt = new DataTable();
            da.Fill(dt);
            return dt;
        }

        protected void DayPilotCalendar1_EventMove(object sender, DayPilot.Web.Ui.Events.EventMoveEventArgs e)
        {
            // drag and drop
            dbUpdateEvent(e.Value, e.NewStart, e.NewEnd);
            DayPilotCalendar1.DataSource = dbGetEvents(DayPilotCalendar1.StartDate, DayPilotCalendar1.Days);
            DayPilotCalendar1.DataBind();
            DayPilotCalendar1.Update();
        }

        private void Initialization()
        {
            // first bind
            DayPilotCalendar1.StartDate = DayPilot.Utils.Week.FirstDayOfWeek(new DateTime(2009, 1, 1));
            DayPilotCalendar1.DataSource = dbGetEvents(DayPilotCalendar1.StartDate, DayPilotCalendar1.Days);
            DataBind();
        }

        protected void DayPilotCalendar1_BeforeEventRender(object sender, BeforeEventRenderEventArgs e)
        {
            // change the color based on state
            if (e.Tag["State"] == "Fixed")
            {
                e.DurationBarColor = "Brown";
            }
        }

        protected void DayPilotCalendar1_EventResize(object sender, DayPilot.Web.Ui.Events.EventResizeEventArgs e)
        {
            dbUpdateEvent(e.Value, e.NewStart, e.NewEnd);
            DayPilotCalendar1.DataSource = dbGetEvents(DayPilotCalendar1.StartDate, DayPilotCalendar1.Days);
            DayPilotCalendar1.DataBind();
            DayPilotCalendar1.Update();
        }

        protected void DayPilotCalendar1_EventDoubleClick(object sender, EventClickEventArgs e)
        {
            dbInsertEvent(e.Text, e.Start, e.End, "New", "New Meeting");
            DayPilotCalendar1.DataSource = dbGetEvents(DayPilotCalendar1.StartDate, DayPilotCalendar1.Days);
            DayPilotCalendar1.DataBind();
            DayPilotCalendar1.Update();
        }

    Note: I removed unnecessary code-behind methods.


  • Binding to a dictionary in Silverlight with INotifyPropertyChanged

    - by rip
    In Silverlight, I cannot get INotifyPropertyChanged to work like I want it to when binding to a dictionary. In the example below, the page binds to the dictionary okay, but when I change the content of one of the textboxes the CustomProperties property setter is not called. The CustomProperties property setter is only called when CustomProperties itself is set, not when the values within it are set. I am trying to do some validation on the dictionary values and so am looking to run some code when each value within the dictionary is changed. Is there anything I can do here?

    C#:

        public partial class MainPage : UserControl
        {
            public MainPage()
            {
                InitializeComponent();
                MyEntity ent = new MyEntity();
                ent.CustomProperties.Add("Title", "Mr");
                ent.CustomProperties.Add("FirstName", "John");
                ent.CustomProperties.Add("Name", "Smith");
                this.DataContext = ent;
            }
        }

        public class MyEntity : INotifyPropertyChanged
        {
            public event PropertyChangedEventHandler System.ComponentModel.INotifyPropertyChanged.PropertyChanged;
            public delegate void PropertyChangedEventHandler(object sender, System.ComponentModel.PropertyChangedEventArgs e);

            private Dictionary<string, object> _customProps;

            public Dictionary<string, object> CustomProperties
            {
                get
                {
                    if (_customProps == null)
                    {
                        _customProps = new Dictionary<string, object>();
                    }
                    return _customProps;
                }
                set
                {
                    _customProps = value;
                    if (PropertyChanged != null)
                    {
                        PropertyChanged(this, new PropertyChangedEventArgs("CustomProperties"));
                    }
                }
            }
        }

    VB:

        Partial Public Class MainPage
            Inherits UserControl

            Public Sub New()
                InitializeComponent()
                Dim ent As New MyEntity
                ent.CustomProperties.Add("Title", "Mr")
                ent.CustomProperties.Add("FirstName", "John")
                ent.CustomProperties.Add("Name", "Smith")
                Me.DataContext = ent
            End Sub
        End Class

        Public Class MyEntity
            Implements INotifyPropertyChanged

            Public Event PropertyChanged(ByVal sender As Object, ByVal e As System.ComponentModel.PropertyChangedEventArgs) Implements System.ComponentModel.INotifyPropertyChanged.PropertyChanged

            Private _customProps As Dictionary(Of String, Object)

            Public Property CustomProperties As Dictionary(Of String, Object)
                Get
                    If _customProps Is Nothing Then
                        _customProps = New Dictionary(Of String, Object)
                    End If
                    Return _customProps
                End Get
                Set(ByVal value As Dictionary(Of String, Object))
                    _customProps = value
                    RaiseEvent PropertyChanged(Me, New PropertyChangedEventArgs("CustomProperties"))
                End Set
            End Property
        End Class

    XAML:

        <TextBox Height="23" Name="TextBox1" Text="{Binding Path=CustomProperties[Title], Mode=TwoWay}" />
        <TextBox Height="23" Name="TextBox2" Text="{Binding Path=CustomProperties[FirstName], Mode=TwoWay}" />
        <TextBox Height="23" Name="TextBox3" Text="{Binding Path=CustomProperties[Name], Mode=TwoWay}" />
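
    Binding writes through an indexer mutate the dictionary directly, so no setter on the owning object ever runs. One workable pattern (a sketch of mine, not from the post) is to bind to a small wrapper whose indexer raises change notifications, giving you a hook for validation:

        public class ObservableSettings : INotifyPropertyChanged
        {
            private readonly Dictionary<string, object> _store = new Dictionary<string, object>();
            public event PropertyChangedEventHandler PropertyChanged;

            public object this[string key]
            {
                get { return _store.ContainsKey(key) ? _store[key] : null; }
                set
                {
                    // validation hook: inspect key/value here before storing
                    _store[key] = value;
                    var handler = PropertyChanged;
                    if (handler != null)
                        handler(this, new PropertyChangedEventArgs("Item[]"));
                }
            }
        }

    "Item[]" is the WPF/Silverlight convention for "an indexed value changed"; indexer binding support varies by Silverlight version, so treat this as a starting point rather than a drop-in fix.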


  • Creating a Blog in Ruby on Rails - Problem Deleting Comments

    - by bgadoci
    As I always type, I am new to Rails and programming in general, so go easy. Thanks in advance. I have successfully followed the initial tutorial from Ryan Bates on how to build a weblog in 15 minutes. If you don't know, this tutorial takes you through creating posts and allowing for comments on those posts. It even introduces AJAX through creating and displaying comments on the post's show.html.erb page. All works great. Here's the hiccup: when Ryan takes you through this tutorial he clears out the comments_controller and only shows the code for creating comments. I am trying to add back the ability to edit and destroy comments. I can't seem to get it to work; it keeps deleting the actual post, not the comment (the log shows that I keep sending DELETE requests to PostsController). Here is my code:

        class CommentsController < ApplicationController
          def create
            @post = Post.find(params[:post_id])
            @comment = @post.comments.create!(params[:comment])
            respond_to do |format|
              format.html { redirect_to @post }
              format.js
            end
          end

          def destroy
            @comment = Comment.find(params[:id])
            @comment.destroy
            respond_to do |format|
              format.html { redirect_to(posts_url) }
              format.xml  { head :ok }
            end
          end
        end

    /views/posts/show.html.erb:

        <%= render :partial => @post %>
        <p>
          <%= link_to 'Edit', edit_post_path(@post) %> |
          <%= link_to 'Destroy', @post, :method => :delete, :confirm => "Are you sure?" %> |
          <%= link_to 'See All Posts', posts_path %>
        </p>
        <h2>Comments</h2>
        <div id="comments">
          <%= render :partial => @post.comments %>
        </div>
        <% remote_form_for [@post, Comment.new] do |f| %>
          <p>
            <%= f.label :body, "New Comment" %><br/>
            <%= f.text_area :body %>
          </p>
          <p><%= f.submit "Add Comment" %></p>
        <% end %>

    /views/comments/_comment.html.erb:

        <% div_for comment do %>
          <p>
            <strong>Posted <%= time_ago_in_words(comment.created_at) %> ago</strong><br/>
            <%= h(comment.body) %><br/>
            <%= link_to 'Destroy', @comments, :method => :delete, :confirm => "Are you sure?" %>
          </p>
        <% end %>
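
    The giveaway is the partial's link_to 'Destroy', @comments: @comments is nil inside the partial, so the link falls back to the current page's URL — the post — which is why the DELETE hits PostsController. Pointing the link at the comment itself, via the nested resource (assuming comments are nested under posts in routes.rb), should fix it; a sketch:

        <%= link_to 'Destroy', [comment.post, comment],
                    :method => :delete, :confirm => "Are you sure?" %>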


  • Nested attributes in the index view?

    - by user283179
    I seem to be getting the error uninitialized constant Style::Pic when I'm trying to render a nested object in the index view; the show view is fine.

        class Style < ActiveRecord::Base
          #belongs_to :users
          has_many :style_images, :dependent => :destroy
          accepts_nested_attributes_for :style_images,
            :reject_if => proc { |a| a.all? { |k, v| v.blank? } }
          # found this here http://ryandaigle.com/articles/2009/2/1/what-s-new-in-edge-rails-nested-attributes
          has_one :cover, :class_name => "Pic", :order => "updated_at DESC"
          accepts_nested_attributes_for :cover
        end

        class StyleImage < ActiveRecord::Base
          belongs_to :style
          #belongs_to :style_as_cover, :class_name => "Style", :foreign_key => "style_id"
          has_attached_file :pic, :styles => { :small => "200x0>", :normal => "600x> " }
          validates_attachment_presence :pic
          #validates_attachment_size :pic, :less_than => 5.megabytes
        end

    The show view renders each style_image:

        <% for style_image in @style.style_images %>
          <li><%= style_image.caption %></li>
          <div id="show_photo">
            <%= image_tag style_image.pic.url(:normal) %>
          </div>
        <% end %>

    As you can see from the above, the main model Style has many style_images, and these are all displayed in the show view. But in the index view I wish to show one image per style, which has been named as a cover. In the index controller I have tried the following:

        class StylesController < ApplicationController
          layout "mini"

          def index
            @styles = Style.find(:all, :inculde => [:cover,]).reverse
            respond_to do |format|
              format.html # index.html.erb
              format.xml  { render :xml => @styles }
            end
          end
        end

    And the index view:

        <% @styles.each do |style| %>
          <%= image_tag style.cover.pic.url(:small) %>
        <% end %>

    In the style_images table there is also a cover_id. As you can see, I have included the cover in the controller and the model, but I have no idea where I'm going wrong here. If anyone can help, please do!
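
    Two things stand out (my reading, not from the thread): the has_one names a class Pic that doesn't exist — the image model is StyleImage — which is exactly what uninitialized constant Style::Pic means; and :inculde is a misspelling of :include, so the eager-load option is silently ignored. A corrected sketch:

        # in style.rb — point the association at the real class
        has_one :cover, :class_name => "StyleImage", :order => "updated_at DESC"

        # in styles_controller.rb — spell :include correctly
        @styles = Style.find(:all, :include => [:cover]).reverse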


  • Delphi: how to set the length of a RTTI-accessed dynamic array using DynArraySetLength?

    - by conciliator
    I'd like to set the length of a dynamic array, as suggested in this post. I have two classes, TMyClass and the related TChildClass, defined as:

        TChildClass = class
        private
          FField1: string;
          FField2: string;
        end;

        TMyClass = class
        private
          FField1: TChildClass;
          FField2: Array of TChildClass;
        end;

    The array augmentation is implemented as:

        var
          RContext: TRttiContext;
          RType: TRttiType;
          Val: TValue;              // Contains the TMyClass instance
          RField: TRttiField;       // A field in the TMyClass instance
          RElementType: TRttiType;  // The kind of elements in the dyn array
          DynArr: TRttiDynamicArrayType;
          Value: TValue;            // Holding an instance as referenced by an array element
          ArrPointer: Pointer;
          ArrValue: TValue;
          ArrLength: LongInt;
          i: integer;
        begin
          RContext := TRTTIContext.Create;
          try
            RType := RContext.GetType(TMyClass.ClassInfo);
            Val := RType.GetMethod('Create').Invoke(RType.AsInstance.MetaclassType, []);
            RField := RType.GetField('FField2');
            if (RField.FieldType is TRttiDynamicArrayType) then
            begin
              DynArr := (RField.FieldType as TRttiDynamicArrayType);
              RElementType := DynArr.ElementType;
              // Set the new length of the array
              ArrValue := RField.GetValue(Val.AsObject);
              ArrLength := 3; // Three seems like a nice number
              ArrPointer := ArrValue.GetReferenceToRawData;
              DynArraySetLength(ArrPointer, ArrValue.TypeInfo, 1, @ArrLength);
              { TODO : Fix 'Index out of bounds' }
              WriteLn(ArrValue.IsArray, ' ', ArrValue.GetArrayLength);
              if RElementType.IsInstance then
              begin
                for i := 0 to ArrLength - 1 do
                begin
                  Value := RElementType.GetMethod('Create').Invoke(RElementType.AsInstance.MetaclassType, []);
                  ArrValue.SetArrayElement(i, Value);
                  // This is just a test, so let's clean up immediately
                  Value.Free;
                end;
              end;
            end;
            ReadLn;
            Val.AsObject.Free;
          finally
            RContext.Free;
          end;
        end.

    Being new to D2010 RTTI, I suspected the error could depend on getting ArrValue from the class instance, but the subsequent WriteLn prints TRUE, so I've ruled that out. Disappointingly, however, the same WriteLn reports that the size of ArrValue is 0, which is confirmed by the "Index out of bounds" exception I get when trying to set any of the elements in the array (through ArrValue.SetArrayElement(i, Value)). Does anyone know what I'm doing wrong here? (Or perhaps there is a better way to do this?) TIA!
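
    One likely catch: GetReferenceToRawData returns a pointer to the TValue's copy of the array reference, and the resized array is never written back to the TValue or the field. A sketch of the usual workaround (from memory, so treat every line as an assumption to verify): dereference that pointer for DynArraySetLength, rebuild the TValue, and store it back.

        // assumed sketch: resize the array held in ArrValue, then persist it
        ArrPointer := PPointer(ArrValue.GetReferenceToRawData)^;
        DynArraySetLength(ArrPointer, ArrValue.TypeInfo, 1, @ArrLength);
        TValue.Make(@ArrPointer, ArrValue.TypeInfo, ArrValue);
        // ...populate elements via ArrValue.SetArrayElement(i, Value)...
        RField.SetValue(Val.AsObject, ArrValue);  // write the resized array back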


  • Sortable_element with RJS does not work

    - by jaycode
    I have a list of images where users can arrange their order. When a user uploads an image, I want the list to still be sortable. I am using an upload similar to the one described here: http://kpumuk.info/ruby-on-rails/in-place-file-upload-with-ruby-on-rails/ Please help. Here is the code for the upload form in the view file:

        <% form_for [:admin, @new_image], :html => { :target => 'upload_frame', :multipart => true } do |f| %>
          <%= hidden_field_tag :update, 'product_images' %>
          <%= f.hidden_field :image_owner_id %>
          <%= f.hidden_field :image_owner_type %>
          <%= f.file_field :image_file %><br />
          or get image from this URL: <%= f.text_field :image_file_url %>
          <%= f.hidden_field :image_file_temp %><br />
          <%= f.submit "Upload Image" %>
        <% end %>

    And in the controller:

        def create
          @image = Image.new(params[:image])
          logger.debug "params are #{params.inspect}"
          if @image.save
            logger.debug "initiating javascript now"
            responds_to_parent do
              render :update do |page|
                logger.debug "javascript test #{sortable_element("product_images", :url => sort_admin_images_path, :handle => "handle", :constraint => false)}"
                page << "show_notification('Image Uploaded');"
                page.replace_html params[:update], :partial => '/admin/shared/editor/images', :locals => {:object => @image.image_owner, :updated_image => @image}
                page << sortable_element("product_images", :url => sort_admin_images_path, :handle => "handle", :constraint => false)
              end
            end
          else
            responds_to_parent do
              render :update do |page|
                page << "show_notification('Image Upload Error');"
              end
            end
          end
        end

    Or, to rephrase the question: running this —

        page.replace_html params[:update], :partial => '/admin/shared/editor/images', :locals => {:object => @image.image_owner, :updated_image => @image}
        page << sortable_element("product_images", :url => sort_admin_images_path, :handle => "handle", :constraint => false)

    — will NOT add the sortable-list feature. Please help. Thank you.
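
    One thing worth checking (my suggestion, not from the thread): inside an RJS block the generator has its own sortable helper, which emits the Sortable.create call as executable script after the replaced markup is in the DOM, whereas appending the view helper's output via page << can insert markup instead of running it. Something along these lines:

        render :update do |page|
          page.replace_html params[:update],
            :partial => '/admin/shared/editor/images',
            :locals  => {:object => @image.image_owner, :updated_image => @image}
          # re-attach drag & drop after the list has been re-rendered
          page.sortable 'product_images',
            :url => sort_admin_images_path, :handle => 'handle', :constraint => false
        end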


  • Problem with format a single excel column with OLE automation using Delphi

    - by Snackmoore
    Dear All, I have a piece of code which I use to format a range of cells in Excel. It works fine in Excel 2007, but when the range is only one column wide and it is Excel 2003 instead of 2007, I get an error saying that I am assigning an invalid value for a border's line style. (Values such as xlInsideHorizontal I have declared as constants with the proper values.) Please help.

        procedure formatCells(FRCELLROW, FRCELLCOL, TOCELLROW, TOCELLCOL: Integer;
          TOPSTYLE, TOPCOLOUR, TOPWEIGHT,
          BOTTOMSTYLE, BOTTOMCOLOUR, BOTTOMWEIGHT,
          LEFTSTYLE, LEFTCOLOUR, LEFTWEIGHT,
          RIGHTSTYLE, RIGHTCOLOUR, RIGHTWEIGHT: Integer;
          INNERVSTYLE, INNERVCOLOUR, INNERVWEIGHT: Integer;
          INNERHSTYLE, INNERHCOLOUR, INNERHWEIGHT: Integer;
          HORIZONTALCELLALIGNMENT: Integer;
          FontBold: Boolean;
          NumberFormat: String);
        var
          tmpRange: Variant;
        begin
          tmpRange := eclApp.range[eclApp.Cells[FRCELLROW, FRCELLCOL],
                                   eclApp.Cells[TOCELLROW, TOCELLCOL]];

          tmpRange.Borders[xlEdgeTop].LineStyle := TOPSTYLE;
          if TOPSTYLE <> xlNone then
          begin
            tmpRange.Borders[xlEdgeTop].ColorIndex := TOPCOLOUR;
            tmpRange.Borders[xlEdgeTop].Weight := TOPWEIGHT;
          end; //if

          tmpRange.Borders[xlEdgeBottom].LineStyle := BOTTOMSTYLE;
          if BOTTOMSTYLE <> xlNone then
          begin
            tmpRange.Borders[xlEdgeBottom].ColorIndex := BOTTOMCOLOUR;
            tmpRange.Borders[xlEdgeBottom].Weight := BOTTOMWEIGHT;
          end; //if

          tmpRange.Borders[xlEdgeLeft].LineStyle := LEFTSTYLE;
          if LEFTSTYLE <> xlNone then
          begin
            tmpRange.Borders[xlEdgeLeft].ColorIndex := LEFTCOLOUR;
            tmpRange.Borders[xlEdgeLeft].Weight := LEFTWEIGHT;
          end; //if

          tmpRange.Borders[xlEdgeRight].LineStyle := RIGHTSTYLE;
          if RIGHTSTYLE <> xlNone then
          begin
            tmpRange.Borders[xlEdgeRight].ColorIndex := RIGHTCOLOUR;
            tmpRange.Borders[xlEdgeRight].Weight := RIGHTWEIGHT;
          end; //if

          tmpRange.Borders[xlInsideVertical].LineStyle := INNERVSTYLE;
          if INNERVSTYLE <> xlNone then
          begin
            tmpRange.Borders[xlInsideVertical].ColorIndex := INNERVCOLOUR;
            tmpRange.Borders[xlInsideVertical].Weight := INNERVWEIGHT;
          end; //if

          tmpRange.Borders[xlInsideHorizontal].LineStyle := INNERHSTYLE;
          if INNERHSTYLE <> xlNone then
          begin
            tmpRange.Borders[xlInsideHorizontal].ColorIndex := INNERHCOLOUR;
            tmpRange.Borders[xlInsideHorizontal].Weight := INNERHWEIGHT;
          end; //if

          tmpRange.HorizontalAlignment := HORIZONTALCELLALIGNMENT;
          tmpRange.Font.Bold := FontBold;
          tmpRange.NumberFormat := NumberFormat;
        end;
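
    A one-column range has no inside-vertical borders, and Excel 2003 is stricter than 2007 about touching borders that cannot exist for the range's shape. A guard along these lines (my suggestion, untested against 2003) sidesteps the error:

        // skip inside borders that cannot exist for this range shape
        if tmpRange.Columns.Count > 1 then
          tmpRange.Borders[xlInsideVertical].LineStyle := INNERVSTYLE;
        if tmpRange.Rows.Count > 1 then
          tmpRange.Borders[xlInsideHorizontal].LineStyle := INNERHSTYLE;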


  • RSpec mocking a nested model in Rails - ActionController problem

    - by emson
    Hi all, I am having a problem in RSpec when my mock object is asked for a URL by the ActionController: the URL is a mock one and not a correct resource URL. I am running RSpec 1.3.0 and Rails 2.3.5. Basically I have two models, where a subject has many notes:

        class Subject < ActiveRecord::Base
          validates_presence_of :title
          has_many :notes
        end

        class Note < ActiveRecord::Base
          validates_presence_of :title
          belongs_to :subject
        end

    My routes.rb file nests these two resources as such:

        ActionController::Routing::Routes.draw do |map|
          map.resources :subjects, :has_many => :notes
        end

    The NotesController looks like this:

        class NotesController < ApplicationController
          # POST /notes
          # POST /notes.xml
          def create
            @subject = Subject.find(params[:subject_id])
            @note = @subject.notes.create!(params[:note])
            respond_to do |format|
              format.html { redirect_to(@subject) }
            end
          end
        end

    Finally, this is my RSpec spec, which should simply post my mocked objects to the NotesController:

        it "should create note and redirect to subject without javascript" do
          # usual rails controller test setup here
          subject = mock(Subject)
          Subject.stub(:find).and_return(subject)
          notes_proxy = mock('association proxy', { "create!" => Note.new })
          subject.stub(:notes).and_return(notes_proxy)
          post :create, :subject_id => subject, :note => { :title => 'note title', :body => 'note body' }
        end

    The problem is that when the RSpec post method is called, the NotesController correctly handles the mock Subject object and creates the new Note object via create!. However, when the controller tries to redirect_to, I get the following error:

        NoMethodError in 'NotesController should create note and redirect to subject without javascript'
        undefined method `spec_mocks_mock_url' for #<NotesController:0x1034495b8>

    Now this is caused by a bit of Rails trickery that passes an ActiveRecord object (@subject in our case, which isn't ActiveRecord but a mock object) eventually to url_for, which passes all the options to Rails' routing, which then determines the URL. My question is: how can I mock Subject so that the correct options are passed and my test passes? I've tried passing in :controller => 'subjects' options, but no joy. Is there some other way of doing this? Thanks...
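
    In rspec-rails the usual way around this is mock_model, which stamps the mock with an id and the model's class name so url_for/polymorphic routing can build subject_url from it. A sketch, assuming rspec-rails 1.3's API:

        subject = mock_model(Subject)          # routes like a real AR record
        Subject.stub(:find).and_return(subject)
        # redirect_to(@subject) can now resolve subject_url(subject)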


  • Rotation Matrix calculates by column not by row

    - by pinnacler
    I have a class called forest with a property called fixedPositions that stores 100 points (x,y); they are stored 250x2 (rows x columns) in MATLAB. When I select fixedPositions, I can click scatter and it will plot the points. Now I want to rotate the plotted points, and I have a rotation matrix that will allow me to do that. The below code should work:

        theta = obj.heading * pi/180;
        apparent = [cos(theta) -sin(theta); sin(theta) cos(theta)] * obj.fixedPositions;

    But it won't. I get this error:

        ??? Error using ==> mtimes
        Inner matrix dimensions must agree.

        Error in ==> landmarks>landmarks.get.apparentPositions at 22
        apparent = [cos(theta) -sin(theta); sin(theta) cos(theta)] * obj.fixedPositions;

    When I alter forest.fixedPositions to store the variables 2x250 instead of 250x2, the above code will work, but it won't plot. I'm going to be plotting fixedPositions constantly in a simulation, so I'd prefer to leave it as is and make the rotation work instead. Any ideas? Also, fixedPositions is the position of the xy points as if you were looking straight ahead, i.e. heading = 0. heading is set to 45, meaning I want to rotate the points clockwise 45 degrees. Here is my code:

        classdef landmarks
            properties
                fixedPositions %# positions in a fixed coordinate system. [x, y]
                heading = 45;  %# direction in which the robot is facing
            end
            properties (Dependent)
                apparentPositions
            end
            methods
                function obj = landmarks(numberOfTrees)
                    %# randomly generates numberOfTrees amount of x,y coordinates and sets
                    %# the array or matrix (not sure which) to fixedPositions
                    obj.fixedPositions = 100 * rand([numberOfTrees,2]) .* sign(rand([numberOfTrees,2]) - 0.5);
                end
                function obj = set.apparentPositions(obj,~)
                    theta = obj.heading * pi/180;
                    [cos(theta) -sin(theta); sin(theta) cos(theta)] * obj.fixedPositions;
                end
                function apparent = get.apparentPositions(obj)
                    %# rotate obj.positions using obj.facing to generate the output
                    theta = obj.heading * pi/180;
                    apparent = [cos(theta) -sin(theta); sin(theta) cos(theta)] * obj.fixedPositions;
                end
            end
        end

    P.S. If you change one line to this:

        obj.fixedPositions = 100 * rand([2,numberOfTrees]) .* sign(rand([2,numberOfTrees]) - 0.5);

    everything will work fine... it just won't plot.
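
    Since the rotation matrix is 2x2, it has to multiply a 2xN matrix; with points stored Nx2, the standard trick is to transpose into the multiplication and back out again (a one-line change of mine, not from the original post), so the stored layout stays plot-friendly:

        theta = obj.heading * pi/180;
        R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
        %# transpose to 2xN for the multiply, then back to Nx2 for plotting
        apparent = (R * obj.fixedPositions')';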


  • Odd ActiveRecord model dynamic initialization bug in production

    - by qfinder
    I've got an ActiveRecord (2.3.5) model that occasionally exhibits incorrect behavior that appears to be related to a problem in its dynamic initialization. Here's the code:

        class Widget < ActiveRecord::Base
          extend ActiveSupport::Memoizable
          serialize :settings

          VALID_SETTINGS = %w(show_on_sale show_upcoming show_current show_past)

          VALID_SETTINGS.each do |setting|
            class_eval %{
              def #{setting}=(val); self.settings[:#{setting}] = (val == "1"); end
              def #{setting}; self.settings[:#{setting}]; end
            }
          end

          def initialize_settings
            self.settings ||= { :show_on_sale => true, :show_upcoming => true }
          end
          after_initialize :initialize_settings

          # All the other stuff the model does
        end

    The idea was to use a single record field (settings) to persist a bunch of configuration data for this object, but allow all the settings to seamlessly work with form helpers and the like. (Why this approach makes sense here is a little out of scope, but let's assume that it does.) Net-net, Widget should end up with instance methods (e.g. #show_on_sale= and #show_on_sale) for all the entries in the VALID_SETTINGS array. Any default values should be specified in initialize_settings. And indeed this works, mostly. In dev and staging, no problems at all. But in production, the app sometimes ends up in a state where a) any writes to the dynamically generated setters fail, and b) none of the default values appear to be set — although my leading theory is that the dynamically generated reader methods are just broken. The code, db, and environment are otherwise identical between the three. A typical error message / backtrace on the fail looks like:

        IndexError: index 141145 out of string
        (eval):2:in `[]='
        (eval):2:in `show_on_sale='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `send'
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2746:in `attributes='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `each'
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2742:in `attributes='
        [GEM_ROOT]/gems/activerecord-2.3.5/lib/active_record/base.rb:2634:in `update_attributes!'
        ...(then controller and all the way down)

    Ideas or theories as to what might be going on? My leading theory is that something is going wrong in instance initialization wherein the instance variable settings is ending up as a string rather than a hash. This explains both the above setter failure (:show_on_sale is being used to index into the string) and the fact that getters don't work (an out-of-bounds [] call on a string just returns nil). But then how and why might settings occasionally end up as a string rather than a hash?
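
    The string theory fits the backtrace: in Ruby 1.8 a Symbol converts to its integer id, so string[:show_on_sale] = ... indexes the string at something like 141145 and raises exactly this IndexError. That happens when a row's settings column deserializes to a plain string (e.g. written before serialize was in place). One defensive option (a sketch, not the author's fix) is to pin the serialized type so bad rows fail loudly and the default is always a hash:

        class Widget < ActiveRecord::Base
          # Declaring the expected class makes AR raise SerializationTypeMismatch
          # when a row's settings column doesn't deserialize to a Hash.
          serialize :settings, Hash

          def initialize_settings
            self.settings = {} unless settings.is_a?(Hash)
            settings.reverse_merge!(:show_on_sale => true, :show_upcoming => true)
          end
          after_initialize :initialize_settings
        end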

    Read the article

  • C Programming - My program is good enough for my assignment but I know it's not good

    - by Joe
Hi there. I'm just starting an assignment for uni and it's raised a question for me: I don't understand how to return a string from a function without creating a memory leak.

    char* trim(char* line)
    {
        int start = 0;
        int end = strlen(line) - 1;

        /* find the start position of the string */
        while (isspace(line[start]) != 0) {
            start++;
        }
        //printf("start is %d\n", start);

        /* find the end position of the string */
        while (isspace(line[end]) != 0) {
            end--;
        }
        //printf("end is %d\n", end);

        /* calculate string length and add 1 for the terminating '\0' */
        int len = end - start + 2;

        /* allocate a char array of length len and copy the characters in */
        int i;
        char* trimmed = calloc(len, sizeof(char));
        for (i = 0; i < (len - 1); i++) {
            trimmed[i] = line[start + i];
        }
        trimmed[len - 1] = '\0';
        return trimmed;
    }

As you can see, I am returning a pointer to char which is an array. I found that if I tried to make the trimmed array with something like char trimmed[len]; the compiler would throw up a message saying that a constant was expected on this line. I assume this means that for some reason you can't use a variable as the array length when declaring an array, although something tells me that can't be right. So instead I made my array by allocating some memory and assigning it to a char pointer. I understand that this function is probably way sub-optimal for what it is trying to do, but what I really want to know is: 1. Can you normally declare an array using a variable for the length, like char trimmed[len];? 2. If I had an array of that type (char trimmed[]), would it have the same return type as a pointer to char (i.e. char*)? 3. If I make my array by callocing some memory and assigning it to a char pointer, how do I free this memory? It seems to me that once I have returned this array, I can't access it to free it, as it is a local variable. Many thanks in advance, Joe
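On question 3, the usual contract is that the caller owns the returned buffer and frees it. Here is a minimal sketch of that calling pattern (assumed usage, not from the original post):

    #include <stdio.h>
    #include <stdlib.h>

    char *trim(char *line);  /* the poster's function, assumed in scope */

    int main(void)
    {
        char line[] = "   hello, world   ";
        char *trimmed = trim(line);  /* trim() hands back a heap buffer */
        printf("[%s]\n", trimmed);
        free(trimmed);               /* the caller releases it when done */
        return 0;
    }

The memory isn't leaked just because trim() returns: the pointer still refers to the heap block, so whoever receives it can (and must) free it.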

    Read the article

  • Python read multiline JSON

    - by Paul W
I have been trying to use JSON to store settings for a program, but I can't seem to get Python 2.6's JSON decoder to decode multi-line JSON strings. Here is an example .settings file:

    """
    {\
    'user':'username',\
    'password':'passwd',\
    }\
    """

I have tried a couple of other syntaxes for this file, which I will show below (with the tracebacks they cause). My Python code for reading the file is:

    import json
    settings_text = open(".settings", "r").read()
    settings = json.loads(settings_text)

The traceback for this is:

    Traceback (most recent call last):
      File "json_test.py", line 4, in <module>
        print json.loads(text)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py", line 307, in loads
        return _default_decoder.decode(s)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 322, in decode
        raise ValueError(errmsg("Extra data", s, end, len(s)))
    ValueError: Extra data: line 1 column 2 - line 7 column 1 (char 2 - 41)

I assume the "extra data" is the triple quote. Here are the other syntaxes I have tried for the .settings file, with their respective tracebacks:

    "{\
    'user':'username',\
    'pass':'passwd'\
    }"

    Traceback (most recent call last):
      File "json_test.py", line 4, in <module>
        print json.loads(text)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py", line 307, in loads
        return _default_decoder.decode(s)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 319, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 336, in raw_decode
        obj, end = self._scanner.iterscan(s, **kw).next()
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/scanner.py", line 55, in iterscan
        rval, next_pos = action(m, context)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 155, in JSONString
        return scanstring(match.string, match.end(), encoding, strict)
    ValueError: Invalid \escape: line 1 column 2 (char 2)

    '{\
    "user":"username",\
    "pass":"passwd",\
    }'

    Traceback (most recent call last):
      File "json_test.py", line 4, in <module>
        print json.loads(text)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/__init__.py", line 307, in loads
        return _default_decoder.decode(s)
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 319, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/json/decoder.py", line 338, in raw_decode
        raise ValueError("No JSON object could be decoded")
    ValueError: No JSON object could be decoded

If I put the settings all on one line, it decodes fine.
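A minimal sketch of the likely fix (inferred from the tracebacks, not the poster's accepted answer): the quotes and backslash continuations are Python string-literal syntax, not JSON. JSON itself permits newlines between tokens, so a plain multi-line file with double-quoted keys and no trailing comma parses directly:

    import json

    # .settings is assumed to contain plain JSON, e.g.:
    # {
    #     "user": "username",
    #     "password": "passwd"
    # }
    with open(".settings", "r") as f:
        settings = json.load(f)  # reads multi-line JSON with no trouble
    print settings["user"]

The multi-line failures above come from the decoder tripping over the quote and backslash characters, not over the newlines themselves.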

    Read the article

  • Putting update logic in your migrations

    - by Daniel Abrahamsson
A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I understand, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with this kind of problem? To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the user table. And here comes the problem. In the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I can not use the seeds file, but rather have to create the user types in the migration (also considered bad practice). Here is some fictional code representing the situation:

    class CreateUserTypes < ActiveRecord::Migration
      def self.up
        create_table :user_types do |t|
          t.string :name, :null => false, :unique => true
        end

        # Create basic types (can not put in seed, because of future migration dependency)
        UserType.create!(:name => "BASIC")
        UserType.create!(:name => "MODERATOR")
        UserType.create!(:name => "ADMINISTRATOR")
      end

      def self.down
        drop_table :user_types
      end
    end

    class AddTypeIdToUsers < ActiveRecord::Migration
      def self.up
        add_column :users, :type_id, :integer

        # Determine type via the admin flag
        basic = UserType.find_by_name("BASIC")
        admin = UserType.find_by_name("ADMINISTRATOR")
        User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }

        # Remove the admin flag
        remove_column :users, :admin

        # Add foreign key
        execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
      end

      def self.down
        # Re-add the admin flag
        add_column :users, :admin, :boolean, :default => false

        # Reset the admin flag (this is the problematic update code)
        admin = UserType.find_by_name("ADMINISTRATOR")
        execute "update users set admin=true where type_id=#{admin.id}"

        # Remove foreign key constraint
        execute "alter table users drop foreign key fk_user_type_id"

        # Drop the type_id column
        remove_column :users, :type_id
      end
    end

As you can see, there are two problematic parts: first, the row-creation code in the first migration, which is necessary if I want to be able to run all migrations in a row; second, the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?
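One commonly suggested mitigation, sketched here as an option rather than the answer: define throwaway model classes inside the migration itself, so the data update no longer depends on the application's current model code (which may drift or disappear after this migration is written).

    class AddTypeIdToUsers < ActiveRecord::Migration
      # Stub models scoped to this migration; later changes to the real
      # app models can no longer break this data update.
      class UserType < ActiveRecord::Base; end
      class User < ActiveRecord::Base; end

      def self.up
        add_column :users, :type_id, :integer
        User.reset_column_information  # pick up the freshly added column

        basic = UserType.find_by_name("BASIC")
        admin = UserType.find_by_name("ADMINISTRATOR")
        User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }

        remove_column :users, :admin
      end
    end

This doesn't remove the data logic from the migration, but it does make the migration self-contained, which is most of what makes such logic risky.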

    Read the article

  • Issues in Ada Concurrency

    - by Arkapravo
Hi, I need some help and also some insight. This is a program in Ada 2005 which has 3 tasks. The output is z. If the 3 tasks do not execute in the order of their placement in the program, the output can vary among z = 2, z = 1 and z = 0 (that is easy to see in the program; mutual exclusion is attempted to make sure the output is z = 2).

    WITH Ada.Text_IO;         USE Ada.Text_IO;
    WITH Ada.Integer_Text_IO; USE Ada.Integer_Text_IO;
    WITH System;              USE System;

    procedure xyz is
       x : Integer := 0;
       y : Integer := 0;
       z : Integer := 0;

       task task1 is
          pragma Priority(System.Default_Priority + 3);
       end task1;

       task task2 is
          pragma Priority(System.Default_Priority + 2);
       end task2;

       task task3 is
          pragma Priority(System.Default_Priority + 1);
       end task3;

       task body task1 is
       begin
          x := x + 1;
       end task1;

       task body task2 is
       begin
          y := x + y;
       end task2;

       task body task3 is
       begin
          z := x + y + z;
       end task3;

    begin
       Put(" z = ");
       Put(z);
    end xyz;

I first tried this program (a) without pragmas, with this result: in 100 tries, 2 occurred 86 times, 1 occurred 10 times and 0 occurred 4 times. Then I tried it (b) with pragmas, with this result: in 100 tries, 2 occurred 84 times, 1 occurred 14 times and 0 occurred 2 times. This is unexpected, as the two results are nearly identical: pragmas or no pragmas, the output shows the same behavior. Those who are Ada concurrency gurus, please shed some light on this topic. Alternative solutions with semaphores (if possible) are also invited. Furthermore, in my opinion, for a critical process (which is what we use Ada for), with pragmas the result should be z = 2 100% of the time; otherwise this program should be termed only 85% reliable! (That should not be so with Ada.)
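A hedged sketch of the standard Ada remedy (an assumption about the intent, not from the original post): priorities do not impose ordering, and note also that the main body's Put(z) can run before the tasks finish, since a master only waits for its dependent tasks at its final end. Entry barriers on a protected object, by contrast, do enforce the task1 -> task2 -> task3 order:

    --  Sketch: a protected object whose entry barriers open in sequence,
    --  so Second cannot run before First, nor Third before Second.
    protected Order is
       procedure First  (X : in out Integer);
       entry Second (X : Integer; Y : in out Integer);
       entry Third  (X, Y : Integer; Z : in out Integer);
    private
       Done1 : Boolean := False;
       Done2 : Boolean := False;
    end Order;

    protected body Order is
       procedure First (X : in out Integer) is
       begin
          X := X + 1;
          Done1 := True;
       end First;

       entry Second (X : Integer; Y : in out Integer) when Done1 is
       begin
          Y := X + Y;
          Done2 := True;
       end Second;

       entry Third (X, Y : Integer; Z : in out Integer) when Done2 is
       begin
          Z := X + Y + Z;
       end Third;
    end Order;

Each task would then call Order.First(x), Order.Second(x, y) or Order.Third(x, y, z), and the main body should read z only after the tasks have completed (for example, by making the final Put itself an entry gated on a third flag).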

    Read the article

  • SQL Server 2005 - query with case statement

    - by user329266
Trying to put a single query together to be used eventually in a SQL Server 2005 report. I need to: 1. Pull in all distinct records by the "eventid" column for a time frame - this seems to work. 2. For each eventid referenced above, search all instances of the same eventid to see if there is another record with TaskName like 'review1%'. Again, this seems to work. 3. This is where things get complicated: for each record where TaskName is like 'review1%', I need to see if another record exists with the same eventid and TaskName = 'End'. Ultimately, I need a count of how many records have TaskName like 'review1%', and then how many have TaskName like 'review1%' AND a matching record with TaskName = 'End'. I would think this could be accomplished by computing a new value for each record: if a record with TaskName = 'End' exists for the eventid, set it to 1, otherwise 0. The query below seems to accomplish item #1 above:

    SELECT eventid, TimeStamp, TaskName, filepath
    FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                 ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
          FROM eventrecords
          WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') AND (TimeStamp <= '2010-4-21 00:00:00.000'))) AS T
    WHERE seq = 1
    ORDER BY eventid

And the query below seems to accomplish #2:

    SELECT eventid, TimeStamp, TaskName, filepath
    FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                 ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
          FROM eventrecords
          WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') AND (TimeStamp <= '2010-4-21 00:00:00.000'))
            AND TaskName LIKE 'Review1%') AS T
    WHERE seq = 1
    ORDER BY eventid

This will bring back the eventids that also have TaskName = 'End':

    SELECT eventid, TimeStamp, TaskName, filepath
    FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                 ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
          FROM eventrecords
          WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') AND (TimeStamp <= '2010-4-21 00:00:00.000'))
            AND TaskName LIKE 'Review1%') AS T
    WHERE seq = 1
      AND eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
    ORDER BY eventid

So I've tried the following to TRY to accomplish #3:

    SELECT eventid, TimeStamp, TaskName, filepath
    FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                 ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
          FROM eventrecords
          WHERE ((TimeStamp >= '2010-4-1 00:00:00.000') AND (TimeStamp <= '2010-4-21 00:00:00.000'))
            AND TaskName LIKE 'Review1%') AS T
    WHERE seq = 1
      AND CASE WHEN (eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End') THEN 1 ELSE 0) AS bit END
    ORDER BY eventid

When I try to run this, I get: "Incorrect syntax near the keyword 'then'." I'm not sure what I'm doing wrong, and I haven't seen any examples quite like this anywhere. I should mention that eventrecords has a primary key, but including it doesn't seem to help, and I am not permitted to change the table (ugh). I've received one suggestion to use a cursor and a temporary table, but I'm not sure how badly that would bog down performance when the report runs. Thanks in advance.
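A hedged sketch of the repair (assuming the intent is a 0/1 flag per event rather than a filter): CASE is an expression, so it belongs in the SELECT list, and the stray parentheses and "AS bit" cast are what break the syntax. Something along these lines should parse:

    SELECT eventid, TimeStamp, TaskName, filepath,
           CASE WHEN eventid IN (SELECT eventid FROM eventrecords WHERE TaskName = 'End')
                THEN 1 ELSE 0
           END AS has_end  -- hypothetical column name for the flag
    FROM (SELECT eventid, TimeStamp, filepath, TaskName,
                 ROW_NUMBER() OVER (PARTITION BY eventid ORDER BY TimeStamp DESC) AS seq
          FROM eventrecords
          WHERE TimeStamp >= '2010-4-1 00:00:00.000'
            AND TimeStamp <= '2010-4-21 00:00:00.000'
            AND TaskName LIKE 'Review1%') AS T
    WHERE seq = 1
    ORDER BY eventid

The two counts then fall out of an outer query over this result: COUNT(*) gives the 'review1%' total and SUM(has_end) gives how many of those also reached 'End'.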

    Read the article

  • Stop method not working

    - by avoq
Hi everyone, can anybody tell me why the following code doesn't work properly? I want to play and stop an audio file. Playback works, but whenever I click the stop button nothing happens. Here's the code. Thank you.

    import java.io.*;
    import javax.sound.sampled.*;
    import javax.swing.*;
    import java.awt.event.*;

    public class SoundClipTest extends JFrame {
        final JButton button1 = new JButton("Play");
        final JButton button2 = new JButton("Stop");
        int stopPlayback = 0;

        // Constructor
        public SoundClipTest() {
            button1.setEnabled(true);
            button2.setEnabled(false);

            // button play
            button1.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    button1.setEnabled(false);
                    button2.setEnabled(true);
                    play();
                } // end actionPerformed
            }); // end addActionListener()

            // button stop
            button2.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    // Terminate playback before EOF
                    stopPlayback = 1;
                } // end actionPerformed
            }); // end addActionListener()

            this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            this.setTitle("Test Sound Clip");
            this.setSize(300, 200);
            JToolBar bar = new JToolBar();
            bar.add(button1);
            bar.add(button2);
            bar.setOrientation(JToolBar.VERTICAL);
            add("North", bar);
            add("West", bar);
            setVisible(true);
        }

        void play() {
            try {
                final File inputAudio = new File("first.wav");
                // First, we get the format of the input file
                final AudioFileFormat.Type fileType = AudioSystem.getAudioFileFormat(inputAudio).getType();
                // Then, we get a clip for playing the audio.
                final Clip c = AudioSystem.getClip();
                // We get a stream for playing the input file.
                AudioInputStream ais = AudioSystem.getAudioInputStream(inputAudio);
                // We use the clip to open (but not start) the input stream
                c.open(ais);
                // We get the format of the audio codec (not the file format we got above)
                final AudioFormat audioFormat = ais.getFormat();
                c.start();
                if (stopPlayback == 1) {
                    c.stop();
                }
            } catch (UnsupportedAudioFileException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (LineUnavailableException e) {
                e.printStackTrace();
            }
        } // end play

        public static void main(String[] args) {
            // new SoundClipTest().play();
            new SoundClipTest();
        }
    }
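A minimal sketch of the likely fix (an assumption from reading the code, not a confirmed answer): play() tests stopPlayback exactly once, immediately after start(), so by the time Stop is clicked that check has long since passed. Holding the Clip in a field lets the Stop listener act on it directly:

    // Hypothetical restructuring: hold the Clip in a field instead of a local.
    private Clip clip;

    void play() {
        try {
            AudioInputStream ais = AudioSystem.getAudioInputStream(new File("first.wav"));
            clip = AudioSystem.getClip();
            clip.open(ais);
            clip.start();  // returns immediately; playback runs on its own thread
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // In the Stop button's ActionListener, instead of setting a flag:
    // if (clip != null && clip.isRunning()) { clip.stop(); }

With this change the flag (and the one-shot check after start()) can be removed entirely.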

    Read the article

< Previous Page | 148 149 150 151 152 153 154 155 156 157 158 159  | Next Page >