Search Results

Search found 20909 results on 837 pages for 'boost multi array'.

  • LLVM Clang 5.0 explicit in copy-initialization error

    - by kevzettler
    I'm trying to compile an open source project on OS X that has only been tested on Linux.

        $ g++ -v
        Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
        Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
        Target: x86_64-apple-da

    I'm compiling with the following command line options:

        g++ -MMD -Wall -std=c++0x -stdlib=libc++ -Wno-sign-compare -Wno-unused-variable -ftemplate-depth=1024 -I /usr/local/Cellar/boost/1.55.0/include/boost/ -g -O3 -c level.cpp -o obj-opt/level.o

    I am seeing several errors that look like this:

        ./square.h:39:70: error: chosen constructor is explicit in copy-initialization
            int strength = 0, double flamability = 0, map<SquareType, int> constructions = {}, bool ticking = false);

    The project states that the following are requirements for the Linux setup. How can I confirm that my setup meets them?

        gcc-4.8.2
        git
        libboost 1.5+ with libboost-serialize
        libsfml-dev 2+ (Ubuntu ppa that contains libsfml 2: )
        freeglut-dev
        libglew-dev
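
    If the cause is the usual one behind this diagnostic, the libc++ shipped with that Clang declares std::map's only default-constructor overload as explicit (the C++11 wording), so a default argument written as "= {}" fails, because default arguments use copy-initialization. A minimal sketch of the problem and a workaround follows; the setSquare function is a hypothetical stand-in for the declaration in square.h, not the project's actual code.

        #include <map>

        enum SquareType { Grass, Rock };

        // On the toolchain above, a default argument of "= {}" for a std::map
        // parameter is rejected, since copy-initialization may not call an
        // explicit constructor. Spelling the default value out sidesteps it:
        void setSquare(std::map<SquareType, int> constructions = std::map<SquareType, int>()) {
            (void)constructions;  // placeholder body; the real project does work here
        }

        int main() {
            setSquare();              // uses the default argument
            setSquare({{Grass, 1}});  // the initializer_list constructor is not explicit
            return 0;
        }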

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after all!), but calling the conversion explicitly bloats your code with mostly meaningless function calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmer's" point of view. Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries.

    Now I see two options on how to make implicit conversion work in user code. The first would be to provide a proxy type that implements conversion operators and conversion constructors (and assignments) for all the involved types, and always use that. The second requires a minimal change to the libraries, but allows great flexibility: add a conversion constructor to each involved type that can optionally be enabled externally. For example, for a type A add a constructor:

        template <class T>
        A( const T& src, typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 )
        {
            *this = convert(src);
        }

    and a template

        template <class X, class Y>
        struct conversion_enabled : public boost::mpl::false_ {};

    that disables the implicit conversion by default. Then, to enable conversion between two types, specialize the template:

        template <>
        struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {};

    and implement a convert function that can be found through ADL. I would personally prefer to use the second variant, unless there are strong arguments against it.

    Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in general supply the second method when it's likely that their type will be replicated in software they are most likely being used with? (I'm thinking of 3D-rendering middleware here, where most of those packages implement a 3D vector.)
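
    For illustration, here is a self-contained sketch of what the second option looks like end to end, with std::enable_if and std::true_type standing in for the Boost equivalents so it compiles without dependencies. The names LibA, LibB, Vec3 and draw are all illustrative stand-ins, not from any real middleware.

        #include <type_traits>

        // Disabled by default; users specialize this to opt in.
        template <class From, class To>
        struct conversion_enabled : std::false_type {};

        namespace LibB {
            struct Vec3 { float x, y, z; };
        }

        namespace LibA {
            struct Vec3 {
                float x, y, z;
                Vec3() : x(0), y(0), z(0) {}

                // Opt-in converting constructor, removed by SFINAE unless enabled.
                template <class T,
                          class = typename std::enable_if<conversion_enabled<T, Vec3>::value>::type>
                Vec3(const T& src) { *this = convert(src); }
            };
        }

        namespace LibB {
            // Lives next to the source type so ADL finds it.
            inline LibA::Vec3 convert(const Vec3& v) {
                LibA::Vec3 out;
                out.x = v.x; out.y = v.y; out.z = v.z;
                return out;
            }
        }

        // User code opts in to exactly this one conversion:
        template <>
        struct conversion_enabled<LibB::Vec3, LibA::Vec3> : std::true_type {};

        void draw(const LibA::Vec3&) { /* a function written against LibA's type */ }

        int main() {
            LibB::Vec3 b = {1.0f, 2.0f, 3.0f};
            draw(b);  // converts implicitly through the enabled constructor
            return 0;
        }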

  • How would I use for_each to delete every value in an STL map?

    - by stusmith
    Suppose I have an STL map where the values are pointers, and I want to delete them all. How would I represent the following code, but making use of std::for_each? I'm happy for solutions to use Boost.

        for( stdext::hash_map<int, Foo *>::iterator ir = myMap.begin(); ir != myMap.end(); ++ir )
        {
            delete ir->second; // delete all the (Foo *) values.
        }

    (I've found Boost's checked_delete, but I'm not sure how to apply that to the pair<int, Foo *> that the iterator represents.) (Also, for the purposes of this question, ignore the fact that storing raw pointers that need deleting in an STL container isn't very sensible.)
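
    One common way to express this with std::for_each is a small functor that deletes the second member of each pair. A sketch follows, using std::map and a hand-written functor rather than the asker's stdext::hash_map or Boost, so it compiles anywhere; DeleteSecond is my own name, not a library facility.

        #include <algorithm>
        #include <map>

        struct Foo { int value; };

        // Deletes the mapped pointer of any pair-like element passed to it.
        struct DeleteSecond {
            template <typename Pair>
            void operator()(const Pair& p) const { delete p.second; }
        };

        int main() {
            std::map<int, Foo*> myMap;
            myMap[1] = new Foo();
            myMap[2] = new Foo();

            // Equivalent of the hand-written loop above.
            std::for_each(myMap.begin(), myMap.end(), DeleteSecond());
            myMap.clear();  // the map still holds the (now dangling) pointers
            return 0;
        }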

  • How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list?

    - by gunshor
    How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list? I'm trying to demonstrate how to turn any string of multi-line text into an ordered list. The script (preferably JS) needs to respect: - carriage returns at the end of a line, to mean "that line ends here" - indentations at the beginning of a line, to mean "this is part of the item above it" - dashes at the beginning of a line, to mean "this is a task, and the line above it is its project"

  • Drupal: Exposing a module's data to Views2 using its API

    - by Sepehr Lajevardi
    I'm forking the filefield_stats module to provide it with the ability of exposing data into the Views module via the API. The filefield_stats module db table schema is as follow: <?php function filefield_stats_schema() { $schema['filefield_stats'] = array( 'fields' => array( 'fid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {files}.fid'), 'vid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'Primary Key: the {node}.vid'), 'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The {users}.uid of the downloader'), 'timestamp' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE, 'description' => 'The timestamp of the download'), 'hostname' => array('type' => 'varchar', 'length' => 128, 'not null' => TRUE, 'default' => '', 'description' => 'The hostname downloading the file (usually IP)'), 'referer' => array('type' => 'text', 'not null' => FALSE, 'description' => 'Referer for the download'), ), 'indexes' => array('fid_vid' => array('fid', 'vid')), ); return $schema; } ?> Well, so I implemented the hook_views_api() in filefield_stats.module & added a filefield_stats.views.inc file in the module's root directory, here it is: <?php // $Id$ /** * @file * Provide the ability of exposing data to Views2, for filefield_stats module. */ function filefield_stats_views_data() { $data = array(); $data['filefield_stats']['table']['group'] = t('FilefieldStats'); // Referencing the {node_revisions} table. $data['filefield_stats']['table']['join'] = array( 'node_revisions' => array( 'left_field' => 'vid', 'field' => 'vid', ), /*'files' => array( 'left_field' => 'fid', 'field' => 'fid', ), 'users' => array( 'left_field' => 'uid', 'field' => 'uid', ),*/ ); // Introducing filefield_stats table fields to Views2. // vid: The node's revision ID which wrapped the downloaded file $data['filefield_stats']['vid'] = array( 'title' => t('Node revision ID'), 'help' => t('The node\'s revision ID which wrapped the downloaded file'), 'relationship' => array( 'base' => 'node_revisions', 'field' => 'vid', 'handler' => 'views_handler_relationship', 'label' => t('Node Revision Reference.'), ), ); // uid: The ID of the user who downloaded the file. $data['filefield_stats']['uid'] = array( 'title' => t('User ID'), 'help' => t('The ID of the user who downloaded the file.'), 'relationship' => array( 'base' => 'users', 'field' => 'uid', 'handler' => 'views_handler_relationship', 'label' => t('User Reference.'), ), ); // fid: The ID of the downloaded file. $data['filefield_stats']['fid'] = array( 'title' => t('File ID'), 'help' => t('The ID of the downloaded file.'), 'relationship' => array( 'base' => 'files', 'field' => 'fid', 'handler' => 'views_handler_relationship', 'label' => t('File Reference.'), ), ); // hostname: The hostname which the file has been downloaded from. $data['filefield_stats']['hostname'] = array( 'title' => t('The Hostname'), 'help' => t('The hostname which the file has been downloaded from.'), 'field' => array( 'handler' => 'views_handler_field', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort', ), 'filter' => array( 'handler' => 'views_handler_filter_string', ), 'argument' => array( 'handler' => 'views_handler_argument_string', ), ); // referer: The referer address which the file download link has been triggered from. 
$data['filefield_stats']['referer'] = array( 'title' => t('The Referer'), 'help' => t('The referer which the file download link has been triggered from.'), 'field' => array( 'handler' => 'views_handler_field', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort', ), 'filter' => array( 'handler' => 'views_handler_filter_string', ), 'argument' => array( 'handler' => 'views_handler_argument_string', ), ); // timestamp: The time of the download. $data['filefield_stats']['timestamp'] = array( 'title' => t('Download Time'), 'help' => t('The time of the download.'), 'field' => array( 'handler' => 'views_handler_field_date', 'click sortable' => TRUE, ), 'sort' => array( 'handler' => 'views_handler_sort_date', ), 'filter' => array( 'handler' => 'views_handler_filter_date', ), ); return $data; } // filefield_stats_views_data() ?> According to the Views2 documentations this should work as a minimum, I think. But it doesn't! Also there is no error of any kind, when I come through the views UI, there's nothing about filefield_stats data. Any idea?

  • [Ubuntu 10.04] mdadm - Can't get RAID5 Array To Start

    - by Matthew Hodgkins
    Hello, after a power failure my RAID array refuses to start. When I boot I have to sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 to get mdadm to notice the array. Here are the details (after I force assemble). sudo mdadm --misc --detail /dev/md0: /dev/md0: Version : 00.90 Creation Time : Sun Apr 25 01:39:25 2010 Raid Level : raid5 Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB) Raid Devices : 6 Total Devices : 6 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Thu Jun 17 23:02:38 2010 State : active, Not Started Active Devices : 6 Working Devices : 6 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 128K UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs) Events : 0.1249691 Number Major Minor RaidDevice State 0 8 65 0 active sync /dev/sde1 1 8 81 1 active sync /dev/sdf1 2 8 97 2 active sync /dev/sdg1 3 8 49 3 active sync /dev/sdd1 4 8 33 4 active sync /dev/sdc1 5 8 17 5 active sync /dev/sdb1 mdadm.conf: # by default, scan all partitions (/proc/partitions) for MD superblocks. # alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions /dev/sdb1 /dev/sdb1 # auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes # automatically tag new arrays as belonging to the local system HOMEHOST <system> # definitions of existing MD arrays ARRAY /dev/md0 level=raid5 num-devices=6 UUID=44a8f730:b9bea6ea:3a28392c:12b22235 Any help would be appreciated.

  • Move an existing RAID 5 array from Ubuntu to Gentoo

    - by Cocoabean
    I have a 64-bit Ubuntu machine with a 4-disk RAID 5 using software raid (md). I've been able to boot an Ubuntu LiveCD and recognize the array with a simple mdadm -A /dev/md0. It was easy to mount after that and nothing had to rebuild. I'm installing Gentoo on this box now (multi-boot, non-RAID root partition) and I have md auto-detect turned on in the kernel. When I boot Gentoo I get: "invalid superblock magic on sdd" for each of the drives in the array. I boot back to Ubuntu and they mount no problem. I tried copying the mdadm.conf that works in Ubuntu to Gentoo, and then ran mdadm -A /dev/md0 but it reports that there is no array named md0. I don't want to lose data (obviously) and I don't want to have to let the RAID rebuild every time I switch between OSes. Any help is appreciated. Both are using mdadm 3.1.4 Both are running 64-bit kernels. mdadm -D /dev/md0 from Ubuntu yields: http://pastebin.com/5gj2QNkV UPDATE: After rebooting I noticed that it still complains about invalid blocks, but cat /proc/mdstat shows an inactive /dev/md127 with the same disks as my raid. I want to mount it but I don't want to get stuck waiting for a rebuild or destroying it inadvertently. mdadm -D /dev/md127 Here is pastebin of mdadm -D /dev/md127 on gentoo: http://pastebin.com/gDCWn0Rn UPDATE II: dmesg output about 'invalid raid superblocks' http://paste.ubuntu.com/885471/ fdisk -l from Ubuntu, /dev/md0 does not have any partitions but I do have it mounted and accessible: http://paste.ubuntu.com/885475/

  • Can one build a single disk RAID 1 array?

    - by Core Xii
    I have an Nvidia hardware RAID 1 array and need to reformat. But I don't have any spare storage media, so I need to get along with the 2 disks in the array. I figured I'd do as follows:

    1. Delete the array, so I have 2 separate but identical disks, A and B, with my files
    2. Put disk B aside
    3. Reformat disk A, build RAID 1 array with it, install Windows XP
    4. Put disk B back in
    5. Boot to Windows on disk A, copy my files from disk B to disk A
    6. Add disk B to the RAID 1 array, rebuild array

    And now I'd have a new RAID 1 array, fresh install and all my files intact (the ones I copied). Here are the parts I'm unsure about: Can I build a RAID 1 array using just one disk, then add the other one later? Can Windows on disk A see disk B and allow me to copy my files over?

  • Move OS from RAID5 array to RAID 1 arrays

    - by Antoine
    I want to give a last boost to my old ProLiant ML350 G5 server, which just needs to be reliable for a few more years only! With a defined budget of about $1500 (I do not have more), I plan to replace the CPU (+ adding a second one), the battery cache of my RAID controller (E200i), double the RAM, and change all hard drives. I have 7 HDDs (SAS 10krpm, 72Gb) + 1 spare in RAID5, and my system is all FULL (no empty tray, full disks). In my current RAID5 array, I have 2 partitions:

    - 1 OS partition, 20Gb
    - 1 data partition, 350Gb

    I plan to replace these 8 disks with:

    - 2 x 300Gb SAS 15krpm in RAID 1 (= 1 partition for OS)
    - 2 x 2Tb SATA 7.2krpm in RAID 1 (= 1 partition for DATA)

    My biggest constraint is that I have only one day to upgrade my server. Therefore, I'm looking to clone all my files (OS + data partition) to my new arrays, i.e.:

    - the OS partition shall be cloned to the RAID1 "2x300Gb array"
    - the data partition shall be cloned to the RAID1 "2x2Tb array"

    My second problem is that I need to physically remove all the old hard drives before inserting the new ones. I'm running Windows Server 2003 R2, and even if MS support will expire soon, I cannot buy a new licence and spend time on configuration. Obviously, with $1500, I also cannot buy a new server that I could start configuring from now! I thought about ASR (NTBackup), but I have no floppy drive (and do not really want to invest in one!). I thought about a Clonezilla clone, and read this interesting link: Windows Server 2003 - move C: partition to a new SAS disk, but I'm not so confident in using Clonezilla with RAID5. What would be the best option to quickly and easily (if possible!) "copy/paste" my OS (so no need to reinstall and reconfigure everything) and DATA / programs / services, etc.? Thanks for your comments.

  • How to force mdadm to stop RAID5 array?

    - by lucek
    I have /dev/md127 RAID5 array that consisted of four drives. I managed to hot remove them from the array and currently /dev/md127 does not have any drives: cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid1 sdd1[0] sda1[1] 304052032 blocks super 1.2 [2/2] [UU] md1 : active raid0 sda5[1] sdd5[0] 16770048 blocks super 1.2 512k chunks md127 : active raid5 super 1.2 level 5, 512k chunk, algorithm 2 [4/0] [____] unused devices: <none> and mdadm --detail /dev/md127 /dev/md127: Version : 1.2 Creation Time : Thu Sep 6 10:39:57 2012 Raid Level : raid5 Array Size : 8790402048 (8383.18 GiB 9001.37 GB) Used Dev Size : 2930134016 (2794.39 GiB 3000.46 GB) Raid Devices : 4 Total Devices : 0 Persistence : Superblock is persistent Update Time : Fri Sep 7 17:19:47 2012 State : clean, FAILED Active Devices : 0 Working Devices : 0 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 512K Number Major Minor RaidDevice State 0 0 0 0 removed 1 0 0 1 removed 2 0 0 2 removed 3 0 0 3 removed I've tried to do mdadm --stop /dev/md127 but: mdadm --stop /dev/md127 mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group? I made sure that it's unmounted, umount -l /dev/md127 and confirmed that it indeed is unmounted: umount /dev/md127 umount: /dev/md127: not mounted I've tried to zero superblock of each drive and I get (for each drive): mdadm --zero-superblock /dev/sde1 mdadm: Unrecognised md component device - /dev/sde1 Here's output of lsof|grep md127: lsof|grep md127 md127_rai 276 root cwd DIR 9,0 4096 2 / md127_rai 276 root rtd DIR 9,0 4096 2 / md127_rai 276 root txt unknown /proc/276/exe What else can I do? LVM is not even installed so it can't be a factor.

  • mdadm: Replacing array with entirely new drives

    - by hellfur
    I have a server with three 500GB drives, with most of my data in a RAID5 configuration spanning the three of them. I just purchased and installed four 1TB drives, and the intention is to move off of the old drives and onto the new ones. I have enough SATA ports and power connectors to power all seven of my drives at once, so I've kept the old RAID running while I figure out what to do with the new drives. My question is: Should I create a whole new array on the 1TB drives, then move everything over and reconfigure linux to boot from the new md arrays? Or should I just expand the array, swapping out each of the three 500GBs with the 1TB, then adding the final drive? I've read up on the mdadm extending drive setup, and it makes sense, but I imagine I would use one of the drives as a full backup while I move things over, then add that drive back into the array once things are up and running on three of the 1TB drives, so there's some complication in going that route as well... I'm just not sure which is safer/recommended.

  • Multi-base conversion - using all combinations for URL shortener

    - by Guffa
    I am making a URL shortener, and I am struggling with the optimal way of encoding a number (id) into a character string. I am using the characters 0-9, A-Z, a-z, so it will basically be a base-62 encoding. That is pretty basic, but it doesn't make use of all possible codes. The codes that it would produce would be: 0, 1, ... y, z, 10, 11, ... zy, zz, 100, 101, ... Notice that the codes 00 to 0z are not used, the same for 000 to 0zz, and so on. I would like to use all the codes, like this: 0, 1, ... y, z, 00, 01, ... zy, zz, 000, 001, ... It would be some combination of base-62 and base-63, with different bases depending on the position... Using base-62 is easy, for example:

        create procedure tiny_GetCode
            @UrlId int
        as
        set nocount on
        declare @Code varchar(10)
        set @Code = ''
        while (@UrlId > 0 or len(@Code) = 0)
        begin
            set @Code = substring('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', @UrlId % 62 + 1, 1) + @Code
            set @UrlId = @UrlId / 62
        end
        select @Code

    But I haven't yet managed to make a multi-base conversion out of it, to make use of all the codes.
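
    What this numbering amounts to is bijective base-62, and it only needs a small change to the loop above: subtract 1 after each division, so that codes starting with '0' become reachable and every id maps to exactly one code. A sketch follows; the procedure name is mine and it has not been tested against any particular SQL Server version.

        create procedure tiny_GetCodeBijective
            @UrlId int
        as
        set nocount on
        declare @Code varchar(10)
        set @Code = ''
        -- Bijective base-62: 0 -> '0', 61 -> 'z', 62 -> '00', 63 -> '01', ...
        while (@UrlId >= 0)
        begin
            set @Code = substring('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz', @UrlId % 62 + 1, 1) + @Code
            set @UrlId = @UrlId / 62 - 1   -- the "- 1" is what makes codes like '00' reachable
        end
        select @Code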

  • Generic Dictionary and generating a hashcode for multi-part key

    - by Andrew
    I have an object that has a multi-part key and I am struggling to find a suitable way to override GetHashCode. An example of what the class looks like:

        public class wibble{
            public int keypart1 {get; set;}
            public int keypart2 {get; set;}
            public int keypart3 {get; set;}
            public int keypart4 {get; set;}
            public int keypart5 {get; set;}
            public int keypart6 {get; set;}
            public int keypart7 {get; set;}
            public single value {get; set;}
        }

    Note that in just about every instance of the class, no more than 2 or 3 of the keyparts would have a value greater than 0. Any ideas on how best to generate a unique hashcode in this situation? I have also been playing around with creating a key that is not unique, but spreads the objects evenly between the dictionary's buckets, and then storing objects with matched hashes in a List<>, LinkedList<>, or SortedList<>. Any thoughts on this?
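
    For reference, a common multiply-and-add pattern for combining the parts is sketched below (meant to sit inside the wibble class above). It is one option, not the only one: a hash code does not need to be unique, only well distributed, as long as Equals agrees with it; the Dictionary resolves collisions through Equals.

        // Added inside the wibble class from the question:
        public override int GetHashCode()
        {
            unchecked // overflow is fine, it just wraps
            {
                int hash = 17;
                hash = hash * 31 + keypart1;
                hash = hash * 31 + keypart2;
                hash = hash * 31 + keypart3;
                hash = hash * 31 + keypart4;
                hash = hash * 31 + keypart5;
                hash = hash * 31 + keypart6;
                hash = hash * 31 + keypart7;
                return hash;
            }
        }

        // Equals must agree with GetHashCode for Dictionary lookups to work.
        public override bool Equals(object obj)
        {
            var other = obj as wibble;
            if (other == null) return false;
            return keypart1 == other.keypart1 && keypart2 == other.keypart2
                && keypart3 == other.keypart3 && keypart4 == other.keypart4
                && keypart5 == other.keypart5 && keypart6 == other.keypart6
                && keypart7 == other.keypart7;
        }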

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which is using multi-field primary keys. I have a situation where I have a master and details table, where the details table's primary key contains fields which are also the foreign key's the the master table. Like this: Master primary key fields: master_pk_1 Details primary key fields: master_pk_1 details_pk_2 details_pk_3 In the Master class we define the hibernate/JPA annotations like this: @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator") @Column(name = "master_pk_1") private long masterPk1; @OneToMany(cascade=CascadeType.ALL) @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1") private List<Details> details = new ArrayList<Details>(); And in the details class I have defined the id and back reference like this: @EmbeddedId @AttributeOverrides( { @AttributeOverride( name = "masterPk1", column = @Column(name = "master_pk_1")), @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")), @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")) }) private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey(); @ManyToOne @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable=false) private Master master; The goal of all of this was that I could create a new master, add some details to it, and when saved, JPA/Hibernate would generate the new id for master in the masterPk1 field, and automatically pass it down to the details records, storing it in the matching masterPk1 field in the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that hibernate appears to corectly create and update the records in the database, but not pass the key to the details classes in memory. Instead I have to manually set it myself. I also found that without the insertable=true added to the back reference to master, that hibernate would create sql that had the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception. My question is simply is this arrangement of annotations correct? or is there a better way of doing it?

  • Moses v1.0 multi language ini file

    - by Milan Kocic
    I was working with mosesserver 0.91 and everything works fine but now there is version 1.0 and nothing is same as before. Here is my situation: I want to have multi language translation from arabic to english and from english to arabic. All data and configuration file I have works with 0.91 version of mosesserver. Here is my config file: ------------------------------------------------- ######################### ### MOSES CONFIG FILE ### ######################### # D - decoding path, R - reordering model, L - language model [translation-systems] ar-en D 0 R 0 L 0 en-ar D 1 R 1 L 1 # input factors [input-factors] 0 # mapping steps [mapping] 0 T 0 1 T 1 # translation tables: table type (hierarchical(0), textual (0), binary (1)), source-factors, target-factors, number of scores, file # OLD FORMAT is still handled for back-compatibility # OLD FORMAT translation tables: source-factors, target-factors, number of scores, file # OLD FORMAT a binary table type (1) is assumed [ttable-file] 1 0 0 5 /mnt/models/ar-en/phrase-table/phrase-table 1 0 0 5 /mnt/models/en-ar/phrase-table/phrase-table # no generation models, no generation-file section # language models: type(srilm/irstlm), factors, order, file [lmodel-file] 1 0 5 /mnt/models/ar-en/language-model/en.qblm.mm 1 0 5 /mnt/models/en-ar/language-model/ar.lm.d1.blm.mm # limit on how many phrase translations e for each phrase f are loaded # 0 = all elements loaded [ttable-limit] 20 # distortion (reordering) files [distortion-file] 0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/ar-en/reordering-table/reordering-table.wbe-msd-bidirectional-fe.gz 0-0 wbe-msd-bidirectional-fe-allff 6 /mnt/models/en-ar/reordering-model/reordering-table.wbe-msd-bidirectional-fe.gz # distortion (reordering) weight [weight-d] 0.3 0.3 # lexicalised distortion weights [weight-lr] 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 0.3 # language model weights [weight-l] 0.5000 0.5000 # translation model weights [weight-t] 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 0.2 # no generation models, no weight-generation section # word penalty [weight-w] -1 -1 [distortion-limit] 12 --------------------------------------------------------- So please can someone help me and rewrite this config file so it can work in version 1.0. And i need some python sample code of translation. I am using xmlrpc in python and earler I sent http request with: import xmlrpclib client = xmlrpclib.ServerProxy('http://localhost:8080') client.translate({'text': 'some text', 'system': 'en-ar'}) but now seems there is no more 'system' parameter and moses use always default settings.

  • has_many :through formtastic multi-select field

    - by Tristan O'Neil
    I'm trying to set up a many-to-many relationship using the has_many :through method and then use a multi-select field to set up the relationships. I'm following this tutorial: http://asciicasts.com/episodes/185-formtastic-part-2 However, for some reason the form displays a strange hex number and it changes on each page refresh. I'm not exactly sure what I'm doing wrong. Below is my model/view code.

    company.rb

        has_many :classifications
        has_many :sics, :through => :classifications

    sic.rb

        has_many :classifications
        has_many :companies, :through => :classifications

    classification.rb

        belongs_to :company
        belongs_to :sic

    _form.html.erb

        <% semantic_form_for @company do |f| %>
          <% f.inputs do %>
            <%= f.input :company %>
            <%= f.input :sics %>
          <% end %>
          <%= f.buttons %>
        <% end %>

    Also, here is what the form looks like: it's showing the correct number of entries for the field, but it is clearly not showing the correct name for the relationship.

  • Multi-table Update(MySQL)

    - by smokinguns
    Hey all, I have a question regarding multi-table update (MySQL). Consider tables t1 and t2. The primary key for t1 is 'tid', which is a foreign key in t2. There is a field "qtyt2" in t2 which depends on a field called "qtyt1" in table t1. Consider the following SQL statement:

        UPDATE t2,t1
        SET t2.qtyt2 = IF(( t2.qtyt2 - t1.qtyt1 ) < 0, 0, ( t2.qtyt2 - t1.qtyt1 )),
            t1.qtyt1 = "Some value.."
        WHERE t2.tid = "some value.." AND t2.tid = t1.tid

    In this example qtyt2 depends on qtyt1 for the update, and the latter itself is updated. Now the result should return 2 if two rows are updated. Is there a guarantee that the fields will be updated in the order in which they appear in the statement (first qtyt2 will be set and then qtyt1)? Is it possible that qtyt1 will be set first and then qtyt2? Is the order of tables in the statement important (UPDATE t2,t1 or UPDATE t1,t2)? I found that if I wrote "UPDATE t1,t2" then only t1 would get updated, but on changing the statement to "UPDATE t2,t1" everything worked correctly.

  • Validation with State Pattern for Multi-Page Forms in ASP.NET

    - by philrabin
    I'm trying to implement the state pattern for a multi-page registration form. The data on each page will be accumulated and stored in a session object. Should validation (including service layer calls to the DB) occur on the page level or inside each state class? In other words, should the concrete implementation of IState be concerned with the validation or should it be given a fully populated and valid object? See "EmptyFormState" class below: namespace Example { public class Registrar { private readonly IState formEmptyState; private readonly IState baseInformationComplete; public RegistrarSessionData RegistrarSessionData { get; set;} public Registrar() { RegistrarSessionData = new RegistrarSessionData(); formEmptyState = new EmptyFormState(this); baseInformationComplete = new BasicInfoCompleteState(this); State = formEmptyState; } public IState State { get; set; } public void SubmitData(RegistrarSessionData data) { State.SubmitData(data); } public void ProceedToNextStep() { State.ProceedToNextStep(); } } //actual data stored in the session //to be populated by page public class RegistrarSessionData { public string FirstName { get; set; } public string LastName { get; set; } //will include values of all 4 forms } //State Interface public interface IState { void SubmitData(RegistrarSessionData data); void ProceedToNextStep(); } //Concrete implementation of IState //Beginning state - no data public class EmptyFormState : IState { private readonly Registrar registrar; public EmptyFormState(Registrar registrar) { this.registrar = registrar; } public void SubmitData(RegistrarSessionData data) { //Should Validation occur here? //Should each state object contain a validation class? (IValidator ?) //Should this throw an exception? } public void ProceedToNextStep() { registrar.State = new BasicInfoCompleteState(registrar); } } //Next step, will have 4 in total public class BasicInfoCompleteState : IState { private readonly Registrar registrar; public BasicInfoCompleteState(Registrar registrar) { this.registrar = registrar; } public void SubmitData(RegistrarSessionData data) { //etc } public void ProceedToNextStep() { //etc } } }

  • Sending multi-part email from Google App Engine using Spring's JavaMailSender fails

    - by hleinone
    It works without the multi-part (modified from the example in Spring documentation): final MimeMessagePreparator preparator = new MimeMessagePreparator() { public void prepare(final MimeMessage mimeMessage) throws Exception { final MimeMessageHelper message = new MimeMessageHelper( mimeMessage); message.setTo(toAddress); message.setFrom(fromAddress); message.setSubject(subject); final String htmlText = FreeMarkerTemplateUtils .processTemplateIntoString(configuration .getTemplate(htmlTemplate), model); message.setText(htmlText, true); } }; mailSender.send(preparator); But once I change it to: final MimeMessagePreparator preparator = new MimeMessagePreparator() { public void prepare(final MimeMessage mimeMessage) throws Exception { final MimeMessageHelper message = new MimeMessageHelper( mimeMessage, true); ... message.setText(plainText, htmlText); } }; mailSender.send(preparator); I get: Failed message 1: javax.mail.MessagingException: Converting attachment data failed at com.google.appengine.api.mail.stdimpl.GMTransport.sendMessage(GMTransport.java:231) at org.springframework.mail.javamail.JavaMailSenderImpl.doSend(JavaMailSenderImpl.java:402) ... This is especially difficult since the GMTransport is proprietary Google class and no sources are available, which would make it a bit easier to debug. Anyone have any ideas what to try next? My bean config, for helping you to help me: <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl" p:username="${mail.username}" p:password="${mail.password}" p:protocol="gm" />

  • Algorithm to detect how many words typed, also multi sentence support (Java)

    - by Alex Cheng
    Hello all. Problem: I have to design an algorithm which does the following for me: Say that I have a line, e.g.

        alert tcp 192.168.1.1 (caret is currently here)

    The algorithm should process this line and return a value of 4. I coded something for it; I know it's sloppy, but it works, partly.

        private int counter = 0;

        public void determineRuleActionRegion(String str, int index) {
            if (str.length() == 0 || str.indexOf(" ") == -1) {
                triggerSuggestionList(1);
                return;
            }

            // remove duplicate spaces, and spaces in front and back, before searching
            int num = str.trim().replaceAll(" +", " ").indexOf(" ", index);

            // Check for occurrences of spaces, recursively
            if (num == -1) { // if there is no space
                // no need to check if it's 0 times, it will assign to 1
                triggerSuggestionList(counter + 1);
                counter = 0;
                return; // set to rule action
            } else { // there is a space
                counter++;
                determineRuleActionRegion(str, num + 1);
            }
        } // end of determineRuleActionRegion()

    So basically I look for the spaces and determine the region (number of words typed). However, I want it to change upon the user pressing the space bar <space character>. How may I go around with the current code? Or better yet, how would one suggest I do it the correct way? I'm looking at BreakIterator for this case... To add to that, I believe my algorithm won't work for multiple sentences. How should I address this problem as well? -- The source of String str is acquired from textPane.getText(0, pos + 1), the JTextPane. Thanks in advance. Do let me know if my question is still not specific enough.
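
    A simpler alternative worth comparing against: split the text before the caret on whitespace and count the pieces, adding one more region when the last typed character is a space. The sketch below is my own reformulation (class and method names are not from the original code), and it treats sentence punctuation as just part of a word, which is one way of coping with multiple sentences.

        public class WordRegionCounter {

            // Returns which whitespace-separated region the caret is in, given the
            // text before the caret (e.g. "alert tcp 192.168.1.1 " -> 4, because a
            // trailing space opens a new, still-empty fourth region).
            public static int regionAt(String textBeforeCaret) {
                String trimmed = textBeforeCaret.trim();
                if (trimmed.isEmpty()) {
                    return 1; // nothing typed yet: we are in the first region
                }
                int words = trimmed.split("\\s+").length;
                boolean endsWithSpace = Character.isWhitespace(
                        textBeforeCaret.charAt(textBeforeCaret.length() - 1));
                return endsWithSpace ? words + 1 : words;
            }

            public static void main(String[] args) {
                System.out.println(regionAt("alert tcp 192.168.1.1 ")); // 4
                System.out.println(regionAt("alert tcp"));              // 2
                System.out.println(regionAt(""));                       // 1
            }
        }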

  • Multi-accordion help (CSS issue maybe?)

    - by Josh
    So, I've been trying to develop this multi-accordion news section for this site. It's actually all working, thanks to an insightful plugin. I've modified it a little bit so it works as I want it to, but I've run into two issues, one which is possibly CSS. Issue #1: The idea for the user is that when they view this page, they see all the recent headlines. They can also see who it has been posted by and how many comments have been made to this article. If they wish, they can then click on the headline and the field will expand into the article. They can then either make a comment or view the comments via clicking the View Comments link or clicking the "number of comments" link in the "Posted by..." area (a shortcut to the comments basically). The problem I'm having is if I make the AUTHOR or the "0" comments a link, it breaks the accordion because the accordion uses an A CLASS to open it up. I'm looking for a fix, I've tried making it a H1 or a DIV but that also breaks it. Issue #2: This is a pretty picky one, but when you click the headline it expands, but at least in Firefox (haven't tested it in Chrome yet) the text jumps from the right and to the left, locking in place from which the CSS tells it to (padding-left). I don't know why it's exactly doing that, if anyone has any insight on that, it'd be appreciated. A two-parter to this issue is when you open the Headline to the article and then decide to close the article by clicking on the Headline, parts of the accordion jumps from the darker purple to the light purple before the task is finished. I'm also interested fixing this, but this issue in its entirety are all pretty nit picky things. You can view the demo of the site here: http://www.notedls.com/demo Please if anyone has any advice or fixes, I'd appreciate it, I've been trying to get this all to work to the best of my ability, but I'm clearly no guru or expert. Thanks!

  • C# Multi CheckboxList update inserts checked records but doesn't delete unchecked records

    - by DLL
    I have a multi checkboxlist on a formview. Both use queries in a tableadapter. I'm using VS 2012. When the user updates the form, I use the following code to update the checkbox data. If a user checks a new box, a new record is inserted correctly, however if the user unchecks a box the existing record is not deleted. The delete query works fine if I run it from the query builder in the table adapter, it's reaching the expected line in the code correctly, all values are correct and I receive no errors. I use a similar query to delete records when the form level data is deleted which works fine. The very last line is the one that doesn't work. Query: DELETE FROM [SLA_Categories] WHERE (([SLA_ID] = @SLA_ID) AND ([Choice_ID] = @Choice_ID)) protected void FormView1_ItemUpdating(object sender, FormViewUpdateEventArgs e) { if (FormView1.DataKey.Value != null) { Categs = (CheckBoxList)FormView1.FindControl("CheckBoxList1"); CurrentSLA_ID = (int)FormView1.DataKey.Value; } if (CurrentSLA_ID > 0) { foreach (ListItem li in Categs.Items) { // See if there's a record for the current sla in this category int CurrentChoice_ID = Convert.ToInt32(li.Value); SLADataSetTableAdapters.SLA_CategoriesTableAdapter myAdapter; myAdapter = new SLADataSetTableAdapters.SLA_CategoriesTableAdapter(); int myCount = (int)myAdapter.FindCategoryBySLA_IDAndChoice_ID(CurrentSLA_ID, CurrentChoice_ID); // If this category is checked and there is not an existing rec, insert one if (li.Selected == true && myCount < 1) { // Insert a rec for this sla myAdapter.InsertCategory(CurrentChoice_ID, CurrentSLA_ID); } // If this category is unchecked and there is and existing rec, delete it if (li.Selected == false && myCount > 0) { // Delete this rec myAdapter.DeleteCategoryBySLA_IDAndChoice_ID(CurrentChoice_ID, CurrentSLA_ID); } } } }

  • remove specific values from multi value dictionary

    - by Anthony
    I've seen posts here on how to make a dictionary that has multiple values per key, like one of the solutions presented in this link: Multi Value Dictionary. It seems that I have to use a List<> as the value for the keys, so that a key can store multiple values. The solution in the link is fine if you want to add values. But my problem now is how to remove specific values from a single key. I have this code for adding values to a dictionary:

        private Dictionary<eVtEvtId, List<VtEvtDelegate>> mEventDict;

        // this is for initializing the dictionary and adding values to it.
        // if the "key" (inEvent) is not yet present in the dictionary,
        // the key will be added first before the value
        public void Subscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            if (mEventDict.ContainsKey(inEvent))
            {
                mEventDict[inEvent].Add(inCallbackMethod);
            }
            else
            {
                mEventDict.Add(inEvent, new List<VtEvtDelegate>() { inCallbackMethod });
            }
        }

    My problem now is removing a specific value from a key. I have this code:

        public void Unsubscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            try
            {
                mEventDict[inEvent].Remove(inCallbackMethod);
            }
            catch (ArgumentNullException)
            {
                MessageBox.Show("The event is not yet present in the dictionary");
            }
        }

    Basically, what I did is just replace the Add() with Remove(). Will this work? Also, if you have any problems or questions with the code (initialization, etc.), feel free to ask. Thanks for the advice.
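
    For comparison, a sketch of an Unsubscribe that checks for the key explicitly instead of relying on an exception (indexing a Dictionary with a missing key throws KeyNotFoundException rather than ArgumentNullException) and that also drops the key once its list is empty. This is my own variation, not code from the linked post.

        public void Unsubscribe(eVtEvtId inEvent, VtEvtDelegate inCallbackMethod)
        {
            List<VtEvtDelegate> callbacks;
            if (!mEventDict.TryGetValue(inEvent, out callbacks))
            {
                return; // event was never subscribed to; nothing to remove
            }

            callbacks.Remove(inCallbackMethod); // removes the first matching entry, if any

            if (callbacks.Count == 0)
            {
                mEventDict.Remove(inEvent); // tidy up: drop the key once its list is empty
            }
        }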

  • Raphael SVG VML Implement Multi Pivot Points for Rotation

    - by Cody N
    Over the last two days I've effectively figured out how NOT to rotate Raphael Elements. Basically I am trying to implement multiple pivot points on an element to rotate it by mouse. When a user enters rotation mode, 5 pivots are created: one for each corner of the bounding box and one in the center of the box. When the mouse is down and moving, it is simple enough to rotate around the pivot using Raphael's element.rotate(degrees, x, y) and calculating the degrees based on the mouse position and atan2 to the pivot point. The problem arises after I've rotated the element, bbox, and the other pivots. Their x,y position is the same; only their viewport is different. In an SVG-enabled browser I can create new pivot points based on matrixTransformation and getCTM. However, after creating the first set of new pivots, every subsequent rotation moves the pivots further away from the transformed bbox due to rounding errors. The above is not even an option in IE, since it is VML-based and cannot account for the transformation. Is the only effective way to implement element rotation by using rotate absolute or rotating around the center of the bounding box? Is it possible at all to create multiple pivot points for an object and update them after mouseup so they remain in the corners and center of the transformed bbox?

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PYML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yield the following error message: pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject Does anyone know of a way around this or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml,saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename=os.path.join(prefix,labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still gettting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0
