Search Results

Search found 11543 results on 462 pages for 'partition wise join'.

  • Merge replication server side foreign key violation from unpublished table

    - by Reiste
     We are using SQL Server 2005 Merge Replication with SQL CE 3.5 clients. We are using partitions with filtering for the separate subscriptions, and nHibernate for the ORM mapping. There is automatic ID range management from SQL Server for the subscriptions.

     We have a table, Item, and a table with a foreign key to Item - ItemHistory. Both of these are replicated down, filtered according to the subscription. Item has a column called UserId, and is filtered per subscription with this filter:

     ```sql
     WHERE UserId IN (SELECT... [complicated subselect]...)
     ```

     ItemHistory hangs off Item in the publication filter articles. On the server, we have a table ItemHistoryExport, which has a foreign key to ItemHistory. ItemHistoryExport is not published.

     Entries in the Item and ItemHistory tables are never deleted, on the server or the client. However, the "ownership" of items (and hence their ItemHistories) MAY change, which causes them to be moved from one client subscription/partition to another from time to time. When we sync, we occasionally get the following error:

     ```
     A row delete at '48269404 - 4108383dbb11' could not be propagated to 'MyServer\MyInstance.MyDatabase'.
     This failure can be caused by a constraint violation. The DELETE statement conflicted with the
     REFERENCE constraint "FK_ItemHistoryExport_ItemHistory". The conflict occurred in database
     "MyDatabase", table "dbo.ItemHistoryExport", column 'ItemHistoryId'.
     ```

     Can anyone help us understand why this happens? There shouldn't ever be a delete happening on the server side.

  • Django: Odd mark_safe behaviour?

    - by Mark
     I wrote this little function for writing out HTML tags:

     ```python
     def html_tag(tag, content=None, close=True, attrs={}):
         lst = ['<', tag]
         for key, val in attrs.iteritems():
             lst.append(' %s="%s"' % (key, escape_html(val)))
         if close:
             if content is None:
                 lst.append(' />')
             else:
                 lst.extend(['>', content, '</', tag, '>'])
         else:
             lst.append('>')
         return mark_safe(''.join(lst))
     ```

     Which worked great, but then I read this article on efficient string concatenation (I know it doesn't really matter for this, but I wanted consistency) and decided to update my script:

     ```python
     def html_tag(tag, body=None, close=True, attrs={}):
         s = StringIO()
         s.write('<%s' % tag)
         for key, val in attrs.iteritems():
             s.write(' %s="%s"' % (key, escape_html(val)))
         if close:
             if body is None:
                 s.write(' />')
             else:
                 s.write('>%s</%s>' % (body, tag))
         else:
             s.write('>')
         return mark_safe(s.getvalue())
     ```

     But now my HTML gets escaped when I try to render it from my template. Everything else is exactly the same. It works properly if I replace the last line with return mark_safe(unicode(s.getvalue())). I checked the return type of s.getvalue(): it should be a str, just like in the first function, so why is this failing? It also fails with SafeString(s.getvalue()) but succeeds with SafeUnicode(s.getvalue()). I'd also like to point out that I used return mark_safe(s.getvalue()) in a different function with no odd behavior.

     The "call stack" looks like this:

     ```python
     class Input(Widget):
         def render(self):
             return html_tag('input', attrs={'type': self.itype, 'id': self.id,
                                             'name': self.name, 'value': self.value,
                                             'class': self.itype})

     class Field:
         def __unicode__(self):
             return mark_safe(self.widget.render())
     ```

     And then {{myfield}} is in the template. So it does get mark_safe'd twice, which I thought might have been the problem, but I tried removing that too. I really have no idea what's causing this, but it's not too hard to work around, so I guess I won't fret about it.

  • Adding Table Columns to a Group by clause - Ruby on Rails - Postgresql

    - by bgadoci
     I am trying to use Heroku, and apparently PostgreSQL is a lot more strict about aggregate functions. When I push to Heroku I get an error stating the below. On another question I asked, I received some guidance that said I should just add the columns to my GROUP BY clause, and I am not sure how to do that. See the failing query below and the PostsController#index.

     ```sql
     SELECT posts.*, count(*) as vote_total
     FROM "posts"
     INNER JOIN "votes" ON votes.post_id = posts.id
     GROUP BY votes.post_id
     ORDER BY created_at DESC
     LIMIT 5 OFFSET 0
     ```

     PostsController:

     ```ruby
     def index
       @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20)
       conditions, joins = {}, :votes
       @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20)
       conditions, joins = {}, :votes
       @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20)
       conditions, joins = {}, :votes
       unless (params[:tag_name] || "").empty?
         conditions = ["tags.tag_name = ? ", params[:tag_name]]
         joins = [:tags, :votes]
       end
       @posts = Post.paginate(
         :select => "posts.*, count(*) as vote_total",
         :joins => joins,
         :conditions => conditions,
         :group => "votes.post_id",
         :order => "created_at DESC",
         :page => params[:page],
         :per_page => 5)
       @popular_posts = Post.paginate(
         :select => "posts.*, count(*) as vote_total",
         :joins => joins,
         :conditions => conditions,
         :group => "votes.post_id",
         :order => "vote_total DESC",
         :page => params[:page],
         :per_page => 3)
       respond_to do |format|
         format.html # index.html.erb
         format.xml  { render :xml => @posts }
         format.json { render :json => @posts }
         format.atom
       end
     end
     ```
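
     For reference, a hedged sketch of the usual PostgreSQL-side fix (the posts column names here are assumptions, not the real schema): every non-aggregated column in the SELECT list must appear in the GROUP BY, so grouping by the posts primary key plus the other selected columns, instead of votes.post_id, satisfies the stricter rule:

     ```sql
     -- Sketch only: replace the assumed column list with the actual posts columns.
     SELECT posts.id, posts.title, posts.created_at, count(*) AS vote_total
     FROM posts
     INNER JOIN votes ON votes.post_id = posts.id
     GROUP BY posts.id, posts.title, posts.created_at
     ORDER BY posts.created_at DESC
     LIMIT 5 OFFSET 0;
     ```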

  • MySQL stuck on "using filesort" when doing an "order by"

    - by noko
     I can't seem to get my query to stop using filesort. This is my query:

     ```sql
     SELECT s.`pilot`, p.`name`, s.`sector`, s.`hull`
     FROM `pilots` p
     LEFT JOIN `ships` s ON ((s.`game` = p.`game`) AND (s.`pilot` = p.`id`))
     WHERE p.`game` = 1
       AND p.`id` <> 2
       AND s.`sector` = 43
       AND s.`hull` > 0
     ORDER BY p.`last_move` DESC
     ```

     Table structures:

     ```sql
     CREATE TABLE IF NOT EXISTS `pilots` (
       `id` mediumint(5) unsigned NOT NULL AUTO_INCREMENT,
       `game` tinyint(3) unsigned NOT NULL DEFAULT '0',
       `last_move` int(10) NOT NULL DEFAULT '0',
       UNIQUE KEY `id` (`id`),
       KEY `last_move` (`last_move`),
       KEY `game_id_lastmove` (`game`,`id`,`last_move`)
     ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;

     CREATE TABLE IF NOT EXISTS `ships` (
       `id` mediumint(5) unsigned NOT NULL AUTO_INCREMENT,
       `game` tinyint(3) unsigned NOT NULL DEFAULT '0',
       `pilot` mediumint(5) unsigned NOT NULL DEFAULT '0',
       `sector` smallint(5) unsigned NOT NULL DEFAULT '0',
       `hull` smallint(4) unsigned NOT NULL DEFAULT '50',
       UNIQUE KEY `id` (`id`),
       KEY `game` (`game`),
       KEY `pilot` (`pilot`),
       KEY `sector` (`sector`),
       KEY `hull` (`hull`),
       KEY `game_2` (`game`,`pilot`,`sector`,`hull`)
     ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=8 ;
     ```

     The explain:

     ```
     id  select_type  table  type  possible_keys         key               key_len  ref                               rows  Extra
     1   SIMPLE       p      ref   id,game_id_lastmove   game_id_lastmove  1        const                             7     Using where; Using filesort
     1   SIMPLE       s      ref   game,pilot,sector...  game_2            6        const,fightclub_alpha.p.id,const  1     Using where; Using index
     ```

     edit: I cut some of the unnecessary pieces out of my queries/table structure. Anybody have any ideas?
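
     A hedged sketch of one likely fix: the existing `game_id_lastmove` index can't satisfy the sort because `id <> 2` is a range, not an equality, so `last_move` falls out of index order. With only an equality on `game` ahead of the sort column, a two-column index lets MySQL read pilots rows already in `last_move` order; whether the optimizer actually picks it still depends on the data:

     ```sql
     -- Sketch: equality on `game` plus ORDER BY `last_move` matches this index order,
     -- so the sort can come from an index scan instead of a filesort.
     ALTER TABLE `pilots` ADD KEY `game_lastmove` (`game`, `last_move`);
     ```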

  • Query returns too few rows

    - by Tareq
     Setup:

     ```sql
     mysql> create table product_stock( product_id integer, qty integer, branch_id integer);
     Query OK, 0 rows affected (0.17 sec)

     mysql> create table product( product_id integer, product_name varchar(255));
     Query OK, 0 rows affected (0.11 sec)

     mysql> insert into product(product_id, product_name) values(1, 'Apsana White DX Pencil');
     Query OK, 1 row affected (0.05 sec)

     mysql> insert into product(product_id, product_name) values(2, 'Diamond Glass Marking Pencil');
     Query OK, 1 row affected (0.03 sec)

     mysql> insert into product(product_id, product_name) values(3, 'Apsana Black Pencil');
     Query OK, 1 row affected (0.03 sec)

     mysql> insert into product_stock(product_id, qty, branch_id) values(1, 100, 1);
     Query OK, 1 row affected (0.03 sec)

     mysql> insert into product_stock(product_id, qty, branch_id) values(1, 50, 2);
     Query OK, 1 row affected (0.03 sec)

     mysql> insert into product_stock(product_id, qty, branch_id) values(2, 80, 1);
     Query OK, 1 row affected (0.03 sec)
     ```

     My query:

     ```sql
     SELECT IFNULL(SUM(s.qty),0) AS stock, product_name
     FROM product_stock s
     RIGHT JOIN product p ON s.product_id = p.product_id
     WHERE branch_id = 1
     GROUP BY product_name
     ORDER BY product_name;
     ```

     returns:

     ```
     +-------+-------------------------------+
     | stock | product_name                  |
     +-------+-------------------------------+
     |   100 | Apsana White DX Pencil        |
     |    80 | Diamond Glass Marking Pencil  |
     +-------+-------------------------------+
     2 rows in set (0.00 sec)
     ```

     But I want to have the following result:

     ```
     +-------+------------------------------+
     | stock | product_name                 |
     +-------+------------------------------+
     |     0 | Apsana Black Pencil          |
     |   100 | Apsana White DX Pencil       |
     |    80 | Diamond Glass Marking Pencil |
     +-------+------------------------------+
     ```

     To get this result, what mysql query should I run?
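
     A hedged sketch of the usual fix: the `WHERE branch_id = 1` filter runs after the RIGHT JOIN, and the unmatched 'Apsana Black Pencil' row carries a NULL branch_id, so the WHERE clause discards it. Moving the branch filter into the join condition keeps every product:

     ```sql
     SELECT IFNULL(SUM(s.qty), 0) AS stock, p.product_name
     FROM product_stock s
     RIGHT JOIN product p
            ON s.product_id = p.product_id
           AND s.branch_id = 1      -- branch filter inside ON keeps unmatched products
     GROUP BY p.product_name
     ORDER BY p.product_name;
     ```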

  • how to return multiple array items using json/jquery

    - by Scarface
     Hey guys, quick question. I have a query that will usually return multiple results from a database; while I know how to return one result, I am not sure how to return multiple in jQuery. I just want to take each of the returned results and run them through my prepare function. I have been trying to use 'for' to handle the array of data but I don't think it can work since I am returning different array values. If anyone has any suggestions, I would really appreciate it.

     JQUERY RETRIEVAL:

     ```javascript
     for (i = 0; i < json.rows; i++) {
         $('#users_online').append(online_users(json[i]));
         $('#online_list-' + count2).fadeIn(1500);
     }
     ```

     PHP PROCESSING:

     ```php
     $qryuserscount1 = "SELECT active_users.username, COUNT(scrusersonline.id) AS rows
                        FROM scrusersonline
                        LEFT JOIN active_users ON scrusersonline.id = active_users.id
                        WHERE topic_id = '$topic_id'";
     $userscount1 = mysql_query($qryuserscount1);

     while ($row = mysql_fetch_array($userscount1)) {
         $onlineuser = $row['username'];
         $rows = $row['rows'];
         if ($username == $onlineuser) {
             $str2 = "<a href=\"statistics.php?user=$onlineuser\"><div class=\"me\">$onlineuser</div></a>";
         } else {
             $str2 = "<b><a href=\"statistics.php?user=$onlineuser\"><div class=\"others\">$onlineuser</div></a></b>";
         }
         $data['rows'] = $rows;
         $data['entry'] = $str1 . $str2;
     }
     ```

  • should this database table be normalized?

    - by oo
     I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables.

     Today, there is one table called workouts that has the following fields:

     id, exercise_id, reps, weight, date, person_id

     So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

     ```
     id, exercise_id, reps, weight, date,     person_id
     1,  1,           10,   100,    1/1/2010, 10
     2,  1,           10,   100,    1/1/2010, 10
     3,  1,           10,   100,    1/1/2010, 10
     4,  2,           10,   100,    1/1/2010, 10
     5,  2,           10,   100,    1/1/2010, 10
     6,  2,           10,   100,    1/1/2010, 10
     ```

     So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized to three tables?

     WorkoutSummary:
       - id
       - date
       - person_id

     WorkoutExercise:
       - id
       - workout_id (foreign key into WorkoutSummary)
       - exercise_id

     WorkoutSets:
       - id
       - workout_exercise_id (foreign key into WorkoutExercise)
       - reps
       - weight

     I would guess the downside is that queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that had no joins before (see the sketch below). The benefit of the refactoring is that it allows us in the future to add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?
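
     To make the trade-off concrete, here is a hedged sketch of the query the three-table design would need in order to reproduce today's no-join lookup (table and column usage follows the proposed schema above):

     ```sql
     -- Rebuild one person's workout for a given day across the three proposed tables.
     SELECT ws.date, we.exercise_id, s.reps, s.weight
     FROM WorkoutSummary ws
     JOIN WorkoutExercise we ON we.workout_id = ws.id
     JOIN WorkoutSets s ON s.workout_exercise_id = we.id
     WHERE ws.person_id = 10
       AND ws.date = '2010-01-01';
     ```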

  • Problem using Hibernate Projections

    - by Lucas
     Hello! I'm using RichFaces + Hibernate Query to create a data list. I'm trying to use Hibernate Projections to group my query result. Here is the code:

     ```java
     final DetachedCriteria criteria = DetachedCriteria
         .forClass(Class.class, "c")
         .setProjection(Projections.projectionList()
             .add(Projections.groupProperty("c.id")));
     ```

     ... and in the .xhtml file I have the following code:

     ```xml
     <rich:dataTable width="100%" id="dataTable" value="#{myBean.dataModel}" var="row">
         <f:facet name="header">
             <rich:columnGroup>
                 ......
             </rich:columnGroup>
         </f:facet>
         <h:column>
             <h:outputText value="#{row.id}"/>
         </h:column>
         <h:column>
             <h:outputText value="#{row.name}"/>
         </h:column>
     ```

     But when I run the page it gives me the following error:

     ```
     Error: value="#{row.id}": The class 'java.lang.Long' does not have the property 'id'.
     ```

     If I take out the Projection from the code it works correctly, but it doesn't group the result. So, what mistake could be happening here?

     EDIT: Here is the full criteria:

     ```java
     final DetachedCriteria criteria = DetachedCriteria.forClass(Class.class, "c");
     criteria.setFetchMode("e.zzzzz", FetchMode.JOIN);
     criteria.createAlias("e.aaaaaaaa", "aa");
     criteria.add(Restrictions.ilike("aa.information", "informations...."));
     criteria.setProjection(Projections.distinct(Projections.projectionList()
         .add(Projections.groupProperty("e.id").as("e.id"))));
     getDao().findByCriteria(criteria);
     ```

     If I take out the setProjection line it works fine. I don't understand why it gives that error with that line in place.

  • Problem counting item frequency on T-SQL

    - by Raúl Roa
     I'm trying to count the frequency of numbers from 1 to 100 on different fields of a table. Let's say I have the table Results with the following data:

     ```
     LottoId   Winner    Second    Third
     --------- --------- --------- ---------
     1         1         2         3
     2         1         2         3
     ```

     I'd like to be able to get the frequency per number. For that I'm using the following code:

     ```sql
     --Creating numbers temp table
     CREATE TABLE #Numbers(
         Number int)

     --Inserting the numbers into the temp table
     declare @counter int
     set @counter = 0
     while @counter < 100
     begin
         set @counter = @counter + 1
         INSERT INTO #Numbers(Number) VALUES(@counter)
     end

     --
     SELECT #Numbers.Number,
            Count(Results.Winner) as Winner,
            Count(Results.Second) as Second,
            Count(Results.Third) as Third
     FROM #Numbers
     LEFT JOIN Results ON #Numbers.Number = Results.Winner
                       OR #Numbers.Number = Results.Second
                       OR #Numbers.Number = Results.Third
     GROUP BY #Numbers.Number
     ```

     The problem is that the counts are repeating the same values for each number. In this particular case I'm getting the following result:

     ```
     Number    Winner    Second    Third
     --------- --------- --------- ---------
     1         2         2         2
     2         2         2         2
     3         2         2         2
     ...
     ```

     When I should get this:

     ```
     Number    Winner    Second    Third
     --------- --------- --------- ---------
     1         2         0         0
     2         0         2         0
     3         0         0         2
     ...
     ```

     What am I missing?
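
     A hedged sketch of a fix: because the join condition matches on any of the three columns, COUNT(Results.Winner) counts every joined row whether it matched on Winner, Second, or Third. Counting each column only when it actually equals the number keeps the three tallies independent:

     ```sql
     SELECT n.Number,
            SUM(CASE WHEN r.Winner = n.Number THEN 1 ELSE 0 END) AS Winner,
            SUM(CASE WHEN r.Second = n.Number THEN 1 ELSE 0 END) AS Second,
            SUM(CASE WHEN r.Third  = n.Number THEN 1 ELSE 0 END) AS Third
     FROM #Numbers n
     LEFT JOIN Results r
            ON n.Number IN (r.Winner, r.Second, r.Third)
     GROUP BY n.Number
     ```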

  • How do I Handle Ties When Ranking Results in MySQL?

    - by Laxmidi
     Hi, how does one handle ties when ranking results in a MySQL query? I've simplified the table names and columns in this example, but it should illustrate my problem:

     ```sql
     SET @rank=0;
     SELECT student_names.students,
            @rank := @rank + 1 AS rank,
            scores.grades
     FROM student_names
     LEFT JOIN scores ON student_names.students = scores.students
     ORDER BY scores.grades DESC
     ```

     So imagine the above query produces:

     ```
     Students   Rank   Grades
     Al         1      90
     Amy        2      90
     George     3      78
     Bob        4      73
     Mary       5      NULL
     William    6      NULL
     ```

     Even though Al and Amy have the same grade, one is ranked higher than the other. Amy got ripped off. How can I make it so that Amy and Al have the same ranking, so that they both have a rank of 1? Also, William and Mary didn't take the test. They bagged class and were smoking in the boy's room. They should be tied for last place. The correct ranking should be:

     ```
     Students   Rank   Grades
     Al         1      90
     Amy        1      90
     George     3      78
     Bob        4      73
     Mary       5      NULL
     William    5      NULL
     ```

     If anyone has any advice, please let me know. Thank you! -Laxmidi
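
     A hedged sketch of the classic pre-window-function idiom (user-variable evaluation order inside a SELECT is technically undefined in MySQL, so treat this as a sketch rather than a guarantee): track the row position and only advance the rank when the grade changes, using the NULL-safe comparison <=> so the two NULL grades tie as well:

     ```sql
     SET @pos = 0, @rank = 0, @prev = NULL;

     SELECT students,
            @pos  := @pos + 1                          AS position,
            @rank := IF(grades <=> @prev, @rank, @pos) AS rank,   -- reuse rank on ties
            @prev := grades                            AS grades
     FROM (
         SELECT student_names.students, scores.grades
         FROM student_names
         LEFT JOIN scores ON student_names.students = scores.students
         ORDER BY scores.grades DESC
     ) ordered;
     ```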

  • Product Catalog Schema design

    - by FlySwat
     I'm building a proof-of-concept schema for a product catalog to possibly replace a very aging and crufty one we use. In our business, we sell both physical materials and services (one-time and recurring charges).

     The current catalog schema has each distinct category broken out into individual tables. While this is nicely normalized and performs well, it is fairly difficult to extend: adding a new attribute to a particular product involves changing the table schema and backpopulating old data.

     An idea I've been toying with is a base set of entity tables in third normal form containing the facts that are common among ALL products. On top of that, I'd like to build an Attribute-Entity-Value schema that allows each entity type to be extended in a flexible way using just data and no schema changes. Finally, I'd like to denormalize this data model into materialized views for each individual entity type; these views are what the application would access. We also have many tables that contain business rules and compatibility rules. These would join against the base entity tables instead of the views. A sketch of the layout follows this list of concerns.

     My big concerns here are:

     - Performance: Attribute-Entity-Value schemas are flexible, but typically perform poorly. Should I be concerned?
     - More performance: denormalizing using materialized views may have some risks. I'm not positive on this yet.
     - Complexity: while this schema is flexible and maintainable using just data, I worry that the complexity of the design might make future schema changes difficult.

     For those who have designed product catalogs for large-scale enterprises: am I going down the totally wrong path? Is there any good best-practice schema design reading available for product catalogs?
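
     For concreteness, a hedged sketch of the kind of layout being described; every table and column name here is an assumption, not the actual design. A normalized base table holds the facts every product shares, and an attribute table extends any product with plain data:

     ```sql
     -- Sketch only: illustrative names, not the real catalog schema.
     CREATE TABLE product (
         product_id   int PRIMARY KEY,
         sku          varchar(32)  NOT NULL,
         name         varchar(128) NOT NULL,
         product_type varchar(32)  NOT NULL  -- physical, one-time service, recurring service
     );

     CREATE TABLE product_attribute (
         product_id int NOT NULL REFERENCES product (product_id),
         attribute  varchar(64) NOT NULL,
         value      varchar(255),
         PRIMARY KEY (product_id, attribute)  -- one value per attribute per product
     );

     -- Adding a new attribute is now a row insert, not an ALTER TABLE:
     INSERT INTO product_attribute VALUES (42, 'voltage', '120V');
     ```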

  • Problem with stack based implementation of function 0x42 of int 0x13

    - by IceCoder
     I'm trying a new approach to int 0x13 (just to learn more about the way the system works): using the stack to create a DAP. Assuming that DL contains the disk number, AX contains the address of the bootable entry in the PT, DS is updated to the right segment, and the stack is correctly set, this is the code:

     ```asm
     push DWORD 0x00000000
     add  ax, 0x0008
     mov  si, ax
     push DWORD [ds:(si)]
     push DWORD 0x00007c00
     push WORD 0x0001
     push WORD 0x0010
     push ss
     pop  ds
     mov  si, sp
     mov  sp, bp
     mov  ah, 0x42
     int  0x13
     ```

     As you can see: I push the DAP structure onto the stack, update DS:SI so it points to it, DL is already set, then set AH to 0x42 and call int 0x13. The result is error 0x01 in AH and obviously CF set. No sectors are transferred. I checked the stack trace endlessly and it is ok; the partition table is ok too. I cannot figure out what I'm missing... This is the stack trace portion of the disk address packet:

     ```
     0x000079ea:  10 00          adc %al,(%bx,%si)
     0x000079ec:  01 00          add %ax,(%bx,%si)
     0x000079ee:  00 7c 00       add %bh,0x0(%si)
     0x000079f1:  00 00          add %al,(%bx,%si)
     0x000079f3:  08 00          or  %al,(%bx,%si)
     0x000079f5:  00 00          add %al,(%bx,%si)
     0x000079f7:  00 00          add %al,(%bx,%si)
     0x000079f9:  00 a0 07 be    add %ah,-0x41f9(%bx,%si)
     ```

     I'm using the latest version of QEMU and trying to read from the hard drive (0x80). I have also tried a 4-byte alignment for the structure, with the same result (CF 1, AH 0x01); the extensions are present.

  • Shared Server Dreamhost

    - by Jseb
     I am trying to install an app on a shared server. If I understand properly, because I am using a shared server and Dreamhost doesn't support Rails 3.2.8, I must use FCGI, although I am not sure how to install it and make it run properly. I worked from this tutorial: http://wiki.dreamhost.com/Rails_3.

     To my understanding, here is what I did:

     1. In Dreamhost, activated PHP 5.x.x FastCGI and made sure Phusion Passenger is unchecked.
     2. Created an app on my local machine.
     3. Because Rails doesn't create dispatch and access files, I created the two following files in my /public folder.

     dispatch.fcgi:

     ```ruby
     #!/home/username/.rvm/rubies/ruby-1.9.3-p327/bin/ruby
     ENV['RAILS_ENV'] ||= 'production'
     ENV['HOME'] ||= `echo ~`.strip
     ENV['GEM_HOME'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327')
     ENV['GEM_PATH'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327') + ":" +
                       File.expand_path('~/.rvm/gems/ruby 1.9.3-p327@global')

     require 'fcgi'
     require File.join(File.dirname(__FILE__), '../config/environment')

     class Rack::PathInfoRewriter
       def initialize(app)
         @app = app
       end

       def call(env)
         env.delete('SCRIPT_NAME')
         parts = env['REQUEST_URI'].split('?')
         env['PATH_INFO'] = parts[0]
         env['QUERY_STRING'] = parts[1].to_s
         @app.call(env)
       end
     end
     ```

     4. Then created the file .htaccess:

     ```apache
     <IfModule mod_fastcgi.c>
       AddHandler fastcgi-script .fcgi
     </IfModule>
     <IfModule mod_fcgid.c>
       AddHandler fcgid-script .fcgi
     </IfModule>

     Options +FollowSymLinks +ExecCGI
     RewriteEngine On
     RewriteCond %{REQUEST_FILENAME} !-f
     RewriteRule ^(.*)$ dispatch.fcgi/$1 [QSA,L]
     ErrorDocument 500 "Rails application failed to start properly"
     ```

     5. Uploaded to a folder and pointed to the public folder in Dreamhost.
     6. Made sure dispatch.fcgi has 777 for write.
     7. SSHed in and ran the following command in the public folder: ./dispatch.fcgi

     Crossing my fingers, but it doesn't work. I get the following errors:

     ```
     ./dispatch.fcgi: line 1: ENV[RAILS_ENV]: command not found
     ./dispatch.fcgi: line 1: =: command not found
     ./dispatch.fcgi: line 2: ENV[HOME]: command not found
     ./dispatch.fcgi: line 2: =: command not found
     ./dispatch.fcgi: line 3: syntax error near unexpected token `('
     ./dispatch.fcgi: line 3: `ENV['GEM_HOME'] = File.expand_path('~/.rvm/gems/ruby 1.9.3-p327')'
     ```

     What am I doing wrong? Oh, and if I go to the server I get this: "Rails application failed to start properly".

  • Is it possible to reuse subqueries?

    - by Gothmog
     Hello, I'm having some problems trying to perform a query. I have two tables: one with element information, and another one with records related to the elements of the first table. The idea is to get in the same row the element information plus the information from several records. The structure could be explained like this:

     ```
     table  [ id, name ]
       [1, '1']
       [2, '2']

     table2 [ id, type, value ]
       [1, 1, '2009-12-02']
       [1, 2, '2010-01-03']
       [1, 4, '2010-01-03']
       [2, 1, '2010-01-02']
       [2, 2, '2010-01-02']
       [2, 2, '2010-01-03']
       [2, 3, '2010-01-07']
       [2, 4, '2010-01-07']
     ```

     And this is what I would like to achieve:

     ```
     result [ id, name, Column1,      Column2,      Column3,      Column4      ]
       [1, '1', '2009-12-02', '2010-01-03',             , '2010-01-03']
       [2, '2', '2010-01-02', '2010-01-02', '2010-01-07', '2010-01-07']
     ```

     The following query gets the proper result, but it seems to me extremely inefficient, having to iterate table2 for each column. Would it be possible in any way to write a subquery once and reuse it?

     ```sql
     SELECT a.id, a.name,
            (select min(value) from table2 t
             where t.id = subquery.id and t.type = 1 group by t.type) as Column1,
            (select min(value) from table2 t
             where t.id = subquery.id and t.type = 2 group by t.type) as Column2,
            (select min(value) from table2 t
             where t.id = subquery.id and t.type = 3 group by t.type) as Column3,
            (select min(value) from table2 t
             where t.id = subquery.id and t.type = 4 group by t.type) as Column4
     FROM (SELECT distinct id
           FROM table2 t
           WHERE (t.type in (1, 2, 3, 4))
             AND t.value between '2010-01-01' and '2010-01-07') as subquery
     LEFT JOIN table a ON a.id = subquery.id
     ```
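
     A hedged sketch of one single-pass rewrite using conditional aggregation: the date window still decides which ids qualify (as in the original derived table), while each column becomes a MIN over only the rows of its type, so table2 is no longer re-scanned once per column:

     ```sql
     SELECT a.id, a.name,
            MIN(CASE WHEN t.type = 1 THEN t.value END) AS Column1,
            MIN(CASE WHEN t.type = 2 THEN t.value END) AS Column2,
            MIN(CASE WHEN t.type = 3 THEN t.value END) AS Column3,
            MIN(CASE WHEN t.type = 4 THEN t.value END) AS Column4
     FROM (SELECT DISTINCT id
           FROM table2
           WHERE type IN (1, 2, 3, 4)
             AND value BETWEEN '2010-01-01' AND '2010-01-07') AS subquery
     JOIN table2 t ON t.id = subquery.id   -- aggregate over each id's full history
     LEFT JOIN table a ON a.id = subquery.id
     GROUP BY a.id, a.name;
     ```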

  • EJB3 Entity and Lazy Problem

    - by Stefano
     My entity bean has two lists:

     ```java
     @Entity
     @Table(name = "TABLE_INTERNAL")
     public class Internal implements java.io.Serializable {
         // ...SOME GETTERS AND SETTERS...
         private List<Match> matchs;
         private List<Regional> regionals;
     }
     ```

     They are mapped one FetchType.LAZY and one FetchType.EAGER:

     ```java
     @OneToMany(fetch = FetchType.LAZY, mappedBy = "internal")
     public List<Match> getMatchs() {
         return matchs;
     }

     public void setMatchs(List<Match> matchs) {
         this.matchs = matchs;
     }

     @ManyToMany(targetEntity = Regional.class, mappedBy = "internals", fetch = FetchType.EAGER)
     public List<Regional> getRegionals() {
         return regionals;
     }

     public void setRegionals(List<Regional> regionals) {
         this.regionals = regionals;
     }
     ```

     I need both lists full! But I can't put two FetchType.EAGERs because it's an error. I tried some tests:

     ```java
     List<Internal> out;
     out = em.createQuery("from Internal").getResultList();
     out = em.createQuery("from Internal i JOIN FETCH i.regionals").getResultList();
     ```

     I'm not able to fill both lists... Help!!!

     Stefano

  • SQL database problems with addressbook table design

    - by Sebastian Hoitz
     Hello! I am writing an address book module for my software right now. I have the database set up so that it supports a very flexible address-book configuration. I can create n entries for every type I want. "Type" here means data like 'email', 'address', 'telephone', etc.

     I have a table named contact_profiles. This has only two columns:

     ```
     id            Primary key
     date_created  DATETIME
     ```

     And then there is a table called contact_attributes. This one is a little more complex:

     ```
     id        PK
     #profile  (foreign key to contact_profiles.id)
     type      VARCHAR describing the type of the entry (name, email, phone, fax, website, ...)
               (I should probably change this to a SET later.)
     value     Text (containing the value for the attribute)
     ```

     I can now link to these profiles, for example from my users table. But from here I run into problems. At the moment I would have to create a JOIN for each value that I want to retrieve. Is there a possibility to somehow create a view that gives me a result with the types as columns? Right now I would get something like:

     ```
     #profile  type     value
     1         email    [email protected]
     1         name     Sebastian Hoitz
     1         website  domain.tld
     ```

     But it would be nice to get a result like this:

     ```
     #profile  email              name             website
     1         [email protected]  Sebastian Hoitz  domain.tld
     ```

     The reason I do not want to create the table layout like this initially is that there might always be things to add, and I want to be able to have multiple attributes of the same type. So do you know if there is any possibility to convert this dynamically? If you need a better description please let me know. Thank you!
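
     A hedged sketch of such a view via conditional aggregation (the column name profile stands in for the #profile foreign key; the type list is fixed in the view definition, so a new type still means editing the view, and MAX arbitrarily picks one value when a profile has several attributes of the same type):

     ```sql
     CREATE VIEW contact_profiles_flat AS
     SELECT p.id AS profile,
            MAX(CASE WHEN a.type = 'email'   THEN a.value END) AS email,
            MAX(CASE WHEN a.type = 'name'    THEN a.value END) AS name,
            MAX(CASE WHEN a.type = 'website' THEN a.value END) AS website
     FROM contact_profiles p
     LEFT JOIN contact_attributes a ON a.profile = p.id
     GROUP BY p.id;
     ```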

  • symfony get data from array

    - by iggnition
     Hi, I'm trying to use an SQL query to get data from my database into the template of a symfony project. My query (SQL):

     ```sql
     SELECT l.loc_id AS l__loc_id, l.naam AS l__naam, l.straat AS l__straat,
            l.huisnummer AS l__huisnummer, l.plaats AS l__plaats,
            l.postcode AS l__postcode, l.telefoon AS l__telefoon,
            l.opmerking AS l__opmerking, o.org_id AS o__org_id, o.naam AS o__naam
     FROM locatie l
     LEFT JOIN organisatie o ON l.org_id = o.org_id
     ```

     This is generated by this DQL:

     ```php
     $this->q = Doctrine_Query::create()
         ->select('l.naam, o.naam, l.straat, l.huisnummer, l.plaats, l.postcode, l.telefoon, l.opmerking')
         ->from('Locatie l')
         ->leftJoin('l.Organisatie o')
         ->execute();
     ```

     But now when I try to access this data in the template by doing either:

     ```php
     <?php foreach ($q as $locatie): ?>
         <?php echo $locatie['o.naam'] ?>
     ```

     or

     ```php
     <?php foreach ($q as $locatie): ?>
         <?php echo $locatie['o__naam'] ?>
     ```

     I get this error from symfony:

     ```
     500 | Internal Server Error | Doctrine_Record_UnknownPropertyException
     Unknown record property / related component "o__naam" on "Locatie"
     ```

     Does anyone know what is going wrong here? I don't know how to call the value from the array if the names from both queries don't work.

  • Switching from Java to .NET from a career change point of view

    - by Joe
     Could anyone share with me their experience of switching from Java to .NET, from a career point of view? I've been a Java developer for 12 years and am just getting tired of how fragmented the Java world has become. For my liking, there are just too many frameworks, tools, application servers, etc., and it seems each new tool just adds complexity and time to even the simplest of projects.

     I'm not trying to start any wars - I'm just giving you the reason I ask the main question. I've read a few books on .NET and have done one WebForms job. I love the integrated environment and would like to hear how others transitioned from Java to .NET. What I mean by that is: did you do it somehow as a contractor, or did you join a company as a beginner .NET developer with much Java experience?

     Personally, I'm ready to take the leap if I can figure out how not to lose too much income in the process (senior Java developer to beginner .NET developer). I would really appreciate hearing your stories.

  • Workflow engine BPMN, Drools, etc or ESB?

    - by Tom
     We currently have an application that is based on an in-house-developed workflow engine with a YAML-based DSL. We are looking to move parts of it to Java.

     I have discovered a number of Java solutions like Intalio, jBPM, Drools Expert, Drools Flow, etc. They appear to be aimed at businesses where a business analyst creates the workflows using a graphical editor and submits them to the workflow engine. They seem geared towards ease of use for non-technical people rather than for developers, with a focus on human interaction.

     Our workflows tend to look like:

     ```
     Discover-a-file --------\
                              -> join -> process-file -> move-file -> register-file
     Discover-some-metadata -/
     ```

     If any step fails we need to retry it X times. We also need to be able to stop the system and restart it and have it continue from where it was (durable).

     Some of our workflows can be defined by a set of goals we need to achieve, so Jess's backwards rule chaining sounds interesting, but it is not open source.

     It might be that what we are after is a finite state machine engine, or just an Enterprise Service Bus where we do everything as JMS queues.

     Is there a good open source workflow engine that is both standards-based and geared towards developers? We don't particularly want to use a graphical workflow designer or write reams of XML, and it should ideally be in Java or language-agnostic (making REST/SOAP calls to external services).

     Thanks, Tom

  • record output sound in python

    - by aaronstacy
     I want to programmatically record sound coming out of my laptop in Python. I found PyAudio and came up with the following program that accomplishes the task:

     ```python
     import pyaudio, wave, sys

     chunk = 1024
     FORMAT = pyaudio.paInt16
     CHANNELS = 1
     RATE = 44100
     RECORD_SECONDS = 5
     WAVE_OUTPUT_FILENAME = sys.argv[1]

     p = pyaudio.PyAudio()
     channel_map = (0, 1)
     stream_info = pyaudio.PaMacCoreStreamInfo(
         flags = pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,
         channel_map = channel_map)

     stream = p.open(format = FORMAT,
                     rate = RATE,
                     input = True,
                     input_host_api_specific_stream_info = stream_info,
                     channels = CHANNELS)

     all = []
     for i in range(0, RATE / chunk * RECORD_SECONDS):
         data = stream.read(chunk)
         all.append(data)
     stream.close()
     p.terminate()

     data = ''.join(all)
     wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
     wf.setnchannels(CHANNELS)
     wf.setsampwidth(p.get_sample_size(FORMAT))
     wf.setframerate(RATE)
     wf.writeframes(data)
     wf.close()
     ```

     The problem is that I have to connect the headphone jack to the microphone jack. I tried replacing these lines:

     ```python
     input = True,
     input_host_api_specific_stream_info = stream_info,
     ```

     with these:

     ```python
     output = True,
     output_host_api_specific_stream_info = stream_info,
     ```

     but then I get this error:

     ```
     Traceback (most recent call last):
       File "./test.py", line 25, in <module>
         data = stream.read(chunk)
       File "/Library/Python/2.5/site-packages/pyaudio.py", line 562, in read
         paCanNotReadFromAnOutputOnlyStream)
     IOError: [Errno Not input stream] -9975
     ```

     Is there a way to instantiate the PyAudio stream so that it inputs from the computer's output and I don't have to connect the headphone jack to the microphone? Is there a better way to go about this? I'd prefer to stick with a Python app and avoid Cocoa.

  • shoulda macros with rspec2 beta 5 and rails3 beta2

    - by Millisami
     I've set up RSpec 2 beta 5 and shoulda as follows, to use shoulda macros inside RSpec model tests.

     Gemfile:

     ```ruby
     group :test do
       gem "rspec", ">= 2.0.0.beta.4"
       gem "rspec-rails", ">= 2.0.0.beta.4"
       gem 'shoulda', :git => 'git://github.com/bmaddy/shoulda.git'
       gem "faker"
       gem "machinist"
       gem "pickle", :git => 'git://github.com/codegram/pickle.git'
       gem 'capybara', :git => 'git://github.com/jnicklas/capybara.git'
       gem 'database_cleaner', :git => 'git://github.com/bmabey/database_cleaner.git'
       gem 'cucumber-rails', :git => 'git://github.com/aslakhellesoy/cucumber-rails.git'
     end
     ```

     spec_helper.rb:

     ```ruby
     Dir["#{File.dirname(__FILE__)}/support/**/*.rb"].each {|f| require f}
     require 'shoulda'

     Rspec.configure do |config|
     ```

     spec/models/outlet_spec.rb:

     ```ruby
     require 'spec_helper'

     describe Outlet do
       it { should validate_presence_of(:name) }
     end
     ```

     And when I run the spec, I get the following error:

     ```
     [~/rails_apps/rails3_apps/automation (master)?]
     ? spec spec/models/outlet_spec.rb
     DEPRECATION WARNING: RAILS_ROOT is deprecated! Use Rails.root instead.
     (called from join at /home/millisami/.rvm/gems/ruby-1.9.1-p378%rails3/bundler/gems/shoulda-87e75311f83548760114cd4188afa4f83fecdc22-master/lib/shoulda/autoload_macros.rb:40)
     F

     1) Outlet
        Failure/Error: it { should validate_presence_of(:name) }
        undefined method `validate_presence_of' for #<Rspec::Core::ExampleGroup::Nested_1:0xc4dc138 @__memoized={}>
        # ./spec/models/outlet_spec.rb:4:in `block (2 levels) in <top (required)>'

     Finished in 0.0399 seconds
     1 example, 1 failures
     ```

     Why the "undefined method"? Is the shoulda getting loaded?

  • SQLDeveloper using over 100MB of PGA

    - by Leigh Riffel
     Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:

     ```sql
     SELECT e.SID, e.username, e.status, b.PGA_MEMORY
     FROM v$session e
     LEFT JOIN (SELECT y.SID, y.value pga,
                       TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
                FROM v$sesstat y, v$statname z
                WHERE y.STATISTIC# = z.STATISTIC#
                  AND NAME = 'session pga memory') b ON e.sid = b.sid
     WHERE (PGA)/1024/1024 > 20
     ORDER BY 4 DESC;
     ```

     It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table was sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but to use memory after it is closed seems wrong to me. Can anyone confirm this?

     Update: I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.

  • Code Golf: Easter Spiral

    - by friol
     What's more appropriate than a Spiral for Easter Code Golf sessions? Well, I guess almost anything.

     The Challenge

     The shortest code by character count to display a nice ASCII Spiral made of asterisks ('*'). Input is a single number, R, that will be the x-size of the Spiral. The other dimension (y) is always R-2. The program can assume R to be always odd and >= 5.

     Some examples:

     Input: 7

     Output:
     ```
     *******
     *     *
     * *** *
     *   * *
     ***** *
     ```

     Input: 9

     Output:
     ```
     *********
     *       *
     * ***** *
     * *   * *
     * *** * *
     *     * *
     ******* *
     ```

     Input: 11

     Output:
     ```
     ***********
     *         *
     * ******* *
     * *     * *
     * * *** * *
     * *   * * *
     * ***** * *
     *       * *
     ********* *
     ```

     Code count includes input/output (i.e., full program). Any language is permitted. My easily beatable 303-chars-long Python example:

     ```python
     import sys
     d=int(sys.argv[1])
     a=[d*[' '] for i in range(d-2)]
     r=[0,-1,0,1]
     x=d-1;y=x-2;z=0;pz=d-2;v=2
     while d>2:
      while v>0:
       while pz>0:
        a[y][x]='*'
        pz-=1
        if pz>0:
         x+=r[z];y+=r[(z+1)%4]
       z=(z+1)%4;pz=d;v-=1
      v=2;d-=2;pz=d
     for w in a:
      print ''.join(w)
     ```

     Now, enter the Spiral...

  • What is happening in this T-SQL code?

    - by Ben McCormack
     I'm just starting to learn T-SQL and could use some help in understanding what's going on in a particular block of code. I modified some code in an answer I received in a previous question, and here is the code in question:

     ```sql
     DECLARE @column_list AS varchar(max)

     SELECT @column_list = COALESCE(@column_list, ',') +
            'SUM(Case When Sku2=' + CONVERT(varchar, Sku2) +
            ' Then Quantity Else 0 End) As [' + CONVERT(varchar, Sku2) +
            ' - ' + Convert(varchar, Description) + '],'
     FROM OrderDetailDeliveryReview
     INNER JOIN InvMast ON SKU2 = SKU AND LocationTypeID = 4
     GROUP BY Sku2, Description
     ORDER BY Sku2

     SET @column_list = Left(@column_list, Len(@column_list) - 1)

     SELECT @column_list
     ```

     1 row is returned:

     ```
     ,SUM(Case When Sku2=157 Then Quantity Else 0 End) As [157 -..., SUM(Case ...
     ```

     The T-SQL code does exactly what I want, which is to make a single result based on the results of a query, which will then be used in another query. However, I can't figure out how the SELECT @column_list = ... statement is putting multiple values into a single string of characters by being inside a SELECT statement. Without the assignment to @column_list, the SELECT statement would simply return multiple rows. How is it that, by having the variable within the SELECT statement, the results get "flattened" down into one value? How should I read this T-SQL to properly understand what's going on?
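
     For the mechanism itself, a minimal hedged sketch: when a SELECT assigns to a variable, SQL Server evaluates the assignment once per row of the result set, so each row's expression is appended onto whatever value the previous row left behind. (The per-row accumulation is a long-standing implementation behavior rather than a documented guarantee, so its ordering should not be relied upon for correctness.)

     ```sql
     -- Minimal demonstration of per-row variable assignment.
     DECLARE @list varchar(max);

     SELECT @list = COALESCE(@list + ', ', '') + name
     FROM sys.tables;          -- runs once per row: @list grows on each one

     SELECT @list;             -- e.g. 'Orders, Products, Customers'
     ```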

  • LINQ Datacontext Disposal Issues

    - by Refracted Paladin
     I am getting a "Cannot access object: DataContext after it's been disposed" error in the DAL method below. I thought that I would be okay calling Dispose there. result is an IEnumerable, and I thought it was IQueryable that caused these kinds of problems. What am I doing wrong? How SHOULD I be disposing of my DataContext? Is there something better to be returning than a DataTable? This is a desktop app that points at SQL 2005.

     Example method that causes this error:

     ```csharp
     public static DataTable GetEnrolledMembers(Guid workerID)
     {
         var DB = CmoDataContext.Create();

         var AllEnrollees = from enrollment in DB.tblCMOEnrollments
                            where enrollment.CMOSocialWorkerID == workerID || enrollment.CMONurseID == workerID
                            join supportWorker in DB.tblSupportWorkers
                                on enrollment.EconomicSupportWorkerID equals supportWorker.SupportWorkerID
                                into workerGroup
                            from worker in workerGroup.DefaultIfEmpty()
                            select new
                            {
                                enrollment.ClientID,
                                enrollment.CMONurseID,
                                enrollment.CMOSocialWorkerID,
                                enrollment.EnrollmentDate,
                                enrollment.DisenrollmentDate,
                                ESFirstName = worker.FirstName,
                                ESLastName = worker.LastName,
                                ESPhone = worker.Phone
                            };

         var result = from enrollee in AllEnrollees.AsEnumerable()
                      where (enrollee.DisenrollmentDate == null || enrollee.DisenrollmentDate > DateTime.Now)
                      //let memberName = BLLConnect.MemberName(enrollee.ClientID)
                      let lastName = BLLConnect.MemberLastName(enrollee.ClientID)
                      let firstName = BLLConnect.MemberFirstName(enrollee.ClientID)
                      orderby enrollee.DisenrollmentDate ascending, lastName ascending
                      select new
                      {
                          enrollee.ClientID,
                          //MemberName = memberName,
                          LastName = lastName,
                          FirstName = firstName,
                          NurseName = BLLAspnetdb.NurseName(enrollee.CMONurseID),
                          SocialWorkerName = BLLAspnetdb.SocialWorkerName(enrollee.CMOSocialWorkerID),
                          enrollee.EnrollmentDate,
                          enrollee.DisenrollmentDate,
                          ESWorkerName = enrollee.ESFirstName + " " + enrollee.ESLastName,
                          enrollee.ESPhone
                      };

         DB.Dispose();

         return result.CopyLinqToDataTable();
     }
     ```

     Partial class where I create the DataContext:

     ```csharp
     partial class CmoDataContext
     {
         public static bool IsDisconnectedUser
         {
             get { return Settings.Default.IsDisconnectedUser; }
         }

         public static CmoDataContext Create()
         {
             var cs = IsDisconnectedUser
                 ? Settings.Default.CMOConnectionString
                 : Settings.Default.Central_CMOConnectionString;

             return new CmoDataContext(cs);
         }
     }
     ```
