Search Results

Search found 9542 results on 382 pages for 'row'.


  • SQL Query to duplicate records based on If statement

    - by user328371
    Hi, I'm trying to write an SQL query that will duplicate records depending on a field in another table. I am running MySQL 5. (I know that duplicating records suggests the database structure is bad, but I did not design the database and am not in a position to redo it all - it's a Shopp e-commerce database running on WordPress.) Each product with a particular attribute needs a link to the same few images, so the product will need a row per image in a table - the database doesn't actually contain the image, just its filename. (The images are clipart for a customer to select from.) Based on the records returned by...
      SELECT * FROM `wp_shopp_spec` WHERE name = 'Can Be Personalised' AND content = 'Yes'
    ...I want to do something like this: for each record that matches that query, copy records 5134 - 5139 from wp_shopp_asset, but change the id so it's unique and set the cell in column 'parent' to the value of 'product' from the table wp_shopp_spec. This means 6 new records are created for each record matching the above query, all with the same value in 'parent' but with unique ids and every other column copied from the originals (i.e. records 5134-5139). Hope that's clear enough - any help greatly appreciated.
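
    A minimal sketch of one possible approach, assuming wp_shopp_asset has an auto-increment id and that its remaining columns (name, value and datatype are used here purely as placeholders) can simply be copied; an INSERT ... SELECT with a cross join between the matching specs and the six template rows generates one copy per spec:
      -- copy the six template assets once per matching spec;
      -- the auto-increment id supplies the new unique ids
      INSERT INTO wp_shopp_asset (parent, name, value, datatype)
      SELECT s.product, a.name, a.value, a.datatype
      FROM wp_shopp_spec AS s
      CROSS JOIN wp_shopp_asset AS a
      WHERE s.name = 'Can Be Personalised'
        AND s.content = 'Yes'
        AND a.id BETWEEN 5134 AND 5139;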

    Read the article

  • XmlDeserializer to handle inline lists

    - by d1k_is
    I'm looking at implementing a fix in an XmlDeserializer to allow for element lists without a specific containing element. The XmlDeserializer I'm basing this on checks for a list object type but then fetches the container element; I'm trying to figure out how to get around this and make it work both ways.
      var t = type.GetGenericArguments()[0];
      var list = (IList)Activator.CreateInstance(type);
      var container = GetElementByName(root, prop.Name.AsNamespaced(Namespace));
      var first = container.Elements().FirstOrDefault();
      var elements = container.Elements().Where(d => d.Name == first.Name);
      PopulateListFromElements(t, elements, list);
      prop.SetValue(x, list, null);
    The XML I'm working with is from the Google Weather API (the forecast_conditions elements):
      <weather module_id="0" tab_id="0" mobile_row="0" mobile_zipped="1" row="0" section="0">
        <forecast_information>...</forecast_information>
        <current_conditions>...</current_conditions>
        <forecast_conditions>...</forecast_conditions>
        <forecast_conditions>...</forecast_conditions>
        <forecast_conditions>...</forecast_conditions>
      </weather>
    EDIT: I'm looking at this as an update to the RestSharp open source .NET library.
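
    A rough sketch of one way to make the snippet above tolerate both layouts (an assumption on my part, reusing the helper methods shown in the snippet): fall back to reading the repeated elements straight off the parent when no wrapping container exists, keyed on the property's element name.
      var t = type.GetGenericArguments()[0];
      var list = (IList)Activator.CreateInstance(type);
      var elementName = prop.Name.AsNamespaced(Namespace);

      var container = GetElementByName(root, elementName);
      IEnumerable<XElement> elements;
      if (container != null && container.Elements().Any())
      {
          // wrapped list: <Items><Item/><Item/>...</Items>
          var first = container.Elements().First();
          elements = container.Elements().Where(d => d.Name == first.Name);
      }
      else
      {
          // inline list: repeated <forecast_conditions/> siblings directly under the parent
          elements = root.Elements(elementName);
      }
      PopulateListFromElements(t, elements, list);
      prop.SetValue(x, list, null);
    This assumes the list property's name maps to the repeated element name (forecast_conditions in the example above).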

    Read the article

  • mysql_real_escape_string() just makes an empty string?

    - by James P
    I am using a jQuery AJAX request to a page called like.php that connects to my database and inserts a row. This is the like.php code: <?php // Some config stuff define(DB_HOST, 'localhost'); define(DB_USER, 'root'); define(DB_PASS, ''); define(DB_NAME, 'quicklike'); $link = mysql_connect(DB_HOST, DB_USER, DB_PASS) or die('ERROR: ' . mysql_error()); $sel = mysql_select_db(DB_NAME, $link) or die('ERROR: ' . mysql_error()); $likeMsg = mysql_real_escape_string(trim($_POST['likeMsg'])); $timeStamp = time(); if(empty($likeMsg)) die('ERROR: Message is empty'); $sql = "INSERT INTO `likes` (like_message, timestamp) VALUES ('$likeMsg', $timeStamp)"; $result = mysql_query($sql, $link) or die('ERROR: ' . mysql_error()); echo mysql_insert_id(); mysql_close($link); ?> The problematic line is $likeMsg = mysql_real_escape_string(trim($_POST['likeMsg']));. It seems to just return an empty string, and in my database under the like_message column all I see is blank entries. If I remove mysql_real_escape_string() though, it works fine. Here's my jQuery code if it helps. $('#like').bind('keydown', function(e) { if(e.keyCode == 13) { var likeMessage = $('#changer p').html(); if(likeMessage) { $.ajax({ cache: false, url: 'like.php', type: 'POST', data: { likeMsg: likeMessage }, success: function(data) { $('#like').unbind(); writeLikeButton(data); } }); } else { $('#button_container').html(''); } } }); All this jQuery code works fine, I've tested it myself independently. Any help is greatly appreciated, thanks.

    Read the article

  • Why is Core Data not persisting these changes to disk?

    - by scott
    I added a new entity to my model and it loads fine but no changes made in memory get persisted to disk. My values set on the car object work fine in memory but aren't getting persisted to disk on hitting the home button (in simulator). I am using almost exactly the same code on another entity in my application and its values persist to disk fine (core data - sqlite3); Does anyone have a clue what I'm overlooking here? Car is the managed object, cars in an NSMutableArray of car objects and Car is the entity and Visible is the attribute on the entity which I am trying to set. Thanks for you assistance. Scott - (void)viewDidLoad { myAppDelegate* appDelegate = (myAppDelegate*)[[UIApplication sharedApplication] delegate]; NSManagedObjectContext* managedObjectContex = appDelegate.managedObjectContext; NSFetchRequest* request = [[NSFetchRequest alloc] init]; NSEntityDescription* entity = [NSEntityDescription entityForName:@"Car" inManagedObjectContext:managedObjectContex]; [request setEntity:entity]; NSSortDescriptor* sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"Name" ascending:YES]; NSArray* sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [sortDescriptors release]; [sortDescriptor release]; NSError* error = nil; cars = [[managedObjectContex executeFetchRequest:request error:&error] mutableCopy]; if (cars == nil) { NSLog(@"Can't load the Cars data! Error: %@, %@", error, [error userInfo]); } [request release]; } - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath*)indexPath { Car* car = [cars objectAtIndex:indexPath.row]; if (car.Visible == [NSNumber numberWithBool:YES]) { car.Visible = [NSNumber numberWithBool:NO]; [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryNone; } else { car.Visible = [NSNumber numberWithBool:YES]; [tableView cellForRowAtIndexPath:indexPath].accessoryType = UITableViewCellAccessoryCheckmark; } }
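
    One thing worth checking (an assumption, since the question doesn't show it): changes to managed objects only reach the SQLite store once the context is saved, so a sketch like the following - at the end of didSelectRowAtIndexPath: or in the app delegate's applicationWillTerminate:/applicationDidEnterBackground: - may be all that is missing:
      NSError* saveError = nil;
      myAppDelegate* appDelegate = (myAppDelegate*)[[UIApplication sharedApplication] delegate];
      if (![appDelegate.managedObjectContext save:&saveError]) {
          NSLog(@"Failed to save context: %@, %@", saveError, [saveError userInfo]);
      }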

    Read the article

  • Building Stored Procedure to group data into ranges with roughly equal results in each bucket

    - by Len
    I am trying to build one procedure to take a large amount of data and create 5 range buckets to display the data. the buckets ranges will have to be set according to the results. Here is my existing SP GO /****** Object: StoredProcedure [dbo].[sp_GetRangeCounts] Script Date: 03/28/2010 19:50:45 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_GetRangeCounts] @idMenu int AS declare @myMin decimal(19,2), @myMax decimal(19,2), @myDif decimal(19,2), @range1 decimal(19,2), @range2 decimal(19,2), @range3 decimal(19,2), @range4 decimal(19,2), @range5 decimal(19,2), @range6 decimal(19,2) SELECT @myMin=Min(modelpropvalue), @myMax=Max(modelpropvalue) FROM xmodelpropertyvalues where modelPropUnitDescriptionID=@idMenu set @myDif=(@myMax-@myMin)/5 set @range1=@myMin set @range2=@myMin+@myDif set @range3=@range2+@myDif set @range4=@range3+@myDif set @range5=@range4+@myDif set @range6=@range5+@myDif select @myMin,@myMax,@myDif,@range1,@range2,@range3,@range4,@range5,@range6 select t.range as myRange, count(*) as myCount from ( select case when modelpropvalue between @range1 and @range2 then 'range1' when modelpropvalue between @range2 and @range3 then 'range2' when modelpropvalue between @range3 and @range4 then 'range3' when modelpropvalue between @range4 and @range5 then 'range4' when modelpropvalue between @range5 and @range6 then 'range5' end as range from xmodelpropertyvalues where modelpropunitDescriptionID=@idmenu) t group by t.range order by t.range This calculates the min and max value from my table, works out the difference between the two and creates 5 buckets. The problem is that if there are a small amount of very high (or very low) values then the buckets will appear very distorted - as in these results... range1 2806 range2 296 range3 75 range5 1 Basically I want to rebuild the SP so it creates buckets with equal amounts of results in each. I have played around with some of the following approaches without quite nailing it... SELECT modelpropvalue, NTILE(5) OVER (ORDER BY modelpropvalue) FROM xmodelpropertyvalues - this creates a new column with either 1,2,3,4 or 5 in it ROW_NUMBER()OVER (ORDER BY modelpropvalue) between @range1 and @range2 ROW_NUMBER()OVER (ORDER BY modelpropvalue) between @range2 and @range3 or maybe i could allocate every record a row number then divide into ranges from this?
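
    If the goal is five buckets with roughly equal row counts rather than equal-width ranges, NTILE (which the question already experiments with) can drive the whole thing; a sketch, assuming SQL Server 2005+ and the same table and columns as above:
      ;WITH ranked AS (
          SELECT modelpropvalue,
                 NTILE(5) OVER (ORDER BY modelpropvalue) AS bucket
          FROM xmodelpropertyvalues
          WHERE modelPropUnitDescriptionID = @idMenu
      )
      SELECT bucket,
             MIN(modelpropvalue) AS range_from,   -- bucket boundaries fall out of the data
             MAX(modelpropvalue) AS range_to,
             COUNT(*)            AS myCount
      FROM ranked
      GROUP BY bucket
      ORDER BY bucket;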

    Read the article

  • Creating parallel Selenium tests in C# and using NUnit as the runner application

    - by damianmartin
    I am writing a new test suite for the company to test a very complex ASP.NET application which is heavily AJAX driven. We have decided to use Selenium (Grid & Remote Control) and NUnit to run these tests. The actual tests are dynamically created at run time from a spreadsheet. Each column in an Excel spreadsheet relates to a new test and each row relates to a Selenium command (but in plain English, and the DLL converts this into Selenium code). The problem I have at the moment is getting the tests running in parallel. There will be 1000+ tests, so it is too time consuming to run one test at a time. Selenium Grid and the Selenium Remote Control(s) are set up correctly because I can run their demo. From what I have read I need to use PNUnit, but I cannot find any documentation on what a PNUnit test should look like. NUnit tests are [SetUp], [TearDown], [Test]. Can anyone point me in the right direction? Thanks in advance.

    Read the article

  • Why does this MySQL function return null?

    - by Shore
    Description: the query the function runs actually returns 4 results, as can be seen below; all I do is concatenate the items and return the result, but unexpectedly it's NULL. I think the code is self-explanatory:
      DELIMITER |
      DROP FUNCTION IF EXISTS get_idiscussion_ask|
      CREATE FUNCTION get_idiscussion_ask(iask_id INT UNSIGNED) RETURNS TEXT
      DETERMINISTIC
      BEGIN
        DECLARE done INT DEFAULT 0;
        DECLARE body varchar(600);
        DECLARE created DATETIME;
        DECLARE anonymous TINYINT(1);
        DECLARE screen_name varchar(64);
        DECLARE result TEXT;
        DECLARE cur1 CURSOR FOR
          SELECT body,created,anonymous,screen_name from idiscussion
          left join users on idiscussion.uid=users.id
          where idiscussion.iask_id=iask_id;
        DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = 1;
        SET result = '';
        OPEN cur1;
        REPEAT
          FETCH cur1 INTO body, created, anonymous, screen_name;
          SET result = CONCAT(result,'<comment><body><![CDATA[',body,']]></body>','<replier>',if(screen_name is not null and !anonymous,screen_name,''),'</replier>','<created>',created,'</created></comment>');
        UNTIL done END REPEAT;
        CLOSE cur1;
        RETURN result;
      END
      |
      DELIMITER ;

      mysql> select get_idiscussion_ask(1);
      +------------------------+
      | get_idiscussion_ask(1) |
      +------------------------+
      | NULL                   |
      +------------------------+
      1 row in set (0.01 sec)

      mysql> SELECT body,created,anonymous,screen_name from idiscussion left join users on idiscussion.uid=users.id where idiscussion.iask_id=1;
      +------+---------------------+-----------+-------------+
      | body | created             | anonymous | screen_name |
      +------+---------------------+-----------+-------------+
      | haha | 2009-05-27 04:57:51 | 0         | NULL        |
      | haha | 2009-05-27 04:57:52 | 0         | NULL        |
      | haha | 2009-05-27 04:57:52 | 0         | NULL        |
      | haha | 2009-05-27 04:57:53 | 0         | NULL        |
      +------+---------------------+-----------+-------------+
      4 rows in set (0.00 sec)
    For those who don't think the code is self-explanatory: why does the function return NULL?
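
    Two things worth noting (the first is an assumption about what is happening here): inside a stored routine, local variables that share their names with columns (body, created, anonymous, screen_name) take precedence in the cursor's SELECT, so the cursor may be fetching the still-NULL variables rather than the columns; and CONCAT() returns NULL as soon as any argument is NULL. A sketch of the changed lines only, with renamed variables and IFNULL guards:
      -- only the lines that differ from the function above are shown
      DECLARE v_body varchar(600);
      DECLARE v_created DATETIME;
      DECLARE v_anonymous TINYINT(1);
      DECLARE v_screen_name varchar(64);
      DECLARE cur1 CURSOR FOR
        SELECT body, created, anonymous, screen_name
        FROM idiscussion LEFT JOIN users ON idiscussion.uid = users.id
        WHERE idiscussion.iask_id = iask_id;

      FETCH cur1 INTO v_body, v_created, v_anonymous, v_screen_name;
      SET result = CONCAT(result,
                          '<comment><body><![CDATA[', IFNULL(v_body, ''), ']]></body>',
                          '<replier>', IF(v_screen_name IS NOT NULL AND !v_anonymous, v_screen_name, ''), '</replier>',
                          '<created>', IFNULL(v_created, ''), '</created></comment>');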

    Read the article

  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for the evaluation portion of my project. Going with the idea of the Assignment Problem and the Hungarian Algorithm, I would like to express my data in the form of a .csv file, which would end up looking like this in spreadsheet form. This is based on the structure I saw here.
      |          | Project 1 | Project 2 | Project 3 |
      |----------|-----------|-----------|-----------|
      | Student1 |           | 2         | 1         |
      |----------|-----------|-----------|-----------|
      | Student2 | 1         | 2         | 3         |
      |----------|-----------|-----------|-----------|
      | Student3 | 1         | 3         | 2         |
      |----------|-----------|-----------|-----------|
    To make it less cryptic: the rows are the students/agents and the columns represent projects/tasks. Obviously ONE project can be assigned to ONE student. That, in short, is what my project is about. The fields represent the preference weights the students have placed upon the projects (ranging from 1 to 10). If blank, that student does not want that project and there's no chance of him/her being assigned it. Anyway, my data is stored within dictionaries, specifically the students and projects dictionaries, such that:
      students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences)
    where preferences is in the form of a dictionary such that preferences[rank] = {project_id}, and
      projects[project_id] = Project(project_id, project_name)
    I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs, which will populate the row labels, and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create a .csv file. Any help, pointers or good tutorials will be highly appreciated.
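
    A minimal sketch using Python's csv module, assuming each Student object exposes the preferences dict described above (rank mapped to a set of project ids); it inverts the preferences into a project-to-rank lookup per student and leaves unranked projects blank:
      import csv

      # on Python 2 use open("preferences.csv", "wb") instead
      with open("preferences.csv", "w", newline="") as f:
          writer = csv.writer(f)
          project_ids = sorted(projects.keys())
          # header row: blank corner cell, then one column per project
          writer.writerow([""] + [projects[pid].project_name for pid in project_ids])
          for sid in sorted(students.keys()):
              student = students[sid]
              # invert {rank: {project_id, ...}} into {project_id: rank};
              # if each rank maps to a single id rather than a set, drop the inner loop
              rank_of = dict((pid, rank)
                             for rank, pids in student.preferences.items()
                             for pid in pids)
              writer.writerow([student.student_name] + [rank_of.get(pid, "") for pid in project_ids])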

    Read the article

  • R: remove columns from a data frame where ALL values are NA

    - by Sophomore
    Hello everybody! I'm having some trouble with my huge data frame and couldn't really resolve this question myself. The data frame has some properties as columns and each row represents one data set. I've done some sanitizing to this data frame (e.g. getting rid of data sets which are not to be included in the evaluation). (For whoever might be interested: beforehand I aggregate around 5000 single text files and put them in a TSV; some of the properties have a sequence number like "button.pressed.1" ... "button.pressed.n". Some of the excluded sets had really high values of n; all the sets left have much smaller values of n, but the property "button.pressed.50" is still there and all remaining sets have an NA in that column. Actually it's a different property, but the example should clarify my intention...) So the question is quite simple (for some sophisticated R pro): I need to get rid of columns where the value is NA for ALL rows. Could someone please help me out? (All I have managed is to get rid of columns where at least one NA exists, which dropped about half my columns...)
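
    A short sketch of one way to do this in base R (df is a placeholder name for the data frame): keep only the columns that are not entirely NA.
      # a column is dropped only when every one of its values is NA
      df_clean <- df[, colSums(is.na(df)) < nrow(df)]

      # equivalent, using Filter():
      df_clean <- Filter(function(col) !all(is.na(col)), df)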

    Read the article

  • Convert Excel Range to ADO.NET DataSet or DataTable, etc.

    - by htbrady
    I have an Excel spreadsheet that will sit out on a network share drive. It needs to be accessed by my Winforms C# 3.0 application (many users could be using the app and hitting this spreadsheet at the same time). There is a lot of data on one worksheet. This data is broken out into areas that I have named as ranges. I need to be able to access these ranges individually, return each range as a dataset, and then bind it to a grid. I have found examples that use OLE and have got these to work. However, I have seen some warnings about using this method, plus at work we have been using Microsoft.Office.Interop.Excel as the standard thus far. I don't really want to stray from this unless I have to. Our users will be using Office 2003 on up as far as I know. I can get the range I need with the following code: MyDataRange = (Microsoft.Office.Interop.Excel.Range) MyWorkSheet.get_Range("MyExcelRange", Type.Missing); The OLE way was nice as it would take my first row and turn those into columns. My ranges (12 total) are for the most part different from each other in number of columns. Didn't know if this info would affect any recommendations. Is there any way to use Interop and get the returned range back into a dataset?
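
    A rough sketch of reading a named range through the interop API into a DataTable, assuming (as in the OLE approach mentioned) that the first row of each range holds the column names; Range.Value2 comes back as a 1-based object[,] when the range spans more than one cell:
      using System;
      using System.Data;
      using Excel = Microsoft.Office.Interop.Excel;

      static DataTable RangeToDataTable(Excel.Worksheet sheet, string rangeName)
      {
          Excel.Range range = (Excel.Range)sheet.get_Range(rangeName, Type.Missing);
          object[,] values = (object[,])range.Value2;            // 1-based in both dimensions

          var table = new DataTable(rangeName);
          int cols = values.GetLength(1);
          for (int c = 1; c <= cols; c++)
              table.Columns.Add(Convert.ToString(values[1, c])); // first row = headers

          for (int r = 2; r <= values.GetLength(0); r++)
          {
              DataRow row = table.NewRow();
              for (int c = 1; c <= cols; c++)
                  row[c - 1] = values[r, c] ?? DBNull.Value;
              table.Rows.Add(row);
          }
          return table;
      }
    The resulting table can then be added to a DataSet (new DataSet().Tables.Add(table)) and bound to the grid, one call per named range.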

    Read the article

  • Rails - eager load the number of associated records, but not the records themselves.

    - by Max Williams
    I have a page that's taking ages to render out. Half of the time (3 seconds) is spent on a .find call which has a bunch of eager-loaded associations. All i actually need is the number of associated records in each case, to display in a table: i don't need the actual records themselves. Is there a way to just eager load the count? Here's a simplified example: @subjects = Subject.find(:all, :include => [:questions]) In my table, for each row (ie each subject) i just show the values of the subject fields and the number of associated questions for each subject. Can i optimise the above find call to suit these requirements? I thought about using a group field but my full call has a few different associations included, with some second-order associations, so i don't think group by will work. @subjects = Subject.find(:all, :include => [{:questions => :tags}, {:quizzes => :tags}], :order => "subjects.name") :tags in this case is a second-order association, via taggings. Here's my associations in case it's not clear what's going on. Subject has_many :questions has_many :quizzes Question belongs_to :subject has_many :taggings has_many :tags, :through => :taggings Quiz belongs_to :subject has_many :taggings has_many :tags, :through => :taggings Grateful for any advice - max
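
    Two hedged options, in Rails 2-era syntax to match the find(:all, ...) calls above: a grouped count per association, fetched once and looked up per row in the view, or a counter cache column so the count lives on subjects itself.
      # one query each, returning { subject_id => count }
      question_counts = Question.count(:group => :subject_id)
      quiz_counts     = Quiz.count(:group => :subject_id)
      @subjects = Subject.find(:all, :order => "subjects.name")
      # in the view: question_counts[subject.id] || 0

      # or, with an integer questions_count column added to subjects:
      # class Question < ActiveRecord::Base
      #   belongs_to :subject, :counter_cache => true
      # end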

    Read the article

  • Using jQuery with MySQL and PHP

    - by JPro
    I am new to using Jquery using mysql and PHP I am using the following code to pull the data. But there is not data or error displayed. JQUERY: <html> <head> <script> function doAjaxPost() { // get the form values var field_a = $("#field_a").val(); $("#loadthisimage").show(); $.ajax({ type: "POST", url: "serverscript.php", data: "ID="+field_a, success: function(resp){ $("#resposnse").html(resp); $("#loadthisimage").hide(); }, error: function(e){ alert('Error: ' + e); } }); } </script> </head> <body> <select id="field_a"> <option value="data_1">data_1</option> <option value="data_2">data_2</option> </select> <input type="button" value="Ajax Request" onClick="doAjaxPost()"> <a href="#" onClick="doAjaxPost()">Here</a> </form> <div id="resposnse"> <img src="ajax-loader.gif" style="display:none" id="loadthisimage"> </div> </body> and now serverscript.php <?php if(isset($_POST['ID'])) { $nm = $_POST['ID']; echo $nm; //insert your code here for the display. mysql_connect("localhost", "root", "pop") or die(mysql_error()); mysql_select_db("JPro") or die(mysql_error()); $result1 = mysql_query("select Name from results where ID = \"$nm\" ") or die(mysql_error()); // store the record of the "example" table into $row while($row1 = mysql_fetch_array( $result1 )) { $tc = $row1['Name']; echo $tc; } } ?>

    Read the article

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If after reading the question you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has 1 row for some thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this: ID Completed TimeCompleted anything else... ---- --------- ------------- ---------------- 1 No blabla 2 No blabla 3 Yes 01:04:22 ... I'm looking either for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements: I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought - I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.) Each item in the table should only be executed once. Some other thoughts: I'm not particularly concerned with the implementation being done in the database (e.g. in T-SQL or PL/SQL code) vs. some external program code (e.g. a standalone executable or some action triggered by a web page) which is executed against the database Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.
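
    For the "claim then process" step on SQL Server, one common sketch (the table name JobQueue and the extra ClaimedBy column are assumptions layered on the example table, and Completed gains an 'InProgress' state) uses a single UPDATE with READPAST so concurrent workers skip each other's locked rows and every item is handed out exactly once:
      DECLARE @claimed TABLE (ID int);
      DECLARE @workerId sysname;
      SET @workerId = HOST_NAME();

      UPDATE TOP (10) dbo.JobQueue WITH (ROWLOCK, UPDLOCK, READPAST)
      SET    Completed = 'InProgress',
             ClaimedBy = @workerId
      OUTPUT inserted.ID INTO @claimed
      WHERE  Completed = 'No';

      -- process the rows listed in @claimed, then mark them done:
      -- UPDATE dbo.JobQueue SET Completed = 'Yes', TimeCompleted = GETDATE()
      -- WHERE ID IN (SELECT ID FROM @claimed);
    Because the claim is a single atomic statement, adding more worker processes scales out without double-processing; a timeout column can be used to reclaim items whose worker died mid-job.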

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods how to generate the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL. Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to timeout after 5 minutes. Scaling is achieved by varying this timeout. Hypothesised method: A task runs in the background and for each new, unprocessed item in event_log, it creates entries in the database table user_feed pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user. The problems with the standard method are well known – what if a lot of people's caches expire at the same time? The solution also does not scale well – the brief is for feeds to update as close to real-time as possible The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database. Question: The question boils down to two points: Is this worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?

    Read the article

  • Passing arguments and values from HTML to jQuery (events)

    - by Jaroslav Moravec
    What is the usual practice for passing arguments from HTML to a jQuery event handler? For example, getting the id of a row from the DB:
      <tr class="jq_killMe" id="thisItemId-id"> ... </tr>
    and the jQuery:
      $(".jq_killMe").click(function () {
        var tmp = $(this).attr('id').split("-");
        var id = tmp[0];
        // ...
      });
    What's the best practice if I want to pass more than one argument? Is it better not to use jQuery? For example:
      <tr onclick="killMe('id')"> ... </tr>
    I didn't find the answer to my question; I would be glad even for links. Thanks.
    Edit (pre-solution): So you suggested these methods to do it:
    1. Add custom attributes to the element (XHTML)
    2. Use the id attribute and parse it with a regex
    3. Use data-* attributes (HTML5)
    4. Use hidden child elements
    I like the first solution, but I would like to (I have to - employer) produce valid code. Here is a nice question with answers: http://stackoverflow.com/questions/994856/so-what-if-custom-html-attributes-arent-valid-xhtml The second is not as pretty as the first, but it is valid - so that is the compromise... The third is the solution for the future, but there are a lot of CMSes where we have to use XHTML or HTML4 (and HTML5 will be a long process).
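
    A small sketch of the data-* route, which jQuery can read via .attr() in any version (and via .data() from 1.4.3 on); whether the attributes validate depends on the doctype, as discussed above:
      <tr class="jq_killMe" data-id="42" data-kind="product"> ... </tr>

      $(".jq_killMe").click(function () {
          var id   = $(this).attr("data-id");
          var kind = $(this).attr("data-kind");
          // or, with jQuery 1.4.3+: $(this).data("id"), $(this).data("kind")
      });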

    Read the article

  • How can I get all content within <table></table> tags using a regex?

    - by Bob Dylan
    So I'm writing an application that will do a little screen scrapping. All the pages (about 1000 or so) contain this line: <table border="0" cellspacing="3"> <tr><td>First rows stuff</td></tr> <tr> <td> The data I want is in here <br /> and it's seperated by these annoying <br /> 's. No id's, classes, or even a single <p> tag. Just a bunch of <br /> tags. </td> </tr> </table> So I just need to get the data within the 2nd row out. How can I do this? Should I use a regex or something else?

    Read the article

  • MySQL Ratings From Two Tables

    - by DirtyBirdNJ
    I am using MySQL and PHP to build a data layer for a Flash game. Retrieving lists of levels is pretty easy, but I've hit a roadblock in trying to fetch each level's average rating along with its pointer information. Here is an example data set:
      levels table:
        level_id | level_name
        1        | Some Level
        2        | Second Level
        3        | Third Level
      ratings table:
        rating_id | level_id | rating_value
        1         | 1        | 3
        2         | 1        | 4
        3         | 1        | 1
        4         | 2        | 3
        5         | 2        | 4
        6         | 2        | 1
        7         | 3        | 3
        8         | 3        | 4
        9         | 3        | 1
    I know this requires a join, but I cannot figure out how to get the average rating value based on the level_id when I request a list of levels. This is what I'm trying to do:
      SELECT levels.level_id, AVG(ratings.level_rating WHERE levels.level_id = ratings.level_id) FROM levels
    I know my SQL is flawed there, but I can't figure out how to get this concept across. The only thing I can get to work is returning a single average from the entire ratings table, which is not very useful. The ideal output from the above conceptually valid but syntactically awry query would be:
      level_id | level_rating
      1        | 3.34
      2        | 1.00
      3        | 4.54
    My main issue is that I can't figure out how to use the level_id of each result row before the query has been returned. It's like I want to use a placeholder... or an alias... I really don't know, and it's very frustrating. The solution I have in place now is an EPIC band-aid and will only cause me problems long term... please help!
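
    A sketch of the standard join-and-group approach (LEFT JOIN so levels with no ratings still appear, with 0 via COALESCE instead of NULL):
      SELECT l.level_id,
             l.level_name,
             COALESCE(AVG(r.rating_value), 0) AS level_rating
      FROM levels AS l
      LEFT JOIN ratings AS r ON r.level_id = l.level_id
      GROUP BY l.level_id, l.level_name
      ORDER BY l.level_id;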

    Read the article

  • SQL string manipulation to return multiple rows

    - by Andy Jacobs
    I'm an experienced programmer, but relatively new to SQL. We're using Oracle 10 and 11. I have a system in place using SQL that combines actual rows with virtual rows (e.g. "SELECT 1 from DUAL") doing unions and intersects as needed, which all seems to work. My problem is that I need to combine this system which is expecting rows of data, with new data that will have the data in (let's say for simplification) comma delimited strings. So I think what I need is a way to convert a string like: "5,6,7,8" into 4 rows with one column each, with "5" in the first row, "6" in the second, etc. In other languages, I'd do a "Split" with comma as the delimiter. Of course, the data won't always have 4 entries. There's a second question, but I'll ask it separately. But I suspect it will simplify things, if possible, if the solution to the above could be used as a table in another SQL statement (i.e., to work with my existing system). Thanks for any help.
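
    A sketch of the usual Oracle trick for turning a delimited string into rows: REGEXP_SUBSTR combined with a CONNECT BY LEVEL row generator (available on 10g and 11g; the literal '5,6,7,8' stands in for whatever expression supplies the string):
      SELECT REGEXP_SUBSTR('5,6,7,8', '[^,]+', 1, LEVEL) AS val
      FROM dual
      CONNECT BY LEVEL <= LENGTH('5,6,7,8') - LENGTH(REPLACE('5,6,7,8', ',')) + 1;
    Wrapped in a subquery (or a pipelined table function), the result behaves like any other row source, so it should slot into the existing UNION/INTERSECT framework.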

    Read the article

  • Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options

    - by Dezigo
    I wrote a trigger. USE [TEST] GO /****** Object: Trigger [dbo].[TR_POSTGRESQL_UPDATE_YC] Script Date: 05/26/2010 08:54:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC] ON [dbo].[BCT_CNTR_EVENTS] FOR INSERT AS BEGIN DECLARE @MOVE_TIME varchar(14); DECLARE @MOVE_TIME_FORMATED varchar(20); DECLARE @RELEASE_NOTE varchar(32); DECLARE @CMR_NUMBER varchar(15); DECLARE @MOVE_TYPE varchar(2); SELECT @MOVE_TIME = inserted.move_time ,@MOVE_TYPE = inserted.move_type ,@RELEASE_NOTE = inserted.release_note ,@CMR_NUMBER = inserted.cmr_number FROM inserted IF(@MOVE_TYPE = 'YC') BEGIN SET @MOVE_TIME_FORMATED = SUBSTRING(@MOVE_TIME,1,4) + '-' + SUBSTRING(@MOVE_TIME,5,2) + '-' + SUBSTRING(@MOVE_TIME,7,2) + ' 00:00:00' --UPDATE OpenQuery(POSTGRESQL_SERV,'SELECT visit_cmr,visit_timestamp,visit_pin FROM VISIT') -- SET visit_cmr = @RELEASE_NOTE -- WHERE visit_timestamp = @MOVE_TIME_FORMATED -- AND visit_pin = right(@CMR_NUMBER,5) -- AND visit_cmr IS NULL END SET NOCOUNT ON; END When I have inserted a row,I have get an error **Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query.** Then I ofcourse SET SET ANSI_WARNINGS is ON but it`s not work for me. (trigger fo linked server postgresql) I have restarted a server. not work again.
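
    SET ANSI_NULLS and QUOTED_IDENTIFIER are stored with the trigger when it is created, but ANSI_WARNINGS is taken from the session that fires it, so one thing to try (a hedged suggestion, not a confirmed fix) is making sure both options are ON in the connection doing the INSERT, and recreating the trigger under the same settings; the INSERT below is only a placeholder with made-up values:
      SET ANSI_NULLS ON;
      SET ANSI_WARNINGS ON;
      -- recreate/ALTER the trigger under these settings, then run the INSERT
      -- from a connection that also has both options ON (check with DBCC USEROPTIONS)
      INSERT INTO dbo.BCT_CNTR_EVENTS (move_time, move_type, release_note, cmr_number)
      VALUES ('20100526', 'YC', 'some-note', '12345');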

    Read the article

  • Counting string length in javascript and Ruby on Rails

    - by williamjones
    I've got a text area on a web site that should be limited in length. I'm allowing users to enter 255 characters, and am enforcing that limit with a Rails validation: validates_length_of :body, :maximum => 255 At the same time, I added a javascript char counter like you see on Twitter, to give feedback to the user on how many characters he has already used, and to disable the submit button when over length, and am getting that length in Javascript with a call like this: element.length Lastly, to enforce data integrity, in my Postgres database, I have created this field as a varchar(255) as a last line of defense. Unfortunately, these methods of counting characters do not appear to be directly compatible. Javascript counts the best, in that it counts what users consider as number of characters where everything is a single character. Once the submission hits Rails, however, all of the carriage returns have been converted to \r\n, now taking up 2 characters worth of space, which makes a close call fail Rails validations. Even if I were to handcode a different length validation in Rails, it would still fail when it hits the database I think, though I haven't confirmed this yet. What's the best way for me to make all this work the way the user would want? Best Solution: an approach that would enable me to meet user expectations, where each character of any type is only one character. If this means increasing the length of the varchar database field, a user should not be able to sneakily send a hand-crafted post that creates a row with more than 255 letters. Somewhat Acceptable Solution: a javascript change that enables the user to see the real character count, such that hitting return increments the counter 2 characters at a time, while properly handling all symbols that might have these strange behaviors.
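
    One common way to square the three counters (a sketch, assuming a Rails 2-style model; the class name Post is a placeholder): normalize Windows line endings to a single \n before validation, so the Rails length check and the varchar(255) column count newlines the same way the JavaScript side does:
      class Post < ActiveRecord::Base
        validates_length_of :body, :maximum => 255

        before_validation :normalize_newlines

        private

        def normalize_newlines
          self.body = body.gsub(/\r\n/, "\n") if body
        end
      end
    Because the normalization runs server-side, a hand-crafted POST still cannot sneak an over-length row past the validation or the column limit.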

    Read the article

  • Carriage Return\Line feed in Java

    - by Manu
    I have created a text file in a Unix environment using Java code. For writing the text file I am using java.io.FileWriter and BufferedWriter, and for the newline after each row I am using the bw.newLine() method (where bw is the BufferedWriter object). The text file is then sent as a mail attachment from the Unix environment itself (automated using Unix commands). My issue is that after I download the text file from the mail on a Windows system and open it, the data is not properly aligned - I think the newLine() character is not working. I want the text file to have the same alignment when opened in Windows as it has in the Unix environment. How can I resolve this? Thanks in advance. Here is my Java code for reference (it runs in the Unix environment):
      File f = new File(strFileGenLoc);
      BufferedWriter bw = new BufferedWriter(new FileWriter(f, false));
      rs = stmt.executeQuery("select * from jpdata");
      while ( rs.next() ) {
          bw.write(rs.getString(1)==null? "":rs.getString(1));
          bw.newLine();
      }
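
    BufferedWriter.newLine() writes the platform's line.separator, which is \n on Unix, and Windows Notepad does not treat a bare \n as a line break (WordPad and most editors do). A sketch of writing an explicit CRLF instead:
      // write an explicit Windows-style line terminator instead of bw.newLine()
      while (rs.next()) {
          bw.write(rs.getString(1) == null ? "" : rs.getString(1));
          bw.write("\r\n");
      }
      bw.close();   // flush and release the file before attaching it to the mail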

    Read the article

  • MIDI player stops playing sounds after 16 notes.

    - by user349673
    Hi. I am currently programming a Piano Keyboard editor, much like the one you can find in Cubase, Logic, Reason etc.. I have this big grid, double array new int [13][9], which makes it 13 rows, 9 columns. The first column [0-12][0] is the Keyboard, at the top there's "high C" (midi note 72) and at the bottom there's "low C" (midi note 60). That column is an array of JButtons and when you press for example "low C", note 60 is being played by the Synthesizer. I've gotten this to work pretty OK as for now, but one problem I have is that I can only play 16 notes in a row, then it's like the Synthesizer shuts down or something. Do you guys have ANY idea of what the problem is? A bit of the code: import java.awt.*; import javax.swing.*; import java.awt.event.*; import java.util.*; import javax.sound.midi.*; actionPerformed(ActionEvent ae){ for(int i = 0; i<13; i++){ if(o== instr[i]){//instr is the button array SpelaTangent(i); } } } public void SpelaTangent(int tangent){ int [] klaviatur = new int[13]; for(int i = 0; i<13; i++){ klaviatur[i] = (72-i); } try { Synthesizer synth = MidiSystem.getSynthesizer(); synth.open(); final MidiChannel[] mc = synth.getChannels(); Instrument[] instrument = synth.getDefaultSoundbank().getInstruments(); synth.loadInstrument(instrument[1]); mc[0].noteOn(klaviatur[tangent],350); mc[0].noteOff(klaviatur[tangent],350); } catch (MidiUnavailableException e) {} } Help is very much appreciated!
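
    One likely culprit (an assumption based on the code shown): SpelaTangent() opens a brand-new Synthesizer on every key press and never closes it, so the available synthesizer resources run out after a handful of notes. A sketch that opens the synthesizer once and reuses a single MidiChannel:
      import javax.sound.midi.*;

      public class KeyboardSound {
          private Synthesizer synth;
          private MidiChannel channel;

          public KeyboardSound() throws MidiUnavailableException {
              synth = MidiSystem.getSynthesizer();
              synth.open();                       // open once, not per note
              channel = synth.getChannels()[0];
          }

          public void play(int midiNote) {
              channel.noteOn(midiNote, 80);       // velocity must be 0-127
          }

          public void stop(int midiNote) {
              channel.noteOff(midiNote);
          }

          public void close() {
              synth.close();
          }
      }
    The button handler would then call play()/stop() on one shared KeyboardSound instance instead of rebuilding the synthesizer in SpelaTangent().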

    Read the article

  • Conditional column values in NSTableView?

    - by velocityb0y
    I have an NSTableView that binds via an NSArrayController to an NSMutableArray. What's in the array are derived classes; the first few columns of the table are bound to properties that exist on the base class. That all works fine. Where I'm running into problem is a column that should only be populated if the row maps to one specific subclass. The property that column is meant to display only exists in that subclass, since it makes no sense in terms of the base class. The user will know, from the first two columns, why the third column's cell is populated/editable or not. The binding on the third column's value is on arrangedObjects, with a model path of something like "foo.name" where foo is the property on the subclass. However, this doesn't work, as the other subclasses in the hierarchy are not key-value compliant for foo. It seems like my only choice is to have foo be a property on the base class so everybody responds to it, but this clutters up the interfaces of the model objects. Has anyone come up with a clean design for this situation? It can't be uncommon (I'm a relative newcomer to Cocoa and I'm just learning the ins and outs of bindings.)

    Read the article

  • SQL statement supposed to return 2 distinct rows, but only 1 is returned

    - by jello
    I have an sql statement that is supposed to return 2 rows. the first with psychological_id = 1, and the second, psychological_id = 2. here is the sql statement select * from psychological where patient_id = 12 and symptom = 'delire'; But with this code, with which I populate an array list with what is supposed to be 2 different rows, two rows exist, but with the same values: the second row. OneSymptomClass oneSymp = new OneSymptomClass(); ArrayList oneSympAll = new ArrayList(); string connStrArrayList = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " + "Initial Catalog=PatientMonitoringDatabase; " + "Integrated Security=True"; string queryStrArrayList = "select * from psychological where patient_id = " + patientID.patient_id + " and symptom = '" + SymptomComboBoxes[tag].SelectedItem + "';"; using (var conn = new SqlConnection(connStrArrayList)) using (var cmd = new SqlCommand(queryStrArrayList, conn)) { conn.Open(); using (SqlDataReader rdr = cmd.ExecuteReader()) { while (rdr.Read()) { oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]); oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"]; oneSymp.strength = Convert.ToInt32(rdr["strength"]); oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"]; oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"]; oneSympAll.Add(oneSymp); } } conn.Close(); } OneSymptomClass testSymp = oneSympAll[0] as OneSymptomClass; MessageBox.Show(testSymp.psychological_id.ToString()); the message box outputs "2", while it's supposed to output "1". anyone got an idea what's going on?
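
    One likely explanation (an assumption from the code shown): oneSymp is created once outside the while loop, so both iterations write into, and the ArrayList holds two references to, the same object - the second row's values overwrite the first. A sketch of the fix is simply to construct a new instance per row:
      while (rdr.Read())
      {
          OneSymptomClass oneSymp = new OneSymptomClass();   // new object for each row
          oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]);
          oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"];
          oneSymp.strength = Convert.ToInt32(rdr["strength"]);
          oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"];
          oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"];
          oneSympAll.Add(oneSymp);
      }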

    Read the article

  • Object reference not set to an instance of an object.

    - by Lambo
    I have the following code - private static void convert() { string csv = File.ReadAllText("test.csv"); string year = "2008"; XDocument doc = ConvertCsvToXML(csv, new[] { "," }); doc.Save("update.xml"); XmlTextReader reader = new XmlTextReader("update.xml"); XmlDocument testDoc = new XmlDocument(); testDoc.Load(@"update.xml"); XDocument turnip = XDocument.Load("update.xml"); webservice.singleSummary[] test = new webservice.singleSummary[1]; webservice.FinanceFeed CallWebService = new webservice.FinanceFeed(); foreach(XElement el in turnip.Descendants("row")) { test[0].account = el.Descendants("var").Where(x => (string)x.Attribute("name") == "account").SingleOrDefault().Attribute("value").Value; test[0].actual = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "actual").SingleOrDefault().Attribute("value").Value); test[0].commitment = System.Convert.ToInt32(el.Descendants("var").Where(x => (string)x.Attribute("name") == "commitment").SingleOrDefault().Attribute("value").Value); test[0].costCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "costCentre").SingleOrDefault().Attribute("value").Value; test[0].internalCostCentre = el.Descendants("var").Where(x => (string)x.Attribute("name") == "internalCostCentre").SingleOrDefault().Attribute("value").Value; MessageBox.Show(test[0].account, "Account"); MessageBox.Show(System.Convert.ToString(test[0].actual), "Actual"); MessageBox.Show(System.Convert.ToString(test[0].commitment), "Commitment"); MessageBox.Show(test[0].costCentre, "Cost Centre"); MessageBox.Show(test[0].internalCostCentre, "Internal Cost Centre"); CallWebService.updateFeedStatus(test, year); } It is coming up with the error of - NullReferenceException was unhandled, saying that the object reference not set to an instance of an object. The error occurs on the first line test[0].account. How can I get past this?
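
    The array is created with room for one element, but that element is never instantiated, so test[0] is still null when .account is assigned. A sketch of the minimal change, assuming webservice.singleSummary has a parameterless constructor:
      webservice.singleSummary[] test = new webservice.singleSummary[1];
      test[0] = new webservice.singleSummary();   // the slot is null until this line runs
    If more than one row should be sent per call, building a List<webservice.singleSummary> inside the foreach and passing ToArray() to updateFeedStatus may read more naturally than reusing a one-element array.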

    Read the article
