Search Results

Search found 17867 results on 715 pages for 'delete row'.

Page 634/715 | < Previous Page | 630 631 632 633 634 635 636 637 638 639 640 641  | Next Page >

  • popViewControllerAnimated doesn't update info iPhone SDK

    - by WrightsCS
    When I try to pop a view controller, it doesn't update the info for the previous view. Example: I have a cell that displays text in a label in View1. When you tap the cell it goes to View2. When I choose an option in View2, popViewControllerAnimated is used to go back to View1; however, I want the label in View1 to now show the new option. My dilemma is that when I pop View2, the label in View1 does not update. Any ideas? I've tried adding a [view1 reloadData]; before the view pops, but no luck.

        //VIEW1: the cell that displays the label.
        ringLabel = [[UILabel alloc] initWithFrame:CGRectMake(25, 12.7f, 250, 20)];
        ringLabel.adjustsFontSizeToFitWidth = YES;
        ringLabel.textColor = [UIColor blackColor];
        ringLabel.font = [UIFont systemFontOfSize:17.0];
        ringLabel.backgroundColor = [UIColor clearColor];
        ringLabel.textAlignment = UITextAlignmentLeft;
        ringLabel.tag = 0;
        ringLabel.text = [plistDict objectForKey:@"MYOPTION"];
        [ringLabel setEnabled:YES];
        [cell addSubview:ringLabel];
        [ringLabel release];

        //VIEW2: when the cell is clicked
        CustomProfileViewController *cpvc = [CustomProfileViewController alloc];
        cpvc.ringtone = [ringList objectAtIndex:indexPath.row];
        [cpvc.tblCustomTable reloadData];
        [self.navigationController popViewControllerAnimated:YES];

    Read the article

  • Keep the images downloaded in the listview during scrolling

    - by Paveliko
    I'm developing my first app and have been reading a LOT here. I've been trying to find a solution for the following issue for over a week with no luck. I have an Adapter that extends ArrayAdapter to show an image and 3 lines of text in each row. Inside getView I assign the relevant text to the TextViews and use an ImageLoader class to download the image and assign it to the ImageView. Everything works great! I have 4.5 rows visible on my screen (out of a total of 20). When I scroll down for the first time, the images continue to download and are assigned in the right order to the list. BUT when I scroll back, the list loses all the images and starts redrawing them again (0.5-1 sec per image), in the correct order. From what I've been reading this is the standard list behaviour, but I want to change it. I want the images, once downloaded, to stick to the list for the whole session of the current window, just like in the Contacts list or in the Market. It is only 20 images (6-9 KB each). Hope I managed to explain myself.
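
    The behaviour described comes down to keeping an in-memory cache keyed by URL, so a row that scrolls back into view reuses the already-downloaded image instead of fetching it again. The original code is Android/Java; the sketch below is only a language-agnostic illustration of the caching idea in Python, and the class and method names are invented for this example, not part of any Android API.

        # Minimal sketch of a URL-keyed image cache kept for the session.
        from urllib.request import urlopen

        class CachingImageLoader:
            def __init__(self):
                self._cache = {}  # url -> raw image bytes, kept for the session

            def get(self, url):
                # Only download a URL the first time it is requested.
                if url not in self._cache:
                    self._cache[url] = self._fetch_bytes(url)
                return self._cache[url]

            def _fetch_bytes(self, url):
                with urlopen(url) as response:
                    return response.read()

        loader = CachingImageLoader()
        # First call downloads; scrolling back and asking again hits the dict.
        # data = loader.get("https://example.com/thumb1.jpg")

    In the Android adapter this usually takes the form of an in-memory map (or LruCache) from URL to Bitmap that getView consults before asking the ImageLoader to start a download.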

    Read the article

  • SQL with HAVING and temp table not working in Rails

    - by chrisrbailey
    I can't get the following SQL query to work quite right in Rails. It runs, but it fails to apply the "HAVING row_number = 1" part, so I'm getting all the records instead of just the first record from each group. A quick description of the query: it finds hotel deals with various criteria, in particular prioritizing paid deals and then picking the one with the highest dealrank. So if there are paid deal(s), it takes the highest of those (by dealrank) first; if there are no paid deals, it takes the highest-dealrank unpaid deal for each hotel. Using MAX(dealrank) or something similar does not work as a way to pick off the first row of each hotel group, which is why I have the enclosing temp table and the creation of the row_number column. Here's the query:

        SELECT *, @num := if(@hid = hotel_id, @num + 1, 1) as row_number,
               @hid := hotel_id as dummy
        FROM (
          SELECT hotel_deals.*, affiliates.cpc,
                 (CASE when affiliates.cpc 0 then 1 else 0 end) AS paid
          FROM hotel_deals
          INNER JOIN hotels ON hotels.id = hotel_deals.hotel_id
          LEFT OUTER JOIN affiliates ON affiliates.id = hotel_deals.affiliate_id
          WHERE ((hotel_deals.percent_savings = 0) AND (hotel_deals.booking_deadline = ?))
          GROUP BY hotel_deals.hotel_id, paid DESC, hotel_deals.dealrank ASC
        ) temptable
        HAVING row_number = 1

    I'm currently using Rails' find_by_sql to do this, although I've also tried putting it into a regular find using the :select, :from, and :having parts (but :having won't get used unless you have a :group as well). If there is a different way to write this query, that'd be good to know too. I am using Rails 2.3.5 and MySQL 5.0.x.
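
    Independent of the MySQL user-variable trick, the underlying operation is "order the rows within each hotel, then keep only the first row of each group". A minimal plain-Python sketch of that logic (the deal dictionaries below are made up, and the ordering follows the prose description: paid deals first, then highest dealrank) may make the intended semantics easier to check against the SQL:

        # Pick the best deal per hotel: paid deals first, then highest dealrank.
        from itertools import groupby

        deals = [
            {"hotel_id": 1, "dealrank": 5, "paid": 0},
            {"hotel_id": 1, "dealrank": 9, "paid": 1},
            {"hotel_id": 2, "dealrank": 3, "paid": 0},
            {"hotel_id": 2, "dealrank": 7, "paid": 0},
        ]

        # Sort so the desired winner is the first row of each hotel's group.
        ordered = sorted(deals, key=lambda d: (d["hotel_id"], -d["paid"], -d["dealrank"]))

        best_per_hotel = [next(rows) for _, rows in groupby(ordered, key=lambda d: d["hotel_id"])]
        print(best_per_hotel)
        # [{'hotel_id': 1, 'dealrank': 9, 'paid': 1}, {'hotel_id': 2, 'dealrank': 7, 'paid': 0}]

    On databases that support window functions, the same "row_number = 1 per group" idea is expressed with ROW_NUMBER() OVER (PARTITION BY hotel_id ORDER BY ...), but MySQL 5.0 predates that feature.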

    Read the article

  • Move SELECT to SQL Server side

    - by noober
    Hello all, I have an SQLCLR trigger. It contains a large and messy SELECT inside, with parts like:

        (CASE WHEN EXISTS(SELECT * FROM INSERTED I WHERE I.ID = R.ID)
              THEN '1' ELSE '0' END) AS IsUpdated -- Is selected row just added?

    as well as JOINs etc. I'd like to have the result as a single table with everything included.

    Question 1. Can I move this SELECT to the SQL Server side? If yes, how do I do it? By "move", I mean creating a stored procedure or something else that can be executed before reading the dataset in a while cycle.

    The 2 following questions make sense only if the answer is "yes". Why do I want to move the SELECT? First off, I don't like mixing SQL with C# code. Second, I suppose that server-side queries run faster, since the server has more chances to cache them.

    Question 2. Am I right? Is it some sort of optimization?

    Also, the SELECT contains constant strings, but they are localizable. For instance, in

        WHERE R.Status = "Enabled"

    "Enabled" should be changed for French, German etc. So, I want to write 2 static methods -- OnCreate and OnDestroy -- then mark them as stored procedures. When registering/unregistering my assembly on the server side, I would just call them respectively. In OnCreate I would format the SELECT string, replacing {0}, {1}... with the required values from the assembly resources. Then I can localize the resources only, not every script.

    Question 3. Is this a good idea? Is there an existing attribute to mark methods to be executed by SQL Server automatically after (un)registration of an assembly?

    Regards,

    Read the article

  • Why does this simple MySQL procedure take way too long to complete?

    - by Howard Guo
    This is a very simple MySQL stored procedure. Cursor "commission" has only 3000 records, but the procedure call takes more than 30 seconds to run. Why is that?

        DELIMITER //
        DROP PROCEDURE IF EXISTS apply_credit//
        CREATE PROCEDURE apply_credit()
        BEGIN
            DECLARE done tinyint DEFAULT 0;
            DECLARE _pk_id INT;
            DECLARE _eid, _source VARCHAR(255);
            DECLARE _lh_revenue, _acc_revenue, _project_carrier_expense, _carrier_lh,
                    _carrier_acc, _gross_margin, _fsc_revenue, _revenue, _load_count DECIMAL;
            DECLARE commission CURSOR FOR
                SELECT pk_id, eid, source, lh_revenue, acc_revenue, project_carrier_expense,
                       carrier_lh, carrier_acc, gross_margin, fsc_revenue, revenue, load_count
                FROM ct_sales_commission;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;

            DELETE FROM debug;
            OPEN commission;
            REPEAT
                FETCH commission INTO _pk_id, _eid, _source, _lh_revenue, _acc_revenue,
                    _project_carrier_expense, _carrier_lh, _carrier_acc, _gross_margin,
                    _fsc_revenue, _revenue, _load_count;
                INSERT INTO debug VALUES(concat('row ', _pk_id));
            UNTIL done = 1 END REPEAT;
            CLOSE commission;
        END//
        DELIMITER ;

        CALL apply_credit();
        SELECT * FROM debug;
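
    One thing worth isolating is the row-by-row FETCH/INSERT pattern, which issues a separate INSERT (and, under autocommit, a separate commit) per fetched row. The sketch below uses sqlite3 from Python with toy table names purely to contrast the cursor-style loop with the equivalent single set-based statement; it is an illustration of the general principle, not the asker's schema or server.

        # Contrast a row-by-row insert loop with one set-based INSERT ... SELECT.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE ct_sales_commission (pk_id INTEGER)")
        con.execute("CREATE TABLE debug (msg TEXT)")
        con.executemany("INSERT INTO ct_sales_commission VALUES (?)",
                        [(i,) for i in range(3000)])

        # Cursor-style: one INSERT per fetched row (what the stored procedure does).
        for (pk_id,) in con.execute("SELECT pk_id FROM ct_sales_commission"):
            con.execute("INSERT INTO debug VALUES ('row ' || ?)", (str(pk_id),))

        # Set-based: the whole thing as a single statement.
        con.execute("DELETE FROM debug")
        con.execute("INSERT INTO debug SELECT 'row ' || pk_id FROM ct_sales_commission")
        print(con.execute("SELECT COUNT(*) FROM debug").fetchone())  # (3000,)

    In MySQL itself the equivalent rewrite would be a single INSERT INTO debug SELECT CONCAT('row ', pk_id) FROM ct_sales_commission; whether that is what the real procedure should do depends on what apply_credit() is ultimately meant to compute.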

    Read the article

  • Column Header Styling Issue in Data Grid in WPF

    - by sbrakl
    I have formatted the WPF Toolkit DataGrid, and below is the ColumnHeader style for it. But there are still some areas in the column header which are not styled, as shown in the image http://www.freeimagehosting.net/uploads/9aba4fbd93.jpg

        <Style x:Key="ColumnHeaderStyle" TargetType="{x:Type dg:DataGridColumnHeader}">
            <Setter Property="VerticalContentAlignment" Value="Center" />
            <Setter Property="Background" Value="Orange" />
            <Setter Property="Foreground" Value="White" />
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="dg:DataGridColumnHeader">
                        <dg:DataGridHeaderBorder x:Name="headerBorder" Background="Orange">
                            <Border BorderThickness="2" CornerRadius="5" Background="Orange" BorderBrush="DarkOrange">
                                <Grid>
                                    <TextBlock Text="{TemplateBinding Content}" VerticalAlignment="Center" HorizontalAlignment="Center" TextWrapping="Wrap"/>
                                </Grid>
                            </Border>
                        </dg:DataGridHeaderBorder>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

        <dg:DataGrid Grid.Row="1" Grid.RowSpan="1" Name="dgQuestion" HorizontalAlignment="Left"
                     AutoGenerateColumns="True" Width="740" MinWidth="200" MaxWidth="740"
                     Background="Wheat" ColumnHeaderHeight="30"
                     ColumnHeaderStyle="{DynamicResource ColumnHeaderStyle}"
                     RowStyle="{StaticResource RowStyle}"
                     CanUserAddRows="False" CanUserDeleteRows="False" AlternationCount="2"/>

    Read the article

  • Select rows from table1 and all the children from table2 into an object

    - by Patrick
    I want to pull data from the table "Province_Notifiers" and also fetch all corresponding items from the table "Province_Notifier_Datas". The table "Province_Notifier" has a GUID to identify it (PK); the table "Province_Notifier_Datas" has a column called BelongsToProvinceID which is a foreign key to the "Province_Notifier" table's GUID. I tried something like this:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))
                      };

    This line is not working, and I am trying to figure out how to pull the data from table 2 into that Province_Notifier_Datas variable:

        Province_Notifier_Datas = (new List<Province_Notifier_Data>().AddRange(data2))

    I can add a record easily by adding the second table row into Province_Notifier_Datas, but I can't fetch it back:

        Province_Notifier dbNotifier = new Province_Notifier();
        // set some values for dbNotifier
        dbNotifier.Province_Notifier_Datas.Add(
            new Province_Notifier_Data
            {
                BelongsToProvince_ID = userInput.Value.ProvinceId,
                EventText = GenerateNotificationDetail(notifierDetail)
            });

    This works and inserts the data correctly into both tables.

    Edit: These error messages are thrown:

        Cannot convert from 'Province_Notifier_Data' to 'System.Collections.Generic.IEnumerable'

    If I look in Visual Studio, the variable "Province_Notifier_Datas" is of type System.Data.Linq.EntitySet.

        The best overloaded method match for 'System.Collections.Generic.List.AddRange(System.Collections.Generic.IEnumerable)' has some invalid arguments

    Edit:

        var records = from data in ctx.Province_Notifiers
                      where DateTime.Now >= data.SendTime && data.Sent == false
                      join data2 in ctx.Province_Notifier_Datas
                          on data.Province_ID equals data2.BelongsToProvince_ID into data2list
                      select new Province_Notifier
                      {
                          Email = data.Email,
                          Province_ID = data.Province_ID,
                          ProvinceName = data.ProvinceName,
                          Sent = data.Sent,
                          UserName = data.UserName,
                          User_ID = data.User_ID,
                          Province_Notifier_Datas = new EntitySet<Province_Notifier_Data>().AddRange(data2List)
                      };

        Error 3  The name 'data2List' does not exist in the current context.

    Read the article

  • Using PHP Frameworks to get Web 2.0 or Ajax and Other Special Features

    - by user504958
    I'm still struggling to understand when or how to use a framework such as Zend or Yii. Here are some of the features I'm going to need on my next project, and I don't understand frameworks well enough to know where the framework fits into the picture. I won't say exactly what the project is, but think of something like Yelp or Merchant Circle, on a smaller scale of course - a directory project. It will contain a search box and links to all and/or popular categories.

    1) Autosuggest in the search box. (I already know how to do this using jQuery.)
    2) Analyze the search terms entered into the search box to determine if a word is misspelled. Offer to correct the misspelling, or automatically correct the word and show relevant results.
    3) Offer items, links, or ads that are related to the search term.
    4) Allow users to determine which fields are shown.
    5) Allow users to sort the results however they choose.
    6) Allow editing of records in a grid/list view. Post forms without refreshing the page. Delete or add records without going to a different page or reloading the current page.

    Read the article

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in m

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns in the table simply describe information about one of the columns; that is, only one column has the content, and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows have the content column filled in; the content column for rows 6,000-10,000 simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

        UPDATE table(10,000)
        SET content_column = (SELECT content
                              FROM table(6,000-10,000)
                              WHERE table(10,000).id = table(6,000-10,000).id)

    which kind of works; the only problem is that it pulls in the data from the second table just fine, but it replaces the existing content column with null. So rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
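
    The symptom described (matched rows get the new content, unmatched rows get NULL) is what a correlated-subquery UPDATE does when it runs against every row: for rows with no match the subquery returns nothing, and NULL is assigned. The small sqlite3 sketch below uses toy table names (pages/extra, not the asker's schema) to show the usual guard, a WHERE EXISTS on the outer UPDATE:

        # Show how a WHERE EXISTS clause limits a correlated-subquery UPDATE
        # to the rows that actually have a match, so other rows keep their content.
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE pages (id INTEGER PRIMARY KEY, content TEXT)")
        con.execute("CREATE TABLE extra (id INTEGER PRIMARY KEY, content TEXT)")
        con.executemany("INSERT INTO pages VALUES (?, ?)", [(1, "kept"), (2, None)])
        con.executemany("INSERT INTO extra VALUES (?, ?)", [(2, "filled in")])

        # Guarded version: only touch rows that have a match in extra.
        con.execute("""
            UPDATE pages
            SET content = (SELECT content FROM extra WHERE extra.id = pages.id)
            WHERE EXISTS (SELECT 1 FROM extra WHERE extra.id = pages.id)
        """)
        print(con.execute("SELECT * FROM pages ORDER BY id").fetchall())
        # [(1, 'kept'), (2, 'filled in')]  -- row 1 keeps its content

    The same WHERE EXISTS guard (or a COALESCE around the subquery) carries over directly to the real UPDATE statement.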

    Read the article

  • Importing ascii file into DataGrid in C# WPF

    - by heckler
    Hi, I just started programming in C# and using WPF, so pardon my ignorance. I'm creating a WPF application where I need to dynamically make a grid. The grid headers will be different every time, based on information in the text file, and I will only need this grid if the user opens it. So right now I'm able to browse for a file and get the path. Then I create a DataGrid, like this:

        //Create a new data grid
        DataGrid datagrid1 = new DataGrid();
        Master.Children.Add(datagrid1);
        Grid.SetRow(datagrid1, 1);
        Grid.SetColumn(datagrid1, 1);

    Now, I have issues accessing the file and populating the grid. How would I be able to do this in C#? The file will first have this header:

        Time x y speed_x speed_y acc_x acc_y Target Leg Type

    The header can have more parameters depending on the file. Then it will have an unknown number of rows of data like this:

        0.00 47.50 -42.50 -1.00 0.00 0.00 0.00 1 1 Sensor_1

    Read the article

  • JQuery DataTables link item

    - by rogcg
    I'm trying to link the items from a specific column, but each one will be linked to a different id from the JSON string. Unfortunately I can't find a way to do this using the API (I know there are a lot of ways to do it without using the API), but I'm looking for a way to link an item from a column (each one with a link for a specific id). So here is my code: I use getJSON to get the JSON from the server, and I load the data from this JSON into the table like this:

        $.getJSON("/method/from/server/", function(data) {
            var total = 0;
            $("#table_body").empty();
            var oTable = $('#mytable').dataTable( {
                "sPaginationType" : "full_numbers",
                "aaSorting": [[ 0, "asc" ]]
            });
            oTable.fnClearTable();
            $.each(data, function(i, item) {
                oTable.fnAddData( [ item.contact_name, item.contact_email ] );
            });
        });

    What I want to do is, for each row, link the contact_name to its id, which is also in the JSON and can be accessed inside this $.each loop using item.contact_id. Is there a way to do this using the DataTables API? If yes, could you explain it to me and provide a good resource that will help me with this? Thanks.

    Read the article

  • When is Googling it wrong?

    - by Drahcir
    I've been going through Stack Overflow for quite a bit now and noticed that certain people (usually experienced programmers) frown upon Googling (researching) certain problems. Since I myself tend to use Google quite a bit to solve certain programming-related issues, I found certain comments rather demoralising. Now some of you may have come here trigger-happy to delete this post, but I needed some clarification.

    I usually Google things that are syntax-related, things I would have never figured out on my own. For example, I once wondered how to access the properties of a class that I didn't have a direct relationship to. So after a bit of research I discovered reflection and got what I wanted.

    Another scenario is learning a new language, in my case Silverlight, where it differs in certain aspects of .NET compared to, say, ASP.NET. A few weeks ago I had no idea how to load another Silverlight page (usercontrol) and had to Google my way to the solution, which I found wasn't as simple as I had imagined.

    Scenario three is one I myself frown upon: just stealing a huge chunk of code to avoid doing the work yourself, for example paging an HTML table using JavaScript, where one just copies and pastes the JavaScript code without so much as trying to understand how it works. I do admit I have done this once or twice before, for trivial tasks that had very little time limit and weren't all that important, but most of the time I still have to throw away what I found because it took too much time to adapt it and get what I wanted out of it.

    In the last scenario, I sometimes have a piece of code that I am really unhappy about, as in I find it sloppy or too overcomplicated, and I try to look on the Internet to see other ways to tackle the same problem, let's say filtering through a table. With the knowledge I acquire I learn new coding practices that help me work more efficiently, like "Don't repeat yourself" and such.

    Now, in your opinion, when do you find it wrong to use Google (or any other research tool) to find a solution to your problem?

    Read the article

  • Website. VoteUp or VoteDown Videos. How to restrict users voting multiple times?

    - by DJDonaL3000
    I'm working on a website (HTML, CSS, JavaScript, Ajax, PHP, MySQL), and I want to restrict the number of times a particular user votes for a particular video. It's similar to the YouTube system where you can vote a particular video up or down. Each vote involves adding a row to the video.votes table, which logs the time, the vote direction (up or down), the client IP address (using PHP: $ip = $_SERVER['REMOTE_ADDR'];), and of course the ID of the video in question. Adding votes is as simple as (pseudocode): JavaScript onClick(vote(a,b,c,d)) passes variables to a PHP insertion script via Ajax, and finally we replace the voting buttons with a "Thank You For Voting" message.

    THE PROBLEM: If you reload/refresh the page after voting, you can vote again, and again, and again - you get the point.

    MY QUESTION: How do you limit the number of times a particular user votes for a particular video?

    MY THOUGHTS: Do you use cookies - add a new cookie with the ID of the video, and check for the cookie before you insert a new vote? OR, before you insert the vote, do you use the IP address and the video ID to see if this same user (IP) has voted for this same video (vidID) in the past 24 hrs (mktime), and either allow or disallow the vote insertion based on this query? OR do you just not care, and assume that most users are sane and have better things to do than refresh pages and vote repeatedly? Any suggestions or ideas welcome.
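
    One way to make the "has this IP already voted on this video?" check cheap and race-free is to let the database enforce it with a uniqueness constraint on (video_id, ip) and treat a constraint violation as "already voted". The sketch below is a minimal illustration in Python with sqlite3 standing in for MySQL; the votes table and the sample IP are invented for the example.

        # Let the database reject duplicate votes per (video_id, ip).
        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""
            CREATE TABLE votes (
                video_id  INTEGER NOT NULL,
                ip        TEXT    NOT NULL,
                direction INTEGER NOT NULL,       -- +1 = up, -1 = down
                voted_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                UNIQUE (video_id, ip)             -- one vote per IP per video
            )
        """)

        def cast_vote(video_id, ip, direction):
            try:
                con.execute("INSERT INTO votes (video_id, ip, direction) VALUES (?, ?, ?)",
                            (video_id, ip, direction))
                return True
            except sqlite3.IntegrityError:
                return False  # this IP already voted on this video

        print(cast_vote(42, "203.0.113.7", +1))  # True
        print(cast_vote(42, "203.0.113.7", -1))  # False - duplicate blocked

    IP-based limits do punish users behind shared NATs, so in practice this is usually combined with a cookie or a login requirement, as the question itself suggests.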

    Read the article

  • csv file upload and update mysql db

    - by Indra
    I am very new to PHP. I am using the following code to open a CSV file and update my database. I need to check the value of the first column of the first row of the CSV file: if it matches "some text 1" I need to run code1, if it is "some text 2" run code2, else code3. I could use an if/else condition, but since I am using a while loop it fails. Can anyone help me?

        $handle = fopen($file_tmp,"r");
        while(($fileop = fgetcsv($handle,",")) !== false)
        {
            // I need to check here
            $companycode = mysql_real_escape_string($fileop[0]);
            $Item = mysql_real_escape_string($fileop[3]);
            $pack = preg_replace('/[^A-Za-z0-9\. -]/', '', $fileop[4]);
            $lastmonth = mysql_real_escape_string($fileop[5]);
            $ltlmonth = mysql_real_escape_string($fileop[6]);
            $op = mysql_real_escape_string($fileop[9]);
            $pur = mysql_real_escape_string($fileop[10]);
            $sale = mysql_real_escape_string($fileop[12]);
            $bal = mysql_real_escape_string($fileop[17]);
            $bval = mysql_real_escape_string($fileop[18]);
            $sval = mysql_real_escape_string($fileop[19]);
            $sq1 = mysql_query("INSERT INTO `sas` (companycode,Item,pack,lastmonth,ltlmonth,op,pur,sale,bal,bval,sval)
                VALUES ('$companycode','$Item','$pack','$lastmonth','$ltlmonth','$op','$pur','$sale','$bal','$bval','$sval')");
        }
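
    The control-flow shape being asked about - read the first row once, pick a branch based on its first column, then process the remaining rows in the loop - is easier to see separated from the database calls. Below is a minimal Python csv sketch of that shape; the sample data and handler names are invented for illustration (the real code is PHP).

        # Read the first row once, choose a handler, then loop over the rest.
        import csv, io

        sample = io.StringIO("some text 1,a,b\n1,2,3\n4,5,6\n")  # stands in for the upload

        def handle_type1(row): print("code1 path:", row)
        def handle_type2(row): print("code2 path:", row)
        def handle_default(row): print("code3 path:", row)

        reader = csv.reader(sample)
        first_row = next(reader)                # consume the first row exactly once

        if first_row[0] == "some text 1":
            handler = handle_type1
        elif first_row[0] == "some text 2":
            handler = handle_type2
        else:
            handler = handle_default

        handler(first_row)                      # the first row is still processed
        for row in reader:                      # the rest of the file reuses the chosen branch
            handler(row)

    In the PHP version the same shape would be a single fgetcsv() call before the while loop to choose the branch, so the check runs once instead of on every row.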

    Read the article

  • cellForRowAtIndexPath not called for all sections

    - by Wynn
    I have a UITableView that has five sections. Just as the title describes, cellForRowAtIndexPath is only being called for the first four. All connections have been made concerning the datasource and delegate. Also, my numberOfSectionsInTableView clearly returns 5. Printing out the number of sections from within cellForRowAtIndexPath shows the correct number, thus confirming that cellForRowAtIndexPath is simply not being called for all sections. What on earth is going on? I looked pretty hard for an answer to this question but couldn't find one. If this has already been answered, please forgive me and point me in the correct direction.

    My cellForRowAtIndexPath:

        - (UITableViewCell *)tableView:(UITableView *)theTableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [theTableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
            }
            switch (indexPath.section) {
                case 0:
                    cell.textLabel.text = ticket.description;
                    break;
                case 1:
                    cell.textLabel.text = ticket.ticketStatus;
                    break;
                case 2:
                    cell.textLabel.text = ticket.priority;
                    break;
                case 3:
                    cell.textLabel.text = ticket.customerOfficePhone;
                    break;
                case 4: {
                    //This never ever gets executed
                    Comment *comment = [ticket.comments objectAtIndex:indexPath.row];
                    cell.textLabel.text = comment.commentContent;
                    break;
                }
            }
            return cell;
        }

    My numberOfSectionsInTableView:

        - (NSInteger)numberOfSectionsInTableView:(UITableView *)tableView {
            return 5;
        }

    My numberOfRowsInSection:

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            NSInteger numberOfRows;
            if (section == 4) {
                numberOfRows = [ticket.comments count];
            } else {
                numberOfRows = 1;
            }
            return numberOfRows;
        }

    Any suggestions are appreciated. Thanks in advance.

    Read the article

  • jquery autosuggest: what is wrong with this code?

    - by Abu Hamzah
    I have been struggling to make my autosuggest/autocomplete work, and it's acting strange - unless I am doing something completely silly here. Please have a look.

    1) This does not do anything - it does not work, nor fire the event:

        <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="Scripts/jquery.autocomplete.js" type="text/javascript"></script>
        <form id="form1" runat="server">
        <script type="text/javascript">
            $(document).ready(function() {
                $("#<%=txtHost.UniqueID %>").autocomplete("HostService.asmx/GetHosts", {
                    dataType: 'json',
                    contentType: "application/json; charset=utf-8",
                    parse: function(data) {
                        var rows = Array();
                        debugger
                        for (var i = 0; i < data.length; i++) {
                            rows[i] = { data: data[i], value: data[i].LName, result: data[i].LName };
                        }
                        return rows;
                    },
                    formatItem: function(row, i, max) {
                        return data.LName + ", " + data.FName;
                    }
                });
            });
        </script>

    2) This works if I remove the above code and replace it with this code:

        <script type="text/javascript">
            $(document).ready(function() {
                $("#txtHost").autocomplete("lazy blazy crazy daisy maisy ugh".split(" "));
            });
        </script>

    Any help?

    Read the article

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET-MVC in View or Controller?

    - by Nate Bross
    I have a main table, with several "child" tables: TableA, TableAChild1 and TableAChild2. I have a view which shows the information in TableA, and then has two columns of all items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field for VisibleToAll, and depending on the user role, I'd like to display either all related rows, or only related rows where VisibleToAll = true.

    This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands, the controller (limited version) looks like this:

        return View("TableADetailView", repos.GetTableA(id));

    Would something like this even work? And would it be bad - what if my DataContext gets submitted, would that delete all the rows that have VisibleToAll == false?

        var tblA = repos.GetTableA(id);
        tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
        tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
        return View("TableADetailView", tblA);

    It would also be simple to add that logic to the RenderPartial call from the main view:

        <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>

    Read the article

  • How to use a TFileStream to read 2D matrices into dynamic array?

    - by Robert Frank
    I need to read a large (2000x2000) matrix of binary data from a file into a dynamic array with Delphi 2010. I don't know the dimensions until run time. I've never read raw data like this, and don't know the IEEE formats, so I'm posting this to see if I'm on track. I plan to use a TFileStream to read one row at a time. I need to be able to read as many of these formats as possible:

        16-bit two's complement binary integer
        32-bit two's complement binary integer
        64-bit two's complement binary integer
        IEEE single precision floating-point

    For 32-bit two's complement, I'm thinking of something like the code below. Changing to Int64 and Int16 should be straightforward. How can I read the IEEE format? Am I on the right track? Any suggestions on this code, or on how to elegantly extend it for all 4 data types above? Since my post-processing will be the same after reading this data, I guess I'll have to copy the matrix into a common format when done. I have no problem just having four procedures (one for each data type) like the one below, but perhaps there's an elegant way to use RTTI, or buffers and then Move()'s, so that the same code works for all 4 data types? Thanks!

        type
          TRowData = array of Int32;

        procedure ReadMatrix;
        var
          Matrix: array of TRowData;
          NumberOfRows: Cardinal;
          NumberOfCols: Cardinal;
          CurRow: Integer;
        begin
          NumberOfRows := 20;  // not known until run time
          NumberOfCols := 100; // not known until run time
          SetLength(Matrix, NumberOfRows);
          for CurRow := 0 to NumberOfRows do
          begin
            SetLength(Matrix[CurRow], NumberOfCols);
            FileStream.ReadBuffer(Matrix[CurRow], NumberOfCols * SizeOf(Int32));
          end;
        end;
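
    The four on-disk formats listed are fixed-width records, so it can help to see them decoded outside Delphi. The short Python struct sketch below assumes little-endian data and uses made-up sample bytes; it only illustrates what the four record layouts look like, not the asker's file.

        # Decode the four record formats the question lists, assuming little-endian data.
        import struct

        # Made-up sample bytes: -2 as int16, -2 as int32, -2 as int64, 1.5 as float32.
        samples = {
            "16-bit two's complement": ("<h", struct.pack("<h", -2)),
            "32-bit two's complement": ("<i", struct.pack("<i", -2)),
            "64-bit two's complement": ("<q", struct.pack("<q", -2)),
            "IEEE single precision":   ("<f", struct.pack("<f", 1.5)),
        }

        for name, (fmt, raw) in samples.items():
            (value,) = struct.unpack(fmt, raw)
            print(f"{name}: {raw.hex()} -> {value} ({struct.calcsize(fmt)} bytes)")

        # A whole row of n 32-bit integers can be unpacked in one call:
        row = struct.unpack("<100i", struct.pack("<100i", *range(100)))

    In the Delphi code, the IEEE case would presumably just mean a row element type of Single instead of Int32, since the read only cares about the total byte count per row.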

    Read the article

  • Understanding REST through an example

    - by grifaton
    My only real exposure to the ideas of REST has been through Ruby on Rails' RESTful routing. This has suited me well for the kind of CRUD-based applications I have built with Rails, but consequently my understanding of RESTfulness is somewhat limited.

    Let's say we have a finite collection of Items, each of which has a unique ID and a number of properties, such as colour, shape, and size (which might be undefined for some Items). Items can be used by a client for a period of time, but each Item can only be used by one client at once. Access to Items is regulated by a server. Clients can request the temporary use of certain items from a server. Usually, clients will only be interested in getting access to a number of Items with particular properties, rather than getting access to specific Items. When a client requests use of a number of Items, the server responds with a list of IDs corresponding to the request, or with a response that says that the requested Items are not currently available or do not exist.

    A client can make the following kinds of request:

    - Tell me how many green triangle Items there are (in total/available).
    - Give me use of 200 large red Items.
    - I have finished with Items 21, 23, 23.
    - Add 100 new red square Items.
    - Delete 50 small green Items.
    - Modify all big yellow pentagon Items to be blue.

    The toy example above is like a resource allocation problem I have had to deal with recently. How should I go about thinking about it RESTfully?
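
    One way to ground the question is to write the operations down as resources and verbs. The sketch below is only an illustration of one possible mapping (the resource names items and allocations are invented, not part of the question), expressed as plain Python data so the pairing is explicit:

        # One hypothetical RESTful mapping of the requests above: Items are a resource,
        # and "use of some Items for a while" becomes a second resource, an allocation.
        possible_mapping = [
            # (client request,                     HTTP verb + URI sketch)
            ("count green triangles",              "GET /items?colour=green&shape=triangle&count=true"),
            ("give me use of 200 large red Items", "POST /allocations  {size: 'large', colour: 'red', quantity: 200}"),
            ("finished with Items 21, 23",         "DELETE /allocations/<allocation-id>"),
            ("add 100 new red square Items",       "POST /items  {colour: 'red', shape: 'square', quantity: 100}"),
            ("delete 50 small green Items",        "DELETE /items?size=small&colour=green&limit=50"),
            ("make big yellow pentagons blue",     "PATCH /items?size=big&colour=yellow&shape=pentagon  {colour: 'blue'}"),
        ]

        for request, mapping in possible_mapping:
            print(f"{request:40s} -> {mapping}")

    The main shift is treating the temporary use of Items as its own resource (an allocation) that is created and later deleted, rather than as a verb applied to the Items themselves.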

    Read the article

  • std::vector elements initializing

    - by Chameleon
        std::vector<int> v1(1000);
        std::vector<std::vector<int>> v2(1000);
        std::vector<std::vector<int>::const_iterator> v3(1000);

    How are the elements of these 3 vectors initialized? For int, I tested it and saw that all elements become 0. Is this standard? I believed that primitives remain undefined. I created a vector with 300,000,000 elements, gave them non-zero values, deleted it and recreated it, to avoid the OS clearing the memory for data safety. The elements of the recreated vector were 0 too. What about the iterator? Is there an initial value (0) for the default constructor, or does the initial value remain undefined? When I check this, the iterators point to 0, but this could be the OS. When I created a special object to track constructors, I saw that for the first object the vector ran the default constructor, and for all the others it ran the copy constructor. Is this standard? Is there a way to completely avoid initialization of the elements? Or must I create my own vector? (Oh my God, I always say NOT ANOTHER VECTOR IMPLEMENTATION.) I ask because I use ultra-huge sparse matrices with parallel processing, so I cannot use push_back(), and of course I don't want useless initialization when I will change the values later.

    Read the article

  • perl multithreading issue for autoincrement

    - by user3446683
    I'm writing a multi-threaded Perl script and storing the output in a CSV file. I'm trying to insert a serial-number field (sl. no.) into the CSV file for each row written, but as I'm using threads, the serial numbers overlap in most rows. Below is an idea of my code snippet.

        for ( my $count = 1 ; $count <= 10 ; $count++ ) {
            my $t = threads->new( \&sub1, $count );
            push( @threads, $t );
        }
        foreach (@threads) {
            my $num = $_->join;
        }

        sub sub1 {
            my $num   = shift;
            my $start = '...';    # distributing data based on an internal logic
            my $end   = '...';    # distributing data based on an internal logic
            my $next;
            for ( my $x = $start ; $x <= $end ; $x++ ) {
                my $count = $x + 1;
                # part of code from which I get @data which has name and age
                my $j = 0;
                if ( $x != 0 ) {
                    $count = $next;
                }
                foreach (@data) {
                    # j is required here for some extra code
                    flock( OUTPUT, LOCK_EX );
                    print OUTPUT $count . "," . $name . "," . $age . "\n";
                    flock( OUTPUT, LOCK_UN );
                    $j++;
                    $count++;
                }
                $next = $count;
            }
            return $num;
        }

    I need the count to be incremented as the serial number for the rows that get inserted into the CSV file. Any help would be appreciated.
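
    The overlap happens because each thread keeps its own $count, so the sequences collide. The usual shape of a fix is a single shared counter incremented under the same lock that guards the write. Below is a small Python threading sketch of that pattern with toy data (the real script is Perl; nothing here is the asker's code):

        # One shared counter, bumped under the same lock that serializes the writes,
        # so every row gets a unique, gap-free serial number.
        import threading

        lock = threading.Lock()
        counter = 0
        rows = []  # stands in for the CSV output file

        def worker(names):
            global counter
            for name in names:
                with lock:                       # take the lock once per row
                    counter += 1
                    rows.append(f"{counter},{name}")

        threads = [threading.Thread(target=worker, args=([f"user{i}a", f"user{i}b"],))
                   for i in range(10)]
        for t in threads: t.start()
        for t in threads: t.join()

        print(len(rows), "rows, serial numbers 1 ..", counter)

    In Perl the corresponding tools would be a threads::shared counter incremented inside the existing flock'd section instead of a per-thread $count, though this sketch only illustrates the shape.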

    Read the article

  • iPhone Development - CoreData runtime error

    - by Mustafa
    I'm facing a strange Core Data issue. Here's the log:

        2010-04-07 15:59:36.913 MyProject[263:207] <MyEntity: 0x180370> (entity: MyEntity; id: 0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> ; data: <fault>)
        2010-04-07 15:59:36.918 MyProject[263:207] *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> has been invalidated.'

    I have a hierarchy of UITableViewControllers that use NSFetchedResultsController to populate the table, and when a particular row is selected, the detail view is shown:

        UITableView (MyMainEntity)
            UITableView (MyEntity)
                UITableView (MyEntity) detail view

    Both the MyMainEntity UITableView and the MyEntity UITableView use an NSFetchedResultsController to show the records. Sometimes it crashes when I'm scrolling the table view, and sometimes it crashes when I try to open the detail view. I can navigate to the MyEntity detail view multiple times before the application crashes. What does this error mean? And how can I fix it!?

    Read the article

  • Is Subversion's 'Lazy Copy' still lazy when overwriting a previously deleted file?

    - by JW
    Is Subversion's 'lazy copy' still lazy when overwriting a previously deleted file? I store my externals in a separate folder for each version; for example, for Dojo I'd have:

        webroot\
          scripts\
            dojo-v-1.0.0\
            dojo-v-1.1.0\
            etc.

    By doing this, for me at least, it feels easier to switch over to a new version. But by only adding each new version, I am not really giving svn the history it needs to do lazy copies. So one tactic I have used is to svn copy the old version over to where the new one will be, then svn delete that whole folder, then unpack my newer version into that place, then svn add the files.

    The idea is to avoid having a massive amount of duplicated data in my repo. I hope svn looks at the new files and says, "hey, I already had this once, copied, then deleted... so I am only going to lazily store the changes." That was my theory - but does it happen in practice?

    P.S. Yes, I know an alternative is to set the 'externals' properties on the folder - but that's another question.

    Read the article

  • Vector does reallocation on every push_back

    - by Amrish
    IDE - Visual Studio 2008, Visual C++. I have a custom class Class1 with a copy constructor. I also have a vector of Class1. Data is inserted using the following code:

        Class1* objClass1;
        vector<Class1> vClass1;

        for(int i = 0; i < 1000; i++)
        {
            objClass1 = new Class1();
            vClass1.push_back(*objClass1);
            delete objClass1;
        }

    Now on every insert, the vector gets reallocated and all the existing contents are copied to new locations. For example, if the vector has 5 elements and I insert the 6th one, the previous 5 elements along with the new one get copied to a new location (I figured this out by adding log statements in the copy constructors). On using reserve(), however, this does not happen, as expected. I have the following questions:

    1) Is it mandatory to always use the reserve statement?
    2) Does the vector do a reallocation every time I do a push_back, or does it happen because I am debugging?

    Read the article

  • avoiding code duplication in Rails 3 models

    - by Dustin Frazier
    I'm working on a Rails 3.1 application where there are a number of different enum-like models that are stored in the database. There is a lot of identical code in these models, as well as in the associated controllers and views. I've solved the code duplication for the controllers and views via a shared parent controller class and the new view/layout inheritance that's part of Rails 3. Now I'm trying to solve the code duplication in the models, and I'm stuck. An example of one of my enum models is as follows:

        class Format < ActiveRecord::Base
          has_and_belongs_to_many :videos

          attr_accessible :name
          validates :name, presence: true, length: { maximum: 20 }

          before_destroy :verify_no_linked_videos

          def verify_no_linked_videos
            unless self.videos.empty?
              self.errors[:base] << "Couldn't delete format with associated videos."
              raise ActiveRecord::RecordInvalid.new self
            end
          end
        end

    I have four or five other classes with nearly identical code (the association declaration being the only difference). I've tried creating a module with the shared code that they all include (which seems like the Ruby Way), but much of the duplicated code relies on ActiveRecord, so the methods I'm trying to use in the module (validate, attr_accessible, etc.) aren't available. I know about ActiveModel, but that doesn't get me all the way there. I've also tried creating a common, non-persistent parent class that subclasses ActiveRecord::Base, but all of the code I've seen to accomplish this assumes that you won't have subclasses of your non-persistent class that do persist. Any suggestions for how best to avoid duplicating these identical lines of code across many different enum models?

    Read the article
