Search Results

Search found 17867 results on 715 pages for 'delete row'.

Page 631/715

  • Hang during databinding of large amount of data to WPF DataGrid

    - by nihi_l_ist
    I'm using the WPFToolkit DataGrid control and do the binding the following way: <WpfToolkit:DataGrid x:Name="dgGeneral" SelectionMode="Single" SelectionUnit="FullRow" AutoGenerateColumns="False" CanUserAddRows="False" CanUserDeleteRows="False" Grid.Row="1" ItemsSource="{Binding Path=Conversations}" > public List<CONVERSATION> Conversations { get { return conversations; } set { if (conversations != value) { conversations = value; NotifyPropertyChanged("Conversations"); } } } public event PropertyChangedEventHandler PropertyChanged; public void NotifyPropertyChanged(string propertyName) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } public void GenerateData() { BackgroundWorker bw = new BackgroundWorker(); bw.WorkerSupportsCancellation = bw.WorkerReportsProgress = true; List<CONVERSATION> list = new List<CONVERSATION>(); bw.DoWork += delegate { list = RefreshGeneralData(); }; bw.RunWorkerCompleted += delegate { try { Conversations = list; } catch (Exception ex) { CustomException.ExceptionLogCustomMessage(ex); } }; bw.RunWorkerAsync(); } Then in the main window I call GenerateData() after setting the DataContext of the window to an instance of the class containing GenerateData(). RefreshGeneralData() returns the list of data I want, and it returns it fast. Overall there are about 2000 records and 6 columns (I'm not posting the code I used during the grid's initialization, because I don't think it can be the reason), and the grid hangs for almost 10 seconds!
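
    One thing worth checking here, sketched below rather than asserted: if row virtualization is effectively disabled (for example because the grid is measured with unbounded height, or auto-sized columns force every row to be built up front), binding even 2000 rows can stall the UI thread for seconds. This is a minimal sketch only; the attached properties are standard WPF ones and are assumed to reach the toolkit grid's rows panel, and the MaxHeight value is an arbitrary illustration.

        // Sketch: ask WPF to create only the visible rows (standard attached properties).
        VirtualizingStackPanel.SetIsVirtualizing(dgGeneral, true);
        VirtualizingStackPanel.SetVirtualizationMode(dgGeneral, VirtualizationMode.Recycling);

        // Virtualization only helps when the grid has a bounded viewport; a grid hosted in a
        // StackPanel or an auto-sized row is measured with infinite height and materializes
        // every row at once. The value below is an assumption for illustration.
        dgGeneral.MaxHeight = 600;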

    Read the article

  • NHibernate many-to-many mapping

    - by Scozzard
    Hi, I am having an issue with many-to-many mapping using NHibernate. Basically I have 2 classes in my object model (Scenario and Skill) mapping to three tables in my database (Scenario, Skill and ScenarioSkill). The ScenarioSkills table just holds the IDs of the Skill and Scenario tables (SkillID, ScenarioID). In the object model a Scenario has a couple of general properties and a list of associated skills (IList) that is obtained from the ScenarioSkills table. There is no associated IList of Scenarios for the Skill object. The mapping from Scenario and Skill to ScenarioSkill is a many-to-many relationship: Scenario * --- * ScenarioSkill * --- * Skill. I have mapped the lists as bags, as I believe this is the best option from what I have read. The mappings are as follows. Within the Scenario class: <bag name="Skills" table="ScenarioSkills"> <key column="ScenarioID" foreign-key="FK_ScenarioSkill_ScenarioID"/> <many-to-many class="Domain.Skill, Domain" column="SkillID" /> </bag> And within the Skill class: <bag name="Scenarios" table="ScenarioSkills" inverse="true" access="noop" cascade="all"> <key column="SkillID" foreign-key="FK_ScenarioSkill_SkillID" /> <many-to-many class="Domain.Scenario, Domain" column="ScenarioID" /> </bag> Everything works fine, except when I try to delete a skill: it cannot do so, as there is a reference constraint on the SkillID column of the ScenarioSkill table. Can anyone help me? I am using NHibernate 2 on a C# ASP.NET 3.5 web application solution.
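
    A minimal sketch of the usual way around such a constraint, assuming the class and property names from the mappings above and a hypothetical skillId variable: because the Scenario side owns the association (the Skill bag is inverse), the join-table rows have to be removed through each scenario's Skills collection before the skill itself is deleted.

        // Sketch (not a drop-in fix): detach the skill from every scenario that
        // references it, then delete the skill, all in one transaction.
        using (var tx = session.BeginTransaction())
        {
            var skill = session.Get<Domain.Skill>(skillId); // skillId is hypothetical

            var scenarios = session
                .CreateQuery("from Domain.Scenario s where :skill in elements(s.Skills)")
                .SetParameter("skill", skill)
                .List<Domain.Scenario>();

            foreach (var scenario in scenarios)
                scenario.Skills.Remove(skill); // removes the ScenarioSkills rows via the owning side

            session.Delete(skill);
            tx.Commit();
        }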

    Read the article

  • Approach For Syncing One SharePoint List With One or More SharePoint Lists

    - by plattnum
    What would be the best approach or strategy for configuring, customizing or developing in SharePoint a solution that allows me to keep one or more SharePoint lists in sync with a SharePoint list I have designated as a master or parent list? I would like to be able to create a master/parent list of some information that can be extended or used by different parts of the organization without them being able to CRUD any items on the actual columns of the master list. (I have seen some commercial web parts that offer column security on SharePoint lists, and although that's one way of potentially meeting my needs I would like to explore other options.) Scenario: I have a list called FOO: FOO Title Description I would like to create a new list BAR based off of FOO (BAR is managed by a sub-organization that doesn't have access to the FOO list): BAR FOO.Title (Read-Only) FOO.Description (Read-Only) NewColumn1 NewColumn2 Actions: Create - if a new item is entered in FOO, I would like the new item added to BAR. Read - N/A. Update - if the title or description is changed in FOO, I would like it changed in BAR. Delete - no deletes in this scenario (deletes are handled by the business with a status column). Templates with content extraction offer me this, but it's a one-time shot at list creation. Just not sure what the best approach or strategy would be for this in MOSS 2007. Thanks!

    Read the article

  • How to sort the items in a ListBox alphabetically?

    - by user2745378
    I need to sort the items in the ListBox alphabetically when the sort button is clicked (I have a sort button in the app bar), but I don't know how to achieve this. Here is my XAML; all help will be much appreciated. <phone:PhoneApplicationPage.Resources> <DataTemplate x:Key="ProjectTemplate"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="400" /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="1" Text="{Binding Name}" Style="{StaticResource PhoneTextLargeStyle}" /> </Grid> </DataTemplate> </phone:PhoneApplicationPage.Resources> <Grid x:Name="ContentPanel" Grid.Row="1" Margin="0,0,12,0"> <ListBox x:Name="projectList" ItemsSource="{Binding Items}" SelectionChanged="ListBox_SelectionChanged" ItemTemplate="{StaticResource ProjectTemplate}" /> </Grid> Here's my ViewModel: namespace PhoneApp.ViewModels { public class ProjectsViewModel: ItemsViewModelBase<Project> { public ProjectsViewModel(TaskDataContext taskDB) : base(taskDB) { } public override void LoadData() { base.LoadData(); var projectsInDB = _taskDB.Projects.ToList(); Items = new ObservableCollection<Project>(projectsInDB); } public override void AddItem(Project item) { _taskDB.Projects.InsertOnSubmit(item); _taskDB.SubmitChanges(); Items.Add(item); } public override void RemoveItem(int id) { var projects = from p in Items where p.Id == id select p; var item = projects.FirstOrDefault(); if (item != null) { var tasks = (from t in App.TasksViewModel.Items where t.ProjectId == item.Id select t).ToList(); foreach (var task in tasks) App.TasksViewModel.RemoveItem(task.Id); Items.Remove(item); _taskDB.Projects.DeleteOnSubmit(item); _taskDB.SubmitChanges(); } } } } I have added the ViewModel C# code above.
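
    One possible shape for the sort-button handler, as a sketch only: the handler name is made up, and it assumes the page's DataContext is the ProjectsViewModel shown above, that Items is the bound ObservableCollection, that Project exposes the Name property used in the template, and that System.Linq is imported for OrderBy.

        // Hypothetical app-bar button handler: reorder the bound collection in place.
        private void SortButton_Click(object sender, EventArgs e)
        {
            var vm = (ProjectsViewModel)DataContext;

            // Order the existing items alphabetically by Name.
            var sorted = vm.Items
                .OrderBy(p => p.Name, StringComparer.CurrentCultureIgnoreCase)
                .ToList();

            // Rebuild the ObservableCollection so the ListBox picks up the new order.
            vm.Items.Clear();
            foreach (var project in sorted)
                vm.Items.Add(project);
        }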

    Read the article

  • Get the src part of a string [duplicate]

    - by Kay Lakeman
    This question already has an answer here: Grabbing the href attribute of an A element (7 answers). First post ever here, and I really hope you can help. I use a database where a large piece of HTML is stored, and now I just need the src part of the image tag. I already found a thread, but it just doesn't do the trick. My code: Original string: <p><img alt=\"\" src=\"http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png\" style=\"width: 80px; height: 160px;\" /></p> How I start: $image = strip_tags($row['information'], '<img>'); echo stripslashes($image); This returns: <img alt="" src="http://domain.nl/cms/ckeditor/filemanager/userfiles/background.png" style="width: 80px; height: 160px;" /> Next step: extract the src part: preg_match('/< *img[^>]*src *= *["\']?([^"\']*)/i', $image, $matches); echo $matches ; This last echo returns: Array What is going wrong? Thanks in advance for your answer.

    Read the article

  • Uncommitted reads in SSIS

    - by OldBoy
    I'm trying to debug some legacy Integration Services code, and really want some confirmation on what I think the problem is: We have a very large data task inside a control flow container. This control flow container is set up with TransactionOption = supported - i.e. it will 'inherit' transactions from parent containers, but none are set up here. Inside the data flow there is a call to a stored proc that writes to a table with pseudo code something like: "If a record doesn't exist that matches these parameters then write it" Now, the issue is that there are three records being passed into this proc all with the same parameters, so logically the first record doesn't find a match and a record is created. The second record (with the same parameters) also doesn't find a match and another record is created. My understanding is that the first 'record' passed to the proc in the dataflow is uncommitted and therefore can't be 'read' by the second call. The upshot being that all three records create a row, when logically only the first should. In this scenario am I right in thinking that it is the uncommitted transaction that stops the second call from seeing the first? Even setting the isolation level on the container doesn't help because it's not being wrapped in a transaction anyway.... Hope that makes sense, and any advice gratefully received. Work-arounds confer god-like status on you.

    Read the article

  • Issue with GCD and too many threads

    - by dariaa
    I have an image loader class which, provided with an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple: - (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion { dispatch_async(_queue, ^{ // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{ UIImage *image = nil; NSURL *URL = [NSURL URLWithString:URLString]; if (URL) { image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]]; } dispatch_async(dispatch_get_main_queue(), ^{ completion(image, URLString); }); }); } When I replace dispatch_async(_queue, ^{ with the commented-out dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{ images load much faster, which is quite logical (before that, images were loaded one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, I call downloadImageWithURL:completion: for all of them, and when I use the global queue instead of _queue my app eventually crashes and I see there are 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought that GCD handles all the threading and makes sure the number of threads is not huge, but if that's not the case, is there any way I can influence the number of threads?

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
    SELECT * from `employees` a LEFT JOIN (SELECT phone1 p1, count(*) c FROM `employees` GROUP BY phone1) b ON a.phone1 = b.p1; I'm not sure if it is this query in particular that has the problem; I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete in about 4 minutes on a 10,000-row table, but performance drops exponentially with larger tables. Remotely it will lose the connection to the server, and locally it brings my system to its knees and seems to go on forever. This query is only a smaller step I was trying when a larger query couldn't complete. Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times that the street address it is attached to occurs, then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in that group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic, and this doesn't seem like a crazy query to run. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or db?

    Read the article

  • Hadoop File Read

    - by user3684584
    This is the Hadoop Distributed Cache wordcount example in Hadoop 2.2.0. I copied a file into the HDFS filesystem to be used inside the setup of the mapper class: protected void setup(Context context) throws IOException,InterruptedException { Path[] uris = DistributedCache.getLocalCacheFiles(context.getConfiguration()); cacheData=new HashMap(); for(Path urifile: uris) { try { BufferedReader readBuffer1 = new BufferedReader(new FileReader(urifile.toString())); String line; while ((line=readBuffer1.readLine())!=null) { System.out.println("**************"+line); cacheData.put(line,line); } readBuffer1.close(); } catch (Exception e) { System.out.println(e.toString()); } } } Inside the driver main class: Configuration conf = new Configuration(); String[] otherArgs = new GenericOptionsParser(conf,args).getRemainingArgs(); if (otherArgs.length != 3) { System.err.println("Usage: wordcount <in> <out>"); System.exit(2); } Job job = new Job(conf, "word_count"); job.setJarByClass(WordCount.class); job.setMapperClass(Map.class); job.setReducerClass(Reduce.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(IntWritable.class); FileInputFormat.addInputPath(job, new Path(otherArgs[0])); Path outputpath=new Path(otherArgs[1]); outputpath.getFileSystem(conf).delete(outputpath,true); FileOutputFormat.setOutputPath(job,outputpath); System.out.println("CachePath****************"+otherArgs[2]); DistributedCache.addCacheFile(new URI(otherArgs[2]),job.getConfiguration()); System.exit(job.waitForCompletion(true) ? 0 : 1); But I am getting the exception java.io.FileNotFoundException: file:/home/user12/tmp/mapred/local/1408960542382/cache (No such file or directory), so the cache functionality is not working properly. Any idea?

    Read the article

  • File.Replace throwing IOException

    - by WebDevHobo
    I have an app that can modify images. In some cases this makes the file size smaller, in some cases bigger. The program doesn't have an option to "not replace the file if the result has a bigger file size", so I wrote a little C# app to try and solve this. Instead of overwriting the files, I make the app write the result to a folder under the current one and name that folder Test. The C# app I wrote grabs the contents of both folders and puts the full path to the file(s) in two List objects. I then compare and replace. The replacing isn't working, however. I get the following IOException: Unable to remove the file to be replaced. The location is on an external hard drive, on which I have full rights. Now, I know I can just do File.Delete and File.Move in that order, but this exception has gotten me interested in why this particular setup won't work. Here's the source code: http://pastebin.com/4Vq82Umu And yes, the file specified as the last argument of the Replace function does exist.
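
    For reference, a minimal sketch of the size check with an explicit delete-and-move fallback when Replace throws. The method name and paths are hypothetical, System.IO is assumed to be imported, and note that File.Replace requires both files to live on the same volume, which is one more thing worth ruling out.

        // Sketch: keep the original unless the processed file is smaller; if Replace
        // refuses to remove the original, fall back to delete + move.
        static void ReplaceWithSmaller(string originalPath, string candidatePath)
        {
            var original = new FileInfo(originalPath);
            var candidate = new FileInfo(candidatePath);

            if (candidate.Length >= original.Length)
                return; // processed file is not smaller, keep the original

            try
            {
                // No backup file; both paths must be on the same volume.
                File.Replace(candidatePath, originalPath, null);
            }
            catch (IOException)
            {
                File.Delete(originalPath);
                File.Move(candidatePath, originalPath);
            }
        }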

    Read the article

  • Deleting an option in a <select> list

    - by Tyzak
    Hello, I have the following function: function delete_auswahl() { var anzahl =document.getElementById ("warenkorbfeld").length ; for (var i =0; i<=anzahl; i++) { if (document.getElementById ("warenkorbfeld").options[i].selected==true) { if (document.getElementById ("warenkorbfeld").options[i].id == "Margherita" ) gesamtbetrag = gesamtbetrag - 4; if (document.getElementById ("warenkorbfeld").options[i].id=="Salami" ) gesamtbetrag = gesamtbetrag - 4.50; if (document.getElementById ("warenkorbfeld").options[i].id=="Hawaii" ) gesamtbetrag = gesamtbetrag - 5.50; document.getElementById ("warenkorbfeld").options[i]=null; i--;// stay at the same index, since the next field moves up } } document.getElementById('gesamtbetrag').innerHTML=gesamtbetrag ; } Before that, I added values with function hinzu (pizza) { NeuerEintrag = new Option(pizza, pizza, false, false); document.getElementById("warenkorbfeld").options[document.getElementById("warenkorbfeld").length] = NeuerEintrag ; if (pizza=="Margherita") { gesamtbetrag = gesamtbetrag + 4; } if (pizza=="Salami") { gesamtbetrag = gesamtbetrag + 4.50; } if (pizza=="Hawaii") { gesamtbetrag = gesamtbetrag + 5.50; } document.getElementById('gesamtbetrag').innerHTML=gesamtbetrag ; } Now the delete function doesn't subtract the price; despite this, everything else works. What's wrong with this expression? if (document.getElementById ("warenkorbfeld").options[i].id == "Margherita" ) gesamtbetrag = gesamtbetrag - 4; Thanks in advance

    Read the article

  • How to optimize a PostgreSQL server for a "write once, read many"-type infrastructure?

    - by mhu
    Greetings, I am working on a piece of software that logs entries (and related tagging) in a PostgreSQL database for storage and retrieval. We never update any data once it has been inserted; we might remove it when the entry gets too old, but this is done at most once a day. Stored entries can be retrieved by users. The insertion of new entries can happen rather fast and regularly, thus the database will commonly hold several million elements. The tables used are pretty simple: one table for ids, raw content and insertion date; and one table storing tags and their values associated to an id. User searches mostly concern tag values, so SELECTs usually consist of JOIN queries on ids across the two tables. To sum it up: 2 tables, lots of INSERTs, no UPDATEs, some DELETEs (once a day at most), some user-generated SELECTs with JOINs, and a huge data set. What would an optimal server configuration (software and hardware; I assume for example that RAID10 could help) be for my PostgreSQL server, given these requirements? By optimal, I mean one that allows SELECT queries to take a reasonably small amount of time. I can provide more information about the current setup (like tables, indexes ...) if needed.

    Read the article

  • I read that for RESTful websites it is not good to use $_SESSION. Why is it not good? How then do I

    - by keisimone
    I read that it is not good to use $_SESSION. http://www.recessframework.org/page/towards-restful-php-5-basic-tips I am creating a WEBSITE, not a web service, in PHP, and I am trying to make it more RESTful, at least in spirit. Right now I am rewriting all the actions to use form tags with POST and adding a hidden value called _method, which would be "delete" for the delete action and "put" for the update action. However, I am not sure why it is recommended NOT to use $_SESSION. I would like to know why, and what I can do to improve. To allow easy authorization checking, what I did was: after logging in the user, the username is stored in $_SESSION. Every time the user navigates to a page, the page checks whether the username is stored inside $_SESSION and then, based on $_SESSION, retrieves all the info including privileges from the database and evaluates the authorization to access the page based on the info retrieved. Is the way I am implementing this bad? Not RESTful? How do I improve performance and security? Thank you.

    Read the article

  • Can't access dynamically generated div with jQuery

    - by Bug Magnet
    Still trying to get my head around jQuery and dynamically generated content. I have the following code that is dynamically generated with jQuery whenever a user clicks on an 'Add' button: <div id="practices_div" style="border-top: 1px dotted gray"> <br/> <a href="#" class="remove_link" rel="practices_div">Remove</a> ....content goes here </div> The jQuery code that dynamically loads the above content is as follows: $.ajax({ url: '/addAjax.html', success: function(response) { $('#container').append(response); } }); What I'm trying to do is this: when a user clicks on the Remove link, jQuery should hide and delete the dynamically generated content from the page. The code that does this is as follows: $('.remove_link').live('click', function() { if (confirm("Remove this item?")) { $('#practices_div').fadeOut('medium', function() { $(this).remove(); }); } return false; }); So, when the content is dynamically loaded via Ajax and I click on the Remove link, the confirmation dialogue box is displayed, and if I confirm, nothing happens. For some reason, jQuery is not able to find the dynamically generated #practices_div. Any idea what I may be doing wrong?

    Read the article

  • How can I introduce a regex action to match the first element in a Catalyst URI?

    - by RET
    Background: I'm using a CRUD framework in Catalyst that auto-generates forms and lists for all tables in a given database. For example: /admin/list/person or /admin/add/person or /admin/edit/person/3 all dynamically generate pages or forms as appropriate for the table 'person'. (In other words, Admin.pm has actions edit, list, add, delete and so on that expect a table argument and possibly a row-identifying argument.) Question: In the particular application I'm building, the database will be used by multiple customers, so I want to introduce a URI scheme where the first element is the customer's identifier, followed by the administrative action/table etc: /cust1/admin/list/person /cust2/admin/add/person /cust2/admin/edit/person/3 This is for "branding" purposes, and also to ensure that bookmarks or URLs passed from one user to another do the expected thing. But I'm having a lot of trouble getting this to work. I would prefer not to have to modify the subs in the existing framework, so I've been trying variations on the following: sub customer : Regex('^(\w+)/(admin)$') { my ($self, $c, @args) = @_; #validation of captured arg snipped.. my $path = join('/', 'admin', @args); $c->request->path($path); $c->dispatcher->prepare_action($c); $c->forward($c->action, $c->req->args); } But it just will not behave. I've used regex matching actions many times, but putting one in the very first 'barrel' of a URI seems unusually traumatic. Any suggestions gratefully received.

    Read the article

  • parsing python to csv

    - by user185955
    I'm trying to download some game stats to do some analysis; the only problem is that each season the data isn't 100% consistent. I grab the JSON file from the site, then want to save it to a CSV with the first line of the CSV containing the heading for each column, so the heading would essentially be the key from the Python data type. #!/usr/bin/env python import requests import json import csv base_url = 'http://www.afl.com.au/api/cfs/afl/' token_url = base_url + 'WMCTok' player_url = base_url + 'matchItems/round' def printPretty(data): print(json.dumps(data, sort_keys=True, indent=2, separators=(',', ': '))) session = requests.Session() # session makes it simple to use the token across the requests token = session.post(token_url).json()['token'] # get the token session.headers.update({'X-media-mis-token': token}) # set the token Season = 2014 Roundno = 4 if Roundno<10: strRoundno = '0'+str(Roundno) else: strRoundno = str(Roundno) # get some data (could easily be a for loop, might want to put in a delay using Sleep so that you don't get IP blocked) data = session.get(player_url + '/CD_R'+str(Season)+'014'+strRoundno) # print everything printPretty(data.json()) with open('stats_game_test.csv', 'w', newline='') as csvfile: spamwriter = csv.writer(csvfile, delimiter="'",quotechar='|', quoting=csv.QUOTE_ALL) for profile in data.json()['items']: spamwriter.writerow(['%s' %(profile)]) #for key in data.json().keys(): # print("key: %s , value: %s" % (key, data.json()[key])) The above code grabs the JSON and writes it to a CSV, but it puts the key in each individual cell next to the value (e.g. 'venueId': 'CD_V190'); the key needs to appear only across the first row as a heading. It gives me a CSV file with data in the cells like this:

        Column    A                        B
                  'tempInCelsius': 17.0    'totalScore': 32
                  'tempInCelsius': 16.0    'totalScore': 28

    What I want is the data like this:

        tempInCelsius    totalScore
        17               32
        16               28

    As I mentioned up the top, the data isn't always consistent, so if I define which fields to grab with spamwriter.writerow([profile['tempInCelsius'], profile['totalScore']]) it will error out on certain data grabs. This is why I'm now trying the above method, so it just grabs everything regardless of what data is there.

    Read the article

  • Perl multithreading issue for autoincrement

    - by user3446683
    I'm writing a multi-threaded Perl script and storing the output in a CSV file. I'm trying to insert a field called sl.no. (serial number) in the CSV file for each row entered, but because I'm using threads, the serial numbers overlap in most rows. Below is an idea of my code snippet. for ( my $count = 1 ; $count <= 10 ; $count++ ) { my $t = threads->new( \&sub1, $count ); push( @threads, $t ); } foreach (@threads) { my $num = $_->join; } sub sub1 { my $num = shift; my $start = '...'; #distributing data based on an internal logic my $end = '...'; #distributing data based on an internal logic my $next; for ( my $x = $start ; $x <= $end ; $x++ ) { my $count = $x + 1; #part of code from which I get @data which has name and age my $j = 0; if ( $x != 0 ) { $count = $next; } foreach (@data) { #j is required here for some extra code flock( OUTPUT, LOCK_EX ); print OUTPUT $count . "," . $name . "," . $age . "\n"; flock( OUTPUT, LOCK_UN ); $j++; $count++; } $next = $count; } return $num; } I need the count, which is the serial number for the rows inserted into the CSV file, to be incremented correctly. Any help would be appreciated.

    Read the article

  • NHibernate: What are child sessions and why and when should I use them?

    - by stefando
    In the comments on Ayende's blog post about auditing in NHibernate there is a mention of the need to use a child session: session.GetSession(EntityMode.Poco). As far as I understand it, it has something to do with the order of the SQL operations which session.Flush will emit. (For example: if I wanted to perform some delete operation in the pre-insert event but the session was already done with its deleting operations, I would need some way to inject them in.) However, I did not find documentation about this feature and behavior. Questions: Is my understanding of child sessions correct? How and in which scenarios should I use them? Are they documented somewhere? Could they be used for session "scoping"? (For example: I open the master session, which will hold some data, and then I create 2 child sessions from the master one. I'd expect that the two child scopes will be separated but that they will share objects from the master session cache. Is this the case?) Are they first-class citizens in NHibernate or are they just a hack to support some edge-case scenarios? Thanks in advance for any info.
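
    For context, a sketch of the kind of listener those comments appear to be describing. This is an illustration rather than the blog's actual code: AuditEntry and its properties are made up, and the explicit Flush is an assumption about when the extra INSERT should be issued. The child session shares the parent session's connection and transaction, which is why it can write from inside an event listener without disturbing the flush that triggered it.

        // Illustrative NHibernate post-insert listener using a child session for auditing.
        public class AuditEventListener : IPostInsertEventListener
        {
            public void OnPostInsert(PostInsertEvent @event)
            {
                if (@event.Entity is AuditEntry)
                    return; // don't audit the audit rows themselves

                // Child session: piggybacks on the parent's connection and transaction.
                var childSession = @event.Session.GetSession(EntityMode.Poco);

                childSession.Save(new AuditEntry
                {
                    EntityName = @event.Entity.GetType().FullName,
                    ChangedAt = DateTime.UtcNow
                });

                // Assumption: flush explicitly so the audit INSERT happens here rather than
                // being left to the parent session's flush ordering.
                childSession.Flush();
            }
        }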

    Read the article

  • CSS: 100% of container height without modifying the container

    - by Rena
    Yeah, this ugly question again. I'm writing some HTML that gets inserted into a page. I have no control over the rest of the page. The structure is something like: <table> <tr> <td rowspan="2">left column</td> <td height="1">top row above content</td> </tr> <tr> <td height="220">my content here</td> </tr> </table> I can write whatever I want into that table cell (including style tags to pack in my CSS), but I can't touch anything outside of it, which means I can't set the height of any parent element (including html and body), add a doctype (it has none), etc. - that already kills just about every solution I can find (all seem to be "add a doctype" and/or "give the parent container a fixed height"). What I want to do is simply have a <div> fill the entire cell. Width is no problem, but unsurprisingly height is being a massive pain. Writing "height: 100%" doesn't do anything unless the container has a fixed height (the height="220" attribute apparently doesn't count) or the div uses absolute positioning - and then it seems to want to use 100% of the window's height (and even width) instead of the cell's. The root of the problem is that the left column varies in height, as does the content, and when the left column cell is taller than the content, the content won't expand to fill the cell it's in. If I set a fixed height for the content, it'll be much larger than necessary most of the time, and if I don't, it doesn't take up all of the cell and leaves an ugly gap at the bottom.

    Read the article

  • Is there any function in Matlab for changing the form of a matrix?

    - by niko
    Hi, I have to get an unknown matrix by changing the form of a known matrix, considering the following rules: H = [-P'|I] G = [I|P] where H is the known matrix, G is the unknown matrix which has to be calculated, and I is the identity matrix. So for example if we had a matrix H = [1 1 1 1 0 0; 0 0 1 1 0 1; 1 0 0 1 1 0] its form has to be changed to H = [1 1 1 1 0 0; 0 1 1 0 1 0; 1 1 0 0 0 1] so -P' = [1 1 1; 0 1 0; 1 1 0] and, in the case of binary matrices, -P = P, therefore G = [1 0 0 1 1 1; 0 1 0 0 1 0; 0 0 1 1 1 0] I know how to solve it on paper by performing basic row operations, but I don't know whether there is any function already written in Matlab to calculate G from H or H from G by considering the above rules. I would be very thankful if anyone could suggest a method for solving the given problem. Thank you.

    Read the article

  • Visual Studio 2008 closes unexpectedly

    - by Jose
    I don't know if I can really get an answer to this question, but it really irks me and I would like to know if someone has an idea how to arrive at an answer. I have a pretty large solution in VS 2008, and maybe every week or every other week, whenever I click Properties to get to the project properties, the IDE closes without warning. After that happens it will close EVERY time I try to view the properties. At that point I try deleting the .suo file, I resize the IDE, I close the tabs within the project, I restore default VS settings (when I'm desperate). Eventually, 20-30 minutes later, I can actually view the properties. I haven't figured out exactly what fixes it; it seems to be different every time. Once it's "fixed" I can't break it again, so I can't figure out what "fixed" it. This seems to be project specific, because I can view the properties of other projects while this project is misbehaving. I guess my first question is: does VS log reasons for closing unexpectedly? Can I find out what the offending reason behind this is? The main frustration is that I don't know the cause, nor the cure. Any ideas?

    Read the article

  • What, if any, printable character did a user type based on the values in a given System.Windows.Form

    - by Corey Trager
    As a workaround for a problem, I think I have to handle KeyDown events to get the printable character the user actually typed. KeyDown supplies me with a KeyEventArgs object with the properties KeyCode, KeyData, KeyValue, Modifiers, Alt, Shift, Control. My first attempt was just to consider the KeyCode to be the ASCII code, but the KeyCode for the Delete key is 46, which in ASCII is a period ("."), so I end up printing a period when the user types the Delete key. So I know my logic is inadequate. (For those who are curious, the problem is that I have my own ComboBox in a DataGridView's control collection, and somehow SOME characters I type don't produce the KeyPress and TextChanged ComboBox events. These characters include Q, $, %.... This code will reproduce the problem: generate a Windows Forms app, replace the ctor with this code, run it, and try typing the letter Q into the two comboboxes.) public partial class Form1 : Form { ComboBox cmbInGrid; ComboBox cmbNotInGrid; DataGridView grid; public Form1() { InitializeComponent(); grid = new DataGridView(); cmbInGrid = new ComboBox(); cmbNotInGrid = new ComboBox(); cmbInGrid.Items.Add("a"); cmbInGrid.Items.Add("b"); cmbNotInGrid.Items.Add("c"); cmbNotInGrid.Items.Add("d"); this.Controls.Add(cmbNotInGrid); this.Controls.Add(grid); grid.Location = new Point(0, 100); this.grid.Controls.Add(cmbInGrid); } }
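
    One possible shape for that workaround, sketched under the assumption that calling into Win32 is acceptable: ToUnicode translates a virtual-key code plus the current keyboard state into the character it would produce, which distinguishes the Delete key from '.' even though both map to 46. The helper name is made up, and System.Text, System.Windows.Forms and System.Runtime.InteropServices are assumed to be imported.

        [DllImport("user32.dll")]
        static extern bool GetKeyboardState(byte[] keyState);

        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern int ToUnicode(uint virtKey, uint scanCode, byte[] keyState,
                                    StringBuilder buffer, int bufferSize, uint flags);

        // Hypothetical helper: returns the printable character a KeyDown would produce,
        // or null for keys like Delete, arrows and dead keys.
        static char? PrintableCharFromKeyDown(KeyEventArgs e)
        {
            var keyboardState = new byte[256];
            if (!GetKeyboardState(keyboardState))
                return null;

            var buffer = new StringBuilder(2);
            int rc = ToUnicode((uint)e.KeyCode, 0, keyboardState, buffer, buffer.Capacity, 0);

            if (rc > 0 && !char.IsControl(buffer[0]))
                return buffer[0]; // e.g. 'q' or 'Q' for the Q key, nothing for Delete
            return null;
        }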

    Read the article

  • Conversion failed when converting datetime from character string

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2005 Express website application. I have tried some of the fixes for this problem but my call stack differs from others. And these fixes did not fix my problem. What steps can I take to troubleshoot this? Here is my error: System.Data.SqlClient.SqlException was caught Message="Conversion failed when converting datetime from character string." Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 LineNumber=10 Number=241 Procedure="AppendDataCT" Server="\\\\.\\pipe\\772EF469-84F1-43\\tsql\\query" State=1 StackTrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at ADONET_namespace.ADONET_methods.AppendDataCT(DataTable dt, Dictionary`2 dic) in c:\Documents and Settings\Admin\My Documents\Visual Studio 2008\WebSites\Jerry\App_Code\ADONET methods.cs:line 102 And here is the related code. When I debugged this code, "dic" only looped through the 3 column names, but did not look into row values which are stored in "dt", the Data Table. public static string AppendDataCT(DataTable dt, Dictionary<string, string> dic) { if (dic.Count != 3) throw new ArgumentOutOfRangeException("dic can only have 3 parameters"); string connString = ConfigurationManager.ConnectionStrings["AW3_string"].ConnectionString; string errorMsg; try { using (SqlConnection conn2 = new SqlConnection(connString)) { using (SqlCommand cmd = conn2.CreateCommand()) { cmd.CommandText = "dbo.AppendDataCT"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; foreach (string s in dic.Keys) { SqlParameter p = cmd.Parameters.AddWithValue(s, dic[s]); p.SqlDbType = SqlDbType.VarChar; } conn2.Open(); cmd.ExecuteNonQuery(); conn2.Close(); errorMsg = "The Person.ContactType table was successfully updated!"; } } }
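
    As a troubleshooting direction only: this error typically means one of the three values forced to VarChar ends up being converted to DATETIME inside AppendDataCT with a string SQL Server cannot parse. A minimal sketch of passing that value as a typed parameter instead, assuming the stored procedure really does take a DATETIME parameter; "@ModifiedDate" is a hypothetical key name, and System.Globalization is assumed to be imported.

        // Sketch: send the date as a DateTime parameter so no string-to-datetime
        // conversion has to happen on the server. Use whichever key holds the date.
        foreach (string s in dic.Keys)
        {
            if (s == "@ModifiedDate")
            {
                DateTime parsed = DateTime.Parse(dic[s], CultureInfo.InvariantCulture);
                cmd.Parameters.Add(s, SqlDbType.DateTime).Value = parsed;
            }
            else
            {
                cmd.Parameters.Add(s, SqlDbType.VarChar).Value = dic[s];
            }
        }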

    Read the article

  • std::vector elements initializing

    - by Chameleon
    std::vector<int> v1(1000); std::vector<std::vector<int>> v2(1000); std::vector<std::vector<int>::const_iterator> v3(1000); How are the elements of these 3 vectors initialized? Regarding int, I tested it and saw that all elements become 0. Is this standard? I believed that primitives remained uninitialized. I created a vector with 300000000 elements, gave them non-zero values, deleted it and recreated it, to rule out the OS zeroing the memory for data safety; the elements of the recreated vector were 0 too. What about the iterator? Is there an initial value (0) from the default constructor, or does the initial value remain undefined? When I check this, the iterators point to 0, but this could be the OS. When I created a special object to track constructors, I saw that the vector ran the default constructor for the first object and the copy constructor for all the others. Is this standard? Is there a way to completely avoid initialization of the elements, or must I create my own vector? (Oh my God, I always say NOT ANOTHER VECTOR IMPLEMENTATION.) I ask because I use ultra-huge sparse matrices with parallel processing, so I cannot use push_back(), and of course I don't want useless initialization when I will change the values later anyway.

    Read the article

  • GridView column right-click issue

    - by peter
    I have a grid named 'GridView1' and I am displaying columns like this: <asp:GridView ID="GridView1" OnRowCommand="ScheduleGridView_RowCommand" runat="server" AutoGenerateColumns="False" Height="60px" Style="text-align: center" Width="869px" EnableViewState="False"> <Columns> <asp:BoundField HeaderText="Topic" DataField="Topic" /> <asp:BoundField DataField="Moderator" HeaderText="Moderator" /> <asp:BoundField DataField="Expert" HeaderText="Expert" /> <asp:BoundField DataField="StartTime" HeaderText="Start" > <HeaderStyle Width="175px" /> </asp:BoundField> <asp:BoundField DataField="EndTime" HeaderText="End" > <HeaderStyle Width="175px" /> </asp:BoundField> <asp:TemplateField HeaderText="Join" ShowHeader="False"> <ItemTemplate> <asp:Button ID="JoinBT" runat="server" CommandArgument="<%# Container.DataItemIndex %>" CommandName="Join" Text="Join" Width="52px" /> </ItemTemplate> <HeaderStyle Height="15px" /> </asp:TemplateField> </Columns> </asp:GridView> My requirement is that whenever we right-click on a row in the GridView, it should display three options: Join Meeting (which goes to meeting.aspx), View Details (which goes to detail.aspx) and Subscribe (subscribe.aspx), just like the View / Sort By / Refresh menu you get when right-clicking anywhere. Do we need to implement JavaScript here?

    Read the article
