Search Results

Search found 8344 results on 334 pages for 'count'.

Page 307 of 334

  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem is with respect to the writing speed of the computers (10 × 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write the data in order to speed up my data handling.

    As far as the PostgreSQL table is concerned, the overall record count is 140 million and I have 5 primary/foreign-key referring tables. I am not using joins, as they are not scalable, so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts into the table and its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id = 1   -- primary key & indexed
        select q_t from x.qt where q_id = 2     -- non-primary key but indexed
        -- (similarly, five queries)

    Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

    Total number of records: 143,700,000
    Total number of records per file: 143,700,000 / 20 = 7,185,000
    Total number of records in each file: 7,185,000 × 5 = 35,925,000

    Current PostgreSQL database config: my current machine is 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours and the performance is as follows: the total number of records written for each file is about 621,000 × 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and if it processes at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this.

    Questions:
    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks, Sree Aurovindh V
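    One direction worth sketching (this is not the poster's code; the DSN, table names, row description, and batch size are all hypothetical, and PyTables 3-style names are assumed): stream each query through a psycopg2 server-side cursor and append to PyTables in large batches, rather than doing one insert per lookup:

        import psycopg2
        import tables

        conn = psycopg2.connect("dbname=x")          # hypothetical DSN
        cur = conn.cursor(name="train_stream")       # named cursor = server-side, streams rows
        cur.execute("select * from x.train order by tr_id")

        h5 = tables.open_file("train.h5", mode="w")
        out = h5.create_table("/", "train", SomeDescription)  # SomeDescription is a placeholder

        while True:
            rows = cur.fetchmany(10000)
            if not rows:
                break
            out.append(rows)   # one buffered append per 10k rows, not one write per row
        out.flush()
        h5.close()
        conn.close()

    Batching the HDF5 appends this way usually moves the bottleneck back to the database reads, which is where the shared_buffers/effective_cache_size tuning actually helps.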


  • Array Sorting Question for News System

    - by lemonpole
    Hello all. I'm currently stuck trying to figure out how to sort my array of files. I have a simple news posting system that stores the content in separate .dat files and then stores them in an array. I numbered the files so that my array can sort them from the lowest number to the greatest; however, I have run into a small problem. To begin, here is some more information on my system so that you can understand it better. The function that gathers my files is:

        function getNewsList() {
            $fileList = array();
            // Open the actual directory
            if ($handle = opendir(ABSPATH . ADMIN . "data")) {
                // Read all files from the actual directory
                while ($file = readdir($handle)) {
                    if (!is_dir($file)) {
                        $fileList[] = $file;
                    }
                }
            }
            // Return the array.
            return $fileList;
        }

    In a separate file is the programming that processes the news post. I didn't post that code for simplicity's sake, but I will explain how the files are named. The files are numbered, and part of the post's title is used. For the numbering, I get a count of the array and add "1" as an offset. I get the title of the post, encode it to make it file-name-friendly, and limit the amount of text, so by the end of it all I end up with:

        // Make the variable that names the file that will contain the post.
        $filename = "00{$newnumrows}_{$snipEncode}";

    When running print_r on the above function I get:

        Array
        (
            [0] => 0010_Mira_mi_Soledad.dat
            [1] => 0011_WOah.dat
            [2] => 0012_Sinep.dat
            [3] => 0013_Living_in_Warfa.dat
            [4] => 0014_Hello.dat
            [5] => 001_AS.dat
            [6] => 002_ASASA.dat
            [7] => 003_SSASAS.dat
            ...
            [13] => 009_ASADADASADAFDAF.dat
        )

    And this is how my content is displayed. For some reason, according to the array sorting, 0010 comes before 001...? Is there a way I can get my array to sort 001 before 0010?
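    The names are being compared as plain strings, so "0010_..." sorts before "001_..." (the character '0' comes before '_' in ASCII). A natural-order sort compares the embedded digit runs numerically instead; a minimal sketch using PHP's built-in natsort():

        $fileList = getNewsList();
        natsort($fileList);                    // compares embedded numbers numerically: 1 < 10
        $fileList = array_values($fileList);   // reindex, since natsort preserves keys
        print_r($fileList);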


  • Code golf - hex to (raw) binary conversion

    - by Alnitak
    In response to this question asking about hex to (raw) binary conversion, a comment suggested that it could be solved in "5-10 lines of C, or any other language." I'm sure that for (some) scripting languages that could be achieved, and I would like to see how. Can we prove that comment true for C, too?

    NB: this doesn't mean hex to ASCII binary; specifically, the output should be a raw octet stream corresponding to the input ASCII hex. Also, the input parser should skip/ignore whitespace.

    edit (by Brian Campbell): May I propose the following rules, for consistency? Feel free to edit or delete these if you don't think they are helpful, but I think that since there has been some discussion of how certain cases should work, some clarification would be helpful.

    1. The program must read from stdin and write to stdout (we could also allow reading from and writing to files passed in on the command line, but I can't imagine that would be shorter in any language than stdin and stdout).
    2. The program must use only packages included with your base, standard language distribution. In the case of C/C++, this means their respective standard libraries, and not POSIX.
    3. The program must compile or run without any special options passed to the compiler or interpreter (so 'gcc myprog.c', 'python myprog.py', or 'ruby myprog.rb' are OK, while 'ruby -rscanf myprog.rb' is not allowed; requiring/importing modules counts against your character count).
    4. The program should read integer bytes represented by pairs of adjacent hexadecimal digits (upper, lower, or mixed case), optionally separated by whitespace, and write the corresponding bytes to output. Each pair of hexadecimal digits is written with the most significant nibble first.
    5. The behavior of the program on invalid input (characters besides [0-9a-fA-F \t\r\n], spaces separating the two characters of an individual byte, an odd number of hex digits in the input) is undefined; any behavior (other than actively damaging the user's computer or something) on bad input is acceptable (throwing an error, stopping output, ignoring bad characters, and treating a single character as the value of one byte are all OK).
    6. The program may write no additional bytes to output.
    7. Code is scored by the fewest total bytes in the source file. (Or, if we wanted to be more true to the original challenge, the score would be based on the lowest number of lines of code; I would impose an 80-character limit per line in that case, since otherwise you'd get a bunch of ties for 1 line.)
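    For reference, a sketch of one C solution that meets these rules: scanf's %x conversion already skips leading whitespace, and a field width of 2 caps each conversion at one byte's worth of digits.

        #include <stdio.h>

        int main(void)
        {
            unsigned b;
            while (scanf("%2x", &b) == 1)   /* reads two hex digits at a time, skipping whitespace */
                putchar(b);                 /* writes the raw octet */
            return 0;
        }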


  • LINQ to SQL with NServiceBus table lock issue

    - by IGoor
    I am building a system using NServiceBus, and my data layer is using LINQ to SQL. The system is made up of 2 services:

    1. Service1 receives messages from NSB. It queries Table1 in my database and inserts a record into Table1. If a certain condition is met, a new NSB message is sent to the 2nd service.
    2. Service2 updates records, also in Table1, when it receives messages from Service1, and it does some other non-database-related work. Service2 is a long-running process.

    The problem I am having is that the moment Service2 updates a record in Table1, the table is locked. The lock seems to stay in place until Service2 has completed all its processing, i.e. the lock is not released after my DataContext is disposed. This causes the query in Service1 to time out. Once Service2 completes processing, Service1 resumes processing again without problems. So, for example, the Service1 code may look like:

        int x = 0;
        using (DataContext db = new DataContext())
        {
            // this line will time out while Service2 is processing
            x = (from dp in db.Table1 select dp).Count();

            Table1 t = new Table1();
            t.Data = "test";
            db.Table1.InsertOnSubmit(t);
            db.SubmitChanges();
        }
        if (x % 50 == 0)
            CallService2();

    The code in Service2 may look like:

        using (DataContext db = new DataContext())
        {
            Table1 t = db.Table1.First(r => r.id == myId);  // fetch the single row to update
            t.Data = "updated";
            db.SubmitChanges();
        }
        // I would have expected the lock to have been released at this point,
        // but this is not the case.
        DoSomeLongRunningTasks(); // lock will be released once Service2 exits

    I don't understand why the lock is not released when the DataContext is disposed in Service2. To get around the problem I have been calling:

        db.ExecuteCommand("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED");

    and this works, but I am not happy using it. I want to solve this problem properly. Has anyone experienced this sort of problem before, and does anyone know how to solve it? Why is the lock not released after the DataContext has been disposed? Thanks in advance.

    P.S. Sorry for the extremely long post.
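    One plausible explanation (hedged, but it matches the symptoms): if Service2's handler runs inside NServiceBus's ambient transaction, which is the default with transactional MSMQ, the DataContext enlists in that transaction, so the update's lock survives Dispose and is only released when the whole handler commits. A sketch of isolating the update in its own transaction, at the cost of it no longer being atomic with the message receive:

        using System.Transactions;

        // Run the update in its own transaction instead of the handler's
        // ambient one, so the row lock is released as soon as we commit.
        using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
        using (DataContext db = new DataContext())
        {
            Table1 t = db.Table1.First(r => r.id == myId);
            t.Data = "updated";
            db.SubmitChanges();
            scope.Complete();   // commits here, not at the end of the handler
        }
        DoSomeLongRunningTasks(); // no locks held during this work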


  • translating specifications into query predicates

    - by Jeroen
    I'm trying to find a nice and elegant way to query database content based on DDD "specifications".

    In domain-driven design, a specification is used to check whether some object, also known as the candidate, complies with a (domain-specific) requirement. For example, the specification 'IsTaskDone' goes like:

        class IsTaskDone extends Specification<Task> {
            boolean isSatisfiedBy(Task candidate) {
                return candidate.isDone();
            }
        }

    The above specification can be used for many purposes, e.g. it can be used to validate whether a task has been completed, or to filter all completed tasks from a collection. However, I want to reuse this nice, domain-related specification to query the database.

    Of course, the easiest solution would be to retrieve all entities of our desired type from the database and filter that list in memory by looping and removing non-matching entities. But clearly that would not be optimal for performance, especially when the entity count in our DB increases.

    Proposal: my idea is to create a 'ConversionManager' that translates my specification into a persistence-technique-specific criteria; think of the JPA Predicate class. The service looks as follows:

        public interface JpaSpecificationConversionManager {

            <T> Predicate getPredicateFor(Specification<T> specification,
                                          Root<T> root,
                                          CriteriaQuery<?> cq,
                                          CriteriaBuilder cb);

            JpaSpecificationConversionManager registerConverter(JpaSpecificationConverter<?, ?> converter);

        }

    Using our manager, users can register their own conversion logic, isolating the domain-related specification from persistence-specific logic. To minimize the configuration of our manager, I want to use annotations on my converter classes, allowing the manager to register those converters automatically. JPA repository implementations could then use my manager, via dependency injection, to offer a find-by-specification method. Providing find-by-specification should drastically reduce the number of methods on our repository interface.

    In theory this all sounds decent, but I feel like I'm missing something critical. What do you think of my proposal? Does it comply with the DDD way of thinking? Or is there already a framework that does something identical to what I just described?
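    To make the proposal concrete, a sketch of one converter (the converter interface, the registration annotation, and the "done" attribute name are hypothetical, since only the manager interface was posted):

        @SpecificationConverter   // hypothetical annotation picked up by the manager
        public class IsTaskDoneConverter
                implements JpaSpecificationConverter<Task, IsTaskDone> {

            @Override
            public Predicate convert(IsTaskDone spec,
                                     Root<Task> root,
                                     CriteriaQuery<?> cq,
                                     CriteriaBuilder cb) {
                // Mirrors candidate.isDone() as a JPA criteria restriction
                return cb.isTrue(root.<Boolean>get("done"));
            }
        }

    For what it's worth, Spring Data JPA's org.springframework.data.jpa.domain.Specification interface (with its toPredicate method) follows essentially this shape.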


  • Magento set Store Id - customer login - but still logged out

    - by user3564050
    I've got an overridden AccountController in which I set the current store to one other than the currently running store. Example: the customer is on the default website and default store, goes to the login page, and clicks login; my loginPostAction sets the store to ID "2" (on website 2) and then executes the parent loginPostAction. The store is set, of course, but after the login and the redirect to home, the customer is not logged in anymore...

    The flow is: customer sends login data → my AccountController sets the store → the original AccountController logs in without errors (because the $session customer is set) → redirect to home → the customer is not logged in anymore...

    I set the store with Mage::app()->setCurrentStore($id). And in index.php I've got extra code where the store is set to the right ID (2) too, and this works... but the customer is not logged in anymore. Is that an issue with the session because of the different websites? I don't want to share customers globally; each website has its own customers, but every customer has to be able to log in on the default store.

    AccountController.php, overridden:

        public $Website_Ids = array(
            array("code" => "gerstore", "id" => "3", "website" => "ger"),
            array("code" => "ukstore",  "id" => "2", "website" => "uk"),
            array("code" => "esstore",  "id" => "4", "website" => "es"),
            array("code" => "frstore",  "id" => "5", "website" => "fr")
        );

        public function loginPostAction()
        {
            $login = $this->getRequest()->get('login');
            if (isset($login['username'])) {
                $found = null;
                foreach ($this->Website_Ids as $WebsiteId) {
                    $customer = Mage::getModel('customer/customer');
                    $customer->setWebsiteId($WebsiteId['id']);
                    $customer->loadByEmail($login['username']);
                    if (count($customer->getData()) > 0) {
                        $found = $WebsiteId;
                    }
                }
                if ($found != null && Mage::app()->getStore()->getId() != $found['id']) {
                    /* found, so set the current store to that ID */
                    Mage::app()->setCurrentStore($found['id']);
                    $_SESSION['current_store_b2b'] = $found;
                }
                /* not found: doesn't matter, the Mage login exception handling takes over */
            }
            parent::loginPostAction();
        }

    index.php:

        session_start();
        $session = isset($_SESSION['current_store_b2b']) ? $_SESSION['current_store_b2b'] : null;
        if (!empty($session)) {
            Mage::app()->setCurrentStore($session['id']);
            Mage::run($session['code'], 'store');
        } else {
            /* Store or website code */
            $mageRunCode = isset($_SERVER['MAGE_RUN_CODE']) ? $_SERVER['MAGE_RUN_CODE'] : '';
            /* Run store or run website */
            $mageRunType = isset($_SERVER['MAGE_RUN_TYPE']) ? $_SERVER['MAGE_RUN_TYPE'] : 'store';
            Mage::run($mageRunCode, $mageRunType);
        }

    What's the matter? Thanks.


  • Child data grid is not showing the values in the page

    - by prince23
    Hi, the child data grid is not showing the values on the page. I am binding the child data grid with a list:

        <sdk:DataGrid MinHeight="100" x:Name="contacts" Margin="51,21,88,98"
                      RowDetailsVisibilityChanged="contacts_RowDetailsVisibilityChanged"
                      LoadingRowDetails="contacts_LoadingRowDetails"
                      RowDetailsVisibilityMode="VisibleWhenSelected"
                      MouseLeftButtonUp="contacts_MouseLeftButtonUp"
                      MouseLeftButtonDown="contacts_MouseLeftButtonDown">
            <sdk:DataGrid.Columns>
                <sdk:DataGridTextColumn Binding="{Binding EmployeeID}" Header="ID" />
                <sdk:DataGridTextColumn Binding="{Binding EmployeeFName}" Header="Fname" />
                <sdk:DataGridTextColumn Binding="{Binding EmployeeLName}" Header="LName" />
                <sdk:DataGridTextColumn Binding="{Binding EmployeeMailID}" Header="MailID" />
            </sdk:DataGrid.Columns>
            <sdk:DataGrid.RowDetailsTemplate>
                <DataTemplate>
                    <sdk:DataGrid x:Name="dgrdRowDetail" Width="200" AutoGenerateColumns="False"
                                  HorizontalAlignment="Center" IsReadOnly="True">
                        <sdk:DataGrid.Columns>
                            <sdk:DataGridTextColumn Header="CompanyName" Binding="{Binding CompanyName}" />
                            <sdk:DataGridTextColumn Header="EmpID" Binding="{Binding EmpID}" />
                        </sdk:DataGrid.Columns>
                    </sdk:DataGrid>
                </DataTemplate>
            </sdk:DataGrid.RowDetailsTemplate>
        </sdk:DataGrid>

    I have two grids, "contacts" and "dgrdRowDetail". Globally I have defined a variable like this:

        DataGrid dgrdRowDetail;

    In the contacts_RowDetailsVisibilityChanged event I have this code:

        if (e.Row.DataContext != null)
        {
            string strEmpID = ((SilverlightApplication1.DBServiceEMP.Employee)((e.DetailsElement).DataContext)).EmployeeID;

            // Find the child DataGrid control inside the contacts DataGrid;
            // dgrdRowDetail will then be bound with the new values.
            dgrdRowDetail = (DataGrid)e.DetailsElement.FindName("dgrdRowDetail");

            if (strEmpID != null)
            {
                int EmpID = Convert.ToInt32(strEmpID.ToString());
                DBServiceEmp.GetEmployeeIDCompleted += new EventHandler<GetEmployeeIDCompletedEventArgs>(DBServiceEmp_GetEmployeeIDCompleted);
                DBServiceEmp.GetEmployeeIDAsync(EmpID);
            }
        }

    This is my completion handler:

        void DBServiceEmp_GetEmployeeIDCompleted(object sender, GetEmployeeIDCompletedEventArgs e)
        {
            List<Employee> rows = new List<Employee>();
            for (int i = 0; i < e.Result.Count; i++)
            {
                rows.Add(e.Result[i]);
            }
            // Bind the child DataGrid to the new data source.
            dgrdRowDetail.ItemsSource = rows;
        }

    Whatever rows I bind to dgrdRowDetail are not shown on the page. If I inspect the rows, I can see the values are there, but the child grid does not reflect them. Please help me out, I am stuck. Thanks in advance, prince


  • How do I scroll through 100 photos in UIScrollView on the iPhone

    - by mwangima
    I'm trying to scroll through images being downloaded from a user's online album (like in the Facebook iPhone app). Since I can't load all the images into memory, I'm loading 3 at a time (prev, current & next), then removing image (prev - 1) and image (next + 1) from the UIScrollView's subviews. My logic works fine in the simulator but fails on the device with this error:

        [CALayer retain]: message sent to deallocated instance

    What could be the problem? Below is my code sample:

        - (void)scrollViewDidEndDecelerating:(UIScrollView *)_scrollView
        {
            pageControlIsChangingPage = NO;
            CGFloat pageWidth = _scrollView.frame.size.width;
            int page = floor((_scrollView.contentOffset.x - pageWidth / 2) / pageWidth) + 1;

            if (page > 1 && page <= (pageControl.numberOfPages - 3)) {
                [self removeThisView:(page - 2)];
                [self removeThisView:(page + 2)];
            }
            if (page > 0) {
                NSLog(@"<< PREVIOUS");
                [self showPhoto:(page - 1)];
            }
            [self showPhoto:page];
            if (page < (pageControl.numberOfPages - 1)) {
                // NSLog(@"NEXT");
                [self showPhoto:page + 1];
                NSLog(@"FINISHED LOADING NEXT");
            }
        }

        - (void)showPhoto:(NSInteger)index
        {
            CGFloat cx = scrollView.frame.size.width * index;
            CGFloat cy = 40;
            CGRect rect = CGRectMake(0, 0, 320, 480);
            rect.origin.x = cx;
            rect.origin.y = cy;

            AsyncImageView *asyncImage = [[AsyncImageView alloc] initWithFrame:rect];
            asyncImage.tag = 999;
            NSURL *url = [NSURL URLWithString:[pics objectAtIndex:index]];
            [asyncImage loadImageFromURL:url
                                   place:CGRectMake(150, 190, 30, 30)
                                  member:memberid
                                 isSlide:@"Yes"
                                   picId:[picsIds objectAtIndex:index]];
            [scrollView addSubview:asyncImage];
            [asyncImage release];
        }

        - (void)removeThisView:(NSInteger)index
        {
            if (index < [[scrollView subviews] count] && [[scrollView subviews] objectAtIndex:index] != nil) {
                if ([[[scrollView subviews] objectAtIndex:index] isKindOfClass:[AsyncImageView class]] ||
                    [[[scrollView subviews] objectAtIndex:index] isKindOfClass:[UIImageView class]]) {
                    [[[scrollView subviews] objectAtIndex:index] removeFromSuperview];
                }
            }
        }

    For the record, it works OK in the simulator, but not on the iPhone device itself. Any ideas will be appreciated. Cheers, fred.


  • CoreData : App crashes when deleting last instance created

    - by Leo
    Hello, I have a 2-tab application. In the first tab, I'm creating objects of the "Sample" and "SampleList" entities. Each SampleList contains an ID and a set of samples. Each Sample contains a date and a temperature property. In the second tab, I'm displaying my data in a table view. I implemented the
    - (void)tableView:(UITableView *)tableView commitEditingStyle:(UITableViewCellEditingStyle)editingStyle forRowAtIndexPath:(NSIndexPath *)indexPath
    method in order to delete SampleLists. In my xcdatamodel, the delete rule for my relationship between SampleList and Sample is Cascade.

    My problem is that when I try to delete a SampleList I just created, the app crashes and I receive an EXC_BAD_ACCESS signal. If I restart it, I am able to delete "old" SampleLists without any problems.

    Earlier, I had the following problem: I couldn't display the SampleLists I had created since launching the app, because it crashed then too, and I also received the EXC_BAD_ACCESS signal. Actually, it seemed that the date of the last sample created in the set was nil. If I do not release the NSDate I'm using to set the sample's date, I don't have this problem anymore...

    If anyone could help me find out what could be causing my troubles, it would be great!! Here is the method I'm using to create new instances:

        SampleList *newSampleList = (SampleList *)[NSEntityDescription insertNewObjectForEntityForName:@"SampleList"
                                                                                inManagedObjectContext:managedObjectContext];
        [newSampleList setPatchID:patchID];

        NSMutableSet *newSampleSet = [[NSMutableSet alloc] init];
        NSCalendar *gregorian = [[NSCalendar alloc] initWithCalendarIdentifier:NSGregorianCalendar];

        for (int i = 0; i < [byteArray count]; i = i + 4, sampleCount++) {
            NSDateComponents *comps = [[NSDateComponents alloc] init];
            [comps setYear:year];
            [comps setMonth:month];
            [comps setDay:day];
            [comps setHour:hours];
            [comps setMinute:minutes];

            // Note: dateFromComponents: returns an object we do not own (it is
            // autoreleased), so releasing sampleDate below would over-release it
            // and leave the managed object holding a dangling reference.
            NSDate *sampleDate = [gregorian dateFromComponents:comps];

            Sample *newSample = (Sample *)[NSEntityDescription insertNewObjectForEntityForName:@"Sample"
                                                                        inManagedObjectContext:managedObjectContext];
            [newSample setSampleDate:sampleDate];
            [newSample setSampleTemperature:[NSNumber numberWithInt:temperature]];
            [newSampleSet addObject:newSample];
            [comps release];
            // [sampleDate release];   // over-release: see the comment above
        }

        [newSampleList setSampleSet:newSampleSet];
        // [newSampleSet release];

        NSError *error;
        if (![managedObjectContext save:&error]) {
            NSLog(@"Could not save the context!!");
        }
        [gregorian release];


  • Reducer getting fewer records than expected

    - by sathishs
    We have a scenario of generating a unique key for every single row in a file. We have a timestamp column, but in a few scenarios there are multiple rows with the same timestamp. We decided the unique values would be the timestamp appended with its respective count, as in the program below. The mapper just emits the timestamp as the key and the entire row as its value, and in the reducer the key is generated.

    The problem is that the map outputs about 236 rows, of which only 230 records are fed as input to the reducer, which outputs the same 230 records.

        public class UniqueKeyGenerator extends Configured implements Tool {

            private static final String SEPERATOR = "\t";
            private static final int TIME_INDEX = 10;
            private static final String COUNT_FORMAT_DIGITS = "%010d";

            public static class Map extends Mapper<LongWritable, Text, Text, Text> {
                @Override
                protected void map(LongWritable key, Text row, Context context)
                        throws IOException, InterruptedException {
                    String input = row.toString();
                    String[] vals = input.split(SEPERATOR);
                    if (vals != null && vals.length >= TIME_INDEX) {
                        context.write(new Text(vals[TIME_INDEX - 1]), row);
                    }
                }
            }

            public static class Reduce extends Reducer<Text, Text, NullWritable, Text> {
                @Override
                protected void reduce(Text eventTimeKey, Iterable<Text> timeGroupedRows, Context context)
                        throws IOException, InterruptedException {
                    int cnt = 1;
                    final String eventTime = eventTimeKey.toString();
                    for (Text val : timeGroupedRows) {
                        final String res = SEPERATOR.concat(getDate(Long.valueOf(eventTime))
                                .concat(String.format(COUNT_FORMAT_DIGITS, cnt)));
                        val.append(res.getBytes(), 0, res.length());
                        cnt++;
                        context.write(NullWritable.get(), val);
                    }
                }
            }

            public static String getDate(long time) {
                SimpleDateFormat utcSdf = new SimpleDateFormat("yyyyMMddhhmmss");
                utcSdf.setTimeZone(TimeZone.getTimeZone("America/Los_Angeles"));
                return utcSdf.format(new Date(time));
            }

            public int run(String[] args) throws Exception {
                conf(args);
                return 0;
            }

            public static void main(String[] args) throws Exception {
                conf(args);
            }

            private static void conf(String[] args)
                    throws IOException, InterruptedException, ClassNotFoundException {
                Configuration conf = new Configuration();
                Job job = new Job(conf, "uniquekeygen");
                job.setJarByClass(UniqueKeyGenerator.class);
                job.setOutputKeyClass(Text.class);
                job.setOutputValueClass(Text.class);
                job.setMapperClass(Map.class);
                job.setReducerClass(Reduce.class);
                job.setInputFormatClass(TextInputFormat.class);
                job.setOutputFormatClass(TextOutputFormat.class);
                // job.setNumReduceTasks(400);
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1]));
                job.waitForCompletion(true);
            }
        }

    The behavior is consistent for higher numbers of lines, and the difference is as huge as 208,969 records for an input of 20,855,982 lines. What might be the reason for the reduced input to the reducer?


  • Matching array elements to get an array element at another index

    - by bccarlso
    I have a PHP array that I'm using to generate an HTML form. The PHP array is this:

        <?php
        $vdb = array(
            array("Alabama", 275),
            array("Alaska", 197),
            array("Arizona", 3322));
        ?>

    The PHP to generate the HTML form is below. I need the value to be the name of the state, because there is some AJAX I'm using to display which states a user has chosen.

        <?php
        echo "<table border='1'><thead><tr><th></th><th>State</th><th>Contacts</th><th>Email</th></tr></thead>";
        for ($row = 0; $row < 42; $row++) {
            echo "<tr><td class='input_button'><input type='checkbox' name='vdb[]' value='" . $vdb[$row][0] . "' title='" . $vdb[$row][1] . "' /></td>";
            echo "<td>" . $vdb[$row][0] . "</td>";
            echo "<td>" . $vdb[$row][1] . "</td>";
        }
        echo "</table>";
        ?>

    What I'm trying to do, on submission of the form with the states the user selected, is loop through the PHP array and total the numbers from the selected states. So if I checked Alabama and Alaska, I'd want to add 275 + 197. This is what I thought would work, but it's not:

        <?php
        $vendors = array();
        if (isset($_POST["vdb"])) {
            $vendors = $_POST["vdb"];
        }
        $ven_i = 0;
        $ven_j = 0;
        $ven_total = 0;
        foreach ($vendors as $value) {
            foreach ($vdb as $vdb_value) {
                if ($vendors[$ven_i] == $vdb[$ven_j][0]) {
                    $ven_total += $vdb[$ven_j][1];
                }
                $ven_j++;
            }
            $ven_i++;
        }
        ?>

    $ven_total should then be the total I'm looking for. However, $ven_total just ends up being the first checkbox selected, and it ignores the rest. I am doing this correctly with the AJAX, displaying the total on the front end, but I don't know how to pass that on to the form submission. I'd rather not use GET and URL variables, because a user could type something into the URL and modify the count. Any idea what I'm doing wrong, or a better way to approach this that I would be able to understand? (Very much a novice programmer.)
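    The likely culprit in the loop above is that $ven_j is never reset to 0 between passes of the outer loop, so after the first vendor the inner comparisons index past the end of $vdb. A sketch that sidesteps the manual counters by using the foreach values directly:

        <?php
        $ven_total = 0;
        $vendors = isset($_POST["vdb"]) ? $_POST["vdb"] : array();

        foreach ($vendors as $state) {
            foreach ($vdb as $vdb_value) {
                // $vdb_value is one array("State", count) pair
                if ($state == $vdb_value[0]) {
                    $ven_total += $vdb_value[1];
                }
            }
        }
        echo $ven_total;  // 472 for Alabama + Alaska
        ?>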


  • iPhone app works hundreds of times, then crashes from memory error on startup, then never works until restart

    - by peter
    I have a Cocos2d/OpenGL iPhone game. It's a universal app and I'm dealing with an occasional but nasty error on the iPad. We are loading a lot of textures up front (3 2048x2048 textures). I'm working on reducing this up-front load, but what worries me is that I really don't understand the root cause of this crash that permanently breaks the app. This is the deal:

    1. The app works fine for hundreds of plays on the iPad.
    2. Eventually (I'm guessing because other programs use up some memory and don't let go, or whatever) the app starts crashing on startup. It just closes again in the middle of loading.
    3. The app will now never work again on that iPad, closing immediately every time, until the iPad is restarted.

    Obviously my app is demanding too much memory up front to work reliably every time; I get that. What I don't get is why, when it fails once, it fails forever until the iPad is restarted. Can anyone explain what is going on here?

    EDIT: Forgot to add that the Organizer crash logs just say low memory, like this, every time (I changed my app name to MyAppName below). Again, I know it's low memory, but why does it stay low memory until restart?

        Incident Identifier: E7A2507C-3FB1-4E3B-B315-09F094236541
        CrashReporter Key:   0fda9d667f2c6073f20a76809aa25438b6854d15
        OS Version:          iPhone OS 3.2 (7B367)
        Date:                2010-04-30 16:59:44 -0400

        Free pages:        437
        Wired pages:       17228
        Purgeable pages:   0
        Largest process:   MyAppName

        Processes
        Name              UUID                                 Count resident pages
        MyAppName         <6307ce41802850944baa78d29224fa7f>   22385 (jettisoned) (active)
        mediaserverd      <ea8bac28b06fe3980fdd44b5caceb563>     242
        DTMobileIS        <a0f651e43881e66f50f8a95abea72921>    5826
        notification_pro  <4c9a7ee0a5bbe160465991228f2d2f2e>      67
        syslog_relay      <4ceaed776d2df957fa130712f4ef21d0>      66
        notification_pro  <4c9a7ee0a5bbe160465991228f2d2f2e>      67
        notification_pro  <4c9a7ee0a5bbe160465991228f2d2f2e>      67
        afcd              <4f3c9566e33b4463f05603d990584e5d>      72
        ptpd              <83de0f774bd6553d513ae9e19b0f9b56>     181
        syslogd           <66247e305d5c0bf6f1ce1cc950653263>      81
        lsd               <a4d852c1c8da2b3d231bdc90887b52ba>     130
        iapd              <a8534cbde4b90ad5915dd26ab03ff3e3>     204
        notifyd           <5e9d5bee7c3eae1c8b494c79eb11406e>      71
        BTServer          <64e4a6ea6b1240db2331e05a29caa862>     108
        CommCenter        <97bf297944ac4bde19bcee96dd23bd5f>     181
        SpringBoard       <c7a5904c12db7b14334a4edaa4cabaa9>    5339 (active)
        configd           <aca9fa3380322669164fd6b1a3864300>     373
        fairplayd.K48     <2d997ffca1a568f9c5400ac32d8f0782>      84
        locationd         <dd1ea88105c62173908ce767db5c4d37>     599
        mDNSResponder     <820560222d47a1f2a0dce98a7f8a9721>     108
        lockdownd         <497fd54c79a680bf29f5d9320f514613>     303
        MobileStorageMou  <c277b79c2157c4dc5cfc5c3ca35bd5f2>      69
        launchd           <66972eee4d865c4383b33d985d22994b>      98
        **End**


  • Updating DetailViewController from RootController

    - by Stefano Salmaso
    I'm trying to create an iPad application with a user interface similar to Apple's Mail application, i.e.:

    1. A root view controller (a table view) on the left-hand side of the split view for navigation, with a multi-level view hierarchy. When a table cell is selected, a new table view is pushed on the left-hand side.
    2. The new view on the left side can update the detail view.

    I can accomplish both tasks, BUT NOT TOGETHER. I mean, I can make a multi-level table view in the RootController (HERE you can find the working source code). Or I can make a single-level table view in the RootController which can update the detailViewController (here is the source code: http://www.megaupload.com/?d=D6L0463G). Can anyone tell me how to make a multi-level table in the RootController which can update a detailViewController? There is more source code at the link, but below is the method in which I presume I have to declare a new detailViewController (which has to be put in the UISplitViewController):

        - (void)tableView:(UITableView *)TableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
        {
            NSDictionary *dictionary = [self.tableDataSource objectAtIndex:indexPath.row];

            // Get the children of the present item.
            NSArray *Children = [dictionary objectForKey:@"Children"];

            if ([Children count] == 0) {
                /* Create and configure a new detail view controller appropriate for the selection. */
                NSUInteger row = indexPath.row;
                UIViewController <SubstitutableDetailViewController> *detailViewController = nil;

                if (row == 0) {
                    FirstDetailViewController *newDetailViewController = [[FirstDetailViewController alloc] initWithNibName:@"FirstDetailView" bundle:nil];
                    detailViewController = newDetailViewController;
                }
                if (row == 1) {
                    SecondDetailViewController *newDetailViewController = [[SecondDetailViewController alloc] initWithNibName:@"SecondDetailView" bundle:nil];
                    detailViewController = newDetailViewController;
                }

                // Update the split view controller's view controllers array.
                NSArray *viewControllers = [[NSArray alloc] initWithObjects:self.navigationController, detailViewController, nil];
                splitViewController.viewControllers = viewControllers; // nothing happens.....
                [viewControllers release];
            } else {
                // Prepare the table view.
                RootViewController *rvController = [[RootViewController alloc] initWithNibName:@"RootViewController" bundle:[NSBundle mainBundle]];

                // Increment the current level.
                rvController.current_level += 1;

                // Set the title.
                rvController.current_title = [dictionary objectForKey:@"Title"];

                // Push the new table view on the stack.
                [self.navigationController pushViewController:rvController animated:YES];

                rvController.tableDataSource = Children;
                // Without this instruction, items won't be loaded at the second level of the table.
                [rvController.tableView reloadData];
                [rvController release];
            }
        }


  • Neo4j 1.9.4 (REST Server,CYPHER) performance issue

    - by user2968943
    I have Neo4j 1.9.4 installed on a 24-core, 24 GB RAM (CentOS) machine, and for most queries CPU usage spikes to 200% with only a few concurrent requests.

    Domain: some sort of social application, where a few types of nodes (profiles) have 3-30 text/array properties, and there are 36 relationship types with at least 3 properties each. Most nodes currently have ~300-500 relationships.

    Current data set footprint (from the console):

        LogicalLogSize=4294907 (32MB)
        ArrayStoreSize=1675520 (12MB)
        NodeStoreSize=1342170 (10MB)
        PropertyStoreSize=1739548 (13MB)
        RelationshipStoreSize=6395202 (48MB)
        StringStoreSize=1478400 (11MB)

    which is IMHO really small. Most queries look like this one (with more or fewer WITH .. MATCH .. statements, and a few queries with variable-length relations, but those are often fast):

        START targetUser=node({id}), currentUser=node({current})
        MATCH targetUser-[contact:InContactsRelation]->n,
              n-[:InLocationRelation]->l,
              n-[:InCategoryRelation]->c
        WITH currentUser, targetUser, n, l, c, contact.fav is not null as inFavorites
        MATCH n<-[followers?:InContactsRelation]-()
        WITH currentUser, targetUser, n, l, c, inFavorites, COUNT(followers) as numFollowers
        RETURN id(n) as id, n.name? as name, n.title? as title, n._class as _class,
               n.avatar? as avatar, n.avatar_type? as avatar_type,
               l.name as location__name, c.name as category__name,
               true as isInContacts, inFavorites as isInFavorites, numFollowers

    It runs in ~1 s-3 s (for the first run) and ~70 ms-1 s for consecutive runs (it depends on the query), and about 5-10 queries run for each impression.

    Another interesting behavior: when I run a query from the (Neo4j) console on my local machine many consecutive times (just pressing Ctrl+Enter for a few seconds), it has an almost constant execution time, but when I do it on the server it gets exponentially slower, and I guess that is somehow related to my problem.

    Problem: so my problem is that Neo4j is very CPU-greedy (for a 24-core machine that may not be an issue, but it's obviously overkill for a small project). The first time, I used an AWS EC2 m1.large instance, but overall performance was bad; during testing, CPU was always over 100%.

    Some relevant parts of the configuration:

        neostore.nodestore.db.mapped_memory=1280M
        wrapper.java.maxmemory=8192

    Note: I already tried a configuration where all memory-related parameters were HIGH and it didn't work (no change at all).

    Question: where to dig? Configuration? Schema? Queries? What am I doing wrong? If you need more info (logs, configs), just ask ;)


  • Alert on gridview edit based on permission

    - by Vicky
    I have a GridView with an edit option at the start of each row. I also maintain a separate table called Permission where I keep user permissions. I have three different types of permissions: Admin, Leads, and Programmers. All three will have access to the GridView. Except for Admin: if anyone else tries to edit the GridView, on clicking the edit option I need to give an alert like "This row has important validation; make sure you make proper changes."

    When they edit, the action happens on a table called Application. The table has a column called Comments. Also, the alert should happen only when they try to edit rows where the Comments column has one of these values:

    1. ManLog datas
    2. Funding Approved
    3. Exported Applications

    My try so far:

        public bool IsApplicationUser(string userName)
        {
            return CheckUser(userName);
        }

        public static bool CheckUser(string userName)
        {
            string CS = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString();
            DataTable dt = new DataTable();
            using (SqlConnection connection = new SqlConnection(CS))
            {
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                string strquery = "select * from Permissions where AppCode='Nest' and UserID = '" + userName + "'";
                SqlCommand cmd = new SqlCommand(strquery, connection);
                SqlDataAdapter da = new SqlDataAdapter(cmd);
                da.Fill(dt);
            }
            if (dt.Rows.Count >= 1)
                return true;
            else
                return true;   // note: as written, both branches return true
        }

        protected void Details_RowCommand(object sender, GridViewCommandEventArgs e)
        {
            string currentUser = HttpContext.Current.Request.LogonUserIdentity.Name;
            string str = ConfigurationManager.ConnectionStrings["ConnectionString"].ToString();
            string[] words = currentUser.Split('\\');
            currentUser = words[1];
            bool appuser = IsApplicationUser(currentUser);
            if (appuser)
            {
                DataSet ds = new DataSet();
                using (SqlConnection connection = new SqlConnection(str))
                {
                    SqlCommand command = new SqlCommand();
                    command.Connection = connection;
                    string strquery = "select Role_Cd from User_Role where AppCode='PM' and UserID = '" + currentUser + "'";
                    SqlCommand cmd = new SqlCommand(strquery, connection);
                    SqlDataAdapter da = new SqlDataAdapter(cmd);
                    da.Fill(ds);
                }
                if (e.CommandName.Equals("Edit") && ds.Tables[0].Rows[0]["Role_Cd"].ToString().Trim() != "ADMIN")
                {
                    int index = Convert.ToInt32(e.CommandArgument);
                    GridView gvCurrentGrid = (GridView)sender;
                    GridViewRow row = gvCurrentGrid.Rows[index];
                    string strID = ((Label)row.FindControl("lblID")).Text;
                    string strAppName = ((Label)row.FindControl("lblAppName")).Text;
                    Response.Redirect("AddApplication.aspx?ID=" + strID + "&AppName=" + strAppName + "&Edit=True");
                }
            }
        }

    Kindly let me know if I need to add anything. Thanks for any suggestions.


  • Normalizing Item Names & Synonyms

    - by RabidFire
    Consider an e-commerce application with multiple stores. Each store owner can edit the item catalog of his store. My current database schema is as follows:

        item_names:    id | name | description | picture | common (BOOL)
        items:         id | item_name_id | picture | price | description
        item_synonyms: id | item_name_id | name | error (BOOL)

    Notes: error indicates a wrong spelling (e.g. "Ericson"). The description and picture columns of the item_names table are "globals" that can optionally be overridden by the "local" description and picture fields of the items table (in case the store owner wants to supply a different picture for an item). common helps separate unique item names ("Jimmy Joe's Cheese Pizza" from "Cheese Pizza").

    I think the bright side of this schema is:

    Optimized searching & handling synonyms: I can query the item_names & item_synonyms tables using name LIKE %QUERY% and obtain the list of item_name_ids that need to be joined with the items table; see the sketch after this list. (Examples of synonyms: "Sony Ericsson", "Sony Ericson", "X10", "X 10")

    Autocompletion: again, a simple query against the item_names table. I can avoid the usage of DISTINCT, and it minimizes the number of variations ("Sony Ericsson Xperia™ X10", "Sony Ericsson - Xperia X10", "Xperia X10, Sony Ericsson").

    The downside would be:

    Overhead: when inserting an item, I query item_names to see if the name already exists; if not, I create a new entry. When deleting an item, I count the number of entries with the same name; if this is the only item with that name, I delete the entry from the item_names table (just to keep things clean; this accounts for possible erroneous submissions). Updating is the combination of both.

    Weird item names: store owners sometimes use sentences like "Harry Potter 1, 2 Books + CDs + Magic Hat". There's something off about having so much overhead to accommodate cases like this. This would perhaps be the prime reason I'm tempted to go for a schema like this:

        items: id | name | picture | price | description

    (... with item_names and item_synonyms as utility tables that I could query)

    Is there a better schema you would suggest? Should item names be normalized for autocomplete? Is this probably what Facebook does for "School" and "City" entries? Is the first schema or the second better/optimal for search?

    Thanks in advance!

    References: (1) Is normalizing a person's name going too far?, (2) Avoiding DISTINCT
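    A sketch of the search lookup described above (hypothetical SQL; the wildcard LIKE is exactly as in the post): the name and its synonyms resolve to item_name_ids, which then select the actual items:

        SELECT i.*
        FROM items i
        WHERE i.item_name_id IN (
            SELECT id           FROM item_names    WHERE name LIKE '%QUERY%'
            UNION
            SELECT item_name_id FROM item_synonyms WHERE name LIKE '%QUERY%'
        );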


  • How to Convert arrays or SimpleXML-Objects into an XML-String

    - by streetparade
    I want to create XML from a given string. I have a function, but I didn't write it and it seems a bit cryptic too. Can someone please review it and give me some ideas on how it could be written more clearly for everybody?

        /**
         * Converts arrays or SimpleXML objects into an XML string
         * @params mixed   Accepts an array or XML string with data to post
         * @params integer DO NOT PROVIDE. Internal usage, for recursion only
         */
        private function mixedDataToXML($data, $level = 1)
        {
            if (!$data) {
                return FALSE;
            }
            if (is_array($data)) {
                $xml = '';
                if ($level == 1) {
                    $xml .= '<?xml version="1.0" encoding="ISO-8859-1"?>' . "\n";
                }
                foreach ($data as $key => $value) {
                    $key = strtolower($key);
                    if (is_array($value)) {
                        $multi_tags = false;
                        foreach ($value as $key2 => $value2) {
                            if (is_array($value2)) {
                                $xml .= str_repeat("\t", $level) . "<$key>\n";
                                $xml .= $this->mixedDataToXML($value2, $level + 1);
                                $xml .= str_repeat("\t", $level) . "</$key>\n";
                                $multi_tags = true;
                            } else {
                                if (trim($value2) != '') {
                                    if (htmlspecialchars($value2) != $value2) {
                                        $xml .= str_repeat("\t", $level) . "<$key><![CDATA[$value2]]></$key>\n";
                                    } else {
                                        $xml .= str_repeat("\t", $level) . "<$key>$value2</$key>\n";
                                    }
                                }
                                $multi_tags = true;
                            }
                        }
                        if (!$multi_tags and count($value) > 0) {
                            $xml .= str_repeat("\t", $level) . "<$key>\n";
                            $xml .= $this->mixedDataToXML($value, $level + 1);
                            $xml .= str_repeat("\t", $level) . "</$key>\n";
                        }
                    } else {
                        if (trim($value) != '') {
                            if (htmlspecialchars($value) != $value) {
                                $xml .= str_repeat("\t", $level) . "<$key><![CDATA[$value]]></$key>\n";
                            } else {
                                $xml .= str_repeat("\t", $level) . "<$key>$value</$key>\n";
                            }
                        }
                    }
                }
                return $xml;
            } else {
                return (string)$data;
            }
        }
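    To make the behavior concrete, a small hypothetical call (from inside the class, since the method is private): a key mapping to a plain array emits one element per value, each named after the key, and values containing markup-special characters are wrapped in CDATA:

        $data = array(
            'tag' => array('plain text', 'tom & jerry'),  // second value needs escaping
        );
        echo $this->mixedDataToXML($data);

        // <?xml version="1.0" encoding="ISO-8859-1"?>
        //     <tag>plain text</tag>
        //     <tag><![CDATA[tom & jerry]]></tag>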


  • Embedded "Smart" character LCD driver. Is this a good idea?

    - by chris12892
    I have an embedded project that I am working on, and I am currently coding the character-LCD driver. At the moment, the LCD driver only supports "dumb" writing. For example, let's say line 1 has some text on it, and I make a call to the function that writes to the line. The function will simply seek to the beginning of the line and write the text (plus enough whitespace to erase whatever was last written). This is well and good, but I get the feeling it is horribly inefficient sometimes, since some lines are simply: "Some-reading: some-value".

    Rather than "brute force" replacing the entire line, I wanted to develop some code that would figure out the best way to update the information on the LCD. (Just as background: it takes 2 bytes to seek to any character position; I can then begin writing the string.)

    My idea was to first have a loop. This loop would compare the input to the last write, and in doing so it would cover two things:

    A: Collect all the differences between the last write and the input. For every contiguous segment (be it the same or different), add two bytes to the byte count. This count is referenced in B to determine whether we are wasting serial bandwidth.

    B: The loop would determine whether this is really a smart thing to do. If we would end up using more bytes to update the line than to "brute force" it, then we should just return and let the brute-force method take over. We should exit the smart-write function as soon as this condition is met, to avoid wasting time.

    The next part of the function would take all the differences, seek to the required character on the LCD, and write them. Thus, if a particular line already has this on the LCD:

        "Current Temp: 80F"

    and we want to update it to:

        "Current Temp: 79F"

    the function will see that it takes less bandwidth to simply seek to the "8" and write "79". The "7" will cover the "8" and the "9" will cover the "0". That way, we don't waste time writing out the entire string.

    Does this seem like a practical idea?
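    A sketch of the comparison loop described above (lcd_seek() and lcd_write() are hypothetical primitives; SEEK_COST reflects the 2-byte seek mentioned in the post, and the early-exit check is folded into the cost comparison after the first pass):

        #include <string.h>

        #define LCD_COLS  16
        #define SEEK_COST 2   /* bytes needed to seek to any character position */

        extern void lcd_seek(int col);                  /* hypothetical */
        extern void lcd_write(const char *s, int len);  /* hypothetical */

        /* Rewrite only the segments of `next` that differ from `last`.
         * Falls back to a full rewrite if the diff would cost more bytes. */
        void lcd_update_line(const char *last, const char *next)
        {
            int cost = 0, i = 0;

            /* Pass 1: total the cost of all differing segments. */
            while (i < LCD_COLS) {
                if (last[i] != next[i]) {
                    int start = i;
                    while (i < LCD_COLS && last[i] != next[i])
                        i++;
                    cost += SEEK_COST + (i - start);  /* one seek plus the segment */
                } else {
                    i++;
                }
            }

            if (cost >= SEEK_COST + LCD_COLS) {       /* diff is no cheaper: brute force */
                lcd_seek(0);
                lcd_write(next, LCD_COLS);
                return;
            }

            /* Pass 2: seek to each differing segment and rewrite just it. */
            i = 0;
            while (i < LCD_COLS) {
                if (last[i] != next[i]) {
                    int start = i;
                    while (i < LCD_COLS && last[i] != next[i])
                        i++;
                    lcd_seek(start);
                    lcd_write(next + start, i - start);
                } else {
                    i++;
                }
            }
        }

    For the "80F" to "79F" example this issues one seek plus two characters (4 bytes) instead of a full-line rewrite.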


  • How to change quicksort to output elements in descending order?

    - by masato-san
    Hi, I wrote a quicksort algorithm; however, I would like to make a change somewhere so that this quicksort outputs elements in descending order. I searched and found that I can flip the comparison operators in partition() the other way around (like below).

        // This is a snippet from the partition() function
        while ($array[$l] < $pivot) {
            $l++;
        }
        while ($array[$r] > $pivot) {
            $r--;
        }

    But it is not working. If I quicksort the array below:

        $array = array(3, 9, 5, 7);

    it should be:

        $array = array(9, 7, 5, 3)

    But the actual output is:

        $array = array(3, 5, 7, 9)

    Below is my quicksort, which is trying to output elements in descending order. How should I change it to sort in descending order? If you need any clarification, please let me know. Thanks!

        $array = array(3, 9, 5, 7);
        $app = new QuicksortDescending();
        $app->quicksort($array, 0, count($array));
        print_r($array);

        class QuicksortDescending
        {
            public function partitionDesc(&$array, $left, $right)
            {
                $l = $left;
                $r = $right;
                $pivot = $array[($right + $left) / 2];
                while ($l <= $r) {
                    while ($array[$l] > $pivot) {
                        $l++;
                    }
                    while ($array[$r] < $pivot) {
                        $r--;
                    }
                    if ($l <= $r) { // if L and R haven't crossed
                        $this->swap($array, $l, $r);
                        $l++;
                        $j--;
                    }
                }
                return $l;
            }

            public function quicksortDesc(&$array, $left, $right)
            {
                $index = $this->partition($array, $left, $right);
                if ($left < $index - 1) { // more than 1 element to sort in the left subarray
                    $this->quicksortDesc($array, $left, $index - 1);
                }
                if ($index < $right) { // more than 1 element to sort in the right subarray
                    $this->quicksortDesc($array, $index, $right);
                }
            }
        }
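    As transcribed, a few things would keep this from ever sorting descending, independently of the comparison flip: the driver calls quicksort() and the class calls $this->partition(), while the methods are named quicksortDesc() and partitionDesc(); the inner decrement uses an undefined $j instead of $r; and the initial call passes count($array) where the last index, count($array) - 1, is expected. A corrected sketch with those fixed (plain functions for brevity):

        function partitionDesc(array &$a, $left, $right)
        {
            $l = $left;
            $r = $right;
            $pivot = $a[(int)(($left + $right) / 2)];
            while ($l <= $r) {
                while ($a[$l] > $pivot) $l++;   // flipped: larger values stay on the left
                while ($a[$r] < $pivot) $r--;
                if ($l <= $r) {
                    list($a[$l], $a[$r]) = array($a[$r], $a[$l]); // swap
                    $l++;
                    $r--;   // was "$j--" in the post
                }
            }
            return $l;
        }

        function quicksortDesc(array &$a, $left, $right)
        {
            $index = partitionDesc($a, $left, $right);
            if ($left < $index - 1) quicksortDesc($a, $left, $index - 1);
            if ($index < $right)    quicksortDesc($a, $index, $right);
        }

        $array = array(3, 9, 5, 7);
        quicksortDesc($array, 0, count($array) - 1); // last index, not count
        print_r($array); // Array ( [0] => 9 [1] => 7 [2] => 5 [3] => 3 )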


  • delete row from selected gridview and database

    - by user175084
    I am trying to delete a row from the GridView and the database. It should be deleted when a delete LinkButton is clicked in the GridView. I am getting the row index as follows:

        protected void LinkButton1_Click(object sender, EventArgs e)
        {
            LinkButton btn = (LinkButton)sender;
            GridViewRow row = (GridViewRow)btn.NamingContainer;
            if (row != null)
            {
                LinkButton LinkButton1 = (LinkButton)sender;

                // Get a reference to the row that holds the button
                GridViewRow gvr = (GridViewRow)LinkButton1.NamingContainer;

                // Get the row index from the row
                int rowIndex = gvr.RowIndex;
                string str = rowIndex.ToString();
                //string str = GridView1.DataKeys[row.RowIndex].Value.ToString();
                RemoveData(str); // call the delete method
            }
        }

    Now I want to delete it... but I am having problems with this code. I get the error "Must declare the scalar variable "@original_MachineGroupName"". Any suggestions?

        private void RemoveData(string item)
        {
            SqlConnection conn = new SqlConnection(@"Data Source=JAGMIT-PC\SQLEXPRESS; Initial Catalog=SumooHAgentDB;Integrated Security=True");
            string sql = "DELETE FROM [MachineGroups] WHERE [MachineGroupID] = @original_MachineGroupID";
            SqlCommand cmd = new SqlCommand(sql, conn);
            cmd.Parameters.AddWithValue("@original_MachineGroupID", item);
            conn.Open();
            cmd.ExecuteNonQuery();
            conn.Close();
        }

    The SqlDataSource markup:

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:SumooHAgentDBConnectionString %>"
            SelectCommand="SELECT MachineGroups.MachineGroupID, MachineGroups.MachineGroupName,
                MachineGroups.MachineGroupDesc, MachineGroups.TimeAdded, MachineGroups.CanBeDeleted,
                COUNT(Machines.MachineName) AS Expr1,
                DATENAME(month, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) + SPACE(1) +
                DATENAME(d, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) + ', ' +
                DATENAME(year, (MachineGroups.TimeAdded - 599266080000000000) / 864000000000) AS Expr2
                FROM MachineGroups FULL OUTER JOIN Machines
                ON Machines.MachineGroupID = MachineGroups.MachineGroupID
                GROUP BY MachineGroups.MachineGroupID, MachineGroups.MachineGroupName,
                MachineGroups.MachineGroupDesc, MachineGroups.TimeAdded, MachineGroups.CanBeDeleted"
            DeleteCommand="DELETE FROM [MachineGroups] WHERE [MachineGroupID] = @original_MachineGroupID">
            <DeleteParameters>
                <asp:Parameter Name="@original_MachineGroupID" Type="Int16" />
                <asp:Parameter Name="@original_MachineGroupName" Type="String" />
                <asp:Parameter Name="@original_MachineGroupDesc" Type="String" />
                <asp:Parameter Name="@original_CanBeDeleted" Type="Boolean" />
                <asp:Parameter Name="@original_TimeAdded" Type="Int64" />
            </DeleteParameters>
        </asp:SqlDataSource>

    I still get the error: Must declare the scalar variable "@original_MachineGroupID".
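    One thing worth checking (a hedged guess, but it matches the error): in a SqlDataSource, the Name of an <asp:Parameter> should not include the "@" prefix; the runtime adds it when binding, so Name="@original_MachineGroupID" produces a parameter that never matches the @original_MachineGroupID placeholder in the DeleteCommand. Parameters the command never references can also be dropped. A corrected sketch of the markup:

        <asp:SqlDataSource ID="SqlDataSource1" runat="server"
            ConnectionString="<%$ ConnectionStrings:SumooHAgentDBConnectionString %>"
            DeleteCommand="DELETE FROM [MachineGroups] WHERE [MachineGroupID] = @original_MachineGroupID">
            <DeleteParameters>
                <%-- no "@" in Name; it is added automatically when binding --%>
                <asp:Parameter Name="original_MachineGroupID" Type="Int16" />
            </DeleteParameters>
        </asp:SqlDataSource>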


  • Slow MySQL query....only sometimes

    - by Shane N
    I have a query that's used in a reporting system of ours that sometimes runs quicker than a second, and other times takes 1 to 10 minutes to run. Here's the entry from the slow query log:

        # Query_time: 543  Lock_time: 0  Rows_sent: 0  Rows_examined: 124948974
        use statsdb;
        SELECT count(distinct Visits.visitorid) as 'uniques'
        FROM Visits, Visitors
        WHERE Visits.visitorid = Visitors.visitorid
          and candidateid in (32)
          and visittime >= 1275721200 and visittime <= 1275807599
          and (omit = 0 or omit >= 1275807599)
          AND Visitors.segmentid = 9
          AND Visits.visitorid NOT IN
            (SELECT Visits.visitorid
             FROM Visits, Visitors
             WHERE Visits.visitorid = Visitors.visitorid
               and candidateid in (32)
               and visittime < 1275721200
               and (omit = 0 or omit >= 1275807599)
               AND Visitors.segmentid = 9);

    It's basically counting unique visitors, and it's doing that by counting the visitors for today and then subtracting those that have been here before. If you know of a better way to do this, let me know.

    I just don't understand why sometimes it can be so quick and other times takes so long, even with the exact same query under the same server load. Here's the EXPLAIN on this query. As you can see, it's using the indexes I've set up:

        id  select_type         table     type    possible_keys                  key                  key_len  ref                       rows   Extra
        1   PRIMARY             Visits    range   visittime_visitorid,visitorid  visittime_visitorid  4        NULL                      82500  Using where; Using index
        1   PRIMARY             Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where
        2   DEPENDENT SUBQUERY  Visits    ref     visittime_visitorid,visitorid  visitorid            8        func                      1      Using where
        2   DEPENDENT SUBQUERY  Visitors  eq_ref  PRIMARY,cand_visitor_omit      PRIMARY              8        statsdb.Visits.visitorid  1      Using where

    I tried to optimize the query a few weeks ago and came up with a variation that consistently took about 2 seconds, but in practice it ended up taking more time, since 90% of the time the old query returned much quicker. Two seconds per query is too long, because we call the query up to 50 times per page load, with different time periods.

    Could the quick behavior be due to the query being saved in the query cache? I tried running 'RESET QUERY CACHE' and 'FLUSH TABLES' between my benchmark tests and I was still getting quick results most of the time.

    Note: last night while running the query I got an error: Unable to save result set. My initial research shows that may be due to a corrupt table that needs repair. Could this be the reason for the behavior I'm seeing?

    In case you want server info:

        Accessing via PHP 4.4.4
        MySQL 4.1.22
        All tables are InnoDB
        We run OPTIMIZE TABLE on all tables weekly
        The sum of both tables used in the query is 500 MB

    MySQL config:

        key_buffer = 350M
        max_allowed_packet = 16M
        thread_stack = 128K
        sort_buffer = 14M
        read_buffer = 1M
        bulk_insert_buffer_size = 400M
        set-variable = max_connections=150
        query_cache_limit = 1048576
        query_cache_size = 50777216
        query_cache_type = 1
        tmp_table_size = 203554432
        table_cache = 120
        thread_cache_size = 4
        wait_timeout = 28800
        skip-external-locking
        innodb_file_per_table
        innodb_buffer_pool_size = 3512M
        innodb_log_file_size=100M
        innodb_log_buffer_size=4M


  • [C#] Problems with implementing generic IEnumerator and IComparable

    - by r0h
    Hi all! I'm working on an AVL tree. The tree itself seems to be working, but I need an iterator to walk through the values of the tree. Therefore, I tried to implement the IEnumerator interface. Unfortunately, I get a compile-time error implementing IEnumerator and IComparable. First the code, and below that the error.

        class AvlTreePreOrderEnumerator<T> : IEnumerator<T> where T : IComparable<T>
        {
            private AvlTreeNode<T> current = default(T);
            private AvlTreeNode<T> tree = null;
            private Queue<AvlTreeNode<T>> traverseQueue = null;

            public AvlTreePreOrderEnumerator(AvlTreeNode<T> tree)
            {
                this.tree = tree;
                // Build queue
                traverseQueue = new Queue<AvlTreeNode<T>>();
                visitNode(this.tree.Root);
            }

            private void visitNode(AvlTreeNode<T> node)
            {
                if (node == null)
                    return;
                else
                {
                    traverseQueue.Enqueue(node);
                    visitNode(node.LeftChild);
                    visitNode(node.RightChild);
                }
            }

            public T Current
            {
                get { return current.Value; }
            }

            object IEnumerator.Current
            {
                get { return Current; }
            }

            public void Dispose()
            {
                current = null;
                tree = null;
            }

            public void Reset()
            {
                current = null;
            }

            public bool MoveNext()
            {
                if (traverseQueue.Count > 0)
                    current = traverseQueue.Dequeue();
                else
                    current = null;
                return (current != null);
            }
        }

    The error given by VS2008:

        Error 1  The type 'T' cannot be used as type parameter 'T' in the generic type or
        method 'Opdr2_AvlTreeTest_Final.AvlTreeNode<T>'. There is no boxing conversion or
        type parameter conversion from 'T' to 'System.IComparable'.

    For now, I've not included the tree and node logic. If anybody thinks it is necessary to resolve this problem, just say so! Thx!
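    The error says the enumerator's T does not satisfy the constraint declared on AvlTreeNode's type parameter: apparently the node class requires the non-generic System.IComparable, while the enumerator only guarantees IComparable<T>. A sketch of the two ways to line the constraints up (the node class here is hypothetical, since it wasn't posted):

        // Option 1: make the node use the generic constraint too.
        class AvlTreeNode<T> where T : IComparable<T>
        {
            public T Value;
            public AvlTreeNode<T> LeftChild;
            public AvlTreeNode<T> RightChild;
        }

        // Option 2: have the enumerator promise both forms.
        class AvlTreePreOrderEnumerator<T> : IEnumerator<T>
            where T : IComparable<T>, IComparable
        {
            // Separately, the field initializer must be null rather than
            // default(T), since the field's type is AvlTreeNode<T>:
            private AvlTreeNode<T> current = null;
            // ...
        }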


  • Spring.Net Message Selectors with compound statements don't seem to be working

    - by Jonathan Beerhalter
    I'm using Spring.NET to connect to ActiveMQ and do some fairly simple pub/sub routing. Everything works fine when my selector is a simple expression like Car='Honda', but if I try a compound expression like Car='Honda' AND Make='Pilot', I never get any matches on my subscription. Here's the code to generate the subscription; does anyone see where I might be doing something wrong?

        public bool AddSubscription(string topicName, Dictionary<string, string> selectorList, GDException exp)
        {
            try
            {
                ActiveMQTopic topic = new ActiveMQTopic(topicName);
                string selectorString = "";

                if (selectorList.Keys.Count == 0)
                {
                    // Select all items for this topic
                    selectorString = "2>1";
                }
                else
                {
                    foreach (string key in selectorList.Keys)
                    {
                        selectorString += key + " = '" + selectorList[key] + "'" + " AND ";
                    }
                    selectorString = selectorString.Remove(selectorString.Length - 5, 5);
                }

                IMessageConsumer consumer = this._subSession.CreateConsumer(topic, selectorString, false);
                if (consumer != null)
                {
                    _consumers.Add(consumer);
                    consumer.Listener += new MessageListener(HandleRecieveMessage);
                    return true;
                }
                else
                {
                    exp.SetValues("Error adding subscription, null consumer returned");
                    return false;
                }
            }
            catch (Exception ex)
            {
                exp.SetValues(ex);
                return false;
            }
        }

    And then the code to send the message, which seems simple enough to me:

        public void SendMessage(GDPubSubMessage messageToSend)
        {
            if (!this.isDisposed)
            {
                if (_producers.ContainsKey(messageToSend.Topic))
                {
                    IBytesMessage bytesMessage = this._pubSession.CreateBytesMessage(messageToSend.Payload);
                    foreach (string key in messageToSend.MessageProperties.Keys)
                    {
                        bytesMessage.Properties.SetString(key, messageToSend.MessageProperties[key]);
                    }
                    _producers[messageToSend.Topic].Send(bytesMessage, false, (byte)255, TimeSpan.FromSeconds(1));
                }
                else
                {
                    ActiveMQTopic topic = new ActiveMQTopic(messageToSend.Topic);
                    _producers.Add(messageToSend.Topic, this._pubSession.CreateProducer(topic));
                    IBytesMessage bytesMessage = this._pubSession.CreateBytesMessage(messageToSend.Payload);
                    foreach (string key in messageToSend.MessageProperties.Keys)
                    {
                        bytesMessage.Properties.SetString(key, messageToSend.MessageProperties[key]);
                    }
                    _producers[messageToSend.Topic].Send(bytesMessage);
                }
            }
            else
            {
                throw new ObjectDisposedException(this.GetType().FullName);
            }
        }

    07/10/2009 Update: OK, found the problem.

        bytesMessage.Properties.SetString(key, messageToSend.MessageProperties[key]);

    This just sets a single property, so my messages were only being tagged with a single property; hence the combo subscription never gets hit. Anyone know how to add more properties? You'd think bytesMessage.Properties would have an Add method, but it doesn't.


  • How to calculate the maximum block length in C of a binary number

    - by user1664272
    I want to reiterate that I am not asking for direct code to solve my problem; rather, I want information on how to reach the solution. I asked a question earlier about counting specific integers in binary code. Now I would like to ask how one goes about counting the maximum block length within binary code. I honestly just want to know where to get started, and what exactly the question means by writing code to figure out the "maximum block length" of an inputted integer's binary representation.

    Ex: Input: 456
    Output: 111001000
    Number of 1's: 4
    Maximum block length: ?

    Here is my code so far, for reference, if you need to see where I'm coming from:

        #include <stdio.h>

        int main(void)
        {
            int integer;       // number to be entered by user
            int i, b;
            unsigned int ones;

            printf("Please type in a decimal integer\n");  // prompt
            fflush(stdout);
            scanf("%d", &integer);                         // read an integer

            if (integer < 0) {
                printf("Input value is negative!");  // if the integer is less than
                fflush(stdout);                      // zero, print a statement
                return 0;
            } else {
                printf("Binary Representation:\n");  // if the integer is greater than zero
                fflush(stdout);
            }

            // convert the inputted integer to binary form
            for (i = 31; i >= 0; --i) {
                b = integer >> i;
                if (b & 1) {
                    printf("1");
                } else {
                    printf("0");
                }
            }
            printf("\n");
            fflush(stdout);

            ones = 0;          // stores how many 1's are in the binary code
            while (integer) {  // count the number of 1's within the binary code
                ++ones;
                integer &= integer - 1;
            }
            printf("Number of 1's in Binary Representation: %d\n", ones);
            fflush(stdout);

            printf("Maximum Block Length: \n");
            fflush(stdout);
            printf("\n");
            fflush(stdout);
            return 0;
        }
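    If "maximum block length" means the longest run of consecutive 1 bits (which fits the earlier parts of the exercise), one way to start is to walk the bits once, tracking the current run length and the best run seen so far; a sketch:

        // Longest run of consecutive 1 bits; for 456 (111001000) this yields 3.
        unsigned max_block_length(unsigned v)
        {
            unsigned run = 0, best = 0;
            while (v) {
                run = (v & 1) ? run + 1 : 0;  // extend the run on a 1, reset on a 0
                if (run > best)
                    best = run;
                v >>= 1;
            }
            return best;
        }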


  • Searching in Ruby on Rails - How do I search on each word entered and not the exact string?

    - by bgadoci
    I have built a blog application with Ruby on Rails, and I am trying to implement a search feature. The blog application allows users to tag posts. The tags are created in their own table and belong_to :post. When a tag is created, a record is created in the tags table where the name of the tag is tag_name, associated by post_id. Tags are strings.

    I am trying to allow a user to search for any word in tag_name, in any order. Here is what I mean: let's say a particular post has a tag that is 'ruby code controller'. In my current search feature, that tag will be found if the user searches for 'ruby', 'ruby code', or 'ruby code controller'. It will not be found if the user types in 'ruby controller'. Essentially, I would like each word entered in the search to be searched for, not necessarily the exact string that is entered into the search.

    I have been experimenting with providing multiple text fields to allow the user to type in multiple words, and have also been playing around with the code below, but can't seem to accomplish the above. I am new to Ruby and Rails, so sorry if this is an obvious question; before installing a gem or plugin, I thought I would check whether there is a simple fix. Here is my code:

    View, /views/tags/index.html.erb:

        <% form_tag tags_path, :method => 'get' do %>
          <p>
            <%= text_field_tag :search, params[:search], :class => "textfield-search" %>
            <%= submit_tag "Search", :name => nil, :class => "search-button" %>
          </p>
        <% end %>

    TagsController:

        def index
          @tags = Tag.search(params[:search]).paginate :page => params[:page], :per_page => 5
          @tagsearch = Tag.search(params[:search])
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 100)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @tags }
          end
        end

    Tag model:

        class Tag < ActiveRecord::Base
          belongs_to :post
          validates_length_of :tag_name, :maximum => 42
          validates_presence_of :tag_name

          def self.search(search)
            if search
              find(:all, :order => "created_at DESC",
                   :conditions => ['tag_name LIKE ?', "%#{search}%"])
            else
              find(:all, :order => "created_at DESC")
            end
          end
        end
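    One way to match the words independently of their order (a sketch against the same Rails 2-style finder used above): split the search string on whitespace and AND together one LIKE clause per word, so 'ruby controller' matches any tag_name containing both words:

        def self.search(search)
          if search
            words = search.split
            conditions = [words.map { "tag_name LIKE ?" }.join(" AND "),
                          *words.map { |w| "%#{w}%" }]
            # e.g. ["tag_name LIKE ? AND tag_name LIKE ?", "%ruby%", "%controller%"]
            find(:all, :order => "created_at DESC", :conditions => conditions)
          else
            find(:all, :order => "created_at DESC")
          end
        end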

