Search Results

Search found 2646 results on 106 pages for 'fetch'.


  • Problem inserting in two different tables [closed]

    - by imvarunkmr
    I have written an insert statement which inserts a record into Table1. Table1 has a column "ID" which is an auto_increment (identity) primary key. How can I fetch the newly generated "ID", as I need to insert this value as a foreign key in Table2? Note: I have written the INSERT statement in a stored procedure and I am calling this procedure from C#. Alternative suggestions for linking both tables are also welcome :)
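    A minimal T-SQL sketch of the usual approach: capture SCOPE_IDENTITY() inside the procedure and hand it back through an OUTPUT parameter so the C# caller can reuse it. Table and column names below are placeholders, not from the question.

        CREATE PROCEDURE InsertParentAndChild
            @SomeValue NVARCHAR(50),
            @NewId INT OUTPUT
        AS
        BEGIN
            INSERT INTO Table1 (SomeColumn) VALUES (@SomeValue);

            -- SCOPE_IDENTITY() returns the identity generated by the last
            -- insert in the current scope; safer than @@IDENTITY, which can
            -- pick up values generated by triggers.
            SET @NewId = SCOPE_IDENTITY();

            INSERT INTO Table2 (Table1Id) VALUES (@NewId);
        END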


  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this:

        FooBar Ltd 123456
        http://example.com/Foobar-Ltd-123456.html
        Project: backend
        Tracker: Dataerror
        Beschreibung: This is the description
        ===========================
        CLIENT_IP: 192.168.1.215
        HTTP_USER_AGENT: mozilla/asdfjköl

    I try to fetch them into Redmine via this command:

        rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \
          RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \
          password=1234 project=myproject tracker=other \
          allow_override=project,tracker,category,priority \
          move_on_success=read move_on_failure=failed

    But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK too. In order to further debug this issue, I need some logfiles. Are there any logfiles written by this command? Or are there any other suggestions to solve this issue? My environment:

        danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about
        About your application's environment
        Ruby version              1.8.7 (i486-linux)
        RubyGems version          1.3.5
        Rack version              1.0
        Rails version             2.3.5
        Active Record version     2.3.5
        Active Resource version   2.3.5
        Action Mailer version     2.3.5
        Active Support version    2.3.5
        Application root          /var/www/projects/redmine
        Environment               production
        Database adapter          mysql
        Database schema version   20100819172912
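    For what it's worth, a Rails app of this vintage normally logs to log/production.log under the application root, so a hedged first step (path assumed from the "Application root" line above) is to watch that file while the rake task runs:

        # Path assumed from the environment output above; adjust if your
        # log directory differs.
        tail -f /var/www/projects/redmine/log/production.log

        # Re-running the rake task with --trace added after "rake" can also
        # surface errors that are otherwise swallowed.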


  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up CloneZilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot CloneZilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far:

        LABEL Clonezilla Live
          MENU LABEL Clonezilla Live
          KERNEL utilities/clonezilla/vmlinuz
          APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following append lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
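    One hedged thing to try: TFTP transfers of a file as large as filesystem.squashfs are a common failure point, and the live-boot fetch= parameter also accepts HTTP, so serving the squashfs from a web server on the same host may get past the hang. The web path below is an assumption; the IP is the one from the question.

        # Sketch: copy filesystem.squashfs into a web server's document
        # root (e.g. /var/www/clonezilla/) and point fetch= at it:
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=http://10.130.155.23/clonezilla/filesystem.squashfs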


  • Trouble getting latest version of Git

    - by TheMethod
    I am using Ubuntu 10.04 LTS. I'm looking at using git as source control for personal projects and GitHub as a remote repository. I was having trouble pushing a commit to my remote GitHub repo, getting the following error message:

        The requested URL returned error: 403 while accessing
        https://github.com/Jstall/helloworld.git/info/refs

    When I did some digging I found that the problem could be me not having the latest version of Git. When I ran git --version I found that I have version 1.7.0.4 locally. So I tried to update git using:

        sudo apt-get install git

    but get the following error:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package git is not available, but is referred to by another package.
        This may mean that the package is missing, has been obsoleted, or
        is only available from another source
        E: Package git has no installation candidate

    I've tried running sudo apt-get update and trying again, but it didn't seem to make a difference. I'm not sure if it's relevant, but I'm also getting a couple of 404s when I run update:

        Err http://wine.budgetdedicated.com edgy/main Packages
          404 Not Found
        Fetched 4,117B in 0s (5,142B/s)
        W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/edgy/universe/binary-i386/Packages.gz  404 Not Found [IP: 91.189.91.15 80]
        W: Failed to fetch http://wine.budgetdedicated.com/apt/dists/edgy/main/binary-i386/Packages.gz  404 Not Found

    I'm not sure what I should try next. Could anyone suggest a course of action to get this resolved? Any advice would be appreciated. Thanks much!
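    A hedged sketch of the usual fix on Lucid: comment out the dead edgy lines in /etc/apt/sources.list (they cause the 404s), and note that the package was still named git-core in 10.04. The git-core PPA is a common way to get a newer git; this assumes add-apt-repository (from python-software-properties) is installed.

        # Comment out the obsolete 'edgy' entries causing the 404s:
        sudo sed -i 's/^\(deb.*edgy.*\)$/#\1/' /etc/apt/sources.list
        sudo apt-get update

        # On Ubuntu 10.04 the package is git-core, not git:
        sudo apt-get install git-core

        # Optionally, for a much newer git than the Lucid archive carries:
        sudo add-apt-repository ppa:git-core/ppa
        sudo apt-get update && sudo apt-get install git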


  • wget recursively download from pages with lots of links

    - by Shadow
    When using wget with the recursive option turned on, I am getting an error message when it is trying to download a file. It thinks the link is a downloadable file, when in reality it should just be following it to get to the page that actually contains the files (or more links to follow) that I want.

        wget -r -l 16 --accept=jpg website.com

    The error message is: "... since it should be rejected". This usually occurs when the link wget is trying to fetch ends with a SQL statement. The problem, however, doesn't occur when using the very same wget command on that link directly. I want to know how exactly wget tries to fetch the pages. I guess I could always take a poke around the source, although I don't know how messy the project is. I might also be missing exactly what "recursive" means in the context of wget. I thought it would run through and follow each link, getting the files with the extension I have requested. I posted this over at Stack Overflow but they sent me over here :) Hoping you guys can help.
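    For what it's worth, a sketch of how wget's recursion behaves: with -r, HTML pages are downloaded and parsed for further links even when they fail --accept, and are then deleted with exactly the "since it should be rejected" message quoted above, so that message is usually informational rather than fatal. Logging the run (standard wget flags below) shows which URLs are treated as pages versus files:

        # Recurse 16 levels, keep only JPEGs; the HTML that drives the
        # recursion is still fetched and parsed, then removed.
        # -o writes a log so you can see each URL wget actually requests.
        wget -r -l 16 --accept=jpg -o wget.log website.com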


  • Munin server monitoring problem: graphs not being generated

    - by geerlingguy
    When I run munin-cron (munin-cron --debug), I get the following error:

        2010/05/10 13:39:01 [WARNING] Call to accept timed out. Remaining workers: archstl.org;archstl.archstl.org
        2010/05/10 13:39:01 [DEBUG] Active workers: 1/8

    These errors simply keep repeating themselves until I quit munin-cron. I've followed the directions for debugging munin on the 'Debugging Munin plugins' wiki page, but I get the following results when going through their directions: after telnetting to localhost 4949, I can see a list of plugins, see a node at archstl.archstl.org, but can't fetch anything. The output is as follows:

        > fetch cpu
        .

    However, on the same machine (which is both the node and the master munin server), I can run munin-run cpu, and it prints the results correctly to the command line, like so:

        user.value 100829130
        nice.value 3479880
        system.value 13969362
        idle.value 664312639
        iowait.value 12180168
        irq.value 14242
        softirq.value 199526
        steal.value 0

    Looking at the wiki page mentioned above, it looks like it might be a plugin environment problem, but I can't figure out how to fix/change this:

        If the plugin does run with munin-run but not through telnet, you
        probably have a PATH problem. Tip: Set env.PATH for the plugin in
        the plugin's environment file.
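    Following the tip quoted above, the conventional place to set a plugin's environment on the node is a file under /etc/munin/plugin-conf.d/. A minimal sketch; the PATH value is an assumption, so mirror whatever your shell has when munin-run works:

        # /etc/munin/plugin-conf.d/munin-node  (sketch)
        [cpu]
        env.PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

        # Restart the node afterwards so the plugin environment is reloaded:
        #   sudo /etc/init.d/munin-node restart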


  • Router vs switch in a LAN [closed]

    - by servernewbie
    If I have a LAN and connect it with a switch, I understand it uses a CAM table to forward frames at layer 2 (by saving MAC-to-port relations). So far all good. However, when using a router for a LAN (ONLY for a LAN, not to connect it to "the outside" WAN/internet/etc.), I get a bit confused as to how it internally processes packets. I would first split this into two router scenarios:

    Router with built-in switch: In this scenario, I would expect that it acts exactly as a switch with a CAM table internally. This would probably give a small speed benefit (guessing here?) compared to the next option.

    Router without built-in switch: Here is where I get confused. If hostA wants to send a packet to hostB, it will ARP to find hostB's MAC address and send it there. Now, if we had a switch (above scenario) this would be easy. But how does it work in a router WITHOUT a switch? If I had to guess, hostA would send an Ethernet frame with hostB's MAC address onto the wire. The router would fetch the frame (even though the router has another MAC address, it would still fetch this frame even though it only contains hostB's MAC address). It would strip the Ethernet frame header, check the IP, and then check its own internal ARP table again for the MAC address. Now, this would seem like a waste of resources compared to a router with a built-in switch. But maybe it does not work like that at all. Does it also contain a CAM table? If that were true, what would the difference between these two routers really be?


  • Upgrade of Ubuntu 8.10 distribution fails due to missing packages

    - by Tim
    I have a server that I've forgotten to upgrade for ages, which is still running Intrepid (8.10). I'd like to upgrade it to a newer version of the distribution, so that I can get security patches etc. I found some instructions that tell me to install the package update-manager-core. I tried the following:

        $ sudo apt-get install update-manager-core

    but this fails since some of the necessary packages can't be found:

        ...
        Err http://archive.ubuntu.com intrepid/main python-apt 0.7.7.1ubuntu4
          404 Not Found [IP: 91.189.88.40 80]
        Err http://archive.ubuntu.com intrepid-updates/main update-manager-core 1:0.93.34
          404 Not Found [IP: 91.189.88.40 80]
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/p/python-apt/python-apt_0.7.7.1ubuntu4_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
        Failed to fetch http://archive.ubuntu.com/ubuntu/pool/main/u/update-manager/update-manager-core_0.93.34_amd64.deb  404 Not Found [IP: 91.189.88.40 80]
        ...

    I know that Intrepid is no longer supported, and so I guess some of the necessary files may no longer be maintained. But this seems rather unhelpful: I can't upgrade because it's too old, and the only way to fix this would be to upgrade it. Is there a way round this? Is something else wrong?
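    Intrepid's packages were moved to old-releases.ubuntu.com when support ended, so the usual way around this chicken-and-egg is to repoint sources.list there first. A hedged sketch:

        # Point apt at the archive for end-of-life releases:
        sudo sed -i -e 's/archive.ubuntu.com/old-releases.ubuntu.com/g' \
                    -e 's/security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
        sudo apt-get update
        sudo apt-get install update-manager-core
        # Then walk forward one release at a time:
        sudo do-release-upgrade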


  • Apache DS fails to list users

    - by CuriousMind
    Apache DS fails to list the users. The service wrapper log shows the following error (every line carries the prefix "INFO | jvm 1 | 2012/03/28 15:54:04 |", trimmed here for readability):

        java.lang.Error: ERR_546 CRITICAL: page header magic for block 59 not OK 0
            at jdbm.recman.PageHeader.<init>(PageHeader.java:95)
            at jdbm.recman.PageHeader.getView(PageHeader.java:124)
            at jdbm.recman.PageManager.getNext(PageManager.java:234)
            at jdbm.recman.PageCursor.next(PageCursor.java:104)
            at jdbm.recman.PhysicalRowIdManager.fetch(PhysicalRowIdManager.java:158)
            at jdbm.recman.BaseRecordManager.fetch(BaseRecordManager.java:324)
            at jdbm.recman.CacheRecordManager.fetch(CacheRecordManager.java:262)
            at jdbm.btree.BPage.loadBPage(BPage.java:899)
            at jdbm.btree.BPage.childBPage(BPage.java:890)
            at jdbm.btree.BPage.find(BPage.java:284)
            at jdbm.btree.BPage.find(BPage.java:285)
            at jdbm.btree.BTree.find(BTree.java:408)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.get(JdbmTable.java:395)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmMasterTable.get(JdbmMasterTable.java:155)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmStore.lookup(JdbmStore.java:1332)
            at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmStore.lookup(JdbmStore.java:70)
            at org.apache.directory.server.xdbm.search.impl.EqualityEvaluator.evaluate(EqualityEvaluator.java:126)
            at org.apache.directory.server.xdbm.search.impl.AndCursor.matches(AndCursor.java:234)
            at org.apache.directory.server.xdbm.search.impl.AndCursor.next(AndCursor.java:143)
            at org.apache.directory.server.xdbm.search.impl.AndCursor.next(AndCursor.java:139)
            at org.apache.directory.server.core.partition.impl.btree.ServerEntryCursorAdaptor.next(ServerEntryCursorAdaptor.java:178)
            at org.apache.directory.server.core.filtering.BaseEntryFilteringCursor.next(BaseEntryFilteringCursor.java:499)
            at org.apache.directory.server.ldap.handlers.SearchHandler.readResults(SearchHandler.java:314)
            at org.apache.directory.server.ldap.handlers.SearchHandler.doSimpleSearch(SearchHandler.java:749)
            at org.apache.directory.server.ldap.handlers.SearchHandler.handleIgnoringReferrals(SearchHandler.java:978)
            at org.apache.directory.server.ldap.handlers.SearchHandler.handleIgnoringReferrals(SearchHandler.java:78)
            at org.apache.directory.server.ldap.handlers.ReferralAwareRequestHandler.handle(ReferralAwareRequestHandler.java:83)
            at org.apache.directory.server.ldap.handlers.ReferralAwareRequestHandler.handle(ReferralAwareRequestHandler.java:57)
            at org.apache.directory.server.ldap.handlers.LdapRequestHandler.handleMessage(LdapRequestHandler.java:208)
            at org.apache.directory.server.ldap.handlers.LdapRequestHandler.handleMessage(LdapRequestHandler.java:58)
            at org.apache.mina.handler.demux.DemuxingIoHandler.messageReceived(DemuxingIoHandler.java:232)
            at org.apache.directory.server.ldap.LdapProtocolHandler.messageReceived(LdapProtocolHandler.java:193)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain$TailFilter.messageReceived(DefaultIoFilterChain.java:713)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain.callNextMessageReceived(DefaultIoFilterChain.java:434)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain.access$1200(DefaultIoFilterChain.java:46)
            at org.apache.mina.core.filterchain.DefaultIoFilterChain$EntryImpl$1.messageReceived(DefaultIoFilterChain.java:793)
            at org.apache.mina.core.filterchain.IoFilterEvent.fire(IoFilterEvent.java:71)
            at org.apache.mina.core.session.IoEvent.run(IoEvent.java:63)
            at org.apache.mina.filter.executor.UnorderedThreadPoolExecutor$Worker.runTask(UnorderedThreadPoolExecutor.java:480)
            at org.apache.mina.filter.executor.UnorderedThreadPoolExecutor$Worker.run(UnorderedThreadPoolExecutor.java:434)
            at java.lang.Thread.run(Thread.java:619)

    followed by:

        [15:54:04] WARN [org.apache.directory.server.ldap.LdapProtocolHandler] - Null LdapSession given to cleanUpSession.
        [15:55:20] WARN [org.apache.directory.server.ldap.LdapProtocolHandler] - Unexpected exception forcing session to close: sending disconnect notice to client.


  • Django manager for _set in model

    - by Daniel Johansson
    Hello, I'm in the process of learning Django at the moment but I can't figure out how to solve this problem on my own. I'm reading the book Developers Library - Python Web Development With Django, and in one chapter you build a simple CMS system with two models (Story and Category), some generic and custom views, together with templates for the views. The book only contains code for listing stories, story details and search. I wanted to expand on that and build a page with nested lists for categories and stories:

        - Category1
          -- Story1
          -- Story2
        - Category2
          -- Story3
        etc.

    I managed to figure out how to add my own generic object_list view for the category listing. My problem is that the Story model has STATUS_CHOICES indicating whether the Story is public or not, and a custom manager that will only fetch the public Stories by default. I can't figure out how to tell my generic Category list view to also use a custom manager and only fetch the public Stories. Everything works except that small problem. I'm able to create a list of all categories with a sub-list of all stories in each category on a single page; the only problem is that the list contains non-public Stories. I don't know if I'm on the right track here. My urls.py contains a generic view that fetches all Category objects, and in my template I'm using category.story_set.all to get all Story objects for that category, which I then loop over. I think it would be possible to add an if statement in the template and use the VIEWABLE_STATUS from my model file to check if it should be listed or not. The problem with that solution is that it's not very DRY-compatible. Is it possible to add some kind of manager for the Category model too, one that will only fetch public Story objects when using story_set on a category? Or is this the wrong way to attack my problem?
    Related code. urls.py (only the category list view):

        urlpatterns += patterns('django.views.generic.list_detail',
            url(r'^categories/$', 'object_list',
                {'queryset': Category.objects.all(),
                 'template_object_name': 'category'},
                name='cms-categories'),
        )

    models.py:

        from markdown import markdown
        import datetime

        from django.db import models
        from django.db.models import permalink
        from django.contrib.auth.models import User

        VIEWABLE_STATUS = [3, 4]

        class ViewableManager(models.Manager):
            def get_query_set(self):
                default_queryset = super(ViewableManager, self).get_query_set()
                return default_queryset.filter(status__in=VIEWABLE_STATUS)

        class Category(models.Model):
            """A content category"""
            label = models.CharField(blank=True, max_length=50)
            slug = models.SlugField()

            class Meta:
                verbose_name_plural = "categories"

            def __unicode__(self):
                return self.label

            @permalink
            def get_absolute_url(self):
                return ('cms-category', (), {'slug': self.slug})

        class Story(models.Model):
            """A hunk of content for our site, generally corresponding to a page"""
            STATUS_CHOICES = (
                (1, "Needs Edit"),
                (2, "Needs Approval"),
                (3, "Published"),
                (4, "Archived"),
            )
            title = models.CharField(max_length=100)
            slug = models.SlugField()
            category = models.ForeignKey(Category)
            markdown_content = models.TextField()
            html_content = models.TextField(editable=False)
            owner = models.ForeignKey(User)
            status = models.IntegerField(choices=STATUS_CHOICES, default=1)
            created = models.DateTimeField(default=datetime.datetime.now)
            modified = models.DateTimeField(default=datetime.datetime.now)

            class Meta:
                ordering = ['modified']
                verbose_name_plural = "stories"

            def __unicode__(self):
                return self.title

            @permalink
            def get_absolute_url(self):
                return ("cms-story", (), {'slug': self.slug})

            def save(self):
                self.html_content = markdown(self.markdown_content)
                self.modified = datetime.datetime.now()
                super(Story, self).save()

            admin_objects = models.Manager()
            objects = ViewableManager()

    category_list.html (related template):

        {% extends "cms/base.html" %}
        {% block content %}
        <h1>Categories</h1>
        {% if category_list %}
        <ul id="category-list">
        {% for category in category_list %}
            <li><a href="{{ category.get_absolute_url }}">{{ category.label }}</a></li>
            {% if category.story_set %}
            <ul>
            {% for story in category.story_set.all %}
                <li><a href="{{ story.get_absolute_url }}">{{ story.title }}</a></li>
            {% endfor %}
            </ul>
            {% endif %}
        {% endfor %}
        </ul>
        {% else %}
        <p>Sorry, no categories at the moment.</p>
        {% endif %}
        {% endblock %}
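    One DRY-friendly sketch (the helper name viewable_stories is my invention, not from the book): put the status filter on Category once, reusing the same VIEWABLE_STATUS list the manager uses, and call it from the template instead of story_set.all:

        # models.py - hypothetical addition to the existing Category class
        class Category(models.Model):
            label = models.CharField(blank=True, max_length=50)
            slug = models.SlugField()

            def viewable_stories(self):
                # Same filter as ViewableManager, so the rule lives in one place.
                return self.story_set.filter(status__in=VIEWABLE_STATUS)

    and in category_list.html:

        {% for story in category.viewable_stories %}
            <li><a href="{{ story.get_absolute_url }}">{{ story.title }}</a></li>
        {% endfor %}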


  • Backbone.js Model change events in nested collections not firing as expected

    - by Pallavi Kaushik
    I'm trying to use backbone.js in my first "real" application and I need some help debugging why certain model change events are not firing as I would expect. I have a web service at /employees/{username}/tasks which returns a JSON array of task objects, with each task object nesting a JSON array of subtask objects. For example:

        [{
            "id":45002,
            "name":"Open Dining Room",
            "subtasks":[
                {"id":1,"status":"YELLOW","name":"Clean all tables"},
                {"id":2,"status":"RED","name":"Clean main floor"},
                {"id":3,"status":"RED","name":"Stock condiments"},
                {"id":4,"status":"YELLOW","name":"Check / replenish trays"}
            ]
        },{
            "id":47003,
            "name":"Open Registers",
            "subtasks":[
                {"id":1,"status":"YELLOW","name":"Turn on all terminals"},
                {"id":2,"status":"YELLOW","name":"Balance out cash trays"},
                {"id":3,"status":"YELLOW","name":"Check in promo codes"},
                {"id":4,"status":"YELLOW","name":"Check register promo placards"}
            ]
        }]

    Another web service allows me to change the status of a specific subtask in a specific task, and looks like this:

        /tasks/45002/subtasks/1/status/red

    [aside: I intend to change this to an HTTP POST-based service, but the current implementation is easier for debugging] I have the following classes in my JS app. Subtask Model and Subtask Collection:

        var Subtask = Backbone.Model.extend({});
        var SubtaskCollection = Backbone.Collection.extend({
            model: Subtask
        });

    Task Model with a nested instance of a Subtask Collection:

        var Task = Backbone.Model.extend({
            initialize: function() {
                // each Task has a reference to a collection of Subtasks
                this.subtasks = new SubtaskCollection(this.get("subtasks"));
                // status of each Task is based on the status of its Subtasks
                this.update_status();
            },
            ...
        });
        var TaskCollection = Backbone.Collection.extend({
            model: Task
        });

    Task View that renders the item and listens for change events on the model:

        var TaskView = Backbone.View.extend({
            tagName: "li",
            template: $("#TaskTemplate").template(),
            initialize: function() {
                _.bindAll(this, "on_change", "render");
                this.model.bind("change", this.on_change);
            },
            ...
            on_change: function(e) {
                alert("task model changed!");
            }
        });

    When the app launches, I instantiate a TaskCollection (using the data from the first web service listed above), bind a listener for change events to the TaskCollection, and set up a recurring setInterval to fetch() the TaskCollection instance:

        ...
        TASKS = new TaskCollection();
        TASKS.url = ".../employees/" + username + "/tasks"
        TASKS.fetch({
            success: function() {
                APP.renderViews();
            }
        });
        TASKS.bind("change", function() {
            alert("collection changed!");
            APP.renderViews();
        });
        // Poll every 5 seconds to keep the models up-to-date.
        setInterval(function() { TASKS.fetch(); }, 5000);
        ...

    Everything renders as expected the first time. But at this point, I would expect either (or both) a Collection change event or a Model change event to get fired if I change a subtask's status using my second web service, but this does not happen. Funnily, I did get change events to fire if I added one additional level of nesting, with the web service returning a single object that has the Tasks Collection embedded, for example:

        "employee":"pkaushik", "tasks":[{"id":45002,"subtasks":[{"id":1.....

    But this seems kludgey... and I'm afraid I haven't architected my app right. I'll include more code if it helps, but this question is already rather verbose. Thoughts?
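    A hedged sketch of one way to make the nested collection track fetches: Backbone change events don't descend into plain attributes, so when fetch() replaces the "subtasks" attribute you can push the new array into the nested collection yourself. This reuses the names from the code above; whether "change:subtasks" fires depends on the fetched data actually differing from what is stored.

        var Task = Backbone.Model.extend({
            initialize: function() {
                this.subtasks = new SubtaskCollection(this.get("subtasks"));
                // When fetch() replaces the "subtasks" attribute, re-populate
                // the nested collection; "change" has already fired on the
                // Task at that point, so bound views will re-render.
                this.bind("change:subtasks", _.bind(function(task, value) {
                    this.subtasks.reset(value);
                    this.update_status();
                }, this));
                this.update_status();
            }
        });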


  • NSFetchedResultsController: using of NSManagedObjectContext during update brings to crash

    - by Kentzo
    Here is the interface of my controller class:

        @interface ProjectListViewController : UITableViewController <NSFetchedResultsControllerDelegate> {
            NSFetchedResultsController *fetchedResultsController;
            NSManagedObjectContext *managedObjectContext;
        }
        @end

    I use the following code to init fetchedResultsController:

        if (fetchedResultsController != nil) {
            return fetchedResultsController;
        }

        // Create and configure a fetch request with the Project entity.
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Project" inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];

        // Create the sort descriptors array.
        NSSortDescriptor *projectIdDescriptor = [[NSSortDescriptor alloc] initWithKey:@"projectId" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:projectIdDescriptor, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];

        // Create and initialize the fetch results controller.
        NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:nil];
        self.fetchedResultsController = aFetchedResultsController;
        fetchedResultsController.delegate = self;

    As you can see, I am using the same managedObjectContext as defined in my controller class. Here is the adoption of the NSFetchedResultsControllerDelegate protocol:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller {
            // The fetch controller is about to start sending change
            // notifications, so prepare the table view for updates.
            [self.tableView beginUpdates];
        }

        - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath {
            UITableView *tableView = self.tableView;
            switch(type) {
                case NSFetchedResultsChangeInsert:
                    [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeUpdate:
                    [self _configureCell:(TDBadgedCell *)[tableView cellForRowAtIndexPath:indexPath] atIndexPath:indexPath];
                    break;
                case NSFetchedResultsChangeMove:
                    if (newIndexPath != nil) {
                        [tableView deleteRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
                        [tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:newIndexPath] withRowAnimation:UITableViewRowAnimationFade];
                    } else {
                        [tableView reloadSections:[NSIndexSet indexSetWithIndex:indexPath.section] withRowAnimation:UITableViewRowAnimationFade];
                    }
                    break;
            }
        }

        - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type {
            switch(type) {
                case NSFetchedResultsChangeInsert:
                    [self.tableView insertSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade];
                    break;
                case NSFetchedResultsChangeDelete:
                    [self.tableView deleteSections:[NSIndexSet indexSetWithIndex:sectionIndex] withRowAnimation:UITableViewRowAnimationFade];
                    break;
            }
        }

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            [self.tableView endUpdates];
        }

    Inside of the _configureCell:atIndexPath: method I have the following code:

        NSFetchRequest *issuesNumberRequest = [NSFetchRequest new];
        NSEntityDescription *issueEntity = [NSEntityDescription entityForName:@"Issue" inManagedObjectContext:managedObjectContext];
        [issuesNumberRequest setEntity:issueEntity];
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"projectId == %@", project.projectId];
        [issuesNumberRequest setPredicate:predicate];
        NSUInteger issuesNumber = [managedObjectContext countForFetchRequest:issuesNumberRequest error:nil];
        [issuesNumberRequest release];

    I am using the managedObjectContext again. But when I try to insert a new Project, the app crashes with the following exception:

        Assertion failure in -[UITableView _endCellAnimationsWithContext:],
        /SourceCache/UIKit_Sim/UIKit-984.38/UITableView.m:774
        Terminating app due to uncaught exception 'NSInternalInconsistencyException',
        reason: 'Invalid update: invalid number of rows in section 0. The number of
        rows contained in an existing section after the update (4) must be equal to
        the number of rows contained in that section before the update (4), plus or
        minus the number of rows inserted or deleted from that section
        (1 inserted, 0 deleted).'

    Fortunately, I've found a workaround: if I create and use a separate NSManagedObjectContext inside of the _configureCell:atIndexPath: method, the app won't crash! I only want to know: is this behavior correct or not?


  • Problem detaching entire object graph in GAE-J with JDO

    - by tempy
    I am trying to load the full object graph for User, which contains a collection of decks, which then contains a collection of cards, as such. User:

        @PersistenceCapable(detachable = "true")
        @Inheritance(strategy = InheritanceStrategy.SUBCLASS_TABLE)
        @FetchGroup(name = "decks", members = { @Persistent(name = "_Decks") })
        public abstract class User {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            protected Key _ID;

            @Persistent
            protected String _UniqueIdentifier;

            @Persistent(mappedBy = "_Owner")
            @Element(dependent = "true")
            protected Set<Deck> _Decks;

            protected User() {
            }
        }

    Each Deck has a collection of Cards, as such:

        @PersistenceCapable(detachable = "true")
        @FetchGroup(name = "cards", members = { @Persistent(name = "_Cards") })
        public class Deck {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key _ID;

            @Persistent
            String _Name;

            @Persistent(mappedBy = "_Parent")
            @Element(dependent = "true")
            private Set<Card> _Cards = new HashSet<Card>();

            @Persistent
            private Set<String> _Tags = new HashSet<String>();

            @Persistent
            private User _Owner;
        }

    And finally, each card:

        @PersistenceCapable
        public class Card {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key _ID;

            @Persistent
            private Text _Question;

            @Persistent
            private Text _Answer;

            @Persistent
            private Deck _Parent;
        }

    I am trying to retrieve and then detach the entire object graph. I can see in the debugger that it loads fine, but then when I get to detaching, I can't make anything beyond the User object load (no Decks, no Cards). At first I tried, without a transaction, to simply "touch" all the fields on the attached object before detaching, but that didn't help. Then I tried adding everything to the default fetch group, but that just generated warnings about GAE not supporting joins. I tried setting the fetch plan's max fetch depth to -1, but that didn't do it. Finally, I tried using FetchGroups as you can see above, and then retrieving with the following code:

        PersistenceManager pm = _pmf.getPersistenceManager();
        pm.setDetachAllOnCommit(true);
        pm.getFetchPlan().setGroup("decks");
        pm.getFetchPlan().setGroup("cards");
        Transaction tx = pm.currentTransaction();
        Query query = null;
        try {
            tx.begin();
            query = pm.newQuery(GoogleAccountsUser.class); // Subclass of User
            query.setFilter("_UniqueIdentifier == TheUser");
            query.declareParameters("String TheUser");
            List<User> results = (List<User>)query.execute(ID); // ID = supplied parameter
            // TODO: Test for more than one result and throw
            if (results.size() == 0) {
                tx.commit();
                return null;
            } else {
                User usr = (User)results.get(0);
                //usr = pm.detachCopy(usr);
                tx.commit();
                return usr;
            }
        } finally {
            query.closeAll();
            if (tx.isActive()) {
                tx.rollback();
            }
            pm.close();
        }

    This also doesn't work, and I'm running out of ideas...
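    One hedged observation about the retrieval code above: in JDO, FetchPlan.setGroup() replaces the set of active groups, so the second call leaves only "cards" active and the "decks" relation is never in the plan when detachment happens. addGroup() keeps both, and raising the fetch depth is often needed for the full graph too:

        // Sketch: activate both groups instead of replacing one with the other.
        pm.getFetchPlan().addGroup("decks");
        pm.getFetchPlan().addGroup("cards");
        pm.getFetchPlan().setMaxFetchDepth(-1); // -1 = unlimited depth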


  • Errors with parameter datatype in PostgreSql query

    - by John
    I'm trying to execute a query against PostgreSQL using the following code. It's written in C/C++ and I keep getting the following error when declaring a cursor:

        DECLARE CURSOR failed: ERROR: could not determine data type of parameter $1

    Searching on here and on Google, I can't find a solution. Can anyone find where I have made an error and why this is happening? Thanks!

        void searchdb( PGconn *conn, char* name, char* offset )
        {
            // Will hold the number of fields in the table
            int nFields;

            // Start a transaction block
            PGresult *res = PQexec(conn, "BEGIN");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                printf("BEGIN command failed: %s", PQerrorMessage(conn));
                PQclear(res);
                exit_nicely(conn);
            }
            // Clear result
            PQclear(res);
            printf("BEGIN command - OK\n");

            //set the values to use
            const char *values[3] = {(char*)name, (char*)RESULTS_LIMIT, (char*)offset};
            //calculate the lengths of each of the values
            int lengths[3] = {strlen((char*)name), sizeof(RESULTS_LIMIT), sizeof(offset)};
            //state which parameters are binary
            int binary[3] = {0, 0, 1};

            res = PQexecParams(conn,
                "DECLARE emprec CURSOR for SELECT name, id, 'Events' as source FROM events_basic WHERE name LIKE '$1::varchar%' UNION ALL "
                " SELECT name, fsq_id, 'Venues' as source FROM venues_cache WHERE name LIKE '$1::varchar%' UNION ALL "
                " SELECT name, geo_id, 'Cities' as source FROM static_cities WHERE name LIKE '$1::varchar%' OR FIND_IN_SET('$1::varchar%', alternate_names) != 0 LIMIT $2::int4 OFFSET $3::int4",
                3,        //number of parameters
                NULL,     //ignore the Oid field
                values,   //values to substitute $1 and $2
                lengths,  //the lengths, in bytes, of each of the parameter values
                binary,   //whether the values are binary or not
                0);       //we want the result in text format

            // Fetch rows from table
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                printf("DECLARE CURSOR failed: %s", PQerrorMessage(conn));
                PQclear(res);
                exit_nicely(conn);
            }
            // Clear result
            PQclear(res);

            res = PQexec(conn, "FETCH ALL in emprec");
            if (PQresultStatus(res) != PGRES_TUPLES_OK)
            {
                printf("FETCH ALL failed");
                PQclear(res);
                exit_nicely(conn);
            }

            // Get the field count
            nFields = PQnfields(res);

            // Prepare the header with the table field names
            printf("\nFetch record:");
            printf("\n********************************************************************\n");
            for (int i = 0; i < nFields; i++)
                printf("%-30s", PQfname(res, i));
            printf("\n********************************************************************\n");

            // Next, print out the record for each row
            for (int i = 0; i < PQntuples(res); i++)
            {
                for (int j = 0; j < nFields; j++)
                    printf("%-30s", PQgetvalue(res, i, j));
                printf("\n");
            }

            PQclear(res);

            // Close the emprec cursor
            res = PQexec(conn, "CLOSE emprec");
            PQclear(res);

            // End the transaction
            res = PQexec(conn, "END");
            // Clear result
            PQclear(res);
        }
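    A hedged diagnosis: inside the quoted literal '$1::varchar%' the $1 is just text, so libpq never binds a parameter there, and the one placeholder Postgres does parse (inside FIND_IN_SET) has no type context, hence "could not determine data type of parameter $1". Moving the parameter outside the quotes and concatenating the wildcard with || usually resolves it. Passing offset as binary with sizeof(offset) (the pointer size, not the string length) also looks suspect; plain text format for all three parameters is simpler. A sketch of the corrected pattern, shown for the first leg of the union:

        /* Sketch: bind $1 outside any string literal and append the
           wildcard with ||; the explicit cast gives $1 a known type. */
        "SELECT name, id, 'Events' AS source FROM events_basic "
        "WHERE name LIKE $1::varchar || '%' "
        "LIMIT $2::int4 OFFSET $3::int4"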


  • SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate “SetDataQualitySessions” – SetDataQualitySessionPhaseTwo

    - by pinaldave
    Earlier I wrote a blog post about how to install DQS in SQL Server 2012. Today I decided to write the second part of this series, where I explain how to use DQS. However, as soon as I started the DQS client, I encountered an error that would not let me pass through and connect with the DQS client. It was a bit strange to me, as everything was functioning very well when I left it last time. The error message was very long, but here are the first few words of it:

        Cannot connect to server.
        A .NET Framework error occurred during execution of user-defined
        routine or aggregate "SetDataQualitySessions":
        System.Data.SqlClient.SqlException (0x80131904): A .NET Framework
        error occurred during execution of user-defined routine or
        aggregate "SetDataQualitySessionPhaseTwo":

    The error continues; here is a quick screenshot of the error. As my initial attempts could not fix the error, I decided to search online, and I finally found a wonderful solution on the Microsoft site. The error happened due to the latest update I had installed for .NET Framework 4. There was a mismatch between the Module Version IDs (MVIDs) of the SQL Common Language Runtime (SQLCLR) assemblies in the SQL Server 2012 database and the Global Assembly Cache (GAC). This mismatch had to be resolved for DQS to work properly. The workaround is specified here in detail; scroll to subtopic 4.23, "Some .NET Framework 4 Updates Might Cause DQS to Fail". The script is very much straightforward. Here are a few things not to miss while applying the workaround:

    - Make sure the DQS client is properly closed.
    - The NETAssemblies path is based on your OS. NETAssemblies for a 64-bit machine (which is my machine) is "c:\windows\Microsoft.NET\Framework64\v4.0.30319". If you have Windows installed on any drive other than c:\windows, do not forget to change that in the above path. Additionally, if you have the 32-bit version installed on c:\windows, you should use the path "c:\windows\Microsoft.NET\Framework\v4.0.30319".
    - Make sure that you execute the script specified in section 4.23 of that article in the database DQS_MAIN. Do not run it in the master database, as that will not fix your error.
    - Do not forget to restart your SQL services once the script has been executed.

    Once you open the client, it will work this time. Here is the script, which I have modified a bit from the original. I strongly suggest that you use the original script mentioned in section 4.23; this one is customized to my own machine.
        /*
        Original source: http://bit.ly/PXX4NE (Technet)
        Modifications:
        -- Added Database context
        -- Added environment variable @NETAssemblies
        -- Main script modified to use @NETAssemblies
        */
        USE DQS_MAIN
        GO
        BEGIN
            -- Set your environment variable
            -- assumption - Windows is installed in c:\windows folder
            DECLARE @NETAssemblies NVARCHAR(200)
            -- For 64 bit uncomment following line
            SET @NETAssemblies = 'c:\windows\Microsoft.NET\Framework64\v4.0.30319\'
            -- For 32 bit uncomment following line
            -- SET @NETAssemblies = 'c:\windows\Microsoft.NET\Framework\v4.0.30319\'

            DECLARE @AssemblyName NVARCHAR(200),
                    @RefreshCmd NVARCHAR(200),
                    @ErrMsg NVARCHAR(200)

            DECLARE ASSEMBLY_CURSOR CURSOR FOR
                SELECT name AS NAME
                FROM sys.assemblies
                WHERE name NOT LIKE '%ssdqs%'
                  AND name NOT LIKE '%microsoft.sqlserver.types%'
                  AND name NOT LIKE '%practices%'
                  AND name NOT LIKE '%office%'
                  AND name NOT LIKE '%stdole%'
                  AND name NOT LIKE '%Microsoft.Vbe.Interop%'

            OPEN ASSEMBLY_CURSOR
            FETCH NEXT FROM ASSEMBLY_CURSOR INTO @AssemblyName
            WHILE @@FETCH_STATUS = 0
            BEGIN
                BEGIN TRY
                    SET @RefreshCmd = 'ALTER ASSEMBLY [' + @AssemblyName + '] FROM ''' + @NETAssemblies + @AssemblyName + '.dll' + ''' WITH PERMISSION_SET = UNSAFE'
                    EXEC sp_executesql @RefreshCmd
                    PRINT 'Successfully upgraded assembly ''' + @AssemblyName + ''''
                END TRY
                BEGIN CATCH
                    IF ERROR_NUMBER() != 6285
                    BEGIN
                        SET @ErrMsg = ERROR_MESSAGE()
                        PRINT 'Failed refreshing assembly ' + @AssemblyName + '. Error message: ' + @ErrMsg
                    END
                END CATCH
                FETCH NEXT FROM ASSEMBLY_CURSOR INTO @AssemblyName
            END
            CLOSE ASSEMBLY_CURSOR
            DEALLOCATE ASSEMBLY_CURSOR
        END
        GO

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Dynamically creating meta tags in asp.net mvc

    - by Jalpesh P. Vadgama
    As we all know, meta tags play a very important role in search engine optimization, and if we want our site listed with a good ranking on search engines then we have to provide meta tags. Some time ago I blogged about dynamically creating meta tags in ASP.NET 2.0/3.5 sites; in this blog post I am going to explain how we can create meta tags dynamically in ASP.NET MVC very easily. To have dynamic meta tags, we have to create the meta tag on the server side. So I have created a method like the following:

        public string HomeMetaTags()
        {
            System.Text.StringBuilder strMetaTag = new System.Text.StringBuilder();
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Keywords'/>", "Home Action Keyword");
            strMetaTag.AppendFormat(@"<meta content='{0}' name='Description'/>", "Home Description Keyword");
            return strMetaTag.ToString();
        }

    Here you can see that I have written a method which returns a string with the meta tags. Here you can write any logic: you can fetch it from the database, or even fetch it from XML based on a key passed in. For demo purposes I have hard-coded it. So it will create a meta tag string and return it. Now I am going to store that meta tag in the ViewBag, just like the title. In this post I am going to use the standard template, so we already have our title there in ViewBag.Message. The same way, I am going to save the meta tag in the ViewBag like following:

        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            ViewBag.MetaTag = HomeMetaTags();
            return View();
        }

    Here in the above code you can see that I have stored the meta tags in ViewBag.MetaTag. Now, as I am using the standard ASP.NET MVC 3 template, we have our head element in the Shared folder's _layout.cshtml file. So to render the meta tags, I have modified the head tag section of _layout.cshtml like following:

        <head>
            <title>@ViewBag.Title</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.5.1.min.js")" type="text/javascript"></script>
            @Html.Raw(ViewBag.MetaTag)
        </head>

    Here in the above code you can see I have used the @Html.Raw method to embed the meta tags in the _layout.cshtml page. Html.Raw writes the output to the head tag section without encoding the HTML. As we have already taken care of the HTML in the string-building method, we don't need the encoding. Now it's time to run the application in the browser. Once you run your application in the browser and click on view source, you will find the meta tags for the home page. That's it; it's very easy to create meta tags dynamically. Hope you liked it. Stay tuned for more. Till then, happy programming.


  • BIP BIServer Query Debug

    - by Tim Dexter
    With some help from Bryan, I have uncovered a way of being able to debug, or at least log, what BIServer is doing when BIP sends it a query request. This is not for those of you querying the database directly, but for those using the BIServer and its data model to fetch data for a BIP report. If you have written or used the query builder against BIServer, and when you run the report it chokes with a cryptic message that you have no clue about, read on. When BIP runs a piece of BIServer logical SQL to fetch data, it does not appear to validate it; it just passes it through. So what is BIServer doing on its end? As you may know, you are not writing regular physical SQL, it's actually logical SQL, e.g.:

        select Jobs."Job Title" as "Job Title",
               Employees."Last Name" as "Last Name",
               Employees.Salary as Salary,
               Locations."Department Name" as "Department Name",
               Locations."Country Name" as "Country Name",
               Locations."Region Name" as "Region Name"
        from HR.Locations Locations, HR.Employees Employees, HR.Jobs Jobs

    The tables might not even be physical tables; we don't care, that's what the BIServer and its model are for. You have put all the effort into building the model, just go get me the data from wherever it might be. The BIServer takes the logical SQL and uses its vast brain to work out what the physical SQL is, executes it, and passes the result back to BIP:

        select distinct T32556.JOB_TITLE as c1,
               T32543.LAST_NAME as c2,
               T32543.SALARY as c3,
               T32537.DEPARTMENT_NAME as c4,
               T32532.COUNTRY_NAME as c5,
               T32577.REGION_NAME as c6
        from JOBS T32556, REGIONS T32577, COUNTRIES T32532,
             LOCATIONS T32569, DEPARTMENTS T32537, EMPLOYEES T32543
        where ( T32532.COUNTRY_ID = T32569.COUNTRY_ID
            and T32532.REGION_ID = T32577.REGION_ID
            and T32537.DEPARTMENT_ID = T32543.DEPARTMENT_ID
            and T32537.LOCATION_ID = T32569.LOCATION_ID
            and T32543.JOB_ID = T32556.JOB_ID )

    Not a very tough example, I know, but you get the idea. How do I know what the BIServer is up to? How can I find out what the issue might be if BIServer chokes on my query? There are a couple of steps. In the Administrator tool, you need to set the logging level for the Administrator user to something greater than the default '0'; '7' is going to give you the max. Just remember to take it back down after you have finished the debug. I needed to bounce my BIServer service. Now here's the secret sauce: prefix the following to your BIP query to set the log level to the one you have in the admin tool:

        set variable LOGLEVEL = 7;

    Now run your BIP report. With the prefix in place, BIServer will write to the NQQuery.log file, located in the ./OracleBI/server/Log directory. In there you are going to find the complete process the BIServer has gone through to try and get the data back for you. A quick note: if it can, the BIServer is going to hit that great BIEE cache to get your data, and you may not see the full log. If this is the case, get into the Administration page (via the browser login), clear out your BIP report cursor, and re-run. This will hopefully help out if you are trying to debug that annoying BIP report that will not run or is returning some strange data. Don't forget to turn that logging level back down once you are done; this will avoid the DBA screaming at you for sucking up all the disk space on the system.


  • Indexing data from multiple tables with Oracle Text

    - by Roger Ford
    It's well known that Oracle Text indexes perform best when all the data to be indexed is combined into a single index. The query

        select * from mytable where contains (title, 'dog') > 0 or contains (body, 'cat') > 0

    will tend to perform much worse than

        select * from mytable where contains (text, 'dog WITHIN title OR cat WITHIN body') > 0

    For this reason, Oracle Text provides the MULTI_COLUMN_DATASTORE, which will combine data from multiple columns into a single index. Effectively, it constructs a "virtual document" at indexing time, which might look something like:

        <title>the big dog</title>
        <body>the ginger cat smiles</body>

    This virtual document can be indexed using either AUTO_SECTION_GROUP, or by explicitly defining sections for title and body, allowing the query as expressed above. Note that we've used a column called "text"; this might be a dummy column added to the table simply to allow us to create an index on it, or we could create the index on either of the "real" columns, title or body. It should be noted that MULTI_COLUMN_DATASTORE doesn't automatically handle updates to the columns it uses: if you create the index on the column text, but specify that columns title and body are to be indexed, you will need to arrange triggers such that the text column is updated whenever title or body is altered. That works fine for single tables. But what if we actually want to combine data from multiple tables? In that case there are two approaches which work well:

    1. Create a real table which contains a summary of the information, and create the index on that using the MULTI_COLUMN_DATASTORE. This is simple and effective, but it does use a lot of disk space, as the information to be indexed has to be duplicated.

    2. Create our own "virtual" documents using the USER_DATASTORE. The user datastore allows us to specify a PL/SQL procedure which will be used to fetch the data to be indexed, returned in a CLOB, or occasionally in a BLOB or VARCHAR2. This PL/SQL procedure is called once for each row in the table to be indexed, and is passed the ROWID value of the current row being indexed. The actual contents of the procedure are entirely up to the owner, but it is normal to fetch data from one or more columns of database tables.

    In both cases, we still need to take care of updates: making sure that we have all the triggers necessary to update the indexed column (and, in case 1, the summary table) whenever any of the data to be indexed gets changed. I've written full examples of both these techniques, as SQL scripts to be run in the SQL*Plus tool. You will need to run them as a user who has the CTXAPP role and the CREATE DIRECTORY privilege. Part of the data to be indexed is a Microsoft Word file called "1.doc". You should create this file in Word, preferably containing the single line of text "test document". This file can be saved anywhere, but the SQL scripts need to be changed so that the "create or replace directory" command refers to the right location. In the example, I've used C:\doc.

    multi_table_indexing_1.sql: creates a summary table containing all the data, and uses multi_column_datastore
    multi_table_indexing_2.sql: creates "virtual" documents using a procedure as a user_datastore
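    For a flavor of approach 2, a minimal sketch of what such a USER_DATASTORE procedure tends to look like (table and column names here are invented for illustration; the complete, tested versions are in the scripts above):

        -- Hypothetical illustration: build the virtual document for the row
        -- identified by rid into the IN OUT CLOB that Oracle Text supplies.
        create or replace procedure my_datastore_proc
          (rid in rowid, tlob in out nocopy clob) is
        begin
          for r in (select title, body from mytable where rowid = rid) loop
            dbms_lob.append(tlob, '<title>' || r.title || '</title>');
            dbms_lob.append(tlob, '<body>'  || r.body  || '</body>');
          end loop;
        end;
        /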


  • Script to UPDATE STATISTICS with time window

    - by Bill Graziano
    I recently spent some time troubleshooting odd query plans and came to the conclusion that we needed better statistics. We've been running sp_updatestats, but apparently it wasn't sampling enough of the table to get us what we needed. I have a pretty limited window at night where I can hammer the disks while this runs. The script below just calls UPDATE STATISTICS on all tables that "need" updating. It defines need as any table whose statistics are older than the number of days you specify (30 by default). It also has a throttle, so it breaks out of the loop after a set amount of time (60 minutes). That means it won't start processing a new table after this time, but it might take longer than this to finish what it's doing. It always processes the oldest statistics first, so it will eventually get to all of them. It defaults to sampling 25% of the table. I'm not sure that's a good default, but it works for now. I've tested this in SQL Server 2005 and SQL Server 2008. I liked the way Michelle parameterized her re-index script, and I took the same approach:

        CREATE PROCEDURE dbo.UpdateStatistics (
            @timeLimit smallint = 60
            ,@debug bit = 0
            ,@executeSQL bit = 1
            ,@samplePercent tinyint = 25
            ,@printSQL bit = 1
            ,@minDays tinyint = 30
        )
        AS
        /*******************************************************************
        Copyright Bill Graziano 2010
        *******************************************************************/
        SET NOCOUNT ON;
        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Launching...'

        IF OBJECT_ID('tempdb..#status') IS NOT NULL
            DROP TABLE #status;

        CREATE TABLE #status (
            databaseID INT
            , databaseName NVARCHAR(128)
            , objectID INT
            , page_count INT
            , schemaName NVARCHAR(128) NULL
            , objectName NVARCHAR(128) NULL
            , lastUpdateDate DATETIME
            , scanDate DATETIME
            CONSTRAINT PK_status_tmp PRIMARY KEY CLUSTERED (databaseID, objectID)
        );

        DECLARE @SQL NVARCHAR(MAX);
        DECLARE @dbName nvarchar(128);
        DECLARE @databaseID INT;
        DECLARE @objectID INT;
        DECLARE @schemaName NVARCHAR(128);
        DECLARE @objectName NVARCHAR(128);
        DECLARE @lastUpdateDate DATETIME;
        DECLARE @startTime DATETIME;
        SELECT @startTime = GETDATE();

        DECLARE cDB CURSOR READ_ONLY FOR
            select [name]
            from master.sys.databases
            where database_id > 4

        OPEN cDB
        FETCH NEXT FROM cDB INTO @dbName
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                SELECT @SQL = '
                    use ' + QUOTENAME(@dbName) + '
                    select DB_ID() as databaseID
                        , DB_NAME() as databaseName
                        , t.object_id
                        , sum(used_page_count) as page_count
                        , s.[name] as schemaName
                        , t.[name] AS objectName
                        , COALESCE(d.stats_date, ''1900-01-01'')
                        , GETDATE() as scanDate
                    from sys.dm_db_partition_stats ps
                    join sys.tables t on t.object_id = ps.object_id
                    join sys.schemas s on s.schema_id = t.schema_id
                    join (
                        SELECT object_id, MIN(stats_date) as stats_date
                        FROM (
                            select object_id, stats_date(object_id, stats_id) as stats_date
                            from sys.stats) as d
                        GROUP BY object_id
                    ) as d ON d.object_id = t.object_id
                    where ps.row_count > 0
                    group by s.[name], t.[name], t.object_id, COALESCE(d.stats_date, ''1900-01-01'') '

                SET ANSI_WARNINGS OFF;
                Insert #status
                    EXEC ( @SQL);
                SET ANSI_WARNINGS ON;
            END
            FETCH NEXT FROM cDB INTO @dbName
        END
        CLOSE cDB
        DEALLOCATE cDB

        DECLARE cStats CURSOR READ_ONLY FOR
            SELECT databaseID
                , databaseName
                , objectID
                , schemaName
                , objectName
                , lastUpdateDate
            FROM #status
            WHERE DATEDIFF(dd, lastUpdateDate, GETDATE()) >= @minDays
            ORDER BY lastUpdateDate ASC, page_count desc, [objectName] ASC

        OPEN cStats
        FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        WHILE (@@fetch_status <> -1)
        BEGIN
            IF (@@fetch_status <> -2)
            BEGIN
                IF DATEDIFF(mi, @startTime, GETDATE()) > @timeLimit
                BEGIN
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + '*** Time Limit Reached ***';
                    GOTO __DONE;
                END

                SELECT @SQL = 'UPDATE STATISTICS ' + QUOTENAME(@dBName) + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@ObjectName) + ' WITH SAMPLE ' + CAST(@samplePercent AS NVARCHAR(100)) + ' PERCENT;';

                IF @printSQL = 1
                    PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + @SQL + ' (Last Updated: ' + CAST(@lastUpdateDate AS VARCHAR(100)) + ')'

                IF @executeSQL = 1
                BEGIN
                    EXEC (@SQL);
                END
            END
            FETCH NEXT FROM cStats INTO @databaseID, @dbName, @objectID, @schemaName, @objectName, @lastUpdateDate
        END

        __DONE:
        CLOSE cStats
        DEALLOCATE cStats

        PRINT '[ ' + CAST(GETDATE() AS VARCHAR(100)) + ' ] ' + 'Completed.'
        GO


  • ConcurrentDictionary<TKey,TValue> used with Lazy<T>

    - by Reed
    In a recent thread on the MSDN forum for the TPL, Stephen Toub suggested mixing ConcurrentDictionary<T,U> with Lazy<T>. This provides a fantastic model for creating a thread-safe dictionary of values where the construction of the value type is expensive. This is an incredibly useful pattern for many operations, such as value caches. The ConcurrentDictionary<TKey, TValue> class was added in .NET 4, and provides a thread-safe, lock-free collection of key-value pairs. While this is a fantastic replacement for Dictionary<TKey, TValue>, it has a potential flaw when used with values where construction of the value class is expensive. The typical way this is used is to call a method such as GetOrAdd to fetch or add a value to the dictionary. It handles all of the thread safety for you, but as a result, if two threads call this simultaneously, two instances of TValue can easily be constructed. If TValue is very expensive to construct, or worse, has side effects if constructed too often, this is less than desirable. While you can easily work around this with locking, Stephen Toub provided a very clever alternative: using Lazy<TValue> as the value in the dictionary instead. This looks like the following. Instead of calling:

        MyValue value = dictionary.GetOrAdd(
            key,
            k => new MyValue(k));

    We would instead use a ConcurrentDictionary<TKey, Lazy<TValue>>, and write:

        MyValue value = dictionary.GetOrAdd(
            key,
            k => new Lazy<MyValue>(
                () => new MyValue(k)))
            .Value;

    This simple change dramatically changes how the operation works. Now, if two threads call this simultaneously, instead of constructing two MyValue instances, we construct two Lazy<MyValue> instances. However, the Lazy<T> class is very cheap to construct. Unlike "MyValue", we can safely afford to construct this twice and "throw away" one of the instances. We then call Lazy<T>.Value at the end to fetch our "MyValue" instance. At this point, GetOrAdd will always return the same instance of Lazy<MyValue>. Since Lazy<T> doesn't construct the MyValue instance until requested, the actual MyValue instance returned is only constructed once.


  • How can I best implement 'cache until further notice' with memcache in multiple tiers?

    - by ajreal
    the term "client" used here is not referring to client's browser, but client server Before cache workflow 1. client make a HTTP request --> 2. server process --> 3. store parsed results into memcache for next use (cache indefinitely) --> 4. return results to client --> 5. client get the result, store into client's local memcache with TTL After cache workflow 1. another client make a HTTP request --> 2. memcache found return memcache results to client --> 3. client get the result, store into client's local memcache with TTL TTL = time to live Is possible for me to know when the data was updated, and to expire relevant memcache(s) accordingly. However, the pitfalls on client site cache TTL Any data update before the TTL is not pick-up by client memcache. In reverse manner, where there is no update, client memcache still expire after the TTL First request (or concurrent requests) after cache TTL will get throttle as it need to repeat the "Before cache workflow" In the event where client require several HTTP requests on a single web page, it could be very bad in performance. Ideal solution should be client to cache indefinitely until further notice. Here are the three proposals about futher notice Proposal 1 : Make use on HTTP header (current implementation) 1. client sent HTTP request last modified time header 2. server check if last data modified time=last cache time return status 304 3. client based on header to decide further processing GOOD? ---- - save some parsing for client - lesser data transfer BAD? ---- - fire a HTTP request is still slow - server end still need to process lots of requests Proposal 2 : Consistently issue a HTTP request to check all data group last modified time 1. client fire a HTTP request 2. server to return last modified time for all data group 3. client compare local last cache time with the result 4. if data group last cache time < server last modified time then request again for that data group only GOOD? ---- - only fetch what is no up-to-date - less requests for server BAD? ---- - every web page require a HTTP request Proposal 3 : Tell client when new data is available (Push) 1. when server end notice there is a change on a data group 2. notify clients on the changes 3. help clients to fetch again data 4. then reset client local memcache after data is parsed GOOD? ---- - let the cache act/behave like a true cache BAD? ---- - encourage race condition My preference is on proposal 3, and something like Gearman could be ideal Where there is a change, Gearman server to sent the task to multiple clients (workers). Am I crazy? (I know my first question is a bit crazy)

    Read the article

  • Options for different domain and hosting

    - by Carl
    The situation: I have a hosting service (one.com) on which I have installed a wordpress.org site in a subdirectory 'wordpress': myhost.com/wordpress/ (myhost.com is actually my own domain, but it already has contents and I don't want wordpress/ to appear in the root of that domain). I want to use a second domain for this site. Thinking I would be able to forward to the wordpress site without problems, I registered the domain at GoDaddy.com: mydomain.com

    What I want: when my visitors type in mydomain.com, I want them to see the contents of myhost.com/wordpress/, and the same for all subpages (mydomain.com/a/subpage fetches from myhost.com/wordpress/a/subpage). Just a redirect isn't enough; I want my visitors to see only mydomain.com as their domain.

    Some notes:
    - If I set up forwarding with URL masking at GoDaddy, they just serve a full frame pointing to myhost.com/wordpress/. This isn't good enough for me, since mydomain.com will always show up in the address bar, also for subpages (I want mydomain.com/a/subpage to show in the address bar for a subpage).
    - I believe this could in principle be done with a .htaccess file with URL rewriting, but I have no hosting with GoDaddy, so I can't upload such a file there.
    - Hosting with GoDaddy is very expensive (of course), so I don't want to do that.
    - I don't think I can use DNS settings; the host of mydomain.com says they don't allow anyone else to point to their name servers.
    - If possible, I wouldn't want to re-install the wordpress site; it would take quite some time. I'd prefer to keep it at myhost.com/wordpress/ (if possible).
    - Anything involving transferring the domain is supposed to take 5-7 working days. I would need my site up and running earlier than that, so I'd like to avoid it if possible.

    Am I locked in? As it seems, I am rather locked in with GoDaddy. I can't use the domain with .htaccess since I can't upload such a file (and won't pay for hosting by GoDaddy). I can't use any of their forwarding options since none of them do what I want (one just forwards; the one that masks the URL does it with frames). Would you agree?

    Possible solutions:
    1. Transfer the domain to any hosting service with reasonable pricing, as opposed to GoDaddy (I'd probably use one.com, the same host as for myhost.com, in that case), and there either re-install wordpress on the new account, or use .htaccess with URL rewriting on the new account to fetch the contents from myhost.com/wordpress/ (a sketch of such a rewrite follows below). Can this be set up to work with subpages as well? And visitors won't ever see "myhost.com/wordpress", just "mydomain.com"? I.e., mydomain.com/a/subpage/ would fetch from myhost.com/wordpress/a/subpage/?
    2. This might be a long shot, but: find some free (preferably) hosting that allows pointing to their nameservers, make DNS settings at GoDaddy so that my domain appears at that site, and put a .htaccess file with URL rewriting there to forward to myhost.com/wordpress/. Could this be possible? What services could I use in that case? As I see it, this would be the only way to avoid both transferring a domain (taking 5-7 working days) and re-installing the wordpress site.

    Sorry for the long question. All info and ideas are welcome.
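    For what it's worth, a minimal .htaccess sketch of that rewrite, assuming the new host runs Apache with mod_rewrite and mod_proxy enabled (without mod_proxy, the P flag will fail), might look like this:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?mydomain\.com$ [NC]
        # Proxy the request so visitors keep seeing mydomain.com in the address bar
        RewriteRule ^(.*)$ http://myhost.com/wordpress/$1 [P,L]

    Because the P flag proxies rather than redirects, subpages map through automatically: a request for mydomain.com/a/subpage/ is served from myhost.com/wordpress/a/subpage/.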

    Read the article

  • git pull gives error: 401 Authorization Required while accessing https://git.foo.com/bar.git

    - by spuder
    My MacBook Pro is able to clone/push/pull from the company git server. My CentOS 6.3 VM gets a 401 error:

    git clone https://git.acme.com/git/torque-setup
    error: The requested URL returned error: 401 Authorization Required while accessing https://git.acme.com/git/torque-setup/info/refs

    As a workaround, I've tried creating a folder with an empty repository, then setting the remote to the company server. I get the same error when trying a git pull. The remotes are identical between the machines.

    MacBook Pro (working):
    git --version
    git version 1.7.10.2 (Apple Git-33)
    git remote -v
    origin https://git.acme.com/git/torque-setup (fetch)
    origin https://git.acme.com/git/torque-setup (push)

    CentOS 6.3 (not working):
    yum install -y git
    git --version
    git version 1.7.1
    git remote -v
    origin https://git.acme.com/git/torque-setup (fetch)
    origin https://git.acme.com/git/torque-setup (push)

    The git server only allows https, not git or ssh connections. Why is the MacBook Pro able to do a git pull, while the CentOS machine can't?

    Solution (update 2013-5-15): As jku mentioned, the culprit is the old version of git installed on the CentOS box. Unfortunately, 1.7.1 is what you get when you run yum install git. The workaround is to manually install a newer version of git, or simply add the username to the repo URL:

    git clone https://[email protected]/git/torque-setup

    Read the article

  • I/O error(socket error): [Errno 111] Connection refused

    - by Schitti
    I have a program that uses urllib to periodically fetch a URL, and I see intermittent errors like: I/O error(socket error): [Errno 111] Connection refused. It works 90% of the time, but the other 10% it fails. If I retry the fetch immediately after it fails, it succeeds. I'm unable to figure out why this is so. I tried to see if any ports are available, and they are. Any debugging ideas? For additional info, the stack trace is:

    File "/usr/lib/python2.6/urllib.py", line 235, in retrieve
      fp = self.open(url, data)
    File "/usr/lib/python2.6/urllib.py", line 203, in open
      return getattr(self, name)(url)
    File "/usr/lib/python2.6/urllib.py", line 342, in open_http
      h.endheaders()
    File "/usr/lib/python2.6/httplib.py", line 868, in endheaders
      self._send_output()
    File "/usr/lib/python2.6/httplib.py", line 740, in _send_output
      self.send(msg)
    File "/usr/lib/python2.6/httplib.py", line 699, in send
      self.connect()
    File "/usr/lib/python2.6/httplib.py", line 683, in connect
      self.timeout)
    File "/usr/lib/python2.6/socket.py", line 512, in create_connection
      raise error, msg

    Edit - A Google search isn't very helpful. What I got out of it is that the server I'm fetching from sometimes refuses connections. How can I verify it's not a bug in my code, and that this is indeed the case?
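    Until the root cause is found, a simple retry with a short delay is a common mitigation, given that an immediate retry reportedly succeeds. A minimal Python 2.6 sketch (the URL and retry parameters are placeholders):

        import time
        import urllib

        def fetch_with_retry(url, retries=3, delay=1.0):
            # urllib surfaces "Connection refused" as an IOError, so retry on it.
            for attempt in range(retries):
                try:
                    return urllib.urlopen(url).read()
                except IOError:
                    if attempt == retries - 1:
                        raise  # give up after the last attempt
                    time.sleep(delay)

        html = fetch_with_retry("http://example.com/")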

    Read the article

  • Encoding issue: Cocoa Error 261?

    - by Attacus
    So I'm fetching a JSON string from a PHP script in my iPhone app using:

    NSURL *baseURL = [NSURL URLWithString:@"test.php"];
    NSError *encodeError = [[NSError alloc] init];
    NSString *jsonString = [NSString stringWithContentsOfURL:baseURL
                                                    encoding:NSUTF8StringEncoding
                                                       error:&encodeError];
    NSLog(@"Error: %@", [encodeError localizedDescription]);
    NSLog(@"STRING: %@", jsonString);

    The JSON string validates when I test the output. Now I'm having an encoding issue. When I fetch a single echoed line such as:

    { "testKey":"é" }

    the JSON parser works fine and I am able to create a valid JSON object. (I'm aware I could/should be using NSURLConnection for asynchronous fetching of data, but at this point in the app's development, I don't really need it.)

    However, when I fetch my 2MB JSON string, I get presented with: Error: Operation could not be completed. (Cocoa error 261.) and a nil string. My PHP file is UTF-8 itself and I am not using utf8_encode(), because that seems to double-encode the data since I'm already pulling the data as NSUTF8StringEncoding. Either way, in my single-echo test, it's the approach that allowed me to successfully log \ASDAS style UTF8 escapes when building the JSON object. What could be causing the error in the case of the larger string? Also, I'm not sure if it makes a difference, but I'm using the PHP function addslashes() on my parsed PHP data to account for quotes and such when building the JSON string.
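    Cocoa error 261 is NSFileReadInapplicableStringEncodingError: the downloaded bytes are not valid for the encoding requested. One way to confirm this (a diagnostic sketch, not the original code; the URL is a placeholder) is to download the raw bytes and attempt the decode yourself:

        NSURL *baseURL = [NSURL URLWithString:@"http://example.com/test.php"];
        NSData *rawData = [NSData dataWithContentsOfURL:baseURL];

        // A strict UTF-8 decode returns nil on any invalid byte sequence.
        NSString *jsonString = [[NSString alloc] initWithData:rawData
                                                     encoding:NSUTF8StringEncoding];
        if (jsonString == nil) {
            // Latin-1 accepts every byte, so this at least lets you inspect
            // where the non-UTF-8 data is hiding in the 2MB payload.
            jsonString = [[NSString alloc] initWithData:rawData
                                               encoding:NSISOLatin1StringEncoding];
        }

    If the Latin-1 decode succeeds where the UTF-8 one fails, some part of the payload (often data pulled from a database in a different encoding) is not actually UTF-8.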

    Read the article

< Previous Page | 29 30 31 32 33 34 35 36 37 38 39 40  | Next Page >