Search Results

Search found 17686 results on 708 pages for 'high level'.


  • Run SSIS package from SQL client

    - by Pramodtech
    I deployed my working package to the server, which is Enterprise edition with SSIS installed. When I try to run the package by connecting to the Integration Services engine from my desktop SQL client (which doesn't have SSIS installed), I get the error "The task "Send Mail Task" cannot run on this edition of Integration Services. It requires a higher level edition." Does that mean I need to log in to the server (RDP) and then run the package? Also, when I schedule the package through SQL Agent it fails with a login timeout, even though my Windows auth login works for everything else, from connecting to deployment. Any clue?


  • What is the fastest way to unzip textfiles in Matlab during a function?

    - by Paul
    Hello all, I would like to scan text files in Matlab with the textscan function. Before I can open a file with fid = fopen('C:\path'), I need to unzip it first; the files have the extension *.gz. There are thousands of files which I need to analyze and high performance is important. I have two ideas: (1) use an external program and call it from the command line in Matlab, or (2) use a Matlab 'zip' toolbox. I have heard of gunzip, but don't know about its performance. Does anyone know a way to unzip these files as quickly as possible from within Matlab? Thanks!
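
    A minimal MATLAB sketch of the built-in route, assuming the archives sit in a hypothetical C:\data folder and each .gz file holds one text file; gunzip ships with MATLAB and accepts wildcards:

        % Decompress every .gz archive in the folder; srcDir is an assumed location.
        srcDir = 'C:\data';
        files  = gunzip(fullfile(srcDir, '*.gz'), tempdir);   % cell array of extracted file names

        for k = 1:numel(files)
            fid = fopen(files{k});
            C = textscan(fid, '%s', 'Delimiter', '\n');       % one cell per line
            fclose(fid);
            % ... analyze C{1} here ...
        end

    The other option mentioned above would be calling an external gzip through system(), which may be worth benchmarking against this.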


  • LINQ error when deployed - Security Exception - cannot create DataContext

    - by aximili
    The code below works locally, but when I deploy it to the server it gives the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Security.Permissions.FileIOPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    The code is:

        protected void Page_Load(object sender, EventArgs e)
        {
            DataContext context = new DataContext(Global.ConnectionString); // <-- throws the exception
            //Table<Group> _kindergartensTable = context.GetTable<Group>();
            Response.Write("ok");
        }

    I have set full write permissions on all files and folders on the server. Any suggestions on how to solve this problem? Thanks in advance.
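
    The trust level that the error message refers to is set in web.config; a minimal sketch, assuming the hosting environment actually permits full trust (if it does not, the permission has to be granted by the server administrator):

        <configuration>
          <system.web>
            <!-- Medium/partial trust is what makes FileIOPermission requests fail;
                 this override only works where the host allows changing the trust level. -->
            <trust level="Full" />
          </system.web>
        </configuration>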


  • Serial Mac OS X constantly freezes/locks/disappears for USB to Arduino

    - by Niraj D
    I have a problem with my C++ code running in Xcode with both the AMSerial library and the generic C calls (ioctl, termios). After a fresh restart my application works well, but after I "kill" the program the serial port (I think) is not released. I have checked my open files under /dev and have killed the connection to the serial USB device from there, but my C++ code still can't open the USB port. I have narrowed this down to a low-level Mac OS X issue where the port stays blocked indefinitely, regardless of closing it with the aforementioned libraries. Just for context, I'm trying to send numbers through my USB port, serially, to an Arduino Duemilanove at 9600 baud. Running the Serial Monitor in Arduino is perfectly fine; however, running through a C++ application it freezes up my computer, and occasionally my mouse and keyboard freeze up, requiring a hard reset. How can this problem be fixed? It seems like Mac OS X is not USB friendly!
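
    For reference, a minimal POSIX sketch of the open/configure/write/close cycle being described (the device name and the value sent are assumptions); the point of the sketch is that close() has to run on every exit path, otherwise the port can stay claimed after the process dies:

        #include <fcntl.h>
        #include <termios.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            // Assumed name; Arduinos usually show up as /dev/cu.usbserial-* or /dev/cu.usbmodem*.
            const char *dev = "/dev/cu.usbserial-XXXX";
            int fd = open(dev, O_RDWR | O_NOCTTY | O_NONBLOCK);  // never block inside open()
            if (fd < 0) { perror("open"); return 1; }
            fcntl(fd, F_SETFL, 0);                               // back to blocking I/O once opened

            termios tio;
            tcgetattr(fd, &tio);
            cfmakeraw(&tio);                  // raw bytes, no echo or line editing
            cfsetispeed(&tio, B9600);         // match the Arduino sketch's 9600 baud
            cfsetospeed(&tio, B9600);
            tio.c_cflag |= (CLOCAL | CREAD);  // ignore modem control lines
            tcsetattr(fd, TCSANOW, &tio);

            const char msg[] = "42\n";
            write(fd, msg, sizeof(msg) - 1);

            tcdrain(fd);                      // let pending output finish
            close(fd);                        // release the port on every exit path
            return 0;
        }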


  • SElinux process killed while trying to set boolean

    - by Antonio
    I've got a strange problem. I cannot allow Apache to connect to the database on my CentOS 6.4 box:

        [root@centos6 ~]# setsebool -P httpd_can_network_connect on
        Killed
        [root@centos6 ~]# sestatus -b | grep httpd_can_network_connect
        httpd_can_network_connect off
        httpd_can_network_connect_cobbler off
        httpd_can_network_connect_db off

    I watched the log file, but there were no log messages:

        tail -f /var/log/audit/audit.log

    UPDATE: There is some information in /var/log/messages:

        Nov 9 19:07:16 vs302 kernel: setsebool invoked oom-killer: gfp_mask=0x280da, order=0, oom_adj=0, oom_score_adj=0
        Nov 9 19:07:16 vs302 kernel: setsebool cpuset=/ mems_allowed=0
        Nov 9 19:07:16 vs302 kernel: Pid: 1660, comm: setsebool Not tainted 2.6.32-358.23.2.el6.x86_64 #1
        Nov 9 19:07:16 vs302 kernel: Call Trace:
        Nov 9 19:07:16 vs302 kernel: [<ffffffff810cb641>] ? cpuset_print_task_mems_allowed+0x91/0xb0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8111ce40>] ? dump_header+0x90/0x1b0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8111d2c2>] ? oom_kill_process+0x82/0x2a0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8111d201>] ? select_bad_process+0xe1/0x120
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8111d700>] ? out_of_memory+0x220/0x3c0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8112c3dc>] ? __alloc_pages_nodemask+0x8ac/0x8d0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81160d6a>] ? alloc_pages_vma+0x9a/0x150
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81143f0b>] ? handle_pte_fault+0x76b/0xb50
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81228664>] ? task_has_capability+0xb4/0x110
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81004a49>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8114452a>] ? handle_mm_fault+0x23a/0x310
        Nov 9 19:07:16 vs302 kernel: [<ffffffff811485b6>] ? vma_adjust+0x556/0x5e0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff810474e9>] ? __do_page_fault+0x139/0x480
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81148b8a>] ? vma_merge+0x29a/0x3e0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81149fdc>] ? do_brk+0x26c/0x350
        Nov 9 19:07:16 vs302 kernel: [<ffffffff8100ba1d>] ? retint_restore_args+0x5/0x6
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81513bfe>] ? do_page_fault+0x3e/0xa0
        Nov 9 19:07:16 vs302 kernel: [<ffffffff81510fb5>] ? page_fault+0x25/0x30
        Nov 9 19:07:16 vs302 kernel: Mem-Info:
        Nov 9 19:07:16 vs302 kernel: Node 0 DMA per-cpu:
        Nov 9 19:07:16 vs302 kernel: CPU 0: hi: 0, btch: 1 usd: 0
        Nov 9 19:07:16 vs302 kernel: Node 0 DMA32 per-cpu:
        Nov 9 19:07:16 vs302 kernel: CPU 0: hi: 186, btch: 31 usd: 30
        Nov 9 19:07:16 vs302 kernel: active_anon:132249 inactive_anon:46 isolated_anon:0
        Nov 9 19:07:16 vs302 kernel: active_file:56 inactive_file:59 isolated_file:0
        Nov 9 19:07:16 vs302 kernel: unevictable:0 dirty:2 writeback:0 unstable:0
        Nov 9 19:07:16 vs302 kernel: free:1369 slab_reclaimable:1774 slab_unreclaimable:11588
        Nov 9 19:07:16 vs302 kernel: mapped:54 shmem:48 pagetables:1211 bounce:0
        Nov 9 19:07:16 vs302 kernel: Node 0 DMA free:2440kB min:72kB low:88kB high:108kB active_anon:12156kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14648kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:24kB slab_unreclaimable:8kB kernel_stack:0kB pagetables:16kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
        Nov 9 19:07:16 vs302 kernel: lowmem_reserve[]: 0 590 590 590
        Nov 9 19:07:16 vs302 kernel: Node 0 DMA32 free:3036kB min:3072kB low:3840kB high:4608kB active_anon:516840kB inactive_anon:184kB active_file:224kB inactive_file:236kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:604988kB mlocked:0kB dirty:8kB writeback:0kB mapped:216kB shmem:192kB slab_reclaimable:7072kB slab_unreclaimable:46344kB kernel_stack:880kB pagetables:4828kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:128 all_unreclaimable? no
        Nov 9 19:07:16 vs302 kernel: lowmem_reserve[]: 0 0 0 0
        Nov 9 19:07:16 vs302 kernel: Node 0 DMA: 0*4kB 1*8kB 0*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 2440kB
        Nov 9 19:07:16 vs302 kernel: Node 0 DMA32: 129*4kB 67*8kB 30*16kB 19*32kB 6*64kB 2*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3036kB
        Nov 9 19:07:16 vs302 kernel: 182 total pagecache pages
        Nov 9 19:07:16 vs302 kernel: 0 pages in swap cache
        Nov 9 19:07:16 vs302 kernel: Swap cache stats: add 0, delete 0, find 0/0
        Nov 9 19:07:16 vs302 kernel: Free swap = 0kB
        Nov 9 19:07:16 vs302 kernel: Total swap = 0kB
        Nov 9 19:07:16 vs302 kernel: 157439 pages RAM
        Nov 9 19:07:16 vs302 kernel: 6271 pages reserved
        Nov 9 19:07:16 vs302 kernel: 2686 pages shared
        Nov 9 19:07:16 vs302 kernel: 146395 pages non-shared
        Nov 9 19:07:16 vs302 kernel: [ pid ] uid tgid total_vm rss cpu oom_adj oom_score_adj name
        Nov 9 19:07:16 vs302 kernel: [ 271] 0 271 2798 231 0 -17 -1000 udevd
        Nov 9 19:07:16 vs302 kernel: [ 476] 0 476 2797 230 0 -17 -1000 udevd
        Nov 9 19:07:16 vs302 kernel: [ 718] 0 718 2279 122 0 0 0 dhclient
        Nov 9 19:07:16 vs302 kernel: [ 762] 0 762 6909 58 0 -17 -1000 auditd
        Nov 9 19:07:16 vs302 kernel: [ 787] 0 787 62270 147 0 0 0 rsyslogd
        Nov 9 19:07:16 vs302 kernel: [ 801] 25 801 40326 2655 0 0 0 named
        Nov 9 19:07:16 vs302 kernel: [ 850] 0 850 16563 172 0 -17 -1000 sshd
        Nov 9 19:07:16 vs302 kernel: [ 875] 0 875 23451 240 0 0 0 sshd
        Nov 9 19:07:16 vs302 kernel: [ 966] 498 966 4780 44 0 0 0 wrapper
        Nov 9 19:07:16 vs302 kernel: [ 968] 498 968 497404 40812 0 0 0 java
        Nov 9 19:07:16 vs302 kernel: [ 1057] 0 1057 20216 225 0 0 0 master
        Nov 9 19:07:16 vs302 kernel: [ 1064] 89 1064 20278 209 0 0 0 qmgr
        Nov 9 19:07:16 vs302 kernel: [ 1071] 0 1071 27075 121 0 0 0 bash
        Nov 9 19:07:16 vs302 kernel: [ 1111] 0 1111 24880 350 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1117] 48 1117 24913 351 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1118] 48 1118 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1119] 48 1119 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1120] 48 1120 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1121] 48 1121 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1122] 48 1122 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1124] 48 1124 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1125] 48 1125 24880 337 0 0 0 httpd
        Nov 9 19:07:16 vs302 kernel: [ 1129] 0 1129 29313 151 0 0 0 crond
        Nov 9 19:07:16 vs302 kernel: [ 1143] 0 1143 1018 22 0 0 0 agetty
        Nov 9 19:07:16 vs302 kernel: [ 1146] 0 1146 1015 22 0 0 0 mingetty
        Nov 9 19:07:16 vs302 kernel: [ 1514] 0 1514 23451 237 0 0 0 sshd
        Nov 9 19:07:16 vs302 kernel: [ 1517] 0 1517 27075 113 0 0 0 bash
        Nov 9 19:07:16 vs302 kernel: [ 1641] 89 1641 20236 218 0 0 0 pickup
        Nov 9 19:07:16 vs302 kernel: [ 1659] 0 1659 25234 39 0 0 0 tail
        Nov 9 19:07:16 vs302 kernel: [ 1660] 0 1660 89903 85712 0 0 0 setsebool
        Nov 9 19:07:16 vs302 kernel: Out of memory: Kill process 1660 (setsebool) score 568 or sacrifice child
        Nov 9 19:07:16 vs302 kernel: Killed process 1660, UID 0, (setsebool) total-vm:359612kB, anon-rss:342708kB, file-rss:140kB


  • Best practice to display POI in iPhone's MapKit?

    - by iamj4de
    Assuming I have a database of POI with their respective coordinates (longitude & latitude), what would be the "standard" way to display the POI as annotations around the user's current location? To elaborate: given a zoom level, I guess I have to search the database for all POI whose distance to the current location is less than a certain threshold, then create annotations for them. Or is there a smarter way? If the user zooms in/out or moves the map, will I need to redo the whole thing again? It seems that MapKit has a mechanism to cache/reuse annotations. Should I create a lot of them right away and let MapKit decide what to render when the visible region changes? I guess this would make the transition smoother, but also consume more memory. What is your experience with this? Thanks.
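
    A rough Objective-C sketch of the reuse mechanism mentioned above, assuming the POIs are objects conforming to MKAnnotation and a hypothetical fetchPOIsInRegion: method that runs the distance/region query against the database:

        // MKMapViewDelegate callback: reuse annotation views the way UITableView
        // reuses cells, instead of allocating one view per POI.
        - (MKAnnotationView *)mapView:(MKMapView *)mapView
                    viewForAnnotation:(id<MKAnnotation>)annotation {
            if ([annotation isKindOfClass:[MKUserLocation class]]) return nil;  // keep the blue dot

            static NSString *reuseId = @"POIPin";
            MKPinAnnotationView *pin = (MKPinAnnotationView *)
                [mapView dequeueReusableAnnotationViewWithIdentifier:reuseId];
            if (pin == nil) {
                pin = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation
                                                       reuseIdentifier:reuseId] autorelease];
                pin.canShowCallout = YES;
            } else {
                pin.annotation = annotation;
            }
            return pin;
        }

        // When the visible region settles, re-query the database for POIs inside it
        // and swap the annotation set (fetchPOIsInRegion: is a hypothetical helper).
        - (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
            NSArray *pois = [self fetchPOIsInRegion:mapView.region];
            [mapView removeAnnotations:mapView.annotations];
            [mapView addAnnotations:pois];
        }

    Re-querying in regionDidChangeAnimated: is one usual way to handle zooming and panning instead of creating every annotation up front.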


  • ASP.NET - Missing #includes cause compilation errors: Failed to map the path '...'

    - by frankadelic
    I have an ASP.NET application which features some server-side includes. For example:

        <!--#include virtual="/scripts.inc" -->

    These files are not present in my ASP.NET website project because my website starts in a virtual directory: /path-to-my-application. When I choose Build Web Site, I get this error:

        Failed to map the path '/scripts.inc'

    Visual Studio cannot resolve these include files that are defined at the root directory level; they are not visible in the website project. Aside from manually commenting out the #include references, is there any way I can get the website to build? Can I force Visual Studio to ignore those errors and compile the site? Once the website is pushed out to IIS, there is no problem, because all the #include files are in place. NOTE - Web Controls are not an option for this application. Please assume #include files are a requirement. Also, I cannot move the include files, since they are used by other applications.


  • Rails Associations Question

    - by Mutuelinvestor
    I'm new to Rails and have volunteered to help out the local high school track team with a simple database that tracks the runners' performances. For the moment, I have three models: Runners, Race_Data and Races, with the following associations: Runners have_many Race_Data, and Races have_many Race_Data. I also want to create the association Runners have_many Races through Race_Data, but as I look at the diagram I have drawn, there is already a many-to-one relationship from Race_Data to Races. Does the combination of Runners having many Race_Data and Race_Data having one Race imply a many-to-many relationship between Runners and Races?
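
    A minimal sketch of how these three models are usually wired up (the :class_name option just sidesteps Rails' singular/plural handling of the word "data"; rename it to match the actual join model):

        class Runner < ActiveRecord::Base
          has_many :race_data, :class_name => 'RaceData'
          has_many :races, :through => :race_data
        end

        class RaceData < ActiveRecord::Base   # join model; add set_table_name "race_data" if Rails guesses wrong
          belongs_to :runner
          belongs_to :race
        end

        class Race < ActiveRecord::Base
          has_many :race_data, :class_name => 'RaceData'
          has_many :runners, :through => :race_data
        end

    With the join model carrying belongs_to for both sides, the combination described above does behave as a many-to-many between Runners and Races.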


  • Delegate, BeginInvoke, EndInvoke - How to clean up multiple async thread calls to the same delegate?

    - by Dan
    I've created a Delegate that I intend to call asynchronously.

    Module-level declarations:

        Delegate Sub GetPartListDataFromServer(ByVal dvOriginal As DataView, ByVal ProgramID As Integer)
        Dim dlgGetPartList As GetPartListDataFromServer

    I use the following code in a method:

        Dim dlgGetPartList As New GetPartListDataFromServer(AddressOf AsyncThreadMethod_GetPartListDataFromServer)
        dlgGetPartList.BeginInvoke(ucboPart.DataSource, ucboProgram.Value, AddressOf AsyncCallback_GetPartListDataFromServer, Nothing)

    The method runs and does what it needs to. The async callback is fired upon completion, where I do an EndInvoke:

        Sub AsyncCallback_GetPartListDataFromServer(ByVal ar As IAsyncResult)
            dlgGetPartList.EndInvoke(Nothing)
        End Sub

    It works as long as the method that starts the BeginInvoke on the delegate only ever runs while there is not a BeginInvoke/thread operation already running. The problem is that a new thread could be invoked while another call on the delegate is still running and hasn't yet been EndInvoke'd. The program needs to be able to have the delegate run in more than one instance at a time if necessary, and they all need to complete and have EndInvoke called. Once I start another BeginInvoke I lose the reference to the first BeginInvoke, so I am unable to clean up the earlier call with an EndInvoke. What is a clean solution and best practice to overcome this problem?
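
    One conventional pattern, sketched below: each callback recovers its own delegate instance from the IAsyncResult (via System.Runtime.Remoting.Messaging.AsyncResult), so every BeginInvoke gets its matching EndInvoke no matter how many calls overlap. This is an illustration of the idea rather than drop-in code for the application above:

        Imports System.Data
        Imports System.Runtime.Remoting.Messaging

        Module AsyncCleanupSketch
            Delegate Sub GetPartListDataFromServer(ByVal dvOriginal As DataView, ByVal ProgramID As Integer)

            Sub StartCall(ByVal dv As DataView, ByVal programId As Integer)
                ' A fresh delegate per call; its reference travels inside the IAsyncResult.
                Dim dlg As New GetPartListDataFromServer(AddressOf Worker)
                dlg.BeginInvoke(dv, programId, AddressOf Finished, Nothing)
            End Sub

            Private Sub Worker(ByVal dvOriginal As DataView, ByVal ProgramID As Integer)
                ' ... long-running server work here ...
            End Sub

            Private Sub Finished(ByVal ar As IAsyncResult)
                ' Recover the exact delegate instance that was BeginInvoke'd.
                Dim result As AsyncResult = DirectCast(ar, AsyncResult)
                Dim dlg As GetPartListDataFromServer = DirectCast(result.AsyncDelegate, GetPartListDataFromServer)
                dlg.EndInvoke(ar)   ' pass the IAsyncResult itself, not Nothing
            End Sub
        End Module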


  • Bitbucket Wiki TableOfContents

    - by Philipp Andre
    Hello guys, I started with Bitbucket and tried to use the wiki for my documentation. I created a folder "documentation" under the "wiki" folder (at the same level as home.wiki) and added some .wiki files. Now I'm trying to display all the contents of Documentation/ as a table of contents, so I added the line: <<toc Documentation/ 2 >> After committing and pushing, my home.wiki page does show a TOC, but it contains only the first file stored in the folder Documentation/. I want them all to be listed. What is my mistake? Best regards, Philipp


  • JAXB: @XmlTransient on third-party or external super class

    - by Phil
    Hi, I need some help regarding the following issue with JAXB 2.1. Sample: I've created a SpecialPerson class that extends an abstract class Person. Now I want to transform my object structure into an XML schema using JAXB. I don't want the Person XML type to appear in my XML schema, to keep the schema simple; instead I want the fields of the Person class to appear in the SpecialPerson XML type. Normally I would add the annotation @XmlTransient at class level to the Person code. The problem is that Person is a third-party class, so I have no way to add @XmlTransient there. How can I tell JAXB that it should ignore the Person class without annotating it? Is it possible to configure this externally somehow? Have you had the same problem before? Any ideas what the best solution would be?
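
    For reference, a minimal sketch of what the class-level annotation does in the case where the superclass can be edited (field names are invented); the question is how to get the same effect when Person cannot be touched:

        import javax.xml.bind.annotation.XmlRootElement;
        import javax.xml.bind.annotation.XmlTransient;

        @XmlTransient                    // Person gets no complex type of its own...
        abstract class Person {
            public String name;          // ...its mapped fields are pulled down into the subclasses
        }

        @XmlRootElement
        class SpecialPerson extends Person {
            public String specialty;     // invented field, for illustration only
        }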


  • How do I create a UIViewController programmatically?

    - by pankaj
    I am working on an app where I have data in a UITableView. It is like a drill-down application: the user taps a row and goes to the next page, which shows more records in another UITableView. The problem in my case is that I don't know how many levels the user can drill down; the number of levels is not fixed. So now I am thinking of creating and adding the view controllers programmatically. Is it possible? If yes, how? Thanks in advance.
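
    A rough Objective-C sketch of the usual drill-down pattern: the table view controller creates and pushes a fresh instance of itself for whatever row was tapped, so the depth is unlimited (DrillDownViewController, its items property, and the children/title accessors are hypothetical names):

        // Inside a UITableViewController subclass, e.g. DrillDownViewController.
        - (void)tableView:(UITableView *)tableView
                didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            id childData = [self.items objectAtIndex:indexPath.row];   // hypothetical data node

            // Create the next level entirely in code -- no nib, no fixed depth.
            DrillDownViewController *next =
                [[DrillDownViewController alloc] initWithStyle:UITableViewStylePlain];
            next.items = [childData children];      // hypothetical accessor
            next.title = [childData title];

            [self.navigationController pushViewController:next animated:YES];
            [next release];                         // pre-ARC, as in 2010-era code
        }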


  • Collection.contains(Enum.Value) in HQL?

    - by Seth
    I'm a little confused about how to do something in HQL. So let's say I have a class Foo that I'm persisting in Hibernate. It contains a set of enum values, like so:

        public class Foo {
            @CollectionOfElements
            private Set<Bar> barSet = new HashSet<Bar>();
            //getters and setters here ...
        }

    and

        public enum Bar { A, B }

    Is there an HQL statement I can use to fetch only Foo instances whose barSet contains Bar.B?

        List foos = session.createQuery("from Foo as foo " +
            "where foo.barSet.contains.Bar.B").list();

    Or am I stuck fetching all Foo instances and filtering them out at the DAO level?

        List foos = session.createQuery("from Foo as foo").list();
        List results = new ArrayList();
        for(Foo f : foos) {
            if(f.barSet.contains(Bar.B)) results.add(f);
        }

    Thanks!
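
    A minimal sketch of the usual HQL route, assuming Hibernate's member of operator behaves for this @CollectionOfElements set the way it does for entity collections:

        // Bind the enum as a parameter and test collection membership in HQL.
        List foos = session.createQuery(
                "from Foo foo where :bar member of foo.barSet")
            .setParameter("bar", Bar.B)
            .list();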


  • Distributing APNS providers

    - by Sam
    I'm writing a business-focused iPhone app which includes a self-hosted server component. I'd like to include push notification functionality in the server; reading through the programming guide it looks as if this would involve one of:

    1. Distributing the provider certificate with the server component - this doesn't sound like a terribly good idea (even if Apple permits it?).
    2. Hosting a shared notification provider and forwarding notifications to APNS from the servers. For an ongoing, high-availability service, this is likely to require a subscription pricing component, which I would prefer to avoid.
    3. Requiring customers to apply for their own provider certificate. However, it's not clear whether multiple organisations are allowed to apply for provider certificates with a single bundle ID, and it would significantly increase the barrier to adoption.

    APNS looks to me as if it's specifically geared for centrally hosted services. Is anyone distributing self-hosted notification providers? Are there any other options?


  • Python imaging alternatives

    - by Paul McMillan
    I have Python code that needs to do just a couple of simple things to photographs: crop, resize, and overlay a watermark. I've used PIL, and the resample/resize results are TERRIBLE. I've used ImageMagick, and its interface and commands appear to have been designed by packaging a cat in a box and repeatedly throwing it down a set of stairs at a keyboard. I'm looking for something that is not PIL or ImageMagick that I can use with Python to do simple, high-quality image transformations. For that matter, it doesn't even have to have Python bindings if the command-line interface is good. Oh, and it needs to be relatively platform agnostic: our production servers are Linux, but some of our devs develop on Windows. It can't require the installation of a bunch of silly GUI code to use as a library, either.
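
    For comparison, a sketch of the three operations in PIL itself; a common cause of terrible resize output is the default nearest-neighbour filter, so this passes ANTIALIAS explicitly (file names and coordinates are placeholders):

        from PIL import Image

        photo = Image.open("photo.jpg")

        # Crop: box is (left, upper, right, lower) in pixels.
        photo = photo.crop((100, 100, 900, 700))

        # Resize with a proper downsampling filter instead of the default.
        photo = photo.resize((400, 300), Image.ANTIALIAS)

        # Overlay a watermark that carries an alpha channel; the third
        # argument uses that alpha as the paste mask.
        watermark = Image.open("watermark.png").convert("RGBA")
        photo.paste(watermark, (10, 260), watermark)

        photo.save("photo_out.jpg", quality=90)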


  • SVN project folder tree structure vs production folder tree structure

    - by Marco Demaio
    While developing a PHP+JS web application we always try to separate big blocks of code into small modules/components, in order to make these as reusable as possible in other applications. Let's say we now have:

    - EcommerceApp (an ecommerce main application)
    - Server-file-mgr (a component to view/manage files on the server)
    - Mylib (a library of useful functions)
    - MaillistApp (another main application, to handle mailing lists)
    - ...

    EcommerceApp needs both Server-file-mgr and Mylib to work, Server-file-mgr needs Mylib to work, and MaillistApp needs both Server-file-mgr and Mylib to work too. My idea is to simply structure the SVN project folder tree by putting everything at the same level:

        trunk/EcommerceApp
        trunk/Server-file-mgr
        trunk/Mylib
        trunk/MaillistApp

    But in real life, to make these apps work, the folder tree structure must be the following:

        EcommerceApp
          |_ Mylib
          |_ Server-file-mgr
        MaillistApp
          |_ Mylib
          |_ Server-file-mgr

    I mean Mylib and Server-file-mgr need to be inside the EcommerceApp/MaillistApp folder. How would you then structure the SVN folders, as I did or in a different/better/smarter way?
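
    One common way to keep the flat repository layout and still get the nested working copies is svn:externals; a sketch, with placeholder repository URLs (the same property would be set on MaillistApp):

        # from inside the EcommerceApp working copy
        svn propedit svn:externals .
        #   ...and enter, one mapping per line:
        #   Mylib            http://svn.example.com/repos/trunk/Mylib
        #   Server-file-mgr  http://svn.example.com/repos/trunk/Server-file-mgr
        svn commit -m "Pull shared components in via svn:externals"
        svn update    # now checks out Mylib/ and Server-file-mgr/ inside EcommerceApp/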


  • ASP.NET MVC View Engine Comparison

    - by McKAMEY
    EDIT: added a community wiki to begin capturing people's experience with various View Engines. Please respectfully add any experiences you've had. I've been searching on SO & Google for a breakdown of the various View Engines available for ASP.NET MVC, but haven't found much more than simple high-level descriptions of what a view engine is. I'm not necessarily looking for "best" or "fastest" but rather some real world comparisons of advantages / disadvantages of the major players (e.g. the default WebFormViewEngine, MvcContrib View Engines, etc.) for various situations. I think this would be really helpful in determining if switching from the default engine would be advantageous for a given project or development group. Has anyone encountered such a comparison?


  • algorithm to use to return a specific range of nodes in a directed graph

    - by GatesReign
    I have a class Graph with two lists, namely nodes and edges, and a function

        List<int> GetNodesInRange(Graph graph, int Range)

    Given these parameters, I need an algorithm that will go through the graph and return only the nodes that are as deep (in levels) as the range. The algorithm should be able to accommodate a large number of nodes and large ranges. On top of this, should I use a similar function

        List<int> GetNodesInRange(Graph graph, int Range, int selected)

    where I want to be able to search outwards from the selected node, up to the number of nodes outwards (range) specified? So in the first function, I expect it to return the nodes placed in the blue box. For the other function, if I pass the nodes as in the graph with a range of 1 and it starts at node 5, I want it to return the list of nodes that satisfy this criterion (placed in the orange box).
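
    A sketch of the usual approach: a breadth-first search that stops expanding once it is Range levels away from the start node. The adjacency-list representation below is an assumption about how Graph's nodes and edges could be exposed:

        using System.Collections.Generic;

        static class GraphRange
        {
            // Breadth-first search limited to 'range' levels from 'start'.
            public static List<int> GetNodesInRange(Dictionary<int, List<int>> adjacency,
                                                    int range, int start)
            {
                var result = new List<int>();
                var visited = new HashSet<int> { start };
                var queue = new Queue<KeyValuePair<int, int>>();    // (node, depth)
                queue.Enqueue(new KeyValuePair<int, int>(start, 0));

                while (queue.Count > 0)
                {
                    KeyValuePair<int, int> current = queue.Dequeue();
                    result.Add(current.Key);
                    if (current.Value == range) continue;           // don't expand past the range

                    List<int> neighbours;
                    if (!adjacency.TryGetValue(current.Key, out neighbours)) continue;

                    foreach (int neighbour in neighbours)
                    {
                        if (visited.Add(neighbour))                 // true only on first visit
                            queue.Enqueue(new KeyValuePair<int, int>(neighbour, current.Value + 1));
                    }
                }
                return result;
            }
        }

    Since each node and edge is visited at most once, the work stays proportional to the neighbourhood actually explored, which is what makes this workable for large graphs and large ranges.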


  • How is Entity Framework 4's POCO support compared to NHibernate?

    - by Kevin Pang
    Just wondering if anyone has had any experience using Entity Framework 4's POCO support and how it stands up compared to NHibernate. If they're comparable, I'd be very interested in making Entity Framework 4 my ORM of choice, if only because it would:

    - Support both data-first AND object-first development
    - Have a robust LINQ provider
    - Be easier to pitch to clients (since it's developed by Microsoft)
    - Come baked into the .NET framework rather than requiring 8 dlls to get up and running

    In other words, are there any major shortcomings to EF4? Does it support all of the basic functionality NHibernate supports (lazy loading, eager loading, 1st-level caching, etc.) or is it still rough around the edges? Is the syntax for setting up the mappings as easy as NHibernate and/or Fluent NHibernate? Edit: Please don't bring up the vote of no confidence. That was ages ago and dealt with some serious shortcomings of EF1 that really don't seem to apply anymore to EF4.


  • How to add a view on top of a UIPopoverController

    - by Noah Witherspoon
    I've got an iPad app with a “drawer” table displayed in a popover. The user can tap-and-hold on an item in the drawer to drag that item out of it and into my main view. That part works fine; unfortunately, the view being dragged appears under the popover, and is too small to be visible until it's dragged out from underneath it. If I add the view as a subview of the view controller in the popover, it gets clipped by the popover's frame, and as I can't access the UIPopoverController's view, I can't disable its layer's masksToBounds—and that probably wouldn't be a great idea anyway. I suspect that I could use an additional UIWindow with a high windowLevel value to force the dragged view to appear on top of the popover, but this seems like overkill. Is there a better solution?


  • Parsing Wiki XML Dumps ver0.4 just got tough

    - by syed
    Hello, I am trying to parse a Wikipedia XML dump using "Parse-MediaWikiDump-1.0.4" along with the "Wikiprep.pl" script. I guess this script works fine with ver0.3 Wiki XML dumps but not with the latest ver0.4 dumps. I get the following error: Can't locate object method "page" via package "Parse::MediaWikiDump::Pages" at wikiprep.pl line 390. Also, under the "Parse-MediaWikiDump-1.0.4" documentation @ http://search.cpan.org/~triddle/Parse-MediaWikiDump-1.0.4/lib/Parse/MediaWikiDump/Pages.pm, I read "LIMITATIONS Version 0.4 This class was updated to support version 0.4 dump files from a MediaWiki instance but it does not currently support any of the new information available in those files." Any workarounds would help me get to the next level. Note: one may wonder why we cannot directly use a SAX or StAX parser instead; the Wikipedia dump is a 25GB-plus single file, so stack/memory issues are obvious. Hence the above Perl script resolves this issue, but currently I am stuck with this version problem.


  • show all tables in DB2 using the LIST command

    - by Robert
    This is embarrassing, but I can't seem to find a way to list the names of the tables in our DB2 database. Here is what I tried:

        root@VO11555:~# su - db2inst1
        root@VO11555:~# . ~db2inst1/sqllib/db2profile
        root@VO11555:~# LIST ACTIVE DATABASES

    We receive this error:

        SQL1092N "ROOT" does not have the authority to perform the requested command or operation.

    The DB2 version number follows:

        root@VO11555:~# db2level
        DB21085I Instance "db2inst1" uses "64" bits and DB2 code release "SQL09071" with level identifier "08020107".
        Informational tokens are "DB2 v9.7.0.1", "s091114", "IP23034", and Fix Pack "1".
        Product is installed at "/opt/db2V9.7".
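
    For reference, a sketch of how this is normally run from the instance owner's environment (MYDB is a placeholder database name); note that the transcript above still shows a root prompt, so the su may never have actually taken effect:

        su - db2inst1
        . ~db2inst1/sqllib/db2profile

        db2 connect to MYDB
        db2 list tables for all                                # every table visible to the user
        db2 "select tabschema, tabname from syscat.tables"     # the same list via the catalog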


  • Copy hashtable to another hashtable using c++

    - by zengr
    I am starting with C++ and need to know what the approach should be to copy one hashtable to another in C++. We can easily do this in Java using: HashMap copyOfOriginal = new HashMap(original); But what about C++? How should I go about it? UPDATE: Well, I am doing it at a very basic level; perhaps the Java example was a wrong one to give. This is what I am trying to implement in C++: I have a hash array, and each element of the array points to the head of a linked list, which has its individual nodes (data and a next pointer). Now I need to copy the complete hash array and the linked list each node is pointing to.
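
    A minimal sketch of the deep copy described in the update, assuming a hypothetical Node struct holding an int; each bucket's list is walked and rebuilt so that the copy shares no pointers with the original:

        #include <cstddef>

        struct Node {
            int   data;
            Node *next;
        };

        // Deep-copy one bucket's singly linked list, preserving order.
        Node *copyList(const Node *src) {
            Node  *head = NULL;
            Node **tail = &head;                 // where the next copied node is attached
            for (; src != NULL; src = src->next) {
                *tail = new Node;
                (*tail)->data = src->data;
                (*tail)->next = NULL;
                tail = &(*tail)->next;
            }
            return head;
        }

        // Deep-copy the whole hash array of bucket heads.
        void copyTable(Node *original[], Node *copy[], std::size_t buckets) {
            for (std::size_t i = 0; i < buckets; ++i)
                copy[i] = copyList(original[i]);
        }

    With a ready-made container (std::map, or std::unordered_map on newer compilers) the Java one-liner has a direct equivalent in the copy constructor, e.g. std::map<int, int> copy(original);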


  • Bacula windows client could not connect to Bacula director

    - by pr0f-r00t
    I have a Bacula server on my Linux Debian squeeze host (Bacula version 5.0.2) and a Bacula client on Windows XP SP3. On my network each client can see each other, can share files and can ping. On my local server I could run bconsole and the server responds but when I run bconsole or bat on my windows client the server does not respond. Here are my configuration files: bacula-dir.conf: # # Default Bacula Director Configuration file # # The only thing that MUST be changed is to add one or more # file or directory names in the Include directive of the # FileSet resource. # # For Bacula release 5.0.2 (28 April 2010) -- debian squeeze/sid # # You might also want to change the default email address # from root to your address. See the "mail" and "operator" # directives in the Messages resource. # Director { # define myself Name = nima-desktop-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 1 Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" # Console password Messages = Daemon DirAddress = 127.0.0.1 # DirAddress = 72.16.208.1 } JobDefs { Name = "DefaultJob" Type = Backup Level = Incremental Client = nima-desktop-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultJob" } #Job { # Name = "BackupClient2" # Client = nima-desktop2-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=nima-desktop-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /nonexistant/path/to/file/archive/dir/bacula-restores } # job for vmware windows host Job { Name = "nimaxp-fd" Type = Backup Client = nimaxp-fd FileSet = "nimaxp-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # job for vmware windows host Job { Name = "arg-michael-fd" Type = Backup Client = nimaxp-fd FileSet = "arg-michael-fs" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = Default Write Bootstrap = "/var/bacula/working/rsys-win-www-1-fd.bsr" #Change this } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } # # Put your list of files here, preceded by 'File =', one per line # or include an external list with: # # File = <file-name # # Note: / backs up everything on the root partition. # if you have other partitions such as /usr or /home # you will probably want to add them too. 
# # By default this is defined to point to the Bacula binary # directory to give a reasonable FileSet to backup to # disk storage during initial testing. # File = /usr/sbin } # # If you backup the root directory, the following two excluded # files can be useful # Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "nimaxp-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # List of files to be backed up FileSet { Name = "arg-michael-fs" Enable VSS = yes Include { Options { signature = MD5 } File = "C:\softwares" File = C:/softwares File = "C:/softwares" } } # # When to do the backups, full backup on first sunday of the month, # differential (i.e. incremental since full) every other sunday, # and incremental backups other days Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = nima-desktop-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V6" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = nimaxp-fd Address = nimaxp FDPort = 9102 Catalog = MyCatalog Password = "Ku8F1YAhDz5EMUQjiC9CcSw95Aho9XbXailUmjOaAXJP" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # Client file service for vmware windows host Client { Name = arg-michael-fd Address = 192.168.0.61 FDPort = 9102 Catalog = MyCatalog Password = "b4E9FU6s/9Zm4BVFFnbXVKhlyd/zWxj0oWITKK6CALR/" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = nima-desktop2-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "_MOfxEuRzxijc0DIMcBqtyx9iW1tzE7V62" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" Device = FileStorage Media Type = File } # Definition of DDS tape storage device #Storage { # Name = DDS-4 # Do not use "localhost" here # Address = localhost # N.B. 
Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # password for Storage daemon # Device = DDS-4 # must be same as Device in Storage daemon # Media Type = DDS-4 # must be same as MediaType in Storage daemon # Autochanger = yes # enable for autochanger device #} # Definition of 8mm tape storage device #Storage { # Name = "8mmDrive" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "Exabyte 8mm" # MediaType = "8mm" #} # Definition of DVD storage device #Storage { # Name = "DVD" # Do not use "localhost" here # Address = localhost # N.B. Use a fully qualified name here # SDPort = 9103 # Password = "Cj-gtxugC4dAymY01VTSlUgMTT5LFMHf9" # Device = "DVD Writer" # MediaType = "DVD" #} # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; dbuser = ""; dbpassword = "" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard # # NOTE! If you send to two email or more email addresses, you will need # to replace the %r in the from field (-f part) with a single valid # email address in both the mailcommand and the operatorcommand. # What this does is, it sets the email address that emails would display # in the FROM field, which is by default the same email as they're being # sent to. However, if you send email to more than one address, then # you'll have to set the FROM address manually, to a single address. # for example, a '[email protected]', is better since that tends to # tell (most) people that its coming from an automated source. # mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = nima-desktop-mon Password = "-T0h6HCXWYNy0wWqOomysMvRGflQ_TA6c" CommandACL = status, .status } bacula-fd.conf on client: # # Default Bacula File Daemon Configuration file # # For Bacula release 5.0.3 (08/05/10) -- Windows MinGW32 # # There is not much to change here except perhaps the # File daemon Name # # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = nimaxp-fd FDport = 9102 # where we listen for the director WorkingDirectory = "C:\\Program Files\\Bacula\\working" Pid Directory = "C:\\Program Files\\Bacula\\working" # Plugin Directory = "C:\\Program Files\\Bacula\\plugins" Maximum Concurrent Jobs = 10 } # # List Directors who are permitted to contact this File daemon # Director { Name = Nima-desktop-dir Password = "Cv70F6pf1t6pBopT4vQOnigDrR0v3L" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = nimaxp-mon Password = "q5b5g+LkzDXorMViFwOn1/TUnjUyDlg+gRTBp236GrU3" Monitor = yes } # Send all messages except skipped files back to Director Messages { Name = Standard director = Nima-desktop = all, !skipped, !restored } I have checked my firewall and disabled the firewall but it doesn't work.


  • Where can I find a description of the old British Standard structured flow charts?

    - by Steve314
    Some professional organisation defined these in, IIRC, the early 80s as similar to the more well-known flow charts, but "structured". Instead of having arbitrary "goto" arrows, they had the equivalent of loops etc. They were standardized, and I vaguely remember studying them briefly at O Level. Of course they were about as useful as the well-known chocolate teapot, but I'd still like to be able to find a reference guide for them if possible - for roughly the same reason I was looking for a reference for standard Basic a while back. Google tells me - well, nothing really. They may as well never have existed. Which is probably nearly (and perhaps completely) true - I certainly never heard of them anywhere else except when I was at school. There's a chance that they may even be my computer science teacher's little joke.

