Search Results

Search found 22546 results on 902 pages for 'mode management'.

Page 97/902 | < Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >

  • Laptop Hibernates after adapter is unplugged

    - by SpyrosR
    Hello, I am experiencing a strange problem under Ubuntu 12.04. If I unplug the charger while my laptop is on, I don't see any messages, but my laptop goes straight into a suspend or hibernate mode, and it's a real pain to get it stable again afterwards. It always suspends (or hibernates). It always takes two attempts to restart it (press the power button, it starts and hibernates again; press the power button again and it starts and stays on). It sometimes affects the wireless, which doesn't come back cleanly, and I either have to switch the wireless on and off a couple of times or sometimes restart the laptop entirely. Is anyone else experiencing the same issues and, if so, is there any solution? Thank you in advance.

    Read the article

  • FileSystemWatcher.Changed fires immediately when Excel 2007 opens XLS file in compatibility mode

    - by Rick Mogstad
    We use a FileSystemWatcher to monitor documents opened from our Document Management system, and if the user saves a document, we ask whether they would also like it updated in our system. We have a problem with XLS files in Excel 2007 (I have not verified whether the problem exists in 2003, but it only seems to affect files that open in compatibility mode in 2007) where the Changed event fires immediately upon opening the file, and then once more upon closing the file, even if nothing has changed or the user chooses not to save upon closing. This behavior does not occur when opening XLSX files. I wrote a test app to verify the behavior, which you can find at http://www.just2guys.net/SOFiles/FSWExcel.zip. In the app, there is one FileSystemWatcher for each NotifyFilter type, so that it is apparent why the Changed event was fired. Is there any way to prompt the user only when the document is actually saved in some way by the user? I can start monitoring the file after Process.Start is called, which allows me to skip the message upon opening the document, but I still get one upon closing the document, even when nothing was changed.
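
    One possible workaround, sketched below in C# with hypothetical names (not the code from the test app above): record the file's last write time when the document is handed to Excel, and only prompt when the watcher fires and that timestamp has actually advanced; a content hash would be an even stronger check.

        using System;
        using System.IO;

        // Sketch: only prompt when the file's LastWriteTimeUtc has moved forward
        // since the document was opened, so spurious Changed events raised when
        // Excel opens/closes a compatibility-mode XLS file are ignored.
        class SaveDetector
        {
            private readonly string _path;
            private DateTime _lastWriteAtOpen;

            public SaveDetector(string path)
            {
                _path = path;
                _lastWriteAtOpen = File.GetLastWriteTimeUtc(path);

                var watcher = new FileSystemWatcher(
                    Path.GetDirectoryName(path), Path.GetFileName(path));
                watcher.NotifyFilter = NotifyFilters.LastWrite;
                watcher.Changed += OnChanged;
                watcher.EnableRaisingEvents = true;
            }

            private void OnChanged(object sender, FileSystemEventArgs e)
            {
                DateTime current = File.GetLastWriteTimeUtc(_path);
                if (current > _lastWriteAtOpen)
                {
                    _lastWriteAtOpen = current;
                    PromptUserToUpdateDocument();   // hypothetical hook into the DMS prompt
                }
            }

            private void PromptUserToUpdateDocument()
            {
                // Ask the user whether the document should be updated in the system.
            }
        }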

    Read the article

  • Cannot use READPAST in snapshot isolation mode

    - by Marcus
    I have a process which is called from multiple threads and which does the following: (1) start a transaction; (2) select a unit of work from the work table by finding the next row where IsProcessed=0, using the hints (UPDLOCK, HOLDLOCK, READPAST); (3) process the unit of work (C# and SQL stored procedures); (4) commit the transaction. The idea is that a thread dips into the pool for the "next" piece of work and processes it, and the locks are there to ensure that a single piece of work is not processed twice (the order doesn't matter). All this has been working fine for months. Until today, that is, when I happened to realise that despite enabling snapshot isolation and making it the default at the database level, the actual transaction creation code was manually setting an isolation level of "ReadCommitted". I duly changed that to "Snapshot", and of course immediately received the "You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ" error message. Oops! The main reason for locking the row was to "mark" it in such a way that the mark would be removed when the transaction that applied it was committed, and the lock seemed to be the best way to do this, since this table isn't otherwise read except by these threads. If I were to use the IsProcessed flag as the lock, then presumably I would need to do the update first and then select the row I just updated, but I would need to employ the NOLOCK hint to know whether any other thread had set the flag on a row. It all sounds a bit messy. The easiest option would be to abandon snapshot isolation mode altogether, but the design of step #3 requires it. Any bright ideas on the best way to resolve this problem? Thanks, Marcus
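
    One pattern worth considering, sketched below against a hypothetical WorkQueue(Id, IsProcessed) table: claim the next row with a single atomic UPDATE ... OUTPUT inside a short READ COMMITTED transaction (where READPAST is still legal), then run the actual processing of step #3 under snapshot isolation in a separate transaction. The atomic claim replaces the UPDLOCK/HOLDLOCK marking, so no NOLOCK reads are needed.

        -- Sketch only; table and column names are hypothetical.
        -- Claim step: runs under READ COMMITTED, so READPAST is allowed here.
        SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
        BEGIN TRANSACTION;

        WITH next_item AS (
            SELECT TOP (1) Id, IsProcessed
            FROM dbo.WorkQueue WITH (ROWLOCK, UPDLOCK, READPAST)
            WHERE IsProcessed = 0
            ORDER BY Id
        )
        UPDATE next_item
        SET IsProcessed = 1
        OUTPUT inserted.Id;   -- hand the claimed Id back to the calling C# code

        COMMIT TRANSACTION;

        -- Processing step: do the real work for the claimed Id in a separate
        -- transaction under snapshot isolation, and reset IsProcessed on failure
        -- so another thread can pick the row up again.
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;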

    Read the article

  • SerializationException Occurring Only in Release Mode

    - by Calvin Nguyen
    Hi, I am working on an ASP.NET web app using Visual Studio 2008 and a third-party library. Things are fine in my development environment. Things are also good if the web app is deployed in Debug configuration. However, when it is deployed in Release mode, SerializationExceptions appear intermittently, breaking other functionality. In the Windows event log, the following error can be seen: "An unhandled exception occurred and the process was terminated. Application ID: DefaultDomain Process ID: 3972 Exception: System.Runtime.Serialization.SerializationException Message: Unable to find assembly 'MyThirdPartyLibrary, Version=1.234.5.67, Culture=neutral, PublicKeyToken=3d67ed1f87d44c89'. StackTrace: at System.Runtime.Serialization.Formatters.Binary.BinaryAssemblyInfo.GetAssembly() at System.Runtime.Serialization.Formatters.Binary.ObjectReader.GetType(BinaryAssemblyInfo assemblyInfo, String name) at System.Runtime.Serialization.Formatters.Binary.ObjectMap..ctor(String objectName, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) at System.Runtime.Serialization.Formatters.Binary.ObjectMap.Create(String name, String[] memberNames, BinaryTypeEnum[] binaryTypeEnumA, Object[] typeInformationA, Int32[] memberAssemIds, ObjectReader objectReader, Int32 objectId, BinaryAssemblyInfo assemblyInfo, SizedArray assemIdToAssemblyTable) at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryObjectWithMapTyped record) at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.ReadObjectWithMapTyped(BinaryHeaderEnum binaryHeaderEnum) at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run() at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage) at System.Runtime.Remoting.Channels.CrossAppDomainSerializer.DeserializeObject(MemoryStream stm) at System.AppDomain.Deserialize(Byte[] blob) at System.AppDomain.UnmarshalObject(Byte[] blob) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp." Using FUSLOGVW.exe (i.e., Assembly Binding Log Viewer), I can see the problem is that IIS attempts to find MyThirdPartyLibrary in directory C:\windows\system32\inetsrv. It refuses to look in the bin folder of the web app, where the DLL is actually located. Does anyone know what the problem is? Thanks, Calvin
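
    If the failing deserialization can be reached from the web application's own AppDomain, one mitigation sometimes used is an AssemblyResolve fallback that retries the load from the bin folder; a hedged C# sketch follows (paths and names hypothetical). Note that the event log above says the failure happens in DefaultDomain, and a handler registered from the web app does not run there, so installing the third-party assembly in the GAC may be the more reliable fix.

        using System;
        using System.IO;
        using System.Reflection;

        // Sketch: retry failed assembly loads from a known bin folder.
        // The handler only helps in the AppDomain where it is registered.
        public static class BinFolderAssemblyResolver
        {
            public static void Register(string binPath)
            {
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    string fileName = new AssemblyName(args.Name).Name + ".dll";
                    string candidate = Path.Combine(binPath, fileName);
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };
            }
        }

        // e.g. in Global.asax Application_Start:
        //     BinFolderAssemblyResolver.Register(Server.MapPath("~/bin"));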

    Read the article

  • using ghostscript in server mode to convert pdfs to pngs

    - by emh
    While I am able to convert a specific page of a PDF to a PNG like so: gs -dSAFER -dBATCH -dNOPAUSE -sDEVICE=png16m -dGraphicsAlphaBits=4 -sOutputFile=gymnastics-20.png -dFirstPage=20 -dLastPage=20 gymnastics.pdf, I am wondering if I can somehow use Ghostscript's JOBSERVER mode to process several conversions without having to incur the cost of starting up Ghostscript each time. From http://pages.cs.wisc.edu/~ghost/doc/svn/Use.htm: "-dJOBSERVER Define \004 (^D) to start a new encapsulated job used for compatibility with Adobe PS Interpreters that ordinarily run under a job server. The -dNOOUTERSAVE switch is ignored if -dJOBSERVER is specified since job servers always execute the input PostScript under a save level, although the exitserver operator can be used to escape from the encapsulated job and execute as if the -dNOOUTERSAVE was specified. This also requires that the input be from stdin, otherwise an error will result (Error: /invalidrestore in --restore--). Example usage is: gs ... -dJOBSERVER - < inputfile.ps -or- cat inputfile.ps | gs ... -dJOBSERVER - Note: The ^D does not result in an end-of-file action on stdin as it may on some PostScript printers that rely on TBCP (Tagged Binary Communication Protocol) to cause an out-of-band ^D to signal EOF in a stream input data. This means that direct file actions on stdin such as flushfile and closefile will affect processing of data beyond the ^D in the stream." The idea is to run Ghostscript in-process: the script would receive a request for a particular page of a PDF and would use Ghostscript to generate the specified image. I'd rather not start up a new Ghostscript process every time.

    Read the article

  • DXImageTransform.Matrix: absolutely positioned child elements not rotating in IE 8 standards mode

    - by davydka
    I've looked all over for more information on this, and would like to know why it happens. Here's the code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> </head> <body> <div style="position:absolute; top:200px; left:200px; height:200px; width:200px; border:1px solid black; filter: progid:DXImageTransform.Microsoft.Matrix(sizingMethod='auto expand', M11=0.9886188373396114, M12=-0.15044199698646263, M21=0.15044199698646263, M22=0.9886188373396114);"> <div style="position:absolute; top:10px; left:10px; border:1px solid darkblue;"> I do not rotate in IE 8. </div> </div> </body> </html> The problem is that absolutely or relatively positioned elements within a div that has been rotated using MS's dximagetransform.matrix do not inherit the transformation in IE 8. IE 6 & 7 render correctly, and I can solve the IE8 problem by triggering compatibility mode, but I'd rather not do that. Does anyone have any experience with this? I'm using css3 transform on other browsers and using dximagetransform.matrix to achieve this effect in IE. EDIT: Added the opening html tag. Problem still exists. http://i45.tinypic.com/nf4gmq.png

    Read the article

  • GWT dev mode throws ArrayIndexOutOfBoundsException when compile GinjectorImpl.java

    - by Jiang Zhu
    I'm getting the following exception when opening my GWT app in development mode; the exact same code compiles successfully using mvn gwt:compile. Caused by: java.lang.ArrayIndexOutOfBoundsException: 3667 at com.google.gwt.dev.asm.ClassReader.readClass(ClassReader.java:1976) at com.google.gwt.dev.asm.ClassReader.accept(ClassReader.java:464) at com.google.gwt.dev.asm.ClassReader.accept(ClassReader.java:420) at com.google.gwt.dev.shell.rewrite.HasAnnotation.hasAnnotation(HasAnnotation.java:45) at com.google.gwt.dev.shell.CompilingClassLoader.findClass(CompilingClassLoader.java:1100) at com.google.gwt.dev.shell.CompilingClassLoader.loadClass(CompilingClassLoader.java:1203) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:247) at com.google.gwt.dev.shell.ModuleSpace.loadClassFromSourceName(ModuleSpace.java:665) at com.google.gwt.dev.shell.ModuleSpace.rebindAndCreate(ModuleSpace.java:468) at com.google.gwt.dev.shell.GWTBridgeImpl.create(GWTBridgeImpl.java:49) at com.google.gwt.core.shared.GWT.create(GWT.java:57) at com.google.gwt.core.client.GWT.create(GWT.java:85) at ... I modified ModuleSpace.java to print out the class name at line 665 before Class.forName(), which shows it is trying to load the generated GinjectorImpl.java. I found that my generated GinjectorImpl.java is about 9 MB, with 100K+ lines of code. When I randomly remove some modules from my GWT app it works again, so I'm guessing it is too large for ASM to process. Any suggestions? Thanks. Environment: GWT 2.5.0, GIN 1.5.0, gwt-maven-plugin 2.5.0, Java 6 SE

    Read the article

  • How to loop video using NetStream in Data Generation Mode

    - by WesleyJohnson
    I'm using a NetStream in Data Generation Mode to play an embeded FLV using appendBytes. When the stream is finished playing, I'd like to loop the FLV file. I'm not sure how to achieve this. Here is what I have so far (this isn't a complete example): public function createBorderAnimation():void { // Load the skin image borderAnimation = Assets.BorderAnimation; // Convert the animation to a byte array borderAnimationBytes = new borderAnimation(); // Initialize the net connection border_nc = new NetConnection(); border_nc.connect( null ); // Initialize the net stream border_ns = new NetStream( border_nc ); border_ns.client = { onMetaData:function( obj:Object ):void{ trace(obj); } } border_ns.addEventListener( NetStatusEvent.NET_STATUS, border_netStatusHandler ); border_ns.play( null ); border_ns.appendBytes( borderAnimationBytes ); // Initialize the animation border_vd = new Video( 1024, 768 ); border_vd.attachNetStream( border_ns ); // Add the animation to the stage ui = new UIComponent(); ui.addChild( DisplayObject( border_vd ) ); grpBackground.addElement( ui ); } protected function border_netStatusHandler( event:NetStatusEvent ):void { if( event.info.code == "NetStream.Buffer.Flush" || event.info.code == "NetStream.Buffer.Empty" ) { border_ns.appendBytesAction( NetStreamAppendBytesAction.RESET_BEGIN ); border_ns.appendBytes( borderAnimationBytes ); border_ns.appendBytesAction( NetStreamAppendBytesAction.END_SEQUENCE ); } } This will loop the animation, but it starts chewing up memory like crazy. I've tried using NetStream.seek(0) and NetStream.appendBytesAction( NetStreamAppendBytesAction.RESET_SEEK ), but then I'm not sure what to do next. If you just try to call appendBytes again after that, it doesn't work, presumably because I'm appending the full byte array which has the FLV header and stuff? I'm not very familiar with how that all works. Any help is greatly appreciated.

    Read the article

  • get_user running in kernel mode returns an error

    - by Fangkai Yang
    Hi all, I have a problem with the get_user() macro. What I did is as follows: I ran the following program: int main() { int a = 20; printf("address of a: %p", &a); sleep(200); return 0; } When the program runs, it outputs the address of a, say, 0xbff91914. Then I pass this address to a module running in kernel mode that retrieves the contents at this address (I also made sure the process didn't terminate in the meantime, because I put it to sleep for 200 seconds). The address is first sent as a string, and I cast it to a pointer type: int * ptr = (int*)simple_strtol(buffer, NULL, 16); printk("address: %p", ptr); // I use this line to make sure the cast is correct. When running, it outputs bff91914, as expected. int val = 0; int res; res = get_user(val, (int*) ptr); However, res is always non-zero, meaning that get_user returns an error. I am wondering what the problem is. Thank you! -- Fangkai
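
    For reference, a minimal sketch of the usual constraint (not the original module code): get_user() can only resolve addresses that belong to current, i.e. the process that is executing the call into the kernel. If the address printed by the sleeping program is handed to the module from a different process (for example a shell writing to a /proc or device file), the same numeric address is meaningless in that caller's address space and get_user() fails with -EFAULT; reading another process's memory requires going through its mm (e.g. access_process_vm where available) rather than get_user().

        #include <linux/kernel.h>
        #include <linux/uaccess.h>

        /*
         * Sketch: only valid when called in the context of the process that
         * owns uptr (e.g. from a read/write/ioctl handler invoked by that
         * process). get_user() returns 0 on success and -EFAULT otherwise.
         */
        static int read_user_int(const int __user *uptr, int *out)
        {
                int val;

                if (get_user(val, uptr))
                        return -EFAULT;

                printk(KERN_INFO "value at %p: %d\n", uptr, val);
                *out = val;
                return 0;
        }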

    Read the article

  • Why is qry.Post executed in asynchronous mode?

    - by Ryan
    Recently I met a strange problem; see the code snippet below: var sqlCommand: string; connection: TADOConnection; qry: TADOQuery; begin connection := TADOConnection.Create(nil); try connection.ConnectionString := 'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=Test.MDB;Persist Security Info=False'; connection.Open(); qry := TADOQuery.Create(nil); try qry.Connection := connection; qry.SQL.Text := 'Select * from aaa'; qry.Open; qry.Append; qry.FieldByName('TestField1').AsString := 'test'; qry.Post; beep; finally qry.Free; end; finally connection.Free; end; end; First, create a new Access database named test.mdb and put it in the directory of this test project, then create a new table named aaa in it with only one text-type field named TestField1. Set a breakpoint at the "beep" line, then launch the test application under the IDE in debug mode. When the IDE stops at the breakpoint line (qry.Post has been executed), open test.mdb in Microsoft Access and open table aaa: you will find no changes in it. If you let the IDE continue running by pressing F9, a new record is inserted into table aaa; but if you press Ctrl+F2 to terminate the application at the breakpoint, table aaa has no record inserted. Under normal circumstances, a new record should be inserted into table aaa after qry.Post executes. Can anyone explain this problem? It has troubled me for a long time. Thanks! BTW, the IDE is Delphi 2010, and the Access MDB file was created by Microsoft Access 2007 under Windows 7.

    Read the article

  • Node & Redis: Crucial Design Issues in Production Mode

    - by Ali
    This question is a hybrid one, being both technical and system-design related. I'm developing the backend of an application that will handle approx. 4K requests per second. We are using Node.js for speed, and in terms of our database structure we are using MongoDB, with Redis as a layer between Node and MongoDB handling volatile operations. I'm quite stressed because we are expecting concurrent requests that we need to handle carefully, and we are quite close to launch. However, I do not believe I've applied the correct approach with Redis. I have a class Student, and students constantly change states (such as 'active', 'doing homework', 'in lesson', etc.), so I created a Redis DB for each state (1 for 'active', 2 for 'doing homework'). Below is the structure of the 'active' students table: xa5p - JSON stringified object #1; pQrW - JSON stringified object #2; active_student_table - {{studentId:'xa5p'}, {studentId:'pQrW'}}. Since there is no 'select all keys' method in Redis, I've been advised to use a set, so that when I run the 'smembers' command I receive the keys and later do a 'get' for each id in order to find a specific user (let's say one older than 15). I've also been told that I should never use KEYS in production. My question is, no matter how 'conceptual' it is: what specific things should I avoid doing with Node and Redis in production? Are there any issues with my design? Students must be objects, and I could keep them in a list, but I haven't done that yet. Is that crucial in production?
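
    As a point of comparison, a small Node.js sketch (key names are hypothetical) of the set-plus-hash layout usually suggested for this kind of state tracking: one set of student ids per state, one hash per student, reads resolved with SMEMBERS plus a pipelined MULTI instead of KEYS or one round trip per id, and state changes done atomically.

        // Sketch only; key names are hypothetical.
        var redis = require('redis');
        var client = redis.createClient();

        // Fetch every student currently in a given state with two round trips:
        // SMEMBERS for the ids, then one pipelined HGETALL per id.
        function studentsInState(state, callback) {
          client.smembers('students:' + state, function (err, ids) {
            if (err) return callback(err);
            var multi = client.multi();
            ids.forEach(function (id) {
              multi.hgetall('student:' + id);
            });
            multi.exec(callback);
          });
        }

        // Move a student between states atomically so a concurrent reader never
        // sees the id in both sets (or in neither).
        function moveStudent(id, fromState, toState, callback) {
          client.multi()
            .srem('students:' + fromState, id)
            .sadd('students:' + toState, id)
            .exec(callback);
        }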

    Read the article

  • VS2010 development web server does not use integrated-mode HTTP handlers/modules

    - by Domenic
    I am developing an ASP.NET MVC 2 web site, targeted for .NET Framework 4.0, using Visual Studio 2010. My web.config contains the following code: <system.webServer> <modules runAllManagedModulesForAllRequests="true"> <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" /> </modules> <handlers> <add name="DotLess" type="dotless.Core.LessCssHttpHandler,dotless.Core" path="*.less" verb="*" /> </handlers> </system.webServer> When I use Build > Publish to put the web site on my local IIS7 instance, it works great. However, when I use Debug > Start Debugging, neither the HTTP handler nor module are executed on any requests. Strangely enough, when I put the handler and module <add /> tags back into <system.web /> under <httpHandlers /> and <httpModules />, they work. This seems to imply that the development web server is running in classic mode. How do I fix this?
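
    For what it's worth, the Visual Studio development web server (Cassini) only runs the classic system.web pipeline, so registrations that live solely under system.webServer are ignored until the site runs on IIS7 in integrated mode. One common workaround, sketched below with the same types as above, is to register the module and handler in both places and suppress the duplicate-registration validation in integrated mode:

        <!-- Sketch: keep both registrations so the dev server (classic pipeline)
             and IIS7 integrated mode each pick up the module and handler. -->
        <system.web>
          <httpModules>
            <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" />
          </httpModules>
          <httpHandlers>
            <add path="*.less" verb="*" type="dotless.Core.LessCssHttpHandler,dotless.Core" />
          </httpHandlers>
        </system.web>
        <system.webServer>
          <validation validateIntegratedModeConfiguration="false" />
          <modules runAllManagedModulesForAllRequests="true">
            <add name="XhtmlModule" type="DomenicDenicola.Website.XhtmlModule" />
          </modules>
          <handlers>
            <add name="DotLess" type="dotless.Core.LessCssHttpHandler,dotless.Core" path="*.less" verb="*" />
          </handlers>
        </system.webServer>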

    Read the article

  • Windows loader problem - turn on verbose mode

    - by doobop
    Hi, I'm in the process of reorganizing some of the legacy libraries in our application, which has unmanaged code calling into libraries of managed code. Now that I have the code reorganized, it produces the following loader error: ... 'app.exe': Loaded 'C:\WINDOWS\system32\CsDisp.dll' 'app.exe': Loaded 'C:\WINDOWS\system32\psapi.dll' 'app.exe': Loaded 'C:\WINDOWS\system32\shell32.dll' 'app.exe': Loaded 'C:\appCode\Debug\daq206_32.dll', Binary was not built with debug information. 'app.exe': Loaded 'C:\appCode\Debug\SiUSBXp.dll', Binary was not built with debug information. 'app.exe': Loaded 'C:\appCode\Debug\AdlinkDAQ.dll', Symbols loaded. 'app.exe': Loaded 'C:\WINDOWS\system32\P9842.dll', Binary was not built with debug information. LDR: LdrRelocateImageWithBias() failed 0xc0000018 LDR: OldBase : 10000000 LDR: NewBase : 00A80000 LDR: Diff : 0x7c90d6fa0012f6cc LDR: NextOffset : 00000000 LDR: *NextOffset : 0x0 LDR: SizeOfBlock : 0xa80000 Debugger:: An unhandled non-continuable exception was thrown during process load I believe the 0xc0000018 error indicates an overlapping address range. So, I have two questions. First, what linker options may cause this error? I'm currently linking with /DYNAMICBASE:NO and /FIXED:No, as this was how some of the previous libraries were set up. Second, is there a way to turn on a verbose mode for the loader so I can see exactly what it's trying to load? P9842 is a third-party library, so I imagine the loader is getting to one of my libraries after P9842 and failing on that one. Can I narrow it down? Thanks.

    Read the article

  • Boost multi_index_container crash in release mode

    - by Zan Lynx
    I have a program that I just changed to use a boost::multi_index_container collection. After I did that and tested my code in debug mode, I was feeling pretty good about myself. However, I then compiled a release build with NDEBUG set, and the code crashed. Not immediately, but sometimes in single-threaded tests and often in multi-threaded tests. The segmentation faults happen deep inside Boost insert and rotate functions related to the index updates, and they are happening because a node has NULL left and right pointers. My code looks a bit like this: struct Implementation { typedef std::pair<uint32_t, uint32_t> update_pair_type; struct watch {}; struct update {}; typedef boost::multi_index_container< update_pair_type, boost::multi_index::indexed_by< boost::multi_index::ordered_unique< boost::multi_index::tag<watch>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::first> >, boost::multi_index::ordered_non_unique< boost::multi_index::tag<update>, boost::multi_index::member<update_pair_type, uint32_t, &update_pair_type::second> > > > update_map_type; typedef std::vector< update_pair_type > update_list_type; update_map_type update_map; update_map_type::iterator update_hint; void register_update(uint32_t watch_offset, uint32_t update_offset); void do_updates(uint32_t start, uint32_t end); }; void Implementation::register_update(uint32_t watch_offset, uint32_t update_offset) { update_pair_type new_pair( watch_offset, update_offset ); update_hint = update_map.insert(update_hint, new_pair); if( update_hint->second != update_offset ) { bool replaced _unused_ = update_map.replace(update_hint, new_pair); assert(replaced); } }
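
    One thing worth ruling out, sketched below (index tags and the vector typedef omitted for brevity): multi_index_container has no internal locking, so concurrent calls to register_update from several threads are a data race, and a concurrent insert can also invalidate the cached update_hint iterator; either can corrupt the tree nodes in exactly the way the release-mode crash suggests. The sketch serializes access with a single mutex and avoids keeping a hint across calls.

        #include <stdint.h>
        #include <utility>
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/thread/mutex.hpp>

        // Sketch: guard every access to the container (and drop the long-lived
        // hint iterator) so multi-threaded tests cannot race on the tree nodes.
        typedef std::pair<uint32_t, uint32_t> update_pair_type;

        typedef boost::multi_index_container<
            update_pair_type,
            boost::multi_index::indexed_by<
                boost::multi_index::ordered_unique<
                    boost::multi_index::member<update_pair_type, uint32_t,
                                               &update_pair_type::first> >,
                boost::multi_index::ordered_non_unique<
                    boost::multi_index::member<update_pair_type, uint32_t,
                                               &update_pair_type::second> > >
        > update_map_type;

        struct Implementation
        {
            update_map_type update_map;
            boost::mutex    update_mutex;

            void register_update(uint32_t watch_offset, uint32_t update_offset)
            {
                boost::mutex::scoped_lock guard(update_mutex);

                update_pair_type new_pair(watch_offset, update_offset);
                std::pair<update_map_type::iterator, bool> result =
                    update_map.insert(new_pair);
                if (!result.second && result.first->second != update_offset)
                    update_map.replace(result.first, new_pair);
            }
        };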

    Read the article

  • visual studio attaching to a process in debug mode

    - by user1612986
    I have a strange problem. The DLL that I built (let's call it my.dll) in C++ with Visual Studio 2010 uses a third-party library (say tp.lib), which in turn calls a third-party DLL (say tp.dll). For debugging purposes I have, under Configuration Properties > Debugging, Command: Excel.exe and Command Arguments: "$(TargetPath)". On my computer I also set the PATH variable to the directory where tp.dll resides. Now when I hit F5 in Visual Studio, Excel opens up with my.dll and crashes, giving me a "cannot open in DOS mode" error. The reason this happens is that tp.dll is not deployed when the debug version of my.dll is deployed. When I open an instance of Excel separately and manually drop in the debug version of my.dll, everything works fine and I can see all the functions I wrote in my.dll. The only issue is that now I do not know how to debug, because I do not know how to attach Visual Studio to the instance of Excel I opened separately. My question is: (1) how can I attach Visual Studio to an already opened instance of Excel, or (2) how can I hit F5 and still make Excel pick up the required tp.dll from the directory specified in the PATH variable before it starts to deploy my.dll? Either of these will allow me to step through the code for the purpose of debugging. Thanks in advance.

    Read the article

  • Free web-based software for team collaboration/documentation

    - by Jason Antman
    Looking for some advice here, as my search has turned up to be pretty fruitless. My group (9 people - SAs, programmers, and two network guys) is looking for some sort of web tool to... ahem... "facilitate increased collaboration" (we didn't use a buzzword generator, I swear). At the moment, we have an unified ticketing system that's braindead, but is here to stay for political/logistical reasons. We've got 2 wikis ("old" and "new"), neither of which fulfill our needs, and are therefore not used very often. We're looking for a free (as in both cost and open source) web-based tool. Management side: Wants to be able to track project status, who's doing what, whether deadlines are being met, etc. Doesn't want full-fledged "project management" app, just something where we can update "yeah this was done" or "waiting for Bob to configure the widgets". TeamBox (www.teambox.com) was suggested, but it seems almost too gimmicky, and doesn't meet any of the other requirements: Non-management side: - flexible, powerful wiki for all documentation (i.e. includes good tables, easy markup, syntax highlighting, etc.) - good full text search of everything (i.e. type in a hostname and get every instance anyone ever uttered that name) - task lists or ToDo lists, hopefully about to be grouped into a number of "projects" - file uploads - RSS or Atom feeds, email alerts of updates We're open to doing some customizations (adding some features, notification/feeds, searching, SVN integration, etc.) but need something F/OSS that will run under Apache. My conundrum is that most of the choices I've found so far fall into one of these categories: project management/task tracking with poor wiki/documentation/knowledge base support wiki with no task tracking support ticketing system with everything else bolted on (we already have one that we're stuck with) code-centric application (we do little "development", mostly SA work) Any suggestions? Or, lacking that, any comments on which software would be easiest to add the lacking features to (hopefully ending up with something that actually looks good and works well)?

    Read the article

  • How to automate org-refile for multiple todo

    - by lawlist
    I'm looking to automate org-refile so that it will find all of the matches and re-file them to a specific location (but not archive). I found a fully automated method of archiving multiple todo, and I am hopeful to find or create (with some help) something similar to this awesome function (but for a different heading / location other than archiving): https://github.com/tonyday567/jwiegley-dot-emacs/blob/master/dot-org.el (defun org-archive-done-tasks () (interactive) (save-excursion (goto-char (point-min)) (while (re-search-forward "\* \\(None\\|Someday\\) " nil t) (if (save-restriction (save-excursion (org-narrow-to-subtree) (search-forward ":LOGBOOK:" nil t))) (forward-line) (org-archive-subtree) (goto-char (line-beginning-position)))))) I also found this (written by aculich), which is a step in the right direction, but still requires repeating the function manually: http://stackoverflow.com/questions/7509463/how-to-move-a-subtree-to-another-subtree-in-org-mode-emacs ;; I also wanted a way for org-refile to refile easily to a subtree, so I wrote some code and generalized it so that it will set an arbitrary immediate target anywhere (not just in the same file). ;; Basic usage is to move somewhere in Tree B and type C-c C-x C-m to mark the target for refiling, then move to the entry in Tree A that you want to refile and type C-c C-w which will immediately refile into the target location you set in Tree B without prompting you, unless you called org-refile-immediate-target with a prefix arg C-u C-c C-x C-m. ;; Note that if you press C-c C-w in rapid succession to refile multiple entries it will preserve the order of your entries even if org-reverse-note-order is set to t, but you can turn it off to respect the setting of org-reverse-note-order with a double prefix arg C-u C-u C-c C-x C-m. (defvar org-refile-immediate nil "Refile immediately using `org-refile-immediate-target' instead of prompting.") (make-local-variable 'org-refile-immediate) (defvar org-refile-immediate-preserve-order t "If last command was also `org-refile' then preserve ordering.") (make-local-variable 'org-refile-immediate-preserve-order) (defvar org-refile-immediate-target nil) "Value uses the same format as an item in `org-refile-targets'." (make-local-variable 'org-refile-immediate-target) (defadvice org-refile (around org-immediate activate) (if (not org-refile-immediate) ad-do-it ;; if last command was `org-refile' then preserve ordering (let ((org-reverse-note-order (if (and org-refile-immediate-preserve-order (eq last-command 'org-refile)) nil org-reverse-note-order))) (ad-set-arg 2 (assoc org-refile-immediate-target (org-refile-get-targets))) (prog1 ad-do-it (setq this-command 'org-refile))))) (defadvice org-refile-cache-clear (after org-refile-history-clear activate) (setq org-refile-targets (default-value 'org-refile-targets)) (setq org-refile-immediate nil) (setq org-refile-immediate-target nil) (setq org-refile-history nil)) ;;;###autoload (defun org-refile-immediate-target (&optional arg) "Set current entry as `org-refile' target. Non-nil turns off `org-refile-immediate', otherwise `org-refile' will immediately refile without prompting for target using most recent entry in `org-refile-targets' that matches `org-refile-immediate-target' as the default." 
(interactive "P") (if (equal arg '(16)) (progn (setq org-refile-immediate-preserve-order (not org-refile-immediate-preserve-order)) (message "Order preserving is turned: %s" (if org-refile-immediate-preserve-order "on" "off"))) (setq org-refile-immediate (unless arg t)) (make-local-variable 'org-refile-targets) (let* ((components (org-heading-components)) (level (first components)) (heading (nth 4 components)) (string (substring-no-properties heading))) (add-to-list 'org-refile-targets (append (list (buffer-file-name)) (cons :regexp (format "^%s %s$" (make-string level ?*) string)))) (setq org-refile-immediate-target heading)))) (define-key org-mode-map "\C-c\C-x\C-m" 'org-refile-immediate-target) It sure would be helpful if aculich, or some other maven, could please create a variable similar to (setq org-archive-location "~/0.todo.org::* Archived Tasks") so users can specify the file and heading, which is already a part of the org-archive-subtree functionality. I'm doing a search and mark because I don't have the wherewithal to create something like org-archive-location for this setup. EDIT: One step closer -- almost home free . . . (defun lawlist-auto-refile () (interactive) (beginning-of-buffer) (re-search-forward "\* UNDATED") (org-refile-immediate-target) ;; cursor must be on a heading to work. (save-excursion (re-search-backward "\* UNDATED") ;; must be written in such a way so that sub-entries of * UNDATED are not searched; or else infinity loop. (while (re-search-backward "\* \\(None\\|Someday\\) " nil t) (org-refile) ) ) )

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they might work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or Power Point. While in the past, copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users. So, how does this option work: after building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only. TIP: It is not necessary to highlight and copy the row or column descriptions. Next, from the Smart View ribbon select Copy Data Point. Then transfer to the Word or Power Point document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu the data will be retrieved and displayed. (Maybe at that moment a login window pops up and you need to provide your credentials.) It works in the same way if you just copy one single number without any row or column descriptions, for example in order to incorporate it into a continuous text: Before refresh: After refresh: From now on, for any subsequent updates of the data shown in your documents you only need to refresh data by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again. As you might realize, trying out this feature on your own, there won't be any Point of View shown in the Office document. Also you have seen in the example, where only a single data cell was copied, that there aren't any member names or row/column descriptions copied, which are usually required in an ad-hoc report in order to exactly define where data comes from or how data is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or Power Point document as well. They are stored in the background for each individual data cell copied and can be made visible by double-clicking the data cell as shown in the following screen shot (which is taken from another context). So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database. And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. Remember, at that time we had the following selection: By selecting the Manage POV option from the Smart View menu in Word or Power Point… … the following POV Manager – Queries window opens: You can now change your selection for each dimension from the original POV by either double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon on the top right hand side of the window.
After confirming your changes you need to refresh your document again. Be aware, that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV. TIP Build your original report already in a way that dimensions you might want to change from within Word or Power Point are placed in the POV. And there is another really nice feature I wouldn’t like to miss mentioning: Using Dynamic Data Points in the way described above, you will never miss or need to search again for your original Excel sheet from which values were taken and copied as data points into an Office document. Because from even only one single data cell Smart View is able to recreate the entire original report content with just a few clicks: Select one of the numbers from within your Word or Power Point document by double-clicking.   Then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent actual state of the database. (It might be necessary to provide your credentials before data is displayed.) However, in order to make this work, an active online connection to your databases on the server is necessary and at least read access to the retrieved data. But apart from this, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the common way for drilling, pivoting and all the other known functions and features. So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of this page) or in the OU Learning paths section , where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected] . About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.  

    Read the article

  • Orchestrating the Virtual Enterprise

    - by John Murphy
    During the American Industrial Revolution, the Ford Motor Company did it all. It turned raw materials into a showroom full of Model Ts. It owned a steel mill, a glass factory, and an automobile assembly line. The company was both self-sufficient and innovative and went on to become one of the largest and most profitable companies in the world. Nowadays, it's unusual for any business to follow this vertical integration model because it's much harder to be best in class across such a wide range of capabilities and services. Instead, businesses focus on their core competencies and outsource other business functions to specialized suppliers. They exchange vertical integration for collaboration. When done well, all parties benefit from this arrangement and the collaboration leads to the creation of an agile, lean and successful "virtual enterprise." Case in point: For Sun hardware, Oracle outsources most of its manufacturing and all of its logistics to third parties. These are vital activities, but ones where Oracle doesn't have a core competency, so we shift them to business partners who do. Within our enterprise, we always retain the core functions of product development, support, and most of the sales function, because that's what constitutes our core value to our customers. This is a perfect example of a virtual enterprise. What are the implications of this? It means that we must exchange direct internal control for indirect external collaboration. This fundamentally changes the relative importance of different business processes, the boundaries of security and information sharing, and the relationship of the supply chain systems to the ERP. The challenge is that the systems required to support this virtual paradigm are still mired in "island enterprise" thinking. But help is at hand. Developments such as the Web, social networks, collaboration, and rules-based orchestration offer great potential to fundamentally re-architect supply chain systems to better support the virtual enterprise. Supply Chain Management Systems in a Virtual Enterprise Historically, enterprise software was constructed to automate the ERP - and then the supply chain systems extended the ERP. They were joined at the hip. In virtual enterprises, the supply chain system needs to be ERP agnostic, sitting above each of the ERPs that are distributed across the virtual enterprise - most of which are operating in other businesses. This is vital so that the supply chain system can manage the flow of material and the related information through the multiple enterprises. It has to have strong collaboration tools. It needs to be highly flexible. Users need to be able to see information that's coming from multiple sources and be able to react and respond to events across those sources. Oracle Fusion Distributed Order Orchestration (DOO) is a perfect example of a supply chain system designed to operate in this virtual way. DOO embraces the idea that a company's fulfillment challenge is a distributed, multi-enterprise problem. It enables users to manage the process and the trading partners in a uniform way and deliver a consistent user experience while operating over a heterogeneous, virtual enterprise. This is a fundamental shift at the core of managing supply chains. It forces virtual enterprises to think architecturally about how best to construct their supply chain systems. Case in point: almost everyone has ordered from Amazon.com at one time or another.
Our orders are as likely to be fulfilled by third parties as they are by Amazon itself. To deliver the order promptly and efficiently, Amazon has to send it to the right fulfillment location and know the availability in that location. It needs to be able to track status of the fulfillment and deal with exceptions. As a virtual enterprise, Amazon's operations, using thousands of trading partners, requires a very different approach to fulfillment than the traditional 'take an order and ship it from your own warehouse' model. Amazon had no choice but to develop a complex, expensive and custom solution to tackle this problem as there used to be no product solution available. Now, other companies who want to follow similar models have a better off-the-shelf choice -- Oracle Distributed Order Orchestration (DOO).  Consider how another of our customers is using our distributed orchestration solution. This major airplane manufacturer has a highly complex business and interacts regularly with the U.S. Government and major airlines. It sits in the middle of an intricate supply chain and needed to improve visibility across its many different entities. Oracle Fusion DOO gives the company an orchestration mechanism so it could improve quality, speed, flexibility, and consistency without requiring an organ transplant of these highly complex legacy systems. Many retailers face the challenge of dealing with brick and mortar, Web, and reseller channels. They all need to be knitted together into a virtual enterprise experience that is consistent for their customers. When a large U.K. grocer with a strong brick and mortar retail operation added an online business, they turned to Oracle Fusion DOO to bring these entities together. Disturbing the Peace with Acquisitions Quite often a company's ERP system is disrupted when it acquires a new company. An acquisition can inject a new set of processes and systems -- or even introduce an entirely new business like Sun's hardware did at Oracle. This challenge has been a driver for some of our DOO customers. A large power management company is using Oracle Fusion DOO to provide the flexibility to rapidly integrate additional products and services into its central fulfillment operation. The Flip Side of Fulfillment Meanwhile, we haven't ignored similar challenges on the supply side of the equation. Specifically, how to manage complex supply in a flexible way when there are multiple trading parties involved? How to manage the supply to suppliers? How to manage critical components that need to merge in a tier two or tier three supply chain? By investing in supply orchestration solutions for the virtual enterprise, we plan to give users better visibility into their network of suppliers to help them drive down costs. We also think this technology and full orchestration process can be applied to the financial side of organizations. An example is transactions that flow through complex internal structures to minimize tax exposure. We can help companies manage those transactions effectively by thinking about the internal organization as a virtual enterprise and bringing the same solution set to this internal challenge.  The Clear Front Runner No other company is investing in solving the virtual enterprise supply chain issues like Oracle is. Oracle is in a unique position to become the gold standard in this market space. We have the infrastructure of Oracle technology. We already have an Oracle Fusion DOO application which embraces the best of what's required in this area. 
And we're absolutely committed to extending our Fusion solution to other use cases and delivering even more business value.

    Read the article

  • Which doctype should I use for GWT 2.0?

    - by David
    I think I should use <!DOCTYPE html> for my new GWT application; I understand that doing so will put my application into standards-compliant mode. Am I correct? Are there any disadvantages to using this doctype? Does GWT work properly in standards-compliant mode? I'm wary because the GWT tutorial still uses the HTML 4.01 transitional doctype.

    Read the article

  • Connecting to Active Directory Application Mode from Perl

    - by Khurram Aziz
    I am trying to connect to an Active Directory Application Mode instance. The instance is connectable from third-party LDAP clients like Softerra LDAP Browser, but I am getting the following error when connecting from Perl: Net::LDAP=HASH(0x876d8e4) sending: Net::LDAP=HASH(0x876d8e4) received: 30 84 00 00 00 A7 02 01 02 65 84 00 00 00 9E 0A 0........e...... 01 01 04 00 04 84 00 00 00 93 30 30 30 30 30 34 ..........000004 44 43 3A 20 4C 64 61 70 45 72 72 3A 20 44 53 49 DC: LdapErr: DSI 44 2D 30 43 30 39 30 36 32 42 2C 20 63 6F 6D 6D D-0C09062B, comm 65 6E 74 3A 20 49 6E 20 6F 72 64 65 72 20 74 6F ent: In order to 20 70 65 72 66 6F 72 6D 20 74 68 69 73 20 6F 70 perform this op 65 72 61 74 69 6F 6E 20 61 20 73 75 63 63 65 73 eration a succes 73 66 75 6C 20 62 69 6E 64 20 6D 75 73 74 20 62 sful bind must b 65 20 63 6F 6D 70 6C 65 74 65 64 20 6F 6E 20 74 e completed on t 68 65 20 63 6F 6E 6E 65 63 74 69 6F 6E 2E 2C 20 he connection., 64 61 74 61 20 30 2C 20 76 65 63 65 00 __ __ __ data 0, vece. My directory structure is Partition: CN=Apps,DC=MyCo,DC=COM User exists as CN=myuser,CN=Apps,DC=MyCo,DC=COM I have a couple of other entries of the custom class which I am interested in browsing; those instances appear fine in ADSI Edit, Softerra LDAP Browser etc. I am new to Perl. My Perl code is: #!/usr/bin/perl use Net::LDAP; $ldap = Net::LDAP->new("127.0.0.1", debug => 2, user => "CN=myuser,CN=Apps,DC=MyCo,DC=COM", password => "secret" ) or die "$@"; $ldap->bind(version => 3) or die "$@"; print "Connected to ldap\n"; $mesg = $ldap->search( filter => "(objectClass=*)" ) or die ("Failed on search.$!"); my $max = $mesg->count; print "$max records found!\n"; for( my $index = 0 ; $index < $max ; $index++) { my $entry = $mesg->entry($index); my $dn = $entry->dn; @attrs = $entry->attributes; foreach my $var (@attrs) { $attr = $entry->get_value( $var, asref => 1 ); if ( defined($attr) ) { foreach my $value ( @$attr ) { print "$var: $value\n"; } } } } $ldap->unbind();
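
    For comparison, a minimal sketch of a bind that actually authenticates (the DN and password are taken from the question; the search base is an assumption): Net::LDAP->new() does not log in, and the user/password options passed to it are not used for authentication, so the bind() above is anonymous and AD LDS/ADAM then rejects the search with "a successful bind must be completed". Passing the DN and password to bind() itself, and giving search() a base, avoids both problems.

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Net::LDAP;

        my $ldap = Net::LDAP->new('127.0.0.1') or die "$@";

        # Authenticate as part of the bind itself (simple bind with DN + password).
        my $mesg = $ldap->bind(
            'CN=myuser,CN=Apps,DC=MyCo,DC=COM',
            password => 'secret',
            version  => 3,
        );
        die $mesg->error if $mesg->code;

        # A search base is also required; the partition DN is assumed here.
        $mesg = $ldap->search(
            base   => 'CN=Apps,DC=MyCo,DC=COM',
            filter => '(objectClass=*)',
        );
        die $mesg->error if $mesg->code;

        printf "%d records found!\n", $mesg->count;
        $ldap->unbind;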

    Read the article

  • Which is a good opensource user management system?

    - by Lost_in_code
    I'm new to PHP/MySQL and am trying to create a website which will allow users to register. In the future, there will be a paid content area where content will be shown based on payment status. Is there a good open-source, lightweight framework which takes care of the user management part (register, edit user info, retrieve lost password, etc.)? I'm a Flash platform developer and not aware of how to take care of things like session hijacking, XSS, etc. Should I go ahead and learn to do all this on my own, without using any framework? I thought of using WordPress's user management system, but I'm not sure how easy that would be. Any suggestion would be great.

    Read the article

  • NHibernate Session Management Advice

    - by Hugusta
    I need some advice on NHibernate Session Management for a C# WinForms application. I am currently porting an application to use NHibernate. I am also employing a UnitOfWork pattern as described in the link below; http://nhforge.org/wikis/patternsandpractices/nhibernate-and-the-unit-of-work-pattern.aspx My question relates to Sessions. Can you only have one session running per thread at all times? I have a scenario in which a Session (UnitOfWork) may be open for a form shown by the application but the user opens another form (i.e. Tools - Options) which I would like to have its own UnitOfWork. Clearly in this instance it would make more sense to open another Session for the "Tools - Options" form and not use the currently open session for the underlying form. Can we have a Dictionary of Sessions on the one thread? Any advice on session management is appreciated.
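
    For what it's worth, NHibernate does not limit you to one ISession per thread; a session is just a unit-of-work object, so a modal form such as Tools - Options can own its own session independently of the one behind the main form. A rough C# sketch of that session-per-form idea follows (types and wiring are hypothetical, not a full UnitOfWork implementation):

        using NHibernate;

        public class OptionsForm : System.Windows.Forms.Form
        {
            private readonly ISession _session;

            public OptionsForm(ISessionFactory sessionFactory)
            {
                // A second, independent session alongside whatever session
                // the underlying form is holding open.
                _session = sessionFactory.OpenSession();
            }

            private void SaveOptions()
            {
                using (ITransaction tx = _session.BeginTransaction())
                {
                    // ... load/modify/save option entities via _session ...
                    tx.Commit();
                }
            }

            protected override void Dispose(bool disposing)
            {
                if (disposing)
                {
                    _session.Dispose();   // the unit of work ends when the form closes
                }
                base.Dispose(disposing);
            }
        }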

    Read the article

  • Animating custom-drawn UITableViewCell when entering edit mode

    - by Adam Alexander
    Background First of all, much gratitude to atebits for their very informative blog post Fast Scrolling in Tweetie with UITableView. The post explains in detail how the developers were able to squeeze as much scrolling performance as possible out of the UITableViews in Tweetie. Goals Beginning with the source code linked from the blog post (original) (my github repo): Allow a UITableView using these custom cells to switch to edit mode, exposing the UI for deleting an item from the table. (github commit) Move the cell's text aside as the deletion control slides in from the left. This is complete, although the text jumps back and forth without animation. (github commit) Apply animation to the text movement in goal 2 above for a smooth user experience. This is the step where I became stuck. The Question What is the best way to introduce this animation to complete goal 3? It would be nice if this could be done in a manner that keeps the logic from my last commit because I would love the option to move the conflicting part of the view only, while any non-conflicting portions (such as right-justified text) stay in the same place or move a different number of pixels. If the above is not possible then undoing my last commit and replacing it with an option that slides the entire view to the right would be a workable solution also. I appreciate any help anyone can provide, from quick pointers and ideas all the way to code snippets or github commits. Of course you are welcome to fork my repo if you would like. I will be staying involved with this question to make sure any successful resolution is committed to github and fully documented here. Thanks very much for your time! Updated thoughts I have been thinking about this a lot since my first post and realized that moving some text items relative to others in the view could undo some of the original performance goals solved in the original blog post. So at this point I am thinking a solution where the entire single subview is animated to its new postion may be the best one. Second, if it is done in this way there may be an instance where the subview has a custom color or gradient background. Hopefully this can be done in a way that in its normal position the background extends unseen off to the left enough so that when the view is slid to the right the custom background is still visible across the entire cell.

    Read the article

  • Difficulty getting Saxon into XQuery mode instead of XSLT

    - by Rosarch
    I'm having difficulty getting XQuery to work. I downloaded Saxon-HE 9.2. It seems to only want to work with XSLT. When I type: java -jar saxon9he.jar I get back usage information for XSLT. When I use the command syntax for XQuery, it doesn't recognize the parameters (like -q), and gives XSLT usage information. Here are some command line interactions: >java -jar saxon9he.jar No source file name Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument -c:filename Use compiled stylesheet from file -config:filename Use configuration file -cr:classname Use collection URI resolver class -dtd:on|off Validate using DTD -expand:on|off Expand defaults defined in schema/DTD -explain[:filename] Display compiled expression tree -ext:on|off Allow|Disallow external Java functions -im:modename Initial mode -ief:class;class;... List of integrated extension functions -it:template Initial template -l:on|off Line numbering for source document -m:classname Use message receiver class -now:dateTime Set currentDateTime -o:filename Output file or directory -opt:0..10 Set optimization level (0=none, 10=max) -or:classname Use OutputURIResolver class -outval:recover|fatal Handling of validation errors on result document -p:on|off Recognize URI query parameters -r:classname Use URIResolver class -repeat:N Repeat N times for performance measurement -s:filename Initial source document -sa Use schema-aware processing -strip:all|none|ignorable Strip whitespace text nodes -t Display version and timing information -T[:classname] Use TraceListener class -TJ Trace calls to external Java functions -tree:tiny|linked Select tree model -traceout:file|#null Destination for fn:trace() output -u Names are URLs not filenames -val:strict|lax Validate using schema -versionmsg:on|off Warn when using XSLT 1.0 stylesheet -warnings:silent|recover|fatal Handling of recoverable errors -x:classname Use specified SAX parser for source file -xi:on|off Expand XInclude on all documents -xmlversion:1.0|1.1 Version of XML to be handled -xsd:file;file.. Additional schema documents to be loaded -xsdversion:1.0|1.1 Version of XML Schema to be used -xsiloc:on|off Take note of xsi:schemaLocation -xsl:filename Stylesheet file -y:classname Use specified SAX parser for stylesheet --feature:value Set configuration feature (see FeatureKeys) -? Display this message param=value Set stylesheet string parameter +param=filename Set stylesheet document parameter ?param=expression Set stylesheet parameter using XPath !option=value Set serialization option >java -jar saxon9he.jar -q:"..\w3xQueryTut.xq" Unknown option -q:..\w3xQueryTut.xq Saxon-HE 9.2.0.6J from Saxonica Usage: see http://www.saxonica.com/documentation/using-xsl/commandline.html Options: -a Use xml-stylesheet PI, not -xsl argument // etc... >java net.sf.saxon.Query -q:"..\w3xQueryTut.xq" Exception in thread "main" java.lang.NoClassDefFoundError: net/sf/saxon/Query Caused by: java.lang.ClassNotFoundException: net.sf.saxon.Query // etc... Could not find the main class: net.sf.saxon.Query. Program will exit. I'm probably making some stupid mistake. Do you know what it could be?
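
    For the record, the likely explanation: java -jar always runs the jar's manifest main class, which for Saxon is the XSLT entry point, so the -q option is rejected; and running net.sf.saxon.Query without the jar on the classpath gives the NoClassDefFoundError seen above. Naming the XQuery entry point explicitly and putting saxon9he.jar on the classpath (assuming the jar is in the current directory) should look roughly like this:

        >java -cp saxon9he.jar net.sf.saxon.Query -q:"..\w3xQueryTut.xq"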

    Read the article
