Search Results

Search found 3767 results on 151 pages for 'workflow foundation'.

Page 32 of 151

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work... 1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table. 2) Once the entity is created the user will mark it as the master. 3) Another user can make changes to the master only by "checking out" the entity. Multiple users can check out the entity at the same time. 4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status. 5) After an authorized user reviews the entity, they can promote it to master, which will put the original record in a tombstoned status. The way they are currently accomplishing the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate and give it a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it will be promoted to master at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity. This was a very straightforward way of making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resources you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis
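    One alternative worth comparing against, sketched here purely as an illustration (the Entity/EntityVersion names, fields and statuses are my own assumptions, not the poster's schema): keep a single row per entity and move the check-out lifecycle into a separate version table, so "all current versions" becomes a status filter instead of duplicated rows across 20 tables.

        using System;

        public enum VersionStatus { Master, CheckedOut, NeedsApproval, Tombstoned }

        // One row per real-world entity; never duplicated.
        public class Entity
        {
            public int EntityId { get; set; }
            public int MasterVersionId { get; set; }   // points at the currently approved version
        }

        // One row per version; carries the workflow state and the versioned data (or a delta).
        public class EntityVersion
        {
            public int VersionId { get; set; }
            public int EntityId { get; set; }          // FK back to the single Entity row
            public DateTime CreatedOn { get; set; }
            public VersionStatus Status { get; set; }
            public string CheckedOutBy { get; set; }
            // ...versioned attributes or a serialized delta payload...
        }

    Looking up an entity then means loading its Entity row plus any EntityVersion rows whose Status is not Tombstoned.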

    Read the article

  • 2nd Year College - Learning - Microsoft Server Products

    - by Ryan
    As the title says, I just finished my first year of college (majoring in Software Engineering). Fortunately my school likes Microsoft enough, and I can get pretty much anything I want that Microsoft sells. I also can get IBM Websphere and the like for free as well. Earlier this year, I set up an oldish computer (2.6 Pentium D, x64) to run ubuntu server headless. I'm predominately a Java developer, so Apache, Maven, Nexus, Sonar, SVN, etc made it onto the machine. It worked really well for personal and school projects, especially team projects (quick ramp up). Anyways, I started to pick up C# to complement my Java knowledge (don't judge me :P), and am interested in working with some of the associated Microsoft equivalents. The machine currently has the Ubuntu install, as well as Windows 7 Ultimate. I do all of my actual development work off my laptop, also running Windows 7 Ultimate. I was wondering what software you would recommend putting on the machine. I’m not actually serving anything off the machine itself, but in Ubuntu I had it doing integration tests with Hudson on every commit, and profiling my applications, etc, etc. The machine would be running headless, and I would remote into it. Here is what I am currently leaning towards / wondering about: Windows 7 Ultimate vs Windows Server 2008 (R2) (no one is really clear why I should go with one over the other) Windows Team Foundation Sharepoint (Never used it before, kind of meh about it) IBM Websphere or Glassfish (Some Java EE web server) SQL Server 2008 A DVCS In order to better control product conflicts / limit resource use, I’m wondering if I should install things into virtual machines (I can get VmWare or Microsoft Virtualization Products) I also plan on installing everything I had running under Linux (it’s almost entirely Java based development software, so it’ll run on both, only reason I went with ubuntu during the year was because the apache build seemed better). I’m primarily looking to become familiar with enterprise software development tools, as well as get something functional that will help my development process. (IE, I’ll still use project and assign tasks even though I might be the only one to assign tasks to, just to practice doing so). Is there any other software / configuration details I should explore? Opinions on my current list? I primarily use C#, Java, and PHP. I'm familiar with ruby, and python as well. Thanks!

    Read the article

  • CFStrings and storing them into models, related topics

    - by Jasconius
    I have a very frustrating issue that I believe involves CFStringRef and passing them along to custom model properties. The code is pretty messy right now as I am in a debug state, but I will try to describe the problem in words as best as I can. I have a custom model, User, which for irrelevant reasons, I am storing CF types derived from the Address Book API into. Examples include: Name, email as NSStrings. I am simply retrieving the CFStringRef value from the AddressBook API and casting as a string, whereupon I assign to the custom model instance and then CFRelease the string. These NSString properties are set as (nonatomic, retain). I then store this model into an NSArray, and I use this Array as a datasource for a UITableView When accessing the object in the cellForRowAtIndexPath, I get a memory access error. When I do a Debug, I see that the value for this datasource array appears at first glance to be corrupted. I've seen strange values assigned to it, including just plain strings, such as one that I fed to an NSLog function in earlier in the method. So, the thing that leads me to believe that this is Core Foundation related is that I am executing this exact same code, in the same class even, on non-Address Book data, in fact, just good old JSON parsed strings, which produce true Cocoa NSStrings, that I follow the same exact steps to create the datasource array. This code works fine. I have a feeling that my (retain) property declaration and/or my [stringVar release] in my custom model dealloc method may be causing memory problems (since it is my understanding that you shouldn't call a Cocoa retain or release on a CF object). Here is the code. I know some of this is super-roundabout but I was trying to make things as explicit as possible for the sake of debugging. NSMutableArray *friendUsers = [[NSMutableArray alloc] init]; int numberOfPeople = CFArrayGetCount(people); for (int i = 0; i < numberOfPeople; i++) { ABMutableMultiValueRef emails = ABRecordCopyValue(CFArrayGetValueAtIndex(people, i), kABPersonEmailProperty); if (ABMultiValueGetCount(emails) > 0) { User *addressContact = [[User alloc] init]; NSString *firstName = (NSString *)ABRecordCopyValue(CFArrayGetValueAtIndex(people, i), kABPersonFirstNameProperty); NSString *lastName = (NSString *)ABRecordCopyValue(CFArrayGetValueAtIndex(people, i), kABPersonLastNameProperty); NSLog(@"%@ and %@", firstName, lastName); NSString *fullName = [NSString stringWithFormat:@"%@ %@", firstName, lastName]; NSString *email = [NSString stringWithFormat:@"%@", (NSString *)ABMultiValueCopyValueAtIndex(emails, 0)]; NSLog(@"the email: %@", email); [addressContact setName:fullName]; [addressContact setEmail:email]; [friendUsers addObject:addressContact]; [firstName release]; [lastName release]; [email release]; [addressContact release]; } CFRelease(emails); } NSLog(@"friend count: %d", [friendUsers count]); abFriends = [NSArray arrayWithArray:friendUsers]; [friendUsers release]; All of that works, every logging statement returns as expected. But when I use abFriends as a datasource, poof. Dead. Is my approach all wrong? Any advice?

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus but I'm having trouble with the configuration of the distributors in order to make each step in the process scalable. Here's some info: The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart. Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor. I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when it is done, not Send() them. A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message. A process may need to join forks. I imagine I should use Sagas for this. Hopefully these assumptions are good otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages. NodeA workers contain a IHandleMessages processor, and publish EventA NodeB workers contain a IHandleMessages processor, and publish Event B NodeC workers contain a IHandleMessages processor, and then the pipeline is complete. Here are the relevant parts of the config files, where # denotes the number of the worker, (i.e. there are input queues NodeA.1 and NodeA.2): NodeA: <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data" > <MessageEndpointMappings> </MessageEndpointMappings> </UnicastBusConfig> NodeB: <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data" > <MessageEndpointMappings> <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" /> </MessageEndpointMappings> </UnicastBusConfig> NodeC: <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data" > <MessageEndpointMappings> <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" /> </MessageEndpointMappings> </UnicastBusConfig> And here are the relevant parts of the distributor configs: Distributor A: <add key="DataInputQueue" value="NodeA.Distrib.Data"/> <add key="ControlInputQueue" value="NodeA.Distrib.Control"/> <add key="StorageQueue" value="NodeA.Distrib.Storage"/> Distributor B: <add key="DataInputQueue" value="NodeB.Distrib.Data"/> <add key="ControlInputQueue" value="NodeB.Distrib.Control"/> <add key="StorageQueue" value="NodeB.Distrib.Storage"/> Distributor C: <add key="DataInputQueue" value="NodeC.Distrib.Data"/> <add key="ControlInputQueue" value="NodeC.Distrib.Control"/> <add key="StorageQueue" value="NodeC.Distrib.Storage"/> I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen: Both instances of Node B report that it is subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great. 
Both instances of Node B report that it is subscribing to EventA, however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it. In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies. So, my questions: Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup. Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?
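    For context, each worker node in this layout is just a message handler that consumes the previous step's event and publishes its own. A minimal sketch follows; the message and class names are mine, and the exact marker interfaces and injection style depend on the NServiceBus version in use:

        using System;
        using NServiceBus;

        // Hypothetical pipeline messages; in the question they live in the Messages assembly.
        public class EventA : IMessage { public Guid WorkItemId { get; set; } }
        public class EventB : IMessage { public Guid WorkItemId { get; set; } }

        // A Node B worker: receives EventA via its distributor's data queue, does the
        // expensive step-B work, then publishes EventB so Node C (and any subscriber
        // added later) can react.
        public class StepBHandler : IHandleMessages<EventA>
        {
            public IBus Bus { get; set; }   // injected by NServiceBus

            public void Handle(EventA message)
            {
                // ... computationally expensive work for step B ...
                Bus.Publish(new EventB { WorkItemId = message.WorkItemId });
            }
        }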

    Read the article

  • Blank Screen at boot Ubuntu 12.04 - nvidia-current - Macbook Air 3,2

    - by soulnafein
    I've installed nvidia-current using the Additional Drivers application in Ubuntu 12.04. I need those drivers so I can use accelerated WebGL. After installing the drivers, and rebooting X fails to start and I have a frozen system/dark screen. Below is the content of Xorg.0.log How can I fix this problem? [ 4.666] X.Org X Server 1.11.3 Release Date: 2011-12-16 [ 4.666] X Protocol Version 11, Revision 0 [ 4.666] Build Operating System: Linux 2.6.42-23-generic x86_64 Ubuntu [ 4.666] Current Operating System: Linux david-macbook-air 3.2.0-34-generic #53-Ubuntu SMP Thu Nov 15 10:48:16 UTC 2012 x86_64 [ 4.666] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.2.0-34-generic root=UUID=b3d5ae2a-72af-4ef9-b775-0d40b5f80f9b ro quiet splash vt.handoff=7 [ 4.666] Build Date: 29 August 2012 12:12:33AM [ 4.666] xorg-server 2:1.11.4-0ubuntu10.8 (For technical support please see http://www.ubuntu.com/support) [ 4.666] Current version of pixman: 0.24.4 [ 4.666] Before reporting problems, check http://wiki.x.org to make sure that you have the latest version. [ 4.666] Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. [ 4.666] (==) Log file: "/var/log/Xorg.0.log", Time: Thu Dec 13 10:18:02 2012 [ 4.668] (==) Using system config directory "/usr/share/X11/xorg.conf.d" [ 4.668] (==) No Layout section. Using the first Screen section. [ 4.668] (==) No screen section available. Using defaults. [ 4.668] (**) |-->Screen "Default Screen Section" (0) [ 4.668] (**) | |-->Monitor "<default monitor>" [ 4.668] (==) No monitor specified for screen "Default Screen Section". Using a default monitor configuration. [ 4.668] (==) Automatically adding devices [ 4.668] (==) Automatically enabling devices [ 4.668] (WW) The directory "/usr/share/fonts/X11/cyrillic" does not exist. [ 4.668] Entry deleted from font path. [ 4.668] (WW) The directory "/usr/share/fonts/X11/100dpi/" does not exist. [ 4.668] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi/" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/100dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/usr/share/fonts/X11/75dpi" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (WW) The directory "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType" does not exist. [ 4.669] Entry deleted from font path. [ 4.669] (==) FontPath set to: /usr/share/fonts/X11/misc, /usr/share/fonts/X11/Type1, built-ins [ 4.669] (==) ModulePath set to "/usr/lib/x86_64-linux-gnu/xorg/extra-modules,/usr/lib/xorg/extra-modules,/usr/lib/xorg/modules" [ 4.669] (II) The server relies on udev to provide the list of input devices. If no devices become available, reconfigure udev or disable AutoAddDevices. 
[ 4.669] (II) Loader magic: 0x7f6222467b00 [ 4.669] (II) Module ABI versions: [ 4.669] X.Org ANSI C Emulation: 0.4 [ 4.669] X.Org Video Driver: 11.0 [ 4.669] X.Org XInput driver : 16.0 [ 4.669] X.Org Server Extension : 6.0 [ 4.670] (--) PCI:*(0:2:0:0) 10de:08a3:106b:00d3 rev 162, Mem @ 0x92000000/16777216, 0x80000000/268435456, 0x90000000/33554432, I/O @ 0x00001000/128, BIOS @ 0x????????/131072 [ 4.670] (II) Open ACPI successful (/var/run/acpid.socket) [ 4.670] (II) LoadModule: "extmod" [ 4.671] (II) Loading /usr/lib/xorg/modules/extensions/libextmod.so [ 4.671] (II) Module extmod: vendor="X.Org Foundation" [ 4.671] compiled for 1.11.3, module version = 1.0.0 [ 4.671] Module class: X.Org Server Extension [ 4.671] ABI class: X.Org Server Extension, version 6.0 [ 4.671] (II) Loading extension MIT-SCREEN-SAVER [ 4.671] (II) Loading extension XFree86-VidModeExtension [ 4.671] (II) Loading extension XFree86-DGA [ 4.671] (II) Loading extension DPMS [ 4.671] (II) Loading extension XVideo [ 4.671] (II) Loading extension XVideo-MotionCompensation [ 4.671] (II) Loading extension X-Resource [ 4.671] (II) LoadModule: "dbe" [ 4.671] (II) Loading /usr/lib/xorg/modules/extensions/libdbe.so [ 4.671] (II) Module dbe: vendor="X.Org Foundation" [ 4.671] compiled for 1.11.3, module version = 1.0.0 [ 4.671] Module class: X.Org Server Extension [ 4.671] ABI class: X.Org Server Extension, version 6.0 [ 4.671] (II) Loading extension DOUBLE-BUFFER [ 4.671] (II) LoadModule: "glx" [ 4.671] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/libglx.so [ 4.869] (II) Module glx: vendor="NVIDIA Corporation" [ 4.869] compiled for 4.0.2, module version = 1.0.0 [ 4.869] Module class: X.Org Server Extension [ 4.869] (II) NVIDIA GLX Module 295.40 Thu Apr 5 21:57:38 PDT 2012 [ 4.869] (II) Loading extension GLX [ 4.869] (II) LoadModule: "record" [ 4.870] (II) Loading /usr/lib/xorg/modules/extensions/librecord.so [ 4.870] (II) Module record: vendor="X.Org Foundation" [ 4.870] compiled for 1.11.3, module version = 1.13.0 [ 4.870] Module class: X.Org Server Extension [ 4.870] ABI class: X.Org Server Extension, version 6.0 [ 4.870] (II) Loading extension RECORD [ 4.870] (II) LoadModule: "dri" [ 4.870] (II) Loading /usr/lib/xorg/modules/extensions/libdri.so [ 4.870] (II) Module dri: vendor="X.Org Foundation" [ 4.870] compiled for 1.11.3, module version = 1.0.0 [ 4.870] ABI class: X.Org Server Extension, version 6.0 [ 4.870] (II) Loading extension XFree86-DRI [ 4.870] (II) LoadModule: "dri2" [ 4.871] (II) Loading /usr/lib/xorg/modules/extensions/libdri2.so [ 4.871] (II) Module dri2: vendor="X.Org Foundation" [ 4.871] compiled for 1.11.3, module version = 1.2.0 [ 4.871] ABI class: X.Org Server Extension, version 6.0 [ 4.871] (II) Loading extension DRI2 [ 4.871] (==) Matched nvidia as autoconfigured driver 0 [ 4.871] (==) Matched nouveau as autoconfigured driver 1 [ 4.871] (==) Matched nv as autoconfigured driver 2 [ 4.871] (==) Matched vesa as autoconfigured driver 3 [ 4.871] (==) Matched fbdev as autoconfigured driver 4 [ 4.871] (==) Assigned the driver to the xf86ConfigLayout [ 4.871] (II) LoadModule: "nvidia" [ 4.871] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.887] (II) Module nvidia: vendor="NVIDIA Corporation" [ 4.887] compiled for 4.0.2, module version = 1.0.0 [ 4.887] Module class: X.Org Video Driver [ 4.892] (II) LoadModule: "nouveau" [ 4.894] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 4.894] (II) Module nouveau: vendor="X.Org Foundation" [ 4.894] compiled for 1.11.3, 
module version = 1.0.2 [ 4.894] Module class: X.Org Video Driver [ 4.894] ABI class: X.Org Video Driver, version 11.0 [ 4.894] (II) LoadModule: "nv" [ 4.895] (WW) Warning, couldn't open module nv [ 4.895] (II) UnloadModule: "nv" [ 4.895] (II) Unloading nv [ 4.895] (EE) Failed to load module "nv" (module does not exist, 0) [ 4.895] (II) LoadModule: "vesa" [ 4.895] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 4.896] (II) Module vesa: vendor="X.Org Foundation" [ 4.896] compiled for 1.11.3, module version = 2.3.0 [ 4.896] Module class: X.Org Video Driver [ 4.896] ABI class: X.Org Video Driver, version 11.0 [ 4.896] (II) LoadModule: "fbdev" [ 4.896] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 4.896] (II) Module fbdev: vendor="X.Org Foundation" [ 4.896] compiled for 1.11.3, module version = 0.4.2 [ 4.896] ABI class: X.Org Video Driver, version 11.0 [ 4.896] (==) Matched nvidia as autoconfigured driver 0 [ 4.896] (==) Matched nouveau as autoconfigured driver 1 [ 4.896] (==) Matched nv as autoconfigured driver 2 [ 4.896] (==) Matched vesa as autoconfigured driver 3 [ 4.896] (==) Matched fbdev as autoconfigured driver 4 [ 4.896] (==) Assigned the driver to the xf86ConfigLayout [ 4.896] (II) LoadModule: "nvidia" [ 4.896] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.896] (II) Module nvidia: vendor="NVIDIA Corporation" [ 4.896] compiled for 4.0.2, module version = 1.0.0 [ 4.896] Module class: X.Org Video Driver [ 4.896] (II) UnloadModule: "nvidia" [ 4.896] (II) Unloading nvidia [ 4.896] (II) Failed to load module "nvidia" (already loaded, 32610) [ 4.896] (II) LoadModule: "nouveau" [ 4.897] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so [ 4.897] (II) Module nouveau: vendor="X.Org Foundation" [ 4.897] compiled for 1.11.3, module version = 1.0.2 [ 4.897] Module class: X.Org Video Driver [ 4.897] ABI class: X.Org Video Driver, version 11.0 [ 4.897] (II) UnloadModule: "nouveau" [ 4.897] (II) Unloading nouveau [ 4.897] (II) Failed to load module "nouveau" (already loaded, 32610) [ 4.897] (II) LoadModule: "nv" [ 4.897] (WW) Warning, couldn't open module nv [ 4.897] (II) UnloadModule: "nv" [ 4.897] (II) Unloading nv [ 4.897] (EE) Failed to load module "nv" (module does not exist, 0) [ 4.897] (II) LoadModule: "vesa" [ 4.898] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so [ 4.898] (II) Module vesa: vendor="X.Org Foundation" [ 4.898] compiled for 1.11.3, module version = 2.3.0 [ 4.898] Module class: X.Org Video Driver [ 4.898] ABI class: X.Org Video Driver, version 11.0 [ 4.898] (II) UnloadModule: "vesa" [ 4.898] (II) Unloading vesa [ 4.898] (II) Failed to load module "vesa" (already loaded, 0) [ 4.898] (II) LoadModule: "fbdev" [ 4.898] (II) Loading /usr/lib/xorg/modules/drivers/fbdev_drv.so [ 4.898] (II) Module fbdev: vendor="X.Org Foundation" [ 4.898] compiled for 1.11.3, module version = 0.4.2 [ 4.898] ABI class: X.Org Video Driver, version 11.0 [ 4.898] (II) UnloadModule: "fbdev" [ 4.898] (II) Unloading fbdev [ 4.899] (II) Failed to load module "fbdev" (already loaded, 0) [ 4.899] (II) NVIDIA dlloader X Driver 295.40 Thu Apr 5 21:38:35 PDT 2012 [ 4.899] (II) NVIDIA Unified Driver for all Supported NVIDIA GPUs [ 4.899] (II) NOUVEAU driver Date: Wed Sep 12 13:42:43 2012 +0200 [ 4.899] (II) NOUVEAU driver for NVIDIA chipset families : [ 4.899] RIVA TNT (NV04) [ 4.899] RIVA TNT2 (NV05) [ 4.899] GeForce 256 (NV10) [ 4.899] GeForce 2 (NV11, NV15) [ 4.899] GeForce 4MX (NV17, NV18) [ 4.899] GeForce 3 (NV20) [ 4.900] GeForce 4Ti (NV25, NV28) 
[ 4.900] GeForce FX (NV3x) [ 4.900] GeForce 6 (NV4x) [ 4.900] GeForce 7 (G7x) [ 4.900] GeForce 8 (G8x) [ 4.900] GeForce GTX 200 (NVA0) [ 4.900] GeForce GTX 400 (NVC0) [ 4.900] (II) VESA: driver for VESA chipsets: vesa [ 4.900] (II) FBDEV: driver for framebuffer: fbdev [ 4.900] (++) using VT number 7 [ 4.902] (II) Loading sub module "fb" [ 4.902] (II) LoadModule: "fb" [ 4.902] (II) Loading /usr/lib/xorg/modules/libfb.so [ 4.902] (II) Module fb: vendor="X.Org Foundation" [ 4.902] compiled for 1.11.3, module version = 1.0.0 [ 4.902] ABI class: X.Org ANSI C Emulation, version 0.4 [ 4.902] (II) Loading sub module "wfb" [ 4.902] (II) LoadModule: "wfb" [ 4.903] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 4.905] (II) Module wfb: vendor="X.Org Foundation" [ 4.905] compiled for 1.11.3, module version = 1.0.0 [ 4.905] ABI class: X.Org ANSI C Emulation, version 0.4 [ 4.905] (II) Loading sub module "ramdac" [ 4.905] (II) LoadModule: "ramdac" [ 4.905] (II) Module "ramdac" already built-in [ 4.907] (II) Loading /usr/lib/x86_64-linux-gnu/xorg/extra-modules/nvidia_drv.so [ 4.907] (II) Loading /usr/lib/xorg/modules/libwfb.so [ 4.907] (II) Loading /usr/lib/xorg/modules/libfb.so [ 4.912] (WW) Falling back to old probe method for vesa [ 4.912] (WW) Falling back to old probe method for fbdev [ 4.912] (II) Loading sub module "fbdevhw" [ 4.912] (II) LoadModule: "fbdevhw" [ 4.912] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so [ 4.912] (II) Module fbdevhw: vendor="X.Org Foundation" [ 4.912] compiled for 1.11.3, module version = 0.0.2 [ 4.912] ABI class: X.Org Video Driver, version 11.0 [ 4.912] (II) NVIDIA(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32 [ 4.912] (==) NVIDIA(0): Depth 24, (==) framebuffer bpp 32 [ 4.912] (==) NVIDIA(0): RGB weight 888 [ 4.912] (==) NVIDIA(0): Default visual is TrueColor [ 4.912] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0) [ 4.912] (**) NVIDIA(0): Enabling 2D acceleration [ 5.442] (EE) NVIDIA(0): Failed to initialize the display subsystem for the NVIDIA [ 5.442] (EE) NVIDIA(0): graphics device! [ 5.442] (EE) NVIDIA(0): Failed to get supported display device(s) [ 5.442] (EE) NVIDIA(0): Failed to initialize dac HAL [ 5.442] (II) UnloadModule: "nvidia" [ 5.442] (II) Unloading nvidia [ 5.442] (II) UnloadModule: "wfb" [ 5.442] (II) Unloading wfb [ 5.442] (II) UnloadModule: "fb" [ 5.443] (II) Unloading fb [ 5.443] (EE) Screen(s) found, but none have a usable configuration. [ 5.443] Fatal server error: [ 5.443] no screens found [ 5.443] Please consult the The X.Org Foundation support at http://wiki.x.org for help. [ 5.443] Please also check the log file at "/var/log/Xorg.0.log" for additional information. [ 5.443] [ 5.447] ddxSigGiveUp: Closing log [ 5.447] Server terminated with error (1). Closing log file.

    Read the article

  • usb hub not working on resume from suspend

    - by user1781498
    All the usb ports on my laptop work but when I resume from suspend some the usb ports don't work. lsusb Before Suspend: Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 04f3:014b Elan Microelectronics Corp. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 002: ID 04f2:b3a6 Chicony Electronics Co., Ltd Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub lsusb After Suspend: Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 04f3:014b Elan Microelectronics Corp. Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

    Read the article

  • Github Workflow: Pushing small fix branches to remote, or keep them local?

    - by Isaac Hodes
    In Scott Chacon's workflow (explained eg in this SO answer), with essentially two silos (development, and master), if, say I have a small bug to fix (e.g. can be fixed with a few characters) is the optimal way of doing that: a) branch off of development a branch called e.g. fix_123. Push this branch to origin as I work on it. When it's done, code-reviewed, whatever, merge into development and push development to origin. b) Same as above, but without pushing fix_123 to origin.

    Read the article

  • What diagrams, other than the class diagram and the workflow diagram, are useful for explaining how an application works?

    - by Goran_Mandic
    I am working on a small Delphi project composed of two units. One unit is for the GUI, and the other for data management, file parsing, list iterating and so on. I've already made a class diagram, and my workflow diagram looks like hell; it's too complex for anyone to read. I've considered making a dataflow diagram, but it would be even more complex. A use case diagram wouldn't be of use either. Am I missing some diagram type that could somehow represent the relationship between my two units?

    Read the article

  • How to improve workflow between developer and designer with Expression Blend?

    - by Amenti
    We use WPF and Expression Blend 4. I'm trying to improve our workflow by tutoring one of our designers to use it for styling and animation. Slowly but surely I get the impression that Blend in itself is too technical for the designer in question. I myself use it only occasionally (it's great for Visual States, for instance) because a lot of things are easier done in code or not possible at all in Blend alone. It seems a developer with design experience is a lot more productive with it than a sole designer. Are there any good online resources, or advice you could give me, on how to improve this situation?

    Read the article

  • Experience with SVN vs. Team Foundation Server?

    - by bcwood
    A few months back my team switched our source control over to Subversion from Visual SourceSafe, and we haven't been happier. Recently I've been looking at Team Foundation Server, and at least on the surface, it seems very impressive. There is some great integration with Visual Studio, and lots of great tools for DBAs, testers, project managers, etc. The most obvious difference between these two products is price. It's hard to beat Subversion (free). Team Foundation Server is quite expensive, so the extra features would really have to kick Subversion in the pants. My question is: does anyone have practical experience with both? How do they compare, and is Team Foundation Server actually worth all the money?

    Read the article

  • Any foundation to administrate an Android open source application?

    - by Nicolas Raoul
    Our open source application is quite popular, and we have many developers. The app uses my Android Market account, and I shared the keys with a developer. But if both of us disappear, the application's Market account will be lost, and all users would be left stranded. Giving the keys to all developers is not a solution either, for security reasons. Is there a foundation (like the Mozilla Foundation or the Apache Foundation) that would agree to hold our Android Market account and release new versions in accordance with their own guidelines and our community consensus? There are quite a lot of Open Source foundations, but I could not find any that tackles this particular aspect of Android applications.

    Read the article

  • Optimized workflow through a new geo data warehouse: a success project for LINZ AG – by CISS TDI and Primebird

    - by A&C Redaktion
    Satisfied customers are the best marketing strategy. That is why we offer specialized partners the opportunity to have professional case studies written about their own successful Oracle projects. Here on the blog we present, from time to time, reference reports that partners are already using successfully in their marketing. Today: the Oracle partners CISS TDI and PRIMEBIRD and their joint large-scale project for LINZ AG. As a utility company, the Austrian LINZ AG is responsible, among other things, for electricity, gas and district heating, the water and sewer system, and local public transport. For years it has been using Oracle solutions to manage its steadily growing geodata holdings. In 2012 the Oracle partners CISS TDI and PRIMEBIRD expanded the existing Oracle solution into a "geo data warehouse" and optimized the data model for the online plan information service. The new data warehouse presents LINZ AG's geographic data in a uniform structure and thus enables a significant workflow optimization. The benefits: administrative effort was reduced, the data collection process was standardized, and the necessary data exports, for example to property developers or the municipality, run smoothly with the new web application. Details on the exact course of the project, the specific requirements of geodata, and the cooperation between LINZ AG, CISS TDI and PRIMEBIRD can be found in the LINZ AG case study. Any specialized partner who has completed a representative Oracle project can take advantage of the opportunity to present themselves and their work profitably. Experienced trade journalists interview both the partner and the end customer and produce a detailed, attractively prepared report. Publication takes place through various marketing channels. Of course, partners can also use the case studies for their own marketing purposes, e.g. for events. Interested? Then contact Ms. Marion Aschenbrenner. We need a few key facts from you, such as the customer name, contact person, the Oracle products used, a description of the project in 3-4 sentences, and your in-house contact. And then: let your good work speak for itself!

    Read the article

  • How should I track approval workflow when users at every security level can create a request?

    - by Eric Belair
    I am writing a new application that allows users to enter requests. Once a request is entered, it must follow an approval workflow to be finally approved by a user at the highest security level. So, let's say a user at Security Level 1 enters a request. This request must be approved by his superior - a user at Security Level 2. Once the Security Level 2 user approves it, it must be approved by a user at Security Level 3. Once the Security Level 3 user approves it, it is considered fully approved. However, users at any of the three Security Levels can enter requests. So, if a Security Level 3 user enters a request, it is automatically considered "fully approved". And, if a Security Level 2 user enters a request, it must only be approved by a Security Level 3 user. I'm currently storing each approval status in a Database Log Table, like so:

    STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
    --------------  ----------  ---------------  -----------------------
    1               1           USER_SUBMIT      2012-09-01 00:00:00.000
    2               1           APPROVED_LEVEL2  2012-09-01 01:00:00.000
    3               1           APPROVED_LEVEL3  2012-09-01 02:00:00.000
    4               2           USER_SUBMIT      2012-09-01 02:30:00.000
    5               2           APPROVED_LEVEL2  2012-09-01 02:45:00.000

    My question is, which is a better design: (1) record all three statuses for every request, or (2) record only the statuses needed according to the Security Level of the user submitting the request. In Case 2, the data might look like this for two requests - one submitted by a Security Level 2 user and another submitted by a Security Level 3 user:

    STATUS_ID (PK)  REQUEST_ID  STATUS           STATUS_DATE
    --------------  ----------  ---------------  -----------------------
    1               3           APPROVED_LEVEL2  2012-09-01 01:00:00.000
    2               3           APPROVED_LEVEL3  2012-09-01 02:00:00.000
    3               4           APPROVED_LEVEL3  2012-09-01 02:00:00.000
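    One way to frame option 2 in code is to derive the required approval trail from the submitter's security level instead of always writing three rows. A rough sketch, where the enum values and helper are hypothetical and simply mirror the statuses above:

        using System.Collections.Generic;

        public enum RequestStatus { UserSubmit, ApprovedLevel2, ApprovedLevel3 }

        public static class ApprovalWorkflow
        {
            // Statuses that must be logged for a request entered at the given security
            // level; a request is fully approved once ApprovedLevel3 has been recorded.
            public static IEnumerable<RequestStatus> RequiredStatuses(int submitterLevel)
            {
                if (submitterLevel <= 1) yield return RequestStatus.UserSubmit;
                if (submitterLevel <= 2) yield return RequestStatus.ApprovedLevel2;
                yield return RequestStatus.ApprovedLevel3;
            }
        }

    RequiredStatuses(3) yields only the final approval, matching request 4 above, while RequiredStatuses(1) yields the full three-step trail.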

    Read the article

  • Finding back to an old project that was turned upside-down by the developer. Your workflow?

    - by Kreativrandale
    After some time I'm being asked to work again on a heavy web project I did (layout, HTML/CSS) about a year ago. There are some changes that have to be made, basically some CSS and JS stuff. By now the whole project has been turned upside down by the developer. It's hard for me to reconnect with his work, especially because my old files and file structure won't work anymore. That's why I need an up-to-date working environment, but I don't want to change the files on the server directly. I need to do some testing and improving while doing this. So, what is your workflow in such a case? I thought about copying the whole server, or parts of it, to my own home server. But even that would be a big task for me (I'm more the front-end guy). It would be great if there's a way to shrink it down (PHP, MySQL, ...), since I only need to change some CSS/HTML/JavaScript. Are there any tools available? I'd love to hear how you handle such situations. Thanks a lot!

    Read the article

  • Query on MVVM pattern in WPF?

    - by Ashish Ashu
    I am implementing the MVVM pattern in my WPF application. My application's main window is divided into four parts: the main menu on the top, an Outlook-style navigation control on the left, a list view in the middle, and another list view on the bottom. The navigation control shows different setting (configuration) controls in tab items. All four of the above are user controls placed in the main window. Corresponding to each user control there is a separate view model, which is bound as the view model in the XAML of each control; however, the model class remains the same across all the view models. The MainWindow has a separate view model of its own, which is also bound in the XAML. Please help me out in framing a design in which the view models of all the controls above can interact with each other. Please let me know if my question is not clear to you!
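    A common way to let sibling view models talk without holding references to each other is a lightweight messenger (mediator). The sketch below is not a framework class but a minimal stand-in for what libraries such as MVVM Light's Messenger or Prism's EventAggregator provide; the message type is invented for illustration:

        using System;
        using System.Collections.Generic;

        // Minimal publish/subscribe hub shared by the view models.
        public static class Messenger
        {
            private static readonly Dictionary<Type, List<Action<object>>> subscribers =
                new Dictionary<Type, List<Action<object>>>();

            public static void Subscribe<TMessage>(Action<TMessage> handler)
            {
                List<Action<object>> handlers;
                if (!subscribers.TryGetValue(typeof(TMessage), out handlers))
                    subscribers[typeof(TMessage)] = handlers = new List<Action<object>>();
                handlers.Add(o => handler((TMessage)o));
            }

            public static void Publish<TMessage>(TMessage message)
            {
                List<Action<object>> handlers;
                if (subscribers.TryGetValue(typeof(TMessage), out handlers))
                    foreach (var h in handlers) h(message);
            }
        }

        // Example message: the navigation view model announces a selection and the
        // list view models react, while the shared model classes stay untouched.
        public class NavigationItemSelected
        {
            public string ItemName { get; set; }
        }

    A list view model would subscribe to NavigationItemSelected in its constructor, and the navigation view model would call Messenger.Publish(new NavigationItemSelected { ItemName = "Settings" }) when the user clicks a group.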

    Read the article

  • No attachable databases were found on the SQL Server

    - by George
    I'm fumbling my way through a basic installation of TFS 2010... I see that the TFS_Configuration and Tfs_DefaultCollection databases were successfully created on my local machine. The installation seemed to go smoothly, but it looks like I need to configure a Team Project Collection before I can start using it. Yet, the wizard seems unable to locate the appropriate database that I expected should have been created during the setup process. The server name defaulted properly to the name of the local machine. Did I miss a step somewhere?

    Read the article

  • Team Build: The path 'Path' is already mapped in workspace 'workspace' error even after deleting all

    - by Glenn Slaven
    I have this problem when I queue a build. The build dies with the error "The path C:\[Path]\Sources is already mapped in workspace [Server Name]", the same as in this question. But I've removed all the workspaces on the build agent by running the command tf workspaces /remove:* and also by deleting the TFS cache folder. I've also restarted the server, but the error keeps happening on each build.
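    If the command line keeps missing mappings (for example ones owned by the build service account or registered from another machine), the same cleanup can be scripted against the TFS 2010 client object model. This is only a sketch under my own assumptions: the server URL is a placeholder, and the overloads should be verified against the installed SDK.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class CleanBuildWorkspaces
        {
            static void Main()
            {
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var vcs = collection.GetService<VersionControlServer>();

                // null acts as a wildcard for workspace name, owner and computer.
                foreach (Workspace ws in vcs.QueryWorkspaces(null, null, null))
                {
                    Console.WriteLine("{0} ({1}) on {2}", ws.Name, ws.OwnerName, ws.Computer);
                    // ws.Delete();   // uncomment to actually remove the workspace
                }
            }
        }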

    Read the article

  • TFS 2010 Workitem auto transition

    - by Luka
    Hi, how can I make a work item automatically transition from the New to the Assigned state when I populate the Assigned To field? I.e. I have reported a bug, so its state is New. Now I populate (select from the dropdown) the Assigned To field, and I want it to automatically transition to the Assigned state (Active state). Please help.

    Read the article

  • Roll up project-level tasks to the project collection portal in TFS2010

    - by adam.mokan
    I have a Project Collection set up in my TFS2010RC deployment. I have two Projects set up under this collection with their own task lists, which are populated with data. I fully expected the tasks from these individual projects to "roll up" and appear in the task list at the Project Collection level, but they do not. The Project Collection task list is empty. Basically, I'm looking to provide a view so a supervisor could see all tasks across projects quickly and easily. I'm sure I could write a Reporting Services report, but it seems like this is something so basic that it would have been included and just needs to be turned on or something. I'm sure I'm probably missing something really simple here. Thanks.
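    As far as I know, tasks do not roll up to the collection level on their own, but a flat WIQL query against the collection's WorkItemStore does return work items from every team project, which may be enough for the supervisor view. A sketch follows; the collection URL is a placeholder, and the "Task" type name depends on the process template in use.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class AllTasksInCollection
        {
            static void Main()
            {
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                // No "Team Project = ..." clause, so every project in the collection is searched.
                WorkItemCollection tasks = store.Query(
                    "SELECT [System.Id], [System.Title], [System.State] " +
                    "FROM WorkItems WHERE [System.WorkItemType] = 'Task' " +
                    "ORDER BY [System.TeamProject]");

                foreach (WorkItem task in tasks)
                    Console.WriteLine("{0} {1}", task.Id, task.Title);
            }
        }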

    Read the article

  • Custom activity in WF 4.0: WorkflowItemsPresenter wont show converted array

    - by lotusnote
    Hi There, we have an Array which is converted via a Binded Converter: else if (TTools.IsOfBaseClass(value.GetType(), typeof(System.Activities.Presentation.Model.ModelItemCollection))) { OurBaseClass[] test = (value as ModelItemCollection).GetCurrentValue() as OurBaseClass[]; List<OurBaseClass> listOfArray = new List<OurBaseClass>(); foreach (OurBaseClass item in test) { listOfArray.Add(item); } return listOfArray; } the convertion works well but it is not shown in our dynamically gui gui code with bindings: <sap:WorkflowItemsPresenter xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:sap="clr-namespace:System.Activities.Presentation;assembly=System.Activities.Presentation" Grid.Column="0" Name="MyArray" Items="{Binding Path=ModelItem.MyArray}" MinWidth="150" Margin="0"> <sap:WorkflowItemsPresenter.SpacerTemplate > <DataTemplate> <TextBlock Foreground="DarkGray" Margin="30">..</TextBlock> </DataTemplate> </sap:WorkflowItemsPresenter.SpacerTemplate> <sap:WorkflowItemsPresenter.ItemsPanel> <ItemsPanelTemplate> <StackPanel Orientation="Vertical" HorizontalAlignment="Center" Margin="0"/> </ItemsPanelTemplate> </sap:WorkflowItemsPresenter.ItemsPanel> </sap:WorkflowItemsPresenter> Why is the gui not shown as a List??? it works well without converter. Thanks

    Read the article

  • Drools flow architecture, Drools flow events with AND join nodes

    - by Shoukry K
    I have been evaluating a number of frameworks including jBPM and Drools flow for my application requirements. Lots of the opinions seem to be inclined towards Drools flow as its more flexible, knowledge oriented, easier to integrate with business rules, etc.. The application is some sort of an Email Campaign manager , where different customers can sign in, prepare (design) and launch email campaigns. The application should be able to do the following : 1- Receive a list of email addresses, send emails to each of these addresses starting from a certain date and during a certain time interval of the day , do some custom actions, and then wait for reply emails. 2- If a reply email is received and depending on the response text of the email , and depending on the time the email was received certain actions need to happen, web service calls need to take place, and error handling for these calls. 3- The application will manage and run many and different campaigns (different customers and different flows for each customer) at any point of time. The first question is : Is Drools flow the way to go about this? My main concerns are scalability, suspending, resuming flows, and long wait, and flows management. As you see from the requirements : There is a scheduling part : Certain flows need to be run at a certain point in time, they need to get suspended and then resumed. For example start sending emails starting on Dec 1st 2010 and send emails only in the time interval between 08:00 and 17:00 GMT. By then it might be that all subscribers have been sent emails, but it might not be the case, the process needs to (resume) on Dec 2nd and send a second batch, however certain (users) already received emails and they should be able to (continue at different stages of the flow) There are long wait states : Days or even weeks , i need to persist, suspend / resume and terminate flows (manage flows) External Events : This is where i got stuck first, i tried to put together a simple flow (see attached screenshot) See image http://img46.imageshack.us/img46/9620/workflowwithevents.png , there is a start node , connected to an action node, connected to a join. An event node is connected to a second action node, which is connected to the join. The join is an AND join , after the join there is an action and the end node. Here is the sample code i am using to launch the flow : KnowledgeBuilder builder = KnowledgeBuilderFactory .newKnowledgeBuilder(); builder.add(ResourceFactory.newClassPathResource("campaign.rf", CampaignsDroolsPoc.class), ResourceType.DRF); if (builder.hasErrors()) { KnowledgeBuilderErrors errors = builder.getErrors(); Iterator<KnowledgeBuilderError> iterator = errors.iterator(); while (iterator.hasNext()) { System.out.println(iterator.next().toString()); } } KnowledgeBase base = KnowledgeBaseFactory.newKnowledgeBase(); base.addKnowledgePackages(builder.getKnowledgePackages()); final StatefulKnowledgeSession ksession = base .newStatefulKnowledgeSession(); // KnowledgeRuntimeLoggerFactory.newConsoleLogger(ksession); ksession.getWorkItemManager().registerWorkItemHandler("Log", new SendSMSWorkItemHandler()); ProcessInstance startProcess = ksession.startProcess("flow"); System.out.println("Signaling event"); startProcess.signalEvent("ev1", "ev1"); System.out.println("Signaled"); ksession.fireUntilHalt(); I am noticing that the event get triggered, the action node connected to the event gets triggered, however things seem to get stuck at the join. 
The flow does not continue past the AND join and the flow seems to get stuck. The action following the node does not get triggered. I also went through the drools flow documentation , and all the example codes, however i didn't find anything there. In addition any hints about the way to go about architecting the solution, and implementing it would be great.

    Read the article
