Search Results

Search found 9271 results on 371 pages for 'properties'.


  • More Maintenance Plan Weirdness

    - by AjarnMark
    I’m not a big fan of the built-in Maintenance Plan functionality in SQL Server. I like the interface in SQL 2005 better than 2000 (it looks more like building an SSIS package) but it’s still a bit of a black box. You don’t really know what commands are being run based on the selections you have made, and you can easily make some unwise choices without realizing it, such as shrinking your database on a regular basis. I really prefer to know exactly which commands, and with which options, are being run on my servers.

    Recently I had another very strange thing happen with a Maintenance Plan, this time in SQL 2005, SP3. I inherited this server and have done a bit of cleanup on it, but had not yet gotten around to replacing the Maintenance Plans with all my own scripts. However, one of the maintenance plans, which was just responsible for doing LOG backups, was running more frequently than that system needed, and I thought I would just tweak the schedule a bit. So I opened the Maintenance Plan, edited the properties of the Subplan, set a new schedule, saved it and figured all was good to go. But the next execution of the Scheduled Job that triggers the Maintenance Plan code failed with an error about the owner of the job. Specifically the error was, “Unable to determine if the owner (OldDomain\OldDBAUserID) of job MaintenancePlanName.Subplan has server access (reason: Could not obtain information about Windows NT group/user 'OldDomain\OldDBAUserID').” I was really confused because I had previously updated all of the jobs to have current accounts as the owners. At first I thought it was just a fluke, but it happened on the next scheduled cycle, so I investigated further and, sure enough, that job had the old DBA’s account listed as the owner. I fixed it and the job successfully ran to completion.

    Now, I don’t really like mysteries like that, so I did some more testing and verified that, sure enough, just editing the Subplan schedule and saving the Maintenance Plan caused the Scheduled Job to be recreated with the old credentials. I don’t know where it is getting those credentials, but I can only assume that it is the same as the original creator of the Maintenance Plan, and for some reason it insists on using that ID for the job owner. I looked through the options in SSMS and could not find anything that would let me easily set the value that I wanted it to use. I suspect that if I did something like executing sp_changeobjectowner against the Maintenance Plan, it would use that new ID instead. I’m sure that there is a good reason that it works this way, but rather than mess around with it much more, I’m just going to spend my time rolling out my replacement scripts instead. Chalk this little hidden oddity up as yet one more reason I’m not a fan of Maintenance Plans.

    Read the article

  • Intel Centrino Wireless N 1000 doesn't work on a Lenovo Z560

    - by Timetraveler
    I upgraded my Ubuntu 11.04 to 11.10 and my WiFi has stopped working. I have a Lenovo Z560 that has an Intel Centrino Wireless-N 1000. I have searched various threads describing similar problems and have had no success. The wlan0 interface is not even showing up in rfkill. Please help me find a solution. The output of various debug commands is given below. Thanks in advance.

    lsb-release:
    DISTRIB_ID=Ubuntu DISTRIB_RELEASE=11.10 DISTRIB_CODENAME=oneiric DISTRIB_DESCRIPTION="Ubuntu 11.10"

    uname -a:
    Linux gurucharapathy-laptop 3.0.0-17-generic-pae #30-Ubuntu SMP Thu Mar 8 17:53:35 UTC 2012 i686 i686 i386 GNU/Linux

    lspci -nnk | grep -iA2 net:
    05:00.0 Network controller [0280]: Intel Corporation Centrino Wireless-N 1000 [8086:0084] Subsystem: Intel Corporation Centrino Wireless-N 1000 BGN [8086:1315] Kernel modules: iwlagn
    06:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller [10ec:8136] (rev 02) Subsystem: Lenovo Device [17aa:392e] Kernel driver in use: r8169

    iwconfig:
    lo no wireless extensions.
    eth0 no wireless extensions.

    iwlist scan:
    lo Interface doesn't support scanning.
    eth0 Interface doesn't support scanning.

    rfkill list all:
    0: hci0: Bluetooth Soft blocked: no Hard blocked: no
    1: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no
    2: ideapad_bluetooth: Bluetooth Soft blocked: no Hard blocked: no

    lsmod:
    Module Size Used by rfcomm 38408 8 bnep 17923 2 parport_pc 32114 0 ppdev 12849 0 binfmt_misc 17292 1 snd_hda_codec_hdmi 31426 1 snd_hda_codec_conexant 52460 1 uvcvideo 67271 0 videodev 85626 1 uvcvideo snd_hda_intel 28358 2 snd_hda_codec 91859 3 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel snd_hwdep 13276 1 snd_hda_codec joydev 17393 0 snd_pcm 80435 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13132 0 i915 509554 9 drm_kms_helper 32889 1 i915 snd_rawmidi 25241 1 snd_seq_midi snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event snd_timer 28932 2 snd_pcm,snd_seq drm 196290 5 i915,drm_kms_helper snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq mei 36466 0 mac80211 393421 0 snd 55902 14 snd_hda_codec_hdmi,snd_hda_codec_conexant,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device ideapad_laptop 13575 0 intel_ips 17753 0 btusb 18160 2 i2c_algo_bit 13199 1 i915 soundcore 12600 1 snd bluetooth 148839 23 rfcomm,bnep,btusb cfg80211 172427 1 mac80211 psmouse 63474 0 serio_raw 12990 0 snd_page_alloc 14108 2 snd_hda_intel,snd_pcm sparse_keymap 13658 1 ideapad_laptop wmi 18744 0 video 18908 1 i915 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp ahci 21634 2 libahci 25761 1 ahci r8169 47200 0

    nm-tool:
    NetworkManager Tool State: asleep
    Device: eth0 Type: Wired Driver: r8169 State: unmanaged Default: no HW Address: 88:AE:1D:DE:5F:9C
    Capabilities: Carrier Detect: yes Speed: 100 Mb/s
    Wired Properties Carrier: on

    lshw -C network:
    *-network UNCLAIMED description: Network controller product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:05:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: latency=0 resources: memory:d6400000-d6401fff
    *-network description: Ethernet interface product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: 02 serial: 88:ae:1d:de:5f:9c size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=192.168.0.100 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff

    Read the article

  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity… Once finished, everything has to be sent to the client under VS2005 using VB.Net and can target either framework 2.0 or 3.0… A long time ago, the decision to use VS2008 and to target framework 3.0 was taken, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it were VS2005… Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008… I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors…

    - Do not use LinQ keywords (from, in, select, orderby…).
    - Do not use LinQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties). This means you cannot use XML Literals.
    - Do not use nullable types under the declarative form: Dim myInt as Integer? But using Dim myInt as Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with Is Nothing; use myInt.HasValue instead.
    - Do not use Lambda expressions (there are no Lambda statements in VB9), so you cannot use the keyword “Function”.
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use Object Initializers.
    - Do not use the “ternary If operator”… not the IIf method, but this one: If(condition, truepart, falsepart).

    As a side note, I talked about not using LinQ keywords nor the extension methods, but this doesn’t mean you cannot use LinQ in this scenario. LinQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq and use the class “Enumerable” as a helper class… This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on, and then pass in the other parameters needed… That’s pretty much all I see, but I could have missed a few… If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I’ll modify my list accordingly (and notify our team here…)! Happy coding all!

    Read the article

  • Should UTF-16 be considered harmful?

    - by Artyom
    I'm going to ask what is probably quite a controversial question: "Should one of the most popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable-length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, the Win32 APIs, the Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF, 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T, 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO, 𠂊 (U+2008A) Han Character. You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a backspace to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad:

    - Opera has problems with editing them (deleting requires 2 presses of backspace).
    - Notepad can't deal with them correctly (deleting requires 2 presses of backspace).
    - File name editing in Windows dialogs is broken (deleting requires 2 presses of backspace).
    - All QT3 applications can't deal with them; they show two empty squares instead of one symbol.
    - Python encodes such characters incorrectly when used directly: u'X' != unicode('X','utf-16') on some platforms when X is a character outside of the BMP.
    - Python 2.5 unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings.
    - StackOverflow seems to remove these characters from the text if they are edited directly as Unicode characters (these characters are shown using HTML Unicode escapes).
    - WinForms TextBox may generate an invalid string when limited with MaxLength.

    It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful?
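    The behaviour behind these bugs is easy to reproduce in any UTF-16 based string API. As a small illustration (a minimal C# sketch written for this page, not taken from the question), the snippet below shows the G clef character occupying two char values, and how the framework distinguishes code units from visible text elements:

```csharp
using System;
using System.Globalization;

class SurrogatePairDemo
{
    static void Main()
    {
        // U+1D11E MUSICAL SYMBOL G CLEF lies outside the BMP,
        // so UTF-16 stores it as a surrogate pair (two char values).
        string clef = char.ConvertFromUtf32(0x1D11E);

        Console.WriteLine(clef.Length);                                  // 2: UTF-16 code units, not characters
        Console.WriteLine(char.IsSurrogatePair(clef, 0));                // True
        Console.WriteLine(char.ConvertToUtf32(clef, 0).ToString("X"));   // 1D11E

        // Counting *text elements* requires StringInfo, not Length.
        string text = "a" + clef + "b";
        Console.WriteLine(text.Length);                                  // 4 code units
        Console.WriteLine(new StringInfo(text).LengthInTextElements);    // 3 visible characters
    }
}
```

    Any code that treats Length, indexing or a single backspace as "one character" will exhibit exactly the off-by-one behaviour listed above.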

    Read the article

  • Using multiple diagrams per model in Entity Framework 5.0

    - by nikolaosk
    I have downloaded .Net framework 4.5 and Visual Studio 2012 since it was released to MSDN subscribers on the 15th of August. For people that do not know about that yet, please have a look at Jason Zander's excellent blog post. Since then I have been investigating the many new features that have been introduced in this release. In this post I will be looking into the use of multiple diagrams per model, one of the new features of the Entity Framework 5.0 designer. In order to follow along with this post you must have Visual Studio 2012 and .Net Framework 4.5 installed on your machine. Download and install VS 2012 using this link. My machine runs on Windows 8 and Visual Studio 2012 works just fine. I have also installed SQL Server 2012 developer edition on my machine, and I have downloaded and installed the AdventureWorksLT2012 database. You can download this database from the codeplex website.

    Before I start showcasing the demo I want to say that I strongly believe that Entity Framework is maturing really fast and now, at version 5.0, can be used as your data access layer in all your .Net projects. I have posted extensively about Entity Framework in my blog. Please find all the EF related posts here. In this demo I will show you how to split an entity model into multiple diagrams using the new enhanced EF designer. We will not build an application in this demo. Sometimes our model can become too large to edit or view. In earlier versions we could only have one diagram per EDMX file. In EF 5.0 we can split the model into more diagrams.

    1) Launch VS 2012. The Express edition will work fine.
    2) Create a New Project. From the available templates choose a Web Forms application.
    3) Add a new item to your project, an ADO.Net Entity Data Model. I have named it AdventureWorksLT.edmx. Then we will create the model from the database and click Next. Create a new connection by specifying the SQL Server instance and the database name and click OK. Then click Next in the wizard. In the next screen of the wizard select all the tables from the database and hit Finish.
    4) It will take a while for our .edmx diagram to be created. When I select an Entity (e.g. Customer) from my diagram and right-click on it, a new option appears, "Move to new Diagram". Make sure you have the Model Browser window open. Have a look at the picture below.
    5) When we do that, a new diagram is created and our new Entity is moved there. Have a look at the picture below.
    6) We can also right-click and include the related entities. Have a look at the picture below.
    7) When we do that, the related entities are copied to the new diagram. Have a look at the picture below.
    8) Now we can cut (CTRL+X) the entities from Diagram2 and paste them back into Diagram1.
    9) Finally, another great enhancement of the EF 5.0 designer is that you can change the colors of the various entities that make up the model. Select the entities you want to change color, then in the Properties window choose the color of your choice. Have a look at the picture below.

    To recap, we have demonstrated how to split your entity model into multiple diagrams, which comes in handy in EF models that have a large number of entities in them. Hope it helps!!!!

    Read the article

  • Storing game objects with generic object information

    - by Mick
    In a simple game object class, you might have something like this:

    public abstract class GameObject {
        protected String name;
        // other properties
        protected double x, y;

        public GameObject(String name, double x, double y) {
            // etc
        }

        // setters, getters
    }

    I was thinking, since a lot of game objects (ex. generic monsters) will share the same name, movement speed, attack power, etc., it would be better to have all that information shared between all monsters of the same type. So I decided to have an abstract class "ObjectData" to hold all this shared information. So whenever I create a generic monster, I would use the same pre-created "ObjectData" for it. Now the above class becomes more like this:

    public abstract class GameObject {
        protected ObjectData data;
        protected double x, y;

        public GameObject(ObjectData data, double x, double y) {
            // etc
        }

        // setters, getters
        public String getName() { return data.getName(); }
    }

    So to tailor this specifically for a Monster (it could be done in a very similar way for Npcs, etc.), I would add 2 classes: Monster, which extends GameObject, and MonsterData, which extends ObjectData. Now I'll have something like this:

    public class Monster extends GameObject {
        public Monster(MonsterData data, double x, double y) {
            super(data, x, y);
        }
    }

    This is where my design question comes in. Since MonsterData would hold data specific to a generic monster (and would vary with what, say, NpcData holds), what would be the best way to access this extra information in a system like this? At the moment, since the data variable is of type ObjectData, I'll have to cast data to MonsterData whenever I use it inside the Monster class. One solution I thought of is this, but it might be bad practice:

    public class Monster extends GameObject {
        private MonsterData data; // <- this part here

        public Monster(MonsterData data, double x, double y) {
            super(data, x, y);
            this.data = data; // <- this part here
        }
    }

    I've read that, for one, I should generally avoid shadowing the underlying class's variables. What do you guys think of this solution? Is it bad practice? Do you have any better solutions? Is the design in general bad? How should I redesign this if it is? Thanks in advance for any replies, and sorry about the long question. Hopefully it all makes sense!
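    One common way to avoid both the cast and the shadowed field is to make the data type a generic parameter of the base class. The sketch below is written in C# for consistency with the other examples on this page (Java generics support the same shape with `extends` instead of `where`), and assumes every concrete game-object type knows its data type at compile time; the names are illustrative only.

```csharp
public abstract class ObjectData
{
    public string Name { get; set; }
}

public class MonsterData : ObjectData
{
    public int AttackPower { get; set; }
}

// TData is constrained to ObjectData, so shared members stay available
// while each subclass sees its specific data type without casting.
public abstract class GameObject<TData> where TData : ObjectData
{
    protected readonly TData Data;
    protected double X, Y;

    protected GameObject(TData data, double x, double y)
    {
        Data = data;
        X = x;
        Y = y;
    }

    public string GetName() => Data.Name;
}

public class Monster : GameObject<MonsterData>
{
    public Monster(MonsterData data, double x, double y) : base(data, x, y) { }

    // Data is already typed as MonsterData here: no cast, no duplicate field.
    public int GetAttackPower() => Data.AttackPower;
}
```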

    Read the article

  • Best Creational Pattern for loggers in a multi-threaded system?

    - by Dipan Mehta
    This is a follow-up to my past question: Concurrency pattern of logger in multithreaded application. As suggested by others, I am putting this question separately.

    Following the learning from the last question: in a multi-threaded environment, the logger should be made thread safe and probably asynchronous (wherein messages are queued while a background thread does the writing, releasing the requesting object's thread). The logger could be a singleton, or it can be a per-group logger, which is a generalization of the above. Now, the question that arises is how the logger should be assigned to the object. There are two options I can think of:

    1. The object requests the logger: should each object call some global API such as get_logger()? Such an API returns "the" singleton or the group logger. However, I feel this involves assumptions about the application environment in order to implement the logger, which I think is a kind of coupling. If the same object needs to be used by another application, this new application also needs to implement such a method.

    2. Assign the logger through some known API: the other alternative approach is to create a kind of virtual class which is implemented by the application based on the app's own structure, and assign the logger to the object sometime in the constructor. This is a more generalized method. Unfortunately, when there are so many objects, and rather a tree of objects, passing the logger objects down to each level is quite messy.

    My question is: is there a better way to do this? If you had to pick one of the above, which approach would you pick and why? Other questions remain open about how to configure them:

    - How are objects' names or IDs assigned, so they can be printed in the log messages (as the module names)?
    - How do these objects find the appropriate properties (such as log levels and other such parameters)?

    In the first approach, the central API needs to deal with all these varieties. In the second approach, there needs to be additional work. Hence, I want to understand, from the real experience of people, how to write a logger effectively in such an environment.
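    For reference, a minimal sketch of the first option (assuming a single process-wide logger; the class name, file path and message format are illustrative, not prescribed) could queue messages and write them on one background thread so calling objects never block on I/O:

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

public sealed class Logger : IDisposable
{
    private static readonly Lazy<Logger> _instance =
        new Lazy<Logger>(() => new Logger("app.log"));

    public static Logger Instance => _instance.Value;

    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Task _writer;

    private Logger(string path)
    {
        // One consumer thread owns the file, so producers are never serialized on disk writes.
        _writer = Task.Run(() =>
        {
            using (var stream = new StreamWriter(path, append: true))
                foreach (var line in _queue.GetConsumingEnumerable())
                    stream.WriteLine(line);
        });
    }

    // Thread-safe and non-blocking for the calling object.
    public void Log(string source, string message) =>
        _queue.Add($"{DateTime.UtcNow:o} [{source}] {message}");

    public void Dispose()
    {
        _queue.CompleteAdding();
        _writer.Wait();
    }
}
```

    A per-group logger is the same shape with a keyed factory instead of a single Instance; the open questions in the list above (naming, levels) then live in whatever hands out the instances.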

    Read the article

  • Base Pages and Interfaces for ASP.NET Pages

    - by geekrutherford
    For quite a while I have been using the concept of base pages when developing pages in ASP.NET applications. It is a wonderful method for exposing common functions to all of your application's pages and also overriding certain events for various purposes (i.e. dynamic themes). Recently I found out a new developer will be joining my team. This prompted me to review the application's code for readability and ease of maintenance. I began adding comments throughout the code-behind for all pages within the application. While doing so I noted that I had used common method names for such things as loading data, configuring controls, applying filters, etc.

    Bringing a new developer on board, I wanted to make the transition as seamless as possible while also ensuring they follow the existing coding practices we already have in place. While I could have created virtual methods for the common page methods, allowing them to be overridden, what I really needed was a way to ensure the new developer implemented the same methods for each and every page. Thus I created an interface to force the issue.

    Now, every page not only inherits the base page class but also implements an interface. This provides every page not only common functions and overridden page events but also imposes rules for implementing certain common methods :-)

    Interface:

    public interface BasePageInterface
    {
        /// Configures page based on users security permissions.
        void CheckPermissions();

        /// Configures Filter Form control for current page.
        /// Ensure you have set the FilteredGrid and PageAjaxManager properties of the FilterForm control in PageLoad!!!
        void ConfigureFilters();

        /// Sets event handlers and default settings for controls on the current page.
        void ConfigureControls();

        /// Exports data bound to grid in selected format.
        void ExportGridData(ExportFormat fmt);

        /// Loads data and binds to grid.
        /// Columns are turned on/off in grid depending on tab selected and users permissions.
        void LoadData();
    }

    Page code-behind class definition:

    public partial class MyPage : BasePage, BasePageInterface

    Note, you could not use an abstract class to accomplish this, considering C# does not allow for multiple inheritance. Nor could the base page class be abstract, since it needs to inherit from the System.Web.UI.Page class in order to override page events.
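    To make the pattern concrete, here is a hedged sketch (not code from the original application) of how a base page could drive the contract, so every page runs the same lifecycle steps in the same order once it implements the interface:

```csharp
using System;
using System.Web.UI;

public class BasePage : Page
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Every page in the site both inherits BasePage and implements
        // BasePageInterface, so the base class can invoke the common steps.
        var page = this as BasePageInterface;
        if (page == null) return;

        page.CheckPermissions();
        page.ConfigureControls();
        page.ConfigureFilters();

        if (!IsPostBack)
        {
            page.LoadData();
        }
    }
}
```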

    Read the article

  • Power Distribution amongst connected nodes

    - by Perky
    In my game the map is represented by connected nodes; each node has a number of connected nodes. The nodes represent a system in which players can build structures and move units about. If you're familiar with Sins of a Solar Empire, the game map is very similar. I want each node to be able to produce power and share it with all connected nodes. For example, if A, B, C & D are all connected and each produces 100 power units, then each system should have 400 power units available. If node B builds a structure that consumes 100 power units, then A, B, C & D should then have 300 power units available.

    I've been working on this system all day and haven't been able to get it working quite the way I want. My current implementation is to first recurse through each node's connected nodes adding up the power; I keep a list of closed nodes so it doesn't loop. It's quite similar to A* actually. Pseudo code (all nodes start with the properties node.power = 0, node.basePower = 100 (could be different for each node), node.initialPower = node.basePower):

    function propagatePower( node, initialPower, closedNodes )
        node.power += initialPower
        add( closedNodes, node )
        connectedNodes = connected_nodes_except_from( closedNodes )
        foreach node in connectedNodes do
            propagatePower( node, initialPower, closedNodes )
        end
    end

    After this I iterate through all power consumers:

    foreach consumer in consumers do
        node = consumer.parentNode
        if node.power >= consumer.powerConsumption then
            consumer.powerConsumed += consumer.powerConsumption
            node.producedPower -= consumer.powerConsumption
        end
    end

    Then I adjust the initial power for the next propagation cycle:

    foreach node in nodes do
        node.initialPower = node.basePower - node.producedPower
        node.displayPower = node.power // for rendering the power
        node.power = 0
    end

    This seemed to work at first, but then I ran into a problem. Say two nodes A & B produce 100Pu each; it's shared, so both A & B have 200Pu. I then build two structures that consume 80Pu each on A (160Pu). Then the node's power is adjusted to basePower - producedPower (100 - 160 = -60). Nodes are propagated, and both nodes now have 40Pu (A: -60 + B: 100 = 40). Which is correct, because they started with 200Pu - 160Pu = 40Pu. However, now node.power >= consumer.powerConsumption is false. What's worse is that it's false for any structure that uses more than 40Pu, so the whole system goes down. I could deduct from consumer.powerConsumption, but what do I do if power is reduced elsewhere? I don't have the correct data to perform the necessary checks. It's late so I'm probably not thinking straight, but I thought I'd ask on here to see if anyone has any other implementations; better or worse, I'd be interested to know.
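    One alternative worth sketching (a rough C# illustration under assumed Node/Consumer shapes, not a drop-in fix for the code above) is to stop propagating power node by node and instead treat each connected group as a single pool: sum the production of the group, pay consumers out of that pool in priority order, and give every node in the group the same remaining figure. The consumption check is then made against a number that can never go negative from a previous cycle.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Consumer
{
    public double PowerConsumption;
    public bool Powered;
}

public class Node
{
    public double BasePower = 100;
    public List<Node> Connected = new List<Node>();
    public List<Consumer> Consumers = new List<Consumer>();
    public double DisplayPower;
}

public static class PowerGrid
{
    public static void Update(IEnumerable<Node> allNodes)
    {
        var visited = new HashSet<Node>();
        foreach (var start in allNodes)
        {
            if (!visited.Add(start)) continue;

            // Collect the whole connected component with a flood fill.
            var group = new List<Node>();
            var stack = new Stack<Node>(new[] { start });
            while (stack.Count > 0)
            {
                var n = stack.Pop();
                group.Add(n);
                foreach (var c in n.Connected)
                    if (visited.Add(c)) stack.Push(c);
            }

            double available = group.Sum(n => n.BasePower);

            // Pay consumers out of the shared pool; iteration order defines priority.
            foreach (var consumer in group.SelectMany(n => n.Consumers))
            {
                consumer.Powered = available >= consumer.PowerConsumption;
                if (consumer.Powered) available -= consumer.PowerConsumption;
            }

            // Every node in the group displays the same remaining power.
            foreach (var n in group) n.DisplayPower = available;
        }
    }
}
```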

    Read the article

  • Programming Language, Turing Completeness and Turing Machine

    - by Amumu
    A programming language is said to be Turing complete if it can successfully simulate a universal TM. Let's take a functional programming language for example. In functional programming, functions have the highest priority over anything. You can pass functions around like any primitives or objects. This is called first-class functions. In functional programming, your function does not produce side effects, i.e. output strings onto the screen or change the state of variables outside of its scope. Each function has a copy of its own objects if the objects are passed from the outside, and the copied objects are returned once the function finishes its job. Each function written purely in functional style is completely independent of anything outside of it. Thus, the complexity of the overall system is reduced. This is referred to as referential transparency. In functional programming, each function can have its local variables keep their values even after the function exits. This is done by the garbage collector. The values can be reused the next time the function is called again. This is called memoization. A function usually should solve only one thing. It should model only one algorithm to answer a problem. Do you think that a function in a functional language with the above properties simulates a Turing Machine?

    - Functions (= algorithms = Turing Machines) are able to be passed around as input and returned as output. A TM also accepts and simulates other TMs.
    - Memoization models the set of states of a Turing Machine. The memoized variables can be used to determine the state of a TM (i.e. which lines to execute, what behavior it should take in a given state...). Also, you can use memoization to simulate your internal tape storage. In a language like C/C++, when a function exits, you lose all of its internal data (unless you store it elsewhere outside of its scope).
    - The set of symbols is the set of all strings in a programming language, which is the higher-level and human-readable version of machine code (opcode).
    - The start state is the beginning of the function. However, with memoization, the start state can be determined by memoization or, if you want, a switch/if-else statement in an imperative programming language. But then, you can't…
    - The final accepting state is when the function returns a value, or rejects if an exception happens. Thus, the function (= algorithm = TM) is decidable. Otherwise, it's undecidable. I'm not sure about this. What do you think?

    Is my thinking true on all of this? The reason I bring up functions in functional programming is because I think they are closer to the idea of a TM. What experience with other programming languages do you have which makes you feel the idea of a TM and the ideas of Computer Science in general? Can you specify how you think?
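    As a small, hedged aside (a C# sketch of the memoization idea mentioned above, written for this page and making no claim about Turing equivalence), this is what "a function keeping values between calls" looks like in practice; the captured dictionary plays the role of the persistent storage the question describes:

```csharp
using System;
using System.Collections.Generic;

static class Memo
{
    // Wraps a recursive body so every distinct argument is computed only once.
    public static Func<int, long> Memoize(Func<Func<int, long>, int, long> body)
    {
        var cache = new Dictionary<int, long>();
        Func<int, long> memoized = null;
        memoized = n =>
        {
            long value;
            if (!cache.TryGetValue(n, out value))
            {
                value = body(memoized, n);   // recursive calls also hit the cache
                cache[n] = value;
            }
            return value;
        };
        return memoized;
    }

    static void Main()
    {
        var fib = Memoize((self, n) => n < 2 ? n : self(n - 1) + self(n - 2));
        Console.WriteLine(fib(80));   // fast: each sub-problem is computed once and remembered
    }
}
```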

    Read the article

  • Some More New ADF Features in JDeveloper 11.1.2

    - by Steven Davelaar
    The official list of new features in JDeveloper 11.1.2 is documented here. While playing with JDeveloper 11.1.2 and scanning the web user interface developer's guide for 11.1.2, I noticed some additional new features in ADF Faces; small, but they might come in handy:

    - You can use the af:formatString and af:formatNamed constructs in EL expressions to use substitution variables. For example: <af:outputText value="#{af:formatString('The current user is: {0}',someBean.currentUser)}"/>. See section 3.5.2 in the web user interface guide for more info.
    - A new ADF Faces Client Behavior tag: af:checkUncommittedDataBehavior. See section 20.3 in the web user interface guide for more info. For this tag to work, you also need to set the uncommittedDataWarning property on the af:document tag. And this property has quite some issues, as you can read here. I did a quick test: the alert is shown for a button that is on the same page; however, if you have a menu in a shell page with dynamic regions, then clicking on another menu item does not raise the alert if you have pending changes in the currently displayed region. For now, the JHeadstart implementation of pending changes still seems the best choice (will blog about that soon).
    - New properties on the af:document tag: smallIconSource creates a so-called favicon that is displayed in front of the URL in the browser address bar. The largeIconSource property specifies the icon used by a mobile device when bookmarking the page to the home page. See section 9.2.5 in the web user interface guide for more info. Also notice the failedConnectionText property, which I didn't know about but was already available in JDeveloper 11.1.1.4.
    - The af:showDetail tag has a new property handleDisclosure, which you can set to client for faster rendering.
    - In JDeveloper 11.1.1.x, an expression like #{bindings.JobId.inputValue} would return the internal list index number when JobId was a list binding. To get the actual JobId attribute value, you needed to use #{bindings.JobId.attributeValue}. In JDeveloper 11.1.2 this is no longer needed; the #{bindings.JobId.inputValue} expression will return the attribute value corresponding to the selected index in the choice list.

    Did you discover other "hidden" new features? Please add them as a comment to this blog post so everybody can benefit.

    Read the article

  • How to export 3D models that consist of several parts (eg. turret on a tank)?

    - by Will
    What are the standard alternatives for the mechanics of attaching turrets and such to 3D models for use in-game? I don't mean the logic, but rather the graphics aspects. My naive approach is to extend the MD2-like format that I'm using (blender-exported using a script) to include a new set of properties for a mesh that:

    - is anchored in another 'parent' mesh. The anchor is a point and normal in the parent mesh and a point and normal in the child mesh; these will always be colinear, giving the child rotation but not translation relative to the parent point.
    - has a normal that is aligned with a 'target'. Classically this target is the enemy that is being engaged, but it might be some other vector, e.g. 'the wind' (for sails and flags (and smoke, which is a particle system but the same principle applies)) or 'upwards' (e.g. so bodies of riders bend properly when riding a horse up an incline etc.).
    - has maximum and minimum values and a speed coefficient for the anchor and target alignments.

    There is game logic for multiple turrets on a model and for deciding which engages which enemy; 'primary' and 'secondary' or 'target0' … 'targetN' or some such annotation will be there. So to illustrate, a classic tank would be made from three meshes: a main body mesh, a turret mesh that is anchored to the top of the main body so it can spin only horizontally, and a barrel mesh that is anchored to the front of the turret and can only move vertically within some bounds. And there might be a fourth flag mesh on top of the turret that is aligned with 'wind', where wind is a function the engine solves that merges the environment's wind angle with the angle the vehicle is travelling in and its velocity, or something fancy. This gives each mesh one degree of freedom relative to its parent. Things with multiple degrees of freedom can perhaps be modelled by zero-vertex connecting meshes? This is where I think the approach I outlined begins to feel inelegant, yet perhaps it's still a workable system? This is why I want to know how it is done in professional games ;) Are there better approaches? Are there formats that already include this information? Is this routine?
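    To make the proposed extension concrete, here is a rough sketch (C#, with purely illustrative names; no existing exporter or engine format is implied) of the per-attachment data the bullets above describe, together with the kind of per-frame constrained update it enables:

```csharp
using System;

// Placeholder vector type; a real engine would use its own math library.
public struct Vec3 { public float X, Y, Z; }

public class Mesh { /* vertices, normals, ... */ }

// One attachment record per child mesh, mirroring the properties listed above.
public class MeshAttachment
{
    public Mesh Parent;
    public Mesh Child;

    public Vec3 ParentAnchorPoint;   // where the child is pinned on the parent
    public Vec3 ParentAnchorNormal;  // the single axis the child may rotate around
    public Vec3 ChildAnchorPoint;
    public Vec3 ChildAnchorNormal;

    public float MinAngle;           // alignment limits, in radians
    public float MaxAngle;
    public float TurnSpeed;          // max radians per second (the "speed coeff")

    public float CurrentAngle;       // state the engine solves each frame

    // Turn toward the desired target angle without violating limits or turn speed.
    public void Update(float desiredAngle, float deltaSeconds)
    {
        float target = Math.Min(MaxAngle, Math.Max(MinAngle, desiredAngle));
        float maxStep = TurnSpeed * deltaSeconds;
        float step = Math.Min(maxStep, Math.Max(-maxStep, target - CurrentAngle));
        CurrentAngle += step;
    }
}
```

    The tank example then becomes three (or four) of these records, each contributing one constrained degree of freedom between parent and child.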

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100Mbs/day) and a bunch of console and TCL/TK tools to view it. I want to turn it into a web app that I can build, and others can maintain.

    In long: my group runs simulations yielding 100s of Mbs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old-school 1990's style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories etc. I want to replace (or at least supplement) the xwindows / console style UI with a web-based one, so I need the following properties:

    - pleasant to program
    - can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything)
    - as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it
    - I can attach a web UI to navigate between views; each view is likely to contain keys which might make sense to view in another

    I am new to building systems that have logic on the back-end and front-end of a web server. From that point of view, they do this:

    - backend: wraps old-school executables, constructs calls into them and then takes the output, wraps it up, niceifies it and delivers it to the web client. For instance, the tool might generate a number of indexed images (per invocation) which I might deliver all at once or on-demand. May (probably) need to do heavy stats on some sources.
    - frontend: provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. Probably will have some views with a lot of interactivity.

    Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "beans!" "Scala!" but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). I take the policy that I use the best/closest-matched language for a project, but most of my team are extremely low level (ie pipeline stages and CDyn) so I don't have the peer group to know where to start.

    Read the article

  • Web Development Goes Pre-Visual InterDev

    - by Ken Cox [MVP]
    As a longtime and hardcore ASP.NET webforms developer, I’m finding the new client-side development world a bit of a grind.  I love learning new technologies, but I can’t help feeling we’ve regressed and lost our old RAD advantage as we move heavy lifting to the client. For my latest project, I’m using Telerik’s KendoUI in Visual Studio 2012. To say I feel clumsy writing this much JavaScript is an understatement. It seems like the only safe way to ‘write’ this code is by copying a working snippet from someone else and pasting it into my HTML page.  For me, JavaScript has largely been for small UI tasks like client-side validation and a bit of AJAX – and often emitted by a server-side control. I find myself today lost in nests of curly braces that Ctrl+K, Ctrl+D doesn’t seem to understand that well either. IntelliSense, my old syntax saviour, doesn’t seem to have kept up with this cobweb of code either. Code completion? Not seeing it. As I fumbled about this evening, I thought about how web development rocketed forward when Microsoft introduced Visual InterDev. Its Design-Time Controls (DTCs) changed the way we created sites. All the iterations of Visual Studio have enhanced that server-side experience where you let a tool write the bulk of the code and manually finesse it from there. What happened? Why am I typing  properties and values (especially default values!) into VS 2012 to get a client-side grid on a page? Where are the drag and drop objects that traditionally provided 70 percent of the mark-up and configuration?  Did we forget how to write Property Pages where you enter a value and the correct syntax appears magically in the source code? To me, the tooling was looking the other way as the scene shifted from server-side code to nimble client-side script. It’ll have to catch up. Although JavaScript is the lingua franca of web browsers, the language is unwieldy, tough to maintain, and messy to debug. If a .NET JIT compiler can turn our VB, F#, and C# source code into an Intermediate Language that executes on a computer, I don’t see why there can’t be a client-side compiler that turns a .NET language into JavaScript that browsers can consume.

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger.

    If inheritable permissions are not allowed on the Exchange 2003 object in the Active Directory schema, Exchange Server authentication cannot be achieved between the servers. It seems when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is they also coincidentally turn off inheritance to the server object itself from its parent containers. You can re-establish inheritance without overwriting the existing ACL, however, so that the BES Admin can remain in the server object ACL. By re-establishing inheritance to the 2003 server object, mail flow was instantly restored between the servers.

    To re-establish inheritance:
    1. Open ADSIEdit by adding the snap-in to an MMC (it should be included on your 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 Server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "include inheritable permissions" toward the bottom of the dialogue box.

    Read the article

  • How to leverage the internal HTTP endpoint available on Azure web roles?

    - by Alfredo Delsors
    Imagine you have a Web application using an in-memory collection that changes occasionally but is used very often. The collection gets loaded from storage on the Application_Start global.asax event and is updated whenever its content changes. If you want to deploy this application on Azure, you need to keep in mind that more than one instance of the application can be running at any time, and therefore you need to provide some mechanism to keep all instances informed of the latest changes. Because communication through internal endpoints between Azure role instances is at no cost, a good solution can be maintaining the information in Azure Storage Tables, reading its contents on the Application_Start event and propagating its changes to all other instances using the internal HTTP port available on Azure Web Roles. You need to follow these steps to leverage the internal HTTP endpoint available on Azure web roles to keep all instances up to date.

    1. Define an internal HTTP endpoint in the Web Role properties, for example InternalHttpEndpoint.

    2. Add a new WCF service to the Web Role, for example NotificationService.svc.

    3. Disable multiple site bindings in web.config:

    <serviceHostingEnvironment multipleSiteBindingsEnabled="false">

    4. Add a method on the new service to receive notifications from other role instances:

    namespace Service
    {
        [ServiceContract]
        public interface INotificationService
        {
            [OperationContract(IsOneWay = true)]
            void Notify(Information info);
        }
    }

    5. Declare a class that inherits from System.ServiceModel.Activation.ServiceHostFactory and override the method CreateServiceHost to host the internal endpoint:

    public class InternalServiceFactory : ServiceHostFactory
    {
        protected override ServiceHost CreateServiceHost(Type serviceType, Uri[] baseAddresses)
        {
            var internalEndpointAddress = string.Format(
                "http://{0}/NotificationService.svc",
                RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InternalHttpEndpoint"].IPEndpoint);

            ServiceHost host = new ServiceHost(
                typeof(NotificationService),
                new Uri(internalEndpointAddress));

            BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);

            host.AddServiceEndpoint(
                typeof(INotificationService), binding, internalEndpointAddress);

            return host;
        }
    }

    Note that you can use SecurityMode.None because the internal endpoint is private to the instances of the service.

    6. Edit the markup of the service (right-click the svc file and select "View markup") to add the new factory as the factory to be used to create the service:

    <%@ ServiceHost Language="C#" Debug="true" Factory="Service.InternalServiceFactory" Service="Service.NotificationService" CodeBehind="NotificationService.svc.cs" %>

    7. Now you can notify changes to other instances using this code:

    var current = RoleEnvironment.CurrentRoleInstance;
    var endPoints = current.Role.Instances
        .Where(instance => instance != current)
        .Select(instance => instance.InstanceEndpoints["InternalHttpEndpoint"]);

    foreach (var ep in endPoints)
    {
        EndpointAddress address = new EndpointAddress(
            String.Format("http://{0}/NotificationService.svc", ep.IPEndpoint));
        BasicHttpBinding binding = new BasicHttpBinding(SecurityMode.None);
        var factory = new ChannelFactory<INotificationService>(binding);
        INotificationService instance = factory.CreateChannel(address);
        instance.Notify(changedinfo);
    }

    Read the article

  • iwlwifi on lenovo z570 disabled by hardware switch

    - by Kevin Gallagher
    It was working fine with Windows 7. The hardware switch is not disabled; I've toggled it back and forth dozens of times. The wifi light never turns on and it always lists as hardware disabled. I have the latest updates installed. I've been searching for solutions, but none of them seem to work for me. I've tried removing acer-wmi. I've tried setting 11n_disable=1. I've tried resetting the BIOS. I've tried using rfkill to unblock (it only removes the soft block). I've rebooted dozens of times. The wifi light turns off as soon as grub loads.

    Edit: I have a USB Edimax wireless NIC. It shows hardware disabled as well (although rfkill lists it as unblocked). If I unload iwlwifi the USB NIC works fine.

    uname -a:
    Linux xxx-Ideapad-Z570 3.2.0-55-generic #85-Ubuntu SMP Wed Oct 2 12:29:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

    rfkill list:
    19: phy18: Wireless LAN Soft blocked: no Hard blocked: yes

    dmesg:
    [43463.022996] Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree:
    [43463.023002] Copyright(c) 2003-2011 Intel Corporation
    [43463.023107] iwlwifi 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
    [43463.023190] iwlwifi 0000:03:00.0: setting latency timer to 64
    [43463.023253] iwlwifi 0000:03:00.0: pci_resource_len = 0x00002000
    [43463.023257] iwlwifi 0000:03:00.0: pci_resource_base = ffffc900057c8000
    [43463.023261] iwlwifi 0000:03:00.0: HW Revision ID = 0x0
    [43463.023797] iwlwifi 0000:03:00.0: irq 43 for MSI/MSI-X
    [43463.024013] iwlwifi 0000:03:00.0: Detected Intel(R) Centrino(R) Wireless-N 1000 BGN, REV=0x6C
    [43463.024250] iwlwifi 0000:03:00.0: L1 Enabled; Disabling L0S
    [43463.045496] iwlwifi 0000:03:00.0: device EEPROM VER=0x15d, CALIB=0x6
    [43463.045501] iwlwifi 0000:03:00.0: Device SKU: 0X50
    [43463.045504] iwlwifi 0000:03:00.0: Valid Tx ant: 0X1, Valid Rx ant: 0X3
    [43463.045542] iwlwifi 0000:03:00.0: Tunable channels: 13 802.11bg, 0 802.11a channels
    [43463.045744] iwlwifi 0000:03:00.0: RF_KILL bit toggled to disable radio.
    [43463.047652] iwlwifi 0000:03:00.0: loaded firmware version 39.31.5.1 build 35138
    [43463.047823] Registered led device: phy18-led
    [43463.047895] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
    [43463.048037] ieee80211 phy18: Selected rate control algorithm 'iwl-agn-rs'
    [43463.055533] ADDRCONF(NETDEV_UP): wlan0: link is not ready

    nm-tool:
    State: connected (global)
    Device: wlan0 Type: 802.11 WiFi Driver: iwlwifi State: unavailable Default: no HW Address: 74:E5:0B:4A:9F:C2
    Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes
    Wireless Access Points

    lshw -C network:
    *-network DISABLED description: Wireless interface product: Centrino Wireless-N 1000 [Condor Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 00 serial: 74:e5:0b:4a:9f:c2 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-55-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg resources: irq:43 memory:f1500000-f1501fff

    lspci:
    03:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak]

    Read the article

  • How to deal with Warning : "Uncommittable transaction is detected at the end of the batch. The trans

    - by VishnuTiwariBlog
    Hi, if you are integrating with SQL Server and dealing with batch messages, you may encounter this problem, and it is inevitable. The reason is contention for resources. If your batch contains four messages and all four messages have to be updated in SQL Server, then at the same time four processes will contend for the SQL Server table and resources, and the obvious result will be that a few of your transactions are left uncommitted. If you are not handling dehydration [not modifying the default properties of Dehydration], then your orchestration will dehydrate and will go for a retry. If retry is set for every five minutes, then after five minutes the port will send the message to the database.

    The reason for writing this post was that I did not want to see so many DEHYDRATED messages, and this was happening because Host Throttling was not set. Thus, as soon as the BizTalk process finds that SQL resources are unavailable, it will dehydrate that process and the process will go for a retry. The contention for resources is unavoidable, though we can fine-tune the dehydration settings. If you increase the time that an orchestration can be blocked at a subscription before being dehydrated, you will possibly give the BizTalk Engine more time to handle SQL resource availability. At least, I solved the problem by fine-tuning the dehydration properties. Below is the section of config info which you need to add to BTSNTsvc.exe.config.

    <?xml version="1.0" ?>
    <configuration>
      <configSections>
        <section name="xlangs" type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
      </configSections>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="BizTalk Assemblies;Developer Tools;Tracking" />
        </assemblyBinding>
      </runtime>
      <xlangs>
        <Configuration>
          <Dehydration MaxThreshold="1800" MinThreshold="1" ConstantThreshold="-1">
            <VirtualMemoryThrottlingCriteria OptimalUsage="900" MaximalUsage="1300" IsActive="true" />
            <PrivateMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="true" />
            <PhysicalMemoryThrottlingCriteria OptimalUsage="50" MaximalUsage="350" IsActive="false" />
          </Dehydration>
        </Configuration>
      </xlangs>
    </configuration>

    Read the article

  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 yrs) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be Win32, WMI, VMware, a help-desk application, LDAP, you get the picture. The apps I develop could be just to pull back data and store/report. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc.

    I've been developing in .Net and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of and increase my understanding of OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit later on when modifying my code. My classes are usually very specific and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth between my API-facing layer, using static methods to proxy/validate between the front and the back end. My code, by nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can better make use of OOP design and perhaps reusable patterns.

    Am I right to be concerned that I could be being smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP?

    EDIT: Here is some basic code to show how my mgr and api facing layers work. I use static classes as they do not persist any data, only facilitate moving it between layers.

    public static class MgrClass
    {
        public static bool PowerOnVM(string VMName)
        {
            // Perform logic to validate or apply biz logic
            // call APIClass to do the work
            return APIClass.PowerOnVM(VMName);
        }
    }

    public static class APIClass
    {
        public static bool PowerOnVM(string VMName)
        {
            // Calls to 3rd party API to power on a virtual machine
            // returns true or false if it was successful, for example
        }
    }
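    As one hedged illustration of where an interface could start to pay off in exactly this layout (the names below are hypothetical, not from the original code): if the manager layer depends on a small contract instead of a concrete static class, the same business logic can drive different back ends, or a fake one in unit tests, without changing the manager.

```csharp
public interface IVirtualMachineApi
{
    bool PowerOnVM(string vmName);
}

public class VmwareApi : IVirtualMachineApi
{
    public bool PowerOnVM(string vmName)
    {
        // calls into the real virtualization SDK would live here
        return true;
    }
}

public class FakeVmApi : IVirtualMachineApi
{
    public bool PowerOnVM(string vmName) => true;   // stand-in for unit tests
}

public class VmManager
{
    private readonly IVirtualMachineApi _api;

    // The concrete API is injected rather than hard-wired via a static call.
    public VmManager(IVirtualMachineApi api)
    {
        _api = api;
    }

    public bool PowerOnVM(string vmName)
    {
        // business rules / validation live here, as in the original MgrClass
        if (string.IsNullOrWhiteSpace(vmName)) return false;
        return _api.PowerOnVM(vmName);
    }
}
```

    The interface earns its keep not because two similar classes exist today, but because it decouples the layer boundary you already have.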

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 1: Overview

    - by Your DisplayName here!
    A lot has been written already about passive federation and integration of WIF and ADFS 2 into web apps. The whole active/WS-Trust feature area is much less documented or covered in articles and blogs. Over the next few posts I will try to compile all relevant information about the above topics – but let’s start with an overview. ADFS 2 has a number of endpoints under the /services/trust base address that implement the WS-Trust protocol. They are grouped by the WS-Trust version they support (/13 and /2005), the client credential type (/windows*, /username*, /certificate*) and the security mode (*transport, *mixed and message). You can see the endpoints in the MMC console under the Service/Endpoints page. So in other words, you use one of these endpoints (which exactly depends on your configuration / system setup) to request tokens from ADFS 2. The bindings behind the endpoints are more or less standard WCF bindings, but with SecureConversation (establishSecurityContext) disabled. That means that whenever you need to programmatically talk to these endpoints – you can (easily) create client bindings that are compatible. Another option is to use the special bindings that come with WIF (in the Microsoft.IdentityModel.Protocols.WSTrust.Bindings namespace). They are already pre-configured to be compatible with the ADFS endpoints. The downside of these bindings is, that you can’t use them in configuration. That’s definitely a feature request of mine for the next version of WIF. The next important piece of information is the so called Federation Service Identifier. This is the value that you (at least by default) have to use as a realm/appliesTo whenever you are requesting a token for ADFS (e.g. in  IdP –> RSTS scenario). Or (even more) technically speaking, ADFS 2 checks for this value in the audience URI restriction in SAML tokens. You can get to this value by clicking the “Edit Federation Service Properties” in the MMC when the Service tree-node is selected. OK – I will come back to this basic information in the following posts. Basically I want to go through the following scenarios: ADFS in the IdP role ADFS in the R-STS role (with a chained claims provider) Using the WCF bindings for automatic token issuance Using WSTrustChannelFactory for manual token handling Stay tuned…
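    As a preview of the "manual token handling" item (a rough sketch only, written from memory of the WIF 1.0 Microsoft.IdentityModel object model; the endpoint URL, credentials and realm are placeholders, and exact property types and constant locations may differ between WIF versions), requesting a token from one of the username/mixed endpoints looks roughly like this:

```csharp
using System.ServiceModel;
using System.ServiceModel.Security;
using Microsoft.IdentityModel.Protocols.WSTrust;
using Microsoft.IdentityModel.Protocols.WSTrust.Bindings;
using Microsoft.IdentityModel.SecurityTokenService;

static class TokenClient
{
    public static System.IdentityModel.Tokens.SecurityToken RequestToken()
    {
        // The pre-configured WIF binding matches the ADFS 2 endpoint settings
        // (SecureConversation disabled, mixed mode security).
        var factory = new WSTrustChannelFactory(
            new UserNameWSTrustBinding(SecurityMode.TransportWithMessageCredential),
            new EndpointAddress("https://adfs.example.com/adfs/services/trust/13/usernamemixed"));
        factory.TrustVersion = TrustVersion.WSTrust13;
        factory.Credentials.UserName.UserName = "bob";
        factory.Credentials.UserName.Password = "password";

        // appliesTo is the realm of the relying party (or the Federation
        // Service Identifier when ADFS is the next hop in an IdP -> R-STS chain).
        var rst = new RequestSecurityToken
        {
            RequestType = RequestTypes.Issue,
            KeyType = KeyTypes.Bearer,
            AppliesTo = new EndpointAddress("https://relyingparty.example.com/")
        };

        var channel = factory.CreateChannel();
        return channel.Issue(rst);
    }
}
```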

    Read the article

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application.

    Building

    - Setup the library with automatic versioning and a nuspec.
    - Setup the library assembly version to auto increment build and revision: AssemblyInfo -> [assembly: AssemblyVersion("1.0.*")]. This autoincrements build and revision based on time of build.
    - Major & Minor: Major should be changed when you have breaking changes; Minor should be changed once you have a solid new release. During development I don't increment these.
    - Create a nuspec, version this with the code. In the nuspec, set the version to <version>$version$</version>. This uses the assembly's version, which is auto-incrementing.
    - Make changes to code.
    - Run the automated build (ruby/rake): run "rake nuget". The nuget task builds the nuget package and copies it to a local nuget feed. I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file.

    $projectSolution = 'src\\Library.sln'
    $nugetFeedPath = ENV["NuGetDevFeed"]

    msbuild :build => [:clean] do |msb|
      msb.properties :configuration => :Release
      msb.targets :Build
      msb.solution = $projectSolution
    end

    task :nuget => [:build] do
      sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath
    end

    - Setup the local nuget feed as a nuget package source (this is only required once per machine).
    - Go to the consuming project and update the package: Update-Package Library or Install-Package.

    TLDR: change library code; run "rake nuget"; run "Update-Package Library" in the consuming application; build/test! If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone's sake.

    Publishing

    - Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed.
    - Pick the package out of your local feed, and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file. Replace apikey with your nuget feed's apikey. Take out the confirm(s) if you don't want them.

    @ECHO off
    echo Upload %1?
    set /P anykey="Hit enter to continue "
    nuget push %1 apikey
    set /P anykey="Done "

    Note: it helps to prune all the unnecessary versions from your local feed during testing once you are done and ready to publish.

    TLDR: consider the version number; run the command to copy to the public feed.

    Read the article

  • What layer to introduce human readable error messages?

    - by MrLane
    One of the things that I have never been happy with on any project I have worked on over the years, and have never really been able to resolve, is exactly at what tier in an application human-readable error information should be retrieved for display to a user.

    A common approach that has worked well has been to return strongly typed/concrete "result objects" from the methods on the public surface of the business tier/API. A method on the interface may be:

    public ClearUserAccountsResult ClearUserAccounts(ClearUserAccountsParam param);

    And the result class implementation:

    public class ClearUserAccountsResult : IResult
    {
        public List<Account> ClearedAccounts { get; private set; }
        public bool Success { get; private set; }      // Implements IResult
        public string Message { get; private set; }    // Implements IResult, human readable

        // Constructor implemented here to set the read-only properties...
    }

    This works great when the API needs to be exposed over WCF, as the result object can be serialized. Again, this is only done on the public surface of the API/business tier. The error message can also be looked up from the database, which means it can be changed and localized.

    However, this idea of returning human-readable information from the business tier has always been suspect to me, partly because what constitutes the public surface of the API may change over time, and the API may need to be reused by other API components in the future that do not need the human-readable string messages (and looking them up from a database would be an expensive waste).

    I am thinking a better approach is to keep the business objects free from such result objects, keep them simple, and retrieve human-readable error strings somewhere closer to the UI layer, or only in the UI itself. But I have two problems here:

    1) The UI may be a remote client (WinForms/WPF/Silverlight) or an ASP.NET web application hosted on another server. In these cases the UI will have to fetch the error strings from the server.

    2) Often there are multiple legitimate modes of failure. If the business tier becomes too vague and generic in the way it returns errors, there may not be enough information exposed publicly to tell what the error actually was: i.e. if a method has three modes of legitimate failure but returns a boolean to indicate failure, you cannot work out what the appropriate message to display to the user should be.

    I have thought about using failure enums as a substitute: they can indicate a specific error that can be tested for and coded against. This is sometimes useful within the business tier itself as a way of passing the specifics of a failure via method returns, rather than just a boolean, but it is not so good for serialization scenarios. Is there a well-worn pattern for this? What do people think? Thanks.
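    One way to square the two concerns above, sketched here with invented type and message names rather than any established library: the business tier returns only a machine-readable failure code, and a thin presentation-side map owns the display strings (which could just as well come from resources or a database for localization).

    using System.Collections.Generic;

    // Business tier: no display strings, just data and a machine-readable failure code.
    public enum ClearUserAccountsFailure
    {
        None = 0,
        AccountsLocked,
        NoAccountsFound,
        PermissionDenied
    }

    public class ClearUserAccountsResult
    {
        public bool Success { get; private set; }
        public ClearUserAccountsFailure Failure { get; private set; }

        public ClearUserAccountsResult(bool success, ClearUserAccountsFailure failure)
        {
            Success = success;
            Failure = failure;
        }
    }

    // Presentation side: owns the human-readable (and localizable) text.
    public static class FailureMessages
    {
        private static readonly Dictionary<ClearUserAccountsFailure, string> Messages =
            new Dictionary<ClearUserAccountsFailure, string>
            {
                { ClearUserAccountsFailure.AccountsLocked,   "Some accounts are locked and could not be cleared." },
                { ClearUserAccountsFailure.NoAccountsFound,  "There were no accounts to clear." },
                { ClearUserAccountsFailure.PermissionDenied, "You do not have permission to clear accounts." }
            };

        public static string For(ClearUserAccountsFailure failure)
        {
            string message;
            return Messages.TryGetValue(failure, out message)
                ? message
                : "The operation could not be completed.";
        }
    }

    The enum still serializes cleanly over WCF as part of the result, and the expensive, changeable text lives in exactly one layer.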

    Read the article

  • What is a good strategy for binding view objects to model objects in C++?

    - by B.J.
    Imagine I have a rich data model that is represented by a hierarchy of objects. I also have a view hierarchy with views that can extract required data from model objects, display the data, and allow the user to manipulate it. There could actually be multiple view hierarchies that represent and manipulate the model (e.g. an overview-detail view and a direct-manipulation view).

    My current approach is for the controller layer to store a reference to the underlying model object in the view object. The view object can then get the current data from the model for display, and can send the model object messages to update the data. View objects are effectively observers of the model objects, and the model objects broadcast notifications when properties change. This approach allows all the views to update simultaneously when any view changes the model. Implemented carefully, this all works.

    However, it does require a lot of work to ensure that no view or model objects hold any stale references to model objects. The user can delete model objects or sub-hierarchies of the model at any time. Ensuring that all the view objects holding references to deleted model objects are cleaned up is time-consuming and difficult.

    It feels like the approach I have been taking is not especially clean; while I don't want to have explicit code in the controller layer for mediating the communication between the views and the model, it seems like there must be a better (implicit) approach for establishing bindings between the view and the model, and between related model objects. In particular, I am looking for an approach (in C++) that understands two key points:

    - There is a many-to-one relationship between view and model objects.
    - If the underlying model object is destroyed, all the dependent view objects must be cleaned up so that no stale references exist.

    While shared_ptr and weak_ptr can be used to manage the lifetimes of the underlying model objects and allow for weak references from the view to the model, they don't provide notification of the destruction of the underlying object (they do in the sense that a stale weak_ptr can be detected when used), but I need an approach that notifies the dependent objects that their weak reference is going away. Can anyone suggest a good strategy to manage this?
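    For what it's worth, one possible shape for such a strategy is sketched below; it is a rough illustration rather than a drop-in solution, it assumes the model classes can derive from a common base, and every class and member name is invented. The idea: the model object broadcasts its own destruction as a final notification, and each dependent view clears its reference in response.

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>

    // Base class for model objects: observers register callbacks and receive a
    // final notification when the model object is destroyed.
    class ModelObject {
    public:
        using Token = std::size_t;

        virtual ~ModelObject() {
            for (auto& entry : destroyed_) entry.second();   // tell every dependent view
        }

        Token onChanged(std::function<void()> cb)   { changed_[nextToken_] = std::move(cb);   return nextToken_++; }
        Token onDestroyed(std::function<void()> cb) { destroyed_[nextToken_] = std::move(cb); return nextToken_++; }
        void disconnect(Token t) { changed_.erase(t); destroyed_.erase(t); }

    protected:
        void notifyChanged() { for (auto& entry : changed_) entry.second(); }

    private:
        std::map<Token, std::function<void()>> changed_;
        std::map<Token, std::function<void()>> destroyed_;
        Token nextToken_ = 0;
    };

    class Person : public ModelObject {
    public:
        void setName(std::string name) { name_ = std::move(name); notifyChanged(); }
        const std::string& name() const { return name_; }
    private:
        std::string name_;
    };

    // A view holds a raw pointer plus its registration tokens; when the model
    // dies it clears the pointer, so no stale reference survives.
    class NameLabel {
    public:
        explicit NameLabel(Person* model) : model_(model) {
            changedToken_   = model_->onChanged([this]   { redraw(); });
            destroyedToken_ = model_->onDestroyed([this] { model_ = nullptr; redraw(); });
        }
        ~NameLabel() {
            if (model_) { model_->disconnect(changedToken_); model_->disconnect(destroyedToken_); }
        }
        void redraw() const {
            std::cout << (model_ ? model_->name() : std::string("<no data>")) << "\n";
        }
    private:
        Person* model_;
        ModelObject::Token changedToken_{}, destroyedToken_{};
    };

    int main() {
        auto person = std::make_unique<Person>();
        NameLabel label(person.get());
        person->setName("Ada");   // label redraws with "Ada"
        person.reset();           // label is told the model is gone and clears its pointer
        label.redraw();           // prints "<no data>", no dangling access
    }

    In practice this is essentially what signal/slot libraries with automatic connection management give you, so an existing implementation of that idea may be preferable to hand-rolling it.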

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system. What I'm looking for is a clean, elegant way to:

    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependents. I feel that this naive approach is flawed and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:

    public class Model : INotifyPropertyChanged
    {
        private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

        // each setter triggers a "PropertyChanged" event
        public double? Height { get; set; }
        public double? Weight { get; set; }
        public double? BMI { get; set; }

        public Model()
        {
            _calculationManager.DefineDependency<double?>(
                forProperty: model => model.BMI,
                usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                withInputs: model => model.Height, model => model.Weight);
        }

        // INotifyPropertyChanged implementation here
    }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:

    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deals with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit – more context: the model currently exists in an Excel spreadsheet and is being migrated to a C# application. It is run on demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve the variables that the business is interested in, given current inputs from the markets and model parameters set by the business.
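    One direction worth considering, sketched below with invented names and a deliberately simple API: drop INotifyPropertyChanged entirely and model the system as a small dependency graph of named variables, where each calculated variable declares its inputs and is refreshed in definition order whenever an input changes. A usage comment mirroring the BMI example follows the class.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Minimal sketch of a dependent-variable system: variables are either inputs
    // or formulas over earlier-defined variables, and changing an input
    // recalculates the affected formulas in definition order.
    public class CalculationGraph
    {
        private readonly Dictionary<string, double> _values = new Dictionary<string, double>();
        private readonly List<string> _calculationOrder = new List<string>();
        private readonly Dictionary<string, string[]> _inputsOf = new Dictionary<string, string[]>();
        private readonly Dictionary<string, Func<IDictionary<string, double>, double>> _formulas =
            new Dictionary<string, Func<IDictionary<string, double>, double>>();

        public void DefineInput(string name, double initialValue)
        {
            _values[name] = initialValue;
        }

        // A formula may only reference variables that are already defined, so the
        // definition order is automatically a valid recalculation order.
        public void DefineCalculated(string name, string[] inputs,
                                     Func<IDictionary<string, double>, double> formula)
        {
            _inputsOf[name] = inputs;
            _formulas[name] = formula;
            _calculationOrder.Add(name);
            _values[name] = formula(_values);
        }

        public double Get(string name)
        {
            return _values[name];
        }

        public void SetInput(string name, double value)
        {
            _values[name] = value;

            // Walk the calculated variables in definition order and refresh any
            // whose inputs (direct or transitive) include the changed variable.
            var affected = new HashSet<string> { name };
            foreach (var calculated in _calculationOrder)
            {
                if (_inputsOf[calculated].Any(affected.Contains))
                {
                    _values[calculated] = _formulas[calculated](_values);
                    affected.Add(calculated);
                }
            }
        }
    }

    // Usage, mirroring the BMI example from the question:
    //   var graph = new CalculationGraph();
    //   graph.DefineInput("Height", 1.80);
    //   graph.DefineInput("Weight", 75.0);
    //   graph.DefineCalculated("BMI", new[] { "Height", "Weight" },
    //       v => v["Weight"] / Math.Pow(v["Height"], 2));
    //   graph.SetInput("Weight", 80.0);   // BMI is recalculated automatically
    //   double bmi = graph.Get("BMI");

    This is essentially the spreadsheet recalculation model the Excel original already uses, which also suggests that reactive or dataflow libraries (or indeed F#) fit the problem well.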

    Read the article

  • OData – The easiest service I can create

    - by Jon Dalberg
    I wanted to create an OData service with the least amount of code, so I fired up Visual Studio and got cracking. I decided to serve up a list of naughty words and make them read-only.

    Create a new web project. I created an empty MVC 2 application, but MVC is not required for OData.

    Add a new WCF Data Service to the project. I named mine NastyWords.svc since I'm serving up a list of nasty words.

    Add a class to expose via the service: NastyWord

    [DataServiceKey("Word")]
    public class NastyWord
    {
        public string Word { get; set; }
    }

    I need to be able to uniquely identify instances of NastyWord for the DataService, so I used the DataServiceKey attribute with the "Word" property as the key. I could have added an "ID" property, which would have uniquely identified them and would then not need the DataServiceKey attribute, because the DataService would apply some reflection and heuristics to guess at which property would be the unique identifier. However, the words themselves are unique, so adding an "ID" property would be redundantly repetitive.

    Then I created a data source to expose my NastyWord objects to the service. This is just a simple class with IQueryable<T> properties exposing the entities for my service:

    public class NastyWordsDataSource
    {
        private static IList<NastyWord> words = new List<NastyWord>
        {
            new NastyWord{ Word="crap"},
            new NastyWord{ Word="darn"},
            new NastyWord{ Word="hell"},
            new NastyWord{ Word="shucks"}
        };

        public NastyWordsDataSource()
        {
            NastyWords = words.AsQueryable();
        }

        public IQueryable<NastyWord> NastyWords { get; private set; }
    }

    Now I can go to the NastyWords.svc class and tell it which data source to use and which entities to expose:

    public class NastyWords : DataService<NastyWordsDataSource>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }

    Compile, browse to my NastyWords.svc, and weep with joy. Now I can query my service just like any other OData service. Next time, I'll modify this service to allow updates to be sent so I can build up my list of nasty words. Enjoy!
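    As a hedged usage sketch (the host name is a placeholder and the NastyWord class is reused from the excerpt above), the finished service can be queried straight from a browser with the usual OData URL conventions, or from C# with the WCF Data Services client:

    using System;
    using System.Data.Services.Client;   // WCF Data Services client library

    class ReadNastyWords
    {
        static void Main()
        {
            // Plain HTTP also works, e.g.:
            //   http://localhost/NastyWords.svc/NastyWords
            //   http://localhost/NastyWords.svc/NastyWords?$filter=Word eq 'darn'
            var context = new DataServiceContext(new Uri("http://localhost/NastyWords.svc"));

            var words = context.Execute<NastyWord>(new Uri("NastyWords", UriKind.Relative));
            foreach (var w in words)
                Console.WriteLine(w.Word);
        }
    }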

    Read the article
