Search Results

Search found 842 results on 34 pages for 'greg bray'.

Page 25 of 34

  • Exposed onsite vs IFD deployments for MS Dynamics CRM

    - by Greg McGuffey
    I'm working for the first time on an MS Dynamics CRM 4.0 project. Our company has a high number of remote employees and even more remote consultants, so it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:
    1. Have everyone use a VPN to access an intranet site (the typical onsite deployment). However, we have found that VPNs are far from trouble-free and cause many support issues, so we avoid them like the plague.
    2. Use IFD to expose the CRM on the internet. I don't know much about this except that the URL will be different from the onsite URL, which could cause some headaches (see below).
    3. Expose the CRM site by opening it to the internet, using SSL to encrypt traffic. We currently do this with our MS SharePoint sites. I'm not sure how secure this would be (one of the reasons for this question).
    I'd like to avoid using both the onsite intranet deployment and the IFD together, for a couple of reasons. First, one of the requests for the solution is to use email to notify users that they've been assigned a task, and to include the URL of the task within the email; if both deployments are used, I'll need to include two URLs and the user would need to know which to use. Second, the main users of the solution split their time between being in the office and being remote, so they would need to access the solution in two different ways and know when to use which. Bad. So, what are the advantages/disadvantages of any of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues? Thanks!

    Read the article

  • extension methods with generics - when does caller need to include type parameters?

    - by Greg
    Hi, is there a rule for knowing when the caller has to pass the generic type parameters explicitly when calling an extension method? For example, in the Program class below, why can I (a) omit the type parameters for top.AddNode(node), but (b) have to supply them later on the top.AddRelationship line?

        class Program
        {
            static void Main(string[] args)
            {
                // Create graph
                var top = new TopologyImp<string>();

                // Add nodes
                var node = new StringNode();
                node.Name = "asdf";
                var node2 = new StringNode();
                node2.Name = "test child";
                top.AddNode(node);
                top.AddNode(node2);

                top.AddRelationship<string, RelationshipsImp>(node, node2);   // *** HERE ***
            }
        }

        public static class TopologyExtns
        {
            public static void AddNode<T>(this ITopology<T> topIf, INode<T> node)
            {
                topIf.Nodes.Add(node.Key, node);
            }

            public static INode<T> FindNode<T>(this ITopology<T> topIf, T searchKey)
            {
                return topIf.Nodes[searchKey];
            }

            public static void AddRelationship<T, R>(this ITopology<T> topIf, INode<T> parentNode, INode<T> childNode)
                where R : IRelationship<T>, new()
            {
                var rel = new R();
                rel.Child = childNode;
                rel.Parent = parentNode;
            }
        }

        public class TopologyImp<T> : ITopology<T>
        {
            public Dictionary<T, INode<T>> Nodes { get; set; }

            public TopologyImp()
            {
                Nodes = new Dictionary<T, INode<T>>();
            }
        }
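
    Editor's note: the behaviour comes down to C# type inference, which works only from the method's parameters and is all-or-nothing. AddNode<T> mentions T in both parameters, so T is inferred from the arguments; AddRelationship<T, R> never uses R in any parameter, so R cannot be inferred, and because there is no partial inference, T must then be written out as well. A minimal, self-contained sketch of the rule (class and method names here are invented for illustration):

        using System;
        using System.Collections.Generic;

        static class InferenceDemo
        {
            // T appears in a parameter, so the compiler can infer it from the argument.
            public static void Inferable<T>(T value)
            {
                Console.WriteLine(typeof(T));
            }

            // R never appears in a parameter, only in the constraint and the body,
            // so there is nothing to infer it from - and C# has no partial inference,
            // so T must then be written out explicitly too.
            public static R NotInferable<T, R>(T seed) where R : new()
            {
                return new R();
            }
        }

        class Demo
        {
            static void Main()
            {
                InferenceDemo.Inferable(42);                        // T = int, inferred
                // InferenceDemo.NotInferable(42);                  // CS0411: cannot infer R
                var made = InferenceDemo.NotInferable<int, List<string>>(42);
                Console.WriteLine(made.Count);                      // 0
            }
        }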

    Read the article

  • What setting needs to be made to make .Net Automation responsive?

    - by Greg
    I have an app that watches for application windows being created on the desktop, using roughly this class:

        class Unresponsive
        {
            private StructureChangedEventHandler m_UIAeventHandler;

            public Unresponsive()
            {
                m_UIAeventHandler = new StructureChangedEventHandler(OnStructureChanged);
                Automation.AddStructureChangedEventHandler(
                    AutomationElement.RootElement, TreeScope.Children, m_UIAeventHandler);
            }

            private void OnStructureChanged(object sender, StructureChangedEventArgs e)
            {
                Debug.WriteLine("Change event");
            }
        }

    You can see the same issue in UISpy.exe by selecting the desktop and scoping to children with just the structure-changed event. The problem I'm trying to resolve is that the events are not raised in a timely manner; there seems to be some grouping/delay that makes the app appear non-responsive. If you start a new app with one window and wait a second, you get the event - that seems all right. But if you start the same app several times without a delay (say, by clicking a Quick Launch icon repeatedly), you don't get the notice for the first app until all of the instances have been 'initialised' by the AutomationProxies (and then the other apps/windows follow in short order). I've sat watching Task Manager as each instance of the app grows while it is initialised, waiting until the last app is done, and only then do the events all come in. Similarly, any time several apps are starting windows within the same timeframe there seems to be some blocking. I can't see how to configure this timeframe, or how to get each structure-changed event sent on as soon as it happens. Also, this process of listening for structure-changed events seems to leak: just by listening there is a leak in native memory (visible in UISpy and in my app).
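
    Editor's note: a common mitigation, offered as a sketch under assumptions rather than a documented cure for the proxy batching described above, is to keep the UIA callback trivial and hand the real work to your own thread, and to unsubscribe explicitly when done (relevant to the native-memory leak mentioned at the end). If you follow the usual advice of adding/removing UIA handlers away from the UI thread, construct this watcher on a background thread.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Windows.Automation;

        class QueuedWatcher : IDisposable
        {
            private readonly BlockingCollection<AutomationElement> _queue = new BlockingCollection<AutomationElement>();
            private readonly StructureChangedEventHandler _handler;
            private readonly Thread _worker;

            public QueuedWatcher()
            {
                _handler = OnStructureChanged;
                Automation.AddStructureChangedEventHandler(
                    AutomationElement.RootElement, TreeScope.Children, _handler);

                _worker = new Thread(Drain) { IsBackground = true };
                _worker.Start();
            }

            // Keep the UIA callback as cheap as possible: just enqueue and return.
            private void OnStructureChanged(object sender, StructureChangedEventArgs e)
            {
                if (e.StructureChangeType == StructureChangeType.ChildAdded)
                    _queue.Add((AutomationElement)sender);
            }

            // All slow work (reading properties, matching windows, logging) happens
            // here, off the UIA event thread.
            private void Drain()
            {
                foreach (AutomationElement element in _queue.GetConsumingEnumerable())
                {
                    try { Console.WriteLine("Child added: " + element.Current.Name); }
                    catch (ElementNotAvailableException) { /* window already gone */ }
                }
            }

            public void Dispose()
            {
                // Unsubscribe when finished; leaving the handler registered keeps
                // the listener (and its native resources) alive.
                Automation.RemoveStructureChangedEventHandler(AutomationElement.RootElement, _handler);
                _queue.CompleteAdding();
            }
        }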

    Read the article

  • How do you write a consistent UI Automation for MS? MSAA & UI Automation don't seem to overlap.

    - by Greg
    Working on a general automation tool, I'm considering moving from Win32 message hooks to .NET UI Automation. However, the feature set of UI Automation doesn't cover everything we have in Win32, and it still doesn't seem to support all of the GUIs on Windows. One such example is Windows Live Messenger: Windows Live Messenger 2009 still uses the older DirectUIHwnd to draw its GUI. This means you can't send Windows messages to the controls, because the controls don't have their own HWNDs. It also seems to defeat the new .NET UI Automation framework, even though the documentation seems to suggest the two can be bridged (the "UI Automation and Microsoft Active Accessibility" document). Looking at MS Accessibility pointed me to the Active Accessibility 2.0 SDK tools, which showed that MSAA can interact with the contents. Is there some trick to getting the older MSAA technology, which UI Automation seems to be trying to replace, to actually work with UI Automation? I'd rather not have multiple solutions trying to automate the same windows; for windows unlike Windows Live Messenger, each of these techniques is valid and will work.

    Read the article

  • Why does Int32.MaxValue * Int32.MaxValue == 1 ???

    - by Greg Balajewicz
    OK, I know Int32.MaxValue * Int32.MaxValue will yield a number larger than an Int32 can hold. BUT shouldn't this statement raise some kind of exception? I ran across this when doing something like if (X * Y > Z) where all three are Int32. If X and Y are sufficiently large, you get a bogus value from X * Y. Why is this so, and how do I get around it, besides casting everything to Int64?
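
    Editor's note: C# integer arithmetic is unchecked by default, so Int32 multiplication silently wraps modulo 2^32; (2^31 - 1)^2 = 2^62 - 2^32 + 1, which is 1 modulo 2^32, hence the surprising result. A short sketch of the usual ways to detect or avoid the overflow:

        using System;

        class OverflowDemo
        {
            static void Main()
            {
                int x = int.MaxValue;
                int y = int.MaxValue;

                // Default (unchecked) arithmetic silently wraps: (2^31 - 1)^2 mod 2^32 == 1.
                Console.WriteLine(unchecked(x * y));            // 1

                // Widen before multiplying when you just need the correct product.
                long wide = (long)x * y;
                Console.WriteLine(wide);                        // 4611686014132420609

                // Ask the runtime to police it when an overflow should be an error.
                try
                {
                    int boom = checked(x * y);
                    Console.WriteLine(boom);
                }
                catch (OverflowException)
                {
                    Console.WriteLine("checked multiply overflowed");
                }
            }
        }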

    Read the article

  • Source Lookup Path is correct but debugger can't find file (Eclipse EE IDE)?

    - by Greg McNulty
    When debugging, stepping over each line works, but stepping into a function located in another file makes the debugger display "Source not found." It also offers an "Edit Source Lookup Path..." option, but the correct package is already listed there (I also tried pointing it at the directory path). No other breakpoints are set (removing stray breakpoints being a commonly suggested fix). Any pointer in the right direction is helpful. Thank you. Thread [main] in the debugger window:

        Thread [main] (Suspended)
            ClassNotFoundException(Throwable).<init>(String, Throwable) line: 217
            ClassNotFoundException(Exception).<init>(String, Throwable) line: not available
            ClassNotFoundException.<init>(String) line: not available
            URLClassLoader$1.run() line: not available
            AccessController.doPrivileged(PrivilegedExceptionAction<T>, AccessControlContext) line: not available [native method]
            Launcher$ExtClassLoader(URLClassLoader).findClass(String) line: not available
            Launcher$ExtClassLoader.findClass(String) line: not available
            Launcher$ExtClassLoader(ClassLoader).loadClass(String, boolean) line: not available
            Launcher$AppClassLoader(ClassLoader).loadClass(String, boolean) line: not available
            Launcher$AppClassLoader.loadClass(String, boolean) line: not available
            Launcher$AppClassLoader(ClassLoader).loadClass(String) line: not available
            MyMain.<init>() line: 24
            MyMain.main(String[]) line: 36

    Read the article

  • How can I receive mouse events when a wrapped control has set capture?

    - by Greg
    My WndProc isn't seeing mouse-up notifications when I click with a modifier key (Shift or Ctrl) pressed. I see them without the modifier key, and I see mouse-down notifications with the modifier keys. I'm trying to track user actions in a component I didn't write, so I'm using the Windows Forms NativeWindow wrapper (wrapping the component) to get Windows messages through its WndProc() method. I've tried tracking the notifications I do get, and the only clue I see is WM_CAPTURECHANGED. I've also tried calling SetCapture when I receive the WM_LBUTTONDOWN message, but it doesn't help.

    Without a modifier (skipping paint, timer and NCHITTEST messages):

        WM_PARENTNOTIFY
        WM_MOUSEACTIVATE
        WM_MOUSEACTIVATE
        WM_SETCURSOR
        WM_LBUTTONDOWN
        WM_SETCURSOR
        WM_MOUSEMOVE
        WM_SETCURSOR
        WM_LBUTTONUP

    With a modifier (skipping paint, timer and NCHITTEST messages):

        WM_KEYDOWN
        WM_PARENTNOTIFY
        WM_MOUSEACTIVATE
        WM_MOUSEACTIVATE
        WM_SETCURSOR
        WM_LBUTTONDOWN
        WM_SETCURSOR (repeats)
        WM_KEYDOWN (repeats)
        WM_KEYUP

    If I hold the mouse button down for a long time, I can usually get a WM_LBUTTONUP notification, but it should be possible to make this more responsive. Edit: I tried control-clicking outside the component of interest and moving the cursor into it before releasing the mouse button, and then I do get a WM_LBUTTONUP notification, so it looks like the component captures the mouse on mouse-down. Is there any way to receive that notification when another window has captured the mouse? Thanks.
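
    Editor's note: one avenue worth sketching, offered as an assumption rather than something verified against this particular component: mouse messages delivered to the capturing window still pass through the same thread's message loop, so a Windows Forms IMessageFilter installed with Application.AddMessageFilter sees WM_LBUTTONUP regardless of which HWND it is addressed to.

        using System;
        using System.Windows.Forms;

        // Observes mouse-ups and capture changes before messages are dispatched
        // to any window on this UI thread.
        class MouseUpFilter : IMessageFilter
        {
            private const int WM_LBUTTONUP = 0x0202;
            private const int WM_CAPTURECHANGED = 0x0215;

            public bool PreFilterMessage(ref Message m)
            {
                if (m.Msg == WM_LBUTTONUP)
                    Console.WriteLine("Mouse up in window 0x" + m.HWnd.ToString("X"));
                else if (m.Msg == WM_CAPTURECHANGED)
                    Console.WriteLine("Capture moved to window 0x" + m.LParam.ToString("X"));

                return false;   // never swallow the message, just observe it
            }
        }

        // Typically installed once at startup:
        //     Application.AddMessageFilter(new MouseUpFilter());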

    Read the article

  • log4net - why would the same MyLog.Debug line not work at one point of startup, but work at another

    - by Greg
    Hi, during startup of my WinForms application I've noticed that there are a couple of points (before the MainForm renders) that call MyDataSet.GetInstance(). For the first one the MyLog.Debug line does not come through in the VS2008 output window, but for a later one it does work and come through. What could explain this? What could I check at debug time to see why the output for a MyLog.Debug line doesn't come out in the output window?

        namespace IntranetSync
        {
            public class MyDataSet
            {
                private static readonly ILog MyLog = LogManager.GetLogger(typeof(MyDataSet));

                public static MyDataSet GetInstance()
                {
                    MyLog.Debug("MyDataSet GetInstance() =====================================");
                    if (myDataSet == null)
                    {
                        myDataSet = new MyDataSet();
                    }
                    return myDataSet;
                }
                . . .

    PS. What I have been doing regarding log4net repository initialization is putting the following line as a private variable in the classes that use logging - is this OK?

        static class Program
        {
            private static readonly ILog MyLog = LogManager.GetLogger(typeof(MainForm));
            . . .

        public class Coordinator
        {
            private static readonly ILog MyLog = LogManager.GetLogger(typeof(MainForm));
            . . .

        public class MyDataSet
        {
            private static readonly ILog MyLog = LogManager.GetLogger(typeof(MyDataSet));
            . . .
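
    Editor's note: a common explanation (an assumption here, since the configuration code isn't shown) is that the earlier GetInstance() call runs before log4net has been configured; an unconfigured repository has no appenders, so Debug calls made before configuration are silently dropped. Declaring a static ILog per class is the usual pattern, but each class would normally pass its own type to GetLogger (the Coordinator snippet above passes typeof(MainForm)), and configuration should happen once, as early as possible in Main. A minimal sketch of that startup order:

        using log4net;
        using log4net.Config;

        // Illustrative startup class, not the project's actual Program.cs.
        static class Program
        {
            // Use the declaring type so log lines are attributed to the right class.
            private static readonly ILog MyLog = LogManager.GetLogger(typeof(Program));

            static void Main()
            {
                // Configure the repository before the first logger is used; until
                // this runs, Debug/Info calls go nowhere because no appender exists.
                XmlConfigurator.Configure();

                MyLog.Debug("log4net configured, starting up");
                // Application.Run(new MainForm()); ...
            }
        }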

    Read the article

  • Is Lightweight Code Generation (LCG) dead?

    - by Greg Beech
    In the .NET 2.0-3.5 frameworks, LCG (aka the DynamicMethod class) was a decent way to emit lightweight methods at runtime when no class structure was needed to support them. In .NET 4.0, expression trees now support statements and blocks, and as such appear to provide sufficient functionality to build just about any functionality you could require from such a method, and can be constructed in a much easier and safer way than directly emitting CIL op-codes. (This statement is borne from today's experimentation of converting some of our most complex LCG code to use expression tree building and compilation instead.) So is there any reason why one would use LCG in any new code? Is there anything it can do that expression trees cannot? Or is it now a 'dead' piece of functionality?
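
    Editor's note: to make the comparison concrete, here is a minimal sketch of the same two-int add built both ways. Neither requires a containing class for the generated method, but only the LCG route deals in raw opcodes; the expression-tree route is typed and checked as the tree is built.

        using System;
        using System.Linq.Expressions;
        using System.Reflection.Emit;

        class EmitVsExpressions
        {
            static void Main()
            {
                // LCG: hand-rolled IL for (a, b) => a + b.
                var dm = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
                ILGenerator il = dm.GetILGenerator();
                il.Emit(OpCodes.Ldarg_0);
                il.Emit(OpCodes.Ldarg_1);
                il.Emit(OpCodes.Add);
                il.Emit(OpCodes.Ret);
                var addIl = (Func<int, int, int>)dm.CreateDelegate(typeof(Func<int, int, int>));

                // .NET 4.0 expression tree for the same method: no opcodes to get wrong.
                ParameterExpression a = Expression.Parameter(typeof(int), "a");
                ParameterExpression b = Expression.Parameter(typeof(int), "b");
                Func<int, int, int> addExpr =
                    Expression.Lambda<Func<int, int, int>>(Expression.Add(a, b), a, b).Compile();

                Console.WriteLine(addIl(2, 3));    // 5
                Console.WriteLine(addExpr(2, 3));  // 5
            }
        }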

    Read the article

  • Windows Question: RunOnce/Second Boot Issues [closed]

    - by Greg
    Moved to Super User: Windows Question: RunOnce/Second Boot Issues. I am attempting to create a Windows XP SP3 image that will run my application on second boot. Here is the intended workflow:
    1. Run an image prep utility (I wrote) on Windows to add my RunOnce entries and clean a few things up.
    2. Reboot to Ghost and make the image file.
    3. Package it into my ISO and distribute.
    4. The system is imaged by the user.
    5. On first boot, about 5 things run, one of which is a driver updater (I wrote) for my own specific devices.
    6. One of the entries inside HKCU/../runonce is a reg file, which adds another key to HKLM/../runonce. This is how second boot is acquired.
    7. As a result of the driver updater, the user is prompted to reboot.
    8. My application is then launched from HKLM/../runonce on second boot.
    This workflow works perfectly, except for a select few legacy systems that contain devices that cause the Add Hardware wizard to pop up, and that is when I begin to see problems. It's important to note that if I manually inspect the registry after the Add Hardware wizard pops up, it appears exactly as I would expect: all the first-boot scripts have run, and it is sitting in the state I would expect for a second-boot scenario. The problem comes when I click Next on the Add Hardware wizard: it seems to re-run the single entry I've added and re-executes the RunOnce scripts (only one script now, since the initial entries have already executed and been cleared out). This causes my application to open as if it were second boot, but only when Next is clicked on the Add Hardware wizard. If I click Cancel and reboot, then it also works as expected. I don't care as much about other solutions, because I could design a system that doesn't fully rely on Microsoft's registry; I simply can't find any information as to WHY this is happening. I believe this is some type of Microsoft issue that's presenting itself as a result of an overstretched image that's expected to support too many legacy platforms, but any help that can be provided would be appreciated. Thanks,

    Read the article

  • Rails: common approach for handling exceptions in RESTful actions on objects that have been destroyed

    - by Greg
    It is very common in Rails for an objects_controller controller to have RESTful edit and destroy actions like so:

        def edit
          @object = Object.find(params[:id])
        end

        def destroy
          @object = Object.find(params[:id])
          @object.destroy
          redirect_to :back
        end

    With an associated view that provides edit and destroy links like so:

        <%= link_to "Edit the Object", edit_object_path(object) %>
        <%= link_to "Delete", object, :confirm => 'Are you sure?', :method => :delete %>

    And it is easy to blow this up. If I open two browser windows, A and B, destroy an object with the "Delete" link in browser A and then press the "Edit" link in browser B, the find() in the edit action throws an exception. Obviously there are several ways to deal with this in the edit action:
    - catch the exception and recover gracefully
    - use @object = find(:first, "conditions... etc. and test the @object before going further
    But seeing as this is such a common pattern, I would love to know how other folks deal with this situation.

    Read the article

  • iPhone: detect "touch-and-drag" gesture from UIBarButtonItem?

    - by Greg Maletic
    I have an "add" button that's represented by a UIBarButtonItem. Hitting the "add" button adds an object into a list that represents a moment in time. By default, that time is "now"...but I'd like to be able to use dragging behavior to let the user specify earlier times for the object. Here's the behavior I want to implement: If the user touches on the UIBarButtonItem and lets go quickly, an object is added to the list that represents "now." If the user touches on the UIBarButtonItem and drags, a little UI pops up that shows the time that the distance of their drag represents. The further they drag, the further back in time their touch will represent. When they let go, the object representing an earlier time will get added to the list. (Though the description of the behavior is complicated, I'm convinced this will be pretty intuitive for users of the app.) I haven't implemented code for anything but the most simple touches in the past, and I'm at a loss as to the best way to try this. Does anyone have any suggestions, or could point me towards some sample code that implements something like this? Thanks very much.

    Read the article

  • Manual metrics and treemap components

    - by Greg
    I have a problem with SonarQube. I use the web API to inject manual metric values for a project, like this:

        curl -u name:password -d "resource=<project>&metric=<metric key>&val=<value>" http://localhost:8081/sonar/api/manual_measures

    One of these metrics is a percentage, and it is declared as a percentage value in Sonar in the Settings > Manual Metrics window. I have a project with components, and the project and each component have a value for this metric. When I show this metric as the color metric in a "Treemap of components" widget, the whole treemap is grey (as if the values were not defined). But if I hover over the name of a component in the treemap, I do see the color metric value as a percentage, like this:

        myComponent - ncloc: 800 - myMetric: 84,0%

    Moreover, the metric color scale does not appear in the treemap title (after "Size: ncloc Color: <my metric>").

    Read the article

  • Domain model for an optional many-many relationship

    - by Greg
    Let's say I'm modeling phone numbers. I have one entity for PhoneNumber and one for Person. There's a link table that expresses the link (if any) between a PhoneNumber and a Person. The link table also has a DisplayOrder field. When accessing my domain model, I have several use cases for viewing a Person: I can look at them without any PhoneNumber information, I can look at them for a specific PhoneNumber, or I can look at them and all of their current (or past) PhoneNumbers. I'm trying to model Person, not only for the standard CRUD operations, but for the (un)assignment of PhoneNumbers to a Person. I'm having trouble expressing the relationship between the two, especially with respect to the DisplayOrder property. I can think of several solutions but I'm not sure which (if any) would be best:
    1. A PhoneNumberPerson class that has a Person and a PhoneNumber property (most closely resembles the database design)
    2. A PhoneCarryingPerson class that inherits from Person and has a PhoneNumber property
    3. A PhoneNumber and/or PhoneNumbers property on Person (and vice versa, a Person property on PhoneNumber)
    What would be a good way to model this that makes sense from a domain model perspective? How do I avoid misplaced properties (DisplayOrder on Person) or conditionally populated properties?
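
    Editor's note: for reference, a minimal sketch of option 1 from the list above, with a dedicated link entity carrying DisplayOrder so that neither Person nor PhoneNumber holds a property that only makes sense for the association. All class and property names here are illustrative, not taken from the question.

        using System.Collections.Generic;
        using System.Linq;

        // The link entity owns the relationship-specific data (DisplayOrder).
        public class PhoneNumberAssignment
        {
            public Person Person { get; set; }
            public PhoneNumber PhoneNumber { get; set; }
            public int DisplayOrder { get; set; }
        }

        public class Person
        {
            public string Name { get; set; }

            public ICollection<PhoneNumberAssignment> PhoneAssignments { get; } =
                new List<PhoneNumberAssignment>();

            // Ordered view for the "person with all their numbers" use case.
            public IEnumerable<PhoneNumber> PhoneNumbersInDisplayOrder =>
                PhoneAssignments.OrderBy(a => a.DisplayOrder).Select(a => a.PhoneNumber);
        }

        public class PhoneNumber
        {
            public string Number { get; set; }
        }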

    Read the article

  • How to interpret Objective-C errors?

    - by Greg Maletic
    I'm getting the following error: 2010-05-11 17:46:28.475 MyApp[54112:5e1b] bool _WebTryThreadLock(bool), 0x140faa0: Tried to obtain the web lock from a thread other than the main thread or the web thread. This may be a result of calling to UIKit from a secondary thread. Crashing now... Is there any way for me to figure out where [54112:5e1b] is in my code, so I can try to narrow down the error? Thanks.

    Read the article

  • can QuickGraph support these requirements? (includes database persistence support)

    - by Greg
    Hi, would QuickGraph be able to help me out with my requirements below?

    (a) I want to model a graph of nodes and directional relationships between nodes - for example, to model web pages/files linked under a URL, or to model IT infrastructure and the dependencies between hardware/software. The library would include methods such as:
    - Node.GetDirectParents() (i.e. there could be more than one direct parent for a node)
    - Node.GetRootParents() (i.e. traverse the tree to the top root parent(s) for the given node)
    - Node.GetDirectChildren()
    - Node.GetAllChildren()

    (b) I have to persist the data to a database - so it should support SQL Server and ideally SQLite as well.

    If it does support these requirements then I'd love to hear: any pointers to parts of QuickGraph to dig into? And what is the best usage concept with respect to database persistence - is it a simpler design to assume every search/method works directly on the database, or does QuickGraph have the smarts to work in memory and then save all changes to the database at an appropriate point in time (e.g. like ADO.NET does with DataTable etc.)? Thanks in advance
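
    Editor's note: on (a), a rough sketch from memory of the QuickGraph 3.x API (treat the exact names as an assumption and verify against your version): BidirectionalGraph indexes both in- and out-edges, which maps directly onto the parent/child queries listed above. On (b), as far as I know QuickGraph is purely in-memory (it can serialize to GraphML), and it does not ship a SQL Server/SQLite persistence layer, so saving to a database would be your own mapping code.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using QuickGraph;

        class TopologySketch
        {
            static void Main()
            {
                // BidirectionalGraph keeps both in- and out-edge indexes.
                var graph = new BidirectionalGraph<string, Edge<string>>();
                graph.AddVertex("root");
                graph.AddVertex("child");
                graph.AddVertex("grandchild");
                graph.AddEdge(new Edge<string>("root", "child"));
                graph.AddEdge(new Edge<string>("child", "grandchild"));

                // Direct parents / direct children of a node.
                var parents = graph.InEdges("grandchild").Select(e => e.Source);
                var children = graph.OutEdges("root").Select(e => e.Target);
                Console.WriteLine(string.Join(", ", parents));   // child
                Console.WriteLine(string.Join(", ", children));  // child

                // "All children" via a simple breadth-first walk over out-edges
                // (assumes an acyclic topology).
                var all = new List<string>();
                var queue = new Queue<string>(new[] { "root" });
                while (queue.Count > 0)
                {
                    foreach (var e in graph.OutEdges(queue.Dequeue()))
                    {
                        all.Add(e.Target);
                        queue.Enqueue(e.Target);
                    }
                }
                Console.WriteLine(string.Join(", ", all));       // child, grandchild
            }
        }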

    Read the article

  • Contains performs MUCH slower with variable vs constant string MS SQL Server

    - by Greg R
    For some unknown reason, passing a variable to a full-text search in a stored procedure performs many times slower than executing the same statement with a constant value. Any idea why, and how can that be avoided?

    This executes very fast:

        SELECT *
        FROM table
        WHERE CONTAINS (comments, '123')

    This executes very slowly and times out:

        DECLARE @SearchTerm nvarchar(30)
        SET @SearchTerm = '123'
        SET @SearchTerm = '"' + @SearchTerm + '"'

        SELECT *
        FROM table
        WHERE CONTAINS (comments, @SearchTerm)

    Does this make any sense???

    Read the article
