Search Results

Search found 1485 results on 60 pages for 'dan c'.

Page 34/60 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Non Document Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document centric in that the base thing the workflow runs on has to be a thing, be it a document or just a list item. The workflow itself is task based, so stuff a user has to do. Now I can put any sort of code in these tasks that I want to, and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves? The item that the workflow runs on is just some abstract concept like an event. An example could be that an accident has happened. There isn't an accident form, but a whole set of forms that need to be completed by different people. Task forms aren't really a nice way to go, because they lock all the forms into the task list. You can only access the forms by not deleting the tasks when complete and going to the workflow summary and following the task links to the InfoPath forms, or by going straight to the tasks list and filtering on particular "accidents". These are official documents, so ideally there would be a library for each type of document and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form but then have to go back to the task form and click "yes, I've completed it" before the workflow could progress, barring the workflow monitoring the library form for some completion trigger. But then the user experience gets messy: clicking the link in the task email to open the InfoPath task form, clicking the link in the subsequent InfoPath library form, and then returning back through these forms on completion. It just gets messy trying to retrofit this non document centric sort of workflow into SharePoint. I would really appreciate any input on what might be the best way to do this:

      - Store the forms as task forms.
      - Store the forms as library forms and create/link them from the task forms.
      - Store the forms as different InfoPath views and use a forms library; the workflow would set variables that progress the view the InfoPath form shows.
      - Use the same form template for both the task forms and a forms library, and when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow.

    Thanks

    Read the article

  • ListView in ArrayAdapter order gets mixed up when scrolling

    - by Dan B
    Hi, I have a ListView with a custom ArrayAdapter that displays an icon ImageView and a TextView in each row. When I make the list long enough to let you scroll through it, the order starts out right, but when I start to scroll down, some of the earlier entries start re-appearing. If I scroll back up, the old order changes. Doing this repeatedly eventually causes the entire list order to be seemingly random. So scrolling the list is either causing the child order to change, or the drawing is not refreshing correctly. What could cause something like this to happen? I need the order the items are displayed to the user to be the same order they are added to the ArrayList, or at LEAST to remain in one static order. If I need to provide more detailed information, please let me know. Any help is appreciated. Thanks.

    Read the article

  • How to return all aspnet_compiler errors (not just those in first directory)

    - by Dan Atkinson
    Hi there! Is there a way to get the aspnet_compiler to go through all views and return all errors, rather than just the errors in the current view directory? For example, let's say I have a project that has a bunch of folders:

        Views
          Folder1
          Folder2
          Folder3
          Folder4

    Two of them (Folder2 and Folder3) have errors. aspnet_compiler will run, and only return the errors it comes across in Folder2. It won't return those in Folder3 at the same time. Once I fix the errors in Folder2 and run it again, it'll then pick up the ones in Folder3. I fix those. And then I have to run the tool again, and again, until it's all fixed. This is getting annoying!! For reference, here's the command I use:

        C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler -v / -p "C:\path\to\project"

    Thanks in advance!

    Read the article

  • Suppressing PostSharp Multicast with Attribute

    - by Dan Bryant
    I've recently started experimenting with PostSharp and I found a particularly helpful aspect to automate implementation of INotifyPropertyChanged. You can see the example here. The basic functionality is excellent (all properties will be notified), but there are cases where I might want to suppress notification. For instance, I might know that a particular property is set once in the constructor and will never change again. As such, there is no need to emit the code for NotifyPropertyChanged. The overhead is minimal when classes are not frequently instantiated and I can prevent the problem by switching from an automatically generated property to a field-backed property and writing to the field. However, as I'm learning this new tool, it would be helpful to know if there is a way to tag a property with an attribute to suppress the code generation. I'd like to be able to do something like this:

        [NotifyPropertyChanged]
        public class MyClass
        {
            public double SomeValue { get; set; }
            public double ModifiedValue { get; private set; }

            [SuppressNotify]
            public double OnlySetOnce { get; private set; }

            public MyClass()
            {
                OnlySetOnce = 1.0;
            }
        }
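    One way this kind of opt-out is often sketched (a hypothetical sketch only, not PostSharp's actual API; SuppressNotifyAttribute and NotifyFilter are made-up names) is to define a plain marker attribute and have the aspect skip any property that carries it when selecting which setters to instrument:

        using System;
        using System.Reflection;

        // Hypothetical marker attribute: its only job is to be discoverable by the aspect.
        [AttributeUsage(AttributeTargets.Property, AllowMultiple = false, Inherited = false)]
        public sealed class SuppressNotifyAttribute : Attribute
        {
        }

        public static class NotifyFilter
        {
            // However the aspect enumerates the target type's properties, it could call this
            // to decide whether a given property setter should raise PropertyChanged.
            public static bool ShouldNotify(PropertyInfo property)
            {
                return !property.IsDefined(typeof(SuppressNotifyAttribute), inherit: false);
            }
        }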

    Read the article

  • C# Regex on XML string handler

    - by Dan Sewell
    Hi guys. Trying to fiddle around with regex here, my first attempt. I'm trying to extract some figures out of content from an XML tag. The content looks like this:

        www.blahblah.se/maps.aspx?isAlert=true&lat=51.958855252721&lon=-0.517657021473527

    I need to extract the lat and long numerical values out of each link. They will always be the same amount of characters, and the lon may or may not have a "-" sign. I thought about doing something like the code below (the string in question is in the "link" tag):

        var document = XDocument.Load(e.Result);
        if (document.Root == null) return;
        var events = from ev in document.Descendants("item1")
                     select new
                     {
                         Title = (ev.Element("title").Value),
                         Latitude = Regex.xxxxxxx(ev.Element("link").Value, @"lat=(?<Lat>[+-]?\d*\.\d*)", String.Empty),
                         Longitude = Convert.ToDouble(ev.Element("link").Value),
                     };
        foreach (var ev in events)
        {
            // do stuff
        }

    Many thanks!
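    For reference, here is a minimal sketch (assuming the link always carries lat= and lon= in exactly this query-string form) that pulls both values out with Regex.Match and named capture groups, parsing with the invariant culture so "." is always treated as the decimal separator:

        using System;
        using System.Globalization;
        using System.Text.RegularExpressions;

        static class LatLonParser
        {
            // Matches e.g. lat=51.958855252721&lon=-0.517657021473527
            private static readonly Regex LatLon = new Regex(
                @"lat=(?<lat>[+-]?\d+\.\d+)&lon=(?<lon>[+-]?\d+\.\d+)",
                RegexOptions.IgnoreCase);

            public static bool TryParse(string link, out double lat, out double lon)
            {
                lat = lon = 0;
                Match m = LatLon.Match(link);
                if (!m.Success) return false;

                lat = double.Parse(m.Groups["lat"].Value, CultureInfo.InvariantCulture);
                lon = double.Parse(m.Groups["lon"].Value, CultureInfo.InvariantCulture);
                return true;
            }
        }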

    Read the article

  • .NET JAXB equivalent?

    - by Dan
    Is there an equivalent library for JAXB in .NET? I am trying to convert an XML document I receive into a .NET class. I have the XSD, but I'm not sure how to convert the received XML into a concrete class. I used the XSD tool to generate a class from the schema, but what I want is to convert the XML I receive on the fly into an object that I can work with in code. I've seen the thread here that deals with this, but my query is: I want the object created to contain the data that I receive in the XML (i.e. the field values must be populated).
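    For reference, a minimal sketch of the usual .NET pairing: generate the class with the XSD tool (as already done) and then hydrate it from the incoming XML string with XmlSerializer, so the resulting object carries the received values. MyGeneratedType is a hypothetical stand-in for whatever class the XSD tool produced:

        using System.IO;
        using System.Xml.Serialization;

        static class XmlLoader
        {
            // Deserializes the received XML into the generated type, populating its fields.
            public static T FromXml<T>(string xml)
            {
                var serializer = new XmlSerializer(typeof(T));
                using (var reader = new StringReader(xml))
                {
                    return (T)serializer.Deserialize(reader);
                }
            }
        }

        // Usage: MyGeneratedType doc = XmlLoader.FromXml<MyGeneratedType>(receivedXml);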

    Read the article

  • Java3D resetting to a new scene

    - by Dan Howard
    Hi all, I'm working on a game in Java3D. I read all my level info from a file and it works fine. But now I want to re-initialize the scene from reading data from a different file. How do I reset the scene? Should I just destroy the whole canvas3D and universe objects?

    Read the article

  • Convert Decimal to ASCII

    - by Dan Snyder
    I'm having difficulty using reinterpret_cast. Before I show you my code I'll let you know what I'm trying to do. I'm trying to get a filename from a vector full of data being used by a MIPS I processor I designed. Basically what I do is compile a binary from a test program for my processor, dump all the hexes from the binary into a vector in my C++ program, convert all of those hexes to decimal integers and store them in a DataMemory vector, which is the data memory unit for my processor. I also have instruction memory. So when my processor runs a SYSCALL instruction such as "Open File", my C++ operating system emulator receives a pointer to the beginning of the filename in my data memory. So keep in mind that data memory is full of ints, strings, globals, locals, all sorts of stuff. When I'm told where the filename starts I do the following: convert the whole decimal integer element that is being pointed to to its ASCII character representation, and then search from left to right to see if the string terminates; if not, just load each character consecutively into a "filename" string. Do this until termination of the string in memory and then store filename in a table. My difficulty is generating filename from my memory. Here is an example of what I'm trying to do:

        Index  Vector    NewVector  ASCII   filename
        0      240faef0  128123792  'abc7'  'a'
        0      240faef0  128123792  'abc7'  'ab'
        0      240faef0  128123792  'abc7'  'abc'
        0      240faef0  128123792  'abc7'  'abc7'
        1      1234567a  243225     'k2s0'  'abc7k'
        1      1234567a  243225     'k2s0'  'abc7k2'
        1      1234567a  243225     'k2s0'  'abc7k2s'
        //EXIT LOOP//
        1      1234567a  243225     'k2s0'  'abc7k2s'

    Here is the code that I've written so far to get filename (I'm just applying this to element 1000 of my DataMemory vector to test functionality; 1000 is arbitrary):

        int i = 0;
        int step = 1000;//top->a0;
        string filename;
        char *temp = reinterpret_cast<char*>( DataMemory[1000] );//convert to char
        cout << "a0:" << top->a0 << endl;//pointer supplied
        cout << "Data:" << DataMemory[top->a0] << endl;//my vector at pointed to location
        cout << "Data(1000):" << DataMemory[1000] << endl;//the element I'm testing
        cout << "Characters:" << &temp << endl;//my temporary char array

        while(&temp[i]!=0)
        {
            filename+=temp[i];//add most recent non-terminated character to string
            i++;
            if(i==4)//when 4 characters have been added..
            {
                i=0;
                step+=1;//restart loop at the next element in DataMemory
                temp = reinterpret_cast<char*>( DataMemory[step] );
            }
        }
        cout << "Filename:" << filename << endl;

    So the issue is that when I do the conversion of my decimal element to a char array, I assume that 8 hex digits will give me 4 characters. Why isn't this the case? Here is my output:

        a0:0
        Data:0
        Data(1000):4428576
        Characters:0x7fff5fbff128
        Segmentation fault

    Read the article

  • JSON VIEW using GROUP_CONCAT question

    - by Dan Beam
    Hey DBAs and overall smart dudes. I have a question for you. We use MySQL VIEWs to format our data as JSON when it's returned (as a BLOB), which is convenient (though not particularly nice on performance, but we already know this). But I can't seem to get a particular query working right now: each row contains NULL when it should contain a JSON object built from the values of multiple JOINs. Here's the general idea:

        SELECT CONCAT(
            "{",
                "\"some_list\":[", GROUP_CONCAT( DISTINCT t1.id ), "],",
                "\"other_list\":[", GROUP_CONCAT( DISTINCT t2.id ), "],",
            "}"
        ) cool_json
        FROM table_name tn
        INNER JOIN ( some_table st )
            ON st.some_id = tn.id
        LEFT JOIN ( another_table at, another_one ao, used_multiple_times t1 )
            ON st.id = at.some_id
            AND at.different_id = ao.different_id
            AND ao.different_id = t1.id
        LEFT JOIN ( another_table2 at2, another_one2 ao2, used_multiple_times t2 )
            ON st.id = at2.some_id
            AND at2.different_id = ao2.different_id
            AND ao2.different_id = t2.id
        GROUP BY tn.id
        ORDER BY tn.name

    Anybody know the problem here? Am I missing something I should be grouping by? It was working when I was only doing one LEFT JOIN and GROUP_CONCAT, but now with multiple JOINs/GROUP_CONCATs it's messing it up. When I move the GROUP_CONCATs out of the "cool_json" field they work as expected, but I'd like my data formatted as JSON so I can decode it server-side or client-side in one step.

    Read the article

  • Unlocking a mutex from a different thread (C++)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread which locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example:

        class inter_thread_mutex{
            bool locked;
            boost::mutex mx;
            boost::condition_variable cv;
        public:
            void lock(){
                boost::unique_lock<boost::mutex> lck(mx);
                while(locked) cv.wait(lck);
                locked=true;
            }
            void unlock(){
                {
                    boost::lock_guard<boost::mutex> lck(mx);
                    if(!locked) error();
                    locked=false;
                }
                cv.notify_one();
            }
            // bool try_lock(); void error(); etc.
        };

    I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), this first thread may acquire the lock ahead of other threads which are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables.) But let's just ignore that (and any other bugs) for now. My question is, if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I use a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then pass this lock to boost::condition_variable_any.wait(), etc.? On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect.

    Read the article

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:

      - The class hierarchy is complex and not obvious.
      - Some classes don't have a well defined purpose (they do a number of unrelated things).
      - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
      - Some classes leak implementation details (e.g. I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
      - My memory management/pooling system is kinda clunky and less-than transparent.

    These look like excellent reasons to refactor and clean up the code, aiding future maintenance and extension, but it could be quite time consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does stackoverflow think? Clean code or work on features?

    Read the article

  • How do I ensure only the headers are shown for the first item in an ItemsControl in WPF?

    - by Dan Ryan
    I am using MVVM, binding an ObservableCollection of children to an ItemsControl. The ItemsControl contains a UserControl used to style the UI for the children.

        <ItemsControl ItemsSource="{Binding Documents}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <View:DocumentView Margin="0, 10, 0, 0" />
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>

    I want to show a header row for the contents of the ItemsControl but only want to show this once at the top (not for every child). How can I implement this behaviour in the DocumentView user control? FYI, I am using a Grid layout to style the child rows:

        <Grid>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="34"/>
                <ColumnDefinition Width="100"/>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="60" />
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition />
                <RowDefinition />
            </Grid.RowDefinitions>
            <TextBlock Grid.ColumnSpan="4" Grid.Row="0" Text="Should only show this at the top"></TextBlock>
            <Image Grid.Column="0" Grid.Row="1" Height="24" Width="24" Source="/Beazley.Documents.Presentation;component/Icons/error.png"></Image>
            <ComboBox Grid.Column="1" Grid.Row="1" Name="ContentTypes" ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type View:MainView}}, Path=DataContext.ContentTypes}" SelectedValue="{Binding ContentType}"/>
            <TextBox Grid.Column="2" Grid.Row="1" Text="{Binding Path=FileName}"/>
            <Button Grid.Column="3" Grid.Row="1" Command="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type View:MainView}}, Path=DataContext.RemoveFile}" CommandParameter="{Binding}">Remove</Button>
        </Grid>

    Read the article

  • Writing to a Java socket channel which should be closed does not generate an exception

    - by Dan Serfaty
    Hi all, We have a java server that keeps a socket channel open with an Android client in order to provide push capabilities to our client application. However, after putting the Android in airplane mode, which I expected would sever the connection, the server can still write to the SocketChannel object associated with that Android client and no error is thrown. Calling SocketChannel.isConnected() before writing to it returns true. What are we missing? Is the handling of sockets different with mobile devices? Thanks in advance for your help.

    Read the article

  • How can I prevent SerializeJSON from changing Yes/No/True/False strings to boolean?

    - by Dan Roberts
    I have a data struct being stored in JSON format, converted using the serializeJSON function. The problem I am running into is that strings that can be boolean in CF, such as Yes, No, True, and False, are converted into JSON as boolean values. Below is example code. Any ideas on how to prevent this?

    Code:

        <cfset test = {str='Yes'}>
        <cfset json = serializeJSON(test)>
        <cfset fromJSON = deserializeJSON(json)>
        <cfoutput>
            #test.str#<br>
            #json#<br>
            #fromJSON.str#
        </cfoutput>

    Result:

        Yes
        {"STR":true}
        YES

    Read the article

  • Using OUTPUT/INTO within instead of insert trigger invalidates 'inserted' table

    - by Dan
    I have a problem using a table with an INSTEAD OF INSERT trigger. The table I created contains an identity column. I need to use an INSTEAD OF INSERT trigger on this table. I also need to see the value of the newly inserted identity from within my trigger, which requires the use of OUTPUT/INTO within the trigger. The problem is then that clients that perform INSERTs cannot see the inserted values. For example, I create a simple table:

        CREATE TABLE [MyTable](
            [MyID] [int] IDENTITY(1,1) NOT NULL,
            [MyBit] [bit] NOT NULL,
            CONSTRAINT [PK_MyTable_MyID] PRIMARY KEY NONCLUSTERED
            (
                [MyID] ASC
            ))

    Next I create a simple INSTEAD OF trigger:

        create trigger [trMyTableInsert] on [MyTable]
        instead of insert
        as
        BEGIN
            DECLARE @InsertedRows table( MyID int, MyBit bit);

            INSERT INTO [MyTable] ([MyBit])
            OUTPUT inserted.MyID, inserted.MyBit INTO @InsertedRows
            SELECT inserted.MyBit FROM inserted;

            -- LOGIC NOT SHOWN HERE THAT USES @InsertedRows
        END;

    Lastly, I attempt to perform an insert and retrieve the inserted values:

        DECLARE @tbl TABLE (myID INT)

        insert into MyTable (MyBit)
        OUTPUT inserted.MyID INTO @tbl
        VALUES (1)

        SELECT * from @tbl

    The issue is all I ever get back is zero. I can see the row was correctly inserted into the table. I also know that if I remove the OUTPUT/INTO from within the trigger this problem goes away. Any thoughts as to what I'm doing wrong? Or is how I want to do things not feasible? Thanks.

    Read the article

  • Is this a correct Interlocked synchronization design?

    - by Dan Bryant
    I have a system that takes Samples. I have multiple client threads in the application that are interested in these Samples, but the actual process of taking a Sample can only occur in one context. It's fast enough that it's okay for it to block the calling process until Sampling is done, but slow enough that I don't want multiple threads piling up requests. I came up with this design (stripped down to minimal details):

        public class Sample
        {
            private static Sample _lastSample;
            private static int _isSampling;

            public static Sample TakeSample(AutomationManager automation)
            {
                //Only start sampling if not already sampling in some other context
                if (Interlocked.CompareExchange(ref _isSampling, 0, 1) == 0)
                {
                    try
                    {
                        Sample sample = new Sample();
                        sample.PerformSampling(automation);
                        _lastSample = sample;
                    }
                    finally
                    {
                        //We're done sampling
                        _isSampling = 0;
                    }
                }
                return _lastSample;
            }

            private void PerformSampling(AutomationManager automation)
            {
                //Lots of stuff going on that shouldn't be run in more than one context at the same time
            }
        }

    Is this safe for use in the scenario I described?
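    A note when reading the snippet above, plus a minimal reference sketch (hypothetical names, not the poster's design): Interlocked.CompareExchange(ref location, value, comparand) stores value only when location currently equals comparand, and it returns the contents location held before the call, so a compare-and-set gate is commonly written along these lines:

        using System.Threading;

        static class SamplingGate
        {
            private static int _isSampling; // 0 = idle, 1 = sampling

            public static bool TryEnter()
            {
                // Swap in 1 only if the current value is 0; the return value is the previous contents.
                return Interlocked.CompareExchange(ref _isSampling, 1, 0) == 0;
            }

            public static void Exit()
            {
                // Publish the release with another interlocked operation so other threads see it promptly.
                Interlocked.Exchange(ref _isSampling, 0);
            }
        }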

    Read the article

  • How to apply coding methodologies and practices to non-coding work?

    - by Dan
    I can talk for hours about best-practice, source control, change management, feature tracking, development cycles and the lot, but most of what I've learnt or read seems to apply to nuts-and-bolts programming of compiled applications. You know, ASCII files that get turned into 1s and 0s. How does one apply the same discipline and wisdom to working in environments that are point-and-click and config-centric? I'm thinking of CMSs and specifically my current 9 to 5, SharePoint. Traditional practices of source control and dev-staging-production seem to break down, since we're not working with code and the live environment changes with user input. So to sum up a rather lengthy question: what works in a no-code environment?

    Read the article

  • How would you code a washing machine?

    - by Dan
    Imagine I have a class that represents a simple washing machine. It can perform the following operations in the following order: turn on - wash - centrifuge - turn off. I see two basic alternatives:

    A) I can have a class WashingMachine with methods turnOn(), wash(int minutes), centrifuge(int revs), turnOff(). The problem with this is that the interface says nothing about the correct order of operations. I can at best throw InvalidOperationException if the client tries to centrifuge before the machine was turned on.

    B) I can let the class itself take care of correct transitions and have the single method nextOperation(). The problem with this, on the other hand, is that the semantics are poor. The client will not know what will happen when it calls nextOperation(). Imagine you implement the centrifuge button's click event so it calls nextOperation(). The user presses the centrifuge button after the machine was turned on and oops! The machine starts to wash. I will probably need a few properties on my class to parameterize operations, or maybe a separate Program class with washLength and centrifugeRevs fields, but that is not really the problem.

    Which alternative is better? Or maybe there are some other, better alternatives that I missed describing?
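    To make alternative A concrete, here is a minimal sketch (hypothetical, written in C#) where the legal order is part of the object's state, so an out-of-order call fails fast instead of silently doing the wrong thing:

        using System;

        public class WashingMachine
        {
            private enum State { Off, On, Washed, Centrifuged }
            private State _state = State.Off;

            public void TurnOn()             => Advance(State.Off, State.On);
            public void Wash(int minutes)    => Advance(State.On, State.Washed);
            public void Centrifuge(int revs) => Advance(State.Washed, State.Centrifuged);
            public void TurnOff()            => Advance(State.Centrifuged, State.Off);

            // Each operation names the state it requires and the state it produces,
            // so the allowed order is enforced in one place.
            private void Advance(State required, State next)
            {
                if (_state != required)
                    throw new InvalidOperationException($"Cannot do this from {_state}; expected {required}.");
                _state = next;
            }
        }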

    Read the article

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question Is there a reason that count should be used instead of empty when determining if an array is empty or not? My personal thought would be if the 2 are equivalent for the case of empty arrays you should use empty because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead or just a matter of personal taste? As pointed out by others in comments for a now deleted answer, count will have performance impacts for large arrays because it will have to count all elements, whereas empty can stop as soon as it knows it isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?

    Read the article

  • Craziest JavaScript behavior I've ever seen

    - by Dan Ray
    And that's saying something. This is based on the Google Maps sample for Directions in the Maps API v3.

        <html>
        <head>
        <meta name="viewport" content="initial-scale=1.0, user-scalable=no"/>
        <meta http-equiv="content-type" content="text/html; charset=UTF-8"/>
        <title>Google Directions</title>
        <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script>
        <script type="text/javascript">
            var directionDisplay;
            var directionsService = new google.maps.DirectionsService();
            var map;

            function initialize() {
                directionsDisplay = new google.maps.DirectionsRenderer();
                var myOptions = {
                    zoom: 7,
                    mapTypeId: google.maps.MapTypeId.ROADMAP
                }
                map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
                directionsDisplay.setMap(map);
                directionsDisplay.setPanel(document.getElementById("directionsPanel"));
            }

            function render() {
                var start;
                if (navigator.geolocation) {
                    navigator.geolocation.getCurrentPosition(function(position) {
                        start = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
                    }, function() {
                        handleNoGeolocation(browserSupportFlag);
                    });
                } else {
                    // Browser doesn't support Geolocation
                    handleNoGeolocation();
                }
                alert("booga booga");
                var end = '<?= $_REQUEST['destination'] ?>';
                var request = {
                    origin: start,
                    destination: end,
                    travelMode: google.maps.DirectionsTravelMode.DRIVING
                };
                directionsService.route(request, function(response, status) {
                    if (status == google.maps.DirectionsStatus.OK) {
                        directionsDisplay.setDirections(response);
                    }
                });
            }
        </script>
        </head>
        <body style="margin:0px; padding:0px;" onload="initialize()">
            <div><div id="map_canvas" style="float:left;width:70%; height:100%"></div>
            <div id="directionsPanel" style="float:right;width:30%;height 100%"></div>
            <script type="text/javascript">render();</script>
        </body>
        </html>

    See that "alert('booga booga')" in there? With that in place, this all works fantastic. Comment that out, and var start is undefined when we hit the line to define var request. I discovered this when I removed the alert I put in there to show me the value of var start, and it quit working. If I DO ask it to alert me the value of var start, it tells me it's undefined, BUT it has a valid (and accurate!) value when we define var request a few lines later. I'm suspecting it's a timing issue--like an asynchronous something is having time to complete in the background in the moment it takes me to dismiss the alert. Any thoughts on work-arounds?

    Read the article
