Search Results

Search found 12907 results on 517 pages for 'todd little'.


  • An offscreen MKMapView behaves differently in 3.2, 4.0

    - by Duane Fields
    In 3.1 I've been using an "offscreen" MKMapView to create map images that I can rotate, crop and so forth before presenting them the user. In 3.2 and 4.0 this technique no longer works quite right. Here's some code that illustrates the problem, followed by my theory. // create map view _mapView = [[MKMapView alloc] initWithFrame:CGRectMake(0, 0, MAP_FRAME_SIZE, MAP_FRAME_SIZE)]; _mapView.zoomEnabled = NO; _mapView.scrollEnabled = NO; _mapView.delegate = self; _mapView.mapType = MKMapTypeSatellite; // zoom in to something enough to fill the screen MKCoordinateRegion region; CLLocationCoordinate2D center = {30.267222, -97.763889}; region.center = center; MKCoordinateSpan span = {0.1, 0.1 }; region.span = span; _mapView.region = region; // set scrollview content size to full the imageView _scrollView.contentSize = _imageView.frame.size; // force it to load #ifndef __IPHONE_3_2 // in 3.1 we can render to an offscreen context to force a load UIGraphicsBeginImageContext(_mapView.frame.size); [_mapView.layer renderInContext:UIGraphicsGetCurrentContext()]; UIGraphicsEndImageContext(); #else // in 3.2 and above, the renderInContext trick doesn't work... // this at least causes the map to render, but it's clipped to what appears to be // the viewPort size, plus some padding [self.view addSubview:_mapView]; #endif when the map is done loading, I snap picture of it and stuff it in my scrollview - (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView { NSLog(@"[MapBuilder] mapViewDidFinishLoadingMap"); // render the map to a UIImage UIGraphicsBeginImageContext(mapView.bounds.size); // the first sub layer is just the map, the second is the google layer, this sublayer structure might change of course [[[mapView.layer sublayers] objectAtIndex:0] renderInContext:UIGraphicsGetCurrentContext()]; // we are done with the mapView at this point, we need its ram! _mapView.delegate = nil; [_mapView release]; [_mapView removeFromSuperview]; _mapView = nil; UIImage* mapImage = [UIGraphicsGetImageFromCurrentImageContext() retain]; UIGraphicsEndImageContext(); _imageView.image = mapImage; [mapImage release], mapImage = nil; } The first problem is that in 3.1 rendering to a context would trigger the map to begin loading. This no longer works in 3.2, 4.0. The only thing I have found would trigger the load is to temporarily add the map to the view (i.e. make it visible). The problem being that the map only renders to the visible area of the screen, plus a little padding. The frame/bounds are fine, but it appears to be "helpfully" optimizes the loading to limit the tiles to those visible on the screen or close to it. Any ideas how to force the map to load at full size? Anyone else have this issue?


  • Can't create a fullscreen WPF popup

    - by Scrappydog
    Using WPF .NET 4.0 in VS2010 RTM: I can't create a fullscreen WPF popup. If I create a popup that is sized 50% width and 100% height everything works fine, but if I try to create a "full screen" popup sized to 100% width and height it ends up displaying at 100% width and 75% height... the bottom is truncated. Note: The width and height are actually being expressed in pixels in code, I'm using percent to make the situation a little more understandable... It "feels" like there is some sort of limit preventing the area of a popup from exceeding ~75% of the total area of the screen. UPDATE: Here is a Hello World example that shows the problem. <Window x:Class="TechnologyVisualizer.PopupTest" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="PopupTest" WindowStyle="None" WindowState="Maximized" Background="DarkGray"> <Canvas x:Name="MainCanvas" Width="1920" Height="1080"> <Popup Placement="Center" VerticalAlignment="Center" HorizontalAlignment="Center" Width="1900" Height="1060" Name="popContent"> <TextBlock Background="Red">Hello World</TextBlock> </Popup> <Button Canvas.Left="50" Canvas.Top="50" Content="Menu" Height="60" Name="button1" Width="80" FontSize="22" Foreground="White" Background="Black" Click="button1_Click" /> </Canvas> </Window> using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Documents; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Imaging; using System.Windows.Shapes; namespace TechnologyVisualizer { /// <summary> /// Interaction logic for PopupTest.xaml /// </summary> public partial class PopupTest : Window { public PopupTest() { InitializeComponent(); } private void button1_Click(object sender, RoutedEventArgs e) { popContent.IsOpen = true; } } } If you run this the bottom 25% of the popup is missing if you change the width of the popup to 500 then it will go full height
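
    One workaround I've seen suggested for this (a minimal sketch only - the class and member names below are illustrative, not from the post) is to skip Popup for the full-screen case and show a borderless, maximized Window as the overlay instead, which is not subject to the clipping described above:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;

        // Sketch: a borderless, maximized Window used as a full-screen "popup" overlay.
        public static class FullScreenOverlay
        {
            public static Window Show(Window owner)
            {
                var overlay = new Window
                {
                    Owner = owner,
                    WindowStyle = WindowStyle.None,       // no chrome
                    ResizeMode = ResizeMode.NoResize,
                    WindowState = WindowState.Maximized,  // fills the whole screen
                    ShowInTaskbar = false,
                    Content = new TextBlock
                    {
                        Background = Brushes.Red,
                        Text = "Hello World"
                    }
                };
                overlay.Show();
                return overlay;
            }
        }

    Calling FullScreenOverlay.Show(this) from button1_Click would then take the place of popContent.IsOpen = true in the sample above.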


  • MVVM load data during or after ViewModel construction?

    - by mkmurray
    My generic question is as the title states, is it best to load data during ViewModel construction or afterward through some Loaded event handling? I'm guessing the answer is after construction via some Loaded event handling, but I'm wondering how that is most cleanly coordinated between ViewModel and View? Here's more details about my situation and the particular problem I'm trying to solve: I am using the MVVM Light framework as well as Unity for DI. I have some nested Views, each bound to a corresponding ViewModel. The ViewModels are bound to each View's root control DataContext via the ViewModelLocator idea that Laurent Bugnion has put into MVVM Light. This allows for finding ViewModels via a static resource and for controlling the lifetime of ViewModels via a Dependency Injection framework, in this case Unity. It also allows for Expression Blend to see everything in regard to ViewModels and how to bind them. So anyway, I've got a parent View that has a ComboBox databound to an ObservableCollection in its ViewModel. The ComboBox's SelectedItem is also bound (two-way) to a property on the ViewModel. When the selection of the ComboBox changes, this is to trigger updates in other views and subviews. Currently I am accomplishing this via the Messaging system that is found in MVVM Light. This is all working great and as expected when you choose different items in the ComboBox. However, the ViewModel is getting its data during construction time via a series of initializing method calls. This seems to only be a problem if I want to control what the initial SelectedItem of the ComboBox is. Using MVVM Light's messaging system, I currently have it set up where the setter of the ViewModel's SelectedItem property is the one broadcasting the update and the other interested ViewModels register for the message in their constructors. It appears I am currently trying to set the SelectedItem via the ViewModel at construction time, which hasn't allowed sub-ViewModels to be constructed and register yet. What would be the cleanest way to coordinate the data load and initial setting of SelectedItem within the ViewModel? I really want to stick with putting as little in the View's code-behind as is reasonable. I think I just need a way for the ViewModel to know when stuff has Loaded and that it can then continue to load the data and finalize the setup phase. Thanks in advance for your responses.
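
    As a rough illustration of the deferred-load idea (a sketch only, with illustrative type names - it assumes the View forwards its Loaded event to the ViewModel, e.g. from code-behind or an event-to-command behavior):

        using System.Collections.ObjectModel;
        using System.ComponentModel;

        // Sketch: keep the constructor cheap and do the data access plus the initial
        // SelectedItem assignment only when the View reports that it has loaded.
        public class ParentViewModel : INotifyPropertyChanged
        {
            private readonly ObservableCollection<ItemViewModel> _items = new ObservableCollection<ItemViewModel>();
            private ItemViewModel _selectedItem;
            private bool _isLoaded;

            public ObservableCollection<ItemViewModel> Items
            {
                get { return _items; }
            }

            public ItemViewModel SelectedItem
            {
                get { return _selectedItem; }
                set
                {
                    _selectedItem = value;
                    OnPropertyChanged("SelectedItem");
                    // Broadcast the change to interested ViewModels here (e.g. via the Messenger).
                }
            }

            // Call this from the View's Loaded event; by then the sub-Views (and their
            // ViewModels) exist and have registered for messages.
            public void OnViewLoaded()
            {
                if (_isLoaded) return;   // guard against repeated Loaded events
                _isLoaded = true;

                _items.Add(new ItemViewModel { Name = "First" });    // stand-in for real data access
                _items.Add(new ItemViewModel { Name = "Second" });

                SelectedItem = _items[0];   // initial selection happens after everyone is listening
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void OnPropertyChanged(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

        public class ItemViewModel
        {
            public string Name { get; set; }
        }

    With MVVM Light specifically, the Loaded event could be forwarded with an EventToCommand behavior instead of code-behind; either way the point is that the initial SelectedItem is only set after the sub-ViewModels have had a chance to register with the Messenger.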


  • WPF CommandParameter is NULL first time CanExecute is called

    - by Jonas Follesø
    I have run into an issue with WPF and Commands that are bound to a Button inside the DataTemplate of an ItemsControl. The scenario is quite straight forward. The ItemsControl is bound to a list of objects, and I want to be able to remove each object in the list by clicking a Button. The Button executes a Command, and the Command takes care of the deletion. The CommandParameter is bound to the Object I want to delete. That way I know what the user clicked. A user should only be able to delete their "own" objects - so I need to do some checks in the "CanExecute" call of the Command to verify that the user has the right permissions. The problem is that the parameter passed to CanExecute is NULL the first time it's called - so I can't run the logic to enable/disable the command. However, if I make it allways enabled, and then click the button to execute the command, the CommandParameter is passed in correctly. So that means that the binding against the CommandParameter is working. The XAML for the ItemsControl and the DataTemplate looks like this: <ItemsControl x:Name="commentsList" ItemsSource="{Binding Path=SharedDataItemPM.Comments}" Width="Auto" Height="Auto"> <ItemsControl.ItemTemplate> <DataTemplate> <StackPanel Orientation="Horizontal"> <Button Content="Delete" FontSize="10" Command="{Binding Path=DataContext.DeleteCommentCommand, ElementName=commentsList}" CommandParameter="{Binding}" /> </StackPanel> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> So as you can see I have a list of Comments objects. I want the CommandParameter of the DeleteCommentCommand to be bound to the Command object. So I guess my question is: have anyone experienced this problem before? CanExecute gets called on my Command, but the parameter is always NULL the first time - why is that? Update: I was able to narrow the problem down a little. I added an empty Debug ValueConverter so that I could output a message when the CommandParameter is data bound. Turns out the problem is that the CanExecute method is executed before the CommandParameter is bound to the button. I have tried to set the CommandParameter before the Command (like suggested) - but it still doesn't work. Any tips on how to control it. Update2: Is there any way to detect when the binding is "done", so that I can force re-evaluation of the command? Also - is it a problem that I have multiple Buttons (one for each item in the ItemsControl) that bind to the same instance of a Command-object? Update3: I have uploaded a reproduction of the bug to my SkyDrive: http://cid-1a08c11c407c0d8e.skydrive.live.com/self.aspx/Code%20samples/CommandParameterBinding.zip
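
    Regarding Update2 (forcing re-evaluation): if the command's CanExecuteChanged is routed through CommandManager.RequerySuggested, WPF re-queries CanExecute on its own as input and focus change, and CommandManager.InvalidateRequerySuggested() can force another pass once the bindings have been applied. A minimal sketch of such a command (the class is illustrative and not tied to any particular framework):

        using System;
        using System.Windows.Input;

        // Sketch of a relay/delegate command whose CanExecuteChanged is wired to
        // CommandManager.RequerySuggested, so WPF re-asks CanExecute after the
        // CommandParameter binding has been applied.
        public class RelayCommand : ICommand
        {
            private readonly Action<object> _execute;
            private readonly Predicate<object> _canExecute;

            public RelayCommand(Action<object> execute, Predicate<object> canExecute)
            {
                if (execute == null) throw new ArgumentNullException("execute");
                _execute = execute;
                _canExecute = canExecute;
            }

            public bool CanExecute(object parameter)
            {
                // Treat a not-yet-bound (null) parameter as "not allowed yet" rather
                // than running the permission check against nothing.
                if (parameter == null) return false;
                return _canExecute == null || _canExecute(parameter);
            }

            public void Execute(object parameter)
            {
                _execute(parameter);
            }

            // Letting CommandManager own the event means CanExecute is re-evaluated
            // whenever WPF suspects something relevant changed (focus, input, bindings).
            public event EventHandler CanExecuteChanged
            {
                add { CommandManager.RequerySuggested += value; }
                remove { CommandManager.RequerySuggested -= value; }
            }
        }

    Sharing one command instance across all the buttons in the ItemsControl is fine in itself; each button calls CanExecute with its own CommandParameter.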


  • Problem with waitable timers in Windows (timeSetEvent and CreateTimerQueueTimer)

    - by MusiGenesis
    I need high-resolution (more accurate than 1 millisecond) timing in my application. The waitable timers in Windows are (or can be made) accurate to the millisecond, but if I need a precise periodicity of, say, 35.7142857141 milliseconds, even a waitable timer with a 36 ms period will drift out of sync quickly. My "solution" to this problem (in ironic quotes because it's not working quite right) is to use a series of one-shot timers where I use the expiration of each timer to call the next timer. Normally a process like this would be subject to cumulative error over time, but in each timer callback I check the current time (with System.Diagnostics.Stopwatch) and use this to calculate what the period of the next timer needs to be (so if a timer happens to expire a little late, the next timer will automagically have a shorter period to compensate). This works as expected, except that after maybe 10-15 seconds the timer system seems to get bogged down, and a few timer callbacks here and there arrive anywhere from 25 to 100 milliseconds late. After a couple of seconds the problem goes away and everything runs smoothly again for 10-15 seconds, and then the stuttering again. Since I'm using Stopwatch to set each timer period, I'm also using it to monitor the arrival times of each timer callback. During the smooth-running periods, most (maybe 95%) of the intervals are either 35 or 36 milliseconds, and no intervals are ever more than 5 milliseconds away from the expected 35.7142857143. During the "glitchy" stretches, the distribution of intervals is very nearly identical, except that a very small number are unusually large (a couple more than 60 ms and one or two longer than 100 ms during maybe a 3-second stretch). This stuttering is very noticeable, and it's what I'm trying to fix, if possible. For the high-resolution timer, I was using the extremely antique timeSetEvent() multimedia timer from winmm.dll. In pursuit of this problem, I switched to using CreateTimerQueueTimer (along with timeBeginPeriod to set the high-resolution), but I'm seeing the same problem with both timer mechanisms. I've tried experimenting with the various flags for CreateTimerQueueTimer which determine which thread the timer runs on, but the stuttering appears no matter what. Is this just a fundamental problem with using timers in this way (i.e. using each one-shot timer to call the next)? If so, do I have any alternatives? One thing I was considering was to determine how many consecutive 1-millisecond-accuracy ticks would keep my within some arbitrary precision limit before I need to reset the timer. So, for example, if I wanted a 35.71428 period, I could let a 36 ms timer elapse 15 times before it was off by 5 milliseconds, then kill it and start a new one.
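
    For what it's worth, here is a minimal sketch of the self-correcting one-shot scheme described above, using System.Threading.Timer and Stopwatch and scheduling every tick against the ideal timeline (start + n * period) rather than against the previous tick. It illustrates only the drift correction - it does not address the 1 ms resolution question (timeBeginPeriod) or the stutter itself:

        using System;
        using System.Diagnostics;
        using System.Threading;

        // Sketch of a drift-free periodic callback built from one-shot timers:
        // each tick is scheduled against the ideal timeline, so a late tick
        // automatically shortens the next interval.
        public class DriftFreeTimer : IDisposable
        {
            private readonly Stopwatch _clock = Stopwatch.StartNew();
            private readonly double _periodMs;
            private readonly Action _tick;
            private readonly Timer _timer;
            private long _tickCount;

            public DriftFreeTimer(double periodMs, Action tick)
            {
                _periodMs = periodMs;
                _tick = tick;
                _timer = new Timer(OnTimer, null, (long)periodMs, Timeout.Infinite);
            }

            private void OnTimer(object state)
            {
                _tickCount++;
                _tick();

                // Next due time on the ideal timeline, measured from the start.
                double nextDueMs = (_tickCount + 1) * _periodMs;
                double delayMs = nextDueMs - _clock.Elapsed.TotalMilliseconds;
                if (delayMs < 0) delayMs = 0;   // already late: fire as soon as possible

                _timer.Change((long)Math.Round(delayMs), Timeout.Infinite);
            }

            public void Dispose()
            {
                _timer.Dispose();
            }
        }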


  • Compiling program that uses libpcap on Mac OSX using iPhone 3.1.1 SDK for use on iPhone

    - by Alan
    Hey SOV users, I have a question that I'm hoping some iPhone developers may be able to help with. I had a look at statically compiling a binary on my Mac and moving it over to the iPhone for execution. I have managed to get this bit of it working by installing the iPhone 3.1.3 SDK on my Mac and setting the architecture to the iPhone in the gcc line as follows: /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc -I ~/Downloads/libpcap-1.1.1/pcap-compiled/ -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.1.sdk -o test test.c I managed to compile a "Hello World" C program and execute it successfully on the iPhone, e.g. #include <stdio.h> int main() { printf("Hello, World!\n"); return(0); } This worked like a charm. I am also using 'ldid' to sign the application (but only if necessary). Anyway, I have been trying to get a program to compile which uses libpcap (http://www.tcpdump.org/), but with little success. I have downloaded and installed libpcap-1.1.1 on my Mac, set the configure --prefix to /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.1.sdk/usr/local and built it. I then saw that the includes actually reside in /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.1.sdk/usr/include and so moved the /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.1.sdk/usr/local/include files (which contained only the pcap stuff) to the correct location. I then attempted to compile the test program using: /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc -I /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk/usr/include/ -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS2.1.sdk -o pcap pcap.c -lpcap This worked a lot better than the other tests, but produces an error, i.e. ld: library not found for -lcrt1.o collect2: ld returned 1 exit status Do you have any ideas as to how I can do this successfully? I've tried a load of different things but none seem to be successful. Basically, I just want to install (or add) some headers to the existing iPhoneOS SDK for use in compiling programs. Any ideas? Cheers, A


  • Creating an Objective-C++ Static Library in Xcode

    - by helixed
    So I've developed an engine for the iPhone with which I'd like to build a couple different games. Rather than copy and paste the files for the engine inside of each game's project directory, I'd a way to link to the engine from each game, so if I need to make a change to it I only have to do so once. After reeding around a little bit, it seems like static libraries are the best way to do this on the iPhone. I created a new project called Skeleton and copied all of my engine files over to it. I used this guide to create a static library, and I imported the library into a project called Chooser. However, when I tried to compile the project, Xcode started complaining about some C++ data structures I included in a file called ControlScene.mm. Here's my build errors: "operator delete(void*)", referenced from: -[ControlScene dealloc] in libSkeleton.a(ControlScene.o) -[ControlScene init] in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t>::deallocate(operation_t*, unsigned long)in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t*>::deallocate(operation_t**, unsigned long)in libSkeleton.a(ControlScene.o) "operator new(unsigned long)", referenced from: -[ControlScene init] in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t*>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) "std::__throw_bad_alloc()", referenced from: __gnu_cxx::new_allocator<operation_t*>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) __gnu_cxx::new_allocator<operation_t>::allocate(unsigned long, void const*)in libSkeleton.a(ControlScene.o) "___cxa_rethrow", referenced from: std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**)in libSkeleton.a(ControlScene.o) std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long)in libSkeleton.a(ControlScene.o) "___cxa_end_catch", referenced from: std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**)in libSkeleton.a(ControlScene.o) std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long)in libSkeleton.a(ControlScene.o) "___gxx_personality_v0", referenced from: ___gxx_personality_v0$non_lazy_ptr in libSkeleton.a(ControlScene.o) ___gxx_personality_v0$non_lazy_ptr in libSkeleton.a(MenuLayer.o) "___cxa_begin_catch", referenced from: std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_create_nodes(operation_t**, operation_t**)in libSkeleton.a(ControlScene.o) std::_Deque_base<operation_t, std::allocator<operation_t> >::_M_initialize_map(unsigned long)in libSkeleton.a(ControlScene.o) ld: symbol(s) not found collect2: ld returned 1 exit status If anybody could offer some insight as to why these problems are occuring, I'd appreciate it. Thanks, helixed


  • Could not import Django settings into Google App Engine

    - by gkelsall
    Hello all you Google App Engine experts, I have used Django a little before but am new to Google App Engine and am trying to use it's development web server with Django for the first time. I don't know if this is relevent but I previously had Django 1.1 and Python 2.6 on my Windows XP and even though I have uninstalled Python 2.6 there is still a folder and entries in the registry. I have followed the instructions from Google but when I browse to the GAE developemnt web server it cannot find my settings (details below). Any hints gratefully received. Regards Geoff C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>echo %PATH% C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS \system32\WindowsPowerShell\v1.0;;C:\Python25;C:\Python25\Lib\site- packages\django\bin;C:\Documents and Settings\GeoffK\My Documents\ing \ingsite;C:\Program Files\Google\google_appengine\ C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>echo %PYTHONPATH% C:\Documents and Settings\GeoffK\My Documents\ing\ingsite C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>C:\Documents and Settings\GeoffK\My Documents\ing\ingsite>dev_appserver.py -- debug_imports ingiliz\ INFO 2009-08-04 07:29:45,328 appengine_rpc.py:157] Server: appengine.google. com INFO 2009-08-04 07:29:45,358 appcfg.py:322] Checking for updates to the SDK. INFO 2009-08-04 07:29:45,578 appcfg.py:336] The SDK is up to date. WARNING 2009-08-04 07:29:45,578 datastore_file_stub.py:404] Could not read data store data from c:\docume~1\geoffk\locals~1\temp \dev_appserver.datastore WARNING 2009-08-04 07:29:45,578 datastore_file_stub.py:404] Could not read data store data from c:\docume~1\geoffk\locals~1\temp \dev_appserver.datastore.history WARNING 2009-08-04 07:29:45,608 dev_appserver.py:3296] Could not initialize ima ges API; you are likely missing the Python "PIL" module. ImportError: No module named _imaging INFO 2009-08-04 07:29:45,625 dev_appserver_main.py:465] Running application ingiliz on port 8080: http://localhost:8080 ..... Now attempting to browse if need more detail here I can post ..... if not settings.DATABASE_ENGINE: File "C:\Python25\lib\site-packages\django\conf\__init__.py", line 28, in __ge tattr__ self._import_settings() File "C:\Python25\lib\site-packages\django\conf\__init__.py", line 59, in _imp ort_settings self._target = Settings(settings_module) File "C:\Python25\lib\site-packages\django\conf\__init__.py", line 94, in __in it__ raise ImportError, "Could not import settings '%s' (Is it on sys.path? Does it have syntax errors?): %s" % (self.SETTINGS_MODULE, e) ImportError: Could not import settings 'settings' (Is it on sys.path? Does it ha ve syntax errors?): No module named settings INFO 2009-08-04 07:31:02,187 dev_appserver.py:2982] "GET / HTTP/ 1.1" 500 -


  • How to override C# DateTime serialization with class auto-generated from wsdl?

    - by Calvin Fisher
    I have a WSDL that the consumer of my web service expects will be adhered to strictly. I converted it into an interface with wsdl.exe and had my web service implement it. Except for this problem, I have been generally pleased with the results. A simple GetCurrentTime method will have the following response class generated from the WSDL in the interface definition: [System.CodeDobmCompiler.GeneratedCodeAttribute("wsdl", "2.0.50727.3038")] [System.SerializableAttribute()] [System.Diagnostics.DebuggerStepThroughAttribute()] [System.ComponentModel.DesignerCategoryAttribute("code")] [System.Xml.Serialization.XmlTypeAttribute(Namespace="[Client Namespace]")] public partial class GetCurrentTimeResponse { private System.DateTime timeStampField; [System.Xml.Serialization.XmlElementAttribute(Form=System.Xml.Schema.XmlSchemaForm.Unqualified] public System.DateTime TimeStamp{ // [accesses timeStampField] } } When I put the response data into the automatically generated response class, it gets serialized into an appropriate XML response. (Most of the web methods have much more complicated return types with multiple levels of arrays.) The problem is that the default serialization of DateTime objects violates one of the requirements in the WSDL: ... <xsd:simpleType name="SearchTimeStamp"> <xsd:restriction base="xsd:dateTime"> <xsd:pattern value="[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(.[0-9]{1,7})?Z"> </xsd:restriction> </xsd:simpleType> ... Note the last part of the pattern where subseconds must be either 1 or 7 characters if they are included. The client seems to be rejecting the response because it does not match that requirement. The main issue is that when .NET serializes a DateTime object, it omits all trailing zeroes, meaning the resulting subsecond value varies in length. (e.g., "12:34:56.700" gets serialized as "<TimeStamp>12:34:56:7</TimeStamp>" by default). We use millisecond precision, so I need all timestamps to format with 7 subsecond digits in order to be compliant with the WSDL. It would be easy if I could specify a format string, but I'm not sure how to control the string that the DateTime object uses to serialize to XML, or to otherwise override the serialization behavior. How do I do this? Keeping in mind the following... I would like to modify the generated code as little as possible... preferably not at all if the change can be made through a partial class or inherited class. Using an inherited class for the return type of the web method will cause the web service to no longer implement the auto-generated interface. The TimeStamp type occurs in other, more complex response types. So, manually overriding the entire serialization process may be prohibitively time-consuming.
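
    One workaround people use for this kind of schema pattern (a sketch only - in the real project the attribute change would have to be made on the generated member, or the class regenerated, since attributes cannot be added to an existing property from a partial class) is to serialize a string stand-in formatted with a fixed seven-digit fractional part:

        using System;
        using System.Xml.Serialization;

        // Sketch: a hand-written stand-in for the generated response class that
        // serializes the timestamp as a string with exactly 7 sub-second digits.
        public class GetCurrentTimeResponse
        {
            [XmlIgnore]
            public DateTime TimeStamp { get; set; }

            [XmlElement("TimeStamp", Form = System.Xml.Schema.XmlSchemaForm.Unqualified)]
            public string TimeStampFormatted
            {
                get
                {
                    // "fffffff" always emits 7 fractional digits, trailing zeroes included,
                    // which satisfies the (.[0-9]{1,7})? part of the schema pattern.
                    return TimeStamp.ToUniversalTime()
                                    .ToString("yyyy-MM-dd'T'HH:mm:ss.fffffff'Z'");
                }
                set
                {
                    TimeStamp = DateTime.Parse(value, null,
                        System.Globalization.DateTimeStyles.RoundtripKind);
                }
            }
        }

    The getter runs whenever the response is serialized, so the DateTime property itself can still be used everywhere else in the code.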


  • C# and WUAPI: BeginDownload function

    - by Laurus Schluep
    first things first: I have no experience in object-oriented programming, whatsoever. I created my share of VB scripts and a bit of Java in school, but that's it. So my problem most likely lies there. But nevertheless, for the past few days, I've been trying to get a little application together that allows me to scan for, select and install Windows updates. So far I've been able to understand most of the reference and with the help of a few posts around the internet and I'm now at the point where I can select and download updates. So far I've been able to download a collection of updates using the following code: UpdateCollection CurrentInstallCollection = (UpdateCollection)e.Argument; UpdateDownloader CurrentDownloader = CurrentSession.CreateUpdateDownloader(); CurrentDownloader.Updates = CurrentInstallCollection; This is run in a background worker and returns once the download is done. It works just fine, I can see the updates getting downloaded on the file system but there isn't really a way to display the progress within the application. To do such a thing, there is the IDownloadJob interface that allows me to use the .BeginDownload method of the downloader (UpdateSession.CreateUpdateDownloader)...i think, at least. :D And here comes the problem: I have now tried for about 6 hours to get the code working, but no matter what I tried nothing worked. Also, there isn't much information around on the internet about the .BeginDownload method (or at least it seems that way), but my call of the method doesn't work: IDownloadJob CurrentDownloadJob = CurrentDownloader.BeginDownload(); I have no clue what arguments to supply...I've tried methods, objects...to no avail. The complete block of code looks like this: UpdateCollection CurrentInstallCollection = (UpdateCollection)e.Argument; UpdateDownloader CurrentDownloader = CurrentSession.CreateUpdateDownloader(); CurrentDownloader.Updates = CurrentInstallCollection; IDownloadJob CurrentDownloadJob = CurrentDownloader.BeginDownload(); IDownloadProgress CurrentJobProgess = CurrentDownloadJob.GetProgress(); tbStatus.Text = Convert.ToString(CurrentJobProgess.PercentComplete); I've found one source on the internet that called the method with .BeginDownload(this,this,this), which does not report any error in the code editor but probably won't help with reporting as it is my understanding, that the arguments supplied are the methods that are called when the described event occurrs (progress has changed or the download has finished). I also tried this, but it didn't work either: http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/636a8399-2bc1-46ff-94df-a58cebfe688c A detailed description of the BeginDownload method: http://msdn.microsoft.com/en-us/library/aa386132(v=VS.85).aspx WUAPI Reference: Unfortunately I'm not allowed to post the link, but the link to the BeginDownload method goes to the same place. :) I know, it's quite a bit to ask, but if someone could point me in the right direction (as in which arguments to pass and how), it'd be very much appreciated! :)
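
    For what it's worth, the arguments BeginDownload wants are callback objects (plus a user state), not delegates, so the usual shape is two small classes implementing the callback interfaces. The sketch below is my reading of the WUApiLib interop - the interface and member names are assumptions on my part and should be verified against the generated interop assembly before relying on them:

        using System;
        using WUApiLib;   // COM interop for the Windows Update Agent API

        // Assumed callback shapes: one object for progress updates, one for completion.
        public class ProgressCallback : IDownloadProgressChangedCallback
        {
            public void Invoke(IDownloadJob downloadJob, IDownloadProgressChangedCallbackArgs callbackArgs)
            {
                Console.WriteLine("Downloaded {0}%", callbackArgs.Progress.PercentComplete);
            }
        }

        public class CompletedCallback : IDownloadCompletedCallback
        {
            public void Invoke(IDownloadJob downloadJob, IDownloadCompletedCallbackArgs callbackArgs)
            {
                Console.WriteLine("Download finished.");
            }
        }

        public static class DownloadExample
        {
            public static IDownloadJob Start(UpdateSession session, UpdateCollection updates)
            {
                UpdateDownloader downloader = session.CreateUpdateDownloader();
                downloader.Updates = updates;

                // The three arguments are: progress callback, completion callback, user state.
                return downloader.BeginDownload(new ProgressCallback(),
                                                new CompletedCallback(),
                                                null);
            }
        }

    From there, downloader.EndDownload(job) would finish things off once the completed callback fires - again, worth double-checking against the interop before building on it.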


  • Get Local IP-Address using Boost.Asio

    - by MOnsDaR
    Hey, I'm currently searching for a portable way of getting the local IP addresses. Because I'm using Boost anyway, I thought it would be a good idea to use Boost.Asio for this task. There are several examples on the net which should do the trick. Examples: Official Boost.Asio Documentation Some Asian Page I tried both examples with just slight modifications. The code from the Boost documentation was changed to resolve "localhost" or my hostname instead of "www.boost.org". For getting the hostname I used boost::asio::ip::host_name() or typed it directly as a string. Additionally, I wrote my own code, which was a merge of the above examples and the (little) knowledge I gathered from the Boost documentation and other examples. All the sources worked, but they only ever returned the following IP: 127.0.1.1 (that's not a typo, it's .1.1 at the end). I compiled and ran the code on Ubuntu 9.10 with GCC 4.4.1. A colleague tried the same code on his machine and got 127.0.0.2 (also not a typo...). He compiled and ran it on Suse 11.0 with GCC 4.4.1 (I'm not 100% sure). I don't know if it is possible to change the localhost address (127.0.0.1), but I know that neither I nor my colleague did so. ifconfig says loopback uses 127.0.0.1. ifconfig also finds the public IP I am searching for (141.200.182.30 in my case, subnet is 255.255.0.0). So is this a Linux issue, and is the code not as portable as I thought? Do I have to change something else, or is Boost.Asio not a workable solution for my problem at all? I know there are many questions about similar topics on Stack Overflow and other pages, but I cannot find information which is useful in my case. If you have useful links, it would be nice if you could point me to them. Thanks in advance, MOnsDaR PS: Here is the modified code I used from the Boost documentation: #include <boost/asio.hpp> using boost::asio::ip::tcp; boost::asio::io_service io_service; tcp::resolver resolver(io_service); tcp::resolver::query query(boost::asio::ip::host_name(), ""); tcp::resolver::iterator iter = resolver.resolve(query); tcp::resolver::iterator end; // End marker. while (iter != end) { tcp::endpoint ep = *iter++; std::cout << ep << std::endl; }


  • Need additional help with binding multiple CommandParameters using MultiBinding

    - by Dave
    I need to have a command handler for a ToggleButton that can take multiple parameters, namely the IsChecked property of said ToggleButton, along with a constant value, which could be a string, byte, int... doesn't matter. I found this great question on SO and followed the answer's link and read up on MultiBinding and IMultiValueConverter. It went really smoothly until I had to write the MultiBinding, when I realized that I also need to pass a constant value and couldn't do something like <Binding Value="1" /> I then came across another similar question that Kent Boogaart answered, and then I started to think about ways that I could get around this. One possible way is to not use MVVM and simply add the Tag property to my ToggleButton, in which case my MultiBinding would look like this: <MultiBinding Converter="{StaticResource MyConverter}"> <MultiBinding.Bindings> <Binding Path="IsChecked" /> <Binding Path="Tag" /> </MultiBinding.Bindings> </MultiBinding> Kent had made a comment along the lines of, "if you're using MVVM you should be able to get around this issue". However, I'm not sure that's an option for me, even though I have adopted MVVM as my WPF pattern of necessity choice. The reason why I say this is that I have wayyyy more than one ToggleButton in the UserControl, and each of the ToggleButtons' Commands need to call the same function. But since they are ToggleButtons, I can't use the property bound to IsChecked in the ViewModel, because I don't know which one was last clicked. I suppose I could add another private property to keep track of this, but it seems a little silly. As far as the constant goes, I could probably get rid of this if I did the tracking idea, but not sure of any other way to get around it. Does anyone have good suggestions for me here? :) EDIT -- ok, so I need to update my bindings, which still don't work quite right: <ToggleButton Tag="1" Command="{Binding MyCommand}" Style="{StaticResource PassFailToggleButtonStyle}" HorizontalContentAlignment="Center" Background="Transparent" BorderBrush="Transparent" BorderThickness="0"> <ToggleButton.CommandParameter> <MultiBinding Converter="{StaticResource MyConverter}"> <MultiBinding.Bindings> <Binding Path="IsChecked" RelativeSource="{RelativeSource Mode=Self}" /> <Binding Path="Tag" RelativeSource="{RelativeSource Mode=Self}" /> </MultiBinding.Bindings> </MultiBinding> </ToggleButton.CommandParameter> </ToggleButton> IsChecked was working, but not Tag. I just realized that Tag is a string... duh. It's working now! The key was to use a RelativeSource of Self.
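
    For reference, the converter behind {StaticResource MyConverter} can simply bundle the two bound values into one parameter object - a rough sketch (the CheckedTagParameter type is illustrative, not from the post):

        using System;
        using System.Globalization;
        using System.Windows.Data;

        // Packs the ToggleButton's IsChecked and Tag values into a single command parameter.
        public class CheckedTagParameter
        {
            public bool IsChecked { get; set; }
            public string Tag { get; set; }
        }

        public class MyConverter : IMultiValueConverter
        {
            public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
            {
                return new CheckedTagParameter
                {
                    // IsChecked is a bool?, so test the boxed value before casting.
                    IsChecked = values[0] is bool && (bool)values[0],
                    Tag = values[1] as string     // Tag binding ("1", "2", ...)
                };
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();  // one-way use only
            }
        }

    Note that Tag="1" arrives in the converter as the string "1" (which is exactly the point of the edit above); parse it, or type the property differently, if a numeric value is needed downstream.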


  • WPF Animation FPS vs. CPU usage - Am I expecting too much?

    - by Cory Charlton
    Working on a screen saver for my wife, http://cchearts.codeplex.com/, and while I've been able to improve FPS on lower end machines (switch from Path to StreamGeometry, use DrawingVisual instead of UserControl, etc) the CPU usage still seems very high. Here's some numbers I ran from a few 5 minute sampling periods: ~60FPS 35% average CPU on Core 2 Duo T7500 @ 2.2GHz, 3GB ram, NVIDIA Quadro NVS 140M (128MB), Vista [My dev laptop] ~40FPS 50% average CPU on Pentium D @ 3.4GHz, 1.5GB ram, Standard VGA Graphics Adapter (unknown), 2003 Server [A crappy desktop] I can understand the lower frame rate and higher CPU usage on the crappy desktop but it still seems pretty high and 35% on my dev laptop seems high as well. I'd really like to analyze the application to get more details but I'm having issues there as well so I'm wondering if I'm doing something wrong (never profiled WPF before). WPF Performance Suite: Process Launch Error Unable to attach to process: CCHearts.exe Do you want to kill it? This error message occurs when I click cancel after attempting launch. If I don't click cancel it sits there idle, I guess waiting to attach. Performance Explorer: Could not launch C:\Projects2\CC.Hearts\CC.Hearts\bin\Debug (USEVISUAL)\CCHearts.exe. Previous attempt to profile the application finished unsuccessfully. Please restart the application. Output Window from Performance: Profiling started. Profiling process ID 5360 (CCHearts). Process ID 5360 has exited. Data written to C:\Projects2\CC.Hearts\CCHearts100608.vsp. Profiling finished. PRF0025: No data was collected. Profiling complete. So I'm stuck wanting to improve performance but have no concrete way to determine where the bottleneck is. Have been relatively successful throwing darts at this point but I'm beyond that now :) PS: Screensaver is hosted at CodePlex if you want to look at the source and missed the link above. Edit: My RenderOptions darts... // NOTE: Grasping at straws here ;-) RenderOptions.SetBitmapScalingMode(newHeart, BitmapScalingMode.LowQuality); RenderOptions.SetCachingHint(newHeart, CachingHint.Cache); RenderOptions.SetEdgeMode(newHeart, EdgeMode.Aliased); I threw those a little while back and didn't see much difference (not sure if the bitmap scaling even comes into play). Really wish I could get profiling working to know where I should try to optimize. For now I assume there is some overhead in creating a new HeartVisual and the DrawingVisual contained inside. Maybe if I reset and reused the hearts (tossed them in a queue once they completed or something) I'd see an improvement. Shrug Throwing darts while blindfolder is always fun.
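
    One low-risk thing to try while the profilers are uncooperative (a sketch, not a diagnosis of the actual bottleneck): freeze the brushes, pens and geometry that never change after creation, so WPF stops tracking changes on them across the many heart visuals. The path data below is a placeholder triangle, not the screensaver's real geometry:

        using System.Windows;
        using System.Windows.Media;

        // Sketch: frozen Freezables skip change tracking and can be shared by every visual.
        public static class FrozenResources
        {
            public static readonly Brush HeartBrush = CreateHeartBrush();
            public static readonly Pen HeartOutline = CreateHeartOutline();
            public static readonly Geometry HeartGeometry = CreateHeartGeometry();

            private static Brush CreateHeartBrush()
            {
                var brush = new SolidColorBrush(Colors.Red);
                brush.Freeze();
                return brush;
            }

            private static Pen CreateHeartOutline()
            {
                var pen = new Pen(HeartBrush, 1.0);
                pen.Freeze();
                return pen;
            }

            private static Geometry CreateHeartGeometry()
            {
                var geometry = new StreamGeometry();
                using (StreamGeometryContext ctx = geometry.Open())
                {
                    // Placeholder outline standing in for the real heart path.
                    ctx.BeginFigure(new Point(0, 0), true, true);
                    ctx.LineTo(new Point(10, 0), true, false);
                    ctx.LineTo(new Point(5, 10), true, false);
                }
                geometry.Freeze();
                return geometry;
            }
        }

    Whether this actually moves the needle here is impossible to say without a working profile, but it is cheap to try.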


  • FILESTREAM/FILETABLE Clarifications for Implementation

    - by user1209734
    Recently our team was looking at FILESTREAM to expand the capabilities of our proprietary application. The main purpose of this app is managing the various PDFs, images and documents for all of the parts we manufacture. Our ASP application uses a few third-party tools to allow viewing of these files. We currently have 980GB of data on the file server. We have around 200GB of binary data in SQL Server that we would like to extract, since it is not performing well, hence FILESTREAM seems to be a good compromise for the two major data storage/access issues. A few things are not exactly clear to us: Can FILESTREAM store its data on a drive that is not locally attached, or not? We already have a file server with a RAID 10 (1.5TB drives). That server stores all of the documents right now; would we have to move these drives to the SQL Server for FILESTREAM? That would be a tough bullet to bite, since the server is also doubling as the application server (two VMs on one physical server). FILETABLE stores the common metadata about the files, but where is the full-text part of it stored to allow searching of files like doc/docx? Is this separate? Are you able to freely add criteria to search by? If so, any links to clarify would be appreciated. Can a FILETABLE be referenced in another table with a foreign key? Thank you in advance. EDIT: For those having these questions, this web video covered everything and more in terms of explaining FILESTREAM from 2008 to 2012 and the caveats to consider (I would seriously rep him if I could): http://channel9.msdn.com/Events/TechDays/Techdays-2012-the-Netherlands/2270 In conclusion, we will not be using FILESTREAM, as it would be far too big an undertaking to justify the investment. EDIT 2: Update to #1 - After carefully assessing FileTable in addition to FILESTREAM, we got a winning combination. We did have to move the files over to the new server (it wasn't too painful since they were on the same VM). It honestly took more time to write an extraction tool to dump the binary data within SQL to the file system. Update to #2 - This was separate, but again Bob had an excellent webinar explaining it: http://channel9.msdn.com/Events/TechEd/Europe/2012/DBI411 Update to #3 - Using TFT inheritance we recycled the Docs table we had (minus the huge binary blobs), which required very few changes in our legacy apps. This was a huge win for the developer team.
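
    On the extraction tool mentioned in the edit, for anyone in the same spot: the usual approach is to stream each blob out with SequentialAccess instead of materializing it in memory. A rough sketch (the table and column names are made up):

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        // Streams each varbinary(max) document out of SQL Server to disk.
        public static class BlobExporter
        {
            public static void Export(string connectionString, string outputFolder)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT DocumentId, FileName, Content FROM dbo.Documents", connection))
                {
                    connection.Open();
                    // SequentialAccess streams the blob column instead of buffering it whole.
                    using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
                    {
                        var buffer = new byte[81920];
                        while (reader.Read())
                        {
                            string path = Path.Combine(outputFolder, reader.GetString(1));
                            using (var file = File.Create(path))
                            {
                                long offset = 0;
                                long read;
                                while ((read = reader.GetBytes(2, offset, buffer, 0, buffer.Length)) > 0)
                                {
                                    file.Write(buffer, 0, (int)read);
                                    offset += read;
                                }
                            }
                        }
                    }
                }
            }
        }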


  • Stop lazy loading or skip loading a property in NHibernate? Proxy cannot be serialized through WCF

    - by HelloSam
    Suppose I have a parent/child relationship class and mapping. I am using NHibernate to read the object from the database, and intend to use WCF to send the object across the wire. Goal: when reading the parent object, I want to selectively, on different execution paths, decide when I want to load the child object, because I don't want to read more than I need. Those partially loaded objects must still be able to be sent through WCF. When I say I don't load it, I mean neither side will access that property. Problem: when such a partially loaded object is sent through WCF, since those properties are marked as [DataContract], it cannot be serialized, because the property is a lazy-load proxy instead of a real, known type. What I want to achieve, or the solutions I can think of: lazy=false or lazy=true doesn't work. The former will eagerly fetch all the relationships; the latter will create a proxy. But I want nothing instead - a null would be best. I don't need lazy loading. I hope to get a null for those references that I don't want to fetch - a null, not just a proxy. This would make WCF happy, and waste less time constructing a lazy-load proxy. Could I have a null proxy factory? -OR- Could I make WCF ignore those properties that are a proxy instead of the real thing? I tried the IDataContractSurrogate solution, but only the parent is passed to GetObjectToSerialize; I never observed a proxy being passed through GetObjectToSerialize, leaving no chance to un-proxy it. Edit: After reading the comments and more surfing on the Internet... it seems to me that DTOs would shift a major part of the computation to the server side. But for the project I am working on, 50% of the time the client is "smarter" than the server, and the server is more like a data store with validation and verification. Though I agree the server is not exactly dumb - I have to decide when to fetch the extra references already, and a DTO will make this very explicit. Maybe I should just take the pain. I didn't know about http://automapper.codeplex.com/ before; this motivates me a little more to take the pain. On the other hand, I found http://trentacular.com/2009/08/how-to-use-nhibernate-lazy-initializing-proxies-with-web-services-or-wcf/, which seems to work with IDataContractSurrogate.GetObjectToSerialize.
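
    For what the DTO route would look like in the simplest case (a bare-bones sketch; all type names are illustrative), the mapping copies only what was actually fetched and leaves the rest null, checking NHibernateUtil.IsInitialized so the mapping itself never trips the lazy load:

        using NHibernate;

        // Entities as NHibernate sees them (virtual members so proxies can be generated).
        public class Parent
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual Child Child { get; set; }
        }

        public class Child
        {
            public virtual int Id { get; set; }
            public virtual string Description { get; set; }
        }

        // Plain DTOs that WCF serializes; no proxy can ever end up in here.
        // Decorate with [DataContract]/[DataMember] as needed.
        public class ParentDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public ChildDto Child { get; set; }   // stays null when the child was never fetched
        }

        public class ChildDto
        {
            public int Id { get; set; }
            public string Description { get; set; }
        }

        public static class ParentMapper
        {
            public static ParentDto ToDto(Parent entity)
            {
                var dto = new ParentDto { Id = entity.Id, Name = entity.Name };

                // Only copy the association if NHibernate has already initialized it;
                // touching an uninitialized proxy here would trigger the lazy load.
                if (entity.Child != null && NHibernateUtil.IsInitialized(entity.Child))
                {
                    dto.Child = new ChildDto
                    {
                        Id = entity.Child.Id,
                        Description = entity.Child.Description
                    };
                }

                return dto;
            }
        }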


  • Rx Reactive extensions: Unit testing with FromAsyncPattern

    - by Andrew Anderson
    The Reactive Extensions have a sexy little hook to simplify calling async methods: var func = Observable.FromAsyncPattern<InType, OutType>( myWcfService.BeginDoStuff, myWcfService.EndDoStuff); func(inData).ObserveOnDispatcher().Subscribe(x => Foo(x)); I am using this in an WPF project, and it works great at runtime. Unfortunately, when trying to unit test methods that use this technique I am experiencing random failures. ~3 out of every five executions of a test that contain this code fails. Here is a sample test (implemented using a Rhino/unity auto-mocking container): [TestMethod()] public void SomeTest() { // arrange var container = GetAutoMockingContainer(); container.Resolve<IMyWcfServiceClient>() .Expect(x => x.BeginDoStuff(null, null, null)) .IgnoreArguments() .Do( new Func<Specification, AsyncCallback, object, IAsyncResult>((inData, asyncCallback, state) => { return new CompletedAsyncResult(asyncCallback, state); })); container.Resolve<IRepositoryServiceClient>() .Expect(x => x.EndRetrieveAttributeDefinitionsForSorting(null)) .IgnoreArguments() .Do( new Func<IAsyncResult, OutData>((ar) => { return someMockData; })); // act var target = CreateTestSubject(container); target.DoMethodThatInvokesService(); // Run the dispatcher for everything over background priority Dispatcher.CurrentDispatcher.Invoke(DispatcherPriority.Background, new Action(() => { })); // assert Assert.IsTrue(my operation ran as expected); } The problem that I see is that the code that I specified to run when the async action completed (in this case, Foo(x)), is never called. I can verify this by setting breakpoints in Foo and observing that they are never reached. Further, I can force a long delay after calling DoMethodThatInvokesService (which kicks off the async call), and the code is still never run. I do know that the lines of code invoking the Rx framework were called. Other things I've tried: I have attempted to modify the second last line according to the suggestions here: Reactive Extensions Rx - unit testing something with ObserveOnDispatcher No love. I have added .Take(1) to the Rx code as follows: func(inData).ObserveOnDispatcher().Take(1).Subscribe(x = Foo(x)); This improved my failure rate to something like 1 in 5, but they still occurred. I have rewritten the Rx code to use the plain jane Async pattern. This works, however my developer ego really would love to use Rx instead of boring old begin/end. In the end I do have a work around in hand (i.e. don't use Rx), however I feel that it is not ideal. If anyone has ran into this problem in the past and found a solution, I'd dearly love to hear it.
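
    One way around the dispatcher dependency that people use with Rx (a sketch with illustrative service/type names; the exact namespaces vary between Rx releases, as noted in the comments) is to inject the IScheduler instead of hard-coding ObserveOnDispatcher, passing a dispatcher scheduler at runtime and an immediate or test scheduler from the unit test:

        using System;
        using System.Reactive.Concurrency;   // older Rx drops: System.Concurrency
        using System.Reactive.Linq;          // older Rx drops: System.Linq

        public class InType { }
        public class OutType { }

        public interface IMyWcfServiceClient
        {
            IAsyncResult BeginDoStuff(InType input, AsyncCallback callback, object state);
            OutType EndDoStuff(IAsyncResult result);
        }

        public class StuffViewModel
        {
            private readonly IMyWcfServiceClient _service;
            private readonly IScheduler _observeOn;

            // Dispatcher scheduler at runtime, Scheduler.Immediate (or a test scheduler) in tests.
            public StuffViewModel(IMyWcfServiceClient service, IScheduler observeOn)
            {
                _service = service;
                _observeOn = observeOn;
            }

            public void DoMethodThatInvokesService(InType input)
            {
                var func = Observable.FromAsyncPattern<InType, OutType>(
                    _service.BeginDoStuff, _service.EndDoStuff);

                func(input)
                    .ObserveOn(_observeOn)
                    .Subscribe(x => Foo(x));
            }

            private void Foo(OutType result)
            {
                // update bound state here
            }
        }

    In the test, passing Scheduler.Immediate makes the Subscribe callback run synchronously on the thread that completes the async call, so no Dispatcher pumping is needed and the Rhino/Unity arrangement above can stay the same.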


  • Very simple application fails with "multiple target patterns" from Eclipse

    - by Paul Lammertsma
    Since I'm more comfortable using Eclipse, I thought I'd try converting my project from Visual Studio. Yesterday I tried a very simple little test. No matter what I try, make fails with "multiple target patterns". (This is similar to this unanswered question.) I have three files: Application.cpp: using namespace std; #include "Window.h" int main() { Window *win = new Window(); delete &win; return 0; } Window.h: #ifndef WINDOW_H_ #define WINDOW_H_ class Window { public: Window(); ~Window(); }; #endif Window.cpp: #include <cv.h> #include <highgui.h> #include "Window.h" const char* WINDOW_NAME = "MyApp"; Window::Window() { cvNamedWindow(WINDOW_NAME, CV_WINDOW_AUTOSIZE); cvResizeWindow(WINDOW_NAME, 200, 200); cvMoveWindow(WINDOW_NAME, 0, 0); int key = 0; while (true) { key = cvWaitKey(0); if (key==27 || cvGetWindowHandle(WINDOW_NAME)==0) { break; } } } Window::~Window() { cvDestroyWindow(WINDOW_NAME); } I have added the following paths to the compiler include path (-I): "$(OPENCV)/cv/include" "$(OPENCV)/cxcore/include" "$(OPENCV)/otherlibs/highgui" I have added the following libraries to the linker (-l): cv cxcore highgui And the following library search path (-L): "$(OPENCV)/lib/" Eclipse, the compiler and the linker all succeed in including the headers and libraries. I am using the GNU C/C++ compiler & linker from Cygwin. When compiling, I get the following make error: src/Window.d:1: *** multiple target patterns. Stop. Window.d contains: src/Window.d src/Window.o: ../src/Window.cpp \ C:/Program\ Files/OpenCV/cv/include/cv.h \ C:/Program\ Files/OpenCV/cxcore/include/cxcore.h \ C:/Program\ Files/OpenCV/cxcore/include/cxtypes.h \ C:/Program\ Files/OpenCV/cxcore/include/cxerror.h \ C:/Program\ Files/OpenCV/cxcore/include/cvver.h \ C:/Program\ Files/OpenCV/cxcore/include/cxcore.hpp \ C:/Program\ Files/OpenCV/cv/include/cvtypes.h \ C:/Program\ Files/OpenCV/cv/include/cv.hpp \ C:/Program\ Files/OpenCV/cv/include/cvcompat.h \ C:/Program\ Files/OpenCV/otherlibs/highgui/highgui.h \ C:/Program\ Files/OpenCV/cxcore/include/cxcore.h ../src/Constants.h \ ../src/Window.h C:/Program\ Files/OpenCV/cv/include/cv.h: C:/Program\ Files/OpenCV/cxcore/include/cxcore.h: C:/Program\ Files/OpenCV/cxcore/include/cxtypes.h: C:/Program\ Files/OpenCV/cxcore/include/cxerror.h: C:/Program\ Files/OpenCV/cxcore/include/cvver.h: C:/Program\ Files/OpenCV/cxcore/include/cxcore.hpp: C:/Program\ Files/OpenCV/cv/include/cvtypes.h: C:/Program\ Files/OpenCV/cv/include/cv.hpp: C:/Program\ Files/OpenCV/cv/include/cvcompat.h: C:/Program\ Files/OpenCV/otherlibs/highgui/highgui.h: C:/Program\ Files/OpenCV/cxcore/include/cxcore.h: ../src/Constants.h: ../src/Window.h: I tried removing all OpenCV headers from Window.d (from line 2 onwards), but the error remains. Also, I've updated Eclipse and OpenCV, all to no avail. Do you have any ideas worth trying? I'm willing to try anything!


  • James Server connection failure exception

    - by John
    Hi, I'm trying to connect Javamail application to James server, but I'm getting javax.mail.MessagingException: Could not connect to SMTP host:localhost, port:4555; nested exception is: java.net.SocketException: Invalid arguent: connect Here's the code, which is creating a little problem for me: import java.security.Security; import java.io.IOException; import java.io.PrintWriter; import java.util.Properties; import javax.mail.*; import javax.mail.internet.*; public class mail { public static void main(String[] argts) { Security.addProvider(new com.sun.net.ssl.internal.ssl.Provider()); String mailHost = "your.smtp.server"; String to = "blue@localhost"; String from = "red@localhost"; String subject = "jdk"; String body = "Down to wind"; if ((from != null) && (to != null) && (subject != null) && (body != null)) // we have mail to send { try { //Get system properties Properties props = System.getProperties(); props.put("mail.smtp.user", "red"); props.put("mail.smtp.host", "localhost"); props.put("mail.debug", "true"); props.put("mail.smtp.port", 4555); props.put("mail.smtp.socketFactory.port", 4555); props.put("mail.smtp.socketFactory.class", "javax.net.ssl.SSLSocketFactory"); props.put("mail.smtp.socketFactory.fallback", "false"); Session session = Session.getInstance(props,null); Message message = new MimeMessage(session); message.setFrom(new InternetAddress(from)); message.setRecipients(Message.RecipientType.TO, new InternetAddress[] { new InternetAddress(to) }); message.setSubject(subject); message.setContent(body, "text/plain"); message.setText(body); Transport.send(message); System.out.println("<b>Thank you. Your message to " + to + " was successfully sent.</b>"); } catch (Throwable t) { System.out.println("Teri maa ki "+t); } } } } Thanks in advance. :)


  • Detect an Uninstall in a Launch Condition using Wix MSIs

    - by coxymla
    I've been playing around with Wix, making a little app with auto-generated installer and three versions to test the upgradability, 1.0, 1.1 and 2.0. 1.1 is meant to be able to upgrade from 1.0, and not to allow the user to install 1.1 if 1.1 is already present. <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07"> <UpgradeVersion Minimum="1.0.0" IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes" Property="OLDERVERSIONBEINGUPGRADED" /> <UpgradeVersion Minimum="1.1.0" IncludeMinimum="yes" OnlyDetect="yes" Property="NEWERVERSIONDETECTED" /> </Upgrade> <Condition Message="A later version of [ProductName] is already installed. Setup will now exit."> NOT (NEWERVERSIONDETECTED OR Installed) </Condition> Problem #1: 1.1 can't be uninstalled, because the condition is set and checked during the uninstall. 2.0 is meant to be able to upgrade from 1.1, and not to upgrade from 1.0 ('too old'.) It shouldn't be able to install on top of itself either. <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07"> <UpgradeVersion Minimum="1.1.0" IncludeMinimum="yes" Maximum="1.1.0" IncludeMaximum="yes" Property="OLDERVERSIONBEINGUPGRADED" /> </Upgrade> <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07"> <UpgradeVersion Minimum="2.0.0" OnlyDetect="yes" Property="NEWERVERSIONDETECTED" /> </Upgrade> <Upgrade Id="F30C4129-F14E-43ee-BD5E-03AA89AD8E07"> <UpgradeVersion Minimum="1.0.0" IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes" Property="TOOOLDVERSIONDETECTED" /> </Upgrade> <Condition Message="A later version of [ProductName] is already installed. Setup will now exit."> NOT NEWERVERSIONDETECTED OR Installed </Condition> <Condition Message="A version of [ProductName] that is already installed is too old to be upgraded. Setup will now exit."> NOT TOOOLDVERSIONDETECTED </Condition> Problem #2: If I try to upgrade from 1.1, I hit my modified later version condition. (Error: A later version of Main Application 1.1 is already installed. Setup will now exit.) Problem #3: The installer allows me to install 2.0 over the top of itself. What am I doing wrong with my Upgrade code and conditions to get these problems in my MSIs?


  • Use Dojo Drag and Drop together with Dojo Moveable

    - by Select0r
    Hi, I'm using Dojo.dnd to transfer items between to areas. The problem is: the items will snap into place once I drop them, but I'd like to have them stay where I drop them, but only for one area. Here's a little code to explain this better: <div id="dropZone" class="dropZone"> <div id="itemNodes"></div> <div id="targetZone" dojoType="dojo.dnd.Source"></div> </div> "dropZone" is a DIV that contains two dojo.dnd.Source-areas, "itemNodes" (created programmatically) and "targetZone". Items (DIVs with images) should be dragged from a simple list out of "itemNodes" into "targetZone" and stay where they are dropped. As soon as they are dragged out of "targetZone" they should snap back to the list inside "itemNodes". Here's the code I use to create the items: var nodelist = new dojo.dnd.Source("itemNodes"); {Smarty-Loop} nodelist.insertNodes(false, ['<img class="dragItem" src="{$items->info.itemtext}" alt="{$items->info.itemtext}" border="0" />']); {/Smarty-Loop} But this way I just have two lists of items, the items dropped into "targetZone" won't stay where I dropped them. I've tried a loop dojo.query(".dojoDndItem").forEach(function(node) to grab all items and change them to a "moveable"-type: using dojo.dnd.move.constrainedMoveable will change the items so they can always be moved around (even in "itemNodes") using dojo.dnd.move.boxConstrainedMoveable and defining the "box" to the borders of "targetZone" makes it possible to just move the items around inside "targetZone", but as soon as I drop them, I can't grab and move them back out So here's the question: is it possible to create two dnd.Sources where I can move items back and forth and let the items be "moveable" only in one of the sources?Or is there a workaround like making the items moveable and if they're not dropped into "targetZone" they'll be moved back to the list in "itemNodes" automatically? Once the page is submitted, I have to save the position of every item that has been placed into "targetZone". (The next step will be positioning the items inside "targetZone" on page load if the grid has already been filled before, but I'd be happy to just get the thing working in the first place.) Any hint is appreciated. Greetings, Select0r


  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi real-time basis (some-one is bound to ask what semi real-time means and the answer is as frequently as I reasonably can but I will be pragmatic, as a benchmark lets say we are hoping for every 15min) and feed it into a data-warehouse. How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side, off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each but there are various tables etc so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5. Best Solution? From what I can piece together the most practical solution is as follows: Create a TRIGGER to write all DML activity to a rotating CSV log file Perform whatever transformations are required Use the native DW data pump tool to efficiently pump the transformed CSV into the DW Why this approach? TRIGGERS allow selective tables to be targeted rather than being system wide + output is configurable (i.e. into a CSV) and are relatively easy to write and deploy. SLONY uses similar approach and overhead is acceptable CSV easy and fast to transform Easy to pump CSV into the DW Alternatives considered .... Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). Problem with this is it looked very verbose relative to what I needed and was a little trickier to parse and transform. However it could be faster as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system wide but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log) Querying the data directly via an ETL tool such as Talend and pumping it into the DW ... problem is the OLTP schema would need tweaked to support this and that has many negative side-effects Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave so the conceptual framework is there but the proposed solution just seems easier and cleaner Using the WAL Has anyone done this before? Want to share your thoughts?


  • Asynchronous COMET query with Tornado and Prototype

    - by grundic
    Hello everyone. I'm trying to write simple web application using Tornado and JS Prototype library. So, the client can execute long running job on server. I wish, that this job runs Asynchronously - so that others clients could view page and do some stuff there. Here what i've got: #!/usr/bin/env/ pytthon import tornado.httpserver import tornado.ioloop import tornado.options import tornado.web from tornado.options import define, options import os import string from time import sleep from datetime import datetime define("port", default=8888, help="run on the given port", type=int) class MainHandler(tornado.web.RequestHandler): def get(self): self.render("templates/index.html", title="::Log watcher::", c_time=datetime.now()) class LongHandler(tornado.web.RequestHandler): @tornado.web.asynchronous def get(self): self.wait_for_smth(callback=self.async_callback(self.on_finish)) print("Exiting from async.") return def wait_for_smth(self, callback): t=0 while (t < 10): print "Sleeping 2 second, t={0}".format(t) sleep(2) t += 1 callback() def on_finish(self): print ("inside finish") self.write("Long running job complete") self.finish() def main(): tornado.options.parse_command_line() settings = { "static_path": os.path.join(os.path.dirname(__file__), "static"), } application = tornado.web.Application([ (r"/", MainHandler), (r"/longPolling", LongHandler) ], **settings ) http_server = tornado.httpserver.HTTPServer(application) http_server.listen(options.port) tornado.ioloop.IOLoop.instance().start() if __name__ == "__main__": main() This is server part. It has main view (shows little greeting, current server time and url for ajax query, that executes long running job. If you press a button, a long running job executes. And server hangs :( I can't view no pages, while this job is running. Here is template page: <html> <head> <title>{{ title }}</title> <script type="text/javascript" language="JavaScript" src="{{ static_url("js/prototype.js")}}"></script> <script type='text/javascript' language='JavaScript'> offset=0 last_read=0 function test(){ new Ajax.Request("http://172.22.22.22:8888/longPolling", { method:"get", asynchronous:true, onSuccess: function (transport){ alert(transport.responseText); } }) } </script> </head> <body> Current time is {{c_time}} <br> <input type="button" value="Test" onclick="test();"/> </body> </html> what am I doing wrong? How can implement long pooling, using Tornado and Prototype (or jQuery) PS: I have looked at Chat example, but it too complicated. Can't understand how it works :( PSS Download full example

    Read the article

  • Revisiting .NET, but what should I focus on?

    - by Wayne M
    After about a two-year hiatus, I'm brushing up on my .NET skills to find a .NET job (my previous two positions involved very little development, or development using legacy technologies, so apart from a few very minor apps I have not touched .NET in close to two years). I'm aware of things like ASP.NET MVC, and I have previously read about things like NHibernate and DI/IoC, although I have yet to use them apart from very trivial "Hello World" type applications. I have a subscription to Rob Conery's Tekpub website and occasionally watch the videos when I have free time.

    My concern is this: I don't live in a very technical area. I would be surprised if any but the most tech-savvy companies have heard of, let alone use, ASP.NET MVC, NHibernate (or even LINQ/EF), or know about IoC. I would be willing to bet a large sum of money that 95% of the possible jobs I could obtain will use the following:
    - Visual SourceSafe, if any VCS at all
    - ASP.NET 2.0 WebForms (3.5 if lucky)
    - Raw ADO.NET on top of a very thin implementation of the Gateway pattern
    - Stored procedures in the database for most CRUD operations
    - Gratuitous use of code-behind, with a Service layer if I'm lucky

    If I were extremely lucky, I might find a shop that has heard of ORMs and either uses one or has written its own data abstraction. Also if I were lucky, the company would be using Model-View-Presenter.

    In light of this I'm not sure what I should focus on learning. Personally, I would prefer to be using the latest stuff: ASP.NET MVC, NHibernate, jQuery, WCF etc. Reality says I should go back to the basics, since it looks like most potential opportunities aren't going to be anywhere near the cutting edge. And as much as I would like to find a position and start showing the other developers the benefits, in my past experience this has usually resulted in my being fired for "not being a team player" and doing things the bad old way.

    So, I am curious how you would approach a situation like this. What should I focus on in order to A) reacquaint myself with .NET, and B) prepare myself to obtain a .NET job that is more than likely going to use techniques that I, and most other knowledgeable developers, will scoff at?

    Read the article

  • gcc 4.5 installation problem under ubuntu

    - by Claire Huang
    I tried to install gcc 4.5 on Ubuntu 10.04 but failed with a compile error that I don't know how to solve. Has anyone successfully installed the latest gcc on Ubuntu? Below are my steps and the error message; I'd like to know where the problem is.

    Step 1: download these files:
        gcc-core-4.5.0.tar.gz
        gcc-g++-4.5.0.tar.gz
        gmp-4.3.2.tar.bz2
        mpc-0.8.1.tar.gz
        mpfr-2.4.2.tar.gz

    Step 2: unzip the above files.

    Step 3: move gmp, mpc and mpfr into the gcc-4.5.0/ directory:
        mv gmp-4.3.2 gcc-4.5.0/gmp
        mv mpc-0.8.1 gcc-4.5.0/mpc
        mv mpfr-2.4.2 gcc-4.5.0/mpfr

    Step 4: go to the gcc-4.5.0 directory and do the configuration:
        sudo ./configure

    Step 5: compile and install:
        sudo make
        sudo make install

    The first four steps are OK; I can configure it successfully. However, when I try to compile, the following error message comes out and I cannot figure out what the problem is. Should I change the directory name from "gcc 4.5" to "gcc"? It seems a little strange that we would need to do this ourselves. Is there anything I missed during the installation?

        xxx@xxx-laptop:/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0$ sudo make
        [sudo] password for xxx:
        [ -f stage_final ] || echo stage3 > stage_final
        /bin/bash: line 2: test: /media/Data/Tool/linux/gcc: binary operator expected
        /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory
        make[1]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[3]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        rm -f stage_current
        make[3]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[2]: Entering directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        Configuring stage 1 in host-x86_64-unknown-linux-gnu/intl
        /bin/bash: /media/Data/Tool/linux/gcc: No such file or directory
        make[2]: *** [configure-stage1-intl] Error 127
        make[2]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make[1]: *** [stage1-bubble] Error 2
        make[1]: Leaving directory `/media/Data/Tool/linux/gcc 4.5/gcc-4.5.0'
        make: *** [all] Error 2

    Read the article

  • Unable to retrieve information from HP-UX pst_status object

    - by bogertron
    I am attempting to get process information using the HP-UX C/C++ library. After scouring the internet, I discovered that HP-UX has the pstat.h header file, which lets programmers retrieve process information. After trying to understand the example code on the HP website, I put together a little test sample to see what the code does. I attempted to use example 3; however, I ran into several issues. The first issue came when I attempted to execute the following line of code:

        (void)printf("pid is %d, command is %s\n", pst[i].pst_pid, pst[i].pst_ucomm);

    When I attempted to print the string, I hit a memory fault. So I decided to try to see what the string actually is, and came up with the following (stdio.h added here for printf):

        #include <stdio.h>
        #include <sys/param.h>
        #include <sys/pstat.h>
        #include <sys/unistd.h>
        #include <string.h>

        int main(int argc, char** argv)
        {
        #define BURST ((size_t)10)
            struct pst_status pst[BURST];
            int i, count;
            int idx = 0;    /* index within the context */
            int index = 0;

            /* loop until count == 0, which will occur when all have been returned */
            while ((count = pstat_getproc(pst, sizeof(pst[0]), BURST, idx)) > 0) {
                index = 0;
                printf("index: %d", index);

                /* got count (max of BURST) this time. process them */
                while (pst[i].pst_ucomm[index] != '\0') {
                    printf("%c", pst[i].pst_ucomm[index]);
                    index++;
                }
                printf("\n");

                for (i = 0; i < count; i++) {
                    printf("pid is %d, command is \n", pst[i].pst_pid);
                }

                /*
                 * now go back and do it again, using the next index after
                 * the current 'burst'
                 */
                idx = pst[count-1].pst_idx + 1;
            }
            if (count == -1)
                perror("pstat_getproc()");
        #undef BURST
        }

    Unfortunately, what happens is that the first process is printed and then I just get "pid is 2, command is", "pid is 2, command is", "pid is 2, command is"... over and over. I know I must be doing something foolish, since my C/C++ skills are not that great, but I cannot figure out what the issue is, since the code is largely copied from the HP website. So here are the questions, for clarity:
    1. Why can't printf("%s", pst[i].pst_ucomm); handle the string?
    2. Why can't I iterate over the processes in the system?
    Any help is greatly appreciated.

    Read the article
