Search Results

Search found 8552 results on 343 pages for '14 04'.


  • BIND returns SERVFAIL when querying for its authoritative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had some trouble coming up with a proper title. I have a little home server set up, for internet sharing, samba, basic http, dlna mediaserver and what not, and I happened to have a domain at hand, so I thought why not direct it to this computer? I have BIND 9.8.0 installed, and - AFAIK - configured it properly. For a few days, the public view did not work, and I really did not care, since the local view worked. But now suddenly, even the local view fails. If I try to query the nameserver for anything in my domain, it returns the following error: $ nslookup andromeda.dafaces.com ;; Got SERVFAIL reply from ::1, trying next server ;; Got SERVFAIL reply from ::1, trying next server Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL Also, the public view points to the old IP address of the domain, probably because of the same error. Some information about the system: $ uname -a Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux $ named -v BIND 9.8.0 And the named.conf file: # cat /etc/named.conf // // /etc/named.conf // include "/etc/rndc.key"; #controls { # inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; }; #}; options { directory "/var/named"; pid-file "/var/run/named/named.pid"; auth-nxdomain yes; datasize default; // Uncomment these to enable IPv6 connections support // IPv4 will still work: listen-on-v6 { any; }; listen-on { any; }; // Add this for no IPv4: // listen-on { none; }; // Default security settings. // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; }; // allow-recursion { any; }; allow-query { any; }; allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; }; allow-update { key "dnskulcs"; }; version none; hostname none; server-id none; zone-statistics yes; forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; }; }; view "local" { match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; }; recursion yes; zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; }; zone "." IN { type hint; file "root.hint"; }; zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; }; zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; }; }; view "public" { match-clients { any;}; recursion no; zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; }; }; //zone "example.org" IN { // type slave; // file "example.zone"; // masters { // 192.168.1.100; // }; // allow-query { any; }; // allow-transfer { any; }; //}; logging { channel xfer-log { file "/var/log/named.log"; print-category yes; print-severity yes; print-time yes; severity info; }; category xfer-in { xfer-log; }; category xfer-out { xfer-log; }; category notify { xfer-log; }; }; All help would be highly appreciated! EDIT: Zone files: # cat /var/named/internal/dafaces.com.fw $ORIGIN . $TTL 3600 ; 1 hour dafaces.com IN SOA tressis.dafaces.com. postmaster.dafaces.com. ( 2011032201 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 2419200 ; expire (4 weeks) 3600 ; minimum (1 hour) ) NS tressis.dafaces.com. 
A 192.168.1.1 MX 10 mail.dafaces.com. $ORIGIN _tcp.dafaces.com. _http SRV 0 5 80 www.dafaces.com. _ssh SRV 0 5 22 tressis.dafaces.com. $ORIGIN dafaces.com. acrisius A 192.168.1.230 andromeda A 192.168.1.7 andromeda-win7 CNAME andromeda aspasia A 192.168.1.233 athena A 192.168.1.232 callisto A 192.168.1.102 db A 192.168.1.1 management A 192.168.1.1 ; web management for the router functions haley A 192.168.1.5 hoth A 192.168.1.101 mail A 192.168.1.1 satelite A 192.168.1.20 sony-player A 192.168.1.103 TXT "310f16de2d2712dfc4ae6e5c54f60f828e" torrent A 192.168.1.1 tracker A 192.168.1.1 tressis A 192.168.1.1 www A 192.168.1.1 zeus A 192.168.1.231 and # cat /var/named/external/dafaces.com.fw $ORIGIN . $TTL 3600 dafaces.com IN SOA ns.dafaces.com. postmaster.dafaces.com. ( 2011032405; serial 28800; refresh 7200; retry 2419200; expire 3600; minimum ) NS ns.dafaces.com. NS ns0.xname.org. NS ns1.xname.org. NS ns2.xname.org. A 89.135.129.37 MX 10 mail.dafaces.com. $ORIGIN dafaces.com. ;Services _ssh._tcp SRV 0 5 22 tressis _http._tcp SRV 0 5 80 www ns A 89.135.129.37 hoth A 89.135.129.37 www A 89.135.129.37 mail A 89.135.129.37 db A 89.135.129.37 torrent A 89.135.129.37 tracker A 89.135.129.37 Edit: Oh, hell, I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility that the public IPv4 address will change (but thank god, it is a very rare case), so I update the external IP address in the zone file daily with a shell script: # cat /etc/cron.daily/dnsupdate #!/bin/sh FILE="/var/named/external/dafaces.com.fw" SERIAL=$(date +%Y%m%d05) PUBLIC_IP=$(ifconfig internet |sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}") cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona mv /tmp/ujzona $FILE /etc/rc.d/named reload

    Read the article

  • Why cache static files with Varnish, why not pass

    - by Saif Bechan
    I have a system running nginx / php-fpm / varnish / wordpress and amazon s3. Now I have looked at a lot of configuration files while setting up the system, and in all of them I found something like this: /* If the request is for pictures, javascript, css, etc */ if (req.url ~ "\.(jpg|jpeg|png|gif|css|js)$") { /* Remove the cookie and make the request static */ unset req.http.cookie; return (lookup); } I do not understand why this is done. Most of the examples also run NginX as a webserver. Now the question is, why would you use the varnish cache to cache these static files? It makes much more sense to me to only cache the dynamic files so that php-fpm / mysql don't get hit that much. Am I correct or am I missing something here? UPDATE I want to add some info to the question based on the answer given. If you have a dynamic website, where the content actually changes a lot, caching does not make sense. But if you use WordPress for a static website, for example, this can be cached for long periods of time. That said, more important to me is static content. I have found a link with some tests and benchmarks of different cache apps and webserver apps. http://nbonvin.wordpress.com/2011/03/14/apache-vs-nginx-vs-varnish-vs-gwan/ NginX is actually faster in getting your static content, so it makes more sense to just let it pass. NginX works great with static files. -- Apart from that, most of the time static content is not even in the webserver itself. Most of the time this content is stored on a CDN somewhere, maybe AWS S3, something like that. I think the varnish cache is the last place where you want to have your static content stored.

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups on a snapshot/incremental basis, capturing only changed data. I was surprised to see a regular period of time where the data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval: it is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s, for a period of 3-4hrs. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running and found the following: the process is store.exe; it is working on the mailbox database files; while running, it is generating .log files (in the mailbox database folder) consistent with database changes; the mailbox database is ~60GB in size, which fits with the total data changes on each iteration. I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause, as the symptoms fit but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article

  • Spring Security RememberMe Services with Session Cookie

    - by Jarrod
    I am using Spring Security's RememberMe Services to keep a user authenticated. I would like to find a simple way to have the RememberMe cookie set as a session cookie rather than with a fixed expiration time. For my application, the cookie should persist until the user closes the browser. Any suggestions on how to best implement this? Any concerns on this being a potential security problem? The primary reason for doing so is that with a cookie-based token, any of the servers behind our load balancer can service a protected request without relying on the user's Authentication to be stored in an HttpSession. In fact, I have explicitly told Spring Security to never create sessions using the namespace. Further, we are using Amazon's Elastic Load Balancing, and so sticky sessions are not supported. NB: Although I am aware that as of Apr. 08, Amazon now supports sticky sessions, I still do not want to use them for a handful of other reasons. Namely that the untimely demise of one server would still cause the loss of sessions for all users associated with it. http://aws.amazon.com/about-aws/whats-new/2010/04/08/support-for-session-stickiness-in-elastic-load-balancing/

    Read the article

  • MPNowPlayingInfoCenter defaultCenter will not update or retrieve information

    - by Jayson Lane
    I'm working to update the MPNowPlayingInfoCenter and having a bit of trouble. I've tried quite a bit, to the point where I'm at a loss. The following is my code: self.audioPlayer.allowsAirPlay = NO; Class playingInfoCenter = NSClassFromString(@"MPNowPlayingInfoCenter"); if (playingInfoCenter) { NSMutableDictionary *songInfo = [[NSMutableDictionary alloc] init]; MPMediaItemArtwork *albumArt = [[MPMediaItemArtwork alloc] initWithImage:[UIImage imageNamed:@"series_placeholder"]]; [songInfo setObject:thePodcast.title forKey:MPMediaItemPropertyTitle]; [songInfo setObject:thePodcast.author forKey:MPMediaItemPropertyArtist]; [songInfo setObject:@"NCC" forKey:MPMediaItemPropertyAlbumTitle]; [songInfo setObject:albumArt forKey:MPMediaItemPropertyArtwork]; [[MPNowPlayingInfoCenter defaultCenter] setNowPlayingInfo:songInfo]; } This isn't working. I've also tried: [[MPNowPlayingInfoCenter defaultCenter] setNowPlayingInfo:nil]; in an attempt to get it to remove the existing information from the iPod app (or whatever may have info there). In addition, just to see if I could find out the problem, I've tried retrieving the current information on app launch: NSDictionary *info = [[MPNowPlayingInfoCenter defaultCenter] nowPlayingInfo]; NSString *title = [info valueForKey:MPMediaItemPropertyTitle]; NSString *author = [info valueForKey:MPMediaItemPropertyArtist]; NSLog(@"Currently playing: %@ // %@", title, author); and I get Currently playing: (null) // (null) I've researched this quite a bit and the following articles explain it pretty thoroughly; however, I am still unable to get this working properly. Am I missing something? Would there be anything interfering with this? Is this something my app needs to register to access (I didn't see this in any docs)? Apple's Docs Change lock screen background audio controls Now playing info ignored UPDATE: I've created a tutorial outlining how I was able to get this functioning properly: http://jaysonlane.net/tech-blog/2012/04/lock-screen-now-playing-with-mpnowplayinginfocenter/

    Read the article

  • CA2000 and disposal of WCF client

    - by Mayo
    There is plenty of information out there concerning WCF clients and the fact that you cannot simply rely on a using statement to dispose of the client. This is because the Close method can throw an exception (i.e. if the server hosting the service doesn't respond). I've done my best to implement something that adheres to the numerous suggestions out there. public void DoSomething() { MyServiceClient client = new MyServiceClient(); // from service reference try { client.DoSomething(); } finally { client.CloseProxy(); } } public static void CloseProxy(this ICommunicationObject proxy) { if (proxy == null) return; try { if (proxy.State != CommunicationState.Closed && proxy.State != CommunicationState.Faulted) { proxy.Close(); } else { proxy.Abort(); } } catch (CommunicationException) { proxy.Abort(); } catch (TimeoutException) { proxy.Abort(); } catch { proxy.Abort(); throw; } } This appears to be working as intended. However, when I run Code Analysis in Visual Studio 2010 I still get a CA2000 warning. CA2000 : Microsoft.Reliability : In method 'DoSomething()', call System.IDisposable.Dispose on object 'client' before all references to it are out of scope. Is there something I can do to my code to get rid of the warning or should I use SuppressMessage to hide this warning once I am comfortable that I am doing everything possible to be sure the client is disposed of? Related resources that I've found: http://www.theroks.com/2011/03/04/wcf-dispose-problem-with-using-statement/ http://www.codeproject.com/Articles/151755/Correct-WCF-Client-Proxy-Closing.aspx http://codeguru.earthweb.com/csharp/.net/net_general/tipstricks/article.php/c15941/
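
    Since the cleanup above is deliberate and covers every path, one reasonable way out - a sketch, not the only answer - is to suppress this one warning with a recorded justification rather than contort the code. The attribute below is standard FxCop machinery, though the justification text is only illustrative:

        using System.Diagnostics.CodeAnalysis;

        public class ServiceCaller
        {
            // SuppressMessage is only compiled in when the CODE_ANALYSIS
            // symbol is defined, so release binaries carry no trace of it.
            [SuppressMessage("Microsoft.Reliability",
                "CA2000:DisposeObjectsBeforeLosingScope",
                Justification = "Proxy is closed or aborted by CloseProxy().")]
            public void DoSomething()
            {
                MyServiceClient client = new MyServiceClient(); // from service reference
                try
                {
                    client.DoSomething();
                }
                finally
                {
                    client.CloseProxy();
                }
            }
        }

    The warning is arguably a false positive here: calling Dispose() directly would route through Close(), which is exactly the throwing path that CloseProxy() guards against on faulted channels.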

    Read the article

  • Using Unit of Work design pattern / NHibernate Sessions in an MVVM WPF

    - by Echiban
    I think I am stuck in the paralysis of analysis. Please help! I currently have a project that: uses NHibernate on SQLite; implements the Repository and Unit of Work pattern: http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/04/10/nhibernate-and-the-unit-of-work-pattern.aspx; uses an MVVM strategy in a WPF app. The Unit of Work implementation in my case supports one NHibernate session at a time. I thought at the time that this made sense; it hides the inner workings of the NHibernate session from the ViewModel. Now, according to Oren Eini (Ayende): http://msdn.microsoft.com/en-us/magazine/ee819139.aspx He convinces the audience that NHibernate sessions should be created / disposed when the view associated with the presenter / viewmodel is disposed. He presents issues why you don't want one session per windows app, nor do you want a session to be created / disposed per transaction. This unfortunately poses a problem because my UI can easily have 10+ views/viewmodels present in an app. He is presenting using an MVP strategy, but does his advice translate to MVVM? Does this mean that I should scrap the unit of work and have the viewmodel create NHibernate sessions directly? Should a WPF app only have one working session at a time? If that is true, when should I create / dispose a NHibernate session? And I still haven't considered how NHibernate stateless sessions fit into all this! My brain is going to explode. Please help!

    Read the article

  • Git subtree not properly using .gitignore when doing a partial clone

    - by D W
    I am a graduate student with many scripts, bibliography data in bibtex, a thesis draft in latex, presentations in open office, posters in scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer, and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes, I'd like to be able to merge them back to the original project. Based on my understanding, git subtree can do this. http://github.com/apenwarr/git-subtree There is an example that is along the lines of what I'm trying to do at: http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ Say the trunk of my project contained the directories: (bib bin cfg data fig src todo). When I use git subtree split -P bib -b export git checkout export I get the bib directory, plus all the files that should have been ignored or considered binary based on .gitignore, such as the src directory and everything in it that ends in a tilde, or the ./data directory. dwickrama@DWwork:~/research/trunk$ ls * -r biblography.bib JabRef src: script1.sh~ README~ script2.sh~ script3.sh~ script4.R~ script5.awk~ script5.py~ cfg: cfgFile1.ini~ cfgFile2.ini~ cfgFile3.ini~ bin: bigBinaryPackage1 bigBinaryPackage2 dwickrama@DWwork:~/research/trunk$ My .gitignore file is as follows: *.doc diff=word *.tex diff=tex *.bib diff=bibtex *.py diff=python *.eps binary *.jpg binary *.png binary ./bin/* binary *~ How do I prevent this?

    Read the article

  • Using git subtree to clone a subdirectory of a project with versioning history then merge it back af

    - by D W
    I am a graduate student with many scripts, bibliography data in bibtex, a thesis draft in latex, presentations in open office, posters in scribus, and figures and result data. I would like to put everything in one project under version control. Then, when I need to work on a portion such as the bibliography data, I would like to check that subdirectory out, modify it as necessary, and merge it back. I would like the ability to check out one version to my home computer, and a different one to my work computer, make changes to each independently, and eventually merge them back. I would also like to be able to check out a piece of code from this big project and import it with versioning into a separate project. If I make changes, I'd like to be able to merge them back to the original project. Based on my understanding, git subtree can do this. http://github.com/apenwarr/git-subtree There is an example that is along the lines of what I'm trying to do at: http://psionides.jogger.pl/2010/02/04/sharing-code-between-projects-with-git-subtree/ This code is from that site: git clone git://git2.kernel.org/pub/scm/git/git.git newtree=$(git subtree split --prefix=gitweb --annotate='(split) ' \ 0a8f4f0^.. --onto=1130ef3 --rejoin) git branch latest_gitweb $newtree gitk latest_gitweb Say the trunk of my project contained the directories: (bib bin cfg data fig src todo). How would I use git-subtree to split off the bib (bibliography) directory with versioning? When I use git-subtree split --prefix=bib I get 884842f6f4e9896e2e4e9402ee0ef762cd617257 as output, but I don't know where to go from there.

    Read the article

  • Android: ScrollView in flipper

    - by Manu
    I have a flipper: <?xml version="1.0" encoding="utf-8"?> <LinearLayout android:id="@+id/ParentLayout" xmlns:android="http://schemas.android.com/apk/res/android" style="@style/MainLayout" > <LinearLayout android:id="@+id/FlipperLayout" style="@style/FlipperLayout"> <ViewFlipper android:id="@+id/viewflipper" style="@style/ViewFlipper"> <!--adding views to ViewFlipper--> <include layout="@layout/home1" android:layout_gravity="center_horizontal" /> <include layout="@layout/home2" android:layout_gravity="center_horizontal" /> </ViewFlipper> </LinearLayout> </LinearLayout> The first layout, home1, consists of a scroll view. What should I do to distinguish between the flipping gesture and the scrolling? Presently: if I remove the scroll view, I can swipe across; if I add the scroll view, I can only scroll. I saw a suggestion that I should override onInterceptTouchEvent(MotionEvent), but I do not know how to do this. My code, at this moment, looks like this: public class HomeActivity extends Activity { -- declares @Override public void onCreate(Bundle savedInstanceState) { -- declares & preliminary actions LinearLayout layout = (LinearLayout) findViewById(R.id.ParentLayout); layout.setOnTouchListener(new OnTouchListener() { public boolean onTouch(View v, MotionEvent event) { if (gestureDetector.onTouchEvent(event)) { return true; } return false; }}); @Override public boolean onTouchEvent(MotionEvent event) { gestureDetector.onTouchEvent(event); return true; } class MyGestureDetector extends SimpleOnGestureListener { @Override public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) { // http://www.codeshogun.com/blog/2009/04/16/how-to-implement-swipe-action-in-android/ } } } Can anybody please guide me in the right direction? Thank you.

    Read the article

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts that I respect use version control. For example: http://github.com/hadley/ See comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/ However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. With regards to version control there are many benefits which I have read about, yet they seem to be less relevant to the solo data analyst. Backup: I have a backup system already in place. Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly, etc) Collaboration: Most of the time I am analysing data myself; thus, I wouldn't get the collaboration benefits of version control. There are also several potential costs involved with adopting version control: Time to evaluate and learn a version control system A possible increase in complexity over my current file management system However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above: Is version control worth the effort? What are the main pros and cons of adopting version control? What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

    Read the article

  • calling CreateFile, specifying FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE.

    - by alexander-daniels
    Before I describe my problem, here is a description of the program I'm writing: This is a C++ application. The purpose of my program is to create a file in RAM memory. I read that if I specify FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE when creating the file, it will be kept directly in RAM. One of the blogs that talks about this is this one: http://blogs.msdn.com/larryosterman/archive/2004/04/19/116084.aspx I have built a mini-program, but it does not achieve the goal. Instead, it creates a file on the hard drive in the directory I specify. Here's my program: void main () { LPCWSTR str = L"c:\temp.txt"; HANDLE fh = CreateFile(str,GENERIC_WRITE,0,NULL,CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,NULL); if (fh == INVALID_HANDLE_VALUE) { printf ("Could not open TWO.TXT"); return; } DWORD dwBytesWritten; for (long i=0; i<20000000; i++) { WriteFile(fh, "This is a test\r\n", 16, &dwBytesWritten, NULL); } return; } I think there is a problem in the CreateFile call, but I can't fix it. Please help me.

    Read the article

  • How to stream partial content with ASP.NET MVC FileStreamResult

    - by o_o
    We're using a FileStreamResult to provide video data to a Silverlight MediaElement based video player: public ActionResult Preview(Guid id) { return new FileStreamResult( Services.AssetStore.GetStream(id, ContentType.Preview), "application/octet-stream"); } Unfortunately, the Silverlight video player downloads the entire video file before it starts playing. This behavior is expected as our Preview Action does not support downloading partial content. (side note: if the file is hosted in an IIS virtual directory we can start playback at any location in the video while it is still downloading. however for security and auditing reasons we can't provide a direct download link. so this is not an option.) How can we improve the Controller Action to support partial HTTP content? I assume we first have to inform the client that we support it (adding an "Accept-Ranges:bytes" header to a HEAD request), then we have to evaluate the HTTP "Range" header and stream the requested file range with a response code of 206. Will that work with ASP.NET MVC hosted on IIS6? Is there already some code available? Also see: http://en.wikipedia.org/wiki/List_of_HTTP_headers http://blogs.msdn.com/anilkumargupta/archive/2009/04/29/downloadprogress-downloadprogressoffset-and-bufferprogress-of-the-mediaelement.aspx http://benramsey.com/archives/206-partial-content-and-range-requests/
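
    For what it's worth, here is a rough sketch of what a range-aware result could look like. RangeFileStreamResult is a made-up name, only single byte ranges are parsed, and a production version would also need If-Range, HEAD, and error handling:

        using System;
        using System.IO;
        using System.Text.RegularExpressions;
        using System.Web;
        using System.Web.Mvc;

        public class RangeFileStreamResult : ActionResult
        {
            private readonly Stream _stream;
            private readonly string _contentType;

            public RangeFileStreamResult(Stream stream, string contentType)
            {
                _stream = stream;          // must be seekable for ranges to work
                _contentType = contentType;
            }

            public override void ExecuteResult(ControllerContext context)
            {
                HttpResponseBase response = context.HttpContext.Response;
                long total = _stream.Length;
                long start = 0, end = total - 1;

                // Parse a single "Range: bytes=start-end" header, if present.
                string range = context.HttpContext.Request.Headers["Range"];
                Match m = range == null ? null : Regex.Match(range, @"bytes=(\d*)-(\d*)");
                if (m != null && m.Success)
                {
                    if (m.Groups[1].Value.Length > 0)
                    {
                        start = long.Parse(m.Groups[1].Value);
                        if (m.Groups[2].Value.Length > 0) end = long.Parse(m.Groups[2].Value);
                    }
                    else if (m.Groups[2].Value.Length > 0)
                    {
                        start = total - long.Parse(m.Groups[2].Value); // suffix: last N bytes
                    }
                    response.StatusCode = 206; // Partial Content
                    response.AddHeader("Content-Range",
                        string.Format("bytes {0}-{1}/{2}", start, end, total));
                }

                response.ContentType = _contentType;
                response.AddHeader("Accept-Ranges", "bytes");
                response.AddHeader("Content-Length", (end - start + 1).ToString());

                // Stream only the requested window in 64 KB chunks.
                _stream.Seek(start, SeekOrigin.Begin);
                byte[] buffer = new byte[65536];
                long remaining = end - start + 1;
                int read;
                while (remaining > 0 &&
                       (read = _stream.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining))) > 0)
                {
                    response.OutputStream.Write(buffer, 0, read);
                    remaining -= read;
                }
                _stream.Close();
            }
        }

    Since the 206 is produced entirely by the action, IIS6 hosting should be indifferent to it; whether MediaElement actually issues Range requests against an octet-stream response is worth verifying with a proxy before committing to this approach.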

    Read the article

  • WPF Memory Leak on XP (CMilChannel, HWND)

    - by vanja.
    My WPF application leaks memory at about 4kb/s. The memory usage in Task Manager climbs constantly until the application crashes with an "Out of Memory" exception. By doing my own research I have found that the problem is discussed here: http://stackoverflow.com/questions/801589/track-down-memory-leak-in-wpf and #8 here: http://blogs.msdn.com/jgoldb/archive/2008/02/04/finding-memory-leaks-in-wpf-based-applications.aspx The problem described is: This is a leak in WPF present in versions of the framework up to and including .NET 3.5 SP1. This occurs because of the way WPF selects which HWND to use to send messages from the render thread to the UI thread. This sample destroys the first HWND created and starts an animation in a new Window. This causes messages sent from the render thread to pile up without being processed, effectively leaking memory. The solution offered is: The workaround is to create a new HwndSource first thing in your App class constructor. This MUST be created before any other HWND is created by WPF. Simply by creating this HwndSource, WPF will use this to send messages from the render thread to the UI thread. This assures all messages will be processed, and that none will leak. But I don't understand the solution! I have a subclass of Application that I am using and I have tried creating a window in that constructor but that has not solved the problem. Following the instructions given literally, it looks like I just need to add this to my Application constructor: new HwndSource(new HwndSourceParameters("MyApplication"));
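
    For comparison, a minimal sketch of the workaround as quoted above - the window name and size are arbitrary placeholders. The two details that are easy to miss are that this must run before any other HWND is created (including splash screens, notify icons, or early windows) and that the HwndSource should be held in a field so it is not garbage collected:

        using System.Windows;
        using System.Windows.Interop;

        public partial class App : Application
        {
            // Held for the lifetime of the app so the HWND survives.
            private readonly HwndSource _renderMessageWindow;

            public App()
            {
                // Must be the first HWND the process creates.
                HwndSourceParameters p = new HwndSourceParameters("WorkaroundWindow", 0, 0);
                _renderMessageWindow = new HwndSource(p);
            }
        }

    If the constructor version still leaks, it may be worth checking whether anything runs before App's constructor (a custom Main, a native splash screen) that has already created a window.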

    Read the article

  • Remove file from git repository (history)

    - by Devenv
    (solved, see bottom of the question body) I've been looking for this for a long time now; what I have till now is: http://dound.com/2009/04/git-forever-remove-files-or-folders-from-history/ and http://progit.org/book/ch9-7.html Pretty much the same method, but both of them leave objects in pack files... Stuck. What I tried: git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_name' rm -Rf .git/refs/original rm -Rf .git/logs/ git gc I still have files in the pack, and this is how I know it: git verify-pack -v .git/objects/pack/pack-3f8c0...bb.idx | sort -k 3 -n | tail -3 And this: git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch file_name" HEAD rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune The same... I tried the git clone trick; it removed some of the files (~3000 of them) but the largest files are still there... I have some large legacy files in the repository, ~200M, and I really don't want them there... And I don't want to reset the repository to 0 :( SOLUTION: This is the shortest way to get rid of the files: check .git/packed-refs - my problem was that I had a refs/remotes/origin/master line there for a remote repository; delete it, otherwise git won't remove those files (optional) git verify-pack -v .git/objects/pack/#{pack-name}.idx | sort -k 3 -n | tail -5 - to check for the largest files (optional) git rev-list --objects --all | grep a0d770a97ff0fac0be1d777b32cc67fe69eb9a98 - to check what files those are git filter-branch --index-filter 'git rm --cached --ignore-unmatch file_names' - to remove the file from all revisions rm -rf .git/refs/original/ - to remove git's backup git reflog expire --all --expire='0 days' - to expire all the loose objects (optional) git fsck --full --unreachable - to check if there are any loose objects git repack -A -d - repacking the pack git prune - to finally remove those objects

    Read the article

  • SQL - Rank() on a table

    - by Abhi
    create table v (mydate,value) as select to_date('20/03/2010 00','dd/mm/yyyy HH24'),98 from dual union all select to_date('20/03/2010 01','dd/mm/yyyy HH24'),124 from dual union all select to_date('20/03/2010 02','dd/mm/yyyy HH24'),140 from dual union all select to_date('20/03/2010 03','dd/mm/yyyy HH24'),138 from dual union all select to_date('20/03/2010 04','dd/mm/yyyy HH24'),416 from dual union all select to_date('20/03/2010 05','dd/mm/yyyy HH24'),196 from dual union all select to_date('20/03/2010 06','dd/mm/yyyy HH24'),246 from dual union all select to_date('20/03/2010 07','dd/mm/yyyy HH24'),176 from dual union all select to_date('20/03/2010 08','dd/mm/yyyy HH24'),124 from dual union all select to_date('20/03/2010 09','dd/mm/yyyy HH24'),128 from dual union all select to_date('20/03/2010 10','dd/mm/yyyy HH24'),32010 from dual union all select to_date('20/03/2010 11','dd/mm/yyyy HH24'),384 from dual union all select to_date('20/03/2010 12','dd/mm/yyyy HH24'),368 from dual union all select to_date('20/03/2010 13','dd/mm/yyyy HH24'),392 from dual union all select to_date('20/03/2010 14','dd/mm/yyyy HH24'),374 from dual union all select to_date('20/03/2010 15','dd/mm/yyyy HH24'),350 from dual union all select to_date('20/03/2010 16','dd/mm/yyyy HH24'),248 from dual union all select to_date('20/03/2010 17','dd/mm/yyyy HH24'),396 from dual union all select to_date('20/03/2010 18','dd/mm/yyyy HH24'),388 from dual union all select to_date('20/03/2010 19','dd/mm/yyyy HH24'),360 from dual union all select to_date('20/03/2010 20','dd/mm/yyyy HH24'),194 from dual union all select to_date('20/03/2010 21','dd/mm/yyyy HH24'),234 from dual union all select to_date('20/03/2010 22','dd/mm/yyyy HH24'),328 from dual union all select to_date('20/03/2010 23','dd/mm/yyyy HH24'),216 from dual From this table, how do I rank() over 'value', partitioning by each hour of the day, and select only the 1st-ranked result?

    Read the article

  • Creating custom IP-STS for sharepoint foundation 2010 without ADFS

    - by user252229
    I plan to create a very simple custom IP-STS for SharePoint Foundation 2010 without an ADFS server, so that anyone can integrate Windows Live ID with SharePoint Foundation 2010 without ADFS. I can't use an ADFS server because it cannot be installed on Windows Web Server 2008 (Web Edition). I also found that many articles use an LDAP provider, but that does not exist in SharePoint Foundation either (it requires the Server edition). After much searching I found the following articles, which cover the whole technique except for one problem. 1) Creating a custom claim provider: blogs.technet.com/b/speschka/archive/2010/03/13/writing-a-custom-claims-provider-for-sharepoint-2010-part-1.aspx 2) Creating a custom STS provider: http://blogs.msdn.com/b/chunliu/archive/2010/04/02/how-to-make-use-of-a-custom-ip-sts-with-sharepoint-2010-part-1.aspx Only one step remains: I get the following error after entering a username on the STS site and being redirected to localhost/_trust/default.aspx (I leave EncryptingCertificateName empty): Operation is not valid due to the current state of the object I expect to get an access-denied error instead of that error. 1. Is it possible at all? 2. Can anyone tell me where I can find a working article on creating a custom IP-STS without an ADFS server? Any idea will help me. Thanks

    Read the article

  • Query Months help

    - by StealthRT
    Hey all, I am in need of some helpful tips/advice on how to go about my problem. I have a database that houses a "signup" table. The date for this table is formatted as such: 2010-04-03 00:00:00 Now suppose I have 10 records in this database: 2010-04-03 00:00:00 2010-01-01 00:00:00 2010-06-22 00:00:00 2010-02-08 00:00:00 2010-02-05 00:00:00 2010-03-08 00:00:00 2010-09-29 00:00:00 2010-11-16 00:00:00 2010-04-09 00:00:00 2010-05-21 00:00:00 And I wanted to get each month's total registrations... so following the example above: Jan = 1 Feb = 2 Mar = 1 Apr = 2 May = 1 Jun = 1 Jul = 0 Aug = 0 Sep = 1 Oct = 0 Nov = 1 Dec = 0 Now how can I use a query to do that but not have to use a query like: WHERE left(date, 7) = '2010-01' and keep doing that 12 times? I would like it to be a single query call and just have it place each month's visits into an array like so: do until EOF theMonthArray[0] = "total for jan" theMonthArray[1] = "total for feb" theMonthArray[2] = "total for mar" theMonthArray[3] = "total for apr" ...etc loop I just can not think of a way to do that other than the example I posted with the 12 queries called - one for each month. This is my query as of right now. Again, this only populates for one month, where I am trying to populate all 12 months at once. SELECT count(idNumber) as numVisits, theAccount, signUpDate, theActive from userinfo WHERE theActive = 'YES' AND idNumber = '0203' AND theAccount = 'SUB' AND left(signUpDate, 7) = '2010-04' GROUP BY idNumber ORDER BY numVisits; The example query above outputs this: numVisits | theAccount | signUpDate | theActive 2 SUB 2010-04-16 00:00:00 YES Which is correct, because I have 2 records within the month of April. But again, I am trying to do all 12 months at one time (in a single query) so I do not tax the database server as much compared to doing 12 different queries... UPDATE I'm looking to do something along these lines: if NOT rst.EOF if left(rst("signUpDate"), 7) = "2010-01" then theMonthArray[0] = rst("numVisits") end if if left(rst("signUpDate"), 7) = "2010-02" then theMonthArray[1] = rst("numVisits") end if etc etc.... end if Any help would be great! :) David

    Read the article

  • Top Container Background Problem

    - by Norbert
    Here's a screenshot: http://dl.getdropbox.com/u/118004/Screen%20shot%202010-04-13%20at%202.50.49%20PM.png The red bar on the left is the background I set for the #personal div and I would like it to align to the top of the container, vertically. The problem is that I have a background for the #container-top div on top of the #container div with absolute positioning. Is there any way to move the #personal div up so there would be no space left? HTML <div id="container"> <div id="container-top"></div> <div id="personal"> <h1>Jonathan Doe</h1> <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit, sed diam nonummy nibh euismod tincidunt ut laoreet dolore magna aliqua erat volutpat.</p> </div> <!-- end #personal --> </div> <!-- end #container --> CSS #container { background: url(images/bg-mid.png) repeat-y top center; width: 835px; margin: 40px auto; position: relative; } #container-top { background: url(images/bg-top.png) no-repeat top center; position: absolute; height: 12px; width: 835px; top: -12px; } #container-bottom { background: url(images/bg-bottom.png) no-repeat top center; position: absolute; height: 27px; width: 835px; bottom: -27px; } #personal { background: url(images/personal-info.png) no-repeat 0px left; }

    Read the article

  • Is there a more easy way to create a WCF/OData Data Service Query Provider?

    - by routeNpingme
    I have a simple little data model resembling the following: InventoryContext { IEnumerable<Computer> GetComputers() IEnumerable<Printer> GetPrinters() } Computer { public string ComputerName { get; set; } public string Location { get; set; } } Printer { public string PrinterName { get; set; } public string Location { get; set; } } The results come from a non-SQL source, so this data does not come from Entity Framework connected up to a database. Now I want to expose the data through a WCF OData service. The only way I've found to do that thus far is creating my own Data Service Query Provider, per this blog tutorial: http://blogs.msdn.com/alexj/archive/2010/01/04/creating-a-data-service-provider-part-1-intro.aspx ... which is great, but seems like a pretty involved undertaking. The code for the provider would be 4 times longer than my whole data model to generate all of the resource sets and property definitions. Is there something like a generic provider in between Entity Framework and writing your own data source from zero? Maybe some way to build an object data source or something, so that the magical WCF unicorns can pick up my data and ride off into the sunset without having to explicitly code the provider?
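
    One option short of a full custom provider, assuming read-only access is acceptable: the built-in reflection provider, which DataService<T> applies automatically when the context class exposes IQueryable<T> properties whose element types have identifiable keys. A sketch against the data model above - the [DataServiceKey] choices and the stub data are assumptions:

        using System.Collections.Generic;
        using System.Data.Services;
        using System.Data.Services.Common;
        using System.Linq;

        [DataServiceKey("ComputerName")]
        public class Computer
        {
            public string ComputerName { get; set; }
            public string Location { get; set; }
        }

        [DataServiceKey("PrinterName")]
        public class Printer
        {
            public string PrinterName { get; set; }
            public string Location { get; set; }
        }

        public class InventoryContext
        {
            // The reflection provider picks these up as entity sets.
            public IQueryable<Computer> Computers
            {
                get { return GetComputers().AsQueryable(); }
            }

            public IQueryable<Printer> Printers
            {
                get { return GetPrinters().AsQueryable(); }
            }

            private IEnumerable<Computer> GetComputers()
            {
                // stand-in for the real non-SQL source
                yield return new Computer { ComputerName = "pc-01", Location = "Lab" };
            }

            private IEnumerable<Printer> GetPrinters()
            {
                yield return new Printer { PrinterName = "laser-01", Location = "Hall" };
            }
        }

        public class InventoryDataService : DataService<InventoryContext>
        {
            public static void InitializeService(DataServiceConfiguration config)
            {
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
                config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            }
        }

    Queries are then served by LINQ to Objects, so every request enumerates the source; updates would additionally require implementing IUpdatable, at which point a custom provider starts to earn its keep.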

    Read the article

  • forward/strong enum in VS2010

    - by Noah Roberts
    At http://blogs.msdn.com/vcblog/archive/2010/04/06/c-0x-core-language-features-in-vc10-the-table.aspx there is a table showing C++0x features that are implemented in the 2010 RC. Among them are forward declaration of enums and strongly typed enums, but both are listed as "partial". The main text of the article says that this means they are either incomplete or implemented in some non-standard way. So I've got VS2010RC and am playing around with the C++0x features. I can't figure these ones out and can't find any documentation on these two features. Not even the simplest attempts compile. enum class E { test }; int main() {} fails with: 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2332: 'enum' : missing tag name 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2236: unexpected 'class' 'E'. Did you forget a ';'? 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C3381: 'E' : assembly access specifiers are only available in code compiled with a /clr option 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C2143: syntax error : missing ';' before '}' 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(518): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== int main() { enum E : short; } fails with: 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(513): warning C4480: nonstandard extension used: specifying underlying type for enum 'main::E' 1e:\dev_workspace\experimental\2010_feature_assessment\2010_feature_assessment\main.cpp(513): error C2059: syntax error : ';' ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== So it seems it must be some totally non-standard implementation that has allowed them to justify calling this feature "partially" done. How would I rewrite that code to use the forward declaration and strong typing features?

    Read the article

  • jquery: prepopulating autocomplete fields

    - by David Tildon
    I'm using the tokenizing autocomplete plugin for jquery ( http://loopj.com/2009/04/25/jquery-plugin-tokenizing-autocomplete-text-entry ). I mostly use Ruby, and I'm really unfamiliar with javascript, though. My basic setup looks like this, and works fine for a new, blank form: $(document).ready(function () { $("#tag_ids_field").tokenInput("/tags", { queryParam: "search" }); }); The problem comes when I try to prepopulate it, like for an edit page. I'm trying to do something like this (where the "#tag_ids_field" text box contains the JSON when the page is loaded - that way is just cleaner on the application side of things). $(document).ready(function () { var tags = $("#tag_ids_field").html(); $("#tag_ids_field").tokenInput("/tags", { queryParam: "search", prePopulate: tags }); }); However, when the page loads I see that it's just filled with hundreds of entries that read 'undefined'. I get this even if I take the JSON output that Rails provides and try sticking it right in the .js file: $(document).ready(function () { $("#tag_ids_field").tokenInput("/tags", { queryParam: "search", prePopulate: "[{\"id\":\"44\",\"name\":\"omnis sit impedit et numquam voluptas enim\"},{\"id\":\"515\",\"name\":\"deserunt odit id doloremque reiciendis aliquid qui vel\"},{\"id\":\"943\",\"name\":\"exercitationem numquam possimus quasi iste nisi illum\"}]" }); }); That's obviously not a solution; I just tried it out of frustration, and I get the same behavior. My two questions: One, why are my text boxes being filled with "undefined" tags when I try to prepopulate, and how can I get them to prepopulate successfully? Two, I'm planning on having many autocomplete fields like this on the same page (for when several records are edited at once - they all query the same place). How can I make each autocomplete field take its prepopulated values from its own textbox? Something like (applying these settings to all input boxes with a certain class, not just the one of a particular id): $(document).ready(function () { $(".tag_ids_field").tokenInput("/tags", { queryParam: "search", prePopulate: (the contents of that particular ".tag_ids_field" input box) }); });

    Read the article

  • iPhone multitouch - Some touches dispatch touchesBegan: but not touchesMoved:

    - by zkarcher
    I'm developing a multitouch application. One touch is expected to move, and I need to track its position. For all other touches, I need to track their beginnings and endings, but their movement is less critical. Sometimes, when 3 or more touches are active, my UIView does not receive touchesMoved: events for the moving touch. This problem is intermittent, and can always be reproduced after a few attempts: Touch the screen with 2 fingers. Touch the screen with another finger, and move this finger around. The moving finger always dispatches touchesBegan: and touchesEnded:, but sometimes does not dispatch any touchesMoved: events. Whenever the moving touch does not dispatch touchesMoved: events, I can force it to dispatch touchesMoved: if I move one of the other touches. This seems to "force" every touch to recheck its position, and I successfully receive a touchesMoved: event. However, this is clumsy. This bug is reproducible on both the iPhone 2G and 3GS models. My question is: How do I ensure that my moving touch dispatches touchesMoved: events? Does anyone have any experience with this issue? I've spent a few fruitless days searching the web for answers. I found a post describing how to sync touch events with the VBL: http://www.71squared.com/2009/04/maingameloop-changes/ . However, this has not solved the problem. I really don't know how to proceed. Any help is appreciated!

    Read the article

  • High-concurrency counters without sharding

    - by dound
    This question concerns two implementations of counters which are intended to scale without sharding (with a tradeoff that they might under-count in some situations): http://appengine-cookbook.appspot.com/recipe/high-concurrency-counters-without-sharding/ (the code in the comments) http://blog.notdot.net/2010/04/High-concurrency-counters-without-sharding My questions: With respect to #1: Running memcache.decr() in a deferred, transactional task seems like overkill. If memcache.decr() is done outside the transaction, I think the worst case is that the transaction fails and we miss counting whatever we decremented. Am I overlooking some other problem that could occur by doing this? What are the significant tradeoffs between the two implementations? Here are the tradeoffs I see: #2 does not require datastore transactions. To get the counter's value, #2 requires a datastore fetch, while #1 typically only needs to do a memcache.get() and memcache.add(). When incrementing a counter, both call memcache.incr(). Periodically, #2 adds a task to the task queue while #1 transactionally performs a datastore get and put. #1 also always performs memcache.add() (to test whether it is time to persist the counter to the datastore). Conclusions (without actually running any performance tests): #1 should typically be faster at retrieving a counter (#1 memcache vs #2 datastore), though #1 has to perform an extra memcache.add() too. However, #2 should be faster when updating counters (#1 datastore get+put vs #2 enqueue a task). On the other hand, with #1 you have to be a bit more careful with the update interval since the task queue quota is almost 100x smaller than either the datastore or memcache APIs.

    Read the article

  • WCF, IIS6.0 (413) Request Entity Too Large.

    - by Andrew Kalashnikov
    Hello, guys. I've got an annoying problem. I have a WCF service (basicHttpBinding with Transport security over HTTPS). This service implements a contract consisting of two methods: LoadData and GetData. GetData works OK! My client receives packages of ~2MB without problems. All works correctly. But when I try to load data via bool LoadData(Stream data); - the signature of the method - I get (413) Request Entity Too Large. Stack Trace: Server stack trace: ? ServiceModel.Channels.HttpChannelUtilities.ValidateRequestReplyResponse(HttpWebRequest request, HttpWebResponse response, HttpChannelFactory factory, WebException responseException, ChannelBinding channelBinding) System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout) System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout) System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout) I tried this: http://blogs.msdn.com/jiruss/archive/2007/04/13/http-413-request-entity-too-large-can-t-upload-large-files-using-iis6.aspx, but it doesn't work! My server is 2003 with IIS 6.0. Please help.
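
    In case it helps: with Transport security on IIS 6, the 413 usually comes from IIS rejecting the large SSL request body (the UploadReadAheadSize issue the linked article covers) rather than from WCF itself, but the WCF-side limits have to be raised as well or the next failure is a WCF one. A sketch with illustrative sizes - the service binding needs matching values:

        using System.ServiceModel;

        static class UploadBindingFactory
        {
            public static BasicHttpBinding Create()
            {
                BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
                binding.TransferMode = TransferMode.Streamed;        // suits Stream parameters
                binding.MaxReceivedMessageSize = 100L * 1024 * 1024; // 100 MB, illustrative
                return binding;
            }
        }

        // If IIS 6 still answers 413 over HTTPS after this, raising the metabase
        // value is the usual fix (the site id and size here are examples only):
        //   cscript adsutil.vbs set w3svc/1/uploadreadaheadsize 10485760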

    Read the article
