Search Results

Search found 6339 results on 254 pages for 'josh 24 2'.


  • How to make the internal subwoofer work on an Asus G73JW?

    - by CodyLoco
    I have an Asus G73JW laptop which has an internal subwoofer built in. Currently, the system detects the internal speakers as a 2.0 system (or I can change to 4.0, which is the only other option). I found a bug report here: https://bugs.launchpad.net/ubuntu/+source/alsa-driver/+bug/673051 which discusses the bug, and according to it a fix was sent upstream back at the end of 2010. I would have thought this would have made it into 12.04, but I guess not? I tried following the link given at the very bottom to install the latest ALSA drivers, here: https://wiki.ubuntu.com/Audio/InstallingLinuxAlsaDriverModules but I keep running into an error when trying to install:

        sudo apt-get install linux-alsa-driver-modules-$(uname -r)
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package linux-alsa-driver-modules-3.2.0-24-generic
        E: Couldn't find any package by regex 'linux-alsa-driver-modules-3.2.0-24-generic'

    I believe I have added the repository correctly:

        sudo add-apt-repository ppa:ubuntu-audio-dev/ppa
        [sudo] password for codyloco:
        You are about to add the following PPA to your system:
        This PPA will be used to provide testing versions of packages for supported Ubuntu releases.
        More info: https://launchpad.net/~ubuntu-audio-dev/+archive/ppa
        Press [ENTER] to continue or ctrl-c to cancel adding it
        Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /tmp/tmp.7apgZoNrqK --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver hkp://keyserver.ubuntu.com:80/ --recv 4E9F485BF943EF0EABA10B5BD225991A72B194E5
        gpg: requesting key 72B194E5 from hkp server keyserver.ubuntu.com
        gpg: key 72B194E5: public key "Launchpad Ubuntu Audio Dev team PPA" imported
        gpg: Total number processed: 1
        gpg: imported: 1 (RSA: 1)

    I also ran an update afterwards (following the instructions in the fix above). Any ideas?
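
    One way to narrow this down (a sketch, not a confirmed fix): this error usually means the PPA simply has no build matching the running kernel, so it is worth listing what the PPA actually provides before assuming the repository was added incorrectly.

        # after add-apt-repository, refresh the package index first
        sudo apt-get update
        # list every linux-alsa-driver-modules package the archive publishes
        apt-cache search linux-alsa-driver-modules
        # a matching package name must end with exactly this kernel string
        uname -r

    If nothing matches the running kernel, waiting for the PPA to publish a build for it (or moving to a kernel it does cover) would be the usual fallback.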

    Read the article

  • Installation on SSD with Windows preinstalled

    - by ebbot
    I bought a laptop with one of these fancy SSD drives, the fancy new UEFI and so on. At first I figured Windows out, Ubuntu in, but after getting 3 DOA units out of 3 laptops in one day I realized that maybe keeping Windows could come in handy. So dual boot it is. And this is what I've got:

        Disk 1 - 500 GB HD
          300 MB    - Windows only reports "Healthy"; I don't know what it's for
          600 MB    - "Healthy (EFI partition)"
          186.30 GB - NTFS "OS (C:)", "Healthy (Boot, Page File, Crash Dump, Primary Partition)"
          258.45 GB - NTFS "Data (D:)", "Healthy"
          20.00 GB  - "Healthy (Recovery Partition)"

        Disk 2 - 24 GB SSD
          4.00 GB   - "Healthy (OEM Partition)"
          18.36 GB  - "Healthy (Primary Partition)"

    So I'm not sure what the first partition on each drive does (the 300 MB one on the HD and the OEM partition on the SSD), nor do I know what Data (D:) is for. I think the 2nd partition on the SSD is for some kind of Windows speed-up. I'm debating whether I should shrink the OS (C:) partition to around 120 GB or so, clear out Data (D:), and use the whole SSD for Ubuntu. That would leave me 24 GB for e.g. / on the SSD and some 320 GB on the HD for /home and swap. Is this a reasonable setup? Do I need to configure fstab for the SSD differently than for a HD?
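
    On the fstab question, a minimal sketch of SSD-friendly mount options, assuming the SSD root partition turns out to be /dev/sdb2 and the HD /home partition /dev/sda5 (both device names here are hypothetical; check the real ones with sudo blkid first):

        # /etc/fstab (sketch) - noatime avoids a metadata write on every read;
        # discard enables online TRIM (a scheduled fstrim works as well)
        /dev/sdb2  /      ext4  noatime,discard,errors=remount-ro  0  1
        /dev/sda5  /home  ext4  defaults                           0  2

    Otherwise an SSD needs no special fstab treatment; these options are optimizations, not requirements.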

    Read the article

  • AngularJS dealing with large data sets (Strategy)

    - by Brian
    I am working on developing a personal temperature-logging viewer, based on my Raspberry Pi curl'ing data into my web server's API. Temperatures are taken every 2 seconds and I can have several temperature sensors posting data, so needless to say I will have a lot of data to handle even within the scope of an hour. I have implemented a very simple paging API on the server so it doesn't time out; it currently returns data in units of 1000 records per call, and the client pages through them. My idea was to initially show, say, the last 20 minutes of data from a sensor (or from all sensors, depending on user choices), then allow the user to select other timeframes to display. The issue comes in when you want to view all sensors or an extended time period (say 24 hours). Is there a best practice for handling this large amount of data? Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage? I haven't been able to find a decent example of this in use yet, even though there are a lot of ways to approach the problem. I am just looking for suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side (as beyond a week of data this might not be feasible).
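
    A minimal sketch of the "small live window plus local cache" idea raised above, assuming a reading shape of { timestamp, value } (hypothetical; adapt to the real API payload):

        // Keep the chart bound to a small live window; cache a longer window
        // (e.g. 24 hours) in localStorage so timeframe switches avoid a refetch.
        var CACHE_KEY = 'temps-24h';
        var DAY_MS = 24 * 60 * 60 * 1000;

        function cacheReadings(readings) {
          // bound localStorage usage by keeping only the last 24 hours
          var cutoff = Date.now() - DAY_MS;
          var recent = readings.filter(function (r) {
            return r.timestamp >= cutoff;
          });
          localStorage.setItem(CACHE_KEY, JSON.stringify(recent));
        }

        function loadCachedReadings() {
          var raw = localStorage.getItem(CACHE_KEY);
          return raw ? JSON.parse(raw) : [];
        }

    Downsampling before caching (say, one averaged point per minute for anything older than the live window) keeps the payload small enough that even a week of data stays feasible.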

    Read the article

  • Dual Monitor in Ubuntu 11.10 is resetting the theme

    - by Mengu
    I'm experiencing a strange problem with dual monitors. When I set up the dual monitor via nvidia-settings and save the settings to xorg.conf, the default Unity theme and icons revert to the GTK default. I also get an error telling me "could not apply the stored configuration for monitors". Here is my xorg.conf:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 280.13 (buildd@yellow) Fri Aug 5 12:31:28 UTC 2011

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Chi Mei Optoelectronics corp."
            HorizSync       30.0 - 75.0
            VertRefresh     60.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GT 540M"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "1"
            Option         "TwinViewXineramaInfoOrder" "DFP-0"
            Option         "metamodes" "DFP: nvidia-auto-select +0+0, CRT: 1680x1050 +1920+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Here is an example of what I'm talking about: http://i.stack.imgur.com/vrlW1.png

    How can I fix this? Thanks in advance.
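
    The "could not apply the stored configuration for monitors" error typically comes from GNOME's own monitor layout file conflicting with the xorg.conf written by nvidia-settings. One commonly suggested step (worth trying, not a guaranteed fix) is to remove the stale layout and log in again:

        # gnome-settings-daemon stores its monitor layout here; a stale copy
        # can fight the nvidia-settings TwinView configuration
        rm ~/.config/monitors.xml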

    Read the article

  • No grub selection after installing kernel Ubuntu 14.04

    - by CPJ
    I have installed a new kernel on my system which can be found by grub, but when I restart I can only select the old kernel. Things I tried in other threads with similar problems didn't help. sudo update-grub gives:

        Generating grub configuration file ...
        Found linux image: /boot/vmlinuz-3.15.0-031500rc2-lowlatency
        Found initrd image: /boot/initrd.img-3.15.0-031500rc2-lowlatency
        Found linux image: /boot/vmlinuz-3.15.0-031500rc2-generic
        Found initrd image: /boot/initrd.img-3.15.0-031500rc2-generic
        Found linux image: /boot/vmlinuz-3.13.0-24-generic
        Found initrd image: /boot/initrd.img-3.13.0-24-generic
        Found memtest86+ image: /boot/memtest86+.elf
        Found memtest86+ image: /boot/memtest86+.bin
        done

    However, after a reboot I can only choose the 3.13 kernel. Any ideas what happened? I have a fully encrypted hard drive, so maybe I have overlooked something while installing the new kernel that is needed to make this work? The grub config file is:

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=false
        GRUB_TIMEOUT=5
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_ENABLE_CRYPTODISK=1

    Any ideas? Thanks.
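
    Two things worth checking (hedged suggestions, not a confirmed diagnosis): on 14.04 the non-default kernels normally sit under the "Advanced options for Ubuntu" submenu rather than at the top level of the grub menu, and /etc/default/grub can be told to remember the last-booted entry instead of always picking entry 0:

        # /etc/default/grub (sketch)
        GRUB_DEFAULT=saved
        GRUB_SAVEDEFAULT=true

        # regenerate the configuration afterwards
        sudo update-grub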

    Read the article

  • Slight stuttering when moving windows in fresh 12.04 install

    - by Konsolkongen
    I installed Ubuntu 12.04 today, and my problem is that when I'm moving windows around the screen it doesn't feel smooth at all. Usually I can fix this by changing the refresh rate to 60 Hz, but this time it doesn't help. My graphics card is an Nvidia GTX 560 Ti and I've tried the 295.40, 295.45 and 304.43 drivers (the last of which I'm currently using), but none of them has resolved my problem. I searched around a bit and tried changing the refresh rate using compizconfig-settings-manager and xrandr. No change using CCSM, but when I tried xrandr I got this reply:

        konsolkongen@konsolkongen-desktop:~$ xrandr -r 60
        Rate 60.0 Hz not available for this size

    which is nonsense of course. This is what my xorg.conf file looks like:

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 295.33 (buildd@allspice) Fri Mar 30 15:25:24 UTC 2012

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Samsung SyncMaster"
            HorizSync       30.0 - 81.0
            VertRefresh     56.0 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 560 Ti"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "TwinView" "0"
            Option         "TwinViewXineramaInfoOrder" "DFP-0"
            Option         "metamodes" "DFP-0: 1680x1050_60 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Any help would be greatly appreciated; my obsession with video quality can't stand stuttering like this. For what it's worth, I don't have any screen tearing, so at least V-sync is on. Thanks.
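
    A hedged aside on the xrandr error: -r applies to the current output and size, so it helps to first ask X which modes and rates it actually advertises, then request the rate against an explicit output and mode (the output name below is hypothetical; take the real one from the query):

        # list outputs, modes and the refresh rates X believes are available
        xrandr -q
        # then pin 60 Hz explicitly for that output
        xrandr --output DVI-I-1 --mode 1680x1050 --rate 60

    With the proprietary NVIDIA driver the advertised rates can be dynamic TwinView placeholders rather than real refresh rates, which is one reason "Rate 60.0 Hz not available" can appear for a mode that plainly runs at 60 Hz.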

    Read the article

  • 12.10 Quantal display issues using nvidiaXineramaInfoOverride

    - by AvatarKava
    After updating to 12.10 today, my xorg.conf doesn't seem to be respected by Quantal. I'm not sure if this is a bug or just an adjustment I have to make because of changes in the OS. When logging in, Ubuntu now recognizes only one 3840x1080 screen named "Matrox", and maximizing a window spans it across both monitors. In 12.04, this configuration file successfully allowed me to override the data provided by my TripleHead2Go and maximize windows to a single monitor. Any ideas on where to start debugging this? After a bit of searching, I tried to make changes according to the update here: http://www.phoronix.com/scan.php?page=news_item&px=MTEyMDk

    Here's where the config file sits currently:

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            # HorizSync source: edid, VertRefresh source: edid
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Matrox"
            HorizSync       31.5 - 80.0
            VertRefresh     59.9 - 75.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "GeForce GTX 260M"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "nvidiaXineramaInfo" "true"
            Option         "nvidiaXineramaInfoOrder" "CRT-0"
            #Option        "metamodes" "CRT: nvidia-auto-select +0+0"
            Option         "nvidiaXineramaInfoOverride" "1920x1080 +0+0, 1920x1080 +1920+0"
            Option         "Stereo" "0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Read the article

  • September 2014 OTN Japan Seminars & Events

    - by OTN-J Master
    A roundup of OTN-related seminars and events in Japan for September 2014:

    JPOUG> SET EVENTS 20140907 - Sunday, September 7, 2014, 13:00-17:00. Sessions organized by the Japan Oracle User Group (JPOUG) for engineers working with Oracle Database, MySQL and related technologies.

    Recovery Manager (RMAN) seminar - Wednesday, September 17, in two sessions (15:30-17:00 and 18:30-20:00). Covers Oracle backup and recovery with Recovery Manager (RMAN), including the RMAN features of Oracle Database 12c.

    Oracle Database Appliance seminar - Thursday, September 18, 15:30-17:30. An introduction to the Oracle Database Appliance engineered system.

    Java EE seminar - Wednesday, September 24, 18:30-20:00. Covers Java EE web application development with JavaServer Faces (JSF) and Contexts and Dependency Injection (CDI), based on Java EE 6.

    ORACLE MASTER Bronze Oracle Database 12c exam preparation - Tuesday, September 30, 14:30-16:30. Outlines the ORACLE MASTER certification path, including the Bronze 12c SQL [12c SQL] and Bronze DBA 12c exams.

    Read the article

  • Silverlight Cream for April 07, 2010 -- #833

    - by Dave Campbell
    In this Issue: Alan Mendelevich, Siyamand Ayubi, Rudi Grobler(-2-), Josh Smith, VinitYadav, and Dave Campbell.

    Shoutouts: Jordan Knight has a demo up of a project he did for DigiGirlz: DigiGirlz, Deep Zoom and Azure; hopefully we'll get source later :) Jeremy Likness has a must-read post on his Ten Reasons to use the Managed Extensibility Framework. I put this on another post earlier, but if you want some desktop bling for WP7, Ozymandias has some: I Love Windows Phone Wallpaper. If you're not going to be in 'Vegas next week, Tim Heuer reminds us there's an alternative: Watch the Silverlight 4 Launch event and LIVE QA with ScottGu and others.

    From SilverlightCream.com:

    Ghost Lines in Silverlight: Alan Mendelevich reports an issue when drawing lines with odd coordinate values. He originated it in Silverlight 3, but it is there in SL4 RC as well... check it out and leave him a comment.

    A Framework to Animate WPF and Silverlight Pages Similar to the PowerPoint Slides: Siyamand Ayubi has an interesting post up on animating WPF or Silverlight pages to make them progress in the manner of a PPT slideshow.

    And it can also make phone calls…: Rudi Grobler has a list of 'tasks' you can do with WP7 such as PhoneCallTask or EmailComposeTask... looks like this should be plasticized :)

    Using the GPS, Accelerometer & Vibration Controller: Rudi Grobler is also investigating how to use the GPS, Accelerometer, and Vibration in WP7, with a bunch of external links to back it up.

    Assembly-level initialization at design time: Josh Smith has a solution to the problem of initializing design-time data in Blend (did you know that was an issue?)... the solution is great, and so is the running commentary between Josh and Karl Shifflett in the comments!

    ySurf: A Yahoo Messenger Clone built in Silverlight: VinitYadav built a Yahoo Messenger app in Silverlight and has detailed out all the ugly bits for us on the post, plus made everything available.

    Your First Silverlight Application: Dave Campbell's first post at DZone, cracking open a beginner's series on Silverlight. If you're expecting something heavy-duty, skip this. If you're wanting to learn Silverlight and haven't jumped in yet, give it a try.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • Silverlight Cream for May 20, 2010 -- #866

    - by Dave Campbell
    In this Issue: Mike Snow, Victor Gaudioso, Ola Karlsson, Josh Twist(-2-), Yavor Georgiev, Jeff Wilcox, and Jesse Liberty.

    Shoutouts: Frank LaVigne has an interesting observation on his site: The Big Take-Away from MIX10. Rishi has updated all his work, including a release of nRoute to the latest bits: nRoute Samples Revisited. Looks like I posted one of Erik Mork's links two days in a row :) ... that's because I meant to post this one: Silverlight Week – How to Choose a Mobile Platform. Just in case you missed it (and for me to find it easily), Scott Guthrie has an excellent post up on Silverlight 4 Tools for VS 2010 and WCF RIA Services Released.

    From SilverlightCream.com:

    Silverlight Tip of the Day #23 – Working with Strokes and Shapes: Mike Snow's Silverlight Tip of the Day number 23 is up, covering Strokes and Shapes, as in dotted and dashed lines.

    New Silverlight Video Tutorial: How to Fire a Visual State based upon the value of a Boolean Variable: Victor Gaudioso's latest video tutorial is up, on selecting and firing a visual state based on a boolean... project included.

    Simultaneously calling multiple methods on a WCF service from silverlight: Ola Karlsson details a problem he had where he was calling multiple WCF services to pull all his data and had problems... turns out it was a blocking call, and he found the solution in the forums and details it all out for us... actually, a search at SilverlightCream.com would have found one of the better posts listed once you knew the problem :)

    Securing Your Silverlight Applications: Josh Twist has an article in MSDN on Silverlight security. He talks about Windows, forms, and .NET authorization, then WCF, WCF Data, cross domain and XAP files. He also has some good external links.

    Template/View selection with MEF in Silverlight: Josh Twist points out that this next article is just a simple demonstration, but he's discussing, and provides code for, a MEF-driven ViewModel navigation scheme with animation on the navigation.

    Workaround for accessing some ASMX services from Silverlight 4: Are you having problems hitting your asmx web service with Silverlight 4? Yeah... others are too! Yavor Georgiev at the Silverlight Web Services Team blog has a post up about it... why it's a sometimes problem, and a workaround for it.

    Using Silverlight 4 features to create a Zune-like context menu: Jeff Wilcox used Silverlight 4 and the Toolkit to create some samples of menus, then demonstrates a duplication of the Zune menu.

    You Already Are A Windows Phone 7 Programmer: Jesse Liberty is demonstrating the fact that Silverlight developers are WP7 developers by creating a Silverlight and a WP7 app side by side using the same code... this is a closer look at the Silverlight TV presentation he did.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream | Join me @ SilverlightCream | Phoenix Silverlight User Group

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • MSDN Magazine May Issue is Live

    Editor's Note: This Way-Cool 'Internet' Doohickey (Keith Ward): It wasn't all that long ago that surfing meant grabbing a board and hanging 10.

    Silverlight Security: Securing Your Silverlight Applications (Josh Twist): Josh Twist explains the unique challenges developers face in securing Silverlight applications. He shows where to focus your efforts, concentrating on the key aspects of authentication and authorization.

    Now Playing: Building Custom Players with the Silverlight Media Framework...

    Read the article

  • Service Discovery in WCF 4.0 &ndash; Part 1

    - by Shaun
    When designing a service-oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides clients calling the services, in a large distributed system one service may invoke other services, in which case it needs to know the endpoints of the services it invokes. This might not be a problem in a small system, but it becomes one when you have more than 10 services. For example, in my current product there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc.

    Benefit of Discovery Service

    Since almost all my services need to invoke at least one other service, it would be a difficult task to make sure all service endpoints are configured correctly in every service, and it would be a nightmare if a service changed its endpoint at runtime. Hence, we need a discovery service to remove this configuration dependency. A discovery service acts as a service dictionary which stores the relationship between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just needs to ask the discovery service where service Y is; the discovery service returns all proper endpoints of service Y, and service X can use one of those endpoints to send requests to service Y. When a service changes its endpoint address, all it needs to do is update its record in the discovery service, and all the others will know the new endpoint. WCF 4.0 Discovery supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service: when a client wants to invoke a service, it broadcasts a message (normally over UDP) to the entire network with the service match criteria; all services which enabled the discovery behavior receive this message, and only the matched services send their endpoints back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there is a standalone discovery service; for more information about the ad-hoc mode please refer to MSDN.

    Service Announcement and Probe

    The main functionality of a discovery service is to return the proper endpoint addresses to whoever is looking for them. In most cases the consuming service (acting as a client) sends the contract it wants to call to the discovery service, and the discovery service finds the endpoint and responds. Sometimes the contract and endpoint are not enough; the metadata can also contain versioning and extension attributes, but this post will only cover the case of contract and endpoint. When a client (or a service that needs to invoke another service) needs to connect to a target service, it first requests the discovery service through the "Probe" method with a criteria. Basically the criteria contains the contract type name of the target service. The discovery service then searches its endpoint repository by that criteria; the repository might be a database, a distributed cache or a flat XML file. If a match is found, the discovery service grabs the endpoint information (called discovery endpoint metadata in WCF) and sends it back. This is called "Probe".
    Finally the client receives the discovery endpoint metadata and uses the endpoint to connect to the target service. Besides the probe, the discovery service should take responsibility for knowing when a new service becomes available, as well as when one goes offline. This feature is named "Announcement": when a service starts or stops, it announces itself to the discovery service. So the basic functionality of a discovery service should include:

    1. An endpoint which receives the service online message and adds the service endpoint information to the discovery repository.
    2. An endpoint which receives the service offline message and removes the service endpoint information from the discovery repository.
    3. An endpoint which receives the client probe message, finds the matching service endpoints, and returns the discovery endpoint metadata.

    The WCF 4.0 discovery infrastructure classes cover all of these features.

    Discovery Service in WCF 4.0

    WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all the necessary classes and interfaces to build a WS-Discovery-compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, i.e. the managed proxy discovery mode. To build a managed discovery service in WCF 4.0, just create a new class that inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implements the procedures of service announcement and probe, and exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronous, which means all invocations of the discovery service happen asynchronously, for better service capability and performance.

    1. OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: invoked when a service sends the online announcement message. We need to add the endpoint information to the repository in this method.
    2. OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: invoked when a service sends the offline announcement message. We need to remove the endpoint information from the repository in this method.
    3. OnBeginFind, OnEndFind: invoked when a client sends the probe message to find service endpoint information. We need to look through the repository for the endpoints matching the client's criteria in this method.
    4. OnBeginResolve, OnEndResolve: invoked when a client sends the resolve message. Different from the find method, the resolve method makes the discovery service return exactly one service endpoint metadata to the client. In our example we will NOT implement this method.

    Let's create our own discovery service, inheriting from the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior on this class. Since the built-in discovery service host class only supports the singleton mode, we must set its instance context mode to single.
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as the storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is available through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

    Consume Windows Azure Storage

    Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is itself built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosted on WAWS; but since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file; you can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts: one is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM); the other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
    app.get("/was/init", function (req, res) {
      // load all records from windows azure sql database
      sql.open(connectionString, function (err, conn) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot open connection.");
        }
        else {
          conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
            if (err) {
              console.log(err);
              res.send(500, "Cannot retrieve records.");
            }
            else {
              if (results.rows.length > 0) {
                // begin to transform the records into table service
                // recreate the table named 'resource'
                client.deleteTable(tableName, function (error) {
                  client.createTableIfNotExists(tableName, function (error) {
                    if (error) {
                      error["target"] = "createTableIfNotExists";
                      res.send(500, error);
                    }
                    else {
                      // transform the records
                      for (var i = 0; i < results.rows.length; i++) {
                        var entity = {
                          "PartitionKey": results.rows[i][1],
                          "RowKey": results.rows[i][0],
                          "Value": results.rows[i][2]
                        };
                        client.insertEntity(tableName, entity, function (error) {
                          if (error) {
                            error["target"] = "insertEntity";
                            res.send(500, error);
                          }
                          else {
                            console.log("entity inserted");
                          }
                        });
                      }
                      // send the response
                      console.log("all done");
                      res.send(200, "All done!");
                    }
                  });
                });
              }
            }
          });
        }
      });
    });

Now we can publish it to the cloud and have a try. But normally we'd better test it against the local emulator first. In the Node.js SDK there are three built-in properties which provide the account name, key and host address for the local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to point to my local database. The code changes as below.

    // windows azure sql database
    //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;";
    // sql server
    var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};";

    var azure = require("azure");
    var storageAccountName = "synctile";
    var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
    var tableName = "resource";
    // windows azure storage
    //var client = azure.createTableService(storageAccountName, storageAccountKey);
    // local storage emulator
    var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST);

Now let's run the application and navigate to "localhost:12345/was/init" (I hosted it on port 12345). We can see it transformed the data from my local database to the local table service. Everything looks fine. But there is a bug in my code. If we look at the Node.js command window we will find that it sent the response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js performs all IO operations in a non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending the response was executed in parallel as well, even though I wrote it at the end of my logic.
The correct logic should be: only when all entities have been copied to the table service with no error should I send the response to the browser; otherwise I should send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform on each item in the array. And the third argument is invoked once all items have been processed or any error has occurred; that is where we can send our response to the browser.

    app.get("/was/init", function (req, res) {
      // load all records from windows azure sql database
      sql.open(connectionString, function (err, conn) {
        if (err) {
          console.log(err);
          res.send(500, "Cannot open connection.");
        }
        else {
          conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
            if (err) {
              console.log(err);
              res.send(500, "Cannot retrieve records.");
            }
            else {
              if (results.rows.length > 0) {
                // begin to transform the records into table service
                // recreate the table named 'resource'
                client.deleteTable(tableName, function (error) {
                  client.createTableIfNotExists(tableName, function (error) {
                    if (error) {
                      error["target"] = "createTableIfNotExists";
                      res.send(500, error);
                    }
                    else {
                      async.forEach(results.rows,
                        // transform the records
                        function (row, callback) {
                          var entity = {
                            "PartitionKey": row[1],
                            "RowKey": row[0],
                            "Value": row[2]
                          };
                          client.insertEntity(tableName, entity, function (error) {
                            if (error) {
                              callback(error);
                            }
                            else {
                              console.log("entity inserted.");
                              callback(null);
                            }
                          });
                        },
                        // send response
                        function (error) {
                          if (error) {
                            error["target"] = "insertEntity";
                            res.send(500, error);
                          }
                          else {
                            console.log("all done");
                            res.send(200, "All done!");
                          }
                        }
                      );
                    }
                  });
                });
              }
            }
          });
        }
      });
    });

Run it locally and now we can see the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client, providing the partition key and row key. We can also provide more complex query criteria (a sketch of that follows after this example). In the code below I queried an entity by partition key and row key, and returned the proper localization value in the response.

    app.get("/was/:key/:culture", function (req, res) {
      var key = req.params.key;
      var culture = req.params.culture;
      client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
          res.send(500, error);
        }
        else {
          res.json(entity);
        }
      });
    });

And then I tested it on the local emulator.
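As an aside, here is a hedged sketch of what such a criteria-based query could look like with this generation of the azure module. The TableQuery builder shown is the one documented for the SDK of that era, but treat the exact method names and filter syntax as assumptions to verify against the SDK docs; the partition value 'en-US' is made up for illustration.

    // hypothetical example: fetch every entity in one partition (one culture)
    var query = azure.TableQuery
      .select()
      .from(tableName)
      .where('PartitionKey eq ?', 'en-US');

    client.queryEntities(query, function (error, entities) {
      if (error) {
        console.log(error);
      }
      else {
        // entities is an array of the matching rows
        console.log("%d entities found", entities.length);
      }
    });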
Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code. But if you have played with WACS you should know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and the runtime will retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings and endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve the settings and initialize the table client.

    var connectionString = "";
    var storageAccountName = "";
    var storageAccountKey = "";
    var tableName = "";
    var client;

    azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
      if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
      }
      else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
      }
    });

In this way we don't need to amend the code for the configuration differences between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen on the port retrieved from the SDK as well.

    azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
      if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
      }
      else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
          var endpoint = instance["endpoints"]["nodejs"];
          app.listen(endpoint["port"]);
        }
        else {
          app.listen(8080);
        }
      }
    });

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point is the worker role class and Node.js runs inside the role, the named pipe is established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime, and then add an element named EntryPoint which specifies the Node.js command line (a sketch of this section follows below). The Node.js application will then have the connection to the service runtime and be able to read the configuration. Start the Node.js application in the local emulator and we can see it retrieved the connection string and storage account for the local environment.
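For reference, a minimal sketch of what that Runtime/EntryPoint section in ServiceDefinition.csdef might look like. The Runtime, EntryPoint and ProgramEntryPoint elements are the standard service definition schema, but the role name "WorkerRole1" and the script name "server.js" are placeholders, not taken from the original project.

    <WorkerRole name="WorkerRole1">
      <Runtime>
        <EntryPoint>
          <!-- launch node.exe directly so the named pipe to the service
               runtime is established for the Node.js process itself -->
          <ProgramEntryPoint commandLine="node.exe server.js" setReadyOnProcessStart="true" />
        </EntryPoint>
      </Runtime>
      <!-- endpoints, configuration settings, etc. go elsewhere in the role definition -->
    </WorkerRole>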
And if we publish our application to azure, it works with WASD and the storage service through the cloud configuration.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and endpoint information, and that in order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file with "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host a Node.js application, and how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple to learn and deploy, and it uses a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" does not yet support SQL parameters, and "azure" does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps, Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Macs don't connect to wifi access point but PCs will

    - by Josh
So, as a side project I'm going to try and figure out why the wifi APs in my building exhibit the following behavior:

    - They typically allow all types of computers to connect without issues
    - Sometimes Apples can't get an IP address but will still connect to the AP's signal
    - Less often, PCs can't connect to the wifi (same as above: yes signal, no IP addy)
    - Don't let Raiders fans on no matter the time of day!

My first thought was that the DHCP leases were all taken up when the Apples would try to connect, and it was just their unlucky timing, but then I would try to log on with a PC that had a new, unleased MAC address and it would work... Could this be something to do with interoperability between an Apple wifi card and the APs? Different parts of the DHCP lease being taken up first? The fact that the Seattle Mariners might actually be good this year??

If this hasn't used up everyone's patience (with my crappy sports jokes), something else I could use some help with: we don't have the model or type of AP. This is because there is no documentation available for them, and they literally look like small white boxes with no writing on them. Also, the company that installed them is out of business, so the situation might be that no docs will ever be on the way. Do you guys have any ideas on how to figure out what we have?

Thanks as always for all the help, and I'm looking forward to the day when I know enough to start contributing back to the site, Josh


  • WP7 “Phantom Data” Source Possibly Revealed?

    - by Bil Simser
Recently there have been rumours floating around regarding "phantom" Windows Phone 7 data being magically sent and received on the latest WP7 phones. The news has mostly been floating around twitter so I didn't pay it much attention. The BBC Technology News picked it up, so I thought I would look into it myself, seeing that we have WP7 phones and maybe there was some truth to all this (and more importantly what was the cause).

Full disclosure: I don't have a lot of data points around this. This is from looking at a few phone logs, changing the configuration and looking back again after the change. I haven't done a clean baseline test nor have I done testing with hundreds of phones. I leave the experience up to the reader to decide.

So I went spelunking into the phone logs to see what was up. Most providers will show you data usage, at least on a daily basis. I lucked out with the provider and plan in that they provide hourly breakdowns. Here's a snapshot from my usage throughout one night.

    Timestamp     Data Usage
    12:38:30 AM   2098 Kilobytes
    1:30:30 AM       2 Kilobytes
    2:38:30 AM    7118 Kilobytes
    3:38:30 AM    6622 Kilobytes
    4:38:30 AM      76 Kilobytes
    5:38:30 AM      29 Kilobytes
    6:38:30 AM      19 Kilobytes
    7:38:30 AM      20 Kilobytes

So a few observations from this data:

    - Data seems to be collected on a regular basis. Looking at some other people's phone logs, the times vary but it's always hourly.
    - There's not a tremendous amount of data here (about 16 megabytes) but it seems like a lot for 7 hours.
    - The phone was connected to my home Wifi during this period.
    - Nothing was running and the phone was in a locked state.

Like I said, not a lot of data but it adds up. 16MB for 7 hours = about 50MB in a 24 hour period. That's just plain old data being collected (somewhere, somehow) and not actual usage (Marketplace, Email, Browsing, etc.). Besides, when connected to a WiFi network you shouldn't be charged data usage by your phone company (in theory, YMMV).

After reviewing the logs I formed a theory that the only thing that could possibly be sending data is the Feedback feature. With no other apps running under lock, what else could it be? In Windows Phone 7, the last option under your Settings is Feedback. This sends feedback to Microsoft to "help improve Windows Phone". On this page you have three options:

    - Send feedback and use my cellular data connection
    - Send feedback and (presumably) use my WiFi connection
    - Don't send feedback

Knowing what I know about Microsoft, they do use the feedback data. For example, some of the placement and inclusion of features in Office 2007 was based on the feedback data that Office sends (assuming you had opted in). However, in the Privacy Statement (it's long but a good read at least once in your life), the phone manual, and every other source I could look at, there is no information about how much data it's planning to send, just that it's sending some data and that "some data charges with your carrier may apply".

Looking back at the logs, I have to wonder. 6MB at 3:30 and *then* 7MB the next hour. That's a lot of information. And it adds up. 50MB in a 24 hour period x 30 days puts most people over a normal 1GB plan. And frankly, why am I paying for a data plan only to have 80% of it chewed up by Microsoft, with no real benefit to me. If they included porn in the 50mb daily transfer I'd be okay with this, but I don't see any new movies on my phone. So I turned it off. Set Feedback to disabled and waited. I waited. And waited. And generally didn't use the phone if I could.
The next day I went back to look at the data usage logs from the time period after turning the feedback mechanism off. Here are the results.

    Timestamp     Data Usage
    1:19:48 PM       0 Kilobytes
    2:19:48 PM       0 Kilobytes
    3:19:48 PM       0 Kilobytes
    4:19:48 PM     678 Kilobytes (took a phone call)
    5:19:48 PM      82 Kilobytes
    6:19:48 PM      88 Kilobytes
    7:20:30 PM      86 Kilobytes (guess they changed their reporting time)
    8:20:30 PM      86 Kilobytes
    9:20:30 PM      66 Kilobytes
    10:20:30 PM     67 Kilobytes
    11:20:30 PM     49 Kilobytes
    12:20:30 AM     32 Kilobytes
    1:20:30 AM      38 Kilobytes
    2:20:31 AM      18 Kilobytes
    3:20:31 AM      27 Kilobytes
    4:20:31 AM      86 Kilobytes
    5:20:31 AM      53 Kilobytes
    6:20:31 AM      22 Kilobytes
    7:22:15 AM      30 Kilobytes (another reporting time change)
    8:22:15 AM      29 Kilobytes
    9:22:15 AM      74 Kilobytes
    10:22:15 AM    154 Kilobytes (phone call)
    11:22:15 AM     12 Kilobytes
    12:13:27 PM     49 Kilobytes
    1:13:27 PM     197 Kilobytes (phone call)

Quite a *drastic* change from when Feedback was turned on. For a 24 hour period (sans 3 phone calls) I consumed about 1MB. Still quite a bit of transfer going on, but at least it amounts to 30MB per month, not 30MB per day! Like I said, this observation is neither scientific nor conclusive. You decide what to do, but frankly, until Microsoft makes this data transfer exempt from your data plan (like that will happen) I would just turn Feedback off. YMMV.


  • Blank screen after installing nvidia restricted driver

    - by LaMinifalda
I have a new machine with an MSI N560GTX Ti Twin Frozr II/OC graphics card and an MSI PH67A-C43 (B3) main board. If I install the current nvidia restricted driver and reboot the machine on Natty (64-bit), I only get a black screen after reboot and my system does not respond. I can't see the login screen. On the nvidia web page I saw that the current driver is 270.41.06. Is that the driver currently in use? Btw, I am an ubuntu/linux beginner and therefore not very familiar with ubuntu. What can I do to solve the black screen problem?

EDIT: Setting the nomodeset parameter does not solve the problem. After ubuntu starts, first I see the ubuntu logo, then strange pixels, and at the end the black screen. HELP!

EDIT2: Thank you, but setting the "video=vesa:off gfxpayload=text" parameters does not solve the problem either. Same result as in the last edit. HELP. I would like to see Unity. This is my grub:

    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="video=vesa:off gfxpayload=text nomodeset quiet splash"
    GRUB_CMDLINE_LINUX=" vga=794"

EDIT3: I don't know if it is important. If this edit is unnecessary and unhelpful I will delete it. There are some log files (Xorg.0.log - Xorg.4.log). I don't know how these log files relate to each other. Please check the errors listed below.

In Xorg.1.log I see the following error:

    [ 20.603] (EE) Failed to initialize GLX extension (Compatible NVIDIA X driver not found)

In Xorg.2.log I see the following error:

    [ 25.971] (II) Loading /usr/lib/xorg/modules/libfb.so
    [ 25.971] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
    [ 25.971] (==) NVIDIA(0): RGB weight 888
    [ 25.971] (==) NVIDIA(0): Default visual is TrueColor
    [ 25.971] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
    [ 26.077] (EE) NVIDIA(0): Failed to initialize the NVIDIA GPU at PCI:1:0:0. Please
    [ 26.078] (EE) NVIDIA(0):     check your system's kernel log for additional error
    [ 26.078] (EE) NVIDIA(0):     messages and refer to Chapter 8: Common Problems in the
    [ 26.078] (EE) NVIDIA(0):     README for additional information.
    [ 26.078] (EE) NVIDIA(0): Failed to initialize the NVIDIA graphics device!
    [ 26.078] (II) UnloadModule: "nvidia"
    [ 26.078] (II) Unloading nvidia
    [ 26.078] (II) UnloadModule: "wfb"
    [ 26.078] (II) Unloading wfb
    [ 26.078] (II) UnloadModule: "fb"
    [ 26.078] (II) Unloading fb
    [ 26.078] (EE) Screen(s) found, but none have a usable configuration.
    [ 26.078] Fatal server error:
    [ 26.078] no screens found
    [ 26.078] Please consult the The X.Org Found [...]

In Xorg.4.log I see the following errors:

    [ 15.437] (**) NVIDIA(0): Depth 24, (--) framebuffer bpp 32
    [ 15.437] (==) NVIDIA(0): RGB weight 888
    [ 15.437] (==) NVIDIA(0): Default visual is TrueColor
    [ 15.437] (==) NVIDIA(0): Using gamma correction (1.0, 1.0, 1.0)
    [ 15.703] (II) NVIDIA(0): NVIDIA GPU GeForce GTX 560 Ti (GF114) at PCI:1:0:0 (GPU-0)
    [ 15.703] (--) NVIDIA(0): Memory: 1048576 kBytes
    [ 15.703] (--) NVIDIA(0): VideoBIOS: 70.24.11.00.00
    [ 15.703] (II) NVIDIA(0): Detected PCI Express Link width: 16X
    [ 15.703] (--) NVIDIA(0): Interlaced video modes are supported on this GPU
    [ 15.703] (--) NVIDIA(0): Connected display device(s) on GeForce GTX 560 Ti at
    [ 15.703] (--) NVIDIA(0):     PCI:1:0:0
    [ 15.703] (--) NVIDIA(0):     none
    [ 15.706] (EE) NVIDIA(0): No display devices found for this X screen.
    [ 15.943] (II) UnloadModule: "nvidia"
    [ 15.943] (II) Unloading nvidia
    [ 15.943] (II) UnloadModule: "wfb"
    [ 15.943] (II) Unloading wfb
    [ 15.943] (II) UnloadModule: "fb"
    [ 15.943] (II) Unloading fb
    [ 15.943] (EE) Screen(s) found, but none have a usable configuration.
    [ 15.943] Fatal server error:
    [ 15.943] no screens found

EDIT4: There was a file /etc/X11/xorg.conf. As fossfreedom suggested I executed

    sudo mv /etc/X11/xorg.conf /etc/X11/xorg.conf.backup

However, there is still the black screen after reboot.

EDIT5: Neutro's advice (reinstalling the headers) did not solve the problem either. :-( Any further help is appreciated!

EDIT6: I just installed driver 173.xxx. After reboot the system shows me only "Checking battery state". Just for information. I will google the problem, but help is also appreciated! ;-)

EDIT7: When using the free driver (Ubuntu says that the free driver is in use and activated), Xorg.0.log shows the following errors:

    [ 9.267] (II) LoadModule: "nouveau"
    [ 9.267] (II) Loading /usr/lib/xorg/modules/drivers/nouveau_drv.so
    [ 9.267] (II) Module nouveau: vendor="X.Org Foundation"
    [ 9.267]     compiled for 1.10.0, module version = 0.0.16
    [ 9.267]     Module class: X.Org Video Driver
    [ 9.267]     ABI class: X.Org Video Driver, version 10.0
    [ 9.267] (II) LoadModule: "nv"
    [ 9.267] (WW) Warning, couldn't open module nv
    [ 9.267] (II) UnloadModule: "nv"
    [ 9.267] (II) Unloading nv
    [ 9.267] (EE) Failed to load module "nv" (module does not exist, 0)
    [ 9.267] (II) LoadModule: "vesa"
    [...]
    [ 9.399] drmOpenDevice: node name is /dev/dri/card14
    [ 9.402] drmOpenDevice: node name is /dev/dri/card15
    [ 9.406] (EE) [drm] failed to open device
    [ 9.406] (II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
    [ 9.406] (WW) Falling back to old probe method for fbdev
    [ 9.406] (II) Loading sub module "fbdevhw"
    [ 9.406] (II) LoadModule: "fbdevhw"

EDIT8: In the meanwhile I tried to install WIN7 64 bit on my machine. As a result I got a BSOD after installing the nvidia driver. :-) For this reason I sent my new machine back to the hardware reseller. I will inform you as soon as I have a new system. Thank you all for the great help and support.

EDIT9: In the meanwhile I have a completely new system with "only" an MSI N460GTX Hawk, but more RAM. The system works perfectly. :-) The original N560GTX had a hardware defect. Is it possible to close this question? THX!


  • How can I set my screen resolution to match my TV?

    - by Scott Severance
I have a computer in my classroom that's connected to an LG smart TV (that's actually not so smart; I wouldn't recommend buying one). For the touch interface, the TV wants a resolution of 1920x1080 at 60Hz. However, I can't seem to set the computer to that resolution. The display settings only offer 1024x768 and 640x480. The computer dual boots with Windows XP, where widescreen options are available in approximately the required size, but the exact resolution (or even aspect ratio) isn't available in XP either. I tried the following command:

    xrandr -s 1920x1080 -r 60

The response was:

    Size 1920x1080 not found in available modes

(A commonly suggested workaround for this is sketched at the end of this question.) Back in the old days, the solution would be to edit xorg.conf. However, since that file no longer exists by default, and I haven't found up-to-date info, I don't know what else to do. If it helps, this machine will never be connected to a different display, so resolution flexibility isn't important. Here's the output of lshw:

    *-display:0
         description: VGA compatible controller
         product: 4 Series Chipset Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2
         bus info: pci@0000:00:02.0
         version: 03
         width: 64 bits
         clock: 33MHz
         capabilities: vga_controller bus_master cap_list rom
         configuration: driver=i915 latency=0
         resources: irq:42 memory:fe800000-febfffff memory:d0000000-dfffffff ioport:ecd8(size=8)
    *-display:1 UNCLAIMED
         description: Display controller
         product: 4 Series Chipset Integrated Graphics Controller
         vendor: Intel Corporation
         physical id: 2.1
         bus info: pci@0000:00:02.1
         version: 03
         width: 64 bits
         clock: 33MHz

According to the system settings, my graphics driver is unknown and my "experience" is standard. This is 64-bit Ubuntu 12.04 (Precise).

Note: There are a number of similar questions to this one, but they didn't include any answers that helped me.

Update

After posting this question, I noticed one in the sidebar that I hadn't found through search but which appeared to contain the answer. Based on that question, I created the /etc/X11/xorg.conf file below:

    Section "ServerLayout"
        Identifier     "X.org Configured"
        Screen      0  "Screen0" 0 0
        InputDevice    "Mouse0" "CorePointer"
        InputDevice    "Keyboard0" "CoreKeyboard"
    EndSection

    Section "Files"
        ModulePath   "/usr/lib/xorg/modules"
        FontPath     "/usr/share/fonts/X11/misc"
        FontPath     "/var/lib/defoma/x-ttcidfont-conf.d/dirs/TrueType"
        FontPath     "built-ins"
    EndSection

    Section "Module"
        Load  "glx"
        Load  "dri2"
        Load  "dbe"
        Load  "dri"
        Load  "record"
        Load  "extmod"
    EndSection

    Section "InputDevice"
        Identifier  "Keyboard0"
        Driver      "kbd"
    EndSection

    Section "InputDevice"
        Identifier  "Mouse0"
        Driver      "mouse"
        Option      "Protocol" "auto"
        Option      "Device" "/dev/input/mice"
        Option      "ZAxisMapping" "4 5 6 7"
    EndSection

    Section "Monitor"
        Identifier   "Monitor0"
        VendorName   "LG"
        ModelName    "Smart TV"
    EndSection

    Section "Device"
        ### Available Driver options are:-
        ### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
        ### <string>: "String", <freq>: "<f> Hz/kHz/MHz",
        ### <percent>: "<f>%"
        ### [arg]: arg optional
        #Option     "DRI"                # [<bool>]
        #Option     "ColorKey"           # <i>
        #Option     "VideoKey"           # <i>
        #Option     "FallbackDebug"      # [<bool>]
        #Option     "Tiling"             # [<bool>]
        #Option     "LinearFramebuffer"  # [<bool>]
        #Option     "Shadow"             # [<bool>]
        #Option     "SwapbuffersWait"    # [<bool>]
        #Option     "TripleBuffer"       # [<bool>]
        #Option     "XvMC"               # [<bool>]
        #Option     "XvPreferOverlay"    # [<bool>]
        #Option     "DebugFlushBatches"  # [<bool>]
        #Option     "DebugFlushCaches"   # [<bool>]
        #Option     "DebugWait"          # [<bool>]
        #Option     "HotPlug"            # [<bool>]
        #Option     "RelaxedFencing"     # [<bool>]
        Identifier  "Card0"
        Driver      "intel"
        BusID       "PCI:0:2:0"
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device     "Card0"
        Monitor    "Monitor0"
        DefaultDepth 24
        #SubSection "Display"
        #    Viewport 0 0
        #    Depth 1
        #EndSubSection
        #SubSection "Display"
        #    Viewport 0 0
        #    Depth 4
        #EndSubSection
        #SubSection "Display"
        #    Viewport 0 0
        #    Depth 8
        #EndSubSection
        #SubSection "Display"
        #    Viewport 0 0
        #    Depth 15
        #EndSubSection
        #SubSection "Display"
        #    Viewport 0 0
        #    Depth 16
        #EndSubSection
        SubSection "Display"
            Viewport 0 0
            Depth 24
            Modes "1024x768" "1920x1080"
        EndSubSection
    EndSection

According to /var/log/Xorg.0.log, my settings aren't being applied. In fact, I wonder if the config file is even being read.

    [ 1209.083] (**) intel(0): Depth 24, (--) framebuffer bpp 32
    [ 1209.084] (==) intel(0): RGB weight 888
    [ 1209.084] (==) intel(0): Default visual is TrueColor
    [ 1209.084] (II) intel(0): Integrated Graphics Chipset: Intel(R) G41
    [ 1209.084] (--) intel(0): Chipset: "G41"
    [ 1209.084] (**) intel(0): Relaxed fencing enabled
    [ 1209.084] (**) intel(0): Wait on SwapBuffers? enabled
    [ 1209.084] (**) intel(0): Triple buffering? enabled
    [ 1209.084] (**) intel(0): Framebuffer tiled
    [ 1209.084] (**) intel(0): Pixmaps tiled
    [ 1209.084] (**) intel(0): 3D buffers tiled
    [ 1209.084] (**) intel(0): SwapBuffers wait enabled
    [ 1209.084] (==) intel(0): video overlay key set to 0x101fe
    [ 1209.172] (II) intel(0): Output VGA1 using monitor section Monitor0
    [ 1209.260] (II) intel(0): EDID for output VGA1
    [ 1209.260] (II) intel(0): Printing probed modes for output VGA1
    [ 1209.260] (II) intel(0): Modeline "1024x768"x60.0   65.00  1024 1048 1184 1344  768 771 777 806 -hsync -vsync (48.4 kHz)
    [ 1209.260] (II) intel(0): Modeline "800x600"x60.3   40.00  800 840 968 1056  600 601 605 628 +hsync +vsync (37.9 kHz)
    [ 1209.260] (II) intel(0): Modeline "800x600"x56.2   36.00  800 824 896 1024  600 601 603 625 +hsync +vsync (35.2 kHz)
    [ 1209.260] (II) intel(0): Modeline "848x480"x60.0   33.75  848 864 976 1088  480 486 494 517 +hsync +vsync (31.0 kHz)
    [ 1209.260] (II) intel(0): Modeline "640x480"x59.9   25.18  640 656 752 800  480 489 492 525 -hsync -vsync (31.5 kHz)
    [ 1209.260] (II) intel(0): Output VGA1 connected
    [ 1209.260] (II) intel(0): Using user preference for initial modes
    [ 1209.260] (II) intel(0): Output VGA1 using initial mode 1024x768
    [ 1209.260] (II) intel(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated.
    [ 1209.260] (II) intel(0): Kernel page flipping support detected, enabling
    [ 1209.260] (==) intel(0): DPI set to (96, 96)
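Editorial note: a commonly suggested workaround in this situation is to register the missing mode with xrandr by hand. This is a hedged sketch, not a verified fix: the output name VGA1 is taken from the log above, and the modeline shown is the standard CVT output for 1920x1080 at 60 Hz, which cvt will print for you (your numbers may differ).

    # generate a modeline for 1920x1080 @ 60 Hz
    cvt 1920 1080 60
    # register the new mode with X (paste the modeline that cvt printed)
    xrandr --newmode "1920x1080" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
    # attach the mode to the output and switch to it
    xrandr --addmode VGA1 1920x1080
    xrandr --output VGA1 --mode 1920x1080

If the mode works, the same Modeline can be made permanent in the Monitor section of xorg.conf.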


  • How can I increase resolution when impossible to set up nvidia driver?

    - by Asimov4
I was stuck at 1024x768 before creating an /etc/X11/xorg.conf file as follows, which works well to allow me a resolution of 1920x1200 with the vesa driver.

    Section "Device"
        Identifier "Configured Video Device"
        Driver     "vesa"
    EndSection

    Section "Monitor"
        Identifier  "Configured Monitor"
        HorizSync   24.0 - 83.0
        VertRefresh 50.0 - 75.0
    EndSection

    Section "Screen"
        Identifier "Default Screen"
        Monitor    "Configured Monitor"
        Device     "Configured Video Device"
        SubSection "Display"
            Viewport 0 0
            Depth    24
            Modes    "1920x1200"
        EndSubSection
    EndSection

But now my display is kind of slow. I have an NVIDIA GeForce 8600GT but really cannot find out how to set up the driver to work. Are there any alternatives to VESA that would work faster?
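Editorial note: one hedged option worth trying (assuming the open-source nouveau driver is present, which it is by default on recent Ubuntu releases) is to point the Device section at nouveau instead of vesa; nouveau supports the GeForce 8600GT and provides hardware acceleration:

    Section "Device"
        Identifier "Configured Video Device"
        Driver     "nouveau"   # hardware-accelerated open-source driver for NVIDIA cards
    EndSection

The proprietary driver is the other route; on Ubuntu of that era it was typically installed through the Additional Drivers tool or with "sudo apt-get install nvidia-current".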


  • Why would VMWare to go defunct? How to recover from/prevent it?

    - by Josh
I am running VMWare Server 2.0.2 (Build 203138) on a dual core Intel i5 with an Ubuntu Server 10.04 LTS system (kernel 2.6.32-22-server #33-Ubuntu SMP). The disk subsystem is a software RAID5 array. The system has been set up for a little over a week. For the past 5 days I have been running at least 3 VMs (Linux and a variety of Windows OSes) with no issues whatsoever. But while I was installing Linux onto a new VM, suddenly all VMs became unresponsive, including the one I was installing to. I could not log in to the VMWare Management Interface, and the system was somewhat unresponsive via SSH. When I looked at top, I saw:

    top - 16:14:51 up 6 days, 1:49, 8 users, load average: 24.29, 24.33, 17.54
    Tasks: 203 total, 7 running, 195 sleeping, 0 stopped, 1 zombie
    Cpu(s): 0.2%us, 25.6%sy, 0.0%ni, 74.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 8056656k total, 5927580k used, 2129076k free, 20320k buffers
    Swap: 7811064k total, 240216k used, 7570848k free, 5045884k cached

      PID USER PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
    21549 root 39 19     0    0    0 Z  100  0.0  15:02.44 [vmware-vmx] <defunct>
     2115 root 20  0     0    0    0 S    1  0.0 170:32.08 [vmware-rtc]
     2231 root 21  1 1494m 126m 100m S    1  1.6 892:58.05 /usr/lib/vmware/bin/vmware-vmx -# product=2;
     2280 jnet 20  0 19320 1164  800 R    0  0.0  30:04.55 top
    12236 root 20  0  833m  41m  34m S    0  0.5  88:34.24 /usr/lib/vmware/bin/vmware-vmx -# product=2;
        1 root 20  0 23704 1476  920 S    0  0.0   0:00.80 /sbin/init
        2 root 20  0     0    0    0 S    0  0.0   0:00.01 [kthreadd]
        3 root RT  0     0    0    0 S    0  0.0   0:00.00 [migration/0]
        4 root 20  0     0    0    0 S    0  0.0   0:00.84 [ksoftirqd/0]
        5 root RT  0     0    0    0 S    0  0.0   0:00.00 [watchdog/0]
        6 root RT  0     0    0    0 S    0  0.0   0:00.00 [migration/1]

The VMWare process for the virtual machine I was installing into became a zombie. Yet it was still consuming 100% of the CPU time on one of the cores, and I couldn't reach it or any other virtual machines. (I was logged in to one virtual machine over SSH, another via X11, and a third via VNC. All three connections died.) When I ran ps -ef and similar commands, I found that the defunct vmware-vmx process had its parent PID set to init (1). I also used lsof -p 21549 and found that the defunct process had no open files. Yet it was using 100% of CPU time... I was unable to kill any vmware-vmx processes, including the defunct one, even with kill -9. As a last resort I tried to reboot the box; however shutdown, halt, reboot, and init 6 all failed to reboot or shut down, even when given the appropriate --force settings. Ctrl-Alt-Del produced a message about rebooting on the console, but the system would not reboot. I had to hard power-cycle the box to resolve the situation. (See my other question, Should I worry about the integrity of my linux software RAID5 after a crash or kernel panic?)

What would cause a scenario like this? What else could I have done to resolve it besides a hard reboot? What can I do to prevent such a situation in the future?


  • Why can't I use __getattr__ with Django models?

    - by Joshmaker
I've seen examples online of people using __getattr__ with Django models, but whenever I try I get errors. (Django 1.2.3.) I don't have any problems when I am using __getattr__ on normal objects. For example:

    class Post(object):
        def __getattr__(self, name):
            return 42

Works just fine...

    >>> from blog.models import Post
    >>> p = Post()
    >>> p.random
    42

Now when I try it with a Django model:

    from django.db import models

    class Post(models.Model):
        def __getattr__(self, name):
            return 42

And test it on the interpreter:

    >>> from blog.models import Post
    >>> p = Post()
    ERROR: An unexpected error occurred while tokenizing input
    The following traceback may be corrupted or invalid
    The error message is: ('EOF in multi-line statement', (6, 0))
    ---------------------------------------------------------------------------
    TypeError                                 Traceback (most recent call last)

    /Users/josh/project/ in ()

    /Users/josh/project/lib/python2.6/site-packages/django/db/models/base.pyc in __init__(self, *args, **kwargs)
        338         if kwargs:
        339             raise TypeError("'%s' is an invalid keyword argument for this function" % kwargs.keys()[0])
    --> 340         signals.post_init.send(sender=self.__class__, instance=self)
        341
        342     def __repr__(self):

    /Users/josh/project/lib/python2.6/site-packages/django/dispatch/dispatcher.pyc in send(self, sender, **named)
        160
        161         for receiver in self._live_receivers(_make_id(sender)):
    --> 162             response = receiver(signal=self, sender=sender, **named)
        163             responses.append((receiver, response))
        164         return responses

    /Users/josh/project/python2.6/site-packages/photologue/models.pyc in add_methods(sender, instance, signal, *args, **kwargs)
        728     """
        729     if hasattr(instance, 'add_accessor_methods'):
    --> 730         instance.add_accessor_methods()
        731
        732 # connect the add_accessor_methods function to the post_init signal

    TypeError: 'int' object is not callable

Can someone explain what is going on?

EDIT: I may have been too abstract in the examples; here is some code that is closer to what I would actually use on the website:

    class Post(models.Model):
        title = models.CharField(max_length=255)
        slug = models.SlugField()
        date_published = models.DateTimeField()
        content = RichTextField('Content', blank=True, null=True)
        # Etc...

    class CuratedPost(models.Model):
        post = models.ForeignKey('Post')
        position = models.PositiveSmallIntegerField()

        def __getattr__(self, name):
            '''If the user tries to access a property of the CuratedPost,
            return the property of the Post instead...'''
            # note: the original posting had "return self.post.name", which looks
            # up the literal attribute "name" rather than the requested one
            return getattr(self.post, name)
        # Etc...

While I could create a property for each attribute of the Post class, that would lead to a lot of code duplication. Furthermore, that would mean that any time I add or edit an attribute of the Post class I would have to remember to make the same change to the CuratedPost class, which seems like a recipe for code rot.
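Editorial note: a hedged sketch of a __getattr__ that avoids the failure in the traceback above. The photologue signal receiver calls hasattr(instance, 'add_accessor_methods') during post_init, and an unconditional return value makes hasattr succeed and hands the receiver a non-callable. Raising AttributeError for names you do not intend to proxy keeps hasattr honest; the guard list here is an assumption to adapt to your own models.

    from django.db import models

    class CuratedPost(models.Model):
        post = models.ForeignKey('Post')
        position = models.PositiveSmallIntegerField()

        def __getattr__(self, name):
            # never proxy private/dunder names or the FK field itself;
            # Django and signal receivers probe attributes with hasattr()
            # while the instance is still being initialized
            if name.startswith('_') or name == 'post':
                raise AttributeError(name)
            try:
                return getattr(self.post, name)
            except Exception:
                # anything that goes wrong must surface as AttributeError
                # so hasattr() checks return False instead of exploding
                raise AttributeError(name)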


  • mySQL auto increment problem: Duplicate entry '4294967295' for key 1

    - by Josh
I have a table of emails. The last record in there with an auto increment id is 3780, which is a legit record. Any new record I insert now goes in right there. However, in my logs I occasionally see:

    Query FAIL: INSERT INTO mail.messages (timestamp_queue) VALUES (:time);
    Array
    (
        [0] => 23000
        [1] => 1062
        [2] => Duplicate entry '4294967295' for key 1
    )

Somehow, the auto increment jumped up to the INT max of 4294967295. Why on god's green earth would this get jumped up so high? I have no inserts with an id field. SHOW TABLE STATUS for that table now reads Auto_increment: 4294967296. How could something like this occur? I realize the id field should perhaps be a BIGINT, but the worry I have is that somehow this thing jumps back up.

Josh
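Editorial note: 4294967295 is the ceiling of an unsigned INT, so once the counter reaches it every subsequent insert collides on the same id. A hedged sketch of the usual recovery (the id column name is assumed; verify against the actual schema, and AUTO_INCREMENT can only be set above the current maximum id, which is 3780 here):

    -- put the counter back just above the last legitimate row
    ALTER TABLE mail.messages AUTO_INCREMENT = 3781;

    -- optionally widen the id column so a runaway insert can't hit the INT ceiling
    ALTER TABLE mail.messages MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

One common cause is a single insert that explicitly supplied a huge or negative id (a negative value wraps to the maximum on an unsigned column); that bumps the counter permanently even if the offending row is later deleted.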


  • PHP/MySQL database connection priority?

    - by Josh
Hello, I have a production database that stores usage statistics for a service we're working on. I use PHP to periodically roll up interesting statistics at different resolutions (day, week, month, year) into buckets dictated by the resolution. The PHP application I've written "completes" its data when it's run, such that it will calculate all the rolled-up statistics for the resolutions and periods since it was last run. This is useful if we want to turn it off to debug database performance issues, because I can turn it back on and have it complete its data set. The problem I have is that the calculations are fairly intensive and drive up the QPS of the production database server. Is there a way to set a "priority" on a particular database connection so that it will only use "off-cycles" to do these calculations? Maybe the proper response would be to replicate the tables I'm working on into a different stats database, but unfortunately I don't have the resources in place to attempt such a thing (yet). Thanks in advance for any help, Josh
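Editorial note: stock MySQL has no true per-connection priority knob, but two common workarounds are LOW_PRIORITY writes (honored on MyISAM tables, where the write waits until no readers are active) and throttling the rollup loop so it yields between batches. A hedged sketch, with table and column names invented for illustration:

    <?php
    // hypothetical rollup loop: process in small batches and sleep between
    // them so the production workload keeps getting its share of the server
    $pdo = new PDO('mysql:host=localhost;dbname=stats', 'user', 'pass');

    $offset = 0;
    $batch  = 500;
    do {
        $rows = $pdo->query(
            "SELECT day, SUM(hits) AS hits FROM raw_events " .
            "GROUP BY day LIMIT $batch OFFSET $offset"
        )->fetchAll(PDO::FETCH_ASSOC);

        foreach ($rows as $row) {
            // LOW_PRIORITY makes the write wait for active readers (MyISAM only)
            $stmt = $pdo->prepare(
                "INSERT LOW_PRIORITY INTO daily_rollup (day, hits) VALUES (?, ?)"
            );
            $stmt->execute(array($row['day'], $row['hits']));
        }

        $offset += $batch;
        usleep(200000); // yield 200 ms between batches
    } while (count($rows) === $batch);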


  • PHP - ADO recordsets

    - by Josh
I am referencing a COM component from PHP. This COM component returns records using ADO. I am assuming I will need to reference ADO in PHP for this to function. How do I do this? Secondly (related to the first question), I have run across ADODB abstraction libraries; however, these seem to mostly deal with queries and handle the ADO internally. How do I get the returned ADO recordset into a PHP-friendly array, and likewise pass an ADO array in to the COM object? Thank you, Josh
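Editorial note: a hedged sketch of how this typically looks with PHP's COM extension. The component name and its GetRecords method are invented for illustration; the recordset members used (EOF, MoveNext, Fields, Name, Value) are the standard ADO COM API.

    <?php
    // instantiate the vendor's COM component (ProgID is hypothetical)
    $component = new COM("Vendor.DataService");
    $rs = $component->GetRecords();   // assumed to return an ADODB.Recordset

    // walk the recordset into a plain PHP array
    $rows = array();
    while (!$rs->EOF) {
        $row = array();
        for ($i = 0; $i < $rs->Fields->Count; $i++) {
            $field = $rs->Fields->Item($i);
            $row[$field->Name] = $field->Value;
        }
        $rows[] = $row;
        $rs->MoveNext();
    }
    $rs->Close();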


  • ORM in the realworld

    - by josh
I am beginning a new project that I think will last for some years. I am at the point of deciding which ORM framework to use (or whether to use one at all). Can anyone with experience tell me whether ORM frameworks are used in real-world applications? The problem I have in mind is this: the ORM tool will generate tables and columns, etc. for me as I create and modify my entities. However, after the project has gone live and is in production, certain database changes will not be possible. Can this hinder the advancement of the project? If I had used a framework like iBATIS, for example, I know I would only need to adjust the SQL statements based on the database changes. Can someone tell me whether ORM tools have survived the live environment? At my office, we use a Java-based ERP that was built long ago, and it was never done using any ORM framework. Regards, Josh


  • ExpressionEngine: {exp:query} producing error

    - by Josh Brown
Hi. I have this code in place to pull some related data from the database based on the ID of another weblog entry. The relationship custom field would not work for my situation. I have tested the SQL with MySQL and get no errors, but I get an error when I pull up the page.

    {exp:query sql="SELECT entry_id,field_id_16,field_id_19 FROM exp_weblog_data WHERE field_id_15='2' AND weblog_id='6'"}

Also, I want to populate "field_id_15" from the URL. I have tried using the "segment_*" tag, but nothing. Maybe it is something else. Any help is greatly appreciated. Thanks, Josh
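Editorial note: for the URL part, ExpressionEngine exposes URL segments as {segment_1}, {segment_2}, ... and the Query module parses them inside the sql parameter. A hedged sketch, assuming the ID sits in the third URL segment:

    {exp:query sql="SELECT entry_id, field_id_16, field_id_19
                    FROM exp_weblog_data
                    WHERE field_id_15 = '{segment_3}' AND weblog_id = '6'"}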

