Search Results

Search found 7529 results on 302 pages for 'replace'.


  • How to force a clock update using ntp?

    - by ysap
    I am running Ubuntu on an ARM based embedded system that lacks a battery backed RTC. The wake-up time is somewhere during 1970. Thus, I use the NTP service to update the time to the current time. I added the following line to /etc/rc.local file: sudo ntpdate -s time.nist.gov However, after startup, it still takes a couple of minutes until the time is updated, during which period I cannot work effectively with tar and make. How can I force a clock update at any given time? UPDATE 1: The following (thanks to Eric and Stephan) works fine from command line, but fails to update the clock when put in /etc/rc.local: $ date ; sudo service ntp stop ; sudo ntpdate -s time.nist.gov ; sudo service ntp start ; date Thu Jan 1 00:00:58 UTC 1970 * Stopping NTP server ntpd [ OK ] * Starting NTP server [ OK ] Thu Feb 14 18:52:21 UTC 2013 What am I doing wrong? UPDATE 2: I tried following the few suggestions that came in response to the 1st update, but nothing seems to actually do the job as required. Here's what I tried: Replace the server to us.pool.ntp.org Use explicit paths to the programs Remove the ntp service altogether and leave just sudo ntpdate ... in rc.local Remove the sudo from the above command in rc.local Using the above, the machine still starts at 1970. However, when doing this from command line once logged in (via ssh), the clock gets updated as soon as I invoke ntpdate. Last thing I did was to remove that from rc.local and place a call to ntpdate in my .bashrc file. This does update the clock as expected, and I get the true current time once the command prompt is available. However, this means that if the machine is turned on and no user is logged in, then the time never gets updates. I can, of course, reinstall the ntp service so at least the clock is updated within a few minutes from startup, but then we're back at square 1. So, is there a reason why placing the ntpdate command in rc.local does not perform the required task, while doing so in .bashrc works fine?

    Read the article

  • Changing Silverlight application themes at runtime

    We have received a lot of questions about how the application theme can be changed at run time. The most important thing to note here is that each time the application theme is changed, all the controls should be re-drawn. Without going into too much detail, we could describe application themes as a mechanism to replace the content of the Generic.xaml file in every loaded Telerik assembly at runtime. This does not affect the controls that already have a default style applied, hence the need to create new instances. Because the RootVisual of a Silverlight application cannot be changed at run time, we need a way to reset the application UI. The following code is in App.xaml.cs:
        private void Application_Startup(object sender, StartupEventArgs e)
        {
            // Before:
            // this.RootVisual = new MainPage();
            this.RootVisual = new Grid();
            this.ResetRootVisual();
        }
        public void ResetRootVisual()
        {
            var rootVisual = Application.Current.RootVisual as Grid;
            rootVisual.Children.Clear();
            rootVisual.Children.Add(new MainPage());
        }
    In Application_Startup(), instead of creating a new MainPage UserControl instance as the RootVisual, we create a new Grid panel that will contain the MainPage UserControl. In the ResetRootVisual() method we create the instance of MainPage and add it to the RootVisual panel. Then we have to create a method in the code-behind which will set StyleManager.ApplicationTheme and then call the ResetRootVisual() method:
        private void ChangeApplicationTheme(Theme theme)
        {
            StyleManager.ApplicationTheme = theme;
            (Application.Current as App).ResetRootVisual();
        }
    Here you can find an example which illustrates the described implementation of a Silverlight theme. For more information please refer to Telerik's online demos for Silverlight, the demos for WPF, and the help documentation for WPF and for Silverlight.

    Read the article

  • Building SANE from git source produces backend mismatch on 12.04 even if built locally

    - by deinonychusaur
    It seems to me that with Ubuntu Precise Pangolin it is anything but easy to do a proper install of SANE from source (the git repo). While searching for an answer to this I've found other scanning issues where the output people posted seems to indicate they suffer from the same problem (unknowingly). If I run the following on a fresh install of Ubuntu 12.04 with SANE compiled from git, I get: $ scanimage -V scanimage (sane-backends) 1.0.24git; backend version 1.0.22 (I basically followed the instructions on http://ubuntuportal.com/2012/02/how-to-get-an-canon-canoscan-lide-100-scanner-to-work-in-ubuntu-11-10linux-mint-12.html since I didn't find any other information, making sure that SANE was not installed prior to installation.) My primary interest is the epson2 backend. In 1.0.22 it offers the wrong TPU settings for the Epson V700 (TPU2 mode wasn't supported in 1.0.22, and the scanner is useless to me if I don't have TPU2 support). Since it shows 1.0.22 behaviour when I ask it to enter transparency mode, this implies that the epson2 backend comes from 1.0.22 and not 1.0.24, even though I just built it. If I install SANE with a prefix pointing to a local folder and run that version of scanimage, it still produces the mismatch. However, on another computer where I installed a custom 1.0.22 build of SANE prior to upgrading to Ubuntu 12.04, I can build and install the same SANE git locally and have it correctly match backends: $ ./SANE/bin/scanimage -V scanimage (sane-backends) 1.0.24git; backend version 1.0.24 $ scanimage -V scanimage (sane-backends) 1.0.22; backend version 1.0.22 On this computer 1.0.24 works correctly in finding TPU2 on the Epson V700. So what am I missing/doing wrong? (And I want to replace 1.0.22 with 1.0.24 for the whole system; the local build was just for debugging.) Any help would be much appreciated. Edit 1: I just tried compiling SANE using this instruction on Ubuntu 10.04 and it worked like a charm. However, when I upgraded to 12.04 (I really would like to run 12.04), SANE was downgraded to 1.0.22. When trying the same set of instructions on 12.04 I was still out of luck -- the backend mismatch was there again (and I do have libusb-dev installed). Edit 2: I updated to Ubuntu 12.10, which now has the 1.0.23 SANE drivers. I haven't dared trying to compile from source on 12.10 since 1.0.23 is good enough for me. This is just a work-around and I would still like to know what's up with Ubuntu 12.04.

    Read the article

  • How to overcome or fix the MIXED_DML_OPERATION error in Salesforce Apex without a future method?

    - by sathya
    MIXED_DML_OPERATION is one of the worst issues we have ever faced :) You will hit this error when you try to perform DML on a setup object and a non-setup object in a single action. The following are the solutions I tried, and the final one worked out:
    1. Perform the first object's DML in a normal Apex method, then call the second object's DML through a future method. Drawback: you can't get a response from the future method, because it executes asynchronously in a different context and has to be static.
    2. I tried the following option but it didn't work: perform the DML operation in the normal Apex method, then call the second DML from a trigger, thinking that it would run in a different context. It didn't work.
    3. Some blogs suggested that we could try System.runAs(). Unfortunately that works only in a test class.
    4. Finally I achieved it, with a synchronous response, through the following solution:
    a. Create two apex:commandButton elements:
        <apex:commandButton value="Save and Send Activation Email" action="{!CreateContact}" rerender="junkpanel" oncomplete="callSimulateUserSave()"/>
        Note: oncomplete will not fire if you don't have a rerender attribute, so just rerender a junk panel.
        <apex:commandButton value="SimulateUserSave" id="SimulateUserSave" action="{!SaveUser}" style="display:none;margin-left:5px;"/>
        Have a junk panel as well, just for rerendering:
        <apex:outputPanel id="junkpanel"></apex:outputPanel>
    b. Create this JavaScript function, which is called from the first button's oncomplete and clicks the second button:
        function callSimulateUserSave()
        {
            // Specify the id of the button that needs to be clicked. This id is based on the Apex component hierarchy.
            // You will not get this value if the button to be clicked from JavaScript has no id attribute.
            // If you have any doubt about getting this value, just hover over the button with the Chrome developer tools.
            // It will show something like theForm:SimulateUserSave, but you need to replace the colon with a dot here.
            // Note: display:none in the style of the second button makes sure it is not visible to the user.
            var mybtn = document.getElementById('{!$Component.theForm.SimulateUserSave}');
            mybtn.click();
        }
    c. The Apex methods CreateContact and SaveUser are the PageReference methods which contain the code to create the contact and the user respectively. After inserting the user inside the second Apex method you can set some public properties on the page, for example the created user id, to show an acknowledgement to the users that the user has been created.

    Read the article

  • Traditional POS is Dead

    - by David Dorf
    Traditional POS is dead -- I've heard that one before. Here's an excerpt from Joe Skorupa's blog over at RIS where he relayed ten trends that were presented at NRF. "7. Mobile POS signals death of traditional POS. Shoppers don't love self-checkout, but they prefer it to long queues or dealing with associates. Fixed POS is expensive and bulky. Mobile POS frees floor space for other purposes and converts associates from being cashiers to being sales assistants that provide new levels of customer service and incremental basket sales. In addition to unplugging the POS, new alternatives are starting to take hold - thin client, POS as a service, and replacing POS software with e-commerce platforms." I'll grant that in some situations for some retailers there might be an opportunity to ditch the traditional POS, but for the majority of retailers that's just not practical. Take it from a guy that had to wake up at 3am after every Thanksgiving to monitor POS systems across the US on Black Friday. If a retailer's website goes down on Black Friday, they will take a significant hit. If a retailer's chain-wide POS system goes down on Black Friday, that retailer will cease to exist. Mobile POS works great for Apple because the majority of purchases are one or two big-ticket items that don't involve cash. There's still a traditional POS in every store to fall back on (it's just hidden). Try this at home: choose your favorite e-commerce site and add an item to the cart while timing how long it takes. Now multiply that by 15 to represent the 15 items you might buy at a store like Target. The user interface isn't optimized for bulk purchases, and that's how it should be. The webstore and POS are designed for different purposes. Self-checkout is a great addition to POS and so is mobile checkout. But they add capabilities to POS, not replace it. Centralized architectures, even those based in the cloud, are quite viable as long as there's resiliency in the registers. You cannot assume perfect access to the network, so a POS must always be able to sell regardless of connectivity. Clearly the different selling channels should be sharing common functionality. Things like calculating tax, accepting coupons, and processing electronic payments can be shared, usually through a service-oriented architecture. This lowers costs and provides greater consistency, both of which help retailers. On paper these technologies look really good and we should continue to push boundaries, but I'm not ready to call the patient dead just yet.

    Read the article

  • I cannot remove/install software: some dependency issue?

    - by Ryuzaki
    I'm having difficulty trying to install/uninstall applications in Ubuntu 12.04 LTS. I have this warning error that appears on my desktop in the form of a red icon with a line through it and it states: An error occurred, please run Package Manager from the right-click menu or apt-get in a terminal to see what is wrong. The error message was: 'Error: BrokenCount 0' — This usually means that your installed packages have unmet dependencies. I tried to repair automatically through ubuntu software center and I kept getting errors or it just didn't seem to work. Afterwards, I opened terminal and used sudo apt-get check command to see what the problem was and the results yielded: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: libjack-jackd2-0 : Breaks: libjack-jackd2-0:i386 (!= 1.9.8~dfsg.1-1ubuntu1) but 1.9.8~dfsg.2-1precise1 is installed libjack-jackd2-0:i386 : Breaks: libjack-jackd2-0 (!= 1.9.8~dfsg.2-1precise1) but 1.9.8~dfsg.1-1ubuntu1 is installed E: Unmet dependencies. Try using -f. After I ran sudo apt-get -f install (in an attempt to fix the issue at hand), I encountered the following errors/code: okudaira@haru-kano:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: libjack-jackd2-0 Suggested packages: jackd2 The following packages will be upgraded: libjack-jackd2-0 1 upgraded, 0 newly installed, 0 to remove and 24 not upgraded. 1 not fully installed or removed. Need to get 0 B/197 kB of archives. After this operation, 3,072 B of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 181702 files and directories currently installed.) Preparing to replace libjack-jackd2-0 1.9.8~dfsg.1-1ubuntu1 (using .../libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb) ... Unpacking replacement libjack-jackd2-0 ... dpkg: error processing /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb (--unpack): './usr/share/doc/libjack-jackd2-0/buildinfo.gz' is different from the same file on the system dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libjack-jackd2-0_1.9.8~dfsg.2-1precise1_amd64.deb localepurge: Disk space freed in /usr/share/locale: 0 KiB localepurge: Disk space freed in /usr/share/man: 0 KiB localepurge: Disk space freed in /usr/share/gnome/help: 0 KiB localepurge: Disk space freed in /usr/share/omf: 0 KiB Total disk space freed by localepurge: 0 KiB E: Sub-process /usr/bin/dpkg returned an error code (1) The real problem I'm having is interpreting what is being stated. I'm fairly new to the ubuntu experience so I'm not very well-versed with the terminology and the entire in's and out's. Can someone tell me what's wrong with my system? I can no longer install/remove any programs at all.

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off the shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to only migrate some of the widgets from the legacy system, based on date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human readable id that is searchable, and a brief description (may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here so please forgive me. The current solution - which is not in production yet - is something like the following: Both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire. 15 search results typically run about 60k of unintelligible SOAP-wrapped xml. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the following alternative: However I am being told that 'adding another database' will create more maintenance down the road. However, I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code. I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system. I want something more than my gut. Specifically I'd like to know if having more, specialized physical databases is more or less maintainable than having less larger physical databases.

    Read the article

  • Setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year. Hardware specifications A relative brought up a newer computer soon afterward: Operating system: Windows XP Asus K8N motherboard (with broken on-board sound; getting a sound card) Athlon 64? processor (don't remember the clock speed) 512 MB RAM Hope the graphics card works... Replacement sound card will be one of: Ensoniq ES1370 AudioPCI Diamond Monster Sound MX300 (Aureal chipset) Sound Blaster Audigy 2 SE Peripherals HP Scanjet 3400c scanner (USB connected) HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver) Same serial mouse as old computer Question I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows: A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken? Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP? Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95. Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)? Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire. Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage. BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Chrome Countdown Extension [migrated]

    - by Mike Saffold
    I have modified this countdown script to countdown to 4:20pm everyday. I have attempted to create a Google Chrome app that displays the countdown. The javascript is supposed replace a paragraph tag with id of "note" with the time left. It works when I load the page in chrome, but does not work when I load the extension. Example, if I put: <p id="note">asdf</a> I get just the text, "asdf", but when I open the html file I get the countdown. Here is the manifest.json file: { "name": "My First Extension", "version": "1.0", "manifest_version": 2, "description": "The first extension that I made.", "browser_action": { "default_icon": "icon.png", "default_popup": "popup.html" } } Here is the popup.html code: <html> <head> <title>4:20PM Countdown</title> <!-- Our CSS stylesheet file --> <link rel="stylesheet" href="http://fonts.googleapis.com/css?family=Open+Sans+Condensed:300" /> <link rel="stylesheet" href="http://treesmoke.com/cd/assets/css/styles.css" /> <link rel="stylesheet" href="http://treesmoke.com/cd/assets/countdown/jquery.countdown.css" /> </head> <body> <p id="note">asdf</p> <!-- JavaScript includes --> <script type="text/javascript" src="http://code.jquery.com/jquery-1.7.1.min.js"></script> <script type="text/javascript" src="http://treesmoke.com/cd/assets/countdown/jquery.countdown.js"></script> <script type="text/javascript" src="http://treesmoke.com/cd/assets/js/script.js"></script> </body> </html> Here's the popup.html page, showing that the script works. Thanks guys, it isn't that big of a deal if I can't get it to work. I was just bored and decided to learn a little.

    Read the article

  • LINQ to Twitter v2.1.09 Released

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/10/15/linq-to-twitter-v2.1.09-released.aspxToday, I released LINQ to Twitter v2.1.09. Here are important new changes. Bug Fixes This is primarily a bug fix release. Most notably, there were authentication problems in WinRT apps. This is now fixed. New Features One new feature is the addition of ApplicationOnlyAuthentication for WinRT. It is fully async.  Here’s how it works: var auth = new WinRtApplicationOnlyAuthorizer { Credentials = new InMemoryCredentials { ConsumerKey = "", ConsumerSecret = "" } }; if (auth == null || !auth.IsAuthorized) { await auth.AuthorizeAsync(); } var twitterCtx = new TwitterContext(auth); (from search in twitterCtx.Search where search.Type == SearchType.Search && search.Query == SearchTextBox.Text select search) .MaterializedAsyncCallback( async response => await Dispatcher.RunAsync( CoreDispatcherPriority.Normal, async () => { Search searchResponse = response.State.Single(); string message = string.Format( "Search returned {0} statuses", searchResponse.Statuses.Count); await new MessageDialog(message, "Search Complete").ShowAsync(); })); It’s called the WinRtApplicationOnlyAuthorizer. You only need two tokens, ConsumerKey and ConsumerSecret, which come from your Twitter API application settings page. Note: You need a Twitter Application, which you can create at https://dev.twitter.com/. The MaterializedAsyncCallback materializes your query and handles the response. I put everything together in a lambda for demonstration purposes, but you can always replace the callback with a handler of type Action<TwitterAsyncResponse<IEnumerable<T>>>, where T is Search for this example. On the Horizon The next version of LINQ to Twitter is in development. I discussed it at LINQ to Twitter Async. This isn’t complete, but you can download the source code at the LINQ to Twitter site on CodePlex. I’ve competed all the spikes for what I thought would be the hard parts and now have prototypes of queries and commands working. This would be a good time to provide feedback if there are features in the current version that you think could be improved. The current driving forces for the next version will be async and PCL.   @JoeMayo

    Read the article

  • Changing jobs and leaving a project without a leader (aka, me)

    - by AnonUntilAfterTheEvent
    I'm the lead on a project that has been underway for about a year and a half. Two of us have been working on it. One is the database guy. I'm the javascript/ui guy. Which is to say, essentially no overlap in code knowledge. Here's the thing. Someone is about to offer me a sweet job with a nearly 30% bump in pay. Though I am perfectly happy with my current job and love the project, the new one would be better and I can't imagine saying no. The big problem is that my project is supposed to go into production starting in a few weeks. I will consider the new guys to have disqualified the new job by being bad people who would ruin my life if they won't cooperate and let me start after deployment. Since they seem like decent, ethical people, I don't expect that to be a problem. The current project will be brutalized by my absence. I take some comfort in the fact that I have emphatically requested an understudy for at least six months. That puts a little of the responsibility on the boss's head, but still, it's going to be a really bad thing. What do others of you do when you are a critical to a project when it's time to move on? Do I owe any obligation to stick around even though something better shows up? I know my spouse would object if I found someone else. Does that apply to work? I do have an understudy now, though he's fresh out of college. He's not going to replace me anytime soon. It's a small shop and the boss is going to be crushed. I am traumatized in anticipation of telling him and feel guilty about the practical consequences. I'm looking for some solace and some strategy about how to deal with this transition. Thank you for listening. =========================Subsequent notes ========================= @ChaosPandion, Chance: No, I can't stay to finish the project. I will insist on a compromise where I finish the current sprint (about a month from now) but there is at least a half year, probably a year of solid, full-time, work still to be done. I wouldn't expect the new employer to hold the job that long.

    Read the article

  • More useful SQL Server Service Broker Queries

    - by ChrisD
    SELECT 'Checking Broker Service Status...' IF (select Top 1 is_broker_enabled from sys.databases where name = 'NWMESSAGE')=1     SELECT ' Broker Service IS Enabled'  -- Should return a 1. ELSE     SELECT '** Broker Service IS DISABLED ***' /* If Is_Broker_enabled returns 0, uncomment and run this code ALTER DATABASE NWMESSAGE SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO Alter Database NWMESSAGE Set enable_broker GO ALTER DATABASE NWDataChannel SET MULTI_USER GO */ SELECT 'Checking For Disabled Queues....' -- ensure the queues are enabled --  0 indicates the queue is disabled. Select '** Receive Queue Disabled: '+name from sys.service_queues where is_receive_enabled = 0 --select [name], is_receive_enabled from sys.service_queues; /*If the queue is disabled, to enable it alter queue QUEUENAME with status=on; – replace QUEUENAME with the name of your queue */ -- Get General information about the queues --select * from sys.service_queues -- Get the message counts in each queue SELECT 'Checking Message Count for each Queue...' select q.name, p.rows from sys.objects as o join sys.partitions as p on p.object_id = o.object_id join sys.objects as q on o.parent_object_id = q.object_id join sys.service_queues sq on sq.name = q.name where p.index_id = 1 -- Ensure all the queue activiation sprocs are present SELECT 'Checking for Activation Stored Procedures....' SELECT  '** Missing Procedure:  '+q.name  From sys.service_queues q Where NOT Exists(Select * from sysobjects where xtype='p' and name='activation_'+q.name) and q.activation_procedure is not null DECLARE @sprocs Table (Name Varchar(2000)) Insert into @sprocs Values ('Echo') Insert into @sprocs Values ('HTTP_POST') Insert into @sprocs Values ('InitializeRecipients') Insert into @sprocs Values ('sp_EnableRecipient') Insert into @sprocs Values ('sp_ProcessReceivedMessage') Insert into @sprocs Values ('sp_SendXmlMessage') SELECT 'Checking for required stored procedures...' SELECT  '** Missing Procedure:  '+s.name  From @sprocs s Where NOT Exists(Select * from sysobjects where xtype='p' and name=s.name) GO -- Check the services Select 'Checking Recipient Message Services...' Select '** Missing Message Service:' + r.RecipientName +'MessageService' From Recipient r Where not exists (Select * from sys.services s where  s.name  COLLATE SQL_Latin1_General_CP1_CI_AS= r.RecipientName+'MessageService') DECLARE @svcs Table (Name Varchar(2000)) Insert into @svcs Values ('XmlMessageSendingService') SELECT  '** Missing Service:  '+s.name  From @svcs s Where NOT Exists(Select * from sys.services where name=s.name COLLATE SQL_Latin1_General_CP1_CI_AS) GO /*** To Test a message send Run: sp_SendXmlMessage  'TSQLTEST', 'CommerceEngine','<Root><Text>Test</Text></Root>' */ Select CAST(message_body as XML) as xml, * From XmlMessageSendingQueue /*** clean out all queues declare @handle uniqueidentifier declare conv cursor for   select conversation_handle from sys.conversation_endpoints open conv fetch next from conv into @handle while @@FETCH_STATUS = 0 Begin    END Conversation @handle with cleanup    fetch next from conv into @handle End close conv deallocate conv ***********************

    Read the article

  • Desperately Need Help: After a mishap, a folder shows 0 files in it

    - by bobby
    I'm hoping some of you guys may be able to shed some light on this scenario: I was working on an odt document, one of many files in one of many folders on an internal hard drive. Some kind of glitch occurred and the document crashed (this could have been some kind of power surge while another hard drive was being unmounted). As I looked into the folders surrounding the folder in which my odt document was stored, they started to show 0 files in them. I immediately switched off the PC and then re-started. Upon the re-start the folders would show the 1,000s of files I've stored in them, and then within 5 minutes, as I started to back them up, they would freeze and cut off the transfer. When I tried to open anything on the internal hard drive, be it an avi film, an mp3, a cbr or a word doc, they all showed blank or wouldn't work. Some folders had vastly fewer files showing. Eventually, things calmed down. I closed the PC, checked that the connections were in firmly, gave it a vacuum and restarted the PC. All the files eventually showed up and I started to back them up (which I'd bought a hard drive for anyway but had been distracted and not done). All folders show their contents except the one which contained the document I was working on at the time of the trouble. Strangely, it is one that had shown itself as full on several occasions during the restarts. It shows zero files now. Properties shows zero files and zero space taken by it. Yet when I drop a file into this folder by pasting it in, it disappears too. Opening the folder, there is nothing there. But if I paste that document again, the PC asks whether I would like to replace the existing file with the same name (which I can't see); when I click yes, the file appears. When I exit, the folder shows 0 files again. Going back into the folder, it has disappeared again. I'm hoping that someone can give me tips to recover the files in the folder; it would be greatly, greatly appreciated. All other films, music, comics and documents show and are fine!

    Read the article

  • 3 Monitors, Ubuntu 12.04, Gnome 3, 2 nvidia cards, WITH xrandr or xinerama?

    - by Josh
    Ok. I have been banging my head against the wall for over a week now trying to get 3 monitors to work. I have: 1 - Nvidia 8600 GT 512MB PCIEx16 1 - Nvidia GT 240 1GB PCIEx16 They are not running in SLI (obviously). I have tried to use everything from tutorials to a few templates, all the way up to nvidia-settings, etc etc etc.. From what I hear, Xinerama doesnt like gnome 3 because of the compositing, although I have read a lot about using Xrandr instead, and getting the compositing working, but alas, I cannot. It always either crashes X and I have to replace the xorg.conf with my backup, or it defaults to the gnome-classic desktop, and on top of that, when it does default, it keeps adding more and more panels. Basically, I want to be able to use all 3 monitors (yes, just like in windows) to drag and drop from different windows. I have xorg-edit, but I am still not too sure on how to set this up? Is there any way to: A Get compositing working with 3 monitors, 2 nvidia cards, xinerama, and gnome 3? or BUse twinview with 3 monitors (I have heard it can be done by manually editing xorg.conf) or CSet up Xrandr to draw all 3 monitors with compositing. or DUse separate X for each monitor, and be able to use gnome with compositing, as well as drag between all 3 or EANYTHING. lol. I just want this to work. Any help you can provide would be greatly appreciated. BTW, I am running an ubuntu mini install with gnome. Everything works great but this. I can run it fine with 2 monitors and compositing, but not with 3. Also, what is the best GUI tool for editing xorg.conf? Im not finding anything that is up to date at all, and also is understandable by humans. haha. Im actually an engineer by trade, and have been working with computers a LONG time, but this xorg.conf stuff is really confusing the crap out of me. lol Thanks!

    Read the article

  • vJUG: Worldwide Virtual JUG Created

    - by Tori Wieldt
    London Java Community leader and technical evangelist Simon Maple has created a Meetup called vJUG, with the aim of connecting Java developers in the virtual world. The aims for vJUG are to: get technical leaders from around the world to present to the vJUG members (without travel cost concerns!); work with local JUGs to provide worldwide content to their members and help JUGs present to a worldwide audience; provide content to devs without access to a local JUG; and be a hub that will stream content from other JUG sessions live. The vJUG is not intended to replace local JUG efforts. "The vJUG can never be, and will never be, as vibrant and valuable to its members as a proper local JUG can. Why? Because the true value in JUG meetings is the face to face interactions and personal networking," said Maple. "However, many people do not have access to a really active JUG with great speakers and awesome content. Or, like me, the closest JUG is about 90 mins away." WebEx and Google Hangouts are great, Maple explained, but he hopes vJUG will provide more coordination of online events. Maple hopes that in the future vJUG will provide: an events calendar with reminders and links to upcoming meetings; a newsletter with what's coming up and links to previous sessions; coordination of links to IRC channels which are active during presentations (to create a feeling of virtual community); comments and forums around sessions and presentations; a place where physical JUGs could advertise their sessions (i.e. a NY JUG event) to a worldwide audience, when streamed, via an event that people can sign up to; and a common WebEx or Hangout. Maple encourages both people who need a JUG and existing JUG members to join vJUG. "I'm looking forward to talking with many of you soon to get members, speakers, and JUG support!" Join vJUG now! (I sense a need for a logo...)

    Read the article

  • Boot error aftter clean Ubuntu 13.04 install: [Reboot and select proper boot device]

    - by IcarusNM
    I am having the same problem as this guy, where a fresh Ubuntu install completes beautifully but will not boot. I get the ASUS (?) "Reboot and select proper boot device" error, first with Xubuntu 13.10 and, after finally giving up there and on Xubuntu 13.04, I am back to regular Ubuntu 13.04. ASUS Z77 motherboard, Intel chipset. Standard internal SATA 500GB HD. 64-bit. All hardware is new, less than 3 months old. It was running Ubuntu 12.04 LTS great until I tried this upgrade. I have re-installed from scratch every which way: with LVM, without LVM; with the default partitions, with my own partitions. With ext3 or ext4. Alongside; replace; upgrade. No difference. On the last two tries, I have booted afterward from the same USB stick, downloaded and run boot-repair, and now I guess I am off to the boot-repair support email with my URLs from that. It did all kinds of cool stuff but ultimately made no difference. I never got anything like this with Ubuntu 12.04. I've now probably re-installed Ubuntu 13.04 ten times in slightly different ways. I finally found how to skip the language packs, so at least that sped things up! :) This starts from the ubuntu-13.04-desktop-amd64.iso and UNetbootin, as suggested in the official instructions for USB thumb drive creation from OSX. That part all works fine (booting the USB on the PC and trying Ubuntu and/or installing from there on the PC HD). After running boot-repair twice, in the ASUS BIOS I now see three different UEFI boot options in the priority list, and they are all labeled exactly the same: ubuntu (P6: WDC WD5000AAKX-00U6AA0) Then there's a non-UEFI option: P6: WDC WD5000AAKX-00U6AA0 (476940MB) And a fifth option appeared after the first boot-repair: Windows Boot Manager (P6: WDC WD5000AAKX-00U6AA0) I have tried all 5 of these, and I get exactly the same error. I have never had Windows installed on this HD. ASUS is calling it Windows Boot Manager, but I presume that's a mistaken label for whatever boot-repair did. I can boot from USB and run GParted and it looks great. The partitions all look normal. I found another case of this online with no solution posted. I can't find much about it online. Does it need a Master Boot Record wipe/redo? I'm not sure how.

    Read the article

  • Building Enterprise Smartphone App – Part 3: Key Concerns

    - by Tim Murphy
    This is part 3 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group.  Feel free to leave feedback. Keys Concerns Of Smartphones In The Enterprise These are the factors that you need to be aware of and address in order to build successful enterprise smartphone applications.  Most of them have nothing to do with the application itself as you will see here. Managing Devices Managing devices is a factor that is going to effect how much your company will have to spend outside of developing the applications.  How will you track the devices within the corporation?  How often will you have to replace phones and as a consequence have to upgrade your applications to support new phones?  The devices can represent a significant investment of capital.  If these questions are not addressed you will find a number of hidden costs throughout the life of your solution. Purchase or BYOD We have seen the trend of Bring Your Own Device (BYOD) lately within the enterprise.  How many meetings have you been in where someone is on their personal iPad, iPhone, Android phone or Windows Phone?  The issue is if you can afford to support everyone's choice in device? That is a lot to take on even if you only support the current release of each platform. Do you go with the most popular device or do you pick a platform that best matches your current ecosystem and distribute company owned devices?  There is no easy answer here, but you should be able give some dollar value to both hardware and development costs related to platform coverage. Asset Tracking/Insurance Smartphones are devices that are easier to lose or have stolen than laptops and desktops. Not only do you have your normal asset management concerns but also assignment of financial responsibility. You also will need to insure them against damage and theft and add legal documents that spell out the responsibilities of the employees that use these devices. Personal vs. Corporate Data What happens when you terminate an employee?  How do you recover the device?  What happens when they have put personal data on the device?  These are all situation that can cause possible loss of corporate intellectual property or legal repercussions of reclaiming a device with personal data on it.  Policies need to be put in place that protect the company from being exposed to type of loss.  This can mean significant legal and procedural cost that you need to consider. Coming Up In the last installment of this series I will cover application development considerations. del.icio.us Tags: Smartphones,Enterprise Smartphone Apps,Architecture

    Read the article

  • Threads slowing down application and not working properly

    - by Belgin
    I'm making a software renderer which does per-polygon rasterization using a floating point digital differential analyzer algorithm. My idea was to create two threads for rasterization and have them work like so: one thread draws each even scanline in a polygon and the other thread draws each odd scanline, and they both start working at the same time, but the main application waits for both of them to finish and then pauses them before continuing with other computations. As this is the first time I'm making a threaded application, I'm not sure if the following method for thread synchronization is correct: First of all, I use two global variables to control the two threads, if a global variable is set to 1, that means the thread can start working, otherwise it must not work. This is checked by the thread running an infinite loop and if it detects that the global variable has changed its value, it does its job and then sets the variable back to 0 again. The main program also uses an empty while to check when both variables become 0 after setting them to 1. Second, each thread is assigned a global structure which contains information about the triangle that is about to be rasterized. The structures are filled in by the main program before setting the global variables to 1. My dilemma is that, while this process works under some conditions, it slows down the program considerably, and also it fails to run properly when compiled for Release in Visual Studio, or when compiled with any sort of -O optimization with gcc (i.e. nothing on screen, even SEGFAULTs). The program isn't much faster by default without threads, which you can see for yourself by commenting out the #define THREADS directive, but if I apply optimizations, it becomes much faster (especially with gcc -Ofast -march=native). N.B. It might not compile with gcc because of fscanf_s calls, but you can replace those with the usual fscanf, if you wish to use gcc. Because there is a lot of code, too much for here or pastebin, I created a git repository where you can view it. My questions are: Why does adding these two threads slow down my application? Why doesn't it work when compiling for Release or with optimizations? Can I speed up the application with threads? If so, how? Thanks in advance.
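    One note on the synchronization scheme described above: spinning in an infinite loop on plain global flag variables keeps both cores busy even when there is no work, and if the flags are ordinary non-volatile globals the optimizer is allowed to hoist the check out of the loop, which is a plausible reason the Release/-O builds misbehave while the unoptimized build limps along. A blocking hand-off is usually both faster and correct. The sketch below is not the renderer's actual C code; it is a minimal illustration of the pattern in Java using CyclicBarrier, and the same shape applies with pthreads barriers or std::condition_variable in C/C++. The names renderFrame and rasterizeScanlines are placeholders for the real per-frame and per-scanline work.
        import java.util.concurrent.BrokenBarrierException;
        import java.util.concurrent.CyclicBarrier;

        public class ScanlineWorkers {
            // The main thread plus two workers meet at these barriers, so nobody busy-waits.
            private final CyclicBarrier frameStart = new CyclicBarrier(3);
            private final CyclicBarrier frameEnd = new CyclicBarrier(3);

            public void start() {
                new Thread(() -> loop(0)).start(); // even scanlines
                new Thread(() -> loop(1)).start(); // odd scanlines
            }

            private void loop(int parity) {
                try {
                    while (true) {                // shutdown handling omitted for brevity
                        frameStart.await();       // block until main has filled the triangle data
                        rasterizeScanlines(parity);
                        frameEnd.await();         // report completion, then wait for the next frame
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }

            // Called by the main thread after the shared triangle structures are filled in.
            public void renderFrame() throws InterruptedException, BrokenBarrierException {
                frameStart.await(); // release both workers
                frameEnd.await();   // wait until both have finished before continuing
            }

            private void rasterizeScanlines(int parity) {
                // draw every second scanline, starting at 'parity'
            }
        }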

    Read the article

  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/, however I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of: my 64-bit development PC running Windows 8 Update with 8Gb RAM; Visual Studio 2013 Ultimate with SP2; ReSharper; GhostDoc Pro. My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. So, being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the name of the C5 collection item that was being tested. So what was the condition of the code at the start? Besides the sprawl of C# code not written to StyleCop standard, there was: 1) placing of many classes within one physical file; 2) namespaces within namespaces that did not follow the project structure; 3) as already mentioned, duplication of class names across namespaces; 4) a copyright notice that sprawled but had to be preserved; 5) project sub-folders that were all lower case instead of having the initial letter capitalised. The first step was to add a StyleCop header, plus the original header contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, but not letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file, collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git. The next step was to move each class to its own file and to StyleCop each file. ReSharper provides a very useful feature for doing this which also fixes missing "this." and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed. When all was done, one issue remained which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.

    Read the article

  • Wi-Fi triangulation using android smartphone

    - by user1887020
    How do I make an application for Wi-Fi triangulation on the Android platform? This project will be implemented inside a building. No GPS is needed, just Wi-Fi, doing triangulation to get the current position of the user inside the building. I have a minimum of 3 access points to implement it. But how do I start coding this in Android and integrate the triangulation into the Android code? I have the algorithm to do it, but is there any chance that I can get it done? This project actually aims to replace the floor directory board with a smartphone floor directory, so that users can find their way to a room, for example to the lab.
        public class Triangulation {
            public Triangulation() {
                int dist_1, dist_2, dist_3;   // the three measured distances
                int x1, x2, x3;               // x coordinates of the access points
                int y1, y2, y3;               // y coordinates of the access points
                int final_dist1, final_dist2; // right-hand sides of the linearised equations
                dist_1 = 1;
                dist_2 = 2;
                dist_3 = 3;
                x1 = 5; // test inputs
                x2 = 2;
                x3 = 4;
                y1 = 2;
                y2 = 2;
                y3 = 5;
                final_dist1 = ((dist_1 * dist_1) - (dist_2 * dist_2) - (x1 * x1) + (x2 * x2) - (y1 * y1) + (y2 * y2)) / 2;
                final_dist2 = ((dist_2 * dist_2) - (dist_3 * dist_3) - (x2 * x2) + (x3 * x3) - (y2 * y2) + (y3 * y3)) / 2;
                int initial_a1 = x1 - x2; // declarations for these coefficients were missing in the original
                int initial_a2 = x2 - x3;
                int initial_b1 = y1 - y2;
                int initial_b2 = y2 - y3;
                // -------- STEP 1 (note: integer division truncates; double would be safer) --------
                int a1 = initial_a1 / initial_a1;
                int a2 = initial_a2 / initial_a1;
                int b1 = initial_b1 / initial_a1;
                int b2 = initial_b2 / initial_a1;
                final_dist1 /= initial_a1;
                final_dist2 /= initial_a1;
                // -------- STEP 2 --------
                a2 = a2 - a2;
                final_dist2 = -(initial_a2) * final_dist1 + final_dist2;
                // -------- STEP 3 --------
                a2 /= b2;
                final_dist2 = final_dist2 / b2;
                b2 /= b2;
                // -------- STEP 4 --------
                b1 = b1 - b1;
                final_dist1 = -(initial_b1) * final_dist2 + final_dist1;
            }
        }
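    As a starting point for the Android side, the sketch below shows one possible way to read the signal strength of nearby access points with the platform WifiManager API and to turn each RSSI reading into a rough distance estimate using a free-space path-loss approximation; those three distances are what the Triangulation class above expects. This is only a sketch under stated assumptions: the class and method names are made up for illustration, the 27.55 constant is the textbook free-space value and will be optimistic indoors, and real deployments normally calibrate per building (or use fingerprinting instead of pure trilateration).
        import java.util.List;

        import android.content.Context;
        import android.net.wifi.ScanResult;
        import android.net.wifi.WifiManager;
        import android.util.Log;

        public class WifiRanging {

            // Rough free-space estimate of distance in metres from RSSI (dBm) and frequency (MHz).
            public static double distanceMetres(int rssiDbm, int frequencyMhz) {
                double exponent = (27.55 - 20.0 * Math.log10(frequencyMhz) + Math.abs(rssiDbm)) / 20.0;
                return Math.pow(10.0, exponent);
            }

            // Logs a distance estimate for every access point currently visible.
            // Requires the ACCESS_WIFI_STATE permission (and, on newer Android versions,
            // a granted location permission) before getScanResults() returns real data.
            public static void logDistances(Context context) {
                WifiManager wifi = (WifiManager) context.getSystemService(Context.WIFI_SERVICE);
                List<ScanResult> results = wifi.getScanResults();
                for (ScanResult ap : results) {
                    double metres = distanceMetres(ap.level, ap.frequency);
                    Log.d("WifiRanging", ap.BSSID + " ~ " + metres + " m");
                }
                // Next step: pick the three known access points by BSSID, look up their (x, y)
                // positions on the floor plan, and feed the three distances into the
                // triangulation step above.
            }
        }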

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the Software Testers I've worked with so far in my career. In fact, I've been thinking a lot about the Software Testers role in general and have reached a potentially contentious conclusion: Non-developer Software Testers staff are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion: Software Testing and Software Development both require a lot of skills in common: Problem solving Thinking about corner cases Analytical skills The ability to define clear and concise step-by-step scenarios What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: Copy-pasta, lack of code reusibility/maintainability, etc. So, I was wondering. Why not replace all non-dev roles with developer roles? Developers have the skills required to perform Software Testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work: Hire a bunch of developers and split them into 2 roles: Software developers Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc) Software developers doing application support. (I've removed this as it is probably a separate question altogether) And, in our case since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their Developer stints and Testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation. So maybe you could have 2 groups and make sure the rotation cycles of the groups are elided. So, for example, if each rotation was two sprints long, the two groups would have their rotations 1 sprint apart. That way there's only a 50% turn-over rate per sprint. Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to be in the 3 roles. Let's assume I'm starting a new company and I can hire these ideal people) Edit I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • Help yourself… if you like

    - by rachelp
    At Red Gate we enjoy talking to our customers. Really! If you've read recent blog posts by members of some of our customer-facing teams, you'll have spotted the pleasure they take in their work. In case you missed those posts, here they are: from our Finance team, "Finance: Friends, not foes!"; from our reception desk, "The Front line of Communication". However, we recognise that sometimes our customers would like to be able to solve their problems or answer their questions without talking to us - they're in a hurry, it's outside office hours... or perhaps they just prefer not to pick up the phone and call. Self-service customer care: so we've begun a programme of work to enable more self-service; whether it's finding the answer to a "how do I...?" question or getting access to a record of what product licenses they own, we want to make it much easier for our customers to get hold of this information for themselves. If they want to. Phase 1: make it easier to find information. We decided to start by tackling findability. We've got loads of useful information on our website, but it's sometimes difficult to find, so we've been working on improving our site search. Step 1 has been to replace the search engine, clean up the search UI, and make it consistent across the site. We're nearly there! The idea is that if we improve the site search it will be easier - and much more pleasant - for people to find the information they need. The new search will go live some time in April, and then we'll be gathering feedback, looking at web analytics (more about this in an earlier article), and working out what improvements we still need to make. We'd love to hear what you think, so do give your feedback or drop us a line. Or pick up the phone and call, if you like. What do you think? While I've got your attention, I'd love to hear what people think about self-service customer care. Do you like to call, email, live chat... or do you prefer to dig around and find out answers yourself? Who's getting it right: what self-service sites do you like? p.s. Watch this space for news of phase 2.

    Read the article

  • HAProxy reqrep remove URI on backend request

    - by Jim
    real quick question regarding HAProxy reqrep. I am trying to rewrite/replace the request that gets sent to the backend. I have the following example domain and URIs http://domain/web1 http://domain/web2 I want web1 to go to backend webfarm1, and web2 to go to webfarm2. Currently this does happen. However I want to strip off the web1 or web2 URI when the request is sent to the backend. Here is my haproxy.cfg frontend webVIP_80 mode http bind :80 #acl routing to backend acl web1_path path_beg /web1 acl web2_path path_beg /web2 #which backend use_backend webfarm1 if web1_path use_backend webfarm2 if web2_path default_backend webfarm1 backend webfarm1 mode http reqrep ^([^\ ]*)\ /web1/(.*) \1\ /\2 balance roundrobin option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com server webtest1 10.0.0.10:80 weight 5 check slowstart 5000ms server webtest2 10.0.0.20:80 weight 5 check slowstart 5000ms backend webfarm2 mode http reqrep ^([^\ ]*)\ /web2/(.*) \1\ /\2 balance roundrobin option httpchk HEAD /index HTTP/1.1\r\nHost:\ example.com server webtest1-farm2 10.0.0.110:80 weight 5 check slowstart 5000ms server webtest2-farm2 10.0.0.120:80 weight 5 check slowstart 5000ms If I go to http://domain/web1 or http://domain/web2 I see it in the error logs that the request on a server in each backend that the requst is for the resource /web1 or /web2 respectively. Therefore I believe there to be something wrong with my regular expression, even though I copied and pasted it from the Documentation. http://code.google.com/p/haproxy-docs/wiki/reqrep Summary: I'm trying to route traffic based on URI, however I want to strip the URI on the backend side. Go to http://domain/web1 -- backend request of / to webfarm1 Thank you! -Jim

    Read the article

  • Watchguard SSL Certificate problems

    - by Bill Best
    We recently purchased a WatchGuard XTM 510. The hope is to replace our ISA 2006 proxy with this UTM product. We are having some issues with secured sites in our test setup. Currently we are still running traffic through the ISA server, and I have the WatchGuard also set up and connected to the network. Where we run into problems is when I set the HTTPS site's location in ISA to be forwarded through the XTM: I get a "certificate could not be validated" error. Therefore I think I've narrowed it down to two possibilities. One, the certificate needs to be installed on the XTM. I'm not 100% sure this is the case, as I believe this should just be acting strictly as a proxy and forwarding all the traffic through, no questions asked. Either way, if I try to import a certificate to the XTM I always get a "certificate validation failed" error message. These are generally pfx files converted to pem. Second, the XTM CA certificate needs to be installed on the ISA server so that they may communicate. I have done this but it didn't seem to do anything. I believe this should be working and was hoping someone has struggled through this before.

    Read the article

  • Adaptec 6405 RAID controller turned on red LED

    - by nn4l
    I have a server with an Adaptec 6405 RAID controller and 4 disks in a RAID 5 configuration. Staff in the data center called me because they noticed a red LED was turned on in one of the drive bays. I have then checked the status using 'arcconf getconfig 1' and I got the status message 'Logical devices/Failed/Degraded: 2/0/1'. The status of the logical devices was listed as 'Rebuilding'. However, I did not get any suspicious status of the affected physical device, the S.M.A.R.T. setting was 'no', the S.M.A.R.T. warnings were '0' and also 'arcconf getsmartstatus 1' returned no problems with any of the disk drives. The 'arcconf getlogs 1 events tabular' command gives lots of output (sorry, can't paste the log file here as I only have remote console access, I could post a screenshot though). Here are some sample entries: eventtype FSA_EM_EXPANDED_EVENT grouptype FSA_EXE_SCSI_GROUP subtype FSA_EXE_SCSI_SENSE_DATA subtypecode 12 cdb 28 00 17 c4 74 00 00 02 00 00 00 00 data 70 00 06 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 0 The 'arcconf getlogs 1 device tabular' command reports mediumErrors 1 for two of the disks. Today, I have checked the status of the controller again. Everything is back to normal, the controller status is now 'Logical devices/Failed/Degraded: 2/0/0', the logical devices are also all back to 'Optimal'. I was not able to check the LED status, my guess is that the red LED is off again. Now I have a lot of questions: what is a possible cause for the medium error, why it is not reported by the SMART log too? Should I replace the disk drives? They were purchased just a month ago. The rebuilding process took one or two days, is that normal? The disks are 2 TByte each and the storage system is mostly idling. the timestamp of the logs seem to show the moment of the log retrieval, not the moment of the incident. Please advise, all help is very appreciated.

    Read the article
