Search Results

Search found 4692 results on 188 pages for 'shuo ran'.


  • Where Is Silverlight Toolkit Installed On My PC?

    - by Gopinath
    This is the first question that ran through my mind once I finished installing the Silverlight Toolkit today. The installation wizard does not ask for an installation folder, and after installation completes there are no entries in the All Programs section of the Start menu. After going through the documentation, I found that the installer silently places all the binaries, themes, samples and documents under the Program Files folder, in a location that depends on the version of the toolkit. If you installed version 4.0 of the toolkit, it will be placed in C:\Program Files\Microsoft SDKs\Silverlight\v4.0. Here is the list of other useful Silverlight Toolkit folders that we refer to often:
        Bin      C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Bin
        Samples  C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Samples
        Themes   C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Themes
        Source   C:\Program Files\Microsoft SDKs\Silverlight\v4.0\Toolkit\Apr10\Source
    Please note that these folder names will not be exactly the same on your computer, as they vary from one version to another. First open the base folder C:\Program Files\Microsoft SDKs\Silverlight, then navigate through the available folders to locate the ones you need. Hope this helps you.


  • Adventures in Scrum: Lesson 2 - For the record

    - by Martin Hinshelwood
    At SSW we have always done Agile. Recently we started doing Scrum, and we have nearly completed our first Sprint ever using Scrum. As you probably guessed from my previous post, it looks like it is going to be a “Failed Sprint”, but the Scrum Team (this includes the ScrumMaster and the Product Owner) has learned a huge amount about working in the Scrum Framework. We have been running with a “Proxy Product Owner” for the last two weeks, but a simple mistake occurred during either the “Product Planning Meeting” or the “Sprint Planning Meeting” that could have prevented this Sprint from failing. We had a heated discussion about the vision of someone not in the room, which ended with the assertion that the Product Owner would be quizzed again on their vision. This did not happen, and we ran with the “Proxy Product Owner’s” vision for two weeks. Product Owner vision: Update Component A of Product A to Silverlight. Proxy Product Owner vision: Update Product A to Silverlight. Do you see the problem? Worse than that, as we had a lot of junior members on the Scrum Team and we are just feeling our way around how Scrum will work at SSW, I missed implementing a fundamental rule. That’s right, it was me. It does not matter that I did not know about this rule; it’s on the site and I should have read it. Would a police officer let you off if you did not know that a red light meant stop? I think not… But what is this amazing rule, I hear you shout? It’s simple: as per our rule, I should have sent the following email: “Dear Proxy Product Owner, For the record, I disagree that the Product Owner wants us to ‘Update Product A to Silverlight’, as I still think that he wants us to ‘Update Component A of Product A to Silverlight’ and not the entire application. Regards, Martin” - ‘For the record’ - Rules to being Software Consultants - Dealing with Clients. This email should have been copied to the entire Scrum Team, which would have included the Product Owner, who would have nipped this misunderstanding in the bud and we would have had one less impediment. Technorati Tags: SSW,SSW Rules,SSW Standards,Scrum,Product Owner,ScrumMaster,Sprint,Sprint Planning Meeting,Product Planning Meeting


  • Read-only filesystem Recovery Mode not working

    - by purbleguy
    I have seen other posts about this before, but they didn't help. In short, today I was trying to play Colobot on my Ubuntu Trusty computer, and when I tried to access the directory the game was in from a terminal, bash warned me that the disk was in a read-only state. I'm like, ok... So I reboot and go into recovery mode; there I run fsck, and it finds errors but apparently fails to fix them. At that point I was getting annoyed and searched the internet. Once I found an answer, I ran the grub and dpkg options in recovery mode; recovery mode said the filesystem was read/write, but when I boot in, I get the same thing: read-only. So I reboot into recovery mode, and ta-da! It's read-only again. I can't think of anything else to do, as the other people who had the same problem had it fixed by the steps I followed. I have all my important files backed up to both a separate partition and a separate computer, so no worries there. I just need help getting this to work, as my computer might as well be a brick if I can't do anything on it.
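    For reference, a forced repair generally has to run while the filesystem is unmounted, e.g. from a live USB session rather than from the installed system. A minimal sketch, assuming the root partition is /dev/sda1 (substitute the real device):

        sudo fsck -f -y /dev/sda1   # -f forces the check, -y answers yes to every repair prompt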


  • 11gr2 DataGuard: Restarting DUPLICATE After a Failure

    - by rene.kundersma
    One of the great new features that comes in very handy now that databases are getting larger and larger is RMAN's capability to duplicate from an active database, and even to restart a duplicate when it fails. Imagine the problem I had lately: I used the duplicate from active database feature and had to wait six hours or so before all datafiles were transferred. At the end of the process an error occurred because of the syntax. While this error was easy to solve, I was afraid I would have to redo the complete procedure and transfer the 2.5 TB again. Well, 11gR2 RMAN surprised me when I re-ran my command with the following output:
        Using previous duplicated file +DATA/fin2prod/datafile/users.2968.719237649 for datafile 12 with checkpoint SCN of 183289288148
        Using previous duplicated file +DATA/fin2prod/datafile/users.2703.719237975 for datafile 13 with checkpoint SCN of 183289295823
    Above I only show a small snippet, but what happened is that RMAN smartly skipped all files that were already transferred! The documentation says this: RMAN automatically optimizes a DUPLICATE command that is a repeat of a previously failed DUPLICATE command. The repeat DUPLICATE command notices which datafiles were successfully copied earlier and does not copy them again. This applies to all forms of duplication, whether they are backup-based (with and without a target connection) or active database duplication. The automatic optimization of the DUPLICATE command can be especially useful when a failure occurs during the duplication of very large databases. If a DUPLICATE operation fails, you need only run the DUPLICATE again, using the same parameters contained in the original DUPLICATE command. Please see chapter 23 of the 11g Release 2 Database Backup and Recovery User's Guide for more details. By the way, be very careful with the duplicate command: a small mistake in one of the 'convert' parameters can potentially overwrite your target's controlfile without prompting! Rene Kundersma, Technical Architect, Oracle Technology Services
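    Restarting really is just re-issuing the identical command. A minimal sketch, assuming RMAN is already connected to both the target and the auxiliary instance (the real run would carry the same SPFILE/convert clauses as the original command):

        RMAN> DUPLICATE TARGET DATABASE TO fin2prod FROM ACTIVE DATABASE;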


  • OCS 2007 Access Edge Server Certificate issue

    - by BWCA
    We are currently building additional OCS 2007 R2 Access Edge Servers to handle additional capacity.  We ran into an SSL certificate issue when we were setting up the servers. Before running the steps to Deploy an Edge Server, we successfully imported the SSL certificate that we use for external access on all of the new servers.  After successfully completing the first three Deploy Edge Server steps on one of the new servers, we started working on Step 4: Configure Certificates for the Edge Server.  After selecting Assign an existing certificate from the common tasks list and clicking Next to select a certificate, there were no certificates listed. The first thing we did was to use the Certificates mmc snap-in to review the SSL certificate information.  We noticed in the General tab that Windows does not have enough information to verify this certificate, and in the Certification Path tab that the issuer of this certificate could not be found, for the very SSL certificate that we had imported successfully earlier. While troubleshooting, we learned that we could not access the URL for the certificate's CRL to download the CRL file, due to restrictive firewall rules between the new OCS 2007 R2 Access Edge Servers and the Internet. After modifying the firewall rules, we were able to download the CRL file, and when we reran Step 4 to assign an existing certificate, the certificate was listed.
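    As a side note, one quick way to test CRL reachability from the Edge Server itself is the built-in certutil tool, which follows and fetches the URLs embedded in the certificate. A minimal sketch, assuming the certificate has been exported to a file:

        certutil -urlfetch -verify exported_cert.cer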


  • Use depth bias for shadows in deferred shading

    - by cubrman
    We are building a deferred shading engine and we have a problem with shadows. To add shadows we use two maps: the first one stores the depth of the scene captured by the player's camera, and the second one stores the depth of the scene captured by the light's camera. We then run a shader that analyzes the two maps and outputs a third one with the ready shadow areas for the current frame. The problem we face is a classic one: self-shadowing. A standard way to solve this is to use slope-scale depth bias and depth offsets; however, as we are doing things in a deferred way, we cannot employ this algorithm. Any attempts to set depth bias when capturing the light's view depth produced no or unsatisfying results. So here is my question: the MSDN article has a convoluted explanation of the slope-scale:
        bias = (m * SlopeScaleDepthBias) + DepthBias
    where m is the maximum depth slope of the triangle being rendered, defined as:
        m = max( abs(delta z / delta x), abs(delta z / delta y) )
    Could you explain how I can implement this algorithm manually in a shader? Maybe there are better ways to fix this problem for deferred shadows?
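    Here is roughly the kind of thing I have in mind (an untested HLSL-style sketch; slopeScale, constantBias and the depth variables are placeholders, not names from our engine):

        // approximate m = max(|dz/dx|, |dz/dy|) with screen-space derivatives
        float2 dz = float2(ddx(receiverDepth), ddy(receiverDepth));
        float m = max(abs(dz.x), abs(dz.y));
        // bias = (m * SlopeScaleDepthBias) + DepthBias
        float bias = m * slopeScale + constantBias;
        // biased comparison against the light's depth map
        float lit = (receiverDepth - bias <= shadowMapDepth) ? 1.0 : 0.0;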


  • In Google webmaster tools, can a "soft 404" be triggered by the text on the page?

    - by Stephen Ostermiller
    I just ran across an error in Google Webmaster Tools that I have never seen before. I manage the website for my local community band (I play trombone). One of the pages on the site is a list of our upcoming performances. It is powered by a WordPress events plugin that uses a database of upcoming events entered through the administration interface. We just finished our summer and fall concerts, and our next performance will be our Christmas concert. I hadn't gotten around to adding that to the website yet, so there are no upcoming events shown on the page. In fact, the text on the page says: "No upcoming events listed under Performance. Check out past events for this category or view the full calendar." Then in Google Webmaster Tools, this page is showing up as a "soft 404": the page is returning a 200 status and Google is indicating that the 404 is "soft". I wouldn't have expected Googlebot to be sophisticated enough to parse that particular sentence. Is Googlebot able to detect that the text on the page indicates that there is currently no content, and then treat it as a 404 page because of that? If Google is treating this page as a soft 404 because of the text on the page, does that mean that, like regular 404 pages, the page won't show up in search results?


  • rsync'd a folder, folder doesn't show up, but free disk space decreased

    - by Patrick
    I am currently trying to switch from Mac to a Windows/Ubuntu dual boot (on 2 separate internal HDDs), but ran into some trouble restoring my documents. I am not sure all the information below is necessary, but if I knew how to solve it, I wouldn't ask it here. I backed up my Mac before buying this laptop on an external HDD with Carbon Copy Cloner. I wanted to put these files in my user folder on my Windows HDD, but I could not do that from inside Windows (the Mac drive is HFS+ format), so I used rsync from inside Ubuntu to copy the documents from the external HDD to the Windows partition. It seemed like it went okay, but from inside Windows (and later also Ubuntu) the folder didn't show up. My free HDD space, however, has decreased by about 200 GB (the size of the backup) when looking at the disk properties (from inside Windows and Ubuntu).
    rsync command I used:
        rsync -av /media/patrick/Toshiba\ 1.5T/Users/patrickvandenberg/ /media/patrick/Windows8_OS/Users/Patrick/MacBackup/
    Folder does not exist:
        patrick@patrick-Lenovo-IdeaPad-Y410P:~$ cd /media/patrick/Windows8_OS/Users/Patrick/MacBackup
        bash: cd: /media/patrick/Windows8_OS/Users/Patrick/MacBackup: No such file or directory
    Size of disk:
        patrick@patrick-Lenovo-IdeaPad-Y410P:~$ du -hs /media/patrick/Windows8_OS/
        195G /media/patrick/Windows8_OS/
    Size of disk according to Disk properties: http://i.stack.imgur.com/OteMX.png (not enough rep to insert the image)
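    One check that narrows this down (a sketch using the same paths as above): if the files were written while the NTFS partition was not actually mounted, they landed on the root filesystem underneath the mount point and are hidden whenever the partition mounts over them.

        df -h /media/patrick/Windows8_OS             # which filesystem is mounted there right now?
        sudo umount /media/patrick/Windows8_OS
        ls /media/patrick/Windows8_OS/Users/Patrick  # anything visible now was copied under the mount point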


  • SQL SERVER – Difference between COUNT(DISTINCT) vs COUNT(ALL)

    - by pinaldave
    This blog post is written in response to the T-SQL Tuesday hosted by Jes Schultz Borland. Earlier today, I was presenting a 45-minute session at the Community College about "The Beginning SQL Server Database". One of the students asked me the following question: what is the difference between COUNT(DISTINCT) vs COUNT(ALL)? I found this question from the student very interesting. He seems to have read the documentation (Books Online) and was then asking me this question. I always carry a laptop which has SQL Server installed. I quickly opened it and ran the following script. After looking at the result, I think it was clear to everybody. Here is the script:
        SELECT COUNT([Title]) Value FROM [AdventureWorks].[Person].[Contact]
        GO
        SELECT COUNT(ALL [Title]) ALLValue FROM [AdventureWorks].[Person].[Contact]
        GO
        SELECT COUNT(DISTINCT [Title]) DistinctValue FROM [AdventureWorks].[Person].[Contact]
        GO
    The above script will give me the following results. You can clearly notice from the result set that COUNT(ALL ColumnName) is the same as COUNT(ColumnName). The reality is that ALL is actually the default option, so it need not be specified. The ALL keyword includes all the non-NULL values. I know this is very simple and maybe it does not change how we work; however, looking at the question from every angle, I really enjoyed it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology
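    To make the difference concrete, a hypothetical illustration (values invented, not from AdventureWorks):

        -- Suppose [Title] holds: 'Mr.', 'Mr.', 'Ms.', NULL
        -- COUNT([Title])          returns 3  (non-NULL values only)
        -- COUNT(ALL [Title])      returns 3  (ALL is the default, so identical)
        -- COUNT(DISTINCT [Title]) returns 2  ('Mr.' and 'Ms.')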


  • Move a SQL Azure server between subscriptions

    - by jamiet
    In September 2011 I published a blog post SSIS Reporting Pack v0.2 now available in which I made available the credentials of a sample database that one could use to test SSIS Reporting Pack. That database was sitting on a paid-for Azure subscription and hence was costing me about £5 a month - not a huge amount, but when I later got a free Azure subscription through my MSDN Subscription in January 2012 it made sense to migrate the database onto that subscription. Since then I had been endeavouring to make that move, but a few failed attempts combined with lack of time meant that I had not yet gotten round to it. That is, until this morning, when I heard about a new feature available in the Azure Management Portal that enables one to move a SQL Azure server from one subscription to another. Up to now I had been attempting to use a combination of SSIS packages and/or scripts to move the data but, as I alluded, I ran into a few roadblocks, hence the ability to move a SQL Azure server was a godsend to me. I fired up the Azure Management Portal and a few clicks later my server had been successfully migrated; moreover, the name of the server doesn't change and neither do any credentials, so I have no need to go and update my original blog post either. It's easy to be cynical about SQL Azure (and I maintain a healthy scepticism myself) but that, my friends, is cool! You can read more about the ability to move SQL Azure servers between subscriptions from the official blog post Moving SQL Azure Servers Between Subscriptions. @Jamiet


  • SQL SERVER – Contest – Summary of 5 Day and Additional Information

    - by pinaldave
    I am overwhelmed with the response to our contest run earlier this week. Every day we are giving away USD 198 worth of giveaways to readers in the USA and India. If you have not participated so far, I encourage you to participate today. Here are links to our 5-day contest. The winner of the contest will be announced on August 20th.
    Query Hint – Contest Win Joes 2 Pros Combo (USD 198) – Day 1 of 5
    Identity Fields – Contest Win Joes 2 Pros Combo (USD 198) – Day 2 of 5
    Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5
    Expanding Views – Contest Win Joes 2 Pros Combo (USD 198) – Day 4 of 5
    Understanding XML – Contest Win Joes 2 Pros Combo (USD 198) – Day 5 of 5
    Here are a few important notes related to the contest. A few people asked me what they should do if they have forgotten to mention their country in the response. Please resubmit with correct data; we will only consider the latest entry from each person. What if you are not from the USA or India? Participate in the Bonus Quiz. Leave a comment for each of the questions above with your favorite article and you may be eligible to win something cool. What if I am the winner of two contests out of the 5? Well, in that case, we will send you one set of the Combo Kit and an Amazon Gift Card of USD 100 for the other contest which you won. Can I exchange my kit for other stuff? No; if you do not want the kit, give it to someone who needs it. Btw, I strongly suggest that you participate in the Bonus Quiz. There is something cool for everyone! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Asus Eee PC 1000HE wireless woes

    - by Vladimir Noobokov
    Ever since I upgraded my Asus Eee PC 1000HE from Lucid 10.04 to Precise 12.04 I have been having issues with my wireless connections. At first I had wireless dropouts: I would be able to start using wireless, but after a few minutes it would stop working even though I was still connected to the network. Lately things have gotten worse: while I connect to my wireless network, it just never works. I tried all sorts of solutions on offer here and in other forums, but none worked. At best I got the wireless to work up until I rebooted, at which point I would get the same symptoms again: the wireless network is there, but it's not really working. By now I have tried so many different "solutions" I don't know where to start describing them; I have also reinstalled 12.04 several times, enough to make me lose faith in Ubuntu. Help here looks like my last resort. For the record, my Asus Eee PC 1000HE is equipped with an Atheros wireless card. I have reinstalled 12.04, run all the suggested updates, and get the following response when I type iwconfig in the terminal:
        lo     no wireless extensions.
        wlan0  IEEE 802.11bgn ESSID:"Arsenal"
               Mode:Managed Frequency:2.452 GHz Access Point: 00:04:ED:48:67:89
               Bit Rate=1 Mb/s Tx-Power=16 dBm
               Retry long limit:7 RTS thr:off Fragment thr:off
               Power Management:off
               Link Quality=70/70 Signal level=-29 dBm
               Rx invalid nwid:0 Rx invalid crypt:0 Rx invalid frag:0
               Tx excessive retries:27 Invalid misc:57 Missed beacon:0
        eth0   no wireless extensions.
    Thanks in advance for any help that might be offered.
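    One workaround that is often suggested for Atheros cards on 12.04 (assuming the card uses the ath9k driver, which lspci -k will confirm) is to disable the driver's hardware encryption and reload the module:

        echo "options ath9k nohwcrypt=1" | sudo tee /etc/modprobe.d/ath9k.conf
        sudo modprobe -r ath9k && sudo modprobe ath9k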


  • How can I install Cinnamon on Ubuntu 12.04 and eliminate the following errors:

    - by jaorizabal
    $ sudo apt-get install cinnamon cinnamon-session cinnamon-settings
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Note, selecting 'cinnamon' instead of 'cinnamon-session'
    Note, selecting 'cinnamon' instead of 'cinnamon-settings'
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help resolve the situation:
    The following packages have unmet dependencies:
     cinnamon : Depends: gir1.2-muffin-3.0 but it is not going to be installed
                Depends: libcogl5 (>= 1.7.4) but it is not installable
                Depends: libmuffin0 (>= 1.0.0-0ubuntu1~precise) but it is not going to be installed
                Recommends: gnome-themes-standard but it is not going to be installed
                Recommends: gnome-session-fallback but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    I added this PPA: sudo add-apt-repository ppa:merlwiz79/cinnamon-ppa
    Then ran the following command: sudo apt-get update && sudo apt-get install cinnamon cinnamon-session cinnamon-settings
    How can I install the latest Cinnamon desktop? How can I fix this error?


  • Unexpected SQL Server 2008 Performance Tip: Avoid local variables in WHERE clause

    - by Jim Duffy
    Sometimes an application needs every last drop of performance it can get; other times, not so much. We're in the process of converting some legacy Visual FoxPro data into SQL Server 2008 for an application and ran into a situation that required some performance tweaking. I figured the Making Microsoft SQL Server 2008 Fly session that Yavor Angelov (SQL Server Program Manager – Query Processing) presented at PDC 2009 last November would be a good place to start. I was right. One tip among the list of incredibly useful tips Yavor presented was "local variables are bad news for the Query Optimizer and they cause the Query Optimizer to guess". What that means is you should be avoiding code like this in your stored procs, even though it seems such an intuitively good idea:
        DECLARE @StartDate datetime
        SET @StartDate = '20091125'
        SELECT * FROM Orders WHERE OrderDate = @StartDate
    Instead, you should be referencing the value directly in the WHERE clause so the Query Optimizer can create a better execution plan:
        SELECT * FROM Orders WHERE OrderDate = '20091125'
    My first thought about this one was that we reference variables in the form of passed-in parameters in WHERE clauses in many of our stored procs. Not to worry, though, because parameters ARE available to the Query Optimizer as it compiles the execution plan. I highly recommend checking out Yavor's session for additional tips to help you squeeze every last drop of performance out of your queries. Have a day. :-|
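    For cases where the local variable genuinely cannot be avoided, a commonly cited alternative (not from Yavor's session) is OPTION (RECOMPILE), which lets the Query Optimizer see the variable's actual value when it compiles the statement:
        DECLARE @StartDate datetime
        SET @StartDate = '20091125'
        SELECT * FROM Orders WHERE OrderDate = @StartDate OPTION (RECOMPILE)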


  • Unable to Install VirtualBox Due to Missing Kernel Module

    - by SoftTimur
    I am trying to install VirtualBox on my Ubuntu machine. I first tried sudo apt-get install virtualbox-ose in a terminal, but after the configuration step it fails with an error:
        No suitable module for running kernel found
    When proceeding with starting VirtualBox, I get this error:
        WARNING: The character device /dev/vboxdrv does not exist. Please install the virtualbox-ose-dkms package and the appropriate headers, most likely linux-headers-generic. You will not be able to start VMs until this problem is fixed.
    So I tried the package from http://www.virtualbox.org/, but starting VirtualBox fails with:
        WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (2.6.38-8-generic-pae) or it failed to load. Please recompile the kernel module and install it by sudo /etc/init.d/vboxdrv setup. You will not be able to start VMs until this problem is fixed.
    So I ran sudo /etc/init.d/vboxdrv setup, but it fails too:
        * Stopping VirtualBox kernel modules [ OK ]
        * Uninstalling old VirtualBox DKMS kernel modules [ OK ]
        * Trying to register the VirtualBox kernel modules using DKMS
        Error! Your kernel headers for kernel 2.6.38-8-generic-pae cannot be found at /lib/modules/2.6.38-8-generic-pae/build or /lib/modules/2.6.38-8-generic-pae/source.
        * Failed, trying without DKMS
        * Recompiling VirtualBox kernel modules
        * Look at /var/log/vbox-install.log to find out what went wrong
    The contents of /var/log/vbox-install.log. As I am stuck, I also tried to install kernel-devel with yum, still fruitless:
        root@ubuntu# yum install kernel-devel
        Setting up Install Process
        No package kernel-devel available.
        Nothing to do
    Now I've no idea how to correct this. Any ideas?
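    The setup log points at missing kernel headers rather than anything VirtualBox-specific, and on Ubuntu those come from apt, not yum (which is why the yum attempt finds nothing). A likely fix, assuming the running kernel matches the one named in the error:

        sudo apt-get install linux-headers-$(uname -r)
        sudo /etc/init.d/vboxdrv setup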


  • Thinktecture.IdentityServer Beta 1

    - by Your DisplayName here!
    I just uploaded beta 1 to CodePlex. Please test this version and give me feedback. Some quick notes on setup:
    Watch the intro screencast on the CodePlex site.
    Use the setup tool to set the signing and SSL certificate. You can now also set the ACLs on the private key for your worker pool account.
    IIS is required.
    SSL for the IIS site the STS runs in is required.
    Users of the STS must be in the 'IdentityServerUsers' role.
    Admins of the STS must be in the 'IdentityServerAdministrators' role.
    What's new? Mainly smaller bits and pieces and some refactoring. The biggest under-the-cover change is a new authorization model for the STS itself. If, e.g., you don't like the new roles I introduced, you can easily change the behavior in the claims authorization manager in the STS web site project. What's missing? The big one is Azure support. Not that I ran into unforeseeable problems here; I just wanted to wait until the on-premise version is more stabilized. Now with B1 I can start adding Azure support back.


  • How can I make fsck run non-interactively at boot time?

    - by Nelson
    I have a headless Ubuntu 12.04 server in a datacenter 1500 miles away. Twice now on reboot the system decided it had to fsck. Unfortunately Ubuntu ran fsck in interactive mode, so I had to ask someone at my datacenter to go over, plug in a console, and press the Y key. How do I set it up so that fsck runs in non-interactive mode at boot time with the -y or -p (aka -a) flag? If I understand Ubuntu's boot process correctly, init invokes mountall, which in turn invokes fsck. However, I don't see any way to configure how fsck is invoked. Is this possible? (To head off one suggestion: I'm aware I can use tune2fs -i 0 -c 0 to prevent periodic fscks. That may help a little, but I need the system to try to come back up even if it had a real reason to fsck, say after a power failure.) In response to followup questions, here are the pertinent details of my /etc/fstab. I don't believe I've edited this at all from what Ubuntu put there.
        UUID=3515461e-d425-4525-a07d-da986d2d7e04 /     ext4 errors=remount-ro 0 1
        UUID=90908358-b147-42e2-8235-38c8119f15a6 /boot ext4 defaults          0 2
        UUID=01f67147-9117-4229-9b98-e97fa526bfc0 none  swap sw                0 0
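    One avenue worth checking: on stock 12.04, mountall honours the FSCKFIX setting in /etc/default/rcS, which makes the boot-time fsck repair problems automatically (equivalent to -y) instead of dropping to a prompt. A sketch, assuming that file is still in place:

        # /etc/default/rcS
        FSCKFIX=yes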


  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from Windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots, with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claim to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible)? From what I've read, it looks like I'll have to write an xorg.conf file, since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration and everything else will get auto-configured? Or will providing a "Device" section overrule the auto-magic from detecting and using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf, but I'm hesitant to reboot with it in place, suspecting I'll have a screwed-up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
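    On the last question: a sketch of the stock way to get a complete generated config (it probes the hardware rather than dumping the live session state, but it gives a full starting point to merge the nVidia sections into):

        sudo service gdm stop     # X must not be running on the display
        sudo Xorg :1 -configure   # writes xorg.conf.new for all detected adapters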


  • Get to Know a Candidate (18 of 25): Jack Fellure&ndash;Prohibition Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post for whom I am voting. Information sourced from Wikipedia. NOTE: I apologize for getting this entry out of order.
    Fellure (born October 3, 1931) is an American perennial political candidate and retired engineer. Fellure has formally campaigned for President of the United States in every presidential election since 1988 as a member of the Republican Party. He asserts on his campaign website that his platform, based on the 1611 Authorized King James Bible, has never changed. As a candidate, he calls for the elimination of the liquor industry, abortion and pornography, and advocates the teaching of the Bible in public schools and the criminalization of homosexuality. He has blamed the ills of society on those he has characterized as "atheists, Marxists, liberals, queers, liars, draft dodgers, flag burners, dope addicts, sex perverts and anti-Christians." After another run in 2008, Fellure initially ran for the Republican Party's 2012 presidential nomination. He then decided to seek the nomination of the Prohibition Party at the party's national convention in Cullman, Alabama.
    The Prohibition Party (PRO) is a political party in the United States best known for its historic opposition to the sale or consumption of alcoholic beverages. It is the oldest existing third party in the US. The party was an integral part of the temperance movement. While never one of the leading parties in the United States, it was once an important force in American politics during the late 19th century and the early years of the 20th century. It has declined dramatically since the repeal of Prohibition in 1933. The party earned only 643 votes in the 2008 presidential election. The Prohibition Party advocates a variety of socially conservative causes, including "stronger and more vigorous enforcement of laws against the sale of alcoholic beverages and tobacco products, against gambling, illegal drugs, pornography, and commercialized vice."
    Fellure has ballot access in: LA
    Learn more about Jack Fellure and the Prohibition Party on Wikipedia.


  • Nautilus header bar missing -- Ubuntu Gnome 13.10 (Gnome 3.10)

    - by user75252
    So, I recently did a fresh install of Ubuntu GNOME 13.10, added the gnome3-team/gnome3-next and gnome3-team/gnome3-staging PPAs, and upgraded to GNOME 3.10. (Also using a dual-monitor system, 1920 x 1080, Nvidia-319 driver.) Everything was running fine after the updates (including Nautilus, or "Files"), but at some point when I opened Nautilus, the header bar was gone and it got stuck in full-screen mode. The header is there for every other application, though. I can't resize Nautilus, and I can't move it with the Alt+F7 hotkey. I can, however, make the sidebar disappear with F9 and close the program with Alt+F4. I can also bring up the window menu with Alt+Space, but the options to "Resize" and "Move" are greyed out, and "Move Titlebar Onscreen" does nothing when clicked.
    Attempted solutions: I uninstalled Nautilus, ran apt-get autoremove clean autoclean, and re-installed it, along with any applications that were removed in the process -- no fix. I installed Gnome Tweak Tool and tried replacing the titlebar theme with Ambiance to at least restore the header/title bar -- no fix. I created a new user, logged into that, and opened Nautilus. It DID open in windowed mode with the header bar, but then, without my involvement, went to full-screen without the header bar. Same problem.
    Running "sudo nautilus" from the terminal does open it (full-screen, without header), but gives this error:
        (nautilus:7531): Gtk-WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files


  • Grub doesn't show both Ubuntu installations

    - by jackweirdy
    I have a laptop with Ubuntu 12.04 LTS installed as the main OS. The other day I installed Ubuntu Studio (version 12.04) into another partition on the machine. The installation went great, and when the machine booted, the grub menu popped up and I could see the options for Ubuntu Studio and the vanilla Ubuntu OS. The problem was that this version of grub, installed by the Studio installer, didn't look great and insisted on putting Studio at the top of the list, and therefore as the default OS to boot. I use the standard Ubuntu more often, so I booted into that and ran sudo grub-install /dev/sda. That worked OK and now Ubuntu boots as normal. The only problem is that the grub menu doesn't show up, so it doesn't give me a chance to choose the other OS. Running sudo os-prober shows that it can find Ubuntu Studio, but grub doesn't give me a chance to boot it. Any ideas as to how I can fix this problem? Cheers in advance.
    EDIT: followed instructions here and saw the boot menu, but the only boot options present were for the standard installation of Ubuntu.
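    For reference, the standard follow-up to grub-install is regenerating the menu, which runs os-prober and should pick the Studio install up (a sketch assuming the stock grub-pc setup):

        sudo update-grub   # rebuilds /boot/grub/grub.cfg, including os-prober entries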


  • Tale of an Encrypted SSIS Package in msdb and a Lost Password

    - by Argenis
    Yesterday a Developer at work asked for a copy of an SSIS package in Production so he could work on it (please, dear Reader – withhold judgment on Source Control – I know!). I logged on to the SSIS instance, and when I went to export the package… oops. I didn't have that password. The DBA who uploaded the package to Production is long gone; my fellow DBA had no idea either, and the Devs returned a cricket sound when queried. So I posed the obligatory question on #SQLHelp and a bunch of folks jumped in – some to help and some to make fun of me (thanks, @SQLSoldier @crummel4 @maryarcia and @sqljoe). I tried their suggestions to no avail… I even ran some queries to see if I could figure out how to extract the package XML from the system tables in msdb:
        SELECT CAST(CAST(p.packagedata AS varbinary(max)) AS varchar(max))
        FROM msdb.dbo.sysssispackages p
        WHERE p.name = 'LePackage'
    This just returned a bunch of XML with encrypted data in it. I knew there was a job in SQL Agent scheduled to execute the package, but when I tried to look at the details on the job step, what I got was not very helpful. The password had to be saved somewhere, but where?? All of a sudden I remembered that there was a system table I hadn't queried yet:
        SELECT sjs.command
        FROM msdb.dbo.sysjobs sj
        JOIN msdb.dbo.sysjobsteps sjs ON sj.job_id = sjs.job_id
        WHERE sj.name = 'Run LePackage'
    The result: “Well, that’s really secure”, I thought to myself. Cheers, -Argenis


  • Searching Your PL/SQL Source with Oracle SQL Developer

    - by thatjeffsmith
    Version 3.2.1 included a few tweaks along with several hundred bug fixes. One of those tweaks was the addition of 'ALL_SOURCE' as a selection in the Type drop-down of the Find Database Object panel (scroll ALL the way down to the bottom). Searching the database for your code or objects can be expensive. The ALL_SOURCE view comes in pretty handy when I want to demo how to cancel long-running queries or the Task Progress panel – did you know you can manage all of your long-running queries there? I pretty much hosed our demo pod at Open World because I ran that same query but added an ORDER BY b.TEXT DESC, which blew up the TEMP space and filled the primary partition on the image. Fun stuff. Anyways, where was I going with this? Oh yeah, searching ALL_SOURCE can be expensive. So we took it out of the product for a while. And now it's back in. If you select the 'ALL' field, it doesn't actually search EVERYTHING, because that would probably be less than helpful. So if you want to search your PL/SQL objects for a scrap or bit of code, use the 'ALL_SOURCE' option in v3.2.1. Double-click on the search results to go to the code you're looking for. Be careful what you search for. Just like any query, it could take a while.
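    For anyone who wants to run the equivalent search by hand, a minimal sketch of an ALL_SOURCE query (the search string is a placeholder; the dialog's actual query may differ):

        SELECT owner, name, type, line, text
          FROM all_source
         WHERE UPPER(text) LIKE UPPER('%some_identifier%');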


  • Performance Overhead of Encrypted /home

    - by SabreWolfy
    I have a netbook with Windows on the second partition and Xubuntu (/ and /home) on the third partition. I selected to encrypt my home folder during installation. The performance of the netbook is adequate for the small machine that it is, but I'm looking to improve performance. I could not find much information about the overhead (CPU or disk) associated with home-folder encryption. I ran the following, writing to my home partition as well as to the mounted Windows partition:
        dd if=/dev/zero of=~/dummy bs=512 count=10240
        dd if=/dev/zero of=/media/Windows/dummy bs=512 count=10240
    The first returned 2.4 MB/s and the second returned 2.5 MB/s. Can I therefore deduce that there is very little overhead to home folder encryption? I'm not sure if the different filesystems make any difference (/ and /home are ext3).
    Update 1: I don't know why I didn't use /tmp instead of the mounted Windows folder. Only /home is encrypted, so /tmp is unencrypted ext3. The results of the dd as above are astounding:
        ~:    2.4 MB/s
        /tmp: 42.6 MB/s
    Comments please? The reason I am asking this is that disk access on the netbook is noticeably slow.
    Update 2: I timed each of the dd operations with time:
        ~:    real 0m2.217s  user 0m0.028s  sys 0m2.176s
        /tmp: real 0m0.152s  user 0m0.012s  sys 0m0.136s
    See also: discussion on UbuntuForums.org and bug report
    Edit: Output of mount:
        /dev/sda3 on / type ext3 (rw,noatime,errors=remount-ro,user_xattr,commit=600)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        fusectl on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
        gvfs-fuse-daemon on /home/USER/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=USER)
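    One caveat on the numbers: bs=512 count=10240 writes only 5 MB in very small blocks, so the comparison is dominated by per-write and sync overhead rather than raw throughput. A fairer sketch of the same test writes more data and forces a flush before the clock stops (same paths as above):

        dd if=/dev/zero of=~/dummy bs=1M count=256 conv=fdatasync
        dd if=/dev/zero of=/tmp/dummy bs=1M count=256 conv=fdatasync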


  • Is HTML5/WebGL performance bad on low-end Android tablets and phones?

    - by Boris van Schooten
    I've developed a couple of WebGL games and am trying them out on Android. I found that they run very slowly on my tablet, however. For example, a game with 10 or so sprites runs at 5 fps. I tried Chrome and CocoonJS, but they are comparably slow. I also tried other games, and even games with only 5 or so moving sprites are this slow. This seems inconsistent with reports from others, such as this benchmark. Typically, when people talk about HTML5 game performance, they mention well-known and higher-end phones and tablets. While my 7" tablet is cheap (I believe it's a relabeled Allwinner tablet, apparently with the Mali 400 GPU), I found it generally has good gaming performance. All the games I tried run smoothly. I also developed an OpenGL ES 2 demo with 200 shaded 3D objects, and it ran at 50 fps. My suspicion is that many low-end and white-label devices may have unacceptable HTML5/WebGL support, which means there may be a large section of gamers you will not reach when you choose this as your platform. I've heard rumors about inconsistent performance of HTML5 and WebGL on different devices, but no clear picture emerges. I would like to hear whether any of you have had similar experiences with HTML5 or WebGL, or whether I can find information about the percentage of devices I can expect to have decent performance.
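    For comparing devices like-for-like, a minimal frame-rate probe (plain JavaScript, independent of any engine) is enough to separate "the page is slow" from "WebGL rendering is slow":

        // log frames per second once a second
        let frames = 0, last = performance.now();
        function tick(now) {
          frames += 1;
          if (now - last >= 1000) {
            console.log('fps: ' + frames);
            frames = 0;
            last = now;
          }
          requestAnimationFrame(tick);
        }
        requestAnimationFrame(tick);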

