Search Results

Search found 27327 results on 1094 pages for 'search results'.

Page 274/1094

  • How do I revert back to Windows 8 from 8.1?

    - by chipperyman573
    I hate the new Windows 8.1 search. I mean I truly loathe it. It's easily the absolute worst thing Microsoft has ever made (including Bing and IE). I asked how to get the old search back and the answer was that you can't. Honestly, I haven't noticed any changes from 8 to 8.1 besides a single tile changing color (dunno why; the color doesn't even match the icon), and I really hate this new search thing. So how do I get 8 back?

    Read the article

  • Which Browser is the Best to Use When Running Your Laptop on Battery Power?

    - by Asian Angel
    Squeezing the maximum amount of usage time out of your laptop battery can be challenging at times…it all depends on the software you are using. One piece of software we are all likely to be using is a browser to keep up with our online lives… If your laptop is older, then getting the most out of its aging battery is definitely a must. The good folks over at the 7 Tutorials blog have done a comparison test to see which browser is gentlest on your laptop’s battery, and the results may surprise you. You can view the results by visiting the link below… Had better (or worse) luck with one of the browsers tested? Then make sure to share the results with your fellow readers in the comments! Test Comparison: Which Browser Will Make Your Laptop’s Battery Last Longer? [7 Tutorials]

    Read the article

  • Push or Pull Mobile Coupons?

    - by David Dorf
    Mobile phones allow consumers to receive coupons in context, which increases their relevance and therefore redemption rates. Using your current location, you can get coupons that can be redeemed nearby for the things you want now. Receiving a coupon for something you wanted last week, or something you might buy next month, just isn't as valuable. I previously talked about Placecast and their concept of pushing offers to mobile phones that cross "geo-fences" around points of interest, like store locations. This push model is an automatic reminder that there are good deals just up ahead. It works well in dense cities where people walk, but I question how effective it will be in the suburbs where people are driving. McDonald's recently ran a campaign in Finland where they pushed offers to GPS devices when cars neared their restaurants. Amazingly, they achieved a 7% click-through rate. But 8coupons.com sees things differently. They prefer the pull model, which requires customers to initiate a search for nearby coupons, and they've done some studies to better understand what "nearby" means. It turns out that there are concentric search circles that emanate from your home and work. From inner to outer, people search for food, drink, shopping, and entertainment. Intuitively, that feels about right. So the question is, do consumers prefer the push or the pull model for offers? No doubt the market is big enough for both. These days it's not good enough to just know who your customers are -- you also need to know where they are so you can catch them in the right moment. According to Borrell Associates, redemption rates for mobile coupons are 10x those of traditional mail and newspaper coupons. One thing is for sure: assuming 85% of consumers regularly spend money within 5 miles of home and work, location-based coupons make tons of sense.

    Read the article

  • SharePoint 2007 / 2010 Content Indexing “The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled.”

    - by Stacy Vicknair
    If you have large files in a content source that is being indexed by SharePoint, you might run into the following error message: “The file reached the maximum download limit. Check that the full text of the document can be meaningfully crawled.” This is usually caused by SharePoint’s MaxDownloadSize setting being set lower than the size of the file you are attempting to index. You can increase this value, restart the service, then kick off a full crawl in order to fix the issue, but SharePoint 2007 and 2010 have different methods for accomplishing this task.

    SharePoint 2007: Open the Registry editor and increase the MaxDownloadSize value to a number (in MB) higher than the largest file being indexed. You can find it at HKEY_LOCAL_MACHINE\Software\Microsoft\Search\1.0\Gathering Manager. After you increase the size, cycle the search service and kick off a full crawl of the content source in question.

    SharePoint 2010: With SharePoint 2010 you can use PowerShell via the SharePoint 2010 Management Shell to change the MaxDownloadSize. Execute the following commands to update the value:

        $ssa = Get-SPEnterpriseSearchServiceApplication
        $ssa.SetProperty("MaxDownloadSize", <new size in MB>)
        $ssa.Update()

    References: http://support.microsoft.com/kb/287231 and http://blogs.technet.com/b/brent/archive/2010/07/19/sharepoint-server-2010-maxdownloadsize-and-maxgrowfactor.aspx

    Technorati Tags: SharePoint, WSS, MaxDownloadSize, Search
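
    For the 2010 case, the whole fix can be scripted end to end. This is only a hedged sketch: the 100 MB value is an example, and both the search service name (OSearch14) and the content source name ("Local SharePoint Sites") are assumptions you should verify in your own farm before running it.

        # Raise the limit (example: 100 MB), restart search, then re-crawl
        $ssa = Get-SPEnterpriseSearchServiceApplication
        $ssa.SetProperty("MaxDownloadSize", 100)
        $ssa.Update()

        # Cycle the SharePoint Server Search 14 service (assumed service name)
        Restart-Service OSearch14

        # Kick off a full crawl of the affected content source (assumed name)
        $cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint Sites"
        $cs.StartFullCrawl()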

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text of all opened SQL windows every N minutes, with the default being 30 minutes. This fixes a shortcoming of the Query Execution History, which is saved only when a query is run: if you're working on a large script and never execute it, the existing Query Execution History wouldn't save it. By contrast, the Window Content History saves everything to a .sql file, so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple, searchable .sql files, there isn't a special search editor built in. It is turned ON by default, but despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list. The fixed bugs are:

    - SSMS 2008 R2 slowness reported by a few people.
    - An Object Explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node.
    - A datagrid bug in SQL snippets.
    - Ability to read illegal XML characters from log files.
    - The upper limit on saved history text, now fixed at 5 MB.
    - A bug where searching through result sets prevented the search from working.
    - A bug with text formatting erroring out for certain scripts.
    - A bug with finding servers where it would return null even though servers existed.
    - Run Custom Scripts objects had a bug where |SchemaName| didn't display the correct table schema for columns; this is fixed, and the |NodeName| and |ObjectName| values now show the same thing.

    You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • Stop Your Mouse from Waking Up Your Windows 7 Computer

    - by The Geek
    If you use Sleep Mode on your PC, have you ever noticed that moving your mouse will wake the computer from sleep mode? If you would prefer the PC to wake up only when you hit a key instead, there’s a simple tweak. Just type Mouse into the Start Menu search box, or the Control Panel search box, and then open up the Mouse Properties panel. Find the Hardware tab, select your mouse in the list, and then click the Properties button. You’ll have to click the “Change settings” button before you can see the Power Management tab… And now, you can uncheck the box “Allow this device to wake the computer”. That’s all there is to it.
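
    If you prefer the command line, the same setting can usually be flipped with powercfg from an elevated Command Prompt. This is a hedged sketch: the device name "HID-compliant mouse" is an assumption, so use the name exactly as the first command reports it.

        :: List every device currently allowed to wake the computer
        powercfg -devicequery wake_armed

        :: Stop the mouse from waking the PC (use the name reported above)
        powercfg -devicedisablewake "HID-compliant mouse"

        :: Re-enable it later if you change your mind
        powercfg -deviceenablewake "HID-compliant mouse"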

    Read the article

  • Why do SEO-based code tips not appear to affect ranking?

    - by Ben
    I've been researching various methods for SEO where pages have precise titles, keywords are highlighted with heading tags, and the many boxes stated in good page markup for SEO are ticked. However, when looking at some top-ranked sites on Google for key terms, they have terrible SEO-based markup: really long page titles, no heading tags, limited appearance of keywords in the text, and so on. SEO analysis services rate them lower than other sites, yet these sites rank really high. Even with a low number of back-links they rank highly, so I don't understand how these sites earn the position when they appear inferior to those below them which have better markup and links. I don't want to cause trouble by mentioning sites or keywords etc., but looking in Google at 'executive search', it makes no sense why the roughly 5th-placed site should rank so highly, especially with all the added .swfs. The same applies to the top result for 'Japan Executive Search'. My main point is that these sites seem not to follow the important structural rules stated in SEO page-rating applications and general suggested best practice, nor do they show many back-links. It makes me feel like there is no point bothering to write decent markup if it really doesn't matter. Can anyone explain how sites with such markup and few back-links can outrank well-written and well-structured sites with greater linkage? Sorry if this is a fuzzy question; I want to avoid singling out any sites, but it really has me perplexed that sites which appear to ignore the suggested best practices rank so well.

    Read the article

  • Which algorithm is used in Advance Wars-type turn-based games?

    - by Jan de Lange
    Has anyone tried to develop, or does anyone know of, an algorithm such as is used in a typical turn-based game like Advance Wars, where the number of objects and the number of moves per object may be too large to search through to a reasonable depth, as one would do in a game with a smaller search space like chess? There is some path-finding needed to engage in combat, harvest, or move to an object, so that in the next move such actions are possible. With this you can build a search tree for each unit, resulting in a large tree for all units. With a cost function one can determine the best moves. Then the board flips over to the player role (min/max) and the computer searches for the best player move, flips back, and so on, up to a number of cycles deep. Finally it has found the best move and now it's the player's turn. But he may be asleep by now... So how is this done in practice? I have found several good sources on A*, DFS, BFS, evaluation/cost functions, etc. But as of yet I do not see how I can put it all together.
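
    A minimal sketch of the depth-limited min/max search described above, written in Python. The game-specific pieces (legal_moves, apply, evaluate) are hypothetical placeholders for a real engine, and an Advance Wars-scale game would also need pruning (alpha-beta, move ordering, or capping the candidate moves per unit) to keep the tree manageable.

        def minimax(state, depth, maximizing, game):
            # `game` is an assumed object exposing:
            #   legal_moves(state, maximizing) -> iterable of legal moves for the side to move
            #   apply(state, move)             -> the resulting state
            #   evaluate(state)                -> heuristic score (higher = better for the computer)
            moves = list(game.legal_moves(state, maximizing))
            if depth == 0 or not moves:
                return game.evaluate(state), None

            best_score = float('-inf') if maximizing else float('inf')
            best_move = None
            for move in moves:
                score, _ = minimax(game.apply(state, move), depth - 1, not maximizing, game)
                if (maximizing and score > best_score) or (not maximizing and score < best_score):
                    best_score, best_move = score, move
            return best_score, best_move

        # Illustrative usage: score, move = minimax(start_state, depth=3, maximizing=True, game=engine)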

    Read the article

  • SEO - Index images (lazyload)

    - by Guilherme Nascimento
    Note: my question is not about JavaScript. I'm developing a plugin for jQuery/MooTools/Prototype that works with the DOM. The plugin's purpose is to improve page performance (a better user experience), and it will be distributed to other developers so that they can use it in their projects. How the lazy load works: the images are only loaded when you scroll down the page (it will look like this LazyLoad demo: http://www.appelsiini.net/projects/lazyload/enabled_timeout.html). But it does not need HTML5; I am referring to this attribute: data-src="image.jpg". Two good examples of websites using lazy loading are youtube.com (suggested videos) and facebook.com (photo gallery). I believe that the best alternative would be to use <A href="image.jpg">Content for ALT=""</a> and convert it using JavaScript into <IMG alt="Content for ALT=\"\"" src="image.jpg">. Then you ask me: why do you want to do that anyway? I'll tell you: because HTML5 is not supported by all browsers (especially mobile), and the data-src="image.jpg" attribute does not work at all with indexers. I need the HTML to be fully accessible to search engines; otherwise the plugin will not be something good for other developers. I thought about doing this to help with indexing: <noscript><img src="teste.jpg"></noscript>, but noscript has a negative effect on indexing (I refer to the contents of noscript). I want a plugin that will not obstruct image indexing in search engines. This plugin will be used by other developers (and by me too). This is my question: how can I make HTML images accessible to search engines while minimizing requests?
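
    A minimal jQuery sketch of the anchor-to-image conversion the question proposes. The 'lazy' class name and the simple viewport check are illustrative assumptions, not part of any existing plugin; a production version would throttle the scroll handler and handle images inside scrollable containers.

        // Convert <a class="lazy" href="image.jpg">alt text</a> into deferred <img> tags,
        // then fill in the real src only once the image scrolls into view.
        $(function () {
          $('a.lazy').each(function () {
            var $link = $(this);
            $link.replaceWith($('<img>', { alt: $link.text(), 'data-src': $link.attr('href') }));
          });

          function loadVisible() {
            $('img[data-src]').each(function () {
              var $img = $(this);
              if ($img.offset().top < $(window).scrollTop() + $(window).height()) {
                $img.attr('src', $img.attr('data-src')).removeAttr('data-src');
              }
            });
          }

          $(window).on('scroll resize', loadVisible);
          loadVisible(); // load anything already visible when the page opens
        });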

    Read the article

  • A mechanism to include site title in every page, but not in <title> element

    - by Saeed Neamati
    Each site can have a name, for example Site X. Each page can also have a name (or a title) that should appear in the <title> tag in the header. However, many websites out there use the combination "site name - page name" as the value of the <title> tag. I find that a little far from being semantic. On the other hand, if you only include the page title in the <title> tag, search engines won't find your site by its name. For example, if your site's name is Thought Results and you don't include it in page titles, then if you search for Thought Results, you won't find your site in the SERPs. Thus I'm searching for a mechanism that both includes the site title (not the page title) on every page, and also keeps only the page title in the <title> tag, to get more semantic results. Is there any way to achieve this?

    Read the article

  • SharePoint Server 2007 generates an event log entry every 5 minutes - "The SSP Timer Job Distribution List Import Job was not run"

    - by Teevus
    I get the following error logged to the Event Log every 5 minutes: "The SSP Timer Job Distribution List Import Job was not run. Reason: Logon failure: the user has not been granted the requested logon type at this computer." In addition, OWSTimer.exe periodically gets into a state where it's consuming almost all the CPU, and only killing the process or restarting the SharePoint services fixes it (although I'm not sure if this is a related or separate issue). I have tried the following (based on various suggestions floating around the web), all to no avail:

    - iisreset (no effect).
    - Added the SharePoint and SharePoint Search service accounts to the "Log on as a batch job" and "Log on as a service" policies in the Group Policies for the domain. I went into the Local Computer Policy on the SharePoint server and verified that those policies had actually been applied.
    - Verified that the SharePoint and SharePoint Search service accounts are both in the WSS_WPG group.
    - Verified in dcomcnfg that the WSS_WPG group (and indeed the SharePoint and SharePoint Search service accounts) has local activation rights for SPSearch.

    Any more suggestions would be valued. Thanks

    Read the article

  • Rawr Code Clone Analysis – Part 0

    - by Dylan Smith
    Code Clone Analysis is a cool new feature in Visual Studio 11 (vNext). It analyzes all the code in your solution and attempts to identify blocks of code that are similar, and thus candidates for refactoring to eliminate the duplication. The power lies in the fact that the blocks of code don't need to be identical for Code Clone to identify them; it will report Exact, Strong, Medium and Weak matches indicating how similar the blocks of code in question are. People that know me know that I'm enthusiastic (to put it mildly) about both writing clean code and taking old crappy code and making it suck less. So the possibilities for this feature have me pretty excited if it works well - and that's a big if that I'm hoping to explore over the next few blog posts. I'm going to grab the Rawr source code from CodePlex (a World of Warcraft gear calculator program), run Code Clone Analysis against it, then go through the results one by one and refactor where appropriate, blogging along the way. My goals with this blog series are twofold: to evaluate and demonstrate Code Clone Analysis, and to provide some concrete examples of refactoring code to eliminate duplication and improve the code-base. Here are the initial results. Code Clone Analysis has found 129 Exact Matches, 201 Strong Matches, 300 Medium Matches, and 193 Weak Matches. Also indicated is a total of 45,181 potentially duplicated lines of code that could be eliminated through refactoring. Considering the entire solution only has 109,763 lines of code, if true, the duplicated-lines figure is pretty significant. In the next post we'll start examining some of the individual results and determine if they really do indicate a potential refactoring.

    Read the article

  • Spirent Communications Improves Customer Experience with Knowledge Management

    - by Tony Berk
    Spirent Communications plc is a global leader in test and measurement, inspiring innovation within development labs, communication networks and IT organizations. The world’s leading communications companies rely on Spirent to help design, develop, validate, and deliver world-class networks, devices, and services. Spirent’s customers require high levels of support for a diverse and complex product portfolio, and the company is committed to delivering on this requirement. Spirent needed a solution to help its customers get the information they need quickly and at their convenience through its Web site. After evaluating several solutions, Spirent selected and deployed Oracle Knowledge for Web Self Service Enterprise Edition. Oracle Knowledge Management uses natural language processing to understand the true intent of each inquiry logged via the support portal’s search function. The Spirent Knowledge Base on the company’s Customer Support Center (CSC) finds the best possible answer using search enhancement features, such as communications industry-specific libraries and federation to search external sources. Spirent has reduced contact center call volume while better serving its customers. Each time a customer uses the knowledge base, they find answers faster than by calling, and it saves Spirent an average of US$210 per call, which is significant when multiplied across the thousands of calls received monthly. Oracle Knowledge also helps support engineers find answers more quickly, enabling the company to scale without adding additional support engineers. Oracle Knowledge is integrated with Spirent's Siebel Contact Center implementation to provide an integrated desktop for CRM and agent intelligence, avoiding the need for contact center personnel to toggle between various screens to address customer inquiries, thereby accelerating customer service. Click here to learn more about Spirent's use of Siebel CRM and Oracle Knowledge Management.

    Read the article

  • WolframAlpha Can Now Do In-depth Analysis of Your Facebook Account

    - by Jason Fitzpatrick
    If you’re a big fan of WolframAlpha’s ability to crunch the numbers on just about anything (and we certainly are), you’ll likely be just as delighted as we were to watch it massage the data from your Facebook account. Find out your most liked, discussed, and shared posts, see your Facebook habits, and spot other neat trends. I unleashed it on my account this morning, not sure what to expect from the results. Within the results tabulation, WolframAlpha provided me with all sorts of neat data breakdowns. I now know exactly how many days it is to my next birthday, the composition of my aggregate posting habits (how many posts are status updates, links, or photos), the time of day when I do the most posting (and what the composition of those posts is), and my average post length. I also know my most liked post and my most commented-on post. It will even crunch the numbers on your network of friends (60.6% of my friends are married, for example). One of the more interesting analyses it performs on the friendship data is organizing all your friends into relationship clusters, so you can see who in your Facebook network is friends with other people in your Facebook network. The service from WolframAlpha is free: simply visit the WolframAlpha search portal and type in “Facebook report” to start the process. You’ll be prompted to create a WolframAlpha account if you don’t have one and to authorize the WolframAlpha Facebook app to access your data. Your Facebook data is cached to your WolframAlpha account for one hour in order to crunch the numbers and display the results.

    Read the article

  • In Google Webmaster Tools we have 3 sitemaps attributed to 1 domain

    - by Frank
    Thanks for your advice and help ahead of time. I have a website that has been on the internet for almost 10 years, created in "Microsoft FrontPage", with over 900 pages. Currently in Google Webmaster Tools it shows up as 2 domains and 3 sitemaps: http://www.example.com, example.com, and hostedsitemaps.com. Furthermore, since we were having a hard time placing the XML sitemap on our site (FrontPage issues), we hired pro-sitemaps.com to create, host and upload the XML file, which they did. Thus, I have another site, hostedsitemaps.com, in our Webmaster Tools for the site.

    Hostedsitemaps.com shows: 900 URLs submitted, 800 indexed. Crawl errors and search queries: no data available.

    http://www.example.com shows: 889 URLs submitted, 1 URL indexed. Crawl errors: 14 soft 404, 796 not found. Search queries: 8104.

    example.com shows: 889 URLs submitted, 1 URL indexed. Crawl errors: 48 soft 404, 91 not found. Search queries: 8104.

    My questions and need for help are as follows: 1. Why are our domain-based sites in Webmaster Tools (example.com and http://www.example.com) showing only 1 URL indexed while the hosted sitemap has 800 indexed? 2. Should we have 3 domains configured for this "one" domain in Google Webmaster Tools? 3. Should we eliminate/delete the hosted sitemap from Webmaster Tools completely and take off that XML sitemap? 4. Does having example.com and http://www.example.com impact web ranking? 5. Any other thoughts or help in this very complicated matter for us. Thanks.

    Read the article

  • How to make the start menu find a program based on a custom keyword?

    - by Pierre-Alain Vigeant
    I am searching for a way to type a keyword in the Start Menu "Search programs and files" field so that it returns the application matching that keyword. An example will explain this better: suppose that I want to start PowerShell. Currently what I can type in the search field is power, and the first item that appears is the 64-bit PowerShell shortcut. Now suppose that I'd like ps to return PowerShell as the first item in the search list. Currently, typing ps returns all files with the .ps extension, along with a Control Panel option about recording steps, but not the PowerShell executable itself. How can I do that?

    Read the article

  • Setting up autotest with rspec in ubuntu

    - by Reactor5
    I'm trying to set up autotest on Ubuntu, and no matter what my configuration, I get this:

        loading autotest/rails_rspec2
        style: RailsRspec2
        /home/brian/.rvm/gems/ruby-1.9.2-rc2@rails3tutorial/gems/redgreen-1.2.2/lib/redgreen/autotest.rb:6:in `<top (required)>': uninitialized constant Object::PLATFORM (NameError)

    The .autotest file I have (~/.autotest) is as follows:

        #!/usr/bin/env ruby
        require 'redgreen/autotest'

        def self.notify title, msg, img, pri='low', time=3000
          `notify-send -i #{img} -u #{pri} -t #{time} '#{msg}'`
        end

        Autotest.add_hook :ran_command do |at|
          results = [at.results].flatten.join("\n")
          output = results.slice(/(\d+)\s+examples?,\s*(\d+)\s+failures?(,\s*(\d+)\s+not implemented)?(,\s*(\d+)\s+pending)?/)
          folder = "~/Pictures/autotest/"
          if output =~ /([123456789]|[\d]{2,})\sfailures?/
            notify "FAIL:", "#{output}", folder+"rails_fail.png", 'critical', 10000
          elsif output =~ /[1-9]\d*\spending?/
            notify "PENDING:", "#{output}", folder+"rails_pending.png", 'normal', 10000
          else
            notify "PASS:", "#{output}", folder+"rails_ok.png"
          end
        end

    What am I doing wrong here?

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .NET web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshal the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests - for example, I don't see any reason to mock away the web service - the work performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state - the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->deleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test with respect to, for example, the number of users in the system). However, this still seems pretty fragile - lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
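
    One common way to reduce the coupling described above is to move the cleanup out of the test body and into tearDown(), which PHPUnit runs after every test even when an assertion fails, so a failing test can't leave stray users behind for the next one. A hedged sketch follows; the WebServiceClient class and the exact proxy method names are illustrative assumptions, not the real library.

        <?php
        class UserProvisioningTest extends PHPUnit_Framework_TestCase
        {
            private $client;
            private $createdUserIds = array();

            protected function setUp()
            {
                // assumed proxy under test
                $this->client = new WebServiceClient();
            }

            protected function tearDown()
            {
                // runs even if an assertion failed, so the service is reset for the next test
                foreach ($this->createdUserIds as $id) {
                    $this->client->DeleteUser($id);
                }
                $this->createdUserIds = array();
            }

            public function testCreateUserReturnsValidUserId()
            {
                $newUserId = $this->client->CreateUser("user1");
                $this->createdUserIds[] = $newUserId;

                $this->assertNotNull($newUserId);
                $this->assertNotNull($this->client->FindUser($newUserId));
            }
        }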

    Read the article

  • Fix 403 errors in Google Webmaster Tools

    - by Justin
    Hi Team, I have a domain that has "fallen off a cliff" for searches in Google. Searches that used to be in position 1-4 are now gone from page 1. The same search in Bing shows the typical position expected (top 5 results). In reviewing Google Webmaster Tools, I am seeing two problems:

    1. The sitemap is reporting two errors: "General HTTP error: HTTP 403 error (Forbidden)" and "URLs not accessible". However, the URL they flag as not accessible is accessible: I can click the link Google provides and it works fine.

    2. There are 6,000 crawl errors of type 403. Again, most of these pages that report 403 are accessible in my browser (I tried various browsers as well). About half are from January, the other half from November.

    Some things I have checked:

    - There are no IP-specific firewall rules on ports 80 and 443 that could block the googlebot.
    - Using the user agent switcher add-on for Firefox, I confirmed that the pages load when the user agent is the googlebot.
    - I can confirm that most of the pages reported as 403 are accessible.

    A search of just "site:thedomain.com" does confirm there are over 9,000 pages in the index, but most searches don't return the site. I believe the 403 issues are the cause of the fall in search rankings, but I can't seem to find any information online with ideas about how to address this. Any ideas? jpe
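
    As an aside, the user-agent check described above can also be approximated from a shell with curl. This is a hedged sketch: the path is a placeholder, and the user-agent string is the one Google commonly publishes for Googlebot, which may change over time. A 403 here combined with a 200 from a normal browser points at user-agent or IP-based filtering somewhere in the stack.

        # Fetch only the response headers while identifying as Googlebot
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://thedomain.com/some-page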

    Read the article

  • Google authorship verification issue

    - by Fraser
    I'm trying to get my blog content author-verified so my face gets into the Google search results. I managed to achieve this a few weeks back: when testing my content in the Google authorship testing tool, it reported that I had been verified and I could see my mug in the results. All I had to do was wait a couple of weeks before I started popping up in the search results (I think?). However, I seem to have thrown a spanner in the works. I set up Google Apps for my domain and merged my old Google+ profile into my Google Apps account. This seemed to reset my Google+ profile (no biggy, since it was a new profile and only had one connection). I re-set up my G+ account and tied it all in to my blog and its content. I am now seeing some very strange behaviour. If you take a look at one of my blog posts through the snippet testing tool: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fblog.fraser-hart.co.uk%2Fjquery-fullscreen-background-slideshow%2F&html= you will see that it is not recognising me as an author. However, when you enter my profile URL (https://plus.google.com/108765138229426105004) into the "Authorship verification by email" input, you will see that it does in fact recognise it as verified. Now, if you try and verify the same page again, it reverts back to unverified. I thought I might have to just wait it out, but it has been over a week now, and previously (before I merged my profile) it happened instantaneously. Has anyone experienced this bizarre behaviour before? What is happening here? More importantly, is there anything I can do to resolve it? (Apologies for the long and boring question). Cheers!

    Read the article

  • Why to avoid SELECT * from tables in your Views

    - by Jeff Smith
    -- clean up any messes left over from before:
    if OBJECT_ID('AllTeams') is not null
      drop view AllTeams
    go
    if OBJECT_ID('Teams') is not null
      drop table Teams
    go

    -- sample table:
    create table Teams (
      id int primary key,
      City varchar(20),
      TeamName varchar(20)
    )
    go

    -- sample data:
    insert into Teams (id, City, TeamName )
    select 1,'Boston','Red Sox' union all
    select 2,'New York','Yankees'
    go

    create view AllTeams
    as
      select * from Teams
    go

    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Now, add a new column to the Teams table:
    alter table Teams add League varchar(10)
    go

    -- put some data in there:
    update Teams set League='AL'

    -- run it again
    select * from AllTeams

    --Results:
    --
    --id          City                 TeamName
    ------------- -------------------- --------------------
    --1           Boston               Red Sox
    --2           New York             Yankees

    -- Notice that League is not displayed!

    -- Here's an even worse scenario, when the table gets altered in ways beyond adding columns:
    drop table Teams
    go

    -- recreate table putting the League column before the City:
    -- (i.e., simulate re-ordering and/or inserting a column)
    create table Teams (
      id int primary key,
      League varchar(10),
      City varchar(20),
      TeamName varchar(20)
    )
    go

    -- put in some data:
    insert into Teams (id,League,City,TeamName)
    select 1,'AL','Boston','Red Sox' union all
    select 2,'AL','New York','Yankees'

    -- Now, Select again for our view:
    select * from AllTeams

    --Results:
    --
    --id          City       TeamName
    ------------- ---------- --------------------
    --1           AL         Boston
    --2           AL         New York

    -- The column labeled "City" in the View is actually the League, and the column labelled TeamName is actually the City!
    go

    -- clean up:
    drop view AllTeams
    drop table Teams

    Read the article

  • Lost files after installing Ubuntu

    - by Joshua Rosato
    I installed Ubuntu on my laptop over Windows; I had 2 partitions on one hard disk. It seems like my second partition is gone with all my files. How can I recover the old files? They weren't on the same partition as Windows. I read that the partition has probably just not been mounted, so I ran sudo fdisk -l to find all the partitions and then ran sudo mount. However, I can't tell from the results of sudo mount what is not mounted, and I am also unsure how to mount it once I find the unmounted partition.

    sudo fdisk -l - Results

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0002c6dc

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *         2048   486322175   243160064   83  Linux
        /dev/sda2        486324222   488396799     1036289    5  Extended
        /dev/sda5        486324224   488396799     1036288   82  Linux swap / Solaris

    sudo mount - Results

        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/cgroup type tmpfs (rw)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        udev on /dev type devtmpfs (rw,mode=0755)
        devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
        none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
        none on /run/shm type tmpfs (rw,nosuid,nodev)
        none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
        none on /sys/fs/pstore type pstore (rw)
        systemd on /sys/fs/cgroup/systemd type cgroup (rw,noexec,nosuid,nodev,none,name=systemd)
        gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=joshy1)

    Read the article

  • Help Creating a Google Analytics Funnel for the Checkout Process

    - by Drew
    I have a funnel question. I am currently working on tracking (through GA) guest and logged-in member activity once they get to my site's shopping cart, but I need help with setting up funnels. Specifically, I need to see: total sales, logged-in member total sales, and guest member sales. The URLs associated with the checkout process are:

    Logged-in members:
    - /cart (arriving at checkout)
    - /checkout (checking out as a logged-in member)
    - /checkout/confirmation (thank you - confirmed sale)

    Guest members:
    - /cart (arriving at checkout)
    - /checkout-guest (checking out as a guest)
    - /checkout/confirmation (thank you - confirmed sale)

    I've tested the funnels set up for the above with 9 transactions, but the end maths doesn't seem to line up. The total sales funnel shows 9 completed transactions when tracking only these two URLs: /cart and /checkout/confirmation. Which is great - because it's working. The logged-in member funnel shows a total of 9 completed transactions based on each step of the logged-in URLs (above) being tracked in a funnel. Not good, because this number should be 3. The guest checkout funnel (see guest steps above) shows 9 as well. What the?!?!?!? The results I am looking for should be: total sales = 9, logged-in members = 3, guest members = 6. Is there any way to set these URLs up so that the funnels report the correct results, or do I need to change the URLs and provide logged-in members and guests with stand-alone purchase confirmation pages (which would mean I cannot track total sales combining results from both streams)? Any knowledge in this area is welcome. Thanks.

    Read the article

  • Where to find Oracle Training for BI & EPM Partners

    - by Mike.Hallett(at)Oracle-BI&EPM
    We run both “Live Virtual Training” (web-based classes) as well as “In Class Training” in most countries around Europe, Middle East and Africa. Some of these are subsidised for OPN partners, while others are available at a discount (usually 25%) to OPN partners via OU (Oracle University). To see what is scheduled for in-depth hands-on implementation training for partners, see the Oracle Business Intelligence Enterprise Edition Plus Implementation Boot Camp. For example, these are some of the OBI11g Boot Camps we currently have scheduled: 11 - 15 June 2012, Bucharest, Romania; 21 - 23 August 2012, Johannesburg, South Africa; 24 - 28 September 2012, Utrecht, Netherlands. Other boot camps include the Oracle Essbase Implementation Boot Camp, Oracle GoldenGate Implementation Boot Camp, Hyperion Planning Boot Camp, Hyperion Financial Management Boot Camp, and Oracle Business Intelligence Applications for ERP Boot Camp. You can also selectively filter and search for courses via the Partner Events Calendar @ http://events.oracle.com/search/search?group=Events&keyword=OPN+Only. Otherwise, it is worth checking the Oracle Partner Enablement BLOG for any BI / EPM news, especially the sub-blogs on the right for each country. There is also a monthly Partner Enablement Update (PDF) to find out about the latest partner training on Oracle's new products and new releases.

    Read the article

  • Why doesn't my grub background show?

    - by luri
    I've tried to change the resolution, colors and background image for my grub menu, but I get no background (well, just a black one, no image)... What am I doing wrong? This is my grub.cfg (omitting the OSs part):

        #
        # DO NOT EDIT THIS FILE
        #
        # It is automatically generated by grub-mkconfig using templates
        # from /etc/grub.d and settings from /etc/default/grub
        #

        ### BEGIN /etc/grub.d/00_header ###
        if [ -s $prefix/grubenv ]; then
          set have_grubenv=true
          load_env
        fi
        set default="${saved_entry}"
        if [ "${prev_saved_entry}" ]; then
          set saved_entry="${prev_saved_entry}"
          save_env saved_entry
          set prev_saved_entry=
          save_env prev_saved_entry
          set boot_once=true
        fi
        function savedefault {
          if [ -z "${boot_once}" ]; then
            saved_entry="${chosen}"
            save_env saved_entry
          fi
        }
        function recordfail {
          set recordfail=1
          if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
        }
        function load_video {
          insmod vbe
          insmod vga
        }
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        if loadfont /usr/share/grub/unicode.pf2 ; then
          set gfxmode=1280x1024x24
          load_video
          insmod gfxterm
        fi
        terminal_output gfxterm
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        set locale_dir=($root)/boot/grub/locale
        set lang=es
        insmod gettext
        if [ "${recordfail}" = 1 ]; then
          set timeout=-1
        else
          set timeout=10
        fi
        ### END /etc/grub.d/00_header ###

        ### BEGIN /etc/grub.d/05_debian_theme ###
        insmod part_msdos
        insmod ext2
        set root='(hd1,msdos5)'
        search --no-floppy --fs-uuid --set 42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        insmod jpeg
        if background_image /boot/grub/Serenity_Enchanted_by_sirpecangum.jpg ; then
          set color_normal=black/white
          set color_highlight=brown/light-gray
        else
          set menu_color_normal=white/black
          set menu_color_highlight=black/light-gray
        fi
        ### END /etc/grub.d/05_debian_theme ###

    The selected image has been copied to /boot/grub/Serenity_Enchanted_by_sirpecangum.jpg with no luck. I'm for sure missing something (probably something obvious) but I don't really get it...

    Read the article
