Search Results

Search found 33894 results on 1356 pages for 'case tools'.

Page 53 of 1356

  • Smart defaults [SSDT]

    - by jamiet
    I’ve just discovered a new, somewhat hidden, feature in SSDT that I didn’t know about and figured it would be worth highlighting here because I’ll bet not many others know it either; the feature is called Smart Defaults. It gets around the problem of adding a NOT NULLable column to an existing table that already contains data – prior to SSDT you would need to define a DEFAULT constraint, and it feels rather cumbersome to create an object purely for the purpose of pushing through a deployment. That is the situation Smart Defaults is meant to alleviate. The Smart Defaults option lives in the advanced section of a Publish Profile file. The description of the setting is “Automatically provides a default value when updating a table that contains data with a column that does not allow null values”; in other words, checking that option will cause SSDT to insert an arbitrary default value into your newly created NOT NULLable column. In case you’re wondering how it does it, here’s how: SSDT creates a DEFAULT constraint at the same time as the column is created and then immediately removes that constraint: ALTER TABLE [dbo].[T1] ADD [C1] INT NOT NULL, CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b] DEFAULT 0 FOR [C1]; ALTER TABLE [dbo].[T1] DROP CONSTRAINT [SD_T1_1df7a5f76cf44bb593506d05ff9a1e2b]; You can then update the value as appropriate in a Post-Deployment script. Pretty cool! On the downside, you can only specify this option for the whole project, not for an individual table or even an individual column. I’m not sure that I’d want to turn this on for an entire project as it could hide problems that a failed deployment would highlight; in other words, Smart Defaults could be seen as “papering over the cracks”. If you think that should be improved, go and vote (and leave a comment) at [SSDT] Allow us to specify Smart defaults per table or even per column. @Jamiet

    Read the article

  • What constitutes a "substantial, good-faith effort to remove the links"

    - by Luke McCallum
    We engaged the services of a 3rd party SEO consultant to assist us in managing our meta data and to write regular blogs on our site http://cyberdesignworks.com.au. Without our authorisation, the SEO also ran a link building campaign which has seen us Penguin slapped, and we no longer appear in Google for a number of our core keywords. Since notification by Google that we have "unnatural links" back in March, we have undertaken a significant campaign to rid ourselves of these dodgy backlinks by a number of methods. I have just received feedback on my 4th or 5th resubmission, which still advises that we need to make a "substantial, good-faith effort to remove the links" before Google will reconsider us for inclusion. After the effort that I have gone through to get links removed, I am now at a loss as to what else I can do to demonstrate a "substantial, good-faith effort to remove the links". Below is a summary of the actions that we have taken to date. According to http://removem.com we had about 5584 back-linking domains. Of those, we have successfully contacted and had links removed from 344 domains. We ignored links from 625 domains as they were either legitimate press releases, natural backlinks or client websites containing an attribution link in the footer that points back to us. Due to our efforts, or the sites simply becoming defunct, removem.com reports that links from 3262 domains have been removed. We have contacted but are yet to receive feedback from 1666 domains, so we can assume that those backlinks remain. We have configured an automatic 301 redirect for each of the links from these 1666 domains to point to http://redirects.sanscode.com/ which we are calling our Bad Link Catcher (a stroke of genius, I thought), e.g. http://www.mysimplewebdesign.com/create-a-perfect-webpage-with-four-important-tips-from-sydney-web-development-service-companies.php. As we are a web design agency, we have a large number of client websites which contain an attribution link in their footer that points back to us. We have gone through the vast majority of these and updated the links to replace the anchor text with an image and a rel="nofollow" link, e.g. <a rel="nofollow" target="_blank" href="http://www.cyberdesignworks.com.au/"><img src="https://sessions.sanscode.com/site/assets/media/badges/Badge_CDW_SANSCODE.png"></a> (see http://www.milkatwork.com.au/). An export from http://removem.com detailing the number of times we have contacted each link and whether it is still found or not was also supplied with each resubmission. The total number of back links reported in Google Webmaster Tools has dropped from over 100K to 87K, and I expect it to drop significantly lower once Google re-crawls each back-linking page. Based on all of the above, I am not sure what else I can do to demonstrate a "substantial, good-faith effort to remove the links". I would sincerely appreciate any feedback or suggestions that you may have, as I am out of ideas.

    Read the article

  • Problem installing SQL Server client tools

    - by Shiraz Bhaiji
    We are trying to install SQL Server 2005 Standard on Windows 2008 Standard, both 64-bit. We have done this before with no problems. This time we get an error during the installation of the client tools: "There was an unexpected failure during the setup wizard", Link Id 20476, message ID 50000. There are no errors in the event log. Anybody have any ideas what could be wrong?

    Read the article

  • What FOSS solutions are available to manage software requirements?

    - by boos
    In the company where I work, we are starting to plan to be compliant with a software development life cycle. We already have a wiki, a VCS, a bug tracking system, and a continuous integration system. The next step is to start managing software requirements in a structured way. We don't want to use a wiki or shared documents because we have many sources of input (developers, managers, commercial staff, security analysts and others) and we don't want to deal with a proliferation of .doc files around the network share. We are hoping to find and use a FOSS tool to manage all of this. We have about 30 people and don't have a budget for commercial software, so we need a free solution for requirements management. What we want is software that can manage the following. Required features: software requirements divided in a structured, configurable way; versioning of the requirements (history, diff, etc., like source code); interdependency of requirements (child of, parent of, related to); rule-based access control for data handling; multi-user, multi-project; file upload (for graphs, related documents and so on); report and extraction features. Optional features: web based; test cases; time-based management (timeline, expected data, result data); person allocation and so on; business-related stuff; hardware allocation handling. I have already played with TestLink and now I'm playing with RTH; the next one I'll try is Redmine.

    Read the article

  • Salt River Project Identifies US$500,000 in Cost Reduction Opportunities Through Unified IT Portfolio Management

    - by Melissa Centurio Lopes
    Salt River Project (SRP) includes two entities serving the Phoenix area: the Salt River Project Agricultural Improvement and Power District and the Salt River Valley Water Users’ Association. The SRP district operates various power plants and generating stations to provide electricity to nearly 956,000 retail customers. The SRP association maintains an extensive system of reservoirs, wells, and irrigation laterals to deliver nearly 1 million acre-feet of water annually. Salt River Project implemented Oracle’s Primavera Portfolio Management to unify management of its extensive IT portfolio, including essential utility systems, like work and asset management, as well as programming frameworks and development tools. With the system, SRP discovered almost US$500,000 in cost-reduction opportunities by identifying redundant or low-use software, including 150 applications that are close to being unsupported. The company retired 10 applications in the last year and upgraded 34 systems. SRP also identified preferred technologies and ensured that more than 90% of applications are based on standard technologies, reducing procurement costs, simplifying maintenance support, and lowering total cost of ownership. Solutions: provided approximately 70 users in the IT support group with detailed insight into the product lifecycle of each piece of IT infrastructure and software in the entire portfolio; discovered almost US$500,000 in cost-reduction opportunities by identifying redundant or low-use software that could be eliminated or migrated to alternative solutions; identified approximately 150 applications that are close to being unsupported and prioritized them to begin modernization. Why Oracle: Salt River Project chose Oracle’s Primavera Portfolio Management after evaluating it against four other solutions. “Oracle’s Primavera Portfolio Management offered the most functionality to support our diverse needs,” said Eileen Ahles, IT portfolio manager, Salt River Project.

    Read the article

  • How to manage two separate testing teams using different test tracking tools

    - by newuser
    I have two independent testing teams currently testing the same application. One team is using ClearQuest, and the other is using Mantis. It has been a huge effort to manage all of the duplicated bug reports. What options would improve this situation? My constraint is that the ClearQuest team will not change test reporting tools, and a migration to ClearQuest for the other team would also come with a large training effort.

    Read the article

  • Rack layout tools

    - by Luke
    I'm wondering if there are any tools (preferably offline) that would allow me to lay out all of the new equipment that will be going into several standard racks. Currently I'm using Excel, with columns mapping out all of the slots, but I suspect that there is some better method of doing this. Suggestions? Edit: Dell has an online tool, but it doesn't seem very good at actually saving the data that you're working on (and obviously it's geared towards Dell hardware).

    Read the article

  • Cannot install PC Tools firewall

    - by Philip
    I am unable to install PC Tools Firewall. It is fine up until after rebooting, then the PC Tools icon in the system tray hangs and indicates it is "initializing." I have waited 5 minutes and there is no change. I have tried multiple times. The Vista firewall was turned off prior to the attempted installation. Help! Thank you.

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections. Let's first consider the HTTP section. I assume that all broken links in this section were somehow found by the crawler; these are not the links from the sitemap. If all these links were found by scanning the pages from the sitemap for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? Please correct me if I am wrong. Sitemap section: it looks like all those links came from my sitemap, and there is a Linked From column. I already know that all those broken links are from the sitemap, so in order to fix the errors I should revise my sitemap. Am I wrong? Not followed section: I don't know what it means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all those redirects to be wrong redirects. Do you know if there is any set of rules for how a wrong redirect is determined? Actually, I found where my mistake was: I tried to normalize URLs and redirect to the right URL, but I did the normalization in a wrong way. Not found section: this section is like the HTTP section but with 404 errors. It has a Linked From column, but very often Linked From says unavailable. What does that mean? Google cannot tell me how it found this non-existing page? And how is this section related to the Sitemap section – does it contain all 404 links from the sitemap too? But there are too many 404 links, many more than in the sitemap. I took a look at what we have in Linked From, and I saw that one link came from a sitemap two months ago. But why does Google keep it indexed? The link is already dead and the new sitemap doesn't have it. Is there an expiry date for old links? Unreachable section: it looks like this section is for 500 errors, and it doesn't contain a Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I am not able to figure out how to deal with it. Sorry for such a big topic, but I just want to make clear what every section stands for, because that's crucial in order to deal with all those problems. Hopefully it will be useful not just for me. Thanks!

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently within Webmaster Tools I'm able to download a csv file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for the end result of one csv file that has every url with a 404, but also has every url that links to each one of them. Am I overlooking this functionality somewhere or does anyone have a good solution? Edit 1 (2/11/2013): Example of what the csv output looks like now: URL,Response Code,News Error,Detected,Category http://www.abcdef.com/123.php,404,,11/12/13,Not found http://www.abcdef.com/456.php,404,,11/12/13,Not found Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the urls that link to the page, and add that data to my spreadsheet. The output I would prefer: URL,Response Code,Linked From,News Error,Detected,Category http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (like before) but all of the "Linked To" pages are shown in one report. Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" urls I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from url is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
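    Lacking a bulk export, one workaround is to post-process the data outside Webmaster Tools. Below is a minimal Python sketch under the assumption that the linked-from URLs have been collected separately into a second csv with columns URL,Linked From (which is exactly the export GWT does not currently offer); the file names, column headers and function name here are hypothetical, not part of any Webmaster Tools API.

        import csv

        def expand_report(errors_csv, linked_from_csv, out_csv):
            # Map each error URL to the list of pages that link to it,
            # read from the (hypothetical) second export.
            linked_from = {}
            with open(linked_from_csv, newline="") as f:
                for row in csv.DictReader(f):
                    linked_from.setdefault(row["URL"], []).append(row["Linked From"])

            fields = ["URL", "Response Code", "Linked From",
                      "News Error", "Detected", "Category"]
            with open(errors_csv, newline="") as f_in, \
                 open(out_csv, "w", newline="") as f_out:
                writer = csv.DictWriter(f_out, fieldnames=fields,
                                        extrasaction="ignore")
                writer.writeheader()
                for row in csv.DictReader(f_in):
                    # One output row per referring page; an empty Linked From
                    # is written when no referrer is known for that URL.
                    for ref in linked_from.get(row["URL"], [""]):
                        writer.writerow({**row, "Linked From": ref})

    Run against the two exports, this would produce one row per (404 URL, referring page) pair, matching the preferred output shown above.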

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, failing if the file is still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process and have to make the best of what it is given. In other words, it may encounter an include of a header file that is not present in the include paths it knows about, and have to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm: (1) Start in the directory of the including file. (2) Is the header file found in the current directory or any subdirectory thereof? If so, done. (3) If we are at the root directory, the file doesn't seem to be present on this machine, so skip it. (4) Otherwise move to the parent of the current directory and go to step 2. Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
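    For reference, here is a minimal Python sketch of that algorithm, assuming the include names a plain file name or a relative path; the function name is illustrative only.

        from pathlib import Path
        from typing import Optional

        def locate_header(including_file: str, header_name: str) -> Optional[Path]:
            # Step 1: start in the directory of the including file.
            current = Path(including_file).resolve().parent
            while True:
                # Step 2: look in the current directory and every subdirectory of it.
                for candidate in current.rglob(header_name):
                    if candidate.is_file():
                        return candidate  # found, done
                # Step 3: at the filesystem root with no match, give up and skip it.
                if current == current.parent:
                    return None
                # Step 4: otherwise widen the search to the parent directory
                # and repeat from step 2.
                current = current.parent

    Note that each widening step re-scans directories already searched; in practice the results would probably be cached, but that detail is outside the algorithm as stated.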

    Read the article

  • Network and Server Management Tools

    - by jessieE
    We are building a farm of test servers. Currently we have 8 servers. We are planning to use the servers to test the following: MySQL Cluster; Xen or KVM virtualization; Heartbeat/Pacemaker/DRBD. What tools do experienced sysadmins use for: initial installation of the operating system (installing CentOS 5 or Ubuntu Server manually 8 times seems like a tedious task that just begs for automation); centralized configuration management and software updates for host and possibly guest (virtualized) servers; hardware, services and network monitoring?

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. What's interesting is that these domains are crossing over into each other's sitelinks for searches such as calvary christian college townsville, where results for one school (the Townsville school, as per the search term) are mixed with results for the other school. I put a demotion in for this 6 months ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we will need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Linux tools to choose suitable Cisco ASA 5500

    - by linuxcore
    I have a Linux web hosting server which is being hit by heavy DDoS attacks. I want to use a Cisco ASA 5500 Series Adaptive Security Appliance to protect the Linux server from this DDoS. I know there are many factors you should know before choosing the suitable hardware firewall, like the size of the DDoS, packets per second, etc. Please suggest Linux tools to measure those factors and to help me collect the required information (pps, size of the DDoS, concurrent connections and other factors). Regards,

    Read the article

  • Bacula optimization/profiling tools

    - by pufferfish
    I'm trying to get an idea of where the bottlenecks are in our backup system. Are there tools available for profiling this? If not, any pointers to a home grown method would also help. I guess most of the info would be in the bacula logs, but I'd also like to see things like what gets saturated during despooling: disk, CPU or network? This feels like a problem most bacula admins would have encountered.

    Read the article

  • Edit Top 200 Rows SQL Pane equivalent in Visual Studio 2012 SQL Server Tools "View Data"

    - by Johan Kronberg
    I've always used Edit Top 200 Rows and then edited the query in the SQL Pane of 2008 Management Studio to find the rows whose data I want to edit. Now I have the tools inside Visual Studio 2012 and want to be able to change the query after right-clicking a table and choosing "View Data", but I can't see that this is possible. Has the "SQL Pane" feature been removed, or am I not seeing something?

    Read the article
