Search Results

Search found 12933 results on 518 pages for 'collaboration tools'.

Page 34/518 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Firefox 4 show tools menu in button interface

    - by chris.nullptr
    I like the extra vertical space provided by the latest Firefox beta (4b9). However, the button interface doesn't give you access to the Tools menu. I know that I can temporarily enable the menu bar by holding down the ALT key, but flipping a flag in about:config would be preferable. Does such a flag exist? Edit: I want to be able to access the Tools menu from the new button interface like you can with Bookmarks, History, Options, and Help.

    Read the article

  • Microsoft Office 2010 Proofing Tools Kit

    - by Svish
    I have installed the Office 2010 available on MSDN, but there is no proofing tools kit available there yet. Still I see various sources where I can download this kit when I search for it on Google. Is the Proofing Tools Kit available yet or not? Are these sources I see on Google legitimate ones or should I stay away from them? Or are they also available from Microsoft directly somewhere I haven't looked yet?

    Read the article

  • apache log analysis tools for multiple virtual hosts?

    - by shreddd
    I am interested in trying to get a side by side usage comparison of all the virtual hosts being served up by my apache server. In the simplest case, I want to see a list (or bar chart) with each virtual host and the number of requests/traffic on that site. I've been playing around with webalyzer and awstats but I haven't been able to compare multiple virtual hosts in the same infographic. Anyone have any suggestions on tools for doing this (or how I might use the above tools to do so)?
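
    If the access logs record the virtual host (for example a shared combined log whose LogFormat starts with %v, so every vhost writes to one file), a rough side-by-side count can be pulled out with a short script like the sketch below. The log path and the assumption that the vhost is the first field are mine, not details from the question; adjust both to your setup.

      #!/usr/bin/env python3
      # Minimal sketch: count requests per virtual host from a shared Apache access log.
      # Assumes LogFormat "%v %h %l %u %t ..." so the virtual host is the first field on each line.
      from collections import Counter

      LOG_PATH = "/var/log/apache2/access.log"  # hypothetical path

      counts = Counter()
      with open(LOG_PATH) as log:
          for line in log:
              fields = line.split()
              if fields:
                  counts[fields[0]] += 1  # first field = virtual host under the format above

      for vhost, hits in counts.most_common():
          print(f"{vhost:40s} {hits:>10d}")

    The resulting per-vhost totals can then be charted however you like; alternatively, awstats or webalizer can be pointed at separate per-vhost log files and their reports compared by hand.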

    Read the article

  • Things to install on a new machine – revisited

    - by RoyOsherove
    As I prepare to get a new dev machine at work, I'm writing down the things I am going to install on it before writing the first line of code on that machine:
    Control Freak Tools:
    Everything Search Engine – a free and amazingly fast search engine for files all over your machine (just file names, not inside files). This is so fast I use it almost as a replacement for my start menu, but it's also great for finding those files that get hidden and tucked away in dark places on my system. Ever had a situation where you needed to see exactly how many copies of X.dll were hiding on your machine and where? This tool is perfect for that.
    Google Chrome – it's just fast. Very fast. Firefox has become the IE of alternative browsers in terms of speed and memory. Don't even get me started on IE.
    TweetDeck – get a complete view of what's up on Twitter.
    Total Commander – still my favorite file manager, over five years now.
    KatMouse – will scroll any window you're hovering over, even if it's not the active window, when you scroll the wheel on it.
    PowerISO or Daemon Tools – for loading up ISO images of discs.
    LogMeIn Ignition – quick access to your LogMeIn computers.
    For online backup: JungleDisk or BackBlaze.
    KeePass – save important passwords.
    MS Security Essentials – free antivirus that's quiet and doesn't make a mess of your system.
    For home: uTorrent – a torrent client that can read RSS feeds (like the ones from ezrss.it).
    Camtasia Studio and SnagIt – for recording and capturing the screen, and then adding cool effects on top.
    Foxit PDF Reader – much faster than Adobe Reader.
    Toddler Keys (for home) – for when your baby wants to play with your keyboard.
    Live Writer – for writing blog posts.
    For Lenovo ThinkPads: Lenovo System Update – if you have a "custom" system instead of the one that came built in, this will keep all your Lenovo drivers up to date.
    FileZilla – for FTP stuff.
    All the utils from Sysinternals (or try the live links), especially AutoRuns for deciding what's really going to load at startup, and procmon to see what's really going on with processes in your system.
    Developer stuff:
    Reflector – pure magic. Time saver. See the source code of any compiled assembly.
    Resharper – great for productivity and navigation across your source code.
    FinalBuilder – a commercial build automation tool. Love it. Much better than any XML-based time hog out there.
    TeamCity – a great visual and friendly server to manage continuous integration. Powerful features.
    Test Lint – a free add-in for VS 2010 I helped create, that checks your unit tests for possible problems and hints you about them.
    TestDriven.NET – a great test runner for VS 2008 and 2010 with some powerful features.
    VisualSVN – a commercial tool if you use Subversion. A very reliable add-in for VS 2008 and 2010.
    Beyond Compare – a powerful file and directory comparison tool. I love the fact that you can right-click in Windows Explorer on any file and select "Select left side to compare", then right-click on another file and select "Compare with left side". Great usability thought!
    PostSharp 2.0 – for adding system-wide concepts into your code (tracing, exception management). Goes great hand in hand with...
    SmartInspect – a powerful framework and viewer for tracing in your application. Lots of hidden features.
    Crypto Obfuscator – a relatively new obfuscation tool for .NET that seems to do the job very well.
    Crypto Licensing – from the same company – finally a licensing solution that seems to really fit what I needed. And it works.
    Fiddler 2 – great for debugging and tracing HTTP traffic to and from your app.
    Debugging Tools for Windows and DebugDiag – great for debugging scenarios.
    Regulator and Regulazy – for testing and generating regular expressions.
    Notepad 2 – for quick editing and viewing with syntax highlighting.
    Still wanting more? I think this should keep you busy for a while.

    Read the article

  • Want a headless build server for SSDT without installing Visual Studio? You’re out of luck!

    - by jamiet
    An issue that regularly seems to rear its head on my travels is that of headless build servers for SSDT. What does that mean exactly? Let me give you my interpretation of it.
    A SQL Server Data Tools (SSDT) project incorporates a build process that will basically parse all of the files within the project and spit out a .dacpac file. Where an organisation employs a Continuous Integration process they will likely want to automate the building of that dacpac whenever someone commits a change to the source control repository. In order to do that the organisation will use a build server (e.g. TFS, TeamCity, Jenkins) and hence that build server requires all the pre-requisite software that understands how to build an SSDT project. The simplest way to install all of those pre-requisites is to install SSDT itself, however a lot of folks don't like that approach because it installs a lot of unnecessary components, not least Visual Studio itself. Those folks (of which I am one) are of the opinion that it should be unnecessary to install a heavyweight GUI in order to simply get a few software components required to do something that inherently doesn't even need a GUI. The phrase "headless build server" is often used to describe a build server that doesn't contain any heavyweight GUI tools such as Visual Studio, and is a desirable state for a build server.
    In his blog post Headless MSBuild Support for SSDT (*.sqlproj) Projects, Gert Drapers outlines the steps necessary to obtain a headless build server for SSDT:
    "This article describes how to install the required components to build and publish SQL Server Data Tools projects (*.sqlproj) using MSBuild without installing the full SQL Server Data Tool hosted inside the Visual Studio IDE." http://sqlproj.com/index.php/2012/03/headless-msbuild-support-for-ssdt-sqlproj-projects/
    Frankly however, going through these steps is a royal PITA and folks like myself have longed for Microsoft to support headless builds for SSDT by providing a distributable installer that installs only the pre-requisites for building SSDT projects. Yesterday, in the MSDN forum thread Building a VS2013 headless build server - it's sooo hard, Mike Hingley complained about this very thing and it prompted a response from Kevin Cunnane from the SSDT product team:
    "The official recommendation from the TFS / Visual Studio team is to install the version of Visual Studio you use on the build machine."
    I, like many others, would rather not have to install full-blown Visual Studio and so I asked: Is there any chance you'll ever support any of these scenarios:
    1. Installation of all build/deploy pre-requisites without installing the VS shell?
    2. TFS shipping with all of the pre-requisites for doing SSDT project build/deploys?
    3. 3rd party build servers (e.g. TeamCity) shipping with all of the pre-requisites for doing SSDT project build/deploys?
    I have to say that the lack of a single installer containing all the pre-requisites for SSDT build/deploy puzzles me. Surely the DacFX installer would be a perfect vehicle for that? Kevin replied again:
    "The answer is no for all 3 scenarios. We looked into this issue, discussed it with the Visual Studio / TFS team, and in the end agreed to go with their latest guidance which is to install Visual Studio (e.g. VS2013 Express for Web) on the build machine. This is how Visual Studio Online is doing it and it's the approach recommended for customers setting up their own TFS build servers.
    I would hope this is compatible with 3rd party build servers but have not verified whether this works with TeamCity etc. Note that the DacFx MSI isn't a suitable release vehicle for this as we don't want to include Visual Studio/MSBuild dependencies in that package. It's meant to just include the core DacFx DLLs used by SSMS, SqlPackage.exe on the command line, etc. What this means is we won't be providing a separate MSI installer or nuget package with just the necessary build DLLs you need to run your build and tests. If someone wanted to create a script that generated a nuget package based on our DLLs and targets files, then release that somewhere on the web for easier integration with 3rd party build servers, we've no problem with that."
    Again, here's the link to the thread and it's worth reading in its entirety if this is something that interests you.
    So there you have it. Microsoft will not be providing support for headless build servers for SSDT, but if someone in the community wants to go ahead and roll their own, go right ahead.
    @Jamiet

    Read the article

  • Exception occurred in Java compiler

    - by user2892977
    I am a beginner in Java. I have JDK 1.7.0 installed on Windows 7. I just wrote a sample Java file which was not getting compiled and throws the below error.
    Sam.java:5: ';' expected
    Sample p = New Sample();
    An exception has occurred in the compiler (1.7.0-ea). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport) after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report. Thank you.
    java.lang.StringIndexOutOfBoundsException: String index out of range: 26
        at java.lang.String.charAt(String.java:694)
        at com.sun.tools.javac.util.Log.printErrLine(Log.java:251)
        at com.sun.tools.javac.util.Log.writeDiagnostic(Log.java:343)
        at com.sun.tools.javac.util.Log.report(Log.java:315)
        at com.sun.tools.javac.util.AbstractLog.error(AbstractLog.java:96)
        at com.sun.tools.javac.parser.Parser.reportSyntaxError(Parser.java:295)
        at com.sun.tools.javac.parser.Parser.accept(Parser.java:326)
        at com.sun.tools.javac.parser.Parser.blockStatements(Parser.java:1599)
        at com.sun.tools.javac.parser.Parser.block(Parser.java:1500)
        at com.sun.tools.javac.parser.Parser.block(Parser.java:1514)
        at com.sun.tools.javac.parser.Parser.methodDeclaratorRest(Parser.java:2569)
        at com.sun.tools.javac.parser.Parser.classOrInterfaceBodyDeclaration(Parser.java:2518)
        at com.sun.tools.javac.parser.Parser.classOrInterfaceBody(Parser.java:2445)
        at com.sun.tools.javac.parser.Parser.classDeclaration(Parser.java:2290)
        at com.sun.tools.javac.parser.Parser.classOrInterfaceOrEnumDeclaration(Parser.java:2228)
        at com.sun.tools.javac.parser.Parser.typeDeclaration(Parser.java:2217)
        at com.sun.tools.javac.parser.Parser.compilationUnit(Parser.java:2163)
        at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:530)
        at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:571)
        at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:822)
        at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:748)
        at com.sun.tools.javac.main.Main.compile(Main.java:386)
        at com.sun.tools.javac.main.Main.compile(Main.java:312)
        at com.sun.tools.javac.main.Main.compile(Main.java:303)
        at com.sun.tools.javac.Main.compile(Main.java:82)
        at com.sun.tools.javac.Main.main(Main.java:67)
    Below is the code for Sam.java:
    class sam {
        public static void main(String args[]) {
            Sample p = New Sample();
            p.show();
            p.display();
        }
    }
    I researched on Google with the various compiler options but that did not help. I would like to understand the two errors below.
    1 - Sam.java:5: ';' expected
    2 - An exception has occurred in the compiler (1.7.0-ea)

    Read the article

  • Google indexing and ranking a custom domain served by Google App Engine

    - by Hugues
    I have a website served at "http://www.plugimmo.com", which is a custom domain served by Google App Engine at http://plugimmo.appspot.com. For a while now I have tried to optimise the Google indexing and ranking, with no success. The problem is that searching on Google for the keywords in the title of my home page does not return my website at all, not even in the first 1,000 results. When checking the cached version on Google (cache:www.plugimmo.com), it says that the cached version is the one of 20-Aug-12 for "plugimmo.appspot.com". It looks like there are several issues:
    1 - The cached version is really old. I have made a lot of changes since 20-Aug-12 and I saw the googlebot crawling my site several times.
    2 - The cached version is for "plugimmo.appspot.com".
    3 - When looking at Google Webmaster Tools, I see that the number of pages indexed for www.plugimmo.com is 0, but that can't be the case given the number of changes I have made since then.
    My questions would therefore be the following:
    Why is the cached version so old although I saw the googlebot crawling the site many times since 20-Aug-12?
    Is there a problem with indexing a custom domain served by Google App Engine?
    Why is Google Webmaster Tools showing 0 pages indexed although new pages have been crawled and no errors have been reported in the indexing?
    Also, the site has been developed with Google Web Toolkit. I have followed the guidelines regarding crawling Ajax sites. The home page, when crawled by a robot, is redirected to http://www.plugimmo.com/HomeSnapshot.html Thanks a lot for your help! Hugues

    Read the article

  • Announcing Sesame Data Browser

    - by Fabrice Marguerie
    On the occasion of MIX10, which is currently taking place in Las Vegas, I'd like to announce Sesame Data Browser. Sesame will be a suite of tools for dealing with data, and Sesame Data Browser will be the first tool from that suite. Today, during the second MIX10 keynote, Microsoft demonstrated how they are pushing hard to get OData adopted. If you don't know about OData, you can visit the just-revamped dedicated website: http://odata.org. There you'll find information about the OData protocol, which allows you to publish and consume data on the web, the OData SDK (with client libraries for .NET, Java, JavaScript, PHP, iPhone, and more), a list of OData producers, and a list of OData consumers. This is where Sesame Data Browser comes into play. It's one of the tools you can use today to consume OData. I'll let you have a look, but be aware that this is just a preview and many additional features are coming soon. Sesame Data Browser is part of a bigger picture than just OData that will take shape over the coming months. Sesame is a project I've been working on for many months now, so what you see now is just a start :-) I hope you'll enjoy what you see. Let me know what you think.
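
    For readers new to OData, "consuming" a feed is just HTTP plus a query syntax; a minimal sketch in Python (purely illustrative - the sample service URL, entity set and fields are assumptions, not anything from this announcement) looks like:

      # Hypothetical sketch: read a few rows from a public OData producer over plain HTTP.
      import requests

      service = "https://services.odata.org/V4/Northwind/Northwind.svc"  # example OData producer
      resp = requests.get(
          f"{service}/Customers",
          params={"$top": "3", "$select": "CustomerID,CompanyName"},  # OData query options
          headers={"Accept": "application/json"},
      )
      resp.raise_for_status()
      for row in resp.json().get("value", []):  # OData JSON wraps collections in "value"
          print(row["CustomerID"], row["CompanyName"])

    Sesame Data Browser wraps that kind of plumbing in a GUI so you can point it at any OData producer and browse the results.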

    Read the article

  • Soft 404 error on redirected outbound links

    - by Techlands
    I have a redirect script on my site which sends visitors to an affiliate site. However, in the last month I've noticed that Google Webmaster Tools is reporting my outbound links as 404 errors. Here is the breakdown of how it's set up:
    My outbound links are coded like this: <a href="/f/c123" rel="nofollow" target="_blank">Link Title</a>
    My redirect script then performs a 302 redirect to the affiliate link.
    Originally I had the affiliate links (CJ) directly in the HTML; however, I noticed over time that this had some impact on my site's traffic. So I changed them to a redirect script and my traffic returned. This seemed to work with no issues for over a year, but now I'm getting soft 404 errors in Google Webmaster Tools. I did try adding a rule in my robots.txt to block any links starting with /f/, but I'm not sure whether this will help or whether Google will still report soft 404 errors. As a possible option, I am considering changing the a tag to a button tag and using an onclick event to load the link.
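
    For reference, the redirect endpoint being described boils down to something like the sketch below - a minimal Flask example in which the framework, the /f/<id> lookup table and the target URL are placeholders for illustration, not details taken from the site in question:

      # Hypothetical sketch of the redirect script: /f/c123 answers with a 302 to the affiliate URL.
      from flask import Flask, abort, redirect

      app = Flask(__name__)

      AFFILIATE_LINKS = {
          "c123": "https://affiliate.example.com/offer?id=123",  # placeholder mapping
      }

      @app.route("/f/<link_id>")
      def forward(link_id):
          target = AFFILIATE_LINKS.get(link_id)
          if target is None:
              abort(404)                     # a genuine 404 only for unknown ids
          return redirect(target, code=302)  # temporary redirect, as described above

      if __name__ == "__main__":
          app.run()

    Whether Google treats a 302 that points off-site as a soft 404 is exactly the open question here; the robots.txt rule on /f/ stops those URLs being crawled, but it does not by itself clear errors that have already been reported.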

    Read the article

  • hProduct microformats not working in Google

    - by silverfox
    I'm trying to work with hProduct and was testing it with Google's rich snippets testing tool for microformats (http://www.google.com/webmasters/tools/richsnippets), but it is not recognizing the data:
    it does not recognize the photo
    it does not recognize the price
    it does not recognize the category
    it only recognizes the rating
    HTML:
    <div class="hproduct">
      <span class="brand">ACME</span>
      <span class="fn">Executive Anvil</span>
      <img class="photo" src="http://microformats.org/wiki/skins/Microformats/images/logo.gif" />
      <span class="review hreview-aggregate">
        Average rating: <span class="rating">4.4</span>, based on <span class="count">89</span> reviews
      </span>
      Regular price: $179.99 Sale: $<span class="price">119.99</span> (Sale ends 5 November!)
      <span class="description">Sleeker than ACME's Classic Anvil, the Executive Anvil is perfect for the business traveler looking for something to drop from a height.</span>
      Category:
      <span class="category">
        <span class="value-title" title="Hardware > Tools > Anvils">Anvils</span>
      </span>
    </div>
    It also still shows this warning:
    warning: In order to generate a preview with rich snippets, either price or review or availability needs to be present.
    I used Google's own example: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=186036 I also tested against microformats.org: http://microformats.org/wiki/google-rich-snippets-examples

    Read the article

  • Requirements/issue tracker similar to online spreadsheet

    - by Maxim Eliseev
    Is there requirements/issue tracker software which is similar to a Google spreadsheet? We have FogBugz but I find it more heavyweight and slow than a simple spreadsheet. Is there a FogBugz alternative which:
    - is fast
    - can show issues/requirements as a spreadsheet and allows in-place editing
    - supports tree structures (where an issue can have child issues)?
    It is required for a small project. There will be 2 developers and 1-2 other users. I guess that only one user will be actively maintaining it.
    UPDATE: I do not say that a spreadsheet is better than FogBugz or similar tools. In fact I am looking for a tool which is similar to FogBugz and could replace a spreadsheet, but faster than FogBugz and with an additional feature (a table-like mode). I'd like to find a tool which can operate in a mode that looks like a table (one row per issue) but has a rich set of features (similar to FogBugz and JIRA). I find FogBugz (and similar tools) inconvenient because I must go through a web form in order to edit anything. In-place editing (when issues are shown as a table) would be much faster.

    Read the article

  • Should I use the cool, new and awesome stuff [closed]

    - by Ieyasu Sawada
    I'm in the field of web development. I follow a lot of awesome people on Twitter (Paul Irish, Alex Sexton, David Walsh, Rebecca Murphey, etc.). By following these people I'm constantly updated on the new and cool stuff in web development (backbone.js, angular.js, require.js, ember.js, jasmine, etc.). Now I'm creating a web application for a client, and because of the different tools, libraries and plugins that I'm aware of, I don't even know anymore when I need to use them, whether I even need them, or how to implement them in a sane way where I won't have to repeat myself (DRY). What's your advice on what I really need to do in order to become better? Do I really need to use these cool new tools, or should I just stick with what I know for now and try to make my code better? Should I unfollow these people in order to not pollute my mind with stuff that I can't really use now because I don't have the necessary experience to use it? What sort of things should I really be focusing on for someone like me who has only about 2 years of experience in the field of web development?

    Read the article

  • Information about how much time in spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code tell me how much time I spent in functions (like JetBrains dotTrace for .NET), but I'd like to have more information about the parameters passed to the function, in order to know which parameters impact the performance the most. Let's say that I have a function like this:
    int myFunction(int myParam1, int myParam2)
    {
        // Do and return something based on the value of myParam1 and myParam2.
        // The code is likely to use if, for, while, switch, etc....
    }
    I would like a tool that would allow me to tell how much time is spent in myFunction based on the value of myParam1 and myParam2. For example, the tool would give me a result looking like this for "myFunction":
    value    | value    | Number of | Average
    myParam1 | myParam2 | calls     | time
    ---------|----------|-----------|--------
    1        | 5        | 500       | 301 ms
    2        | 5        | 250       | 1253 ms
    3        | 7        | 1268      | 538 ms
    ...
    That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took on average 301 ms to return a value. The idea behind this is to do some statistical optimization by organizing my code such that the blocks of code that are the most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for etc. structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors and the like).
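
    Absent a profiler that buckets time by argument values, the same bookkeeping can be hand-rolled by wrapping the function. The sketch below is in Python purely to illustrate the idea (the question targets C++/Java/.NET); the decorator name and report format are invented for the example.

      # Minimal sketch: record call count and total wall-clock time per argument combination.
      import functools
      import time
      from collections import defaultdict

      def timed_by_args(func):
          stats = defaultdict(lambda: [0, 0.0])  # (args, kwargs) -> [call count, total seconds]

          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              start = time.perf_counter()
              try:
                  return func(*args, **kwargs)
              finally:
                  key = (args, tuple(sorted(kwargs.items())))
                  stats[key][0] += 1
                  stats[key][1] += time.perf_counter() - start

          def report():
              for (args, _kwargs), (count, total) in sorted(stats.items()):
                  print(f"{args}: {count} calls, avg {1000 * total / count:.1f} ms")

          wrapper.report = report
          return wrapper

      @timed_by_args
      def my_function(my_param1, my_param2):
          time.sleep(0.001 * my_param1 * my_param2)  # stand-in for real work
          return my_param1 + my_param2

      for _ in range(50):
          my_function(1, 5)
      for _ in range(20):
          my_function(2, 5)
      my_function.report()

    In C++/Java/.NET the same idea usually means instrumenting the function (or a proxy/aspect around it) to aggregate timings keyed by parameter values and dumping the table at shutdown.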

    Read the article

  • Generic software code style enforcer

    - by FuzziBear
    It seems to me to be a fairly common thing to do: you have some code that you'd like to automatically run through a code style tool to catch when people break your coding style guide(s). Particularly if you're working on code that has multiple languages (which is becoming more common with web-language-x plus JavaScript), you generally want to apply similar code style guides to both and have them enforced. I've done a bit of research, but I've only been able to find tools that enforce code style guidelines (not necessarily applying the code style, just telling you when you break the guidelines) for a particular language. It would seem to me a reasonably trivial thing to do by just using current IDE rules for syntax highlighting (so that you don't check style guide rules inside quotes or strings, etc.) and a whole lot of regexes to enforce some really generic things. Examples: requiring "if (" rather than "if(", or checking for lines with only whitespace. Are there any tools that do this kind of really generic style checking? I'd prefer it to be easily configurable for different languages (because, like it or not, some things would just not work cross-language) and to allow adding new "rules" to check new things.
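
    For what it's worth, the "whole lot of regexes" approach can be prototyped in a few dozen lines; the sketch below is a hypothetical illustration (the rule set, file handling and exit-code convention are all made up), not an existing tool, and a real checker would still need the per-language string/comment awareness mentioned above.

      # Minimal sketch of a generic, regex-driven style checker.
      import re
      import sys

      RULES = [
          (re.compile(r"\bif\("), "use 'if (' rather than 'if('"),
          (re.compile(r"^[ \t]+$"), "line contains only whitespace"),
          (re.compile(r"[ \t]+$"), "trailing whitespace"),
      ]

      def check_file(path):
          violations = 0
          with open(path, encoding="utf-8") as f:
              for lineno, line in enumerate(f, start=1):
                  stripped = line.rstrip("\n")
                  for pattern, message in RULES:
                      if pattern.search(stripped):
                          print(f"{path}:{lineno}: {message}")
                          violations += 1
          return violations

      if __name__ == "__main__":
          total = sum(check_file(p) for p in sys.argv[1:])
          sys.exit(1 if total else 0)

    Hooking something like this into a pre-commit hook or CI step gives the "tell me when the guide is broken" behaviour without trying to reformat anything.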

    Read the article

  • Should a software developer get a yearly equipment budget?

    - by CrazyDart
    I am looking at a new position with a new company. I have talked to some people in the past (in general, not at this company) who said they had been given a yearly budget to buy new computer stuff to keep up to date. Now, the reason I feel this question is worth asking here is that Joel comes right out and says an employer should pay for the best equipment money can buy... within reason of course. From The Joel Test: 12 Steps to Better Code:
    9. Do you use the best tools money can buy? Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer... Top notch development teams don't torture their programmers. Even minor frustrations caused by using underpowered tools add up, making programmers grumpy and unhappy. And a grumpy programmer is an unproductive programmer...
    Does anyone know if the industry has such a standard to offer an allowance or budget? I have never worked for a company like this, but I am thinking I should toss this in the ring for negotiations. Seems reasonable. How do bigger companies like MS, Google, and Apple handle this? If you say yes, give a range... I have been told numbers from $5k to $10k. Seems high to me, but hey, I would gladly take it.

    Read the article

  • How to configure hibernate-tools with maven to generate hibernate.cfg.xml, *.hbm.xml, POJOs and DAOs

    - by mmm
    Hi, can anyone tell me how to force Maven to precede the mapping .hbm.xml files in the automatically generated hibernate.cfg.xml file with the package path?
    My general idea is, I'd like to use hibernate-tools via Maven to generate the persistence layer for my application. So, I need the hibernate.cfg.xml, then all my_table_names.hbm.xml, and at the end the POJOs generated. Yet, the hbm2java goal won't work: I put the *.hbm.xml files into the src/main/resources/package/path/ folder, but hbm2cfgxml specifies the mapping files only by table name, i.e.:
    <mapping resource="MyTableName.hbm.xml" />
    So the big question is: how can I configure hbm2cfgxml so that hibernate.cfg.xml looks like below:
    <mapping resource="package/path/MyTableName.hbm.xml" />
    My pom.xml looks like this at the moment:
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>hibernate3-maven-plugin</artifactId>
      <version>2.2</version>
      <executions>
        <execution>
          <id>hbm2cfgxml</id>
          <phase>generate-sources</phase>
          <goals>
            <goal>hbm2cfgxml</goal>
          </goals>
          <inherited>false</inherited>
          <configuration>
            <components>
              <component>
                <name>hbm2cfgxml</name>
                <implementation>jdbcconfiguration</implementation>
                <outputDirectory>src/main/resources/</outputDirectory>
              </component>
            </components>
            <componentProperties>
              <packagename>package.path</packagename>
              <configurationFile>src/main/resources/hibernate.cfg.xml</configurationFile>
            </componentProperties>
          </configuration>
        </execution>
      </executions>
    </plugin>
    And then the second question: is there a way to tell Maven to copy resources to the target folder before executing hbm2java? At the moment I'm using mvn clean resources:resources generate-sources for that, but there must be a better way. Thanks for any help.
    Update: @Pascal: Thank you for your help. The path to mappings works fine now; I don't know what was wrong before, though. Maybe there is some issue with writing to hibernate.cfg.xml while reading the database config from it (though the file gets updated). I've deleted the file hibernate.cfg.xml, replaced it with database.properties and run the goals hbm2cfgxml and hbm2hbmxml. I also don't use the outputDirectory nor configurationfile in those goals anymore. As a result the files hibernate.cfg.xml and all *.hbm.xml are being generated into my target/hibernate3/generated-mappings/ folder, which is the default value. Then I updated the hbm2java goal with the following:
    <componentProperties>
      <packagename>package.name</packagename>
      <configurationfile>target/hibernate3/generated-mappings/hibernate.cfg.xml</configurationfile>
    </componentProperties>
    But then I get the following:
    [INFO] --- hibernate3-maven-plugin:2.2:hbm2java (hbm2java) @ project.persistence ---
    [INFO] using configuration task.
    [INFO] Configuration XML file loaded: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml
    12:15:17,484 INFO org.hibernate.cfg.Configuration - configuring from url: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml
    12:15:19,046 INFO org.hibernate.cfg.Configuration - Reading mappings from resource : package.name/Messages.hbm.xml
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD FAILURE
    [INFO] ------------------------------------------------------------------------
    [ERROR] Failed to execute goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java (hbm2java) on project project.persistence: Execution hbm2java of goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java failed: resource: package/name/Messages.hbm.xml not found
    How do I deal with that? Of course I could add:
    <outputDirectory>src/main/resources/package/name</outputDirectory>
    to the hbm2hbmxml goal, but I think this is not the best approach, or is it? Is there a way to keep all the generated code and resources away from the src/ folder? I assume the goal of this approach is not to generate any sources into my src/main/java or /resources folder, but to keep the generated code in the target folder. As I generally agree with this point of view, I'd like to continue with that, eventually executing hbm2dao and packaging the project to be used as a generated persistence layer component from the business layer. Is this also what you meant?

    Read the article

  • Workaround for datadude deployment bug - NullReferenceException

    - by jamiet
    I have come across a bug in Visual Studio 2010 Database Projects (aka datadude aka DPro aka Visual Studio Database Development Tools aka Visual Studio Team Edition for Database Professionals aka Juneau aka SQL Server Data Tools) that other people may encounter so, for the purposes of googling, I'm writing this blog post about it.
    Through my own googling I discovered that a Connect bug had already been raised about it (VS2010 Database project deploy - "SqlDeployTask" task failed unexpectedly, NullReferenceException), and coincidentally enough it was raised by my former colleague Tom Hunter (whom I have mentioned here before as the superhuman Tom Hunter) although it has not (at this time) received a reply from Microsoft. Tom provided a repro, namely that this syntactically valid function definition:
    CREATE FUNCTION [dbo].[Function1]()
    RETURNS TABLE
    AS
    RETURN (
        WITH cte AS (
            SELECT 1 AS [c1]
            FROM [$(Database3)].[dbo].[Table1]
        )
        SELECT 1 AS [c1]
        FROM cte
    )
    would produce this nasty unhelpful error upon deployment:
    C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\TeamData\Microsoft.Data.Schema.TSqlTasks.targets(120,5): Error MSB4018: The "SqlDeployTask" task failed unexpectedly.
    System.NullReferenceException: Object reference not set to an instance of an object.
       at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.VariableSubstitution(SqlScriptProperty propertyValue, IDictionary`2 variables, Boolean& isChanged)
       at Microsoft.Data.Schema.Sql.SchemaModel.SqlModelComparerBase.ArePropertiesEqual(IModelElement source, IModelElement target, ModelPropertyClass propertyClass, ModelComparerConfiguration configuration)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareProperties(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareChildren(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareParentElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes, Boolean isComposing)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithoutCompareName(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, ModelComparisonResult result, ModelComparisonChangeDefinition changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareElementsWithSameType(IModelElement sourceElement, IModelElement targetElement, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean ignoreComparingName, Boolean parentExplicitlyIncluded, Boolean compareElementOnly, Boolean compareFromRootElement, ModelComparisonChangeDefinition& changes)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareAllElementsForOneType(ModelElementClass type, ModelComparerConfiguration configuration, ModelComparisonResult result, Boolean compareOrphanedElements)
       at Microsoft.Data.Schema.SchemaModel.ModelComparer.CompareStore(ModelStore source, ModelStore target, ModelComparerConfiguration configuration)
       at Microsoft.Data.Schema.Build.SchemaDeployment.CompareModels()
       at Microsoft.Data.Schema.Build.SchemaDeployment.PrepareBuildPlan()
       at Microsoft.Data.Schema.Build.SchemaDeployment.Execute(Boolean executeDeployment)
       at Microsoft.Data.Schema.Build.SchemaDeployment.Execute()
       at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
       at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
       at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)
    Done executing task "SqlDeployTask" -- FAILED.
    Done building target "DspDeploy" in project "Lloyds.UKTax.DB.UKtax.dbproj" -- FAILED.
    Done executing task "CallTarget" -- FAILED.
    Done building target "DBDeploy" in project
    It turns out there are a certain set of circumstances that need to be met for this error to occur:
    The object being deployed is an inline function (it may also exist for multistatement and scalar functions - I haven't tested that)
    That object includes SQLCMD variable references
    The object has already been deployed successfully
    Just to reiterate that last bullet point, the error does not occur when you deploy the function for the first time, only on the subsequent deployment.
    Luckily I have a direct line into a guy on the development team, so I fired off an email on Friday evening and today (Monday) I received a reply back telling me that there is a simple fix: one simply has to remove the parentheses that wrap the SQL statement. So, in the case of Tom's repro, the function definition simply has to be changed to:
    CREATE FUNCTION [dbo].[Function1]()
    RETURNS TABLE
    AS
    RETURN --(
        WITH cte AS (
            SELECT 1 AS [c1]
            FROM [$(Database3)].[dbo].[Table1]
        )
        SELECT 1 AS [c1]
        FROM cte
    --)
    I have commented out the offending parentheses rather than removing them just to emphasize the point. Thereafter the function will deploy fine. I tested this out on my own project this morning and can confirm that this fix does indeed work.
    I have been told that the bug CAN be reproduced in the Release Candidate (RC) 0 build of SQL Server Data Tools in SQL Server 2010 so am hoping that a fix makes it in for the Release-To-Manufacturing (RTM) build.
    Hope this helps
    @jamiet

    Read the article

  • Tools to (privately) annotate/markup a website for maintenance

    - by rob
    I've been tasked with updating a website. Rather than proofreading and updating each page (one at a time), I want to make a single pass over the entire website, marking graphics/images/videos that need to be rewritten, removed, or updated. I thought about taking screenshots, marking those up, and putting them in our bug-tracking database, but that seems like an extremely tedious solution. Some of the content is similar on various pages across the website, and the entire site itself is localized into several languages (so any changes made to the English version will have corresponding changes for other languages). I also want all of my markup to remain private (that is, if it's stored online somewhere, I should be the only person who can see my comments). I found an article that lists several website annotation services, but it's not clear whether they allow private annotations, or whether these tools are even appropriate for website maintenance (many of them look more geared toward social networking).
    I've started making a list of some necessary and desired features below, and may add more as necessary.
    Annotations/markup/comments remain private (only visible to me)
    Comment history/tagging (so I can reuse the same comment for shared footers, items requiring similar updates, etc.)
    Ability to print/export a list or report of all comments for the entire website
    Ability to produce a categorized list of changes (e.g., to produce a list of images that need updating, which I can send to the graphic designer)
    What processes and tools do you use to keep track of all the changes that need to be made to a website? What features are painfully absent from the tools you use?

    Read the article

  • Mac compatible Web-Development Tools

    - by Derek Adair
    Hi, I'm looking for any Mac-compatible development tools that are tried and tested. I'm quite sick of my MySQL Query Browser crashing... and I'm sure there is probably better software out there anyway... At the moment my focus is dedicated to application development using PHP/MySQL/Ajax (although I will be learning ActionScript/Flex shortly). Here are the apps I regularly use and a general idea of the environment I work in:
    Hosting - localhost: MAMP; Production: Amazon EC2
    IDE - Coda: PHP/MySQL/JavaScript
    MySQL - phpMyAdmin, MySQL Query Browser
    FTP - Coda, FileZilla
    X11 is used for connecting to my production environment
    I am mainly looking for...
    - tools that will help me better manage (create, edit... whatever) databases.
    - version control. This is huge as I'm working with a team of three other developers (we reeeeeeaaally need to standardize our environments..)
    Although any cool little tools to make my job easier are always a plus! I am unable to insert more than one link as I'm not rated on this forum.

    Read the article

  • Are there any Graphical PowerShell tools?

    - by Dai
    As a developer for the .NET platform, I like to "explore" a platform, framework or API by browsing through the API documentation, which explains what everything is - everything is covered, and when I use tools like Reflector or Object Browser I get to know for certain what I'm working with. When I'm writing my own software I can use tools like the Object Test Bench to explore and work with my classes directly. I'm looking for something similar, but for PowerShell - and ones that avoid text mode.
    PowerShell is nice, and there are a lot of cool "discoverability" things it has, such as the "Verb-Noun" syntax. However, when I'm working with Exchange Server, for example, I wanted to get a list of AD permissions on a Receive Connector and I got this list:
    [PS] C:\Windows\system32>Get-ADPermission "Client SVR6" -User "NT AUTHORITY\Authenticated Users" | fl
    User : NT AUTHORITY\Authenticated Users
    Identity : SVR6\Client SVR6
    Deny : False
    AccessRights : {ExtendedRight}
    IsInherited : False
    Properties :
    ChildObjectTypes :
    InheritedObjectType :
    InheritanceType : All
    User : NT AUTHORITY\Authenticated Users
    Identity : SVR6\Client SVR6
    Deny : False
    AccessRights : {ExtendedRight}
    IsInherited : False
    Properties :
    ChildObjectTypes :
    InheritedObjectType :
    InheritanceType : All
    User : NT AUTHORITY\Authenticated Users
    Identity : SVR6\Client SVR6
    Deny : False
    AccessRights : {ExtendedRight}
    IsInherited : False
    Properties :
    ChildObjectTypes :
    InheritedObjectType :
    InheritanceType : All
    User : NT AUTHORITY\Authenticated Users
    Identity : SVR6\Client SVR6
    Deny : False
    AccessRights : {ExtendedRight}
    IsInherited : False
    Properties :
    ChildObjectTypes :
    InheritedObjectType :
    InheritanceType : All
    User : NT AUTHORITY\Authenticated Users
    Identity : SVR6\Client SVR6
    Deny : False
    AccessRights : {ExtendedRight}
    IsInherited : False
    Properties :
    ChildObjectTypes :
    InheritedObjectType :
    InheritanceType : All
    User : NT AUTHORITY\Authenticated Users
    Identity : SVR6\Client SVR6
    Deny : True
    AccessRights : {ReadProperty}
    IsInherited : True
    Properties : {ms-Exch-Availability-User-Password}
    ChildObjectTypes :
    InheritedObjectType : ms-Exch-Availability-Address-Space
    InheritanceType : Descendents
    [PS] C:\Windows\system32>
    Note how the first few entries contain identical text - there's no way to tell them apart easily. But if there was a GUI, presumably it would let me drill down into the differences better. Are there any tools that do this?

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate’s tools, covered the first two parts of a simple Database Continuous Delivery process: Putting your database in to a source control system, and, Running a continuous integration process, each time changes are checked in. However there is, of course, a lot more to to Continuous Delivery than that. Specifically, in addition to the above: Putting some actual integration tests in to the CI process (otherwise, they don’t really do much, do they!?), Deploying the database changes with a managed, automated approach, Monitoring what you’ve just put live, to make sure you haven’t broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There’ll then be a third post on automated database deployment followed by a final post dealing with the last item – monitoring changes on the live system. In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in an heterogeneous environment, using software from different vendors to suit their purposes and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian’s BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this, post, I’ll be mostly using Red Gate products only (other than tSQLt). I would do this, firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn’t have any choice!   Background on Continuous Delivery For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation This book is not of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it’s that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage Versioning Databases – Branching and Merging, by Scott Allen In particular, I don’t deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production.   I.e. we are not talking about continuously delivery updates to the production database every time someone checks in an amendment to a stored procedure. 
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted in to the query window, to replace the default given by tSQLt:
    -- Basic email check test
    ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
    AS
    BEGIN
        SET NOCOUNT ON
        Declare @Output VarChar(max)
        Set @Output = ''
        SELECT @Output = @Output + Email + Char(13) + Char(10)
        FROM dbo.Email
        WHERE email NOT LIKE '%_@__%.__%'
        If @Output > ''
        Begin
            Set @Output = Char(13) + Char(10) + @Output
            EXEC tSQLt.Fail @Output
        End
    END;
    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: - a green tick to indicate success!
    But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.)
    The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).
    Checking that the Tests are Running as Integration Tests
    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up.
    The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: - sadly, a successful build!
    To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:
    <!-- tSQLt tests -->
    <!-- Optional -->
    <!-- To run tSQLt tests in source control for the database, enter true. -->
    <enableTsqlt>true</enableTsqlt>
    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number): - superb, a broken build!!
    The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:
    21-Jun-2013 11:35:19 Build FAILED.
21-Jun-2013 11:35:19 21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) -> 21-Jun-2013 11:35:19 (sqlCI target) -> 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj] 21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]   As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I’m going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build:   This should have fixed the build: It worked! Summary This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we’ve tested our database changes, how can we now deploy these to target sites?  

    Read the article

  • GRUB 2 problem after Mac OS X update

    - by vallllll
    I have a MacBook Pro in dual boot Mac OS X / Ubuntu 12.04 (Precise Pangolin). When I boot it I have a rEFIt menu, and I can choose between Mac OS X and Linux. A few days ago I updated Mac OS X from 10.7 (Lion) to 10.8 (Mountain Lion) using a .dmg image provided by my company. Since then, when I select Linux in rEFIt it says:
    No bootable device --insert boot disk and press any key
    I have tried going to the rEFIt partitioning tool. This is what I got: As suggested in Mac OSX Mavericks update rEFIT broken I wanted to fix the issue the same way as AndrewM, but I don't have the option "MBR table must be updated". Then I booted the Ubuntu 12.04 CD, chose "repair broken system", and chose root partition /dev/sda6, as this is where my Ubuntu file system is. I got a shell, but I don't really know how to repair the problem. If this were just a Windows dual boot, a GRUB update would solve the issue, but here I don't know where GRUB 2 is installed. Here are the results from Parted, and it is a bit confusing for me as the Mac partition is the one with boot: As you can see, entry 1 is an EFI system partition and is the boot partition, so I wonder if I should install GRUB there or in sda6, which is the Ubuntu filesystem. I am not sure whether I should work from the rEFIt shell or from Ubuntu. Unfortunately, I don't remember where GRUB was before the update.
    UPDATE: using the same link above I have tried RoundSparrow hilltx's answer and installed rEFInd, but the result is the same.... still no bootable device when I select Linux.
    UPDATE 2: just used the alternate CD again, mounted /dev/sda6 and then ran update-grub. It seemed to work and started listing all my kernels. But after rebooting several times, still no bootable device when I select Linux in rEFInd.
    UPDATE 3: I have tried to boot from the Ubuntu CD and select "boot from first available filesystem". I got an error and dropped to a grub rescue shell.
    I even followed the indications on this link but was unable to boot, as I tried to use sdb6 but no luck.
    UPDATE 4: as per Rob Smith's request, here is the output from ls -l $(find /EFI -iname "*.efi"):
    *MACOSX
    -rw-r--r--@ 1 root admin 55048 29 oct 17:44 /EFI/refind/drivers_x64/btrfs_x64.efi
    -rw-r--r--@ 1 root admin 38888 29 oct 17:44 /EFI/refind/drivers_x64/ext2_x64.efi
    -rw-r--r--@ 1 root admin 39304 29 oct 17:44 /EFI/refind/drivers_x64/ext4_x64.efi
    -rw-r--r--@ 1 root admin 43432 29 oct 17:44 /EFI/refind/drivers_x64/hfs_x64.efi
    -rw-r--r--@ 1 root admin 38984 29 oct 17:44 /EFI/refind/drivers_x64/iso9660_x64.efi
    -rw-r--r--@ 1 root admin 43656 29 oct 17:44 /EFI/refind/drivers_x64/reiserfs_x64.efi
    -rw-r--r--@ 1 root admin 175016 29 oct 17:44 /EFI/refind/refind_x64.efi
    -rw-rw-r-- 1 root admin 73232 7 mar 2010 /EFI/tools/dbounce.efi
    -rw-rw-r-- 1 root admin 763248 7 mar 2010 /EFI/tools/dhclient.efi
    -rw-rw-r-- 1 root admin 67024 7 mar 2010 /EFI/tools/drawbox.efi
    -rw-rw-r-- 1 root admin 71312 7 mar 2010 /EFI/tools/dumpfv.efi
    -rw-rw-r-- 1 root admin 84848 7 mar 2010 /EFI/tools/dumpprot.efi
    -rw-rw-r-- 1 root admin 472912 7 mar 2010 /EFI/tools/ed.efi
    -rw-rw-r-- 1 root admin 143856 7 mar 2010 /EFI/tools/edit.efi
    -rw-rw-r-- 1 root admin 1801008 7 mar 2010 /EFI/tools/ftp.efi
    -rw-r--r--@ 1 root admin 47848 29 oct 17:44 /EFI/tools/gptsync_x64.efi
    -rw-rw-r-- 1 root admin 320560 7 mar 2010 /EFI/tools/hexdump.efi
    -rw-rw-r-- 1 root admin 286384 7 mar 2010 /EFI/tools/hostname.efi
    -rw-rw-r-- 1 root admin 534416 7 mar 2010 /EFI/tools/ifconfig.efi
    -rw-rw-r-- 1 root admin 395344 7 mar 2010 /EFI/tools/loadarg.efi
    -rw-rw-r-- 1 root admin 587408 7 mar 2010 /EFI/tools/ping.efi
    -rw-rw-r-- 1 root admin 730416 7 mar 2010 /EFI/tools/pppd.efi
    -rw-rw-r-- 1 root admin 561360 7 mar 2010 /EFI/tools/route.efi
    -rw-rw-r-- 1 root admin 1961712 7 mar 2010 /EFI/tools/shell.efi
    -rw-rw-r-- 1 root admin 750224 7 mar 2010 /EFI/tools/tcpipv4.efi
    -rw-rw-r-- 1 root admin 4048 7 mar 2010 /EFI/tools/textmode.efi
    -rw-rw-r-- 1 root admin 320656 7 mar 2010 /EFI/tools/which.efi
    *LINUX

    Read the article

  • Make your code gooder with the goodies gem

    - by kerry
    I have decided to publish all my Ruby tools via a gem called ‘goodies’.  To install this gem simply type ‘gem install goodies’. The source is hosted on GitHub.  The first version (0.1) has the Hash object accessors and the String file path utility methods discussed in the previous two posts. Enjoy!   Ruby Goodies @ GitHub Goodies on gemcutter.org

    Read the article
