Search Results

Search found 17579 results on 704 pages for 'mercurial build'.


  • Friday Fun: Spell Blazer

    - by Asian Angel
    Are you ready for some fun and adventure after a long week back at work? This week’s game combines jewel-matching style game play with an RPG story for an awesome mix of fun and fiction. Your goal is to help a young wizard reach the magic academy in Raven as the forces of darkness are building. Spell Blazer The object of the game is to help young Kaven reach the Lightcaster Academy in Raven alive, but he will encounter many dangers along the way. Are you ready to begin the quest? As soon as you click Start Game the intro will automatically begin. If this is your first time playing the game the intro provides a nice background story for the game and what is happening in the game environment. Once you are past the intro, you will see a map of the region with your starting point in the Farmlands, various towns and the roads connecting them, along with your final destination of Raven. Notice that some of the roads are different colors…those colors indicate the “danger levels” for each part of your journey (green = good, yellow = some danger, etc.). To begin your journey click on the Town of Goose with your mouse. You will encounter your first monster part of the way towards Goose. This first round takes you through the game play process step-by-step. Once you have clicked Okay you will see the details about the monster you have just encountered. It is very important that you do not click on Fight! or Flee! until viewing and noting the types of spells that the monster is resistant to or has a weakness against. Choose your spells wisely based on the information provided about the monster. Keep in mind that the healing spell can be very useful depending on the monster you meet and your current health status. Note: Spells shown in order here are Healing, Fireball, Icebolt, & Lightning. Ready to fight! The first battle will also explain how to fight…click Okay to get started. Once the main window is in full view there are details that you need to look at. Beneath each of the combatants you will see the three attacks that each brings to the battle and at the bottom you will see their respective health points. We got lucky and had an Icebolt attack that we could utilize on the first play! Note: You can exchange two squares without making a match in order to try and line up an attack. While it happened too quickly to capture in our screenshot, there will be cool lightning bolt effects shoot out from matched up squares to the opposite combatant. You will also see the amount of damage inflicted from a particular attack on top of the avatars. Victory! Once you have won a round of combat a window will appear showing the amount of gold coins left behind by the monster. When you reach a town you will have the opportunity to stop over and rest or directly continue on with your journey. On to Halgard after a good rest! 
Play Spell Blazer

    Read the article

  • Craig Mundie's video

    - by GGBlogger
    Timothy recently posted “Microsoft Shows Off Radical New UI, Could Be Used In Windows 8” on Slashdot. I took such grave exception to his post that I found it necessary to write this blog. We need to go back many years to the days of hand-cranked calculators and early mainframe computers. These devices had singular purposes – they were “number crunchers” used to make accounting easier. The front-facing display in early mainframes was “blinken lights.” The calculators did provide printing – in the form of paper tape, and the mainframes used line printers to generate reports as needed. We had other metaphors to work with. The typewriter was/is a mechanical device that substitutes for a typesetting machine. The originals go back to 1867 and the keyboard layout has remained much the same to this day. In the earlier years the Morse code telegraphs gave way to Teletype machines. The old ASR33, seen on the left in this photo of one of the first computers I helped manufacture, used a keyboard very similar to the keyboards in use today. It also generated punched paper tape that we used to program this computer in machine language. Everything considered, this computer, which dates back to the late 1960s, has a keyboard for input and a roll of paper as output. So in a very rudimentary fashion little has changed. Oh – we didn’t have a mouse! The entire point of this exercise is to point out that we still use very similar methods to get data into and out of a computer regardless of the operating system involved. The Altair, IMSAI, Apple, Commodore and onward to our modern machines changed the hardware that we interfaced to but changed little in the way we input, view and output the results of our computing effort. The mouse made some changes and the advent of windowed interfaces such as Windows and Apple made things somewhat easier for the user. My 4-year-old granddaughter plays her Dora games on our computer. She knows how to start programs, use the mouse and play the game, and is quite adept, so we have come some distance in making computers usable. One of my chief bitches is the constant harangues leveled at Microsoft. Yup – they are a money-making organization. You like Apple? No problem for me. I don’t use Apple mostly because I’m comfortable in the Windows environment but probably more because I don’t like Apple’s “Holier than thou” attitude. Some think they do superior things and that’s also fine with me. Obviously the iPhone has not done badly and other Apple products have fared well. But they are expensive. I just built a new machine with 4 terabytes of storage, an Intel Core i7 950 processor and 12 GB of DDR3 RAM. It cost me – with dual monitors – less than 2000 dollars. Now to the chief reason for this blog. I’m going to continue developing software for as long as I’m able. For that reason I don’t see my keyboard, mouse and displays changing much for many years. I also don’t think Microsoft is going to spoil that for me by making radical changes to my developer experience. What Craig Mundie does in his video here: http://www.ispyce.com/2011/02/microsoft-shows-off-radical-new-ui.html is explore the potential future of computer interfaces for the masses of potential users. Using a computer today requires a person to have rudimentary capabilities with keyboards and the mouse. Wouldn’t it be great if all they needed was hand gestures? Although not mentioned, it would also be nice if computers responded intelligently to a user’s voice.
There is absolutely no argument with the fact that user interaction with these machines is going to change over time. My personal prediction is that it will take years for much of what Craig discusses to come to a cost-effective reality, but it is certainly coming. I just don’t believe that what Craig discusses will be the future look of Windows 8.

    Read the article

  • Reverse subarray of an array with O(1)

    - by Babibu
    I have an idea for how to implement subarray reverse in O(1), not including precalculation such as reading the input. I will have many reverse operations, and I can't use the trivial O(N) solution. Edit: To be clearer, I want to build a data structure behind the array with an access layer that knows about reversal requests and inverts the indexing logic as necessary when someone wants to iterate over the array. Edit 2: The data structure will only be used for iterations. I've been reading this and this and even this question but they aren't helping. There are 3 cases that need to be taken care of:
    1. A regular reverse operation
    2. A reverse that includes an already reversed area
    3. An intersection between a reverse and part of another reversed area in the array
    Here is my implementation for the first two parts; I will need your help with the last one. This is the rule class:
    class Rule {
        public int startingIndex;
        public int weight;
    }
    It is used in my basic data structure City:
    public class City {
        Rule rule;
        private static AtomicInteger _counter = new AtomicInteger(-1);
        public final int id = _counter.incrementAndGet();

        @Override
        public String toString() {
            return "" + id;
        }
    }
    This is the main class:
    public class CitiesList implements Iterable<City>, Iterator<City> {
        private int position;
        private int direction = 1;
        private ArrayList<City> cities;
        private ArrayDeque<City> citiesQeque = new ArrayDeque<>();
        private LinkedList<Rule> rulesQeque = new LinkedList<>();

        public void init(ArrayList<City> cities) {
            this.cities = cities;
        }

        public void swap(int index1, int index2) {
            Rule rule = new Rule();
            rule.weight = Math.abs(index2 - index1);
            cities.get(index1).rule = rule;
            cities.get(index2 + 1).rule = rule;
        }

        @Override
        public void remove() {
            throw new IllegalStateException("Not implemented");
        }

        @Override
        public City next() {
            City city = cities.get(position);
            if (citiesQeque.peek() == city) {
                citiesQeque.pop();
                changeDirection();
                position += (city.rule.weight + 1) * direction;
                city = cities.get(position);
            }
            if (city.rule != null) {
                if (city.rule != rulesQeque.peekLast()) {
                    rulesQeque.add(city.rule);
                    position += city.rule.weight * direction;
                    changeDirection();
                    citiesQeque.push(city);
                } else {
                    rulesQeque.removeLast();
                    position += direction;
                }
            } else {
                position += direction;
            }
            return city;
        }

        private void changeDirection() {
            direction *= -1;
        }

        @Override
        public boolean hasNext() {
            return position < cities.size();
        }

        @Override
        public Iterator<City> iterator() {
            position = 0;
            return this;
        }
    }
    And here is a sample program:
    public static void main(String[] args) {
        ArrayList<City> list = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            list.add(new City());
        }
        CitiesList citiesList = new CitiesList();
        citiesList.init(list);
        for (City city : citiesList) {
            System.out.print(city + " ");
        }
        System.out.println("\n******************");
        citiesList.swap(4, 8);
        for (City city : citiesList) {
            System.out.print(city + " ");
        }
        System.out.println("\n******************");
        citiesList.swap(2, 15);
        for (City city : citiesList) {
            System.out.print(city + " ");
        }
    }
    How do I handle reverse intersections?
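    One way to handle the intersection case, offered here only as a rough sketch rather than a fix to the code above, is to record each reverse(from, to) as an index transformation and compose the transformations at read time, newest first. A reversal then costs O(1) and each read costs O(k) for k recorded reversals, which matches the "access layer that inverts the indexing logic" idea; the class and method names below (ReversibleView, reverse, get) are illustrative and not taken from the question. If both reversal and access need to be sub-linear, the standard structure is a balanced BST with a lazy "reversed" flag per node (an implicit treap or splay tree), which gives O(log n) for both.
    import java.util.ArrayList;
    import java.util.List;

    // Sketch: reversals are recorded in O(1) and resolved lazily on read by
    // composing the index maps from the most recent reversal back to the oldest.
    public class ReversibleView<T> {
        private final List<T> data;
        private final List<int[]> reversals = new ArrayList<>(); // each entry is {from, to}, inclusive

        public ReversibleView(List<T> data) {
            this.data = data;
        }

        // O(1): just record the operation.
        public void reverse(int from, int to) {
            reversals.add(new int[] { from, to });
        }

        // Map a logical index to a physical one by undoing reversals newest-to-oldest.
        public T get(int logicalIndex) {
            int i = logicalIndex;
            for (int k = reversals.size() - 1; k >= 0; k--) {
                int from = reversals.get(k)[0], to = reversals.get(k)[1];
                if (i >= from && i <= to) {
                    i = from + to - i; // reflect the index inside the reversed range
                }
            }
            return data.get(i);
        }

        public int size() {
            return data.size();
        }

        public static void main(String[] args) {
            List<Integer> numbers = new ArrayList<>();
            for (int i = 0; i < 20; i++) {
                numbers.add(i);
            }
            ReversibleView<Integer> view = new ReversibleView<>(numbers);
            view.reverse(4, 8);
            view.reverse(2, 15); // intersects the first reversal; the composition handles it
            for (int i = 0; i < view.size(); i++) {
                System.out.print(view.get(i) + " ");
            }
        }
    }
    Because the second reversal operates on the already-reversed view, composing the maps newest-first reproduces the nesting and intersection behaviour the iterator above is trying to achieve.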

    Read the article

  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin-interface, we were able to work out that a few days previously the article had been subject to a spam attack and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-talk website usage more effectively. However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package. We had neither the resources nor inclination to build-in dedicated application monitoring. Possibly, we could investigate a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well? Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server Wait Stats, from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL Query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage. You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate data range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll spot quickly trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly. We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.

    Read the article

  • New January 2013 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I am super excited to announce the January 2013 release of the Ajax Control Toolkit! I have one word to describe this release and that word is “Charts” – we’ve added lots of great new chart controls to the Ajax Control Toolkit. You can download the new release directly from http://AjaxControlToolkit.CodePlex.com – or, just fire the following command from the Visual Studio Library Package Manager Console Window (NuGet): Install-Package AjaxControlToolkit You also can view the new chart controls by visiting the “live” Ajax Control Toolkit Sample Site. 5 New Ajax Control Toolkit Chart Controls The Ajax Control Toolkit contains five new chart controls: the AreaChart, BarChart, BubbleChart, LineChart, and PieChart controls. The original post shows a sample screenshot of each control: AreaChart, BarChart, BubbleChart, LineChart, PieChart. We realize that people love to customize the appearance of their charts so all of the chart controls include properties such as color properties. The chart controls render the chart on the browser using SVG. The chart controls are compatible with any browser which supports SVG, including Internet Explorer 9 and newer, and recent versions of Google Chrome, Mozilla Firefox, and Apple Safari. (If you attempt to display a chart on a browser which does not support SVG then you won’t get an error – you just won’t get anything). Updates to the HTML Sanitizer If you are using the HtmlEditorExtender on a public-facing website then it is really important that you enable the HTML Sanitizer to prevent Cross-Site Scripting (XSS) attacks. The HtmlEditorExtender uses the HTML Sanitizer by default. The HTML Sanitizer strips out any suspicious content (like JavaScript code and CSS expressions) from the HTML submitted with the HtmlEditorExtender. We followed the recommendations of OWASP and ha.ckers.org to identify suspicious content. We updated the HTML Sanitizer with this release to protect against new types of XSS attacks. The HTML Sanitizer now has over 220 unit tests. The Ajax Control Toolkit team would like to thank Gil Cohen who helped us identify and block additional XSS attacks. Change in Ajax Control Toolkit Version Format We ran out of numbers. The Ajax Control Toolkit was first released way back in 2006. In previous releases, the version of the Ajax Control Toolkit followed the format: Release Year + Date. So, the previous release was 60919 where 6 represented the 6th release year and 0919 represented September 19. Unfortunately, the AssemblyVersion attribute uses a UInt16 data type which has a maximum size of 65,534. The number 70123 is bigger than 65,534 so we had to change our version format with this release. Fortunately, the AssemblyVersion attribute actually accepts four UInt16 numbers so we used another one. This release of the Ajax Control Toolkit is officially version 7.0123. This new version format should work for another 65,000 years. And yes, I realize that 7.0123 is less than 60,919, but we ran out of numbers. Summary I hope that you find the chart controls included with this latest release of the Ajax Control Toolkit useful. Let me know if you use them in applications that you build. And, let me know if you run into any issues using the new chart controls. Next month, back to improving the File Upload control – more exciting stuff.
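    To make the version arithmetic above concrete, here is a small illustrative check. It is not toolkit code; the constant simply mirrors the 65,534 ceiling quoted in the post. Packing the release year and date into one number overflows a 16-bit field once the year digit reaches 7, while splitting them across two version components keeps each value comfortably inside the limit.
    public class VersionFieldCheck {
        public static void main(String[] args) {
            final int MAX_FIELD = 65534;   // ceiling quoted above for a version component
            int packedOldFormat = 70123;   // 7th release year + 0123 packed into one number
            System.out.println(packedOldFormat <= MAX_FIELD);  // false: no longer fits

            int releaseYear = 7;           // new format 7.0123 splits the value in two
            int releaseDate = 123;         // January 23, written without the leading zero
            System.out.println(releaseYear <= MAX_FIELD && releaseDate <= MAX_FIELD); // true
        }
    }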

    Read the article

  • How do I install DB2 ODBC?

    - by Justin
    I have been trying, with no success, to install a IBM DB2 ODBC driver so that my PHP server can connect to a database. I've tried installing the db2_connect and get all sorts of problems, I tried install I Access for Linux and the RPM did not install right nor did using alien breed any useful results. I've also tried the DB2 Runtime v8.1, no success. If I attempt to run the rpm it claims I need dependencies that I can't find in apt-get. Yum is also not very helpful as it appears I don't have any repositories installed or lists... Running the simple RPM gives me this result in terminal: # rpm -ivh iSeriesAccess-7.1.0-1.0.x86_64.rpm rpm: RPM should not be used directly install RPM packages, use Alien instead! rpm: However assuming you know what you are doing... error: Failed dependencies: /bin/ln is needed by iSeriesAccess-7.1.0-1.0.x86_64 /sbin/ldconfig is needed by iSeriesAccess-7.1.0-1.0.x86_64 /bin/rm is needed by iSeriesAccess-7.1.0-1.0.x86_64 /bin/sh is needed by iSeriesAccess-7.1.0-1.0.x86_64 libc.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libc.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libc.so.6(GLIBC_2.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libdl.so.2()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libdl.so.2(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libgcc_s.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libm.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libm.so.6(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libodbcinst.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libodbc.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libpthread.so.0()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libpthread.so.0(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libpthread.so.0(GLIBC_2.3.2)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 librt.so.1()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 librt.so.1(GLIBC_2.2.5)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libstdc++.so.6()(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libstdc++.so.6(CXXABI_1.3)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 libstdc++.so.6(GLIBCXX_3.4)(64bit) is needed by iSeriesAccess-7.1.0-1.0.x86_64 Using alien and running the dkpg gives me thes headaque: $ alien iSeriesAccess-7.1.0-1.0.x86_64.rpm --scripts # dpkg -i iseriesaccess_7.1.0-2_amd64.deb (Reading database ... 127664 files and directories currently installed.) Preparing to replace iseriesaccess 7.1.0-2 (using iseriesaccess_7.1.0-2_amd64.deb) ... Unpacking replacement iseriesaccess ... post uninstall processing for iSeriesAccess 1.0...upgrade /var/lib/dpkg/info/iseriesaccess.postrm: line 8: [: upgrade: integer expression expected Setting up iseriesaccess (7.1.0-2) ... post install processing for iSeriesAccess 1.0...configure iSeries Access ODBC Driver has been deleted (if it existed at all) because its usage count became zero odbcinst: Driver installed. Usage count increased to 1. Target directory is /etc odbcinst: Driver installed. Usage count increased to 3. Target directory is /etc Processing triggers for libc-bin ... ldconfig deferred processing now taking place So it seems the files installed right, well my odbc driver shows up but db2cli.ini is no where to be found. So several questions. Is there a better alternative to connect php to db2, say an ubuntu package I can just install? 
Can someone direct me to the steps that make my Ubuntu server work well with the RPM so I can build my DB2 instance? Also remember I'm connecting to an iSeries remotely. I'm not using the DB2 Express-C thing, even though I did try it to get the db2 PHP functions to work. And I don't have Zend, but I think I have every other package from the Ubuntu repositories. Help, thank you!

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long since had to context switch between two IDEs, Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little overkill. This is where SQL Connect comes in. This is an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints of Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution as follows: Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v.3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control). .and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto loaded) and select Import SQL Source Control project Locate your working folder and click Import. This creates a Red Gate database project under your solution: From here you can modify your development database, and manage your changes in source control. To associate your development database with the project, right click on the project node, select Properties, set the database and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer, and double click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and Visual Studio project in sync is as easy as clicking on a button. One you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion on SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website. Technorati Tags: SQL Server

    Read the article

  • Why is Apache ignoring VirtualHost directive for first name in hosts file?

    - by Peter Taylor
    Standard pre-emptive disclaimer: host names, IP addresses, and directories are anonymised. Problem We have a server with Apache 2.2 (WAMP) listening on one IP and IIS listening on another. An ASP.Net application running under IIS needs to do some simple GETs from the PHP applications running under Apache to build a unified search results page. This is a virtual server, so the internal IPs are mapped somehow to external ones. The internal DNS system doesn't resolve the publicly published names under which the applications are accessed externally, so the obvious solution was to add them to etc/hosts with the internal IP address: 127.0.0.1 localhost # 10.0.1.17 is the IP address Apache listens on 10.0.1.17 phpappone.example.com 10.0.1.17 phpapptwo.example.com After restarting Apache, phpappone.example.com stopped working. Instead of returning pages from that app, Apache was returning pages from the default site. The other PHP apps worked fine. Relevant configuration httpd.conf, summarised, says: ServerAdmin [email protected] ServerRoot "c:/server/Apache2" ServerName www.example.com Listen 10.0.1.17:80 Listen 10.0.1.17:443 # Not obviously related config options elided # Nothing obviously astandard # If you want more details, post a comment DocumentRoot "c:/server/Apache2/htdocs" <Directory /> Options FollowSymLinks AllowOverride None Order deny,allow Deny from all </Directory> # Fallback for unknown host names <Directory "c:/server/Apache2/htdocs"> Options Indexes FollowSymLinks AllowOverride None Order allow,deny Allow from all </Directory> # PHP apps common config <Directory "C:/Inetpub/wwwroot/phpapps"> Options FollowSymLinks -Indexes +ExecCGI AllowOverride All Order Allow,Deny Allow from All </Directory> # Virtual hosts NameVirtualHost 10.0.1.17:80 NameVirtualHost 10.0.1.17:443 <VirtualHost _default_:80> </VirtualHost> <VirtualHost _default_:443> SSLEngine On SSLCertificateFile "certs/example.crt" SSLCertificateKeyFile "certs/example.key" </VirtualHost> Include conf/vhosts/*.conf and the vhosts files are e.g. <VirtualHost 10.0.1.17:80> ServerName phpappone.example.com DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone" </VirtualHost> <VirtualHost 10.0.1.17:443> ServerName phpappone.example.com DocumentRoot "c:/Inetpub/wwwroot/phpapps/phpappone" SSLEngine On SSLCertificateFile "certs/example.crt" SSLCertificateKeyFile "certs/example.key" </VirtualHost> Buggy behaviour or our misunderstanding? The documentation for name-based virtual hosts says that Now when a request arrives, the server will first check if it is using an IP address that matches the NameVirtualHost. If it is, then it will look at each <VirtualHost> section with a matching IP address and try to find one where the ServerName or ServerAlias matches the requested hostname. If it finds one, then it uses the configuration for that server. If no matching virtual host is found, then the first listed virtual host that matches the IP address will be used. Yet that isn't what we observe. It seems that if the hostname is the first hostname listed against the IP address in etc/hosts then it uses the configuration from the main server and skips the virtual host lookup. Workarounds The workaround we've put in place for the time being is to add a fake line to the hosts file: 127.0.0.1 localhost # 10.0.1.17 is the IP address Apache listens on 10.0.1.17 fakename.example.com 10.0.1.17 phpappone.example.com 10.0.1.17 phpapptwo.example.com This fixes the problem, but it's not very elegant. 
In addition, it seems a bit brittle: reordering lines in the hosts file (or deleting the nonsense value) can break it. The other obvious workaround is to make the main server configuration match that of the troublesome virtual host, but that is equally brittle. A third option, which is just ugly, would be to change the ASP.Net code to take separate config items for the IP address and the hostname and to implement HTTP manually. Ugh. The question: Is there a good solution to this problem which localises any "Do not touch this!" explanations to the Apache config files?

    Read the article

  • Iron Speed Designer 7.0 - the great gets greater!

    - by GGBlogger
    For Immediate Release Iron Speed, Inc. Kelly Fisher +1 (408) 228-3436 [email protected] http://www.ironspeed.com       Iron Speed Version 7.0 Generates SharePoint Applications New! Support for Microsoft SharePoint speeds application generation and deployment   San Jose, CA – June 8, 2010. Software development tools-maker Iron Speed, Inc. released Iron Speed Designer Version 7.0, the latest version of its popular Web 2.0 application generator. Iron Speed Designer generates rich, interactive database and reporting applications for .NET, Microsoft SharePoint and the Cloud.    In addition to .NET applications, Iron Speed Designer V7.0 generates database-driven SharePoint applications. The ability to quickly create database-driven applications for SharePoint eliminates a lot of work, helping IT departments generate productivity-enhancing applications in just a few hours.  Generated applications include integrated SharePoint application security and use SharePoint master pages.    “It’s virtually impossible to build database-driven application in SharePoint by hand. Iron Speed Designer V7.0 not only makes this possible, the tool makes it easy.” – Razi Mohiuddin, President, Iron Speed, Inc.     Integrated SharePoint application security Generated applications include integrated SharePoint application security. SharePoint sites and their groups are used to retrieve security roles. Iron Speed Designer validates the user against a Microsoft SharePoint server on your network by retrieving the logged in user’s credentials from the SharePoint Context.    “The Iron Speed Designer generated application integrates seamlessly with SharePoint security, removing the hassle of designing, testing and approving your own security layer.” -Michael Landi, Solutions Architect, Light Speed Solutions     SharePoint Solution Packages Iron Speed Designer V7.0 creates SharePoint Solution Packages (WSPs) for easy application deployment. Using the Deployment Wizard, a single application WSP is created and can be deployed to your SharePoint server.   “Iron Speed Designer is the first product on the market that allows easy and painless deployment of database-driven .NET web applications inside the SharePoint environment.” -Bryan Patrick, Developer, Pseudo Consulting     SharePoint master pages and themes In V7.0, generated applications use SharePoint master pages and contain the same content as other SharePoint pages. Generated applications use the current SharePoint color scheme and display standard SharePoint navigation controls on each page.   “Iron Speed Designer preserves the look and feel of the SharePoint environment in deployed database applications without additional hand-coding.” -Kirill Dmitriev, Software Developer, Iron Speed, Inc.     Iron Speed Designer Version 7.0 System Requirements Iron Speed Designer Version 7.0 runs on Microsoft Windows 7, Windows Vista, Windows XP, and Windows Server 2003 and 2008. It generates .NET Web applications for Microsoft SQL Server, Oracle, Microsoft Access and MySQL. These applications may be deployed on any machine running the .NET Framework. Iron Speed Designer supports Microsoft SharePoint 2007 and Windows SharePoint Services (WSS3). Find complete information about Iron Speed Designer Version 7.0 at www.ironspeed.com.     About Iron Speed, Inc. Iron Speed is the leader in enterprise-class application generation. Our software development tools generate database and reporting applications in significantly less time and cost than hand-coding. 
Our flagship product, Iron Speed Designer, is the fastest way to deliver applications for the Microsoft .NET and software-as-a-service cloud computing environments.   With products built on decades of experience in enterprise application development and large-scale e-commerce systems, Iron Speed products eliminate the need for developers to choose between "full featured" and "on schedule."   Founded in 1999, Iron Speed is well funded with a capital base of over $20M and strategic investors that include Arrow Electronics and Avnet, as well as executives from AMD, Excelan, Onsale, and Oracle. The company is based in San Jose, Calif., and is located online at www.ironspeed.com.

    Read the article

  • Combining Scrum, TFS2010 and Email to keep everyone in the loop

    - by Martin Hinshelwood
    Often you will receive rich information from your Product Owner (Customer) about tasks. That information can be in the form of Word documents, HTML Emails and Pictures, but you generally receive them in the context of an Email. You need to keep these so your Team can refer to them later, and so you can send a “done” when the task has been completed. This preserves the “history” of the task and allows you to keep relevant parties included in any future conversation. At SSW we keep the original email so that we can reply Done and delete the email. But keeping it in your email does not help other members of the team if they complete the task and need to send the “done”. Worse yet, the description field in Team Foundation Server 2010 (TFS 2010) does not support HTML and images, nor does the default task template support an “interested parties” or CC field. You can attach this content manually, but it can be time consuming. Figure: Description only supports plain text, and History supports HTML with no images. What should we do? At SSW we always follow the rules, and it just so happened that we have rules to both achieve this, and to make it easier. You should follow the existing Rules to Better Project Management and attach the email to your task so you can refer to and reply to it later when you close the task: Do you know what Outlook add-ins you need? Describe the work item request in an email Use Outlook Add-in to move the email to a TFS Work Item When replying to an email with “done” you should follow: Do you update Team Companion template, so the email "subject" doesn't change? Do you update Team Companion template, so you can generate a proper "done" mail? Following these simple rules will help your Product Owner understand you better, and allow your team to more effectively collaborate with each other. An added bonus is that we are keeping the email history in sync with TFS. When you “reply all” to the email, all of the interested parties to the Task are also included. This notifies those that may have been blocked by your task, keeping them up to date with its status. This has been published as Do you know to ensure that relevant emails are attached to tasks in our Rules to Better Scrum using TFS. What could we do better? I would like to see this process automated so that we capture the information correctly in the task without the need to use email. This would require a change to the process template in Team Foundation Server to add an “Interested Parties” field. Each reply to the email would need to be automatically processed into a Work Item. This could be done by adding a task identifier as the first item in the “Relates to” email header, and copying in an email address that you watch. This would then parse out the relevant information and add the new message to the history, update the “Interested parties” field and attach the Images. Upon reflection, it may even be possible, but more difficult, to do this using ONLY the History field and including some of the header information in there to build a "done" email with history. This would not currently deal with email “forks” well, but I think it would be adequate for our needs. It would be nice if we could find time to implement this, but currently it is but a pipe dream. Maybe Microsoft could implement something in the next version of Team Foundation Server, and in the meantime we have a process that works well. Technorati Tags: Scrum,SSW Rules,TFS 2010,TFS 2008
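    As a rough sketch of the parsing step suggested above, and only a sketch: the header value, the "WI-1234" identifier convention and the class name here are invented for illustration and are not part of TFS, Team Companion or any mail API. The idea is simply that if each notification email carries the work item ID as the first token of a known header, a watcher mailbox can pull the ID back out of every reply and route the message to the right work item.
    import java.util.Optional;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative only: extract a work item ID that was embedded as the first token
    // of a mail header, e.g. "WI-1234; <original-message-id>".
    public class WorkItemIdParser {
        private static final Pattern ID_PATTERN = Pattern.compile("^\\s*WI-(\\d+)");

        public static Optional<Integer> parse(String relatesToHeader) {
            if (relatesToHeader == null) {
                return Optional.empty();
            }
            Matcher matcher = ID_PATTERN.matcher(relatesToHeader);
            return matcher.find() ? Optional.of(Integer.parseInt(matcher.group(1))) : Optional.empty();
        }

        public static void main(String[] args) {
            System.out.println(parse("WI-1234; <abc123@mail.example.com>")); // Optional[1234]
            System.out.println(parse("<abc123@mail.example.com>"));          // Optional.empty
        }
    }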

    Read the article

  • Write TSQL, win a Kindle.

    - by Fatherjack
    So recently Red Gate launched sqlmonitormetrics.red-gate.com and showed the world how to embed your own scripts harmoniously in a third party tool to get the details that you want about your SQL Server performance. The site has a way to submit your own metrics and take a copy of the ones that other people have submitted to build a library of code to keep track of key metrics of your servers performance. There have been several submissions already but they have now launched a competition to provide an incentive for you to get creative and show us what you can do with a bit of TSQL and the SQL Monitor framework*. What’s it worth? Well, if you are one of the 3 winners then you get to choose either a Kindle Fire or $199. How do you win? Simply write the T-SQL for a SQL Monitor custom metric and the relevant description and introduction for it and submit it via  sqlmonitormetrics.red-gate.com before 14th Sept 2012 and then sit back and wait while the judges review your code and your aims in writing the metric. Who are the judges and how will they judge the metrics? There are two judges for this competition, Steve Jones (Microsoft SQL Server MVP, co-founder of SQLServerCentral.com, author, blogger etc) and Jonathan Allen (um, yeah, Steve has done all the good stuff, I’m here by good fortune). We will be looking to rate the metrics on each of 3 criteria: how the metric can help with performance tuning SQL Server. how having the metric running enables DBA’s to meet best practice. how interesting /original the idea for the metric is. Our combined decision will be final etc etc **  What happens to my metric? Any metrics submitted to the competition will be automatically entered into the site library and become available for sharing once the competition is over. You’ll get full credit for metrics you submit regardless of the competition results. You can enter as many metrics as you like. How long does it take? Honestly? Once you have the T-SQL sorted then so long as you can type your name and your email address you are done : http://sqlmonitormetrics.red-gate.com/share-a-metric/ What can I monitor? If you really really want a Kindle or $199 (and let’s face it, who doesn’t? ) and are momentarily stuck for inspiration, take a look at these example custom metrics that have been written by Stuart Ainsworth, Fabiano Amorim, TJay Belt, Louis Davidson, Grant Fritchey, Brad McGehee and me  to start the library off. There are some great pieces of TSQL in those metrics gathering important stats about how SQL Server is performing.   * – framework may not be the best word here but I was under pressure and couldnt think of a better one. If you prefer try ‘engine’, or ‘application’? I don’t know, pick something that makes sense to you. ** – for the full (legal) version of the rules check the details on sqlmonitormetrics.red-gate.com or send us an email if you want any point clarified. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided “as is” and no guarantee, warranty or accuracy is applicable or inferred, run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • SQLAuthority News – Amazon Gift Card Raffle for Beta Tester Feedback for NuoDB

    - by pinaldave
    As regular readers know I’ve been spending some time working with the NuoDB beta software. They contacted me last week and asked if I would give you a chance to try their new web-based console for their scalable, SQL-compliant database. They have just put out their final beta release, Beta 9.  It contains a preview of a new web-based “NuoConsole” that will replace and extend the functionality of their current desktop version.  I haven’t spent any time with the new console yet but a really quick look tells me it should make it easier to do deeper monitoring than the older one. It also looks like they have added query-level reporting through the console. I will try to play with it soon. NuoDB is doing a last, big push to get some more feedback from developers before they release their 1.0 product sometime in the next several weeks. Since the console is new, they are especially interested in some quick feedback on it before general availability. For SQLAuthority readers only, NuoDB will raffle off three $50 Amazon gift cards in exchange for your feedback on the NuoConsole preview. Here’s how to Enter Download NuoDBeta 9 here You must build a domain before you can start the console. Launch the Web Console. Windows Code: start java -jar jarnuodbwebconsole.jar Mac, Linux, Solaris, Unix Code: java -jar jar/nuodbwebconsole.jar Access the Web Console: Code: http://localhost:8080 When you have tried it out, go to a short (8 question) survey to enter the raffle Click here for the survey You must complete the survey before midnight EDT on October 17, 2012. Here’s what else they are saying about this last beta before general availability: Beta 9 now supports the Zend PHP framework so that PHP developers can directly integrate web applications with NuoDB. Multi-threaded HDFS support – NuoDB Storage Managers can now be configured to persist data to the high performance Hadoop distributed file system (HDFS). Beta 9 optimizes for multi-thread I/O streams at maximum performance. This enhancement allows users to make Hadoop their core storage with no extra effort which is a pretty cool idea. Improved Performance –On a single transaction node, Beta 9 offers performance comparable with MySQL and MariaDB. As additional nodes are added, NuoDB performance improves significantly at near linear scale. Query & Explain Plan Logging – Beta 9 introduces SQL explain plans for your queries. Qualify queries with the word “EXPLAIN” and NuoDB will respond with the details of the execution plan allowing performance optimization to SQL. Through the NuoConsole, you can now kill hung or long running queries. Java App Server Support – Beta 9 now supports leading Web JEE app servers including JBoss, Tomcat, and ColdFusion. They’ve also reported: Improved PHP/PDO drivers Support for Drupal Faster Ruby on Rails driver The Hibernate Dialect supports version 4.1 And good news for my readers: numerous SQL enhancements They will share the results of the web console feedback with me.  I’ll let you know how it goes. Also the winner of their last contest was Jaime Martínez Lafargue!  Do leave a comment here once you complete the survey.  Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL Authority Tagged: NuoDB

    Read the article

  • Tuesday at OpenWorld: Identity Management

    - by Tanu Sood
    At Oracle OpenWorld? From keynotes, general sessions to product deep dives and executive events, this Tuesday is full of informational, educational and networking opportunities for you. Here’s a quick run-down of what’s happening today: Tuesday, October 2, 2012 KEYNOTE: The Oracle Cloud: Oracle’s Cloud Platform and Applications Strategy 8:00 a.m. – 9:45 a.m., Moscone North, Hall D Leading customers will join Oracle Executive Vice President Thomas Kurian to discuss how Oracle’s innovative cloud solutions are transforming how they manage their business, excite and retain their employees, and deliver great customer experiences through Oracle Cloud. GENERAL SESSION: Oracle Fusion Middleware Strategies Driving Business Innovation 10:15 a.m. – 11:15 a.m., Moscone North - Hall D Join Hasan Rizvi, Executive Vice President of Product in this strategy and roadmap session to hear how developers leverage new innovations in their applications and customers achieve their business innovation goals with Oracle Fusion Middleware. CON9437: Mobile Access Management 10:15 a.m. – 11:15 a.m., Moscone West 3022 The session will feature Identity Management evangelists from companies like Intuit, NetApp and Toyota to discuss how to extend your existing identity management infrastructure and policies to securely and seamlessly enable mobile user access. CON9162: Oracle Fusion Middleware: Meet This Year's Most Impressive Customer Projects 11:45 a.m. – 12:45 a.m., Moscone West, 3001 Hear from the winners of the 2012 Oracle Fusion Middleware Innovation Awards and see which customers are taking home a trophy for the 2012 Oracle Fusion Middleware Innovation Award.  Read more about the Innovation Awards here. CON9491: Enhancing the End-User Experience with Oracle Identity Governance applications 11:45 a.m. – 12:45 p.m., Moscone West 3008 Join experts from Visa and Oracle as they explore how Oracle Identity Governance solutions deliver complete identity administration and governance solutions with support for emerging requirements like cloud identities and mobile devices. CON9447: Enabling Access for Hundreds of Millions of Users 1:15 p.m. – 2:15 p.m., Moscone West 3008 Dealing with scale problems? Looking to address identity management requirements with million or so users in mind? Then take note of Cisco’s implementation. Join this session to hear first-hand how Cisco tackled identity management and scaled their implementation to bolster security and enforce compliance. CON9465: Next Generation Directory – Oracle Unified Directory 5:00 p.m. – 6:00 p.m., Moscone West 3008 Get the 360 degrees perspective from a solution provider, implementation services partner and the customer in this session to learn how the latest Oracle Unified Directory solutions can help you build a directory infrastructure that is optimized to support cloud, mobile and social networking and yet deliver on scale and performance. EVENTS: Executive Edge @ OpenWorld: Chief Security Officer (CSO) Summit 10:00 a.m. – 3:00 p.m. If you are attending the Executive Edge at Open World, be sure to check out the sessions at the Chief Security Officer Summit. Former Sr. Counsel for the National Security Agency, Joel Brenner, will be speaking about his new book "America the Vulnerable". In addition, PWC will present a panel discussion on "Crisis Management to Business Advantage: Security Leadership". See below for the complete agenda. PRODUCT DEMOS: And don’t forget to see Oracle identity Management solutions in action at Oracle OpenWorld DEMOgrounds. 
Demos (all in Moscone South, Right):
Access Management: Complete and Scalable Access Management - S-218
Access Management: Federating and Leveraging Social Identities - S-220
Access Management: Mobile Access Management - S-219
Access Management: Real-Time Authorizations - S-217
Access Management: Secure SOA and Web Services Security - S-223
Identity Governance: Modern Administration and Tooling - S-210
Identity Management Monitoring with Oracle Enterprise Manager - S-212
Oracle Directory Services Plus: Performant, Cloud-Ready - S-222
Oracle Identity Management: Closed-Loop Access Certification - S-221
Exhibition hall hours (all demos):
Monday, October 1: 9:30 a.m.–6:00 p.m. (dedicated hours 9:30 a.m.–10:45 a.m.)
Tuesday, October 2: 9:45 a.m.–6:00 p.m. (dedicated hours 2:15 p.m.–2:45 p.m.)
Wednesday, October 3: 9:45 a.m.–4:00 p.m. (dedicated hours 2:15 p.m.–3:30 p.m.)
For a complete listing, keep the Focus on Identity Management document handy. And don’t forget to converse with us while at OpenWorld @oracleidm. We look forward to hearing from you.

    Read the article

  • How to develop RPG Damage Formulas?

    - by user127817
    I'm developing a classical 2d RPG (in a similar vein to final fantasy) and I was wondering if anyone had some advice on how to do damage formulas/links to resources/examples? I'll explain my current setup. Hopefully I'm not overdoing it with this question, and I apologize if my questions is too large/broad My Characters stats are composed of the following: enum Stat { HP = 0, MP = 1, SP = 2, Strength = 3, Vitality = 4, Magic = 5, Spirit = 6, Skill = 7, Speed = 8, //Speed/Agility are the same thing Agility = 8, Evasion = 9, MgEvasion = 10, Accuracy = 11, Luck = 12, }; Vitality is basically defense to physical attacks and spirit is defense to magic attacks. All stats have fixed maximums (9999 for HP, 999 for MP/SP and 255 for the rest). With abilities, the maximums can be increased (99999 for HP, 9999 for HP/SP, 999 for the rest) with typical values (at level 100) before/after abilities+equipment+etc will be 8000/20,000 for HP, 800/2000 for SP/MP, 180/350 for other stats Late game Enemy HP will typically be in the lower millions (with a super boss having the maximum of ~12 million). I was wondering how do people actually develop proper damage formulas that scale correctly? For instance, based on this data, using the damage formulas for Final Fantasy X as a base looked very promising. A full reference here http://www.gamefaqs.com/ps2/197344-final-fantasy-x/faqs/31381 but as a quick example: Str = 127, 'Attack' command used, enemy Def = 34. 1. Physical Damage Calculation: Step 1 ------------------------------------- [{(Stat^3 ÷ 32) + 32} x DmCon ÷16] Step 2 ---------------------------------------- [{(127^3 ÷ 32) + 32} x 16 ÷ 16] Step 3 -------------------------------------- [{(2048383 ÷ 32) + 32} x 16 ÷ 16] Step 4 --------------------------------------------------- [{(64011) + 32} x 1] Step 5 -------------------------------------------------------- [{(64043 x 1)}] Step 6 ---------------------------------------------------- Base Damage = 64043 Step 7 ----------------------------------------- [{(Def - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------ [{(34 - 280.4)^2} ÷ 110] + 16 Step 9 ------------------------------------------------- [(-246)^2) ÷ 110] + 16 Step 10 ---------------------------------------------------- [60516 ÷ 110] + 16 Step 11 ------------------------------------------------------------ [550] + 16 Step 12 ---------------------------------------------------------- DefNum = 566 Step 13 ---------------------------------------------- [BaseDmg * DefNum ÷ 730] Step 14 --------------------------------------------------- [64043 * 566 ÷ 730] Step 15 ------------------------------------------------------ [36248338 ÷ 730] Step 16 ------------------------------------------------- Base Damage 2 = 49655 Step 17 ------------ Base Damage 2 * {730 - (Def * 51 - Def^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ---------------------- 49655 * {730 - (34 * 51 - 34^2 ÷ 11) ÷ 10} ÷ 730 Step 19 ------------------------- 49655 * {730 - (1734 - 1156 ÷ 11) ÷ 10} ÷ 730 Step 20 ------------------------------- 49655 * {730 - (1734 - 105) ÷ 10} ÷ 730 Step 21 ------------------------------------- 49655 * {730 - (1629) ÷ 10} ÷ 730 Step 22 --------------------------------------------- 49655 * {730 - 162} ÷ 730 Step 23 ----------------------------------------------------- 49655 * 568 ÷ 730 Step 24 -------------------------------------------------- Final Damage = 38635 I simply modified the dividers to include the attack rating of weapons and the armor rating of armor. 
Magic Damage is calculated as follows: Mag = 255, Ultima is used, enemy MDef = 1 Step 1 ----------------------------------- [DmCon * ([Stat^2 ÷ 6] + DmCon) ÷ 4] Step 2 ------------------------------------------ [70 * ([255^2 ÷ 6] + 70) ÷ 4] Step 3 ------------------------------------------ [70 * ([65025 ÷ 6] + 70) ÷ 4] Step 4 ------------------------------------------------ [70 * (10837 + 70) ÷ 4] Step 5 ----------------------------------------------------- [70 * (10907) ÷ 4] Step 6 ------------------------------------ Base Damage = 190872 [cut to 99999] Step 7 ---------------------------------------- [{(MDef - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------- [{(1 - 280.4)^2} ÷ 110] + 16 Step 9 ---------------------------------------------- [{(-279.4)^2} ÷ 110] + 16 Step 10 -------------------------------------------------- [(78064) ÷ 110] + 16 Step 11 ------------------------------------------------------------ [709] + 16 Step 12 --------------------------------------------------------- MDefNum = 725 Step 13 --------------------------------------------- [BaseDmg * MDefNum ÷ 730] Step 14 --------------------------------------------------- [99999 * 725 ÷ 730] Step 15 ------------------------------------------------- Base Damage 2 = 99314 Step 16 ---------- Base Damage 2 * {730 - (MDef * 51 - MDef^2 ÷ 11) ÷ 10} ÷ 730 Step 17 ------------------------ 99314 * {730 - (1 * 51 - 1^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ------------------------------ 99314 * {730 - (51 - 1 ÷ 11) ÷ 10} ÷ 730 Step 19 --------------------------------------- 99314 * {730 - (49) ÷ 10} ÷ 730 Step 20 ----------------------------------------------------- 99314 * 725 ÷ 730 Step 21 -------------------------------------------------- Final Damage = 98633 The problem is that the formulas completely fall apart once stats start going above 255. In particular Defense values over 300 or so start generating really strange behavior. High Strength + Defense stats lead to massive negative values for instance. While I might be able to modify the formulas to work correctly for my use case, it'd probably be easier just to use a completely new formula. How do people actually develop damage formulas? I was considering opening excel and trying to build the formula that way (mapping Attack Stats vs. Defense Stats for instance) but I was wondering if there's an easier way? While I can't convey the full game mechanics of my game here, might someone be able to suggest a good starting place for building a damage formula? Thanks
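For what it's worth, one common pattern when stats are allowed to grow past a fixed cap is a ratio-based formula such as power * attack^2 / (attack + defense): it never goes negative, it scales smoothly for any stat magnitude, and relative damage depends on the ratio of attack to defense rather than on tuning constants like 280.4. The sketch below only illustrates that shape and is not a drop-in for your mechanics; the names, the multipliers and the variance range are arbitrary, and it assumes stats are positive.
import java.util.concurrent.ThreadLocalRandom;

// Illustrative ratio-based damage formula: power * atk^2 / (atk + def).
// Assumes attack and defense are positive, so there is no division by zero.
public class DamageCalc {

    // attack  = Strength for physical hits or Magic for spells
    // defense = Vitality for physical hits or Spirit for spells
    // power   = per-skill multiplier, e.g. 1.0 for a basic attack
    public static long damage(long attack, long defense, double power) {
        double base = power * (double) attack * attack / (attack + defense);
        double variance = ThreadLocalRandom.current().nextDouble(0.9, 1.1); // +/-10% spread
        return Math.max(1L, Math.round(base * variance)); // always deal at least 1 point
    }

    public static void main(String[] args) {
        System.out.println(damage(127, 34, 1.0));    // early-game physical hit
        System.out.println(damage(350, 350, 1.0));   // evenly matched late game
        System.out.println(damage(2000, 100, 6.0));  // boosted stats against weak defense
    }
}
Per-skill power values and a cap expressed as a fraction of the target's maximum HP can then be layered on top without the formula breaking when Vitality climbs past 255.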

    Read the article

  • Silabs cp2102 driver problem

    - by Zxy
    I downloaded appropriate driver from its own site, unzipped it and then tried to install it. But: root@ghostrider:/home/zero/Downloads# tar xvf cp210x-3.1.0.tar.gz cp210x-3.1.0/ cp210x-3.1.0/COPYING cp210x-3.1.0/cp210x/ cp210x-3.1.0/cp210x-3.1.0.spec cp210x-3.1.0/cp210x/.rpmmacros cp210x-3.1.0/cp210x/configure cp210x-3.1.0/cp210x/cp210x.c cp210x-3.1.0/cp210x/cp210x.h cp210x-3.1.0/cp210x/cp210xuniversal.c cp210x-3.1.0/cp210x/cp210xuniversal.h cp210x-3.1.0/cp210x/installmod cp210x-3.1.0/cp210x/Makefile24 cp210x-3.1.0/cp210x/Makefile26 cp210x-3.1.0/cp210x/rpmmacros24 cp210x-3.1.0/cp210x/rpmmacros26 cp210x-3.1.0/cp210x/Rules.make cp210x-3.1.0/INSTALL cp210x-3.1.0/makerpm cp210x-3.1.0/PACKAGE-LIST cp210x-3.1.0/README cp210x-3.1.0/RELEASE-NOTES cp210x-3.1.0/REPORTING-BUGS cp210x-3.1.0/rpm/ cp210x-3.1.0/rpm/brp-java-repack-jars cp210x-3.1.0/rpm/brp-python-bytecompile cp210x-3.1.0/rpm/check-rpaths cp210x-3.1.0/rpm/check-rpaths-worker root@ghostrider:/home/zero/Downloads# cd cp210x-3.1.0 root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# ls COPYING cp210x-3.1.0.spec makerpm README REPORTING-BUGS cp210x INSTALL PACKAGE-LIST RELEASE-NOTES rpm root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# run ./makerpm No command 'run' found, did you mean: Command 'zrun' from package 'moreutils' (universe) Command 'runq' from package 'exim4-daemon-heavy' (main) Command 'runq' from package 'exim4-daemon-light' (main) Command 'runq' from package 'sendmail-bin' (universe) Command 'grun' from package 'grun' (universe) Command 'qrun' from package 'torque-client' (universe) Command 'qrun' from package 'torque-client-x11' (universe) Command 'lrun' from package 'lustre-utils' (universe) Command 'rn' from package 'trn' (multiverse) Command 'rn' from package 'trn4' (multiverse) Command 'rup' from package 'rstat-client' (universe) Command 'srun' from package 'slurm-llnl' (universe) run: command not found root@ghostrider:/home/zero/Downloads/cp210x-3.1.0# sudo ./makerpm + uname -r + kernel_release=3.2.0-25-generic-pae + pwd + current_dir=/home/zero/Downloads/cp210x-3.1.0 + export current_dir + uname -r + KVER=3.2.0-25-generic-pae + echo 3.2.0-25-generic-pae + awk -F . -- { print $1 } + KVER1=3 + echo 3.2.0-25-generic-pae + awk -F . -- { print $2 } + KVER2=2 + sed -e s/3\.2\.//g + echo 3.2.0-25-generic-pae + KVER3=0-25-generic-pae + [ -f /root/.rpmmacros ] + echo 2 2 + [ 2 == 4 ] ./makerpm: 25: [: 2: unexpected operator + echo 0-25-generic-pae 0-25-generic-pae + [ 0-25-generic-pae -gt 15 ] ./makerpm: 29: [: Illegal number: 0-25-generic-pae + cp /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros24 /root/.rpmmacros + d=/var/tmp/silabs + [ ! 
-d /var/tmp/silabs ] + mkdir /var/tmp/silabs + cd /var/tmp/silabs + r=/var/tmp/silabs/rpmbuild + o=cp210x-3.1.0 + s=/var/tmp/silabs/rpmbuild/SOURCES + spec=cp210x-3.1.0.spec + rm -rf /var/tmp/silabs/rpmbuild + mkdir rpmbuild + mkdir rpmbuild/SOURCES + mkdir rpmbuild/SRPMS + mkdir rpmbuild/SPECS + mkdir rpmbuild/BUILD + mkdir rpmbuild/RPMS + cd /var/tmp/silabs/rpmbuild/SOURCES + rm -rf cp210x-3.1.0 + mkdir cp210x-3.1.0 + cp -r /home/zero/Downloads/cp210x-3.1.0/cp210x/Makefile24 /home/zero/Downloads/cp210x-3.1.0/cp210x/Makefile26 /home/zero/Downloads/cp210x- 3.1.0/cp210x/Rules.make /home/zero/Downloads/cp210x-3.1.0/cp210x/configure /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210x.c /home/zero/Downloads/cp210x- 3.1.0/cp210x/cp210x.h /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210xuniversal.c /home/zero/Downloads/cp210x-3.1.0/cp210x/cp210xuniversal.h /home/zero/Downloads/cp210x- 3.1.0/cp210x/installmod /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros24 /home/zero/Downloads/cp210x-3.1.0/cp210x/rpmmacros26 cp210x-3.1.0 + echo 2 2 + [ 2 == 4 ] ./makerpm: 64: [: 2: unexpected operator + echo 0-25-generic-pae 0-25-generic-pae + [ 0-25-generic-pae -gt 15 ] ./makerpm: 68: [: Illegal number: 0-25-generic-pae + cp /home/zero/Downloads/cp210x-3.1.0/cp210x/.rpmmacros24 cp210x-3.1.0/.rpmmacros cp: cannot stat `/home/zero/Downloads/cp210x-3.1.0/cp210x/.rpmmacros24': No such file or directory + MyCopy=0 + rm -f cp210x-3.1.0.tar + rm -f cp210x-3.1.0.tar.gz + tar -cf cp210x-3.1.0.tar cp210x-3.1.0 + gzip cp210x-3.1.0.tar + cp /home/zero/Downloads/cp210x-3.1.0/cp210x-3.1.0.spec /var/tmp/silabs/rpmbuild/SPECS + rpmbuild -ba /var/tmp/silabs/rpmbuild/SPECS/cp210x-3.1.0.spec ./makerpm: 121: ./makerpm: rpmbuild: not found + [ -f /root/.rpmmacros.cp210x ] How may I solve my problem? Thanks

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service; Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer; Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer. In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this:

    var options = {
        host: acs.namespace() + '-sb.accesscontrol.windows.net',
        path: '/WRAPv0.9/',
        method: 'POST'
    };
    var values = {
        wrap_name: acs.issuerName(),
        wrap_password: acs.issuerSecret(),
        wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/'
    };
    var req = https.request(options, function (res) {
        console.log("statusCode: ", res.statusCode);
        console.log("headers: ", res.headers);
        res.on('data', function (d) {
            var token = qs.parse(d.toString('utf8'));
            callback(token.wrap_access_token);
        });
    });
    req.write(qs.stringify(values));
    req.end();

Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call:

    token = 'WRAP access_token=\"' + swt + '\"';
    //...
    var reqHeaders = {
        Authorization: token
    };
    var options = {
        host: acs.namespace() + '.servicebus.windows.net',
        path: '/rest/reverse?string=' + requestUrl.query.string,
        headers: reqHeaders
    };
    var req = https.request(options, function (res) {
        console.log("statusCode: ", res.statusCode);
        console.log("headers: ", res.headers);
        response.writeHead(res.statusCode, res.headers);
        res.on('data', function (d) {
            var reversed = d.toString('utf8')
            console.log('svc returned: ' + d.toString('utf8'));
            response.end(reversed);
        });
    });
    req.end();

Running the sample: usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:
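To underline the interoperability point, the same two REST calls can be made from just about any HTTP client. Here is a rough Python equivalent (a sketch, not part of the original sample): the namespace, issuer name and issuer secret are placeholders for the same ACS values the Node.js code reads via the acs module, while the endpoints, parameter names and header format are exactly those shown above.

    import requests
    from urllib.parse import parse_qs

    namespace = "your-namespace"        # placeholder ACS/Service Bus namespace
    issuer_name = "serviceidentity"     # placeholder issuer name
    issuer_secret = "..."               # placeholder issuer secret

    # Step 1: POST the service identity credentials to ACS and read the SWT back.
    acs_resp = requests.post(
        "https://" + namespace + "-sb.accesscontrol.windows.net/WRAPv0.9/",
        data={
            "wrap_name": issuer_name,
            "wrap_password": issuer_secret,
            "wrap_scope": "http://" + namespace + ".servicebus.windows.net/",
        },
    )
    acs_resp.raise_for_status()
    swt = parse_qs(acs_resp.text)["wrap_access_token"][0]

    # Step 2: call the relayed REST service, presenting the SWT in the Authorization header.
    relay_resp = requests.get(
        "https://" + namespace + ".servicebus.windows.net/rest/reverse",
        params={"string": "abc123"},
        headers={"Authorization": 'WRAP access_token="' + swt + '"'},
    )
    print(relay_resp.status_code, relay_resp.text)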

    Read the article

  • Create Panoramic Photos with Windows Live Photo Gallery

    - by Matthew Guay
    Have you ever wanted to capture the view from a mountain or the full size of a building?  Here’s how you can stitch multiple shots together into the perfect panoramic picture for free with Windows Live Photo Gallery. Getting Started First, make sure you have Windows Live Photo Gallery installed (link below).  Live Photo Gallery is part of the Windows Live Essentials suite, you can select other programs to install along with it if you want. Make sure to uncheck setting your home page to MSN and setting your search provider as Bing if you don’t want them changed.   Now, make sure you have pictures that will work good for a panorama.  These need to be taken from the same spot, and the edges of the pictures need to overlap so the program can find where the pictures connect.  Here we have taken pictures inside a building with a cell phone camera. Make your Panorama Open Live Photo Gallery, and find the pictures you want to use in your panorama.  It will automatically index and display all of the photos in your Pictures folder or Library if you’re using Windows 7. If your pictures are saved elsewhere, add that folder to Photo Gallery.  Click File, Include a folder in the gallery, and select the correct folder at the prompt. Now select all of the pictures that you will use in your panorama.  You can easily do this by clicking the checkbox on each picture that appears when you hover over it.    Once all of the pictures are selected, click Make in the menu bar and select Create panoramic photo… Alternately, right-click on any of the pictures you’ve selected, and click Create panoramic photo… Live Photo Gallery will analyze your photos and compost them together to create a panorama.  The amount of time it takes will vary depending on the number of photos, size of the pictures, and computer speed. When it’s finished making the panorama, you’ll be prompted to enter a file name and save the picture. Your new panorama picture will open as soon as it’s saved.  Depending on your shots, the picture may have quite a bit of black space around the edges where each picture didn’t cover the exact same amount of area. To correct this, click Fix on the menu bar, and then select Crop Photo in the sidebar that opens. Select the center of the picture with the crop tool, and click Apply when you’ve got the selection you want. Live Photo Gallery automatically saves your picture changes, and you can revert back to the original picture if you wish. Now you’ve got a nice panoramic shot, trimmed and ready to print, share, and more. Conclusion Panoramic shots are great ways to capture your whole surroundings, whether it’s a sports stadium, mall, or a scenic mountain view.  They can also be a great way to capture more with low-resolution cameras. 
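If you would rather script the stitching than click through a GUI, the same idea (overlapping shots taken from one spot, matched along their edges) can be reproduced with OpenCV's stitcher. This is a rough sketch, not part of Windows Live Photo Gallery; the file names are hypothetical and you need the opencv-python package installed.

    import cv2

    # Overlapping shots taken from the same spot, in left-to-right order.
    files = ["shot1.jpg", "shot2.jpg", "shot3.jpg"]   # hypothetical file names
    images = [cv2.imread(f) for f in files]

    # The stitcher finds the overlapping regions and blends the images together.
    stitcher = cv2.Stitcher_create()
    status, pano = stitcher.stitch(images)

    if status == 0:  # 0 == Stitcher_OK
        cv2.imwrite("panorama.jpg", pano)   # crop away the black borders afterwards if needed
    else:
        print("Stitching failed, status:", status)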
Link: Download Windows Live Photo Gallery

    Read the article

  • How the "migrations" approach makes database continuous integration possible

    - by David Atkinson
    Testing a database upgrade script as part of a continuous integration process will only work if there is an easy way to automate the generation of the upgrade scripts. There are two common approaches to managing upgrade scripts. The first is to maintain a set of scripts as-you-go-along. Many SQL developers I've encountered will store these in a folder prefixed numerically to ensure they are ordered as they are intended to be run. Occasionally there is an accompanying document or a batch file that ensures that the scripts are run in the defined order. Writing these scripts during the course of development requires discipline. It's all too easy to load up the table designer and to make a change directly to the development database, rather than to save off the ALTER statement that is required when the same change is made to production. This discipline can add considerable overhead to the development process. However, come the end of the project, everything is ready for final testing and deployment. The second development paradigm is to not do the above. Changes are made to the development database without considering the incremental update scripts required to effect the changes. At the end of the project, the SQL developer or DBA, is tasked to work out what changes have been made, and to hand-craft the upgrade scripts retrospectively. The end of the project is the wrong time to be doing this, as the pressure is mounting to ship the product. And where data deployment is involved, it is prudent not to feel rushed. Schema comparison tools such as SQL Compare have made this latter technique more bearable. These tools work by analyzing the before and after states of a database schema, and calculating the SQL required to transition the database. Problem solved? Not entirely. Schema comparison tools are huge time savers, but they have their limitations. There are certain changes that can be made to a database that can't be determined purely from observing the static schema states. If a column is split, how do we determine the algorithm required to copy the data into the new columns? If a NOT NULL column is added without a default, how do we populate the new field for existing records in the target? If we rename a table, how do we know we've done a rename, as we could equally have dropped a table and created a new one? All the above are examples of situations where developer intent is required to supplement the script generation engine. SQL Source Control 3 and SQL Compare 10 introduced a new feature, migration scripts, allowing developers to add custom scripts to replace the default script generation behavior. These scripts are committed to source control alongside the schema changes, and are associated with one or more changesets. Before this capability was introduced, any schema change that required additional developer intent would break any attempt at auto-generation of the upgrade script, rendering deployment testing as part of continuous integration useless. SQL Compare will now generate upgrade scripts not only using its diffing engine, but also using the knowledge supplied by developers in the guise of migration scripts. In future posts I will describe the necessary command line syntax to leverage this feature as part of an automated build process such as continuous integration.
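To make the first approach concrete, the heart of an automated deployment test is usually nothing more than a runner that applies the numbered scripts, in order, against a freshly restored copy of production. Here is a minimal Python sketch of that idea; it uses SQLite purely so the example is self-contained, and the folder layout and version-table name are assumptions rather than anything prescribed by SQL Compare or SQL Source Control.

    import sqlite3
    from pathlib import Path

    def apply_migrations(db_path, scripts_dir):
        """Apply every numbered .sql script, in order, that has not been run yet."""
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)")
        applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}

        # '001_create_orders.sql', '002_split_name_column.sql', ... sort lexically by prefix.
        for script in sorted(Path(scripts_dir).glob("*.sql")):
            if script.name in applied:
                continue
            conn.executescript(script.read_text())   # the upgrade itself
            conn.execute("INSERT INTO schema_version VALUES (?)", (script.name,))
            conn.commit()
            print("applied", script.name)

    if __name__ == "__main__":
        apply_migrations("test_upgrade.db", "migrations")   # run by the CI build after restoring a baseline

A continuous integration job can run a script like this against a restored baseline and fail the build if any migration raises an error, which is exactly the feedback loop the migrations feature is designed to keep intact.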

    Read the article

  • The busy developers guide to the Kinect SDK Beta

    - by mbcrump
    The Kinect is awesome. From day one, I've said this thing has got potential. After playing with several open-source Kinect projects, I am pleased to announce that Microsoft has released the official SDK beta on 6/16/2011. I've created this quick start guide to get you up to speed in no time flat. Let's begin: What is it? The Kinect for Windows SDK beta is a starter kit for applications developers that includes APIs, sample code, and drivers. This SDK enables the academic research and enthusiast communities to create rich experiences by using Microsoft Xbox 360 Kinect sensor technology on computers running Windows 7. (defined by Microsoft) Links worth checking out: Download Kinect for Windows SDK beta – you can download either a 32-bit or 64-bit SDK depending on your OS. Readme for Kinect for Windows SDK Beta from Microsoft Research. Programming Guide: Getting Started with the Kinect for Windows SDK Beta. Code Walkthroughs of the samples that ship with the Kinect for Windows SDK beta (found in the \Samples folder). Coding4Fun Kinect Toolkit – lots of extension methods and controls for WPF and WinForms. Kinect Mouse Cursor – use your hands to control things like a mouse; created by Brian Peek. Kinect Paint – basically MS Paint, but you use your hands! Kinect for Windows SDK Quickstarts: Installing and Using the Kinect Sensor. Getting it installed: After downloading the Kinect SDK beta, double-click the installer to get the ball rolling. Hit the Next button a few times and it should finish installing. Once you have everything installed, simply plug your Kinect device into the USB port on your computer and hopefully you will get the following screen: Once installed, you are going to want to check out the following folders: C:\Program Files (x86)\Microsoft Research KinectSDK – this contains the actual Kinect sample executables along with the documentation as a CHM file. Also check out the C:\Users\Public\Documents\Microsoft Research KinectSDK Samples directory: the main thing to note here is that these folders (Audio, NUI) contain the source code of the applications, so you can compile/build them yourself. DEMO time: Let's get started with some demos. Navigate to the C:\Program Files (x86)\Microsoft Research KinectSDK folder and double-click on ShapeGame.exe. Next up is SkeletalViewer.exe (image taken from http://www.i-programmer.info/news/91-hardware/2619-microsoft-launch-kinect-sdk-beta.html as I could not get a good image using SnagIt). At this point, you will have to download Kinect Mouse Cursor – this is really cool because you can use your hands to control the mouse cursor; I actually used it to resize itself. Last up is Kinect Paint – this is very cool, just make sure you read the instructions! MS Paint on steroids! A few tips for getting started building Kinect applications: WPF appears to be the way to go when building Kinect applications, and you must use Visual Studio 2010. You're going to need to reference Microsoft.Research.Kinect.dll when building a Kinect application: right-click on References, go to Browse, navigate to C:\Program Files (x86)\Microsoft Research KinectSDK and select Microsoft.Research.Kinect.dll. You are also going to want to make sure your project has the Platform target set to x86. The Coding4Fun Kinect Toolkit really makes things easier with extension methods and controls; just note that it is for WinForms or WPF. Conclusion: It looks like we have a lot of fun in store with the Kinect SDK.
I’m very excited about the release and have already been thinking about all the applications that I can begin building. It seems that development will be easier now that we have an official SDK and the great work from Coding4Fun. Please subscribe to my blog or follow me on twitter for more information about Kinect, Silverlight and other great technology.

    Read the article

  • Integrating JavaFX Scene Builder in the IDEs

    - by Jerome Cambon
    I experienced recently using Scene Builder from Netbeans, Eclipse and IntelliJ IDEA. As you may know, Scene Builder is a standalone tool, that can be used independently of any IDE. But it can be very convenient to use it with your favorite IDE, for instance start it by double-clicking on an FXML file, or run samples delivered with Scene Builder.  I'm sharing here with you few tweaks that I had to do for a better integration. Scene Builder 1.1 Developer Preview should be installed before doing the tweaks. The steps below have been done on Windows 7. It should be very similar on both Mac OS and Linux. Please tell me if you find any issue on one of these 2 platforms. Netbeans 7.3 Netbeans 7.3 can be downloaded from here. Creating a New FXML project Part of the JavaFx projects, Netbeans allows to create a 'JavaFX FXML Application', that creates a JavaFx project based on FXML description. The FXML file will be editable with Scene Builder. Starting Scene Builder from Netbeans If SceneBuilder 1.1 is installed, Netbeans will discover it automatically.In case of issue, one can open the Options panel, Java section, JavaFx tab. Scene Builder home should appear here. You can then either Open the FXML file with Scene Builder, or edit it with the Netbeans FXML editor : When 'Open' is selected, Scene Builder appears on top of the Netbeans window : When 'Edit' is selected, the FXML is opened in the Netbeans FXML editor, which support syntax highlighting and completion : Using Scene Builder Samples Scene Builder provides Netbeans projects, that can be opened/run directly : Eclipse 4.2.1 + e(fx)clipse 0.1.1 JavaFX integration in Eclipse has been done with the e(fx)clipse plugin. A distribution bundle containing Eclipse and e(fx)clipse is provided here. Creating New FXML project All the JavaFX-related projects can be found in 'Other' section : First create a new JavaFX project: Enter the project name (Test here). JavaFX delivery will be found in the JRE. Then, create a 'New FXML Document': Enter the FXML file name (Sample here). You may also want to choose the FXML document root element (AnchorPane by default). Dynamic root is for advanced users which want to manage custom types. Starting Scene Builder from Eclipse Once created, you can then either Open the FXML file with Scene Builder, or Open it in the Eclipse FXML editor : Using Scene Builder Samples from Eclipse To use Scene Builder samples, first create a new JavaFX Project (from 'Other' section): Then, on the next panel, 'Link additionnal source': … and select the source directory of a Scene Builder example : HelloWorld here (the parent directory of the java package should be selected).Then, choose a 'Folder name' for your sample: You can now run the Scene Builder example by right-clicking the Main.java source file: IntelliJ IDEA 11.1.3 IntelliJ IDEA Community Edition can be downloaded from here. IntelliJ IDEA has no specific JavaFX integration. Creating New IntelliJ project from existing source Since IntelliJ has no JavaFX project knowledge, we are using the Scene Builder samples as a starting point. We are going to create a new Java project from the HelloWorld sample: Then, click twice on 'Next' (nothing to change), then 'Finish'. The 'HelloWorld' project is created. Starting Scene Builder from IntelliJ We need to tell the IDE that FXML files are opened with an external application. Then, the OS file association will be used. To do this, open the File->Settings panel. Then, select 'File Types' and 'Files opened in associated applications'. 
And add a new wildcard : '*.fxml' : Now, from the HelloWorld project, you can double-click on HelloWorld.fxml : Scene Builder window appears on top of the IntelliJ window : Using Scene Builder Samples from IntelliJ We need to tell IntelliJ that the fxml files must be copied in the build directory.To do that, from the HelloWorld directory, open the 'idea' section, and edit the 'compiler.xml' file. We need to add an '*.fxml' entry: Then, you can run the sample from HelloWorld project, by right-clicking the Main class:

    Read the article

  • Installing CUDA on Ubuntu 12.04 with nvidia driver 295.59

    - by johnmcd
    I have been trying to get cuda to run on a nvidia gt 650m based laptop. I am running Ubuntu 12.04 with the nvidia 295.59 driver. Also, my laptop uses Optimus so I have install the driver via bumblebee. Bumblebee is not working correctly yet -- however I believe it is possible to install CUDA independently. To install CUDA I have followed the instructions detailed here: How can I get nVidia CUDA or OpenCL working on a laptop with nVidia discrete card/Intel Integrated Graphics? However I am still running into problem building the sdk. I made the changes specified at the above link in common.mk, but I got the following (snippet) from the build process: make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL' /usr/bin/ld: warning: libnvidia-tls.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libnvidia-glcore.so.302.17, needed by /usr/lib/nvidia-current/libGL.so, not found (try using -rpath or -rpath-link) /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv012tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv000glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv017tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv013glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv018glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv022tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv007tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv009tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv015glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv016tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv001glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv006tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv011tls' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv020glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv019glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv002glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv021glcore' /usr/lib/nvidia-current/libGL.so: undefined reference to `_nv014tls' collect2: ld returned 1 exit status make[2]: *** [../../bin/linux/release/fluidsGL] Error 1 make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/fluidsGL' make[1]: *** [src/fluidsGL/Makefile.ph_build] Error 2 make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C' make: *** [all] Error 2 The libraries that ld warns about are on my system and are installed on the system: $ locate libnvidia-tls.so.302.17 libnvidia-glcore.so.302.17 /usr/lib/nvidia-current/libnvidia-glcore.so.302.17 /usr/lib/nvidia-current/libnvidia-tls.so.302.17 /usr/lib/nvidia-current/tls/libnvidia-tls.so.302.17 
/usr/lib32/nvidia-current/libnvidia-glcore.so.302.17 /usr/lib32/nvidia-current/libnvidia-tls.so.302.17 /usr/lib32/nvidia-current/tls/libnvidia-tls.so.302.17 however /usr/lib/nvidia-current and /usr/lib32/nvidia-current are not being picked up by ldconfig. I have tried adding them by adding a file to /etc/ld.so.conf.d/ which gets past this error, however now I am getting the following error: make[2]: Entering directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv' cc1plus: warning: command line option ‘-Wimplicit’ is valid for C/ObjC but not for C++ [enabled by default] obj/x86_64/release/deviceQueryDrv.cpp.o: In function `main': deviceQueryDrv.cpp:(.text.startup+0x5f): undefined reference to `cuInit' deviceQueryDrv.cpp:(.text.startup+0x99): undefined reference to `cuDeviceGetCount' deviceQueryDrv.cpp:(.text.startup+0x10b): undefined reference to `cuDeviceComputeCapability' deviceQueryDrv.cpp:(.text.startup+0x127): undefined reference to `cuDeviceGetName' deviceQueryDrv.cpp:(.text.startup+0x16a): undefined reference to `cuDriverGetVersion' deviceQueryDrv.cpp:(.text.startup+0x1f0): undefined reference to `cuDeviceTotalMem_v2' deviceQueryDrv.cpp:(.text.startup+0x262): undefined reference to `cuDeviceGetAttribute' deviceQueryDrv.cpp:(.text.startup+0x457): undefined reference to `cuDeviceGetAttribute' deviceQueryDrv.cpp:(.text.startup+0x4bc): undefined reference to `cuDeviceGetAttribute' deviceQueryDrv.cpp:(.text.startup+0x502): undefined reference to `cuDeviceGetAttribute' deviceQueryDrv.cpp:(.text.startup+0x533): undefined reference to `cuDeviceGetAttribute' obj/x86_64/release/deviceQueryDrv.cpp.o:deviceQueryDrv.cpp:(.text.startup+0x55e): more undefined references to `cuDeviceGetAttribute' follow collect2: ld returned 1 exit status make[2]: *** [../../bin/linux/release/deviceQueryDrv] Error 1 make[2]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C/src/deviceQueryDrv' make[1]: *** [src/deviceQueryDrv/Makefile.ph_build] Error 2 make[1]: Leaving directory `/home/john/NVIDIA_GPU_Computing_SDK/C' make: *** [all] Error 2 I would appreciate any help that anyone can provide me with. If I can provide any further information please let me know. Thanks.

    Read the article

  • Database – Beginning with Cloud Database As A Service

    - by Pinal Dave
    I love my weekend projects. Everybody does different activities on their weekend – traveling, reading, or just nothing. Every weekend I try to do something creative and different in the database world. The goal is to learn something new, and if I enjoy the learning experience I share it with the world. This weekend, I decided to explore Cloud Database As A Service – Morpheus. In my career I have managed many databases in the cloud and I have good experience in managing them. I should highlight that today's applications use multiple databases, from SQL for transactions and analytics, NoSQL for documents, and in-memory for caching, to indexing for search. Provisioning and deploying these databases often requires extensive expertise and time. Often these databases are also not deployed on the same infrastructure and can create unnecessary latency between the application layer and the databases. Not to mention the different quality of service depending on the infrastructure and the service provider where they are deployed. Moreover, there are additional problems that I have experienced with a traditional database setup when hosted in the cloud: database provisioning and orchestration, slow speed due to hardware issues, poor monitoring tools, and high network latency. Now, if you have great software and an expert network engineer, you can continuously work on the above problems and overcome them. However, not every organization has the luxury of having top-notch experts in the field. The above issues are related to infrastructure, but there are a few more problems related to the software/application side as well. Here are the top three things which can be problems if you do not have an application expert: replication and clustering, simple provisioning of hard drive space, and automatic sharding. Well, Morpheus looks like a product built by experts who have faced similar situations in the past. The product pretty much addresses all the pain points of developers and database administrators. What is different about Morpheus is that it offers a variety of databases as a service: MySQL, MongoDB, ElasticSearch and Redis. Thus users can pick and choose any combination of these databases. All of them can be provisioned in a matter of minutes with a simple and intuitive point-and-click user interface. The Morpheus cloud is built on Solid State Drives (SSD) and is designed for high-speed database transactions. In addition it offers a direct link to Amazon Web Services to minimize latency between the application layer and the databases. Here are a few steps on how one can get started with Morpheus. Follow along with me. First go to http://www.gomorpheus.com and register for a new, free account.
Step 1: Sign up. It is very simple to sign up for Morpheus.
Step 2: Select your database. I use MySQL for my daily routine, so I selected MySQL. Upon clicking the big red button to add an instance, a dialog for creating a new instance appeared.
Step 3: Create a user. Now we just have to create a user in the portal which we will use to connect to a database hosted at Morpheus. Click on your database instance and it will bring you to the User screen. Here you will again notice a big red button to create a new user. I created a user with my first name.
Step 4: Configure your MySQL client. I used MySQL Workbench and connected to the MySQL instance I had created, using its IP address and the new user. That's it! You are connected to the MySQL instance and can now create your objects just like you would on your local box.
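The same connection works from code as well as from MySQL Workbench. Here is a minimal Python sketch using the mysql-connector-python package; the host, user, password and database names are placeholders for whatever your own instance and user screen show, not real values.

    import mysql.connector  # pip install mysql-connector-python

    # Placeholders: use the IP address shown for your instance and the user you created.
    conn = mysql.connector.connect(
        host="203.0.113.10",
        user="pinal",
        password="your-password",
        database="test",
    )

    cur = conn.cursor()
    cur.execute("SELECT VERSION()")   # quick check that the instance answers
    print(cur.fetchone())

    cur.execute("CREATE TABLE IF NOT EXISTS notes (id INT PRIMARY KEY, body VARCHAR(100))")
    conn.commit()
    conn.close()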
You will have all the features of Morpheus available while you work with your database. Dashboard: While working with Morpheus, I was most impressed with its dashboard; in future blog posts I will write more about this feature. With Morpheus you use the same process for provisioning and connecting to the other databases: MongoDB, ElasticSearch and Redis. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Software Architecture verses Software Design

    Recently, I was asked what the differences between software architecture and software design are. At a very superficial level both architecture and design seem to mean relatively the same thing. However, if we examine both of these terms further we will find that they are in fact very different because of the level of detail they encompass. Software architecture can be defined as the essence of an application because it deals with high-level concepts that do not include any details as to how they will be implemented. To me this gives stakeholders a view of a system or application as if someone were viewing the earth from outer space. At this distance only very basic elements of the earth can be detected, like land, weather and water. As the viewer comes closer to earth the details in this view start to become more defined: features of the earth's surface start to take form, and man-made structures can be detected. The process of transitioning a view from outer space to inside our earth's atmosphere is similar to how an architectural concept is transformed into an architectural design. From this vantage point stakeholders can start to see buildings and other structures as if they were looking out of a small plane window. This distance is still high enough to see a large area of the earth's surface while still showing some details about the surface. This viewing point is very similar to the actual design process of an application in that it takes the very high-level architectural concept or concepts and applies concrete design details to form a software design that encompasses the actual implementation details in the form of responsibilities and functions. Examples of these details include interfaces, components, data, and connections. In review, software architecture deals with high-level concepts without regard to any implementation details. Software design, on the other hand, takes high-level concepts and applies concrete details so that software can be implemented. As part of the transition from software architecture to software design, an evaluation of the architecture is recommended. There are several benefits to including this step as part of the transition process. It allows projects to ensure that they are on the correct path to meeting the stakeholders' requirement goals, identifies possible cost savings, and can be used to find missing or nonspecific requirements that cause ambiguity in a design. In the book “Evaluating Software Architectures: Methods and Case Studies”, the authors define key benefits of adding an architectural review process to ensure that an architecture is ready to move on to the design phase. Benefits of evaluating software architecture:
- Gathers all stakeholders to communicate about the project
- Goals are clearly defined in regard to the creation or validation of specific requirements
- Goals are prioritized so that when conflicts occur decisions will be made based on goal priority
- Defines a clear expectation of the architecture so that all stakeholders have a keen understanding of the project
- Ensures high-quality documentation of the architecture
- Enables discoveries of architectural reuse
- Increases the quality of architecture practices
I can remember a few projects that I worked on that could have really used an architectural review prior to being passed on to developers.
One of these projects was to create some new advertising space on the company's website in order to sell space based on location and some other criteria. I was one of the developers selected to lead this project, and I was given a high-level design concept and a long list of ever-changing requirements because the sales department had no clear direction as to what exactly the project was going to do or how they were going to bill the clients once they actually agreed to purchase the ad space. In my personal opinion IT should have pushed back to have the requirements further articulated instead of forcing programmers to code blindly, attempting to build such an ambiguous project. Unfortunately, we had to suffer with this project for about 4 months when it should have only taken 1.5 months to complete, due to the constantly changing and unclear requirements. References: Clements, P., Kazman, R., & Klein, M. (2002). Evaluating Software Architectures. Westford, Massachusetts: Courier Westford.

    Read the article

  • How can we improve overall Programmer Education & Training?

    - by crosenblum
    Last week, I was just viewing this amazing interview by Kevin Rose of Phillip Rosedale, of Second Life. And they had an amazing discussion about how to find, hire and identify good programmer's, and how hard it is to find good ones. Which has lead me to really think about the way we programmer's learn, are taught. For a majority of us, myself included, we are self-taught. Which is great about being a programmer, anyone can learn and develop skills. But this also means, that there is no real standards of what a good programmer is/are, and what kind of environment's encourage the growth of programming skills. This isn't so much a question, but just a desire in me, to see how we can change the culture of programming, and the manager's of programming, so that education and self-improvement is encouraged. There are a lot of avenue's for continued education, youtube videos, books, conferences, but because of the experiental nature of what we do, it isn't always clear what's important to learn and to master. Let's look at the The Joel 12 Steps. The Joel Test Do you use source control? Can you make a build in one step? Do you make daily builds? Do you have a bug database? Do you fix bugs before writing new code? Do you have an up-to-date schedule? Do you have a spec? Do programmers have quiet working conditions? Do you use the best tools money can buy? Do you have testers? Do new candidates write code during their interview? Do you do hallway usability testing? I think all of these have important value, but because of something I call the Experiential Gap, if a programmer or manager has never experienced any of the negative consequences for not having done items on the list, they will never see the need to do any of them. The Experiental Gap, is my basic theory, that each of us has different jobs and different experiences. So for some of us, that have always worked with dozens of programmer's, source control is a must have. But for people who have always been the only programmer, they can not imagine the need for source control. And it's because of this major flaw in how we learn, that we evaluate people by what best practices they do or not do, and the reason for either can start a flame war. We always evaluate people in our field by what they do, and think "Oh if this guy/gal isn't doing xyz best practice, he/she can't be a good programmer, so let's not waste time or energy talking to them." This is exactly why we have so many programming flame wars, that it becomes, because of the Experiental Gap, we can't imagine people not having made the decisions that we have had to made. So this has lead me to think, that we totally need to rethink how we train, educate and manage programmer's. For example, what percentage of you have had encouragement by your manager's to go to conferences, and even have them pay for it? For me, and a lot of people, this is extremely rare, a lot of us would love to go to conferences, to learn more, but the money ain't there to do that. So the point of this question is really to spark a lot of how can we train, learn and manage better? How can we create a new culture of learning that doesn't insult people for not having the same job experiences. Yes we all have jobs and work to do, but our ability to do our jobs well, depends on our desire, interest and support in improving our mastery of our skills. 
Right now, I see our culture being rather disorganized, we support the elite, but those tons of us that want to get better, just don't have enough support to learn and improve ourselves. I mean, do we as an industry, want to be perceived as just replaceable cogs? Thank you...

    Read the article

  • Bridging the Gap in Cloud, Big Data, and Real-time

    - by Dain C. Hansen
    With all the buzz around big data and cloud computing, it is easy to overlook one of your most precious commodities—your data. Today’s businesses cannot stand still when it comes to data. Market success now depends on speed, volume, complexity, and keeping pace with the latest data integration breakthroughs. Are you up to speed with big data, cloud integration, real-time analytics? Join us in this three-part blog series where we’ll look at each component in more detail. Meet us online on October 24th where we’ll take your questions about what issues you are facing in this brave new world of integration. Let’s start first with cloud. What happens with your data when you decide to implement a private cloud architecture? Or a public cloud? Data integration solutions play a vital role migrating data simply, efficiently, and reliably to the cloud; they are a necessary ingredient of any platform as a service strategy because they support cloud deployments with data-layer application integration between on-premise and cloud environments of all kinds. For private cloud architectures, consolidation of your databases and data stores is an important step to take to be able to receive the full benefits of cloud computing. Private cloud integration requires bidirectional replication between heterogeneous systems to allow you to perform data consolidation without interrupting your business operations. In addition, bulk loading and transforming data into and out of your private cloud is a crucial step for companies moving to a private cloud. There is also a need to manage data services as part of SOA/BPM solutions that enable agile application delivery and help build shared data services for the organization. But what about public cloud? If you have moved your data to a public cloud application, you may also need to connect your on-premise enterprise systems and the cloud environment by moving data in bulk or as real-time transactions across geographies. For both public and private cloud architectures, Oracle offers a complete and extensible set of integration options that span not only data integration but also service and process integration, security, and management. For those companies investing in Oracle Cloud, you can move your data through Oracle SOA Suite using REST APIs to Oracle Messaging Cloud Service—a new service that lets applications deployed in Oracle Cloud securely and reliably communicate over Java Messaging Service. As an example of loading and transforming data into other public clouds, Oracle Data Integrator supports a knowledge module for Salesforce.com—now available on AppExchange. Other third-party knowledge modules are being developed by customers and partners every day. To learn more about how to leverage Oracle’s Data Integration products for cloud, join us live: Data Integration Breakthroughs Webcast on October 24th, 10 AM PST.

    Read the article

< Previous Page | 628 629 630 631 632 633 634 635 636 637 638 639  | Next Page >