Search Results

Search found 42090 results on 1684 pages for 'mean square method'.


  • Why did Embarcadero make me sign a waiver?

    - by Peter Turner
    Just signed in to the Embarcadero Developer Network and got this:
    EXPORT CONTROLS ON EMBARCADERO SOFTWARE
    Your EDN membership and access to Embarcadero Software is subject to your agreement to and compliance with the following terms:
    - You agree that U.S. export control laws govern your use of the Embarcadero Software.
    - You are not a citizen, national, or resident of, and are not under control of, the government of Cuba, Iran, Sudan, North Korea, Syria, nor any country to which the United States has embargoed or prohibited export.
    - You will not provide or export Embarcadero Software, directly or indirectly, to the above mentioned countries nor to citizens, nationals or residents of those countries.
    - You are not listed on the United States Department of Treasury lists of Specially Designated Nationals, Specially Designated Terrorists, and Specially Designated Narcotic Traffickers, nor are you listed on the United States Department of Commerce Table of Denial Orders.
    - You will not provide or export the Embarcadero Software, directly or indirectly, to persons on the above mentioned lists.
    - You will not use the Embarcadero Software for, and will not allow the Embarcadero Software to be used for, any purposes prohibited by United States law, including for the development, design, manufacture or production of nuclear, chemical or biological weapons of mass destruction.
    I think it's BS, but what craziness is forcing companies like Embarcadero to hold developers to these very high standards? Also, what is "Embarcadero Software"? Does that mean I can't put a benign videogame on a website that may have a runtime that might be downloaded by an Iranian who loves Scrabble? Or does "Embarcadero Software" refer to anything I develop using Delphi?

    Read the article

  • Use 3 monitors w/built-in intel adapter + two old nvidia PCI cards on 10.10?

    - by Kendall Gifford
    I'd like to move from windows with my current workstation. The only thing holding me back is that I have 3 monitors connected to the system and I really take advantage of the real estate when working. I just installed Ubuntu 10.10 on the system and one of the monitors is up and running just fine. This monitor is connected to the built-in Intel adapter. I also have two old nVidia GeForce4 MX 4000 (nv19pl) cards in my two PCI slots with two monitors connected to them respectively. I installed the legacy (and proprietary) nVidia drivers (the nvidia-96 package) that claims to support these old cards. Now the question is how to get X configured to use all adapters (using two different drivers) so I can use all three monitors (and is this even possible)? From what I've read, it looks like I'll have to write an xorg.conf file since the nVidia driver doesn't support the auto-magic configuration supported by other drivers. On this site: http://wiki.ubuntu.com/X/Config it says that on 10.10 I just need to write an xorg.conf "containing only those sections and options that you need to override Xorg's autoconfigurated settings". So, does this mean I can get away with only including the nVidia-specific configuration stuff and all else will get auto-configured? Or, will providing a config with a "Device" section overrule the auto-magic from detecting/using the Intel adapter? I ran the included nvidia-xconfig to generate a basic, nVidia-specific xorg.conf but I'm hesitant to reboot with it in place, suspecting I'll have a screwed up display. Also, is there any way (any tool or command) to generate an xorg.conf from the current, auto-configured running state of an X session? If I have to write a full, complete config, I'd rather start with one that includes everything that's been auto-detected thus far (and merge it with my nVidia version). Anyhow, any info and thoughts are greatly appreciated (as are answers).
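
    On the last question, there are a couple of ways to capture what the server auto-detected; a rough sketch (assuming the Xorg version shipped with 10.10 and GDM as the display manager; behaviour varies between versions):

    # The server log records every device, driver, mode and option the auto-configuration picked:
    $ less /var/log/Xorg.0.log

    # Xorg can also write a skeleton config from the detected hardware. It generally refuses
    # to run while another server is active, so do this from a text console (Ctrl+Alt+F1)
    # after stopping the display manager:
    $ sudo service gdm stop
    $ sudo Xorg :1 -configure     # writes xorg.conf.new in root's home directory
    $ sudo service gdm start

    # The generated file can then be merged by hand with the nvidia-xconfig output.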

    Read the article

  • Install modern browser on Maverick?

    - by feklee
    I tried installing Chrome from the official repository, but I get:
    $ sudo apt-get install google-chrome-stable
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     google-chrome-stable : Depends: gconf-service but it is not installable
                            Depends: libgconf-2-4 (>= 2.31.1) but it is not installable
                            Depends: libgtk2.0-0 (>= 2.24.0) but 2.22.0-0ubuntu1 is to be installed
                            Depends: libnspr4 (>= 1.8.0.10) but it is not installable
                            Depends: libnss3 (>= 3.14.3) but it is not installable
                            Depends: libstdc++6 (>= 4.6) but 4.5.1-7ubuntu2 is to be installed
                            Depends: libx11-6 (>= 2:1.4.99.1) but 2:1.3.3-3ubuntu1 is to be installed
    E: Broken packages
    Note: This is neither my system, nor do I want to do a full system upgrade. Any modern browser will do. Flash plugin is also needed, if not included in the browser.

    Read the article

  • Best way to convert existing project to be open source in GitHub

    - by Tom
    I've been working on a personal closed source project for some time and would like to make it open source. I've never created my own open source project before so it will be a good learning experience. I have been using GitHub as source control, so once I've written some decent docs on how to use and develop for it etc, it should be as simple as switching the repo to be public, right? I guess my main question is around licencing. I was thinking of going with the Apache 2.0 licence just because it seems to be widely used. It requires the licence header to be attached to all the source files, but if I do that now then all the other commits in the past will have it missing. Does that mean someone could pull an earlier version and it wouldn't have a licence? Is it best to start a new repo with the initial commit containing all the code with licence headers? Or maybe is there some advanced Git functionality that allows me to apply the licence header to all existing commits somehow? Cheers.

    Read the article

  • Exam 70-630 - TS: Microsoft Office SharePoint Server 2007, Configuring

    - by DigiMortal
    It has been really quiet here but I wasted no time. I passed exam 70-630 - TS: Microsoft Office SharePoint Server 2007, Configuring and in this posting I will give you a short overview of this very, very easy exam. If you are not new to SharePoint Server 2007 and you have some development experience then this is the easiest exam from Microsoft you have ever seen. There are 51 questions in this exam and two or four of them were not familiar to me. It took me about one hour to prepare for this exam and I got 964 of 1000. Okay, I have some years of experience as a SharePoint developer, but these questions still seemed too easy to be real. I mean, based on this exam you cannot accurately say whether somebody is able to configure SharePoint Server or not. I think this exam should also be very easy for SharePoint Server administrators who have at least some experience with supporting and maintaining production systems running on SharePoint Server 2007. Those who do not feel strong on SharePoint Server configuration may read the book suggested by the Microsoft Learning site: Inside Microsoft® Office SharePoint® Server 2007. Exam 70-630 gives you the Microsoft Certified Technology Specialist certificate.

    Read the article

  • Should a Python programmer learn Ruby?

    - by C J
    Hi! I have been a Python programmer for around 1.5 years (one internship + side projects), so I am comfortable with the language. Everyone is talking about Ruby these days, and I mean seriously! No one bothers about Python (from what I've seen). See GitHub. All RoR. I apply for a job and they ask me about RoR. I look at the screencasts on peepcode.com and they are in Ruby. gitimmersion.com has all its tutorials in Ruby! I know this is pretty vague, but still... why Ruby! Everyone these days is obsessed with RoR! Why not Python? Anyways, my questions are: Should I learn Ruby? Will learning Ruby when I already know Python be, er, complicated for me? Or is it going to be just like learning any other language? Thanks!

    Read the article

  • Encrypting a non-Linux partition with LUKS

    - by linuxn00b
    I have a non-Linux partition I want to encrypt with LUKS. The goal is to be able to store it by itself on a device without Linux and access it from the device when needed with an Ubuntu Live CD. I know LUKS can't encrypt partitions in place, so I created another, unformatted partition of the EXACT same size (using GParted's "Round to MiB" option) and ran this command: sudo cryptsetup luksFormat /dev/xxx Where xxx is the partition's device name. Then I typed in my new passphrase and confirmed it. Oddly, the command exited immediately after, so I guess it doesn't encrypt the entire partition right away? Anyway, then I ran this command: sudo cryptsetup luksOpen /dev/xxx xxx Then I tried copying the contents of the existing partition (call it yyy) to the encrypted one like this: sudo dd if=/dev/yyy of=/dev/mapper/xxx bs=1MB and it ran for a while, but exited with this: dd: writing `/dev/mapper/xxx': No space left on device just before writing the last MB. I take this to mean the contents of yyy was truncated when it was copied to xxx, because I have dd'd it before, and whenever I have dd'd to a partition of the exact same size, I never get that error. (and fdisk reports they are the same size in blocks). After a little Googling I discovered all luksFormat'ted partitions have a custom header followed by the encrypted contents. So it appears I need to create a partition exactly the size of the old one + however many bytes a LUKS header is. What size should the destination partition be, no. 1, and no. 2, am I even on the right track here? UPDATE I found this in the LUKS FAQ: I think this is overly complicated. Is there an alternative? Yes, you can use plain dm-crypt. It does not allow multiple passphrases, but on the plus side, it has zero on disk description and if you overwrite some part of a plain dm-crypt partition, exactly the overwritten parts are lost (rounded up to sector borders). So perhaps I shouldn't be using LUKS at all?
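
    One note on the sizing question: luksFormat only writes the LUKS header and key slots, which is why it returns immediately; data is encrypted on the fly as it is written through the mapping. A rough way to check how much space the header takes is sketched below (the 4096-sector value is just an example; the real offset depends on the cryptsetup version and key size):

    # The payload offset is the header size in 512-byte sectors:
    $ sudo cryptsetup luksDump /dev/xxx | grep -i "payload offset"
    Payload offset:  4096          # e.g. 4096 sectors = 2 MiB reserved for the header

    # Compare the raw partition size with the usable size of the mapped device:
    $ sudo blockdev --getsz /dev/xxx            # partition size in sectors
    $ sudo blockdev --getsz /dev/mapper/xxx     # usable size in sectors (smaller by the offset)

    # So the destination partition needs to be at least the source size plus the payload offset.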

    Read the article

  • Huge Google impression drop after cleaning HTML

    - by olgatorresfoundation
    Good morning, I am the webmaster of a non-profit organization that awards grants to colorectal cancer research projects and funds various colorectal cancer information campaigns. We have three domains: www.fundacioolgatorres dot org (Catalan), www.fundacionolgatorres dot org (Spanish) and www.olgatorresfoundation dot org (English). So what happened? I redesigned olgatorresfoundation on the 20th and fundacionolgatorres on the 30th of May. In both cases, exactly two days later, the number of impressions dropped to almost nothing. Granted, we did not have the traffic of Microsoft, but a 90% decrease is a disaster of incredible proportions for us. My only real changes were cleaning up the old ineffective HTML to a cleaner form (mostly moving away from redundant table construction to a table-less layout). Here is a before and after snapshot of what the change looks like: Before: http://www.fundacioolgatorres.org/aparell_digestiu/introduccio/ (unchanged page in Catalan) After: http://www.olgatorresfoundation.org/digestive_system/introduction/ (changed page in English) Does anybody have a clue what just happened? Why should a normal, sane HTML improvement be punished, and so dramatically? No URLs have been changed, neither have page names or descriptions. Possible secondary question: if Google sees it as a major overhaul and decides to drop the pagerank sharply, does it come back to pre-change levels if the content "checks out", or will the page start over from scratch earning those pagerank points (which would mean that we would have to wait 6 months for the pages to recover to the level they had two weeks ago)? (duplicated from productforums.google dot com/forum/#!category-topic/webmasters/crawling-indexing--ranking/YsnyX0JzOpY, hoping to reach a wider audience)

    Read the article

  • Determine whether a link points to a file on local host or foreign domain [migrated]

    - by user107157
    This has been a burning question for me for a long time and I think it's interesting enough to discuss on the forums. As most will know, in web pages we include anchor links, stylesheets, script files (JavaScript) and images. For anchor links we use the form <a href="..." />. For stylesheets we may use the form <link rel="stylesheet" type="text/css" href="..." />. For JavaScript we may use <script src="..." />. For images we use <img src="..." />. So, the question is this: how do we know whether what is in the link pointer (i.e. replacing the ... in each example) is a local file or a foreign entity? To make it clear, let's say I create a local file named "ashish.com". Now, my purpose is to create a link so that anybody who clicks on it may download it. So, my code would be thus: <a href="ashish.com">Download It</a>. But this makes it ambiguous. I could also be referring to a website named "ashish.com". So, how does the computer magically know which one I mean? Or does it even know this? What would happen in such a scenario?

    Read the article

  • First Look - Oracle Data Mining

    - by kimberly.billings
    In his blog, JT on EDM, James Taylor shares his analysis of Oracle Data Mining, including its new GUI and Exadata integration. While Oracle Data Mining has been available for a while, it is now easier to access and try via the Amazon Cloud. Using the Oracle 11gR2 Data Mining Amazon Machine Image (AMI), you can launch an Oracle Data Mining-enabled instance directly through Amazon Web Services (AWS) and connect to it using the Oracle Data Miner graphical user interface. The new Oracle Data Mining GUI, which will be available to beta customers soon, provides more graphics, the ability to define, save and share analytical "work flows" to solve business problems, and provides more automation and simplicity. Taylor comments that, "the UI looks to have a nice look and feel including graphical model development flows, easy access to the data, nice little micro graphs when browsing data records and more." On using Oracle Data Mining with Exadata, Taylor writes, "Oracle says that the use of the ODM routines in the Exadata kernel is faster than running a native ODM model in the database by a factor of 2 and that this increases as more joins are used. This could mean that ODM outperforms even third party in-database analytics." Taylor concludes his blog with a positive overall review, stating that "ODM is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." Read the blog.

    Read the article

  • SDL & Windows 8 Metro WinRT

    - by Adrian
    I am just beginning to dip my toe into game programming and have been reading up on SDL, SFML, OpenGL, XNA, MonoGame and of course DirectX. (Needless to say there are a lot of choices out there.) As much as I like SFML's syntax, I have chosen to read up and start with SDL as it is pretty ubiquitous and available on every platform (Windows, Linux, Mac) and also available on portable devices (Android, iOS), with the current exception of WinPhone 7. After that preamble, here is my question. I notice that the docs say that for the Windows platform the SDL API calls through to DirectX for higher perf. ( http://www.libsdl.org/intro.en/whatplatforms.html ) Microsoft has said that for Metro Game Apps you can only use DirectX (which means no XNA, no OpenGL, no SFML, etc, etc). My question is: if SDL just wraps DirectX calls, will I (we) be able to use SDL to bring games to the new Metro WinRT environment and Windows 8 marketplace? This would be great if possible. Additionally, as WinPhone 8 is supposedly built on Win8, this could mean SDL would be available on the Windows Phone in the future too. Thanks for your time in responding to this question and I look forward to hearing your response. EDIT: Based on DeadMG's answer I have installed Visual Studio 11 (beta) in Windows 8 Consumer Preview (CP) and went to File > New to check project types. The project types "Blank Application", "Direct2D Application" and "Direct3D Application" look to be of interest. I have selected "Direct2D App", but SDL generates its own window when you call SDL_INIT. Is it possible to link/set up the SDL window to point to the Direct2D surface in this project?

    Read the article

  • The lifecycle of "cool"

    - by Dori
    I've been thinking lately about how some programming projects/products become "cool," and in particular, how that trend can later reverse. Here are two examples that might better explain my context: Textmate Whenever someone asks about text editors on OS X, the answer on the SE sites is an automatic "Textmate!" But looked at objectively: Textmate 1.0 shipped October 2004 Textmate 1.5 shipped January 2006 Textmate 2 was announced February 2006 As of September 2010, the currently shipping version is 1.5.9 In all of 2010, there have been a total of three posts on the Textmate blog At what point (if ever) do Textmate fans start thinking about switching to another text editor? When it breaks after some future Apple update? When alpha geeks they respect start recommending something else? Or? jQuery Whenever a JavaScript-related question is asked on the SE sites, the knee-jerk response is "jQuery!" I've seen it happen even when the question itself only required a single line of JavaScript. Or when the question could be better answered by using CSS. Do the answerers understand they're suggesting a blowtorch to light a candle? That they're recommending adding 70K or so of code to do something trivial? Or is it a symptom of "When you have a hammer, everything looks like a nail"—that is, jQuery is all they know how to do, so that's their recommendation? And do they understand that while they may know jQuery well, that doesn't necessarily mean that they know JavaScript? Is there a way to explain that learning JavaScript would make them better jQuery programmers? My bigger-picture questions: Is this niche focus primarily a trait of programmers? How do you get programmers to not immediately jump to recommending their personal favorites? What can motivate programmers to review their initial selection criteria and possibly modify their choice? Your thoughts?

    Read the article

  • XNA: download website source code

    - by Emre Canbazoglu
    I have to download the HTML of a web site during the game. I am taking the poster URL of a movie from the IMDb web site by scraping the HTML (along with other information). I have to do the download process many times during the game for different movies. I can download and scrape the HTML, but downloading the HTML takes too much time and it causes the game to slow down (freeze while downloading). How can I solve this problem? One approach is to download and scrape all the information and store it in a database before the game, and during the game access this information from the database. I think this will work properly but that is not exactly what I want. It would be better if it were dynamic. I also thought of using multi-threading, but I am a bit confused about how to implement threading in XNA. I read some articles about it but it is not so clear. I mean, when should I start the thread, and what about the update function, etc.? I need your help, guys.

    Read the article

  • Sending emails via Google SMTP quit working after some time

    - by Chris
    On a website I use PHPMailer to send automated registration emails, etc., and also a newsletter tool (which loops through the emails and sends them one by one). Also, in Gmail under Settings I configured and confirmed @mydomain addresses, so I can send from @mydomain emails without the Gmail address being displayed. Furthermore, I authorized the website to send mails with this link: https://accounts.google.com/DisplayUnlockCaptcha Now, after two months in which everything worked perfectly fine, users suddenly started not receiving emails anymore, and most recently emails are not even being sent anymore. Also, I received many error messages like this: Technical details of permanent failure: Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 5.4.1 [email protected]: Recipient address rejected: Access Denied (state 13). When I check at this link: https://toolbox.googleapps.com/apps/checkmx/ it reports two non-critical errors: Relayhost configuration detected. There SHOULD be a valid SPF record. So, my questions are: does anybody have a hint why it stopped working, and what do the error messages mean? What can I do to fix it? Where do I set an SPF record (cPanel?)? What is a relayhost and how do I fix that? It is about 1000-1400 mails a day (Gmail's limit is 2000). Also, what can I do wrong when setting up an SPF record? I've heard there are some testing tools for that. Thank you so much already in advance for your help!
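
    On the SPF part of the question, a minimal sketch (assuming DNS for the domain is managed through cPanel and that all mail goes out through Google's servers; "mydomain.example" is a placeholder):

    # Check whether an SPF TXT record already exists for the domain:
    $ dig +short TXT mydomain.example
    "v=spf1 include:_spf.google.com ~all"     # what a Google-only SPF record typically looks like

    # If nothing comes back, add a TXT record with that value in cPanel
    # (Zone Editor / Advanced DNS Zone Editor, depending on the cPanel version),
    # then re-run the Google Apps toolbox check.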

    Read the article

  • Ubuntu 13.10 Installing MariaDB

    - by Ecaz
    I have tried everything to install MariaDB on this clean Ubuntu installation but I keep getting this error:
    Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
    The following packages have unmet dependencies:
     mariadb-server : Depends: mariadb-server-5.5 (= 5.5.33a+maria-1~saucy) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    I have followed this guide to try and install it: http://www.unixmen.com/install-lemp-server-nginx-mysql-mariadb-php-ubuntu-13-10-server/ And I have also followed the "official" guide on the MariaDB downloads page for 13.10: https://downloads.mariadb.org/mariadb/repositories/ But nothing seems to be working.
    Edit 1: I have tried both "How do I resolve unmet dependencies?" and "How to install MariaDB?" but it still gives me the error I posted above. It's a fresh Ubuntu install with hardly anything installed.
    Edit 2: All the check boxes are ticked in Updates. I ran:
    sudo apt-get update && sudo apt-get -f install mariadb-server-5.5"=5.5.33a+maria-1~saucy"
    And it gave me this error:
    The following packages have unmet dependencies:
     mariadb-server-5.5 : Depends: mariadb-client-5.5 (>= 5.5.33a+maria-1~saucy) but it is not going to be installed
                          Depends: mariadb-server-core-5.5 (>= 5.5.33a+maria-1~saucy) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
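
    For reference, the repository setup that the MariaDB repositories page generates for 13.10 (saucy) looks roughly like the sketch below; the mirror URL here is a placeholder, and the key ID is the one the page listed at the time, so double-check both against the page. A missing or wrong repository line is the usual cause of the "not going to be installed" errors above.

    $ sudo apt-get install software-properties-common
    $ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db
    $ sudo add-apt-repository 'deb http://mirror.example.org/mariadb/repo/5.5/ubuntu saucy main'
    $ sudo apt-get update
    $ sudo apt-get install mariadb-server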

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
    When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used for a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add "node.exe" and "index.js", install the "express" and "node-sqlserver" modules, and mark all files as "Copy always". In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named "azure", which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming the various Windows Azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be, when all entities had been copied to table service with no error, then I will send response to the browser, otherwise I should send error message to the browser. To do so I need to import another module named “async”, which helps us to coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code of inserting table entities. The first argument of “forEach” is the array that will be performed. The second argument is the operation for each items in the array. And the third argument will be invoked then all items had been performed or any errors occurred. Here we can send our response to browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send reponse 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can find the response was sent after all entities had been inserted. Query entities against table service is simple as well. Just use the “queryEntity” method from the table service client and providing the partition key and row key. We can also provide a complex query criteria as well, for example the code here. In the code below I queried an entity by the partition key and row key, and return the proper localization value in response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); And then tested it on local emulator. Finally if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume blob and queue service, as well as the service bus please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we published our application to the cloud we need to change the connection string and account information in our code. 
But if you had played with WACS you should have known that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. Which means we can have these values defined in CSCFG and CSDEF files and then the runtime should be able to retrieve the proper values. For example we can add some role settings though the property window of the role, specify the connection string and storage account for cloud and local. And the can also use the endpoint which defined in role environment to our Node.js application. In Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionalities to retrieve the configuration settings and endpoints, etc.. In the code below I defined the connection string variants and then use the SDK to retrieve and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we don’t need to amend the code for the configurations between local and cloud environment since the service runtime will take care of it. At the end of the code we will listen the application on the port retrieved from SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we tested the application right now we will find that it cannot retrieve any values from service runtime. This is because by default, the entry point of this role was defined to the worker role class. In windows azure environment the service runtime will open a named pipeline to the entry point instance, so that it can connect to the runtime and retrieve values. But in this case, since the entry point was worker role and the Node.js was opened inside the role, the named pipeline was established between our worker role class and service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime. Then add an element named EntryPoint which specify the Node.js command line. So that the Node.js application will have the connection to service runtime, then it’s able to read the configurations. Start the Node.js at local emulator we can find it retrieved the connections, storage account for local. 
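
    For reference, a sketch of the Runtime element described above (the element and attribute names follow the standard ServiceDefinition schema; the role name and command line are assumptions based on the file names used in this post):

    $ cat ServiceDefinition.csdef
    ...
      <WorkerRole name="WorkerRole1">
        <Runtime executionContext="limited">
          <EntryPoint>
            <ProgramEntryPoint commandLine="node.exe .\index.js" setReadyOnProcessStart="true" />
          </EntryPoint>
        </Runtime>
        ...
      </WorkerRole>
    ...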
    And if we publish our application to Azure, then it works with WASD and the storage service through the configurations for the cloud.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime and how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's very simple and easy to learn and deploy, and it utilizes a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-sensitive projects. And as Node.js is very good at scaling out, it's even more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement. It doesn't support SQL parameters in "node-sqlserver". It does support using a storage connection string to create the storage client in "azure". But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Get the Windows 8 Charms Bar in Windows 7, Vista, and XP Using a RocketDock Skin

    - by Lori Kaufman
    Have you tried one of the Windows 8 Preview releases and found you like the Charms bar on the Metro Start Screen? If you’re not quite ready to give up Windows 7, there is a way to get the Charms bar from Windows 8. You can easily add the Charms bar to your desktop in Windows 7 using a RocketDock skin. RocketDock is a free, customizable application launcher for Windows. See our article about RocketDock to learn how to add it to your Windows Desktop. You can also use a portable version of RocketDock. To add a “Charms bar” to your Windows 7 desktop, extract the .rar file you downloaded (see the link at the end of this article). RAR files are associated with WinRAR, which is shareware. NOTE: You can use WinRAR free of charge for 40 days but then you have to buy it ($29.00). However, you can also use the free program 7-Zip to extract RAR files.

    Read the article

  • Running CRON job on Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced in Linux, so be descriptive in your answer. My environment: local Linux server running 12.04, hosting SugarCRM 6.5.2. There is an area in SugarCRM called Scheduler, where I can configure some predefined jobs. In my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a cron job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given by SugarCRM: * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1 Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders for these contracts to be sent to the concerned person. Now, my SugarCRM email is configured correctly and I can send test emails using it. But the cron + scheduler combination is not giving any results; I can't receive any emails. Then I tried to read /var/log/syslog and it shows an entry for the following line each minute: Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1) I have a few questions: What does the cron job line I've added to the crontab mean? cd /var/www/crm; php -f cron.php > /dev/null 2>&1 is not making any sense to me. How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
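
    To the "what does it mean" question, a field-by-field breakdown of that crontab line (the redirection part is plain shell, nothing SugarCRM-specific):

    * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1
    # * * * * *           -- five fields: minute, hour, day of month, month, day of week; all * means "run every minute"
    # cd /var/www/crm     -- change into the SugarCRM install directory first
    # php -f cron.php     -- run SugarCRM's scheduler entry script with the PHP command-line interpreter
    # > /dev/null 2>&1    -- discard normal output and send error output to the same place

    # While debugging, it can help to log the output instead of discarding it:
    * * * * * cd /var/www/crm; php -f cron.php >> /tmp/sugarcron.log 2>&1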

    Read the article

  • Google Earth will not reinstall

    - by chad
    I was trying to perform the fix found at this link: http://www.omgubuntu.co.uk/2012/01/how-to-make-google-earth-look-native-in-ubuntu It requires you to delete certain files from the /opt/google/earth/free folder and then add some new ones that you download. I deleted the files, but the links to download the new ones were unusable. I was using gksudo nautilus, so trash was disabled, meaning I could not restore the files I had deleted. I then tried to go to the Google Earth website and reinstall it. I downloaded the .deb, but when I tried to install it, it gave me an error message saying "cannot install ia32-libs". I tried installing this via the terminal and it gave me an error message saying: chad@chad-Lenovo-G570:~$ sudo apt-get install ia32-libs [sudo] password for chad: Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch E: Unable to correct problems, you have held broken packages. How do I fix this? Now I am stuck without a functioning Google Earth.
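
    A couple of things worth trying, as a hedged sketch (whether ia32-libs is installable at all depends on the Ubuntu release; on multiarch-enabled releases the i386 architecture has to be registered first, and the .deb file name below is the standard one Google ships):

    # Let apt try to repair held/broken packages first:
    $ sudo apt-get install -f

    # On releases with multiarch support, enable i386 packages and retry:
    $ sudo dpkg --add-architecture i386
    $ sudo apt-get update
    $ sudo apt-get install ia32-libs

    # Then reinstall the downloaded package and pull in any remaining dependencies:
    $ sudo dpkg -i google-earth-stable_current_amd64.deb
    $ sudo apt-get install -f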

    Read the article

  • Every time I try to connect to my box using SSH, it fails to connect

    - by YumYumYum
    From any other PC doing SSH to my Ubuntu 11.10,is failing. My network setup: Telenet ISP (Belgium) Fiber cable < RJ45 cable straight to Ubuntu PC Even the SSH is running: Other PC: retrying over and over $ ping 192.168.0.128 PING 192.168.0.128 (192.168.0.128) 56(84) bytes of data. From 192.168.0.226 icmp_seq=1 Destination Host Unreachable From 192.168.0.226 icmp_seq=2 Destination Host Unreachable From 192.168.0.226 icmp_seq=3 Destination Host Unreachable From 192.168.0.226 icmp_seq=4 Destination Host Unreachable $ sudo service iptables stop Stopping iptables (via systemctl): [ OK ] $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] Connection closed by 192.168.0.128 $ ssh [email protected] [email protected]'s password: Connection closed by UNKNOWN $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host $ ssh [email protected] ssh: connect to host 192.168.0.128 port 22: No route to host Follow up: -- checked cable -- using cable tester and other detectors -- no problem found in cable -- used random 10 cables -- adapter is not broken -- checked it using circuit tester by opening the system (card is new so its not network adapter card problem) -- leds are OK showing -- used LiveCD and did same ping test was having same problem -- disabled ipv6 100% to make sure its not the cause -- disabled iptables 100% so its also not the issue -- some more info $ nmap 192.168.0.128 Starting Nmap 5.50 ( http://nmap.org ) at 2012-06-08 19:11 CEST Nmap scan report for 192.168.0.128 Host is up (0.00045s latency). All 1000 scanned ports on 192.168.0.128 are closed (842) or filtered (158) Nmap done: 1 IP address (1 host up) scanned in 6.86 seconds ubuntu@ubuntu:~$ netstat -aunt | head Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 1 192.168.0.128:58616 74.125.132.99:80 FIN_WAIT1 tcp 0 0 192.168.0.128:56749 199.7.57.72:80 ESTABLISHED tcp 0 1 192.168.0.128:58614 74.125.132.99:80 FIN_WAIT1 tcp 0 0 192.168.0.128:49916 173.194.65.113:443 ESTABLISHED tcp 0 1 192.168.0.128:45699 64.34.119.101:80 SYN_SENT tcp 0 0 192.168.0.128:48404 64.34.119.12:80 ESTABLISHED tcp 0 0 192.168.0.128:54161 67.201.31.70:80 TIME_WAIT $ sudo killall dnsmasq -- did not solved the problem -- -- like many other Q/A was suggesting this same --- $ iptables --list Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination $ netstat -nr Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 $ ssh -vvv [email protected] OpenSSH_5.6p1, OpenSSL 1.0.0j-fips 10 May 2012 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to 192.168.0.128 [192.168.0.128] port 22. debug1: Connection established. debug3: Not a RSA1 key file /home/sun/.ssh/id_rsa. 
debug2: key_type_from_name: unknown key type '-----BEGIN' debug3: key_read: missing keytype debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug3: key_read: missing whitespace debug2: key_type_from_name: unknown key type '-----END' debug3: key_read: missing keytype debug1: identity file /home/sun/.ssh/id_rsa type 1 debug1: identity file /home/sun/.ssh/id_rsa-cert type -1 debug1: identity file /home/sun/.ssh/id_dsa type -1 debug1: identity file /home/sun/.ssh/id_dsa-cert type -1 debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1 Debian-7ubuntu1 debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH* debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.6 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256 debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected] debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: 
kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: none,[email protected] debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server->client aes128-ctr hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP debug2: dh_gen_key: priv key bits set: 118/256 debug2: bits set: 539/1024 debug1: SSH2_MSG_KEX_DH_GEX_INIT sent debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: host 192.168.0.128 filename /home/sun/.ssh/known_hosts debug3: check_host_in_hostfile: match line 139 debug1: Host '192.168.0.128' is known and matches the RSA host key. debug1: Found key in /home/sun/.ssh/known_hosts:139 debug2: bits set: 544/1024 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: Roaming not allowed by server debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /home/sun/.ssh/id_rsa (0x213db960) debug2: key: /home/sun/.ssh/id_dsa ((nil)) debug1: Authentications that can continue: publickey,password debug3: start over, passed a different list publickey,password debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password debug3: authmethod_lookup publickey debug3: remaining preferred: keyboard-interactive,password debug3: authmethod_is_enabled publickey debug1: Next authentication method: publickey debug1: Offering RSA public key: /home/sun/.ssh/id_rsa debug3: send_pubkey_test debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: publickey,password debug1: Trying private key: /home/sun/.ssh/id_dsa debug3: no such identity: /home/sun/.ssh/id_dsa debug2: we did not send a packet, disable method debug3: authmethod_lookup password debug3: remaining preferred: ,password debug3: authmethod_is_enabled password debug1: Next authentication method: password [email protected]'s password: debug3: packet_send2: adding 64 (len 60 padlen 4 extra_pad 64) debug2: we sent a password packet, wait for reply debug1: Authentication succeeded (password). Authenticated to 192.168.0.128 ([192.168.0.128]:22). debug1: channel 0: new [client-session] debug3: ssh_session2_open: channel_new: 0 debug2: channel 0: send open debug1: Requesting [email protected] debug1: Entering interactive session. debug2: callback start debug2: client_session2_setup: id 0 debug2: channel 0: request pty-req confirm 1 debug1: Sending environment. 
debug3: Ignored env ORBIT_SOCKETDIR debug3: Ignored env XDG_SESSION_ID debug3: Ignored env HOSTNAME debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE_PID debug3: Ignored env IMSETTINGS_INTEGRATE_DESKTOP debug3: Ignored env GPG_AGENT_INFO debug3: Ignored env TERM debug3: Ignored env HARDWARE_PLATFORM debug3: Ignored env SHELL debug3: Ignored env DESKTOP_STARTUP_ID debug3: Ignored env HISTSIZE debug3: Ignored env XDG_SESSION_COOKIE debug3: Ignored env GJS_DEBUG_OUTPUT debug3: Ignored env WINDOWID debug3: Ignored env GNOME_KEYRING_CONTROL debug3: Ignored env QTDIR debug3: Ignored env QTINC debug3: Ignored env GJS_DEBUG_TOPICS debug3: Ignored env IMSETTINGS_MODULE debug3: Ignored env USER debug3: Ignored env LS_COLORS debug3: Ignored env SSH_AUTH_SOCK debug3: Ignored env USERNAME debug3: Ignored env SESSION_MANAGER debug3: Ignored env GIO_LAUNCHED_DESKTOP_FILE debug3: Ignored env PATH debug3: Ignored env MAIL debug3: Ignored env DESKTOP_SESSION debug3: Ignored env QT_IM_MODULE debug3: Ignored env PWD debug1: Sending env XMODIFIERS = @im=none debug2: channel 0: request env confirm 0 debug1: Sending env LANG = en_US.utf8 debug2: channel 0: request env confirm 0 debug3: Ignored env KDE_IS_PRELINKED debug3: Ignored env GDM_LANG debug3: Ignored env KDEDIRS debug3: Ignored env GDMSESSION debug3: Ignored env SSH_ASKPASS debug3: Ignored env HISTCONTROL debug3: Ignored env HOME debug3: Ignored env SHLVL debug3: Ignored env GDL_PATH debug3: Ignored env GNOME_DESKTOP_SESSION_ID debug3: Ignored env LOGNAME debug3: Ignored env QTLIB debug3: Ignored env CVS_RSH debug3: Ignored env DBUS_SESSION_BUS_ADDRESS debug3: Ignored env LESSOPEN debug3: Ignored env WINDOWPATH debug3: Ignored env XDG_RUNTIME_DIR debug3: Ignored env DISPLAY debug3: Ignored env G_BROKEN_FILENAMES debug3: Ignored env COLORTERM debug3: Ignored env XAUTHORITY debug3: Ignored env _ debug2: channel 0: request shell confirm 1 debug2: fd 3 setting TCP_NODELAY debug2: callback done debug2: channel 0: open confirm rwindow 0 rmax 32768 debug2: channel_input_status_confirm: type 99 id 0 debug2: PTY allocation request accepted on channel 0 debug2: channel 0: rcvd adjust 2097152 debug2: channel_input_status_confirm: type 99 id 0 debug2: shell request accepted on channel 0 Welcome to Ubuntu 11.10 (GNU/Linux 3.0.0-12-generic x86_64) * Documentation: https://help.ubuntu.com/ 297 packages can be updated. 92 updates are security updates. New release '12.04 LTS' available. Run 'do-release-upgrade' to upgrade to it. Last login: Fri Jun 8 07:45:15 2012 from 192.168.0.226 sun@SystemAX51:~$ ping 19<--------Lost connection again-------------- Tail follow: -- dmesg is showing a very abnormal logs, like Ubuntu is automatically bringing the eth0 up, where eth0 is getting also auto down. 
[ 2025.897511] r8169 0000:02:00.0: eth0: link up
[ 2029.347649] r8169 0000:02:00.0: eth0: link up
[ 2030.775556] r8169 0000:02:00.0: eth0: link up
[ 2038.242203] r8169 0000:02:00.0: eth0: link up
[ 2057.267801] r8169 0000:02:00.0: eth0: link up
[ 2062.871770] r8169 0000:02:00.0: eth0: link up
[ 2082.479712] r8169 0000:02:00.0: eth0: link up
[ 2285.630797] r8169 0000:02:00.0: eth0: link up
[ 2308.417640] r8169 0000:02:00.0: eth0: link up
[ 2480.948290] r8169 0000:02:00.0: eth0: link up
[ 2824.884798] r8169 0000:02:00.0: eth0: link up
[ 3030.022183] r8169 0000:02:00.0: eth0: link up
[ 3306.587353] r8169 0000:02:00.0: eth0: link up
[ 3523.566881] r8169 0000:02:00.0: eth0: link up
[ 3619.839585] r8169 0000:02:00.0: eth0: link up
[ 3682.154393] nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
[ 3899.866854] r8169 0000:02:00.0: eth0: link up
[ 4723.978269] r8169 0000:02:00.0: eth0: link up
[ 4807.415682] r8169 0000:02:00.0: eth0: link up
[ 5101.865686] r8169 0000:02:00.0: eth0: link up
How do I fix it?
-- http://ubuntuforums.org/showthread.php?t=1959794
$ apt-get install openipml openhpi-plugin-ipml
$ openipmish
> help redisp_cmd on|off
> redisp_cmd on
redisp set
Final follow-up:
Step 1: confirmed as a bug in the r8169 network card driver.
Step 2: get the latest driver build from http://www.realtek.com/downloads/downloadsView.aspx?Langid=1&PNid=4&PFid=4&Level=5&Conn=4&DownTypeID=3&GetDown=false&Downloads=true#RTL8110SC(L)
Step 3: build / make:
$ cd /var/tmp/driver
$ tar xvfj r8169.tar.bz2
$ make clean modules && make install
$ rmmod r8169
$ depmod
$ cp src/r8169.ko /lib/modules/3.xxxx/kernel/drivers/net/r8169.ko
$ modprobe r8169
$ update-initramfs -u
$ init 6
Voila!!

    Read the article

  • Approaches to timed puzzle elements

    - by ndg
    I'm working on a side-scrolling game that has a number of timed puzzle elements. As a simple example: I have a number of moving platforms that have been set up to transition in a pattern. Ideally I'd like to ensure that as the player first approaches them, they are in an ideal state -- whereby the player can witness the full transition and more experienced players (i.e. speedrunners) can complete the puzzle immediately without having to wait for the current transition to complete. The issue here, in a nutshell, is that because these platforms begin transitioning at the start of the level, it's impossible to correctly calculate when the player is likely to stumble upon them. I've done a fair bit of Googling but haven't managed to turn up any decent resources with regard to solving a problem like this. The obvious solution is to only begin updating the objects when the player (or more likely the camera) first encounters them, but this becomes difficult in more complicated situations. It seems like the easiest way of handling this is to have an invisible trigger volume that, upon first colliding with the player, tells any puzzle elements located inside of it that the player has 'arrived'. But this would mean I'd have to logically group puzzle elements, which could become fairly messy in a hurry. Take, for instance, a puzzle that appears to the right of the screen. It may take the player a number of seconds to reach it, and it would look strange if the elements involved were to remain stationary -- but by the time the player arrives, it's likely things will be 'out of sync'. I'm posting here in the hope that others know of, or have implemented, a decent solution to this problem.
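    Not something I can claim is the canonical fix, but here is a minimal Python sketch of the trigger-volume idea described above, under these assumptions: axis-aligned boxes for the camera and the trigger, and puzzle elements that animate from a local clock which only starts ticking the first time the camera overlaps the trigger (the AABB, PuzzleTrigger and MovingPlatform names are purely illustrative). That way every grouped element begins its pattern at phase zero exactly when the player arrives:

    class AABB:
        def __init__(self, x, y, w, h):
            self.x, self.y, self.w, self.h = x, y, w, h

        def overlaps(self, other):
            return (self.x < other.x + other.w and other.x < self.x + self.w and
                    self.y < other.y + other.h and other.y < self.y + self.h)

    class PuzzleTrigger:
        """Groups puzzle elements and starts their shared clock on first contact."""
        def __init__(self, volume, elements):
            self.volume = volume        # AABB covering the whole puzzle
            self.elements = elements    # elements that must stay in sync
            self.activated = False
            self.local_time = 0.0

        def update(self, camera_box, dt):
            if not self.activated and self.volume.overlaps(camera_box):
                self.activated = True   # player (camera) has arrived; clock starts at zero
            if self.activated:
                self.local_time += dt
                for element in self.elements:
                    element.update_to(self.local_time)

    class MovingPlatform:
        """Example element: oscillates between points a and b with a fixed period."""
        def __init__(self, a, b, period):
            self.a, self.b, self.period = a, b, period
            self.position = a           # resting at 'a' until the trigger fires

        def update_to(self, t):
            phase = (t % self.period) / self.period
            s = 2 * phase if phase < 0.5 else 2 * (1 - phase)   # triangle wave in [0, 1]
            self.position = (self.a[0] + (self.b[0] - self.a[0]) * s,
                             self.a[1] + (self.b[1] - self.a[1]) * s)

    For the "it looks strange if distant elements sit still" case, the same structure still works if update() fast-forwards local_time to a fixed offset instead of zero, or if elements idle on a subtle ambient animation until their group activates.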

    Read the article

  • Huge procedurally generated 'wilderness' worlds

    - by The Communist Duck
    Hi. I'm sure you all know of games like Dwarf Fortress - massive, procedurally generated wilderness and land. Something like this, taken from this very useful article. However, I was wondering how I could apply this on a much larger scale; the scale of Minecraft comes to mind (isn't that something like 8x the size of the Earth's surface?). Pseudo-infinite is, I think, the best term for it. The article talks about fractal Perlin noise. I am in no way an expert on it, but I get the general idea (it's a kind of randomly generated noise which is semi-coherent, so not just random pixel values). I could just define regions X by X in size, add some region-loading code, and have one patch of noise generate each region, but that would result in huge numbers of separate islands. On the other extreme, I don't think I can realistically generate one supermassive sheet of Perlin noise, and that would just be one big island anyway. I'm pretty sure Perlin noise, or some kind of noise, is the answer in some way. I mean, the map is really nice looking, and if you replaced the ASCII with tiles you'd get something very nice looking. Anyone have any ideas? Thanks. :D -TheCommieDuck
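    Not a full answer, but a minimal Python sketch of the usual trick for pseudo-infinite terrain: sample a fractal noise function in world coordinates, so every chunk is generated independently (and lazily, as the player approaches) yet lines up seamlessly with its neighbours -- no per-region islands and no supermassive precomputed sheet. For brevity this uses hash-based value noise rather than true Perlin noise, and all names (hash2, fractal_noise, generate_chunk) and constants are illustrative:

    import math

    def hash2(ix, iy, seed=1337):
        # Deterministic pseudo-random value in [0, 1) for integer lattice point (ix, iy).
        h = (ix * 374761393 + iy * 668265263 + seed * 144665) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return ((h ^ (h >> 16)) & 0xFFFF) / 65536.0

    def value_noise(x, y, seed=1337):
        # Smoothly interpolated lattice noise at world coordinates (x, y).
        ix, iy = math.floor(x), math.floor(y)
        fx, fy = x - ix, y - iy
        sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)   # smoothstep
        v00, v10 = hash2(ix, iy, seed), hash2(ix + 1, iy, seed)
        v01, v11 = hash2(ix, iy + 1, seed), hash2(ix + 1, iy + 1, seed)
        top = v00 + (v10 - v00) * sx
        bottom = v01 + (v11 - v01) * sx
        return top + (bottom - top) * sy

    def fractal_noise(x, y, octaves=5, lacunarity=2.0, gain=0.5, seed=1337):
        # Sum several octaves of noise -- the 'fractal' part the article refers to.
        amplitude, frequency, total, norm = 1.0, 1.0 / 64.0, 0.0, 0.0
        for _ in range(octaves):
            total += amplitude * value_noise(x * frequency, y * frequency, seed)
            norm += amplitude
            amplitude *= gain
            frequency *= lacunarity
        return total / norm    # normalized back to roughly [0, 1)

    def generate_chunk(chunk_x, chunk_y, size=32, seed=1337):
        # Heights for one chunk, sampled in world coordinates so chunks tile seamlessly.
        return [[fractal_noise(chunk_x * size + x, chunk_y * size + y, seed=seed)
                 for x in range(size)]
                for y in range(size)]

    Swapping value_noise for gradient (Perlin/simplex) noise changes only that one function; the chunking and the world-coordinate sampling stay the same, which is what keeps the world coherent however far the player wanders.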

    Read the article

  • Meta tags again. Good or bad to use them as page content?

    - by Guandalino
    From an SEO point of view, is it wise to use exactly the same page title value and keyword/description meta tag values not only as meta information, but also as page content? An example illustrates what I mean. Thanks for any answer, best regards.
    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
      <title>Meta tags again. Good or bad to use them as page content?</title>
      <meta name="DESCRIPTION" content="Why it is wise to use (or not) page title, meta tags description and keyword values as page content.">
      <meta name="KEYWORDS" content="seo,meta,tags,cms,content">
    </head>
    <body>
      <h1>Meta tags again. Good or bad to use them as page content?</h1>
      <h2>Why it is wise to use (or not) page title, meta tags description and keyword values as page content.</h2>
      <ul>
        <li><a href="http://webmasters.stackexchange.com/questions/tagged/seo">seo</a>
        <li><a href="http://webmasters.stackexchange.com/questions/tagged/meta">meta</a>
        <li><a href="http://webmasters.stackexchange.com/questions/tagged/tags">tags</a>
        <li><a href="http://webmasters.stackexchange.com/questions/tagged/cms">cms</a>
        <li><a href="http://webmasters.stackexchange.com/questions/tagged/content">content</a>
      </ul>
      <p>Read the discussion on <a href="#">webmasters.stackexchange.com</a>.
    </body>
    </html>

    Read the article

  • Functional Languages that compile to Android's Dalvik VM?

    - by Berin Loritsch
    I have a software problem that fits the functional approach to programming, but the target market will be on the Android OS. I ask because there are functional languages that compile to Java's VM, but Dalvik bytecode != Java bytecode. Alternatively, do you know if the dx utility can intelligently convert the .class files generated from functional languages like Scala? Edit: In order to add a bit more helpfulness to the community, and also to help me choose better, can I refine the question a bit? Have you used any alternate languages with Dalvik? Which ones? What are some "gotchas" (problems) that I might run into? Is performance acceptable? By that, I mean the application still feels responsive to the user. I've never done mobile phone development, but I grew up on constrained devices and I'm under no illusion that there is a cost to using non-standard languages with the platform. I just need to know if the cost is such that I should shoe-horn my approach into default language (i.e. apply functional principles in the OOP language).

    Read the article

  • Backtick (Grave accent) button sticking

    - by Scott Lemmon
    I have Ubuntu 12.04 running in a VirtualBox virtual machine on my 32-bit Windows 7 system. It has been working great, except that the backtick/tilde key sticks (not physically). When I hit the backtick key it keeps repeating until another key is pressed. So if I press the spacebar while the backtick is repeating, it stops; but if I press Shift while it is repeating, the backtick turns into a tilde, and the tilde keeps repeating until I release the Shift key (at which point it's a backtick again and keeps repeating). This sticking behavior only happens with the backtick key, and only in my Ubuntu virtual machine, never in Windows. I've tried both my laptop's keyboard and an external USB keyboard, and the problem happens on both. Both keyboards are Japanese 106/109-key layout, but I'm using them with an English (US) 101 profile. When I refer to the backtick key above, I mean where it is on the US layout. If I use the Japanese profile, the key in that location (the US backtick location) still sticks, but it's no longer mapped as the backtick key. Any thoughts as to what might be causing this, and possible solutions? I've searched a lot but have not found any help so far. Any help is greatly appreciated.

    Read the article
