Search Results

Search found 21053 results on 843 pages for 'out of process'.


  • Power management issues on an Asus N55

    - by Andrea Borga
    I noticed that, compared to Windows 7 on my Asus N55, Ubuntu 12.04 tends to overheat the system. After startup the fan controller takes control of the fan - I can hear it slowing down - but a few seconds after login the fan speeds up again, even though there are no processor-hungry processes: top shows only Xorg, consuming 4%. The CPU load looks fine in the System Monitor too. Is this a power-management-related problem? It can hurt battery life in general, and electronics are never happy being overheated. Is there a better tool to find the root cause of the issue?
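
    A reasonable first check is whether the CPU frequency governor and the thermal readings look sane; powertop is the usual diagnostic tool for this class of problem. As a minimal sketch (assuming the standard sysfs paths, which can vary by kernel), the following prints the per-core scaling governor and the thermal zone temperatures:

        from pathlib import Path

        def report():
            # Frequency scaling governor per core (e.g. ondemand, powersave, performance)
            for gov in sorted(Path("/sys/devices/system/cpu")
                              .glob("cpu[0-9]*/cpufreq/scaling_governor")):
                print(gov.parent.parent.name, gov.read_text().strip())
            # Thermal zone temperatures, reported by the kernel in millidegrees Celsius
            for zone in sorted(Path("/sys/class/thermal").glob("thermal_zone*/temp")):
                print(zone.parent.name, int(zone.read_text()) / 1000, "C")

        if __name__ == "__main__":
            report()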

    Read the article

  • Managing the Transition to IFRS

    As countries around the world announce and begin their move to adopting IFRS what can companies learn from those that have already travelled this path? Nigel Youell, Product Marketing Director for Performance Management Applications at Oracle talks to David Jones, Director at PWC, who has worked with multi-national companies across Europe helping them to make this transition and to improve their financial reporting in the process. This podcast offers those who have not yet started, or are currently undertaking, the IFRS journey the chance to learn from David's considerable experience on how to make IFRS an opportunity for improvement rather than just an enforced change.

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what MapReduce is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data - HDFS.

    What is HDFS?

    HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware, and in commodity hardware deployments server failures are very common, so HDFS is built for high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which helps reduce the impact of failures. HDFS breaks big data into smaller blocks and distributes them across different nodes, and it also copies each block multiple times to different nodes. Hence, when any node holding the data crashes, the system can automatically use the data from a different node and continue processing. This is the key feature of the HDFS system.

    Architecture of HDFS

    HDFS has a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server; it manages the file system namespace and regulates access to files. In addition to the NameNode there are multiple DataNodes, one DataNode for each data server. In HDFS a big file is split into one or more blocks and those blocks are stored in a set of DataNodes (a toy sketch of this splitting appears at the end of this post). The primary tasks of the NameNode are to open, close or rename files and directories and to regulate access to the file system, whereas the primary task of a DataNode is to read from and write to the file system. A DataNode is also responsible for the creation, deletion or replication of data, based on instructions from the NameNode. In reality, the NameNode and DataNode are software, written in Java, designed to run on commodity machines.

    Visual Representation of HDFS Architecture

    Let us understand how HDFS works with the help of the diagram. The Client App (HDFS client) connects to the NameNode as well as to the DataNodes; its access to the DataNodes is regulated by the NameNode, which grants it direct connections to them. A big data file is divided into multiple data blocks (let us assume those data chunks are A, B, C and D). The Client App then writes data blocks directly to the DataNodes. It does not have to write to every node: it writes to any one node, and the NameNode decides to which other DataNodes the data will be replicated. In our example the Client App writes directly to DataNode 1 and DataNode 3, and the data chunks are automatically replicated to other nodes. All the information about which data block is placed on which DataNode is reported back to the NameNode.

    High Availability During Disaster

    Since multiple DataNodes hold the same data blocks, if any DataNode faces a disaster the entire process continues: another DataNode assumes the role of serving the specific data blocks that were on the failed node. This gives the system very high fault tolerance and high availability. Notice that there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application stops performing, as it is the single node where all the metadata is stored. Because this node is so critical, it is usually replicated on another cluster as well as on another data rack. Though that replicated node is not operational in the architecture, it has all the data necessary to take over the task of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple premise that the data is so big it is impossible to come up with a single piece of hardware that can manage it properly: we need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to work around non-functioning hardware.

    Tomorrow

    In tomorrow's blog post we will discuss the importance of the relational database in Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
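
    To make the block-splitting and replication idea concrete, here is a toy Python sketch. It is not Hadoop's actual API or placement policy (the real NameNode is rack-aware); it just splits a file size into 64 MB blocks, the old HDFS default, and assigns each block to three DataNodes round-robin:

        import itertools

        BLOCK_SIZE = 64 * 1024 * 1024   # 64 MB, the old HDFS default block size
        REPLICATION = 3                 # default replication factor

        def split_into_blocks(file_size, block_size=BLOCK_SIZE):
            """Return (block_id, length) pairs covering a file of the given size."""
            blocks, offset = [], 0
            while offset < file_size:
                length = min(block_size, file_size - offset)
                blocks.append((len(blocks), length))
                offset += length
            return blocks

        def place_replicas(blocks, datanodes, replication=REPLICATION):
            """Naive round-robin placement: each block goes to `replication` distinct nodes."""
            ring = itertools.cycle(range(len(datanodes)))
            placement = {}
            for block_id, _ in blocks:
                start = next(ring)
                placement[block_id] = [datanodes[(start + i) % len(datanodes)]
                                       for i in range(replication)]
            return placement

        if __name__ == "__main__":
            nodes = ["DataNode1", "DataNode2", "DataNode3", "DataNode4"]
            blocks = split_into_blocks(200 * 1024 * 1024)   # a 200 MB file -> four blocks
            for block_id, replicas in place_replicas(blocks, nodes).items():
                print("block", block_id, "->", replicas)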

    Read the article

  • Asciifi Is a Lightning Fast Web-Based ASCII Converter

    - by Jason Fitzpatrick
    If you have a hankering for some old-school ASCII artwork, Asciifi is a free and lightning fast HTML5 ASCII converter. Despite the simplicity of ASCII images (pictures created not out of a grid of colored pixels, like a standard digital photograph, but out of a grid of text characters), many ASCII converters are rather slow. Asciifi speeds up the process by rendering your images on the fly with a snappy HTML5-based converter. Visit the site, drag and drop your image, and almost instantaneously you'll see the results. The output can be further tweaked by adjusting the line width and the character set used. Hit up the link below to take it for a test drive. Asciifi [via Digital Inspiration]
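
    The underlying technique is simple to sketch: map each pixel's brightness to a character from a dark-to-light ramp. This is not Asciifi's actual code (it is an HTML5 canvas app), just the general idea in Python with the Pillow imaging library; the file name is a placeholder:

        from PIL import Image  # pip install Pillow

        RAMP = "@%#*+=-:. "  # characters from dark to light

        def asciify(path, width=80):
            img = Image.open(path).convert("L")   # grayscale
            w, h = img.size
            # terminal cells are roughly twice as tall as wide, so halve the height
            height = max(1, (h * width) // (w * 2))
            img = img.resize((width, height))
            rows = []
            for y in range(height):
                row = "".join(
                    RAMP[img.getpixel((x, y)) * (len(RAMP) - 1) // 255]
                    for x in range(width))
                rows.append(row)
            return "\n".join(rows)

        if __name__ == "__main__":
            print(asciify("photo.jpg"))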

    Read the article

  • When to do Code Review

    - by mcass20
    We have recently moved to a scrum process and are working on tasks and user stories inside of sprints. We would like to do code reviews frequently, to make them less daunting. We are thinking of doing them at the user story level, but we are unsure how to branch our code to account for this. We are using VS and TFS 2010 and we are a team of 6. We currently branch for features but are working on changing to branching for scrum. We do not currently use shelvesets and don't really want to adopt them if other techniques are available. How do you recommend we implement code review per user story?

    Read the article

  • Qt Certification Exams

    - by karlphillip
    I'm wondering about taking a Qt Certification Exam this year, but I'm not 100% sure the investment is worth it. I'm considering it because I think it could be a nice plus on my resume, and as you know, I'm all for improving my software engineer persona. As I already hold BSc and MSc degrees in computer stuff, I guess I see the certification process as some kind of adventure. Anyway, I know I'll spend a lot of time preparing for the exam and I just wanted to know if a Qt certification is worth the effort. Apparently there are 2 certificates that you can get in the Qt world: Nokia Certified Qt Developer (basic) and Nokia Certified Qt Specialist (advanced). Nowadays I build cross-platform software in C++, so this exam would fit beautifully on my resume. My main concern is that, given the obscure future of Qt, I might be throwing time and money out the window. I'm looking for advice regarding the usefulness of such certifications.

    Read the article

  • Pricing personalized software?

    - by john ryan
    Currently I'm working on a Purchase Order System application project for a small-scale company. The software I am working on is personalized to their business requirements. The company told me to create a proposal including the price of the application so they can process the check for me. The person who gave me this project is the company supervisor, and also a former supply chain supervisor at my previous employer, where I worked on some of their applications back then. So I want to be fair. This is my first time creating an application as a sideline, so I have never priced software before, even though I work full time as a web developer at a big company. Any tips?

    Read the article

  • What is Linkvana?

    Linkvana is a wonderful search engine optimization service that makes it possible to have as many back links as you want. You can point these back links at multiple third-party websites, and you are completely free to choose your own anchor text for them. There is a network of articles and another that consists of blogs. You can create a custom anchor text in a very short time - usually the process takes less than the time needed to write a hundred-word post on your blog or website. With Linkvana it is very easy and intuitive to create back links to pages several levels deep in your site navigation.

    Read the article

  • Default file manager changed, can't change back

    - by user16171
    (Using Ubuntu 11.04.) I tried to open a torrent file in Chrome, and it asked for my default application, so I clicked Transmission. Then I got an error: "Failed to execute default file manager. Failed to execute child process "transmission" (No such file or directory)." Now if I click on any shortcut on the Unity bar, such as the trash folder or the icon for my portable hard drive, I get this error, as well as with any download or folder from Chrome or Firefox. I can't figure out how to fix this, as there seems to be no easy way to do it.
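
    What likely happened here is that the default handler for the directory MIME type got pointed at Transmission instead of Nautilus. A hedged sketch of a fix, assuming Nautilus is the desired handler (xdg-mime is the standard tool for querying and setting these defaults):

        import subprocess

        # Ask which .desktop file currently handles directories
        current = subprocess.run(
            ["xdg-mime", "query", "default", "inode/directory"],
            capture_output=True, text=True).stdout.strip()
        print("current handler:", current)

        # Restore Nautilus as the folder handler if something else took over
        if current != "nautilus.desktop":
            subprocess.run(
                ["xdg-mime", "default", "nautilus.desktop", "inode/directory"],
                check=True)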

    Read the article

  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password, a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter jane@example.com, but accidentally transposed the first two letters, entering ajne@example.com. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send them a validation email upon registering that includes a link that, when visited, activates their account. (This
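
    The validation-email flow the excerpt describes is easy to sketch. The names below (the in-memory pending store, the URL) are hypothetical, and a real application would persist tokens and send actual mail; this just shows the token round-trip:

        import secrets

        pending = {}  # token -> email; hypothetical in-memory store

        def start_registration(email):
            """Issue an activation token and (pretend to) email the link."""
            token = secrets.token_urlsafe(32)
            pending[token] = email
            # A real app would send this link by email rather than print it.
            print("activation link: https://example.com/activate?token=" + token)
            return token

        def activate(token):
            """Activate the account only if the token matches a pending registration."""
            email = pending.pop(token, None)
            return email is not None

        if __name__ == "__main__":
            t = start_registration("jane@example.com")
            print("activated:", activate(t))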

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large Southeast US city. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins: restore the databases and make them work. There were 4 applications that backed onto 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game.

    We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers - a bit of a cheat, actually, because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat "IF REQUIRED" left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Accounting for the location of TempDB and for version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so outside of logins I would not really gain any benefit from taking the time to restore the source Master database over the newly installed one on the fresh server.

    What I wanted was the ability to restore the Master database as a user database - call it Master_Mine - from a backup on the source system, and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as with any work like this, the usual disclaimers apply: this is not something I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn whether it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure, because future updates for SQL Server may render it non-functional.

    I thought at first, incorrectly of course, that I could use sp_help_revlogin directly on a restored copy of the Master database, which I named Master_Mine. Since sp_help_revlogin uses the system schema objects sys.syslogins and sys.server_principals, this was not going to work, because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. The test account should presumably still exist in the Master_Mine database, and I wanted to be able to get to it and script out its creation with its password hash, so that I would not need to know the password - any applications that stored that password would not have to be altered in the DR scenario; they would just work as expected. Once I realized the straightforward approach would not work, I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was, and this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored.

    These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead when building the logins cursor. For the password hash, sp_help_revlogin uses the function LoginProperty(), which takes a user name and the option 'passwordhash' and returns the hash for that user. Unfortunately, it requires the login to exist in the Master database, so it would not work here. Another slight modification I had to make, then, was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LoginProperty. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code Microsoft provides for sp_help_revlogin.

    The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself (via RDP in my case), and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of the Master database under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without needing access to the original server (assuming it was down due to whatever disaster put it in that state).

    You will note that I am not providing full code samples of the modifications here. I will say that it was a slight bit of work, and anyone who needed to do this, for whatever reason, could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done, and to provide another option to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.

    Read the article

  • Problem upgrading from 13.04 to 13.10

    - by Charles
    Part way through upgrading from 13.04 to 13.10, the process ground to a halt with an error message. Now on retrying, by going to 'Check for updates', I get the following: Failed to load the package list. This is a serious problem. Try again later. If this problem appears again, please report an error to the developers. E:Encountered a section with no Package: header, E:Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_saucy_universe_i18n_Translation-en%%5fGB, E:The package lists or status file could not be parsed or opened. The problem has been reported, but my questions are: what can I do now? Do I have to do a fresh install? If so, will settings etc. in my Home folder (on its own partition) be saved? 13.04 still seems to be working perfectly. While upgrading I had a terrible internet connection, varying between 'dead slow' and 'dead stop'; I'm not sure if that caused the problem.
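
    For what it's worth, this particular MergeList error usually indicates a corrupted download in apt's list cache, and the commonly suggested remedy (not from this thread itself) is to clear /var/lib/apt/lists and refresh. A sketch of that, which must be run as root and is equivalent to "sudo rm /var/lib/apt/lists/*" followed by "sudo apt-get update":

        import glob, os, subprocess

        # Remove the cached package lists (skip the 'partial' subdirectory),
        # then ask apt to rebuild them from the configured mirrors.
        for path in glob.glob("/var/lib/apt/lists/*"):
            if os.path.isfile(path):
                os.remove(path)
        subprocess.run(["apt-get", "update"], check=True)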

    Read the article

  • Windows Phone 7 Developer Tools - January 2011 Update

    - by TechTwaddle
    Note: I am currently in the process of relocating my blog from http://www.geekswithblogs.net/techtwaddle to my new address at http://www.techtwaddle.net I suggest you point your feed readers to the new address as I slowly transition to my new shared-hosted, ad-free wordpress blog :) If you haven't heard already, the January 2011 update of the Windows Phone 7 developer tools is out, er, in February. You can download the installation files from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=49b9d0c5-6597-4313-912a-f0cca9c7d277 The performance increase with the new emulator is clearly noticeable, and the first-time deploy is real quick! The emulator image should also be a precursor to the Windows Phone 7 OS update that we've been waiting on forever. The emulator image includes copy-paste functionality, which is enabled by default on all textboxes, password boxes and edit controls within the web browser control, so existing apps get this feature for free. Go ahead and give the new tools a try. If you want to experiment more, you might be interested in an unlocked emulator image; follow the link for more information: http://windowsphonehacker.com/latest_windows_phone_7_emulator_unlocked-02-05-11.php

    Read the article

  • JCP Calendar for 2013 - First EC Meeting 15-16 January

    - by Heather VanCura
    The JCP 2013 calendar and EC Meeting schedule has now been finalized and published :-). This year the EC will be holding meetings in the San Francisco Bay Area in January, and also in September, scheduled around the JavaOne San Francisco conference; and Credit Suisse will host the May EC Meeting in Zurich, Switzerland. The first JCP EC Meeting of 2013 will be a face-to-face meeting in Santa Clara, California, USA, hosted by Intel. We are in the midst of preparing the agenda now, but it will include a 2012 annual review, JSR Spec Lead presentations and updates, as well as an Expert Group session on JSR 358, a major revision of the Java Community Process. You can view meeting materials and minutes from the JCP EC Meetings on JCP.org. The JCP also plans to meet with the Java User Group leaders attending the User Group Leaders Summit being held at Oracle's Redwood Shores location 14-16 January.

    Read the article

  • What does "general purpose system" mean for Java SE Embedded?

    - by Majid Azimi
    The Oracle website says this about the Java SE Embedded license: development is free, but royalties are required upon deployment on anything other than general purpose systems. What does "general purpose system" mean here? We have a sensor network around the country. In each box we have installed there is a microcontroller-based board that gathers data from the environment and sends it over a serial port to an ARM-based embedded board. On this board there is a Java process which reads the data and submits it to our central server using JMS. Is this categorized as a general purpose system? Sorry for asking this here - we are in Iran, and there is no Oracle office here to ask.

    Read the article

  • Oracle Sequences

    - by jkrebsbach
    Reminder to myself - SQL Server has nice identity columns directly tied to their tables. Oracle has sequences that are islands unto themselves.

        select seq_name.currval from dual;
        select seq_name.nextval from dual;

    currval - returns the current number at the top of the sequence
    nextval - increments the sequence by 1 and returns the new number

    Therefore, to create functionality in Oracle similar to an identity column:

    OPTION A) Create an insert trigger:

        CREATE OR REPLACE TRIGGER dept_bir
        BEFORE INSERT ON departments
        FOR EACH ROW
        WHEN (new.id IS NULL)
        BEGIN
          SELECT dept_seq.NEXTVAL
          INTO :new.id
          FROM dual;
        END;

    This will handle creating a unique identity, but will not necessarily inform the process flow of the new identity without additional logic.

    OPTION B) Select the identity into a temp variable, then insert the whole item into the table.

    **** When attempting to query currval, the below error was being thrown:

        SELECT seq_name.currval from dual;
        ERROR: TABLE OR VIEW DOES NOT EXIST

    *** Although Oracle sys tables may have access to the sequences, that isn't to say the Oracle user has access to those sequences - verify permissions when the system can't see objects that are reported in the object explorer.
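
    As a sketch of OPTION B from application code (Python with the cx_Oracle driver; the departments table and dept_seq sequence are the names from the trigger example above, the name column is assumed for illustration, and the connection details are placeholders):

        import cx_Oracle  # pip install cx_Oracle

        # Placeholders - substitute real credentials and DSN.
        conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
        cur = conn.cursor()

        # Fetch the sequence value first, so the application knows the
        # new id both before and after the insert.
        cur.execute("SELECT dept_seq.NEXTVAL FROM dual")
        new_id = cur.fetchone()[0]

        cur.execute(
            "INSERT INTO departments (id, name) VALUES (:id, :name)",
            {"id": new_id, "name": "Engineering"},
        )
        conn.commit()
        print("inserted department", new_id)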

    Read the article

  • What is a good use case for scala?

    - by Usman Ismail
    In a current project we have set up the build so that we can mix Java and Scala. I would like to use more Scala in our code base to make the code more readable and concise, and in the process learn the language by delivering real features. So I plan to use Scala for some classes to showcase its benefits and convince the other devs to look into using Scala too. For a REST-based web server, or a program in general, what kinds of code structures lend themselves to Scala's functional programming style?

    Read the article

  • Which tools do you use to make GTK themes?

    - by tutuca
    I'm trying to make a new GTK theme using the Murrine engine, with Humanity (the default in Ubuntu 9.10) as a template. You can grab the code at http://github.com/tutuca/themes However, I found the process of creating a new theme with it cumbersome. There is no central starting point. The documentation of both the engine options (gtkrc's and stuff) and general theming practices (the format of the index.theme files, folders, bla bla) is scarce. How-tos and tutorials are often old or subject to lots of opinionated debate, and the results are confusing (to me, having a web developer background, at least :-). So... I wanted to ask the fellow GTK themers and artists out there: Which tools do you use to create a new theme, and what does your average workflow look like?

    Read the article

  • career advice for PhD scientist seeking to program?

    - by C SD
    I'm largely a self-taught programmer. In fact, I first started programming about halfway through biophysics grad school, and even though I think I've done some pretty nice work, I've never worked as part of a 'serious' development team that had more than one or two other developers (and I wouldn't hesitate to call them equally inexperienced in software development as a profession). After finishing my PhD I applied to Google, on a lark, since I had some confidence in my abilities, if not necessarily my experience, and I was hoping to maybe slip in and absorb all the experience and talent I'd be surrounded with and become productive enough, quickly enough, that they wouldn't immediately regret their decision. I was excited to actually get invited to interview up at Mountain View (this was ~mid 2008). Overall, my memory of the interview was very positive, but after close to a three-month wait (is that normal?) they ended up turning me down. I wasn't too surprised or disappointed (aside from the uncomfortably long wait), given my unusual background and admitted lack of experience. I decided to continue as a postdoc, but to focus on improving my skills rather than doing research. I've done about three years of that, and my honest assessment is that I've learned a ton more, but I really need more of a peer group to maintain or accelerate my growth. Google invited me to interview again about eight months ago, and the interview process went even better than the first time around (I thought), though they again declined to give me an offer. I have to admit this second rejection was much more discouraging. They had insisted I interview even after I mentioned to them that a move on my part was unlikely, given that I had bought a house, gotten married, etc. since the first interview. I guess I was hoping they'd at least give me an offer that I could parlay into a more conventional, but still interesting, programming position close to home. So here I am, going on my third year out of grad school, a glorified postdoc, and I'm starting to get pretty discouraged. Even though I could technically get 'back on track' for a career in science, I have been focusing the vast majority of this time on gaining programming experience rather than on research and publications. The problem is, whenever I look, most job listings have requirements that seem impossibly grandiose, and I hesitate to apply. That, or the job/project seems incredibly dull. Ironically, applying to Google struck me as less intimidating. I suspect that either most people are just a lot less realistic than I am when it comes to assessing how long it will take them to get up to speed, or they don't care; my fear is that I'm just woefully unqualified for any interesting, well-paying work. I.e., I'm confident I could switch fully back into C++ mode with a couple weeks' work (I mostly use C, Python and C# daily), but I don't list myself as 'proficient' in C++ on my CV, or apply for jobs that 'require' such knowledge. The few applications for which I did feel I was a legitimately good match have not elicited a response. I suspect the following things are potential problems with my application/CV and I would like feedback on them: I don't have a CS degree. My BS was in biochemistry and molecular biology, my PhD in biophysics. I took undergrad and grad level CS courses at UCSD and completely killed them, but I don't know how to translate that to my CV effectively. I have a PhD, but it's not in CS...
    I've been debating whether I should remove it from my CV, and whether it would then be misleading to list at least some of those years as some kind of 'programming' job (in many respects it was). I think there are sometimes strong stigmas associated with 'self-taught' programmers. I am certainly one of those. I even recognize that some of those stigmas hold a hint of truth, but I really do want to be an asset to a team. How do I communicate that even though I have been largely self-directing for ~8 years, I can still take marching orders when needed? Do I just say so outright? Should I just become a lot less scrupulous about the whole process? Anecdote: I have a friend who applied for positions where he completely fudged his qualifications to get past the first culling. He was much more honest and forthcoming about his actual qualifications when contacted, and he still managed to get invited to a couple of interviews and even got some offers. His balls are larger than mine, though.

    Read the article

  • Permanent redirect to different domain followed by temporary redirect to folder

    - by Ricardo Amaral
    I have old-domain.com which I want to migrate to new-domain.com. However, the content on the old domain is, well, old, and I'm currently in the process of redesigning my whole site. My idea is to do a permanent (301) redirect from old-domain.com to new-domain.com so that search engines learn about the new domain and forget about the old one. But since the content is old, I was thinking of doing a temporary (302) redirect from new-domain.com to new-domain.com/old/ until the new content/site is ready to be published. Is this a bad idea for some reason, or is there nothing wrong with it? One last thing: if I go with this, what should I do when the new content is ready? Should I just remove the 302 redirect and that's it, or should I do something else to notify search engines that the temporary redirect is over?
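
    In practice both rules would live in the web server's configuration, but the mechanics of the chain are easy to sketch. A minimal Python WSGI illustration (the hostnames are the question's own placeholders; this is not a production setup):

        from wsgiref.simple_server import make_server

        def app(environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0]
            path = environ.get("PATH_INFO", "/")

            if host == "old-domain.com":
                # Permanent: tell search engines the domain itself has moved.
                start_response("301 Moved Permanently",
                               [("Location", "http://new-domain.com" + path)])
                return [b""]

            if host == "new-domain.com" and not path.startswith("/old/"):
                # Temporary: park visitors on the old content until the redesign ships.
                start_response("302 Found",
                               [("Location", "http://new-domain.com/old" + path)])
                return [b""]

            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"content\n"]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()

    Since a 302 signals a temporary move, search engines generally keep the original URLs indexed, so removing the 302 branch (the middle block above) once the new content is live should be change enough.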

    Read the article

  • Will we ever lose the human touch?

    - by divya.malik
    I was at a conference two weeks ago that was targeted at sales and marketing professionals. The discussions around the changing scenario in sales were very interesting. More and more selling is moving to the internet: salespeople are delivering more of their presentations online or via the phone. Budget constraints and new technologies have dramatically decreased the need for face-to-face interactions. At the same time, customers are also researching products on their own, taking the advice of peers, making up their minds, and then contacting the vendor. That takes care of more than half of the usual selling process. But humans are social animals, and because of that I believe that despite these changing trends and technologies, the human touch will always be necessary. One of the presenters at the conference shared this video, which stayed in my mind.

    Read the article

  • Webcast: Optimize Accounts Payable Through Automated Invoice Processing

    - by kellsey.ruppel(at)oracle.com
    Is your accounts payable process still very labor-intensive? Then discover how Oracle can help you eliminate paper, automate data entry and reduce costs by up to 90% - while saving valuable time through fewer errors and faster lookups. Join us on Tuesday, March 22 at 10 a.m. PT for this informative Webcast, where Jamie Rancourt and Brian Dirking will show how you can easily integrate capture, forms recognition and content management into your PeopleSoft and Oracle E-Business Suite accounts payable systems. You will also see how The Home Depot, Costco and American Express have achieved tremendous savings and productivity gains by switching to automated solutions. Learn how you can automate invoice scanning, indexing and data extraction to:
    - Improve speed and reduce errors
    - Eliminate time-consuming searches
    - Utilize vendor discounts through faster processing
    - Improve visibility and ensure compliance
    - Save costs in accounts payable and other business processes
    Register today!

    Read the article

  • How to move files over samba share with gnomevfs cli

    - by Allan
    OK, I am in the process of backing up my film collection to a NAS, and I want to automate this as much as possible since I have to work at the same time. I am trying to set up a daily dump of ISOs ready to be converted overnight, and I would like to do this as a cron job using gnomevfs. I have been able to connect and run an ls successfully with gnomevfs-ls smb://user:WORKGROUP:password@media-centre/videos/ but I am having trouble setting up a mv command from a local folder to the same shared folder - I keep getting the Usage: gnomevfs-mv <from> <to> message, which isn't particularly informative ;) any ideas?
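
    One guess, offered as an assumption rather than a confirmed fix: gnomevfs-mv may want the destination to be a full URI ending in the target filename, not the bare share directory. A cron-able Python sketch under that assumption (the local path and the credentials are the question's placeholders):

        import pathlib
        import subprocess

        SRC = pathlib.Path("/home/allan/rips")   # local ISO dump folder (placeholder)
        DEST = "smb://user:WORKGROUP:password@media-centre/videos/"

        # Move each ISO to the share, naming the destination file explicitly.
        for iso in sorted(SRC.glob("*.iso")):
            subprocess.run(["gnomevfs-mv", str(iso), DEST + iso.name], check=True)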

    Read the article

  • Deferred Open Source licensing

    - by Thomas W.
    Are there established models for releasing an initially proprietary piece of software under FLOSS conditions after a defined period or at a certain point in time? The main problem here is that all parties involved must be able to trust that the Open Source licensing will actually take place at the defined time, and that no party can further defer or cancel this process. Clearly such a model has its problems; for example, it's problematic to deal with contributions from "outside", legally and technically. Ghostscript is a prominent example where a deferred model has been used and abandoned. However, if certain parties involved insist on keeping the software proprietary, at least for a certain period of time, then the only options are a deferred Open Source licensing model or no Open Source licensing at all. I think I read about services that act as trusted parties and take care of open-sourcing the software at the agreed time. However, I was not successful in spotting any of those.

    Read the article

  • Cannot install 14.04 on Dell Inspiron 5447 14

    - by user292121
    I installed Ubuntu 14.04 from a USB stick used as a boot device on a Dell Inspiron 5447 14", but it gets stuck at the Ubuntu logo. I switched to Linux Mint and it gets stuck in the same way, hanging at the Mint logo. When I switch to Linux Mint Compatibility Mode it shows some error messages. For the stuck screen, please see the URL below: http://www.image-share.com/upload/2583/266.jpg For the error message when I try to use Linux Mint, see below: http://www.image-share.com/ijpg-2583-268.html What can I do next?

    Read the article
