Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • Registering a domain during the Christmas holidays

    - by arkascha
    One of the domain names I tried to register previously had been blocked by a domain grabber two days prior to my own attempt. That was about a year ago. The attempt to buy the domain from that person failed due to a totally exaggerated price, so I dropped the issue and watched the domain (offered at sedo.com). As expected there were no more offers and the domain was not sold. Now I learn from the whois database that the registration of that domain name ends on 25.12.2012 (a Christmas holiday). This raises two questions for me, and I have failed to find reliable answers on the internet, so maybe someone experienced here can drop a statement or a hint: Is it reasonable to expect that the domain name in question will really be free again once the date in the whois database, up to which the domain is registered, has passed? I certainly know that the registration can be prolonged; that is not what I mean. I expect (hope) that the domain grabber will not extend the registration, since it costs money and effort and he failed to sell the domain. Provided this is the case and the domain registration is not prolonged, is that date reliable, or might it just be some 'default' date? I would like to try to register the domain name as soon as it becomes unregistered. Since the domain grabber registered the domain only two days before my own registration attempt, I would like to prevent such annoying interference next time. So I ask myself: is it possible to register a domain name on a holiday? I do not mean sending an email to my provider on or before that day, but actually having the registration process take place then, so as not to wait one or two days after the name is released. My own provider, which I am very happy with, does not offer such a service on a holiday (which is perfectly understandable); they are 'still checking' whether they can offer something automatic. I researched and did not find an answer to whether that is possible at all. Is an automatic registration attempt on a holiday possible? Where can I do that? Is it reliable? Thanks for any reply!

  • I can't install Ubuntu 12.04.1 on iMac G5

    - by user89004
    So, I have this iMac G5 that doesn't have iSight, only a small light sensor I think underneath, machine model 8.2. I tried burning a Ubuntu 12.04.1 PowerPC 64bit .iso to a CD but the computer just won't boot it, I don't know why. Next I tried with a USB but it wouldn't let me boot that either. I created the USB on my dad's Win7 laptop as the process was way easier than on freakin Mac or Ubuntu (no command typing AT ALL on Windows). I'm able to get into Open Firmware and type boot usb, and it does show some weird writing that scrolls so fast I can't see anything, and then it just gives me this huge "no" sign like a stop sign and freezes. The sign is grey and the line in the middle is tilted towards the left. Another issue I'm having is with hdiutil: I can't convert the stupid .iso I just downloaded into a .img because the file keeps on disappearing right when it's done converting. I used the syntax from Ubuntu support on how to create a bootable USB drive under Mac OS X. I even didn't include the 2 stupid ~ that are shown in the syntax, which are completely worthless, God only knows why they put them there, and I even tried running the whole thing as root with sudo su before the command. The funny thing is that if I convert something smaller it works. The command I was using is hdiutil convert -format UDRW -o /path/to/target.img /path/to/ubuntu.iso. I even tried hdiutil convert /path/to/ubuntu.iso -format UDRW -o /path/to/target.img but the same thing happens: the dummy .img.dmg file disappears when the conversion is done, no matter where I set the output file to go. I have tried several different folders; the same thing happens with all of them. I also tried burning a Ubuntu mini iso on a CD, can't remember if it was 11.10 or 12.10, but even though holding C when the iMac boots up does show me the CD and I can boot from it, I get this weird error upon hitting install; it says something like invalid memory access, release keys, and error strings I can't read. I don't have any original DVDs from this iMac and can't run hardware diagnostics. Whatever option I try at the command prompt from the mini Ubuntu CD I get the same result: an error code and an Open Firmware backdrop that's frozen. I noticed that the pen drive I created on my dad's Win7 laptop is formatted with MS-DOS but I can still mount it no problem, so it shouldn't have a problem booting it, right? I used the advice on ubuntu.com to make it, from here. Also, my partition is HFS+ so I can't use it as a hard drive and boot from it. I don't have 2 partitions either, just one HDD, one partition. Please help!!!

  • What to do when an open-source project starts to tear itself apart? (or: a manager tries to write code and then shouts at the team)

    - by Kabumbus
    Imagine there is an open source cross-platform project on Google Code. It has lots of revisions (1000). It concentrates a lot of technological stuff in itself - rare stuff - it mixes top tech. It contains a server, and more than one client. The project was created by a well-connected team of developers (friends) and a manager who sponsored the project at its start, during its first few months (the project is now more than a year old; sponsoring an OSS project is a big, good deal, and he also gave the developers the idea for the project). The project was growing in complexity and in the effort required to continue development. Once upon a time the manager - team leader - started trying to write code (he was a programmer on some other projects - not the best, but he felt like he was one). He started because one of the developers suggested an idea at a team meeting and he felt he just needed to do it on his own. He failed, and he told the dev team about it. The dev team did what he had failed to do in a few days. After that, the manager felt that the team codes perfectly well without him and gets the job done in a short time. He felt sorry and lost, and he started to crash like an old, bad PC. Firstly, he started to scream (in the form of messages, not voice); he tried to tell the developers that what they were doing was a bad, not-needed thing - the developers kindly told him that his "beginnings" were not even compilable, while the dev team's product worked as needed. He told the developers that all the work they do should first be discussed with him. Here is the part where we need to mention that all team members are "project owners" and logically have equal rights. The team leader suggested these options to the developers: change their dev process to go through him, or be moved from project owners to contributors. So what are our options as developers? What arguments can we provide to the team leader/manager to get him to calm down? Is it possible to save the project, or is it better to fork now? An important issue is that lately we have had no active ticket system, and I personally think that this was the reason the mess appeared. So... any ideas?

  • The most dangerous SQL Script in the world!

    - by DrJohn
    In my last blog entry, I outlined how to automate SQL Server database builds from concatenated SQL Scripts. However, I did not mention how I ensure the database is clean before I rebuild it. Clearly a simple DROP/CREATE DATABASE command would suffice; but you may not have permission to execute such commands, especially in a corporate environment controlled by a centralised DBA team. However, you should at least have database owner permissions on the development database so you can actually do your job! Then you can employ my universal "drop all" script which will clear down your database before you run your SQL Scripts to rebuild all the database objects.

    Why start with a clean database? During the development process, it is all too easy to leave old objects hanging around in the database which can have unforeseen consequences. For example, when you rename a table you may forget to delete the old table and change all the related views to use the new table. Clearly this will mean an end-user querying the views will get the wrong data and your reputation will take a nose dive as a result! Starting with a clean, empty database and then building all your database objects using SQL Scripts using the technique outlined in my previous blog means you know exactly what you have in your database. The database can then be repopulated using SSIS and bingo; you have a data mart "to go".

    My universal "drop all" SQL Script: To ensure you start with a clean database, run my universal "drop all" script, which you can download from here: 100_drop_all.zip. By using the database catalog views, the script finds and drops all of the following database objects: foreign key relationships, stored procedures, triggers, database triggers, views, tables, functions, partition schemes, partition functions, XML schema collections, schemas, types, service broker services, service broker queues, service broker contracts, service broker message types, and SQLCLR assemblies. There are two optional sections to the script: drop users and drop roles. You may use these at your peril, particularly as you may well remove your own permissions! Note that the script has a verbose mode which displays the SQL commands it is executing. This can be switched on by setting @debug=1. Running this script against one of the system databases is certainly not recommended! So I advise you to keep a USE database statement at the top of the file. Good luck and be careful!!
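
    To give a feel for the catalog-view technique, here is a minimal sketch of the idea only - it is not the 100_drop_all script itself, which covers many more object types and wraps everything in the @debug verbose mode - dropping every foreign key and then every table:

```sql
-- Minimal sketch: build DROP statements from the catalog views, then execute them.
DECLARE @sql nvarchar(max) = N'';

-- 1) Drop all foreign key constraints first, so the tables can be dropped afterwards.
SELECT @sql += N'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(t.schema_id)) + N'.' + QUOTENAME(t.name)
             + N' DROP CONSTRAINT ' + QUOTENAME(fk.name) + N';' + CHAR(10)
FROM sys.foreign_keys AS fk
JOIN sys.tables AS t ON fk.parent_object_id = t.object_id;

-- 2) Then drop the tables themselves.
SELECT @sql += N'DROP TABLE ' + QUOTENAME(SCHEMA_NAME(schema_id)) + N'.' + QUOTENAME(name) + N';' + CHAR(10)
FROM sys.tables;

PRINT @sql;                      -- "verbose mode": show what is about to run
EXEC sys.sp_executesql @sql;     -- execute the generated DROP statements
```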

  • Proper Usage of Arrays and Functions [closed]

    - by Ssegawa Victor
    Can someone help me write a C program that solves the following problem?

    PROBLEM: Consider the faculty registrar who has to process results for 1st year, 1st semester students. Students offer five courses: CSC 1100, CSK 1101, CSC 1104, CSC 1105 and CSC 1106. The courses have credit units 4, 4, 4, 3 and 3 respectively. Lecturers provide coursework and exam marks. For each course, coursework constitutes 40% of the final mark while the exam constitutes 60% of the final mark. The role of the registrar is to:

    Compute the final mark for each student for each course. The final mark must be a whole number.
    Compute the grade and grade point of the students for each course they offered. According to senate regulations, grades and grade points are awarded to final marks according to the following criteria:

    Range      Grade   Grade Point
    90 – 100   A+      5.0
    80 – 89    A       5.0
    75 – 79    B+      4.5
    70 – 74    B       4.0
    65 – 69    C+      3.5
    60 – 64    C       3.0
    55 – 59    D+      2.5
    50 – 54    D       2.0
    45 – 49    E       1.5
    40 – 44    E-      1.0
    0 – 39     F       0.0

    Put a comment 'Retake' for a student on every course where the grade point is less than 2.0.
    Compute the cumulative grade point average (CGPA) for each student. The senate formula for CGPA is:

    CGPA = ( Σ_{i=1..N} CU_i × GP_i ) / ( Σ_{i=1..N} CU_i )

    where CU_i and GP_i are the credit units and grade point for course i, and N is the number of courses offered.

    Put a comment "Progress" for any student whose CGPA is greater than 2 and "Stay Put" for a student whose CGPA is less than 2.

    You are required to create a C program that considers a class of 25 students and:
    1. Initializes an array 'student' which stores student names
    2. Initializes arrays for coursework and exam marks for each course. 'cw_csc_1100' and 'ex_csc_1100' store coursework and exam marks (respectively) for CSC 1100. The same approach is used for all other courses
    3. Initializes the coursework and exam marks arrays with marks between 0 and 99
    4. Writes appropriate functions that will generate the final marks, grades, grade points, cumulative grade points, comments for students and comments for courses per student
    5. Creates appropriate arrays for final marks and inserts the data there using the appropriate functions
    6. Without having to create any extra arrays, uses the functions created to generate a report per student that looks like the one below:

    Student Name: Ngubiri
    Course Unit   Final mark   Grade   Grade Point   Course Comment
    CSC 1100      43           E-      1.0           Retake
    CSK 1101      50           D       2.0
    CSC 1104      59           D+      2.5
    CSC 1105      70           B       4.0
    CSC 1106      65           C+      3.5
    CGPA: 2.47
    Overall Comment: Progress

    NB: It is advisable that the indices are used to identify the owners. E.g. if student[x] is John, then cs_csc_100[x] should be a mark for John, since the index is the same.
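
    Not a full answer, but a minimal sketch in C of how the final-mark, grade-point and CGPA pieces might look. Marks are hard-coded for one student; the 25-student loop, the grade letters and the report printing are left out, and coursework and exam are assumed to each be marked out of 100:

```c
#include <stdio.h>

/* Final mark = 40% coursework + 60% exam, truncated to a whole number. */
int final_mark(int cw, int ex)
{
    return (int)(cw * 0.4 + ex * 0.6);
}

/* Grade point according to the senate table in the question. */
double grade_point(int mark)
{
    if (mark >= 80) return 5.0;   /* covers both A+ (90-100) and A (80-89) */
    if (mark >= 75) return 4.5;
    if (mark >= 70) return 4.0;
    if (mark >= 65) return 3.5;
    if (mark >= 60) return 3.0;
    if (mark >= 55) return 2.5;
    if (mark >= 50) return 2.0;
    if (mark >= 45) return 1.5;
    if (mark >= 40) return 1.0;
    return 0.0;
}

/* CGPA = sum(CU_i * GP_i) / sum(CU_i) over one student's n courses. */
double cgpa(const int cu[], const double gp[], int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += cu[i] * gp[i];
        den += cu[i];
    }
    return num / den;
}

int main(void)
{
    /* One student with hard-coded marks, just to show the functions in use. */
    int cu[5] = {4, 4, 4, 3, 3};        /* credit units per course */
    int cw[5] = {30, 35, 40, 50, 45};   /* coursework marks (0-99) */
    int ex[5] = {50, 58, 65, 80, 75};   /* exam marks (0-99)       */
    double gp[5];

    for (int i = 0; i < 5; i++)
        gp[i] = grade_point(final_mark(cw[i], ex[i]));

    printf("CGPA: %.2f\n", cgpa(cu, gp, 5));
    return 0;
}
```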

  • Cloud Computing: Start with the problem

    - by BuckWoody
    At one point in my life I would build my own computing system for home use. I wanted a particular video card, a certain set of drives, and a lot of memory. Not only could I not find those things in a vendor’s pre-built computer, but those were more expensive – by a lot. As time moved on and the computing industry matured, I actually find that I can buy a vendor’s system as cheaply – and in some cases far more cheaply – than I can build it myself.   This paradigm holds true for almost any product, even clothing and furniture. And it’s also held true for software… Mostly. If you need an office productivity package, you simply buy one or use open-sourced software for that. There’s really no need to write your own Word Processor – it’s kind of been done a thousand times over. Even if you need a full system for customer relationship management or other needs, you simply buy one. But there is no “cloud solution in a box”.  Sure, if you’re after “Software as a Service” – type solutions, like being able to process video (Windows Azure Media Services) or running a Pig or Hive job in Hadoop (Hadoop on Windows Azure) you can simply use one of those, or if you just want to deploy a Virtual Machine (Windows Azure Virtual Machines) you can get that, but if you’re looking for a solution to a problem your organization has, you may need to mix Software, Infrastructure, and perhaps even Platforms (such as Windows Azure Computing) to solve the issue. It’s all about starting from the problem-end first. We’ve become so accustomed to looking for a box of software that will solve the problem, that we often start with the solution and try to fit it to the problem, rather than the other way around.  When I talk with my fellow architects at other companies, one of the hardest things to get them to do is to ignore the technology for a moment and describe what the issues are. It’s interesting to monitor the conversation and watch how many times we deviate from the problem into the solution. So, in your work today, try a little experiment: watch how many times you go after a problem by starting with the solution. Tomorrow, make a conscious effort to reverse that. You might be surprised at the results.

  • Becoming the well-integrated content company (and combating AIUTLVFS)

    - by Lance Shaw
    Every single day, each of us create more and more content. Sometimes it is brand new material and many times it is iterations of existing content, but no one would argue that information and content growth is growing at an almost exponential rate. With all this content being created and stored, a number of problems naturally arise. One of the most common issues that users run into is "Am I Using The Latest Version of this File Syndrome", or AIUTLVFS. This insidious syndrome is all too common and results in ineffective, poor or downright wrong business decisions being made.  When content or files are unavailable or incorrect within the scope of key business processes, the chance for erroneous and costly business decisions is magnified even further. For many companies, the ideal scenario is to be able to connect multiple business systems, both old and new, into one common content repository.  Not only does this reduce content duplication, it also helps guarantee that everyone in various departments is working off the proverbial "same page".  Sounds simple - but for many organizations, the proliferation of file shares, SharePoint sites, and other storage silos of content keep the dream of a more efficient business a distant one. We've created some online assets to help you in your evaluation and eventual improvement of your current content management and delivery systems. Take a few minutes to check out our Online Assessment Tool.  It's quick, easy and just might provide you with insights into how you can improve your current content ecosystem. While you are there, check out our new Infographic that outlines common issues faced by companies today. Feel free to save our informative Infographic PDF and share it with business colleagues and your management to help them understand the business costs and impact of inaction. Together we can stop AIUTLVFS in its tracks and run our businesses more effectively than ever. Additionally, we hope you will take a few minutes to visit our new and informative webpages dedicated to the value of a well connected, fully integrated content management system. It's a great place to learn more about how integrating WebCenter Content into your infrastructure can lower your operational costs while boosting process and worker efficiency.

  • Gauging Maturity of your BPM Strategy - part 1 / 2

    - by Sanjeev Sharma
    In this post I will discuss the essence of maturity assessment and the business imperative for doing the same in the context of BPM. Social psychology purports that an individual progresses from being a beginner to an expert in a given activity or task along four stages of self-awareness:

    Unconscious Incompetence, where the individual does not understand or know how to do something, does not necessarily recognize the deficit, and may even deny the usefulness of the skill.
    Conscious Incompetence, where the individual recognizes the deficit, as well as the value of a new skill in addressing the deficit.
    Conscious Competence, where the individual understands or knows how to do something, but demonstrating the skill requires explicit concentration.
    Unconscious Competence, where the individual has had so much practice with a skill that it has become "second nature" and serves as a basis for developing other complementary skills.

    We can extend the above thinking to an organization as a whole by measuring an organization's level of competence in a specific area or capability as an aggregate of the competence levels of the individuals it is comprised of. After all, organizations too, like individuals, evolve through experience and develop "memory" and capabilities that are shaped through a constant cycle of learning, un-learning and re-learning. Hence the key to organizational success lies in developing these capabilities to enable execution of its strategy in line with the external environment, i.e. demand, competition, economy, etc. However, developing a capability merits establishing a baseline in order to: assess the magnitude of improvement from past investments; identify gaps and shortcomings; and prioritize future investments in the right areas. A maturity assessment is essentially an organizational self-awareness check that is aimed at depicting the "as-is" snapshot of an existing capability, in order to guide future investments to develop that capability in line with business goals. This, effectively, is the essence of a maturity assessment.

    Organizational capabilities stem from an organization's architecture, routines, culture and intellectual resources that are implicitly and explicitly embedded in its business processes. The fact that business processes underpin the realization of organizational capabilities is what has prompted business transformation and process management efforts. Thus, the BPM capability of an organization needs to be measured on an on-going basis to ensure delivery of its planned benefits. In my next post I will describe Oracle's BPM Maturity assessment methodology.

  • What are the drawbacks of sending XML to browsers and letting them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I have often made websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes it (with caching, etc.) into the HTML/XHTML page to send to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question: Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing their and the server's bandwidth (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:

    Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only change to make is to add the XSL file link to every XML, and to add a browser check.
    More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (like images, CSS, or JavaScript files are cached now).
    Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases, it is unusable directly (whitespace being removed).
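
    For reference, a minimal illustration of the client-side setup being described; the file names and content here are made up, and the stylesheet is deliberately trivial:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- page.xml: what the server would send; the PI tells the browser to fetch and apply demo.xslt -->
<?xml-stylesheet href="demo.xslt" type="text/xsl"?>
<page>
  <user>jdoe</user>
  <content>Hello, world</content>
</page>
```

```xml
<!-- demo.xslt: transforms the XML above into a simple HTML page, in the browser -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <body>
        <p>Logged in as <xsl:value-of select="user"/></p>
        <div><xsl:value-of select="content"/></div>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```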

  • Questions about identifying the components in MVC

    - by luiscubal
    I'm currently developing an client-server application in node.js, Express, mustache and MySQL. However, I believe this question should be mostly language and framework agnostic. This is the first time I'm doing a real MVC application and I'm having trouble deciding exactly what means each component. (I've done web applications that could perhaps be called MVC before, but I wouldn't confidently refer to them as such) I have a server.js that ties the whole application together. It does initialization of all other components (including the database connection, and what I think are the "models" and the "views"), receiving HTTP requests and deciding which "views" to use. Does this mean that my server.js file is the controller? Or am I mixing code that doesn't belong there? What components should I break the server.js file into? Some examples of code that's in the server.js file: var connection = mysql.createConnection({ host : 'localhost', user : 'root', password : 'sqlrevenge', database : 'blog' }); //... app.get("/login", function (req, res) { //Function handles a GET request for login forms if (process.env.NODE_ENV == 'DEVELOPMENT') { mu.clearCache(); } session.session_from_request(connection, req, function (err, session) { if (err) { console.log('index.js session error', err); session = null; } login_view.html(res, user_model, post_model, session, mu); //I named my view functions "html" for the case I might want to add other output types (such as a JSON API), or should I opt for completely separate views then? }); }); I have another file that belongs named session.js. It receives a cookies object, reads the stored data to decide if it's a valid user session or not. It also includes a function named login that does change the value of cookies. First, I thought it would be part of the controller, since it kind of dealt with user input and supplied data to the models. Then, I thought that maybe it was a model since it dealt with the application data/database and the data it supplies is used by views. Now, I'm even wondering if it could be considered a View, since it outputs data (cookies are part of HTTP headers, which are output)
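
    For illustration, one common way to answer the "which part is the controller" question in an Express app is to move each route handler into its own module and keep server.js as wiring only. A rough sketch - the file layout and require path are invented here, while session_from_request and login_view.html are taken from the question:

```javascript
// controllers/login.js -- illustrative only; mirrors the handler shown in the question.
var session = require('../session');

// The controller: takes the request, asks the model layer (session, user_model,
// post_model) for data, and hands the result to the view.
exports.show = function (connection, user_model, post_model, login_view, mu) {
    return function (req, res) {
        session.session_from_request(connection, req, function (err, sess) {
            if (err) {
                console.log('login controller session error', err);
                sess = null;
            }
            login_view.html(res, user_model, post_model, sess, mu);
        });
    };
};

// server.js then only wires things together:
//   var login = require('./controllers/login');
//   app.get('/login', login.show(connection, user_model, post_model, login_view, mu));
```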

  • ArchBeat Facebook Friday: Top 10 Shared Links - May 23-29, 2014

    - by OTN ArchBeat
    Among the 5,144 fans of the OTN ArchBeat Facebook Page the following Top 10 items were the most popular over the last seven days, May 23-29, 2014. GlassFish/Java EE Community Open Forum Today! | Reza Rahman Have questions about Glassfish? Java EE/GlassFish evangelist Reza Rahman has answers, and you can pick his brain tomorrow during an online forum organized by the London Glassfish User Group and C2B2. The event is free, but you must register in order to participate. Click the link for more information. Twitter Tuesday - Top 10 @ArchBeat Tweets - May 20-26, 2014 The top 10 @OTNArchBeat tweets for the week of May 20-26, 2014. Topics covered include ADF, Cloud, GoldenGate, KScope14, OBIEE, ODI, WebLogic, WebCenter, and more. FrameworkFolders Support has come to Oracle WebCenter Portal | JayJay Zheng Interested in working with Framework Folders in Oracle WebCenter Portal? Oracle ACE JayJay Zheng reviews the essentials. Video: Programming Best Practices - ADF Business Components | Frank Nimphius Frank Nimphius discusses best practices and recommendations for ADF Business Components in the latest video from ADF Architecture TV. Video: Kscope 2014 Preview: Data Modeling and Moving Meditation with Kent Graziano For your mind and your body! Oracle ACE Director Kent Graziano previews his Kscope 2014 data modeling presentations and the early morning Chi Gung sessions he will once again lead for Kscope attendees. OAG and OES Integration for Web API Security: skin and guts | Andre Correa A-Team architect Andre Correa's post examines a strategy for web API security that uses OAG (Oracle API Gateway) and OES (Oracle Entitlements Server). Getting Started with Coherence*Web in WebLogic Server 12.1.2 | Tim Middleton Solution architect Tim Middleton shows you how to configure Coherence*Web in WebLogic Server 12.1.2 and deploy a basic web application. SOA and Business Processes: You are the Process! Part of the 13-part "Industrial SOA" article series, this article looks at best practices for modeling and managing effective business processes. Authentication in Oracle Identity Federation/ IdP | Damien Carru Damien Carru discuss authentication when OIF acts as an IdP and how the server can be configured to use specific OAM Authentication Schemes to challenge the user. Caveats on Using WebLogic Server with JDK7 | JayJay Zheng Quick tech tips from Oracle ACE JayJay Zheng.

  • What is the correct way to use g_signal_connect() in C++ for dynamic unity quicklists?

    - by hakermania
    I want to make my application use dynamic unity quicklists. For building my application I am using C++ and the QtCreator IDE. When a menu action is triggered I want to be able to have access to a non-static function of my MainWindow class so as to be able to update the Graphical User Interface which can be accessed from inside 'normal' MainWindow's functions. So, I am building up my quicklist like this (mainwindow.cpp): void MainWindow::enable_unity_quicklist(){ Unity_Menu = dbusmenu_menuitem_new(); dbusmenu_menuitem_property_set_bool (Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, FALSE); Unity_Stop = dbusmenu_menuitem_new(); dbusmenu_menuitem_property_set(Unity_Stop, DBUSMENU_MENUITEM_PROP_LABEL, "Stop"); dbusmenu_menuitem_child_append (Unity_Menu, Unity_Stop); g_signal_connect (Unity_Stop, DBUSMENU_MENUITEM_SIGNAL_ITEM_ACTIVATED, G_CALLBACK(&fake_callback), (gpointer)this); if(!unity_entry) unity_entry = unity_launcher_entry_get_for_desktop_id("myapp.desktop"); unity_launcher_entry_set_quicklist(unity_entry, Unity_Menu); dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, true); dbusmenu_menuitem_property_set_bool(Unity_Stop, DBUSMENU_MENUITEM_PROP_VISIBLE, true); } void MainWindow::fake_callback(gpointer data){ MainWindow* m = (MainWindow*)data; m->on_stopButton_clicked(); } void MainWindow::on_stopButton_clicked(){ //stopping the process... } mainwindow.h: private slots: void enable_unity_quicklist(); void on_stopButton_clicked(); public slots: static void fake_callback(gpointer data); This suggestion was taken from http://old.nabble.com/Using-g_signal_connect-in-class-td18461823.html The program crashes immediately after I choose the 'Stop' action from the Unity Quicklist. Debugging the program shows that I am not able to access anything MainWindow related inside the on_stopButton_clicked() without crashing. For example, it crashes when doing this check (which is the first 2 lines of code inside this function): if (!ui->stopButton->isEnabled()) return; I have also tested lots of other things that I found at the internet, but nothing of them worked. One interesting solution would be to use gtkmm (http://developer.gnome.org/gtkmm-tutorial/stable/sec-connecting-signal-handlers.html.en) but I am not used at all working on GTK applications (I work solely in Qt) and I don't know if this even suits to my occasion. A compilable example indicating what the problem is can be found at: http://ubuntuone.com/7iKA3wnPmWVp8YNNDLlVQI (3.2Kb) If you are not familiar with the QtCreator IDE, you can compile with the following commands, as long as you have all the needed libraries: cd dynamic_unity_quicklists_test; qmake -project; qmake; make

  • NEW: Oracle Certification Exam Preparation Seminars

    - by Harold Green
    Hi Everyone, I am really excited about a new offering that we are announcing this week - Oracle Certification Exam Preparation Seminars. These are something that will make a big difference for many of you in your efforts to become certified and move your career forward. They are also something that have previously only been available (but very popular) to the limited number of customers who have attended our annual conferences in San Francisco (Oracle OpenWorld and JavaOne). These are the first in a series of offerings that we are releasing over the next few months. So for those of you either preparing or considering Oracle certification - keep watching here on the blog, Facebook, Twitter and the Oracle Certification website for additional announcements related to our most popular certification areas. Details of the new Exam Preparation Seminars are found below: NEW: ORACLE CERTIFICATION EXAM PREPARATION SEMINARS Becoming Oracle certified is a great way to build your career, gain additional credibility and improve your earning power. We know that the decision to become certified is not trivial. Our surveys indicate that people consider their time investment a critical factor in their decision to become certified. Your time is important. In order to help candidates maximize the efficiency of their study time we are releasing a new series of video-based seminars called Exam Preparation Seminars. These seminars are patterned after the extremely popular Exam Cram sessions that until now have only been available at our annual customer conferences (Oracle Open World and JavaOne). Beginning today they are now available to anyone, anywhere as a part of this Exam Prep Seminar series. Features: Fast-paced objective by objective review of the exam topics - led by top Oracle University instructors 24/7 access through Oracle University's training on demand platform. Ability to re-watch all or part of the the seminar. All the conveniences of video-based training: start, stop, fast-forward, skip, rewind, review. Tips that will help you better understand what you need to know to pass the exam. The Exam Preparation Seminars are meant to help anyone with a working knowledge of the technology get that extra boost to help them finalize their preparation, and will help anyone who wants a better understanding of the the depth and breadth of the exam topics and objectives. Benefits: Save time by understanding what you should study. Makes you efficient because you will understand the breadth and depth of each of the exam topics. Helps you create a better, more efficient study plan. Improves your confidence in your skills and ability to pass the certification exam. Exam Preparation Seminars are available individually, or in convenient Value Packages (which include the Exam Preparation Seminar, and an exam voucher which includes one free-retake if you need it). Currently we are releasing two seminars - one for DBA SQL and one for DBA Administration I. Additional offerings are in process. Find out more: General WEB: Oracle Certification Exam Preparation Seminars VIDEO: Exam Preparation Seminars Promo (1:27) Oracle Database Administration I (11g, 10g) VIDEO: Instructor Introduction (1:08) VIDEO: Sample Video (2:16) Oracle Database SQL VIDEO: Instructor Introduction (1:08) VIDEO: Sample Video (2:16)

  • The Solution

    - by Patrick Liekhus
    So I recently attended a class about time management as well as read the book "The Seven Habits of Highly Effective People" by Stephen Covey. Both have been instrumental in helping me get my priorities aligned as well as keeping me focused. The reason I bring this up is that it gave me a great idea for a small application with which to create a great technical stack solution that would be easy to demo and explain. Therefore, the project from this point forward will be the Liekhus.TimeTracker application, which will bring some of the time management skills that I have acquired into a technical implementation. The idea is rather simple, but leverages some of the basic principles of Covey along with some of the worksheets that I garnered from class. The basics are as such: 1) a plan is a must have and 2) write it down! A plan not written down is just an idea. How many times have you had an idea that didn't materialize? Exactly. Hence why I am writing it all down now!

    The worksheet consists of a few simple columns that I will outline below, as well as some modifications that I made according to the Covey habits. The worksheet looks like the following:

    Status   Issue   Area   CQ     Notes
    P F L                   1234
    P F L                   1234
    P F L                   1234
    P F L                   1234
    P F L                   1234
    P F L                   1234
    P F L                   1234
    P F L                   1234
    P F L                   1234

    The idea is really simple and straightforward; you write down all your tasks and keep track of them along the way. The status stands for (P)ending, (F)inished or (L)ater. You write a quick title for the issue and select the CQ (Covey Quadrant) in which the issue occurs. The notes section is for things that happen while you are working through the issue. And last, but not least, is the Area column that I added as a way to identify the role or area of your life that this task falls within, based upon Covey's teachings.

    The second part of this application is a simple phone log that allows you to track your phone conversations throughout the day. All of this is currently done on a sheet of paper, but being involved in technology, I want it to have bells and whistles. Therefore, this is my simple idea for a project that will allow me to test my theories about coding and implementations. Stay tuned as the next session will be fleshing out the concept and coming up with user stories to begin the SCRUM process. Thanks

  • Java JRE 1.6.0_35 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    The latest Java Runtime Environment 1.6.0_35 (a.k.a. JRE 6u35-b10) is now certified with Oracle E-Business Suite Release 11i and 12 desktop clients.   What's new in Java 1.6.0_35?See the 1.6.0_35 Update Release Notes for details about what has changed in this release.  This release is available for download from the usual Sun channels and through the 'Java Automatic Update' mechanism. 32-bit and 64-bit versions certified This certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on: Windows XP Service Pack 3 (SP3) Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2) Windows 7 and Windows 7 Service Pack 1 (SP1) 64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1). Worried about the 'mismanaged session cookie' issue? No need to worry -- it's fixed.  To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23. These fixes will carry forward and continue to be fixed in all future JRE releases.  In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22.All JRE 1.6 releases are certified with EBS upon release Our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline.  We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team.  You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops. Important For important guidance about the impact of the JRE Auto Update feature on JRE 1.6 desktops, see: URGENT BULLETIN: All E-Business Suite End-Users Must Manually Apply JRE 6 Updates References Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1) Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1) Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1) Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1) Related Articles Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23 Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

  • Building an MVC application using QuickBooks

    - by dataintegration
    RSSBus ADO.NET Providers can be used from many tools and IDEs. In this article we show how to connect to QuickBooks from an MVC3 project using the RSSBus ADO.NET Provider for QuickBooks. Although this example uses the QuickBooks Data Provider, the same process can be used with any of our ADO.NET Providers. Creating the Model Step 1: Download and install the QuickBooks Data Provider from RSSBus. Step 2: Create a new MVC3 project in Visual Studio. Add a data model to the Models folder using the ADO.NET Entity Data Model wizard. Step 3: Create a new RSSBus QuickBooks Data Source by clicking "New Connection", specify the connection string options, and click Next. Step 4: Select all the tables and views you need, and click Finish to create the data model. Step 5: Right click on the entity diagram and select 'Add Code Generation Item'. Choose the 'ADO.NET DbContext Generator'. Creating the Controller and the Views Step 6: Add a new controller to the Controllers folder. Give it a meaningful name, such as ReceivePaymentsController. Also, make sure the template selected is 'Controller with empty read/write actions'. Before adding new methods to the Controller, create views for your model. We will add the List, Create, and Delete views. Step 7: Right click on the Views folder and go to Add -> View. Here create a new view for each: List, Create, and Delete templates. Make sure to also associate your Model with the new views. Step 10: Now that the views are ready, go back and edit the RecievePayment controller. Update your code to handle the Index, Create, and Delete methods. Sample Project We are including a sample project that shows how to use the QuickBooks Data Provider in an MVC3 application. You may download the C# project here or download the VB.NET project here. You will also need to install the QuickBooks ADO.NET Data Provider to run the demo. You can download a free trial here. To use this demo, you will also need to modify the connection string in the 'web.config'.

  • Java EE 8 update

    - by delabassee
    Planning for Java EE 8 is now well underway. As you know, a few weeks ago, we conducted a three-part Java EE 8 Community Survey (you can find the final summary here). The data gathered have been very influential for the next steps. You can now expect, over the coming weeks and months, to see updates on the various specifications that compose the Java EE platform. Some Specification Leads are busy gathering additional feedback regarding what they should focus their efforts on (e.g. the CDI 2 survey). Other Specification Leads have already publicly exposed what they think should be one of the focuses for the evolution of the specification they lead. For example, adding Server-Sent Events (SSE) support in JAX-RS is being discussed here, and adding MVC support is being discussed here. Please remember that the fact that we are now discussing any feature does not ensure that it will be included in the proposal, nor in any particular update to Java EE. We can expect additional enhancements, changes and evolutions as we get closer to the finalisation of the different specifications... and there is still a long way to go with these specification proposals!

    Linda DeMichiel, Java EE Co-Specification Lead, has recently posted a draft proposal for the Java EE 8 Platform specification. Linda's goal is to recruit people and companies supporting this proposal before submitting it to the JCP. This draft proposal is very interesting reading as it contains relevant information on the plans for Java EE 8, such as:

    The themes: support for the latest web standards (e.g. HTTP 2.0), continued work on ease of development, improved infrastructure for cloud support, and alignment with Java SE 8.
    New JSRs to be added to the platform: J-Cache, Java API for JSON Binding, and Java Configuration.
    Plans for the Web Profile.
    Plans on technologies to prune in Java EE 8, ...

    So if you haven't done it yet, I really encourage you to read the Java EE 8 draft proposal! Our goal for the Java EE 8 specification is for it to be finalized in the second half of 2016. It is important to note that we are in the early days of Java EE 8 and at this stage everything (themes, content, timing, etc.) is preliminary. Everything still needs to be discussed, challenged and agreed within the different Java Community Process (JCP) Expert Groups (EGs) - some of which still need to be formed! It could also mean that the roadmap will have to be adjusted to follow the progress being made in the different EGs. This is also a good occasion to remind you that participation within those upcoming JCP Expert Groups is encouraged. Contributing to an EG is an effective lever to influence what Java EE 8 will become! Finally, as things get more concrete, we will share details on how to engage in the different Java EE 8 related Adopt-a-JSR initiatives, another way to contribute. You can also read other posts related to Java EE 8 here at The Aquarium blog. Just look for articles with the 'javaee8' tag.

  • Now Shipping! NetAdvantage for .NET 2010 Volume 3!

    The new NetAdvantage Ultimate includes all four Line of Business user interface control sets for ASP .NET, Windows Forms, WPF and Silverlight plus two advanced Data Visualization UI control sets for WPF and Silverlight. With six NetAdvantage products in one robust package, Infragistics® gives you hundreds of controls and infinite development possibilities. Unified XAML Product Strategy-Share Code, Get More Controls In the 10.3 release, Infragistics continues to deliver code parity between the XAML platforms, WPF and Silverlight. In the line of business toolsets, Infragistics introduces the new xamSchedule™, full-featured, Outlook® 2010-style schedule controls, and the new xamDataTree™, a data bound tree view that comfortably handles tens of thousands of tree nodes. Mimicking our Silverlight Drag and Drop Framework, the WPF Drag and Drop Framework CTP empowers you to add your own rich touches to your applications. Track Users' Behaviors New to all NetAdvantage Silverlight controls is the Infragistics Analytics Framework (IGAF), which empowers you to track user behavior in RIAs running on Silverlight 4. Building on the Microsoft® Silverlight Analytics Framework, with IGAF you can analyze the user's behaviors to ensure the experience you want to deliver. NetAdvantage for Windows Forms--New Office® 2010 Ribbon and Application Menu 2010 Create new experiences with Windows Forms. Now with Office 2010 styling, NetAdvantage for Windows Forms has new features such as Microsoft® Office 2010 ribbon and enhanced Infragistics.Excel to export the contents of the high performance WinGrid™ into Microsoft Excel® 2010. The new Windows Message Support enables Infragistics standalone editor controls to process numerous Windows® OS messages, allowing them to respond just like native controls to changes in the Windows environment. Create Faster Web 2.0 Experiences with NetAdvantage for ASP .NET Infragistics continues to push the envelope to deliver the fastest ASP .NET WebForms controls available on the market. Our lightning fast ASP .NET grids are now enhanced with XPS/PDF Exporting and Summary Rows. This release also includes support for jQuery Templating (as a CTP) within our WebDataGrid™ and WebDataTree™ controls allowing you to quickly cut down overall page size. Deliver Business Intelligence with Power, Flexibility and the Office 2010 Experience NetAdvantage for WPF Data Visualization and NetAdvantage for Silverlight Data Visualization help you deliver flexible, powerful and usable end user experiences in Business Intelligence applications. Both suites include the Pivot Grid that delivers the full power of online analytical processing (OLAP) to present multi-dimensional data, sliced and diced in cross-tabulated form for end users to drill down into, interact with and easily extract meaning from the data. Mapping Made Easy 10.3 marks the official release of the WPF Data Visualization xamMap™ control to map anything and everything from geographic to geo-spacial mapping data. Map layers allow you to add successive levels of detail, navigational panes for panning in all directions, color swatch panes that facilitate value scales like Choropleth shading, and scale panes allowing users to zoom-in and out. Both toolsets introduce the first of many relationship maps! With the xamOrgChart™ CTP you can map out organizational charts of up to 50K employees, competitive brackets (think World Cup) and any other relational, organizational map your application needs. 
http://www.infragistics.com

  • A brief note for customers running SOA Suite on AIX platforms

    - by christian
    When running Oracle SOA Suite with IBM JVMs on the AIX platform, we have seen performance slowdowns and/or memory leaks. On occasion, we have even encountered some OutOfMemoryError conditions and the concomittant Java coredump. If you are experiencing this issue, the resolution may be to configure -Dsun.reflect.inflationThreshold=0 in your JVM startup parameters. https://www.ibm.com/developerworks/java/library/j-nativememory-aix/ contains a detailed discussion of the IBM AIX JVM memory model, but I will summarize my interpretation and understanding of it in the context of SOA Suite, below. Java ClassLoaders on IBM JVMs are allocated a native memory area into which they are anticipated to map such things as jars loaded from the filesystem. This is an excellent memory optimization, as the file can be loaded into memory once and then shared amongst many JVMs on the same host, allowing for excellent horizontal scalability on AIX hosts. However, Java ClassLoaders are not used exclusively for loading files from disk. A performance optimization by the Oracle Java language developers enables reflectively accessed data to optimize from a JNI call into Java bytecodes which are then amenable to hotspot optimizations, amongst other things. This performance optimization is called inflation, and it is executed by generating a sun.reflect.DelegatingClassLoader instance dynamically to inject the Java bytecode into the virtual machine. It is generally considered an excellent optimization. However, it interacts very negatively with the native memory area allocated by the IBM JVM, effectively locking out memory that could otherwise be used by the Java process. SOA Suite and WebLogic are both very large users of reflection code. They reflectively use many code paths in their operation, generating lots of DelegatingClassLoaders in normal operation. The IBM JVM slowdown and subsequent OutOfMemoryError are as a direct result of the Java memory consumed by the DelegatingClassLoader instances generated by SOA Suite and WebLogic. Java garbage collection runs more frequently to try and keep memory available, until it can no longer do so and throws OutOfMemoryError. The setting sun.reflect.inflationThreshold=0 disables this optimization entirely, never allowing the JVM to generate the optimized reflection code. IBM JVMs are susceptible to this issue primarily because all Java ClassLoaders have this native memory allocation, which is shared with the regular Java heap. Oracle JVMs don't automatically give all ClassLoaders a native memory area, and my understanding is that jar files are never mapped completely from shared memory in the same way as IBM does it. This results in different behaviour characteristics on IBM vs Oracle JVMs.
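
    As a quick illustration of where that switch typically goes - a sketch only, since setDomainEnv.sh and the JAVA_OPTIONS variable are assumptions based on a standard WebLogic domain layout; adjust to however your servers are actually started:

```sh
# <DOMAIN_HOME>/bin/setDomainEnv.sh (assumed standard WebLogic domain layout).
# Disable reflection inflation on the IBM JVM so DelegatingClassLoader instances are never generated.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dsun.reflect.inflationThreshold=0"
export JAVA_OPTIONS
```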

  • Managing accounts on a private website for a real-life community

    - by Smudge
    Hey Pro Webmasters, I'm looking at setting-up a walled-in website for a real-life community of people, and I was wondering if anyone has any experience with managing member accounts for this kind of thing. Some conditions that must be met: This community has a set list of real-life members, each of whom would be eligible for one account on the website. We don't expect or require that they all sign-up. It is purely opt-in, but we anticipate that many of them would be interested in the services we are setting up. Some of the community members emails are known, but some of them have fallen off the grid over the years, so ideally there would be a way for them to get back in touch with us through the public-facing side of the site. (And we'd want to manually verify the identity of anyone who does so). Their names are known, and for similar projects in the past we have assigned usernames derived from their real-life names. This time, however, we are open to other approaches, such as letting them specify their own username or getting rid of usernames entirely. The specific web technology we will use (e.g. Drupal, Joomla, etc) is not really our concern right now -- I am more interested in how this can be approached in the abstract. Our database already includes the full member roster, so we can email many of them generated links to a page where they can create an account. (And internally we can require that these accounts be paired with a known member). Should we have them specify their own usernames, or are we fine letting them use their registered email address to log-in? Are there any paradigms for walled-in community portals that help address security issues if, for example, one of their email accounts is compromised? We don't anticipate attempted break-ins being much of a threat, because nothing about this community is high-profile, but we do want to address security concerns. In addition, we want to make the sign-up process as painless for the members as possible, especially given the fact that we can't just make sign-ups open to anyone. I'm interested to hear your thoughts and suggestions! Thanks!

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. class Outer is at the top of the dependency hierarchy, i.e., no class depends on it. Currently, the UI layer only creates an Outer instance; the parameters for Outer constructor are derived from the user input. Of course, Outer in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that the a user who knows the dependency graph may want to reach deep into it, and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this? I could keep the current approach where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. Outer class would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower level objects, will find its location in that graph and check if the user requested an override to any of the constructor arguments. Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user. Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
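
    For what it is worth, a minimal Python sketch of the first approach - the class names and the dotted-path override keys are invented for illustration - where each object looks up overrides for the children it is about to create and passes the rest of the mapping down:

```python
# Sketch: overrides keyed by a dotted path into the dependency graph.
# {"outer.inner.size": 42} overrides one constructor argument of the Inner
# instance created by Outer, leaving every other default untouched.

class Inner:
    def __init__(self, size=10):
        self.size = size

class Outer:
    def __init__(self, name, user_overrides=None, path="outer"):
        self.name = name
        self.overrides = user_overrides or {}
        # Look up the override for the child we are about to build, if any,
        # then hand the whole mapping down so deeper objects can do the same.
        size = self.overrides.get(path + ".inner.size", 10)
        self.inner = Inner(size=size)

if __name__ == "__main__":
    plain = Outer("default build")
    tweaked = Outer("user build", user_overrides={"outer.inner.size": 42})
    print(plain.inner.size, tweaked.inner.size)   # 10 42
```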

  • My Automated NuGet Workflow

    - by Wes McClure
    When we develop libraries (whether internal or public), it helps to have a rapid ability to make changes and test them in a consuming application. Building Setup the library with automatic versioning and a nuspec Setup library assembly version to auto increment build and revision AssemblyInfo –> [assembly: AssemblyVersion("1.0.*")] This autoincrements build and revision based on time of build Major & Minor Major should be changed when you have breaking changes Minor should be changed once you have a solid new release During development I don’t increment these Create a nuspec, version this with the code nuspec - set version to <version>$version$</version> This uses the assembly’s version, which is auto-incrementing Make changes to code Run automated build (ruby/rake) run “rake nuget” nuget task builds nuget package and copies it to a local nuget feed I use an environment variable to point at this so I can change it on a machine level! The nuget command below assumes a nuspec is checked in called Library.nuspec next to the csproj file $projectSolution = 'src\\Library.sln' $nugetFeedPath = ENV["NuGetDevFeed"] msbuild :build => [:clean] do |msb| msb.properties :configuration => :Release msb.targets :Build msb.solution = $projectSolution end task :nuget => [:build] do sh "nuget pack src\\Library\\Library.csproj /OutputDirectory " + $nugetFeedPath end Setup the local nuget feed as a nuget package source (this is only required once per machine) Go to the consuming project Update the package Update-Package Library or Install-Package TLDR change library code run “rake nuget” run “Update-Package library” in the consuming application build/test! If you manually execute any of this process, especially copying files, you will find it a burden to develop the library and will find yourself dreading it, and even worse, making changes downstream instead of updating the shared library for everyone’s sake. Publishing Once you have a set of changes that you want to release, consider versioning and possibly increment the minor version if needed. Pick the package out of your local feed, and copy it to a public / shared feed! I have a script to do this where I can drop the package on a batch file Replace apikey with your nuget feed's apikey Take out the confirm(s) if you don't want them @ECHO off echo Upload %1? set /P anykey="Hit enter to continue " nuget push %1 apikey set /P anykey="Done " Note: helps to prune all the unnecessary versions during testing from your local feed once you are done and ready to publish TLDR consider version number run command to copy to public feed

    Read the article

  • 4.8M wasn't enough so we went for 5.055M tpmc with Unbreakable Enterprise Kernel r2 :-)

    - by wcoekaer
    We released a new set of benchmarks today. One is an updated TPC-C from a few months ago, where we had just over 4.8M tpmC at $0.98, and we just updated it to 5.05M and $0.89. The other one is related to Java middleware performance. You can find the press release here.

    Now, I don't want to talk about the actual relevance of the benchmark numbers, as I am not in the benchmark team. I want to talk about why these numbers and these efforts, unrelated to what they mean to your workload, matter to customers.

    The actual benchmark effort is a very big, long, expensive undertaking where many groups work together as a big virtual team. Having the virtual team be within a single company of course helps tremendously... We already start with a very big server setup with tons of storage, many disks, lots of RAM, lots of CPUs, cores, threads, and large database setups. Getting the whole setup going to start tuning is, by itself, no easy task, but then the real fun starts with tuning the system for optimal performance -and- stability. A benchmark is not just revving an engine at high rpm, it's actually hitting the circuit. The tests require long runs and require surviving availability tests, such as surviving crashes -and- recovery under load.

    In the TPC-C example, the x4800 system had 4TB of RAM, 160 threads (8 sockets, hyperthreaded, 10 cores/socket), tons of storage attached, and tons of LUNs visible to the OS: flash storage, non-flash storage... many things at high scale that all have to be perfectly synchronized.

    During this process, we find bugs, we fix bugs, we find performance issues, we fix performance issues, we find interesting potential features to investigate for the future, we start new development projects for future releases, and all of this goes back into the products. As more and more customers of Oracle Linux are running larger and larger, faster and faster, more mission-critical, more highly available databases..., these things are just absolutely critical.

    Unrelated to anyone's specific opinion about TPC-C or TPC-H or SPECjEnterprise etc., there is a ton of effort that the customer benefits from. All this work makes Oracle Linux and/or Oracle Solaris better platforms, whether it's faster, more stable, more scalable, or more resilient. It helps.

    Another point that I always like to reiterate around UEK and UEK2: we have our kernel source git repository online, with the complete changelog of the mainline kernel and our changes, easy to pull, easy to dissect, easy to know what went in when, why and where. No need to log into a website and manually click through pages to hopefully discover changes or patches. No need to untar 2 tarballs and run a diff.

    Read the article

  • Cloning a dual boot system from HDD to SSD

    - by Alex
    I'm planning on replacing my laptop's HDD with a 256GB SSD, but I have a dual-boot (12.04 and Windows 7) setup and I'd like to be able to directly migrate Ubuntu over without having to reinstall and lose all of my settings. GParted reports the following partition setup on my HDD. I am, of course, able to modify it if necessary.

    - /dev/sda1 (NTFS), 66.92 out of 200.00 MB used. I'm honestly not sure what this partition is for. Maybe for Windows 7 system files? I'm hesitant to mess with it. (Edit: it turns out it is a partition for Windows recovery files in the event of OS corruption, so I don't want to remove it. Plus it also appears to be a major pain to remove anyway.)
    - /dev/sda2 (NTFS), 116.35 out of 339.06 GB used (boot). This partition is the C:/ drive on my Windows installation. I don't use it on my Ubuntu installation, except that it is the boot partition and thus has GRUB on it.
    - /dev/sda4 (extended)
      - /dev/sda5 (ext4), 14.49 out of 91.34 GB used
      - /dev/sda6 (linux-swap), 5.92 GB
      These are my Ubuntu partitions. /sda5 contains my documents and all of the files I use on Ubuntu, and (as far as I know) the system files for Ubuntu itself (it's the partition I created when prompted by the Live-DVD installer). /sda6 is, of course, the swap partition, which I only need for hibernation (6GB of RAM).
    - /dev/sda3 (NTFS), 9.89 out of 14.75 GB used. This is an annoying partition that Lenovo created to store some drivers and files that I might need later on. For example, it allows me to use OneKeyRecovery for a quick factory recovery if absolutely necessary; not sure if that'll work on an SSD. It also contains not-so-important files for bloatware installation.

    In total, my HDD only has about 150GB of files on it, so it should fit comfortably on the SSD. The problem is, I want to exactly migrate my files, partitions, OSes, MBR, etc. from my HDD to my SSD, and I'm not quite sure how to do this. I've seen Clonezilla referenced before, but I'm not all too experienced and the documentation for it quite frankly seems a bit like a foreign language to me.

    So, put simply, is there any way I can exactly clone this HDD to an SSD without a massive headache? Also, if it matters, I'll probably be using an external hard drive case (as recommended in online tutorials) to externally attach the SSD to my laptop during the cloning process, due to the lack of two hard drive slots in the machine.

    Read the article

  • Motivating yourself to actually write the code after you've designed something

    - by dpb
    Does it happen only to me or is this familiar to you too? It's like this: you have to create something; a module, a feature, an entire application... whatever. It is something interesting that you have never done before, it is challenging. So you start to think about how you are going to do it. You draw some sketches. You write some prototypes to test your ideas. You put different pieces together to get the complete view. You finally end up with a design that you like, something that is simple, clear to everybody, easily maintainable... you name it. You covered every base, you thought of everything. You know that you are going to have this class and that file and that database schema. Configure this here, adapt this other thingy there, etc.

    But now, after everything is settled, you have to sit down and actually write the code for it. And it is not challenging anymore... Been there, done that! Writing the code now is just a formality and feels like reiterating what you've just finished.

    At my previous job I sometimes got away with it because someone else did the coding based on my specifications, but at my new gig I'm in charge of the entire process, so I have to do this too ('cause I get paid to do it). But I have a pet project I'm working on at home, after work, and there it's just me and no one is paying me to do it. I do the creative work, and then when the time comes to write it down I just don't feel like it (let's browse the web a little, see what's new on P.SE, on SO, etc.). I just want to move on to the next challenging thing, and then to the next, and the next...

    Does this happen to you too? How do you deal with it? How do you convince yourself to go in and write the freaking code? I'll take any answer.

    Read the article
