Search Results

Search found 2772 results on 111 pages for 'hour'.


  • 'Xojo' is the only application that I can't install

    - by Gichan
    I can't install Xojo. When I click Install in the Software Center, it makes no progress. In the terminal it gets stuck here:

        gichan02@gichan02-Latitude-D520:~$ sudo apt-get install xojo
        [sudo] password for gichan02:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          xojo-bin
        The following NEW packages will be installed:
          xojo xojo-bin
        0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
        Need to get 209 MB/209 MB of archives.
        After this operation, 596 MB of additional disk space will be used.
        Do you want to continue? [Y/n] Y
        0% [Working]

    Then, after waiting an hour for progress, it says:

        Failed to fetch https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu/pool/main/x/xojo/xojo-bin_2013.41-0ubuntu1_i386.deb  Could not resolve host: private-ppa.launchpad.net

    So I added an apt repository entry for the private PPA:

        deb https://ging-giana:[email protected]/commercial-ppa-uploaders/xojo/ubuntu trusty main

    Then, when I try apt-get update:

        GPG error: https://private-ppa.launchpad.net trusty Release: The following signatures were invalid: NODATA 2

    Then I noticed something in the Software Sources "Other Software" tab: "Added by software-center; credentials stored in /etc/apt/auth.conf" against https://private-ppa.launchpad.net/commercial-ppa-uploaders/xojo/ubuntu. So I went to /etc/apt/auth.conf, but it cannot be opened, and it is not a keyserver. When I unchecked the software-center entry, the GPG error was gone, but then I found myself back at the beginning of the problem: stuck at '0% [Working]'. Xojo is the only application that I can't install. Any explanation why?
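
    A couple of quick checks are worth running first; this is a minimal sketch assuming the stock Ubuntu paths (auth.conf is root-owned, which is why it "cannot be opened" from a normal editor):

        # Read the stored PPA credentials; the file is only readable as root
        sudo cat /etc/apt/auth.conf

        # Confirm the private PPA host resolves at all; the original failure
        # ("Could not resolve host") points at DNS, not at GPG
        host private-ppa.launchpad.net

        # If the host resolves, refresh the lists and retry the install
        sudo apt-get update
        sudo apt-get install xojo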

    Read the article

  • recent unreliable wireless connection on 10.04 and 10.10

    - by gabkdlly
    Recently, my internet connection over wireless has become unreliable, on both a Dell laptop running Ubuntu 10.04 and my desktop running Ubuntu 10.10. The problem does not seem to occur on a laptop running Windows Vista, nor on my Openmoko Freerunner (running Android 1.5), though I hardly ever use that device over WLAN, so the problem may have just slipped by. The problem also does not appear when I boot into Ubuntu 9.10 from a live CD (more precisely, I was able to ping fu-berlin.de for an hour without any packet loss). Under Ubuntu 10.10, I am experiencing about 33% packet loss. On my main Ubuntu desktop, I have tried the following wireless devices:

    - a Longshine PCI card (an old device with an RTL8180L chip)
    - a D-Link DWL-510 PCI card (this device threw warnings in dmesg)
    - a USB device from MSI (US54EX)

    Usually my wireless network shows up in the network manager with normal signal strength, even when the connection speed is slow (which happens often) or the connection gets reset (asking me to click Connect to re-authenticate my wireless connection). I have observed this problem with a Netgear KWGR614 router (with the manufacturer's firmware) as well as with a TP-LINK TL-WR741ND router running OpenWrt. Taking a look at my router's logs, I find many instances of the following line:

        Tuesday,04 Jan 2011 03:53:01 [TCP SYN Flood][Deny access policy matched, dropping packet]

    I know that the Netgear router is susceptible to denial-of-service attacks, as I have previously been able to disrupt its operation by putting an nmap scan into a while loop. I use WEP on the Netgear router and WPA on the TP-LINK to encrypt the wireless connections. Is it possible that someone is jamming my signal?
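
    A quick way to quantify the loss on each machine, reusing the host the post already pinged (the interface name wlan0 below is an assumption; run iwconfig with no arguments to find yours):

        # Send a fixed sample of pings and read the loss percentage from the summary
        ping -c 100 fu-berlin.de | tail -n 2

        # Check link quality, signal level and retry counters on the wireless card
        iwconfig wlan0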

    Read the article

  • Tracking feature requests for small-scale components

    - by DXM
    I'm curious how other development teams (especially those that work in moderate to large development groups) track "future" features/wishlist functionality for internally developed frameworks or components. I know the standard advice is that a development team should find one good tool for tracking bugs/features and use that for everything, and I agree with that if the future requests are for the product itself. In my company we have an engineering department, which is broken up into multiple groups, and within each there can be one to several agile teams. The bug-tracking product we use has been "a leader since 1997" (its UI/usability seems to still be evaluated against that year even today), but my agile team, or even my group, doesn't really control what is being used by the whole department. What we are looking to track is not necessarily product features but expansion/nice-to-have functionality for internal components that go into our product. To name a few:

    - a framework/utility library on top of CppUnit which our developers share
    - a low-level IPC communications framework
    - a common development SDK that I and several other team leads started, to help share common code/tools at the department-wide level (this SDK is released as an internal "product" to each of the groups)

    Is the standard practice to use the one bug-tracking tool? Or would it make more sense to set up something more localized specifically for our needs and maintain it ourselves? It's also unclear how management will feel if developers start performing "IT" roles of maintaining software and servers. At the same time, right now we use Excel files, an internal wiki and MS OneNote for this kind of stuff, and that just doesn't feel right. (I'm afraid to ask for actual software recommendations, since that might make this question more localized. Also, developers need this far more than management, so it would be nice to find something either free or no more than the cost of a happy hour.)

    Read the article

  • The premier support for Sun Cluster 3.1 ended

    - by JuergenS
    In October 2011 the premier support for Sun Cluster 3.1 ended; see the details in the Oracle Lifetime Support Policy for Oracle and Sun System Software document. There is no 'Extended Support', and the 'Sustaining Support Ends' date is indefinite. But for indefinite 'Sustaining Support' I would like to point out, from the mentioned document (version Sept. 2011), page 5, that Sustaining Support does NOT include:

    - New program updates, fixes, security alerts, general maintenance releases, selected functionality releases and documentation updates or upgrade tools
    - Certification with most new third-party products/versions and most new Oracle products
    - 24-hour commitment and response guidelines for Severity 1 service requests
    - Previously released fixes or updates that Oracle no longer supports

    This means Solaris 10 9/10 (Update 9) is the last qualified release for Sun Cluster 3.1; Sun Cluster 3.1 is not qualified on Solaris 10 8/11 (Update 10). Furthermore, there is a known issue with SVM patch 145899-06 or higher. This SVM patch is part of Solaris 10 8/11 (Update 10). 145899-06 is the first released patch of this number, so the support for Sun Cluster 3.1 ends with the previous SVM patches 144622-01 and 139967-02. For details about the known problem with SVM patch 145899-06, please refer to doc 1378828.1. This also means you should freeze (no patching, no upgrades) your Sun Cluster 3.1 configuration no later than Solaris 10 9/10 (Update 9). Or, even better, plan an upgrade to Solaris Cluster 3.3 now to get back to full support.
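
    A quick way to check whether the problematic SVM patch is already on a node, using the stock Solaris 10 patch-listing tool and the patch IDs named above:

        # Any installed revision of the SVM patch called out in the post
        showrev -p | grep 145899

        # Which of the still-supported revisions is present
        showrev -p | egrep '144622|139967'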

    Read the article

  • Trigger IP ban based on request of given file?

    - by Mike Atlas
    I run a website where "x.php" was known to have vulnerabilities. The vulnerability has been fixed and I don't have "x.php" on my site anymore. As happens with major public vulnerabilities, script kiddies are running tools that hit my site looking for "x.php" across the entire structure of the site, constantly, 24/7. This is wasted bandwidth, traffic and load that I don't really need. Is there a way to trigger a time-based (or permanent) ban on an IP address that tries to access "x.php" anywhere on my site? Perhaps I need a custom 404 PHP page that captures the fact that the request was for "x.php" and then triggers the ban? How can I do that? Thanks! EDIT: I should add that, as part of hardening my site, I've started using ZBBlock: "This php security script is designed to detect certain behaviors detrimental to websites, or known bad addresses attempting to access your site. It then will send the bad robot (usually) or hacker an authentic 403 FORBIDDEN page with a description of what the problem was. If the attacker persists, then they will be served up a permanently recurring 503 OVERLOAD message with a 24 hour timeout." But ZBBlock doesn't do quite exactly what I want to do; it does help with other spam/script/hack blocking.
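
    One way to get exactly this behavior is fail2ban, which tails the web server log and bans matching IPs at the firewall. A minimal sketch, assuming Apache logging to the default Ubuntu/Debian location; the filter name "xphp-probe" is made up for this example:

        # /etc/fail2ban/filter.d/xphp-probe.conf : any request for x.php is a "failure"
        [Definition]
        failregex = ^<HOST> .*"(GET|POST) [^"]*x\.php
        ignoreregex =

        # Appended to /etc/fail2ban/jail.local : a single hit earns a 24-hour ban
        [xphp-probe]
        enabled  = true
        filter   = xphp-probe
        logpath  = /var/log/apache2/access.log
        maxretry = 1
        bantime  = 86400

        # Reload fail2ban to pick up the new jail
        sudo service fail2ban restart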

    Read the article

  • Which is the best image hosting site for hosting images for the website? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements? I currently have a website and blog on a limited web hosting plan. When I upload images to my hosting server they consume a lot of bandwidth and space, so I was thinking of hosting the images on some other image hosting site and direct-linking them into my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, I see all of them have certain restrictions. The features I am looking for are as follows:

    1. At least 10 GB of space
    2. At least 500 GB of bandwidth (because I have very high traffic)
    3. Very high speed even under heavy load, like 1000 visitors accessing every hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy control
    6. Must never delete an image for inactivity
    7. Create and manage albums
    8. A company that will last in business for at least the next 10 years
    9. Free of cost
    10. Hotlinking/direct-linking of images

    Read the article

  • Upcoming: Oracle Advanced Benefits Advisor Webcasts Announced

    - by user793553
    Oracle Support is pleased to announce a new webcast covering the Open Enrollment functionality in Oracle Advanced Benefits. The webcast is repeated on three different dates, in order to make attendance easier whatever timezone you operate in. These one-hour sessions are recommended for technical and functional users who will be having an Open Enrollment cycle in the next 12 months. The session will review the best proactive practices recommended by Oracle Support, regardless of when your Open Enrollment takes place. It will review planning, patching, data corruption and critical checklists. Topics will include:

    - Planning ahead for Open Enrollment testing
    - Required patches
    - Test performance
    - Avoiding major patching/updates
    - Data corruption issues

    A short live demonstration (only if applicable) and a question-and-answer period will be included. Below is the schedule for the webcasts; the same can be found in the My Oracle Support document "Advisor Webcast Current Schedule", Doc ID 740966.1. Please follow the links to register for your chosen session.

    - Best Benefits Practices for Open Enrollment, Session 3 (Doc ID 1489318.1): October 17, 2012 at 16:00 US EST
    - Best Benefits Practices for Open Enrollment, Session 4 (Doc ID 1489319.1): October 31, 2012 at 16:00 US EST
    - Product Enhancements in R12.1.3 RUP 5, Session 2 (Doc ID 1489320.1): November 07, 2012 at 16:00 US EST

    Read the article

  • Requesting quality analysis test cases up front of implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and who did so without leaving a trace of documentation. Here were my initial steps to approach this problem:

    1. Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes".
    2. Knowing the widespread use and effects of this requirement, and in case it could not be finished prior to release, I asked if it would be a viable option to throw away the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no".
    3. Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach.

    Knowing that I had to insist on my request, given my responsibility for this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a two-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Extra Life 2012

    - by Chris Gardner
    Greetings, It's that time of year again: the time when I beg you for money for charity. See, unlike those bell ringers outside Wal-Mart, I don't do it when you have ten bazillion holiday obligations... Once again, I will be enduring a 24-hour marathon of gaming to raise money for Children's Hospital in Birmingham. All the money goes straight to them, and you get to tell Uncie Samual that you're good for that money. I'd REALLY like to break $1000 this year, as I have come REALLY close to doing so for the past 2 years. Don't live near me? Live closer to another children's hospital in the Children's Miracle Network? It's OK. Go find a participant who is working for your hospital and hook them up. Just let me know; I will join in with the karmic love you will already receive. This year, the event will take place on October 20th, beginning at 8 A.M. Once again, I will try to provide some web streams, etc., if you want to point and laugh (especially if I have to resort to playing Dance Central at 4 AM to stay awake for the last part). Look at it this way: I'm going to badger you about this for the next month. You might as well donate some money so you can righteously tell me to shut the Smurf up. You can place your bid at the link below. Feel free to spread the word to anyone and everyone. I thank you. The children thank you. Several breeds of feral platypus thank you. Maybe, just maybe, doing so will help you feel the love felt by re-fried beans when lovingly hugged in a warm tortilla. Enjoy your burrito. http://www.extra-life.org/participant/cgardner

    Read the article

  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros:

    - Early integration helps team members avoid painful merges.
    - It encourages better coding habits, because everyone makes sure they don't break co-workers' code each day.
    - Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work.

    Actually, I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour from leaving work to go home, your last action is checking in your code to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code only once you have finished an entire task, I believe the best approach is fixing small and flexible personal milestones and checking in the code once you finish one of these. I really believe this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, article or whatever that is in sync with my opinion? And do you find any problem with working this approach? Thank you in advance.
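
    The routine described above is a short ritual at each end of the day. A sketch of what it looks like with git (the post doesn't name a version control system, so the tool and branch name here are assumptions):

        # Morning: bring your working copy in sync with everyone else's work
        git pull --rebase origin main

        # ...work in small personal milestones, committing locally as each one lands...

        # Evening: verify you are not breaking the build, then share your work
        make test && git push origin main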

    Read the article

  • Performance of 8-bit operations on a 64-bit architecture

    - by wobbily_col
    I am usually a Python/database programmer, and I am considering using C for a problem. I have a set of sequences, 8 characters long, with 4 possible characters each. My problem involves combining sets of these sequences and filtering which sets match a criterion. The combinations of 5 run into billions of rows and take around an hour to run. Since each character needs only 2 bits, I can represent each sequence as 2 bytes. If I am working on a 64-bit architecture, will I gain any advantage by keeping these data structures as 2 bytes when I generate the combinations, or would I be just as well off storing them as 8 bytes (64 bits = 8 x 8)? If I am on a 64-bit architecture, all registers will be 64-bit, so in terms of operations that shouldn't be any faster (please correct me if I am wrong). Will I gain anything from the smaller storage requirements: can I fit more combinations in memory, or will they all take up 64 bits anyway? And finally, am I likely to gain anything by coding it in C? I have a first version which stores each sequence as a small int in a MySQL database. It then self-joins the table to itself a number of times in order to generate all the possible combinations. The performance is acceptable, depending on how many combinations are generated. I assume the database must involve some overhead.
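
    The 2-byte figure works out because 4 possible characters need only 2 bits each, and 8 x 2 = 16 bits. A toy encoder in shell to make the packing concrete (the alphabet {A,C,G,T} is an assumption; any 4 symbols work the same way):

        # Pack an 8-character sequence over {A,C,G,T} into one 16-bit integer
        encode() {
            local seq=$1 val=0 bits
            for ((i = 0; i < ${#seq}; i++)); do
                case ${seq:i:1} in
                    A) bits=0 ;; C) bits=1 ;; G) bits=2 ;; T) bits=3 ;;
                esac
                val=$(( (val << 2) | bits ))   # shift in 2 bits per character
            done
            printf '%s -> %u (0x%04x)\n' "$seq" "$val" "$val"
        }

        encode ACGTACGT   # prints: ACGTACGT -> 6939 (0x1b1b)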

    Read the article

  • What should happen at the start of a software project startup?

    - by Willem
    A quick introduction

    My college semesters include an 8-week project working for an actual company with a software need, in order to get some much-needed practical experience. I have just started such a project with 5 other students. We're required to spend roughly 40 hours a week per student on this project. We're working with SCRUM as the software development method; this was assigned by our teachers.

    The question

    Day one of the project just ended, which has created some questions for me as to how to start a project in the 'real world'. Our first day included working on a project planning document (not sure what the English term is), creating an appointment with the company for an introduction and the opportunity to start specifying the requirements, and setting up some standards for behavior within the group. However, these items didn't take that long to finish. We've made some concrete plans for tomorrow, and the day after that we'll meet the company. This still leaves several hours of 'work time' unspent. Is it usual not to be able to fill every hour of a day with work at the start of a project? Are we simply too inexperienced to see what work needs to be done at this stage of a project, or are we, perhaps, going through the above list too fast? How does this work in the 'real world'? Do you spend your time wondering 'what should I do now?', or do you have a clear view of what you're supposed to do at any given moment?

    Read the article

  • What does the crash mean? And why is my Ubuntu Blackbox crashing? How can I check more deeply?

    - by YumYumYum
    My system had been running for about 6 hours. Twice I lost remote access and the machine was not functioning anymore; the IP was gone, etc. Three entries show "crash", but I have no idea what happened or why. How can I find out what went wrong?

        $ last
        sun      pts/0    d51a429c9.access Mon Mar 19 13:44   still logged in
        sun      tty7     :0               Mon Mar 19 12:17   still logged in
        reboot   system boot  2.6.38-8-generic Mon Mar 19 12:17 - 13:49  (01:31)
        sun      pts/0    d51a429c9.access Mon Mar 19 10:05 - crash  (02:12)
        sun      tty7     :0               Mon Mar 19 10:00 - crash  (02:16)
        reboot   system boot  2.6.38-8-generic Mon Mar 19 10:00 - 13:49  (03:48)
        sun      pts/0    d51a429c9.access Mon Mar 19 09:24 - down   (00:35)
        sun      tty7     :0               Mon Mar 19 09:20 - down   (00:39)
        reboot   system boot  2.6.38-8-generic Mon Mar 19 09:20 - 10:00  (00:39)
        sun      pts/2    d51a429c9.access Sun Mar 18 18:04 - down   (01:14)
        sun      pts/1    d51a429c9.access Sun Mar 18 17:43 - down   (01:35)
        sun      pts/0    d51a429c9.access Sun Mar 18 15:07 - 18:47  (03:40)
        sun      pts/1    d51a429c9.access Sun Mar 18 12:58 - 17:42  (04:43)
        sun      pts/0    d51a429c9.access Sun Mar 18 10:21 - 15:06  (04:44)
        sun      tty7     :0               Sun Mar 18 08:56 - down   (10:22)
        reboot   system boot  2.6.38-8-generic Sun Mar 18 08:56 - 19:19  (10:22)
        sun      tty7     :0               Sat Mar 17 18:03 - down   (14:51)
        reboot   system boot  2.6.38-8-generic Sat Mar 17 18:03 - 08:55  (14:51)
        sun      tty7     :0               Sat Mar 17 15:00 - down   (01:38)
        reboot   system boot  2.6.38-8-generic Sat Mar 17 15:00 - 16:39  (01:38)
        sun      pts/0    d51a4297d.access Sat Mar 17 10:45 - 14:32  (03:46)
        sun      tty7     :0               Fri Mar 16 18:46 - crash  (20:14)
        reboot   system boot  2.6.38-8-generic Fri Mar 16 18:46 - 16:39  (21:53)

        $ sensors
        acpitz-virtual-0
        Adapter: Virtual device
        temp1:       +27.8°C  (crit = +100.0°C)
        temp2:       +29.8°C  (crit = +100.0°C)
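
    The "crash" entries in last mean the machine went down without a clean shutdown, so the kernel log from just before each crash time is the next place to look. A sketch, assuming the stock Ubuntu log locations (rotated files cover earlier boots):

        # Kernel messages that often precede a hang or panic
        grep -iE 'oops|panic|segfault|error' /var/log/kern.log /var/log/kern.log.1

        # Full system log leading up to a crash window, e.g. Mon Mar 19 10:00
        less /var/log/syslog.1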

    Read the article

  • Jenkins Parameterized Trigger + Copy Artifact

    - by Josh Kelley
    I'm working on setting up Jenkins to handle our release builds. A release build consists of a Windows installer that includes some binaries that must be built on Linux. Here's what I have so far: The Windows portion and Linux portion are set up as separate Jenkins projects. The Windows project is parameterized, taking the Subversion tag to build and release. As part of its build, the Windows project triggers a build of that same Subversion tag for the Linux project (using the Parameterized Trigger plugin) then copies the artifacts from the Linux project (using the Copy Artifact plugin) to the Windows project's workspace so that they can be included in the Windows installer. Where I'm stuck: Right now, Copy Artifact is set up to copy the last successful build. It seems more robust to configure Copy Artifact to copy from the exact build that Parameterized Trigger triggered, but I'm having trouble figuring out how to make that work. There's an option for a "build selector" parameter that I think is intended to help with this, but I can't figure out how it's supposed to be set up (and blindly experimenting with different possibilities is somewhat painful when the build takes an hour or two to find success or failure). How should I set this up? How does build selector work?
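
    For what it's worth, the Parameterized Trigger plugin can expose the number of the build it started, which Copy Artifact can then consume via its "Specific build" selector instead of "Last successful build". A sketch of the idea; the exact environment-variable name depends on the plugin version and the downstream project's name, so treat it as an assumption to verify on your installation:

        # In a build step after the blocking trigger, the plugin exports the
        # triggered build number, named after the downstream project
        echo "Triggered Linux build: ${TRIGGERED_BUILD_NUMBER_linux_project}"

        # That value can then be passed as the build parameter that Copy Artifact's
        # "Specific build" selector reads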

    Read the article

  • Live programming help

    - by frazras
    This idea has been floating around my head for a few years. I started some work on it, but I just want to know if it is feasible or sensible, or if there is something else like it out there; I don't want to find out I was wasting time on a solved issue. Whenever I have a programming issue, this is my sequence:

    1. Google it! That usually brings up a lot of things: blogs, forums, Stack Overflow, Stack Exchange, and even the official docs of the language/framework/CMS.
    2. Ask on IRC: I format my question and try to get people on IRC to help me.
    3. Make a post: I create a post on forums/Stack Overflow/Stack Exchange, or shout on Twitter with hashtags.

    Now, a lot of the time I am in the middle of a project with a deadline. So I want answers NOW!!! Sometimes just 5-15 minutes worth of attention. Usually, by the time I am failing at getting answers at #2, I am imagining how many people are ONLINE NOW with the skill and my exact answer, but who are playing video games, watching YouTube or idling online. However, if they were motivated, they could invest the 15 minutes helping me, and that would make a world of a difference. I am even in positions where I would PAY for that 15 minutes of instant help. If your rate is as much as $100/hour (a relatively good programmer), that is $25 that might save me 3 hours. This help would be live: text chat/Skype/phone/screenshare. Should I continue developing this idea, or is there a better alternative out there? Or is this an unfeasible idea?

    Read the article

  • Oracle Database 12c Technical Training

    - by mseika
    Audience: Database Administrators, Solutions Architects, System Engineers, Technical Consultants, implementation and support personnel, Technical Analysts, and Developers.

    What we are announcing: during his opening keynote at Oracle OpenWorld 2012, Larry Ellison previewed Oracle Database 12c, the latest generation of the database market leader and Oracle flagship product. Oracle Database 12c introduces many groundbreaking features, making it the database foundation of choice for the cloud. Many years of development effort have been focused on introducing innumerable new technological innovations centered on the cloud computing platform. This training session will focus on the specific needs of our Oracle partner community and developers, and provide insight into the many features and capabilities your customers will be looking to leverage in their own environments. Topics include:

    - Consolidation and cloud strategies
    - Deep dive into the key Database 12c options
    - Migrating to Oracle Database 12c

    Webcast details: Speaker: Sean Stacey, Director of Platform Technology Solutions. Please note that you will need to join both the audio and the web conference to attend; please plan on joining 10 minutes before the scheduled time. Region: NAS, LAD, EMEA. Date & time: July 28am PT (US); duration: 1 hour. Audio conference: US/Canada (866) 900-7470; if your country is not listed, dial +1 (706) 634-7953 (local charges may apply); conference ID: 98498078. Web conference password: Oracle123. If you have any questions, please contact: Yvonne Oung, Senior Manager, Channel Enablement, [email protected].

    Read the article

  • Partner Webcast - Oracle Taleo Cloud Service - 12 Dec 2012

    - by Thanos
    Talent Intelligence is the insight companies need to unlock the power of their most critical asset: their people. CEOs are charged with driving growth, and the one ingredient of growth that's common across all industries and regions, in good economic times and in bad, is people. In every economic environment, Talent Intelligence is a company's biggest lever for driving growth, innovation and customer success. Oracle Taleo Cloud Service provides a comprehensive suite of SaaS products that help companies manage their investment in people by improving their Talent Intelligence. It enables enterprises and midsize businesses to recruit top talent, align that talent to key goals, manage performance, develop and compensate top performers, and turn today's best performers into tomorrow's leaders. Join us to find out more about the industry's broadest cloud-based talent management platform. Agenda:

    - Oracle HCM footprint
    - Taleo value proposition
    - Taleo quick tour
    - Why invest in Taleo resources
    - Demonstrating Taleo
    - Q&A

    REGISTER NOW. Delivery format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle technologies, upcoming partner webcasts and events. Existing content available on YouTube, SlideShare and Oracle Mix.

    Read the article

  • Running a CRON job on an Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced with Linux, so be descriptive in your answer. My environment: a local Linux server (Ubuntu 12.04) hosting SugarCRM 6.5.2. There is an area in SugarCRM called the Scheduler, where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a CRON job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM:

        cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders for these contracts to be sent to the person concerned. Now, my SugarCRM email is configured correctly and I can send test emails using it. But CRON + Scheduler is not giving any result; I can't receive any emails. Then I tried to read /var/log/syslog, and it shows an entry for the following line each minute:

        Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1)

    I've a few questions: 1) What does the CRON job line I've added to the crontab mean? "cd /var/www/crm; php -f cron.php > /dev/null 2>&1" is not making any sense to me. 2) How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
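
    Reading that line piece by piece (on the assumption that the forum software ate the angle brackets from the original redirects): "cd /var/www/crm" changes into the SugarCRM directory, "php -f cron.php" runs SugarCRM's scheduler entry point, "> /dev/null" discards normal output, and "2>&1" sends error output to the same place. A sketch of a complete crontab entry; SugarCRM's docs suggest running it every minute, and the schedule field was missing from the line quoted above:

        # m h dom mon dow   command
        * * * * * cd /var/www/crm && php -f cron.php > /dev/null 2>&1

    If emails still do not go out, run the command by hand without the redirects so any errors become visible:

        cd /var/www/crm && php -f cron.php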

    Read the article

  • Ubuntu 12.04 Freezes at Bootup

    - by Ryan Yu
    I have an Acer Aspire One, model AO722, with a dual-boot configuration of Windows 7 Home Premium and Ubuntu 12.04. Processor: AMD C-50 at 1.00 GHz; 2.00 GB of RAM; 64-bit; AMD Radeon HD 6250 graphics. I installed Ubuntu two days ago, and since then, booting it has been sporadic. Windows 7 still loads just fine, but Ubuntu will sometimes freeze. The first time I booted Ubuntu, it ran with no problem; I worked with it for half an hour, then shut down the computer. The second time, it froze after ten minutes when I was trying to set up Thunderbird. The third time, same thing. The fourth time, it froze about 5 minutes after bootup when I was trying to connect to my wifi. The fifth time, it ran with no problems for about 5 hours. Then, three consecutive times I've restarted (what is it, the 6th, 7th, and 8th times?), it has gotten to the login screen, I've entered my password, all the text on the page disappears as if it's going to load my desktop, and it freezes. A minute ago, when I booted Ubuntu for the 9th time, it loaded the desktop fine. Who knows if it'll crash soon; probably, though. Any ideas?

    Read the article

  • Upgrade went wrong, laptop essentially 'bricked'

    - by hexagonheat
    I have an old netbook I was trying to upgrade from 10.04 to 10.10. Ubuntu was in the process of upgrading when everything completely froze. I let it sit for an hour, but it would not respond to anything, so I powered down the machine, and after that it didn't have the necessary files to run Ubuntu. I went to the terminal and it told me to put in some command, which I cannot remember, to 'rebuild' something. That brings me to now: when I turn on the laptop it comes up with a screen "GNU GRUB version 1.98+20100804-5ubuntu3.3" and a bunch of options such as:

    1. Ubuntu, with Linux 2.6.35-32-generic
    2. Ubuntu, with Linux 2.6.35-32-generic (recovery mode)

    (There are like 15 of these, with different numbers after 2.6.35 and the word 'generic'.) It doesn't seem to matter which I pick: it will go to the "Ubuntu" loading screen with the colored dots, but then every time it will freeze and I have to reboot to the same thing. I can't seem to get a terminal prompt anywhere either. Any ideas? I can't think of what to do :(
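
    An interrupted release upgrade usually leaves the package database half-configured, and letting dpkg finish the job is the canonical first step. A sketch, assuming you can reach a root shell via one of the recovery-mode entries in that GRUB menu:

        # The root filesystem comes up read-only in recovery mode
        mount -o remount,rw /

        # Finish configuring the packages the upgrade left half-installed
        dpkg --configure -a

        # Fetch and fix anything the interrupted upgrade left broken
        apt-get -f install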

    Read the article

  • 13.04 Dash Icon and Ubuntu Gnome-Classic Issues

    - by Flabricorn
    The dash icon for Ubuntu 13.04 was "released", and a how-to was written on OMG! Ubuntu!. Now, I followed all of the instructions, and the assets installed fine. Oh, they installed alright, and now I can't get rid of them! I went through the multiple posts, but since this seems to be a new problem, none of the answers were working and nothing new was appearing. So what I did was go into GNOME Classic, just for old times' sake, because I wanted to hide from the new 13.04 icons I hadn't really taken to. And of course, what should appear covering my top and bottom panels but the Unity dock, with its 13.04 dash icon and everything. Wait, how did Unity make its way into my GNOME Classic environment? I'm not really sure, but after about an hour of fooling around with Unity Tweak and the Compiz configuration, I needed something new. Would it be a bad idea to reinstall? At this point, not much would be lost, but I would like to stray from that idea.

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide proper detail; I am new to server stuff and am looking for general advice about this issue: I was helping out a client doing web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one which I built on Wordpress. Everything was working fine last month when we did the redesign but all of a sudden this morning, every single file on their server got deleted: every web page, file, and all e-mail addresses. I phoned the hosting company (alliancewww.com) to ask, "why did every single file suddenly delete off the server?" They said, "because someone must have deleted it." I said, "well no one did." (Which I'm pretty damn sure no one did.) They said, "you can pay us to look into the problem." I authorized $150 for them to look into the problem. About an hour later, everything was magically re-instated. The host said they had a back-up of everything and just restored everything. What I'm wondering: Does anyone have recommendations of logs I can go through to investigate how the files got deleted in the first place? I've checked out their cPanel logs but found nothing. Is it likely that this is a mess-up on the host's part?

    Read the article

  • Sangam 13: Hyderabad, India

    - by mvaughan
    by Teena Singh, Oracle Applications User Experience. The AIOUG (All India Oracle User Group) will be hosting Sangam 13 on November 8th and 9th in Hyderabad, India. The first Sangam conference was in 2009, and the AppsUX team has been involved with the conference and user group membership since 2011. We are excited to be returning to the conference and meeting Oracle end users there. For the first time at Sangam, the AppsUX team will host an Onsite Usability Lab at the conference. If you or one of your team members is attending the conference and interested in attending a pre-scheduled one-on-one usability session, contact [email protected]. In addition to pre-scheduled sessions in the Onsite Usability Lab, our team will also be hosting walk-in studies. Whether you have 5 minutes, 15 minutes, or half an hour, you can experience a one-on-one demo and learn more about how user testing is conducted with a UX expert. Additionally, you can learn how you and your company can participate in future design and user research activities. The AppsUX team will also be available at the Oracle booth in the demo area if you want to ask questions. Finally, you can learn how simplicity, consistency, and emerging trends are driving the applications user experience strategy at Oracle by attending Thomas Wolfmaier's (Director of SCM User Experience, Oracle) presentation, "Applications User Experiences in the Cloud: Trends and Strategy," on November 8th, 2013. For further information on our team's involvement in the conference, please refer to the events page on Usable Apps here.

    Read the article

  • Languages like Tcl that have configurable syntax?

    - by boost
    I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl: namely, add functionality in a way other than just adding functions. For example, in Clipper/(x)Harbour there are commands #command, #translate, #xcommand and #xtranslate that allow things like this:

        #xcommand REPEAT; => DO WHILE .T.
        #xcommand UNTIL <cond>; => IF (<cond>); ;EXIT; ;ENDIF; ;ENDDO

        LOCAL n := 1
        REPEAT
            n := n + 1
        UNTIL n > 100

    Similarly, in Tcl I'm doing:

        proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} {
            set fromDate [clock scan $dat1]
            set toDate [clock scan $dat2]
            if {$slice eq "day"} then {set incrementor [expr 24 * 60]}
            if {$slice eq "hour"} then {set incrementor 60}
            set method DateRange
            puts "Scanning from [clock format $fromDate -format "%c"] to [clock format $toDate -format "%c"] by $slice"
            for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} {
                # ...
            }
        }

        process_range for "client" from "2013-10-18 00:00" to "2013-10-20 23:59" by day

    Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks.

    Read the article

  • Monitoring Baseline

    - by Grant Fritchey
    Knowing what's happening on your servers is important; that's monitoring. Knowing what happened on your server is establishing a baseline. You need to do both. I really enjoyed this blog post by Ted Krueger (blog|twitter). It's not enough to know what happened in the last hour or yesterday; you need to compare today to last week, especially if you released software this weekend. You need to compare today to 30 days ago in order to begin to establish future projections. How your data has changed over 30 days is a great indicator of how it's going to change over the next 30. No, it's not perfect, but predicting the future is not exactly a science; just ask your local weatherman. Red Gate's SQL Monitor can show you the last week, the last 30 days, the last year, or all the data you've collected (if you choose to keep a year's worth of data or more, please have PLENTY of storage standing by). You have a lot of choice and control here over how much data you store, and SQL Monitor's configuration window lets you set this up. This applies to version 2.3 of SQL Monitor, so if you're running an older version, you might want to update. The key point is that a baseline simply represents a moment in time on your server. The ability to compare now to then is what you're looking for in order to have a really useful baseline, as Ted lays out so well in his post.

    Read the article
