Search Results

Search found 18842 results on 754 pages for 'the machine'.


  • Network does not connect at boot

    - by Daniel Svozil
    I am on Ubuntu 12.04, fresh install. When I boot the machine, the boot screen says "Connecting to network", which later changes to something like "Did not connect, trying for another 60s". The network does not connect at boot, but I can then log in without a network connection, and if I start the network-manager service manually from the terminal (sudo service network-manager start), the network connects without any problems. Does anybody know where the problem could be? I don't want to wait more than two minutes on every restart :-). I am new to Ubuntu (and also to upstart), so I am a bit lost. There is no /var/log/messages; in dmesg I found this record, though it may not be related: init: network-interface (eth1) pre-start process (492) terminated with status 1 init: network-interface (eth1) post-stop process (548) terminated with status 1 Thanks, Daniel
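
    One common cause of this on 12.04 (an assumption here, not something confirmed by the question itself) is a stale stanza for eth1 in /etc/network/interfaces: the boot-time "waiting for network configuration" job waits on every interface listed there, while NetworkManager only manages interfaces that are not listed. A minimal sketch of checking for and removing such a stanza:

        # Show which interfaces ifupdown is configured to bring up at boot.
        cat /etc/network/interfaces
        # If eth1 appears there (e.g. "auto eth1" / "iface eth1 inet dhcp") but should be
        # handled by NetworkManager instead, comment those lines out and reboot.
        sudo nano /etc/network/interfaces
        # Normally only the loopback stanza needs to remain:
        #   auto lo
        #   iface lo inet loopback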

    Read the article

  • Ubuntu 14 painfully slow on Dell R200

    - by sirmonkey
    I didn't notice it at first. The machines (there are 20-plus of them) are to be used as simple file servers. It wasn't until Samba just wouldn't act right, and I installed a desktop GUI and started diagnosing the problem further, that I caught the slow performance... I've tested 4 servers and they are all painfully slow, yet Windows 7 runs fantastically on them. I have Googled and searched, but found nothing to explain this. An easy test: dmesg output scrolls so slowly you can almost read it. I'm guessing it's an APIC or CPU power-management issue. What output would you like? It is a Core 2 machine with 4 GB of RAM. On-board data.
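
    A hedged diagnostic sketch for the APIC / power-management guess above (generic commands, nothing assumed about these particular servers):

        # Check whether the CPUs are being held at their lowest frequency.
        grep MHz /proc/cpuinfo
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null
        # Look for clocksource, APIC or timer complaints in the kernel log.
        dmesg | egrep -i 'clocksource|apic|hpet|tsc'
        # Crude CPU/memory timing check; on healthy hardware this finishes in well under a second.
        time dd if=/dev/zero of=/dev/null bs=1M count=1024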

    Read the article

  • How to protect own software from copying [closed]

    - by Zzz
    Possible Duplicate: How do you prevent the piracy of your software? Is it possible to protect a file from being copied if you are the administrator of the machine? I heard a story about the following arrangement: a software developer sells his software by personally installing it on every client's computer, and the software then does not work on other computers and cannot simply be copied over. How can the first and the second kinds of protection be implemented? And is such protection worth the effort if the software costs about $100 for all copies across the client's company?
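
    A common way to get the "does not work on other computers" behaviour is node-locking: tie a licence key to a stable machine identifier and check it at startup. The sketch below is illustrative only; the secret, paths and hashing scheme are made up, and a determined user can still bypass or copy it, so it only deters casual copying:

        #!/bin/sh
        # License check sketch. Assume the vendor generated /opt/app/license.key at install time as:
        #   printf '%s%s' "$(cat /etc/machine-id)" "$SECRET" | sha256sum | cut -d' ' -f1
        SECRET="vendor-secret"   # hypothetical value baked into the product
        MACHINE_ID=$(cat /etc/machine-id 2>/dev/null || cat /var/lib/dbus/machine-id)
        EXPECTED=$(printf '%s%s' "$MACHINE_ID" "$SECRET" | sha256sum | cut -d' ' -f1)
        if [ "$EXPECTED" = "$(cat /opt/app/license.key 2>/dev/null)" ]; then
            echo "licensed for this machine"
        else
            echo "not licensed for this machine" >&2
            exit 1
        fi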

    Read the article

  • Take part in the Oracle certification workshops in Paris on 30 October & 9 November 2012

    - by mseika
    Take part in the Oracle certification workshops in Paris on 30 October & 9 November 2012. Win the preference of your clients and prospects thanks to your Oracle specializations! As a continuation of your path towards Oracle certification, we are offering two half-days of dedicated certification workshops in Paris. Reserve your morning of 30 October or 9 November to sit the certification exams your company needs in order to become specialized. The workshops will take place at Paris Saint Lazare, from 9:00 to 12:30, at: Centre M2i, 20 rue d'Athènes, 75009 PARIS. Don't miss this opportunity; a wide choice of workshops is on offer. Please note that the number of places is limited. Certification workshop programme: - Oracle Software: Oracle Database 11g, Database Security, Data Integration, Data Warehousing, Oracle Business Intelligence Foundation, Exadata Database Machine, Exalogic Elastic Cloud, SOA... - Oracle Hardware: Oracle Linux, Oracle Solaris, SPARC Entry & Midrange, SPARC T-Series Servers, Unified Storage, Virtualization. The workshops will be followed by lunch. Prerequisites are required to sit these online exams; please check them.

    Read the article

  • GRUB- error: no such partition grub rescue and Error: No default or UI configuration directive found boot > on pendrive

    - by Ash
    I have a Dell Inspiron. I previously installed Ubuntu 11.10 alongside Windows 7 and made it dual boot. Since I wanted to upgrade my Ubuntu version and change the partition layout, I deleted the 11.10 partition directly and extended my hard drive space (Windows + Ubuntu); at that moment everything was fine. Then I prepared a 12.04 32-bit USB and installed from it. It installed, but it didn't show a dual-boot option like 11.10 did, and my machine booted directly into Windows 7, so I immediately deleted my 12.04 partition again. Now I can log in to Windows 7, but whenever I plug in the USB (with 12.04) and boot from it, I get the error "no such partition grub rescue", and if I try a lower version (11.04) it shows another error: "Error: No default or UI configuration directive found boot". I have reinstalled Windows 7 and reformatted all partitions, but I am still facing the same error :(
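
    One possibility (an assumption, since the BIOS boot order isn't shown in the question) is that the machine is still booting the hard drive's MBR, which holds the leftover GRUB from the deleted installs, rather than the USB stick. Re-writing the stick as a raw image also rules out a bad installer medium; a minimal sketch from any Linux live session (the device name below is a placeholder):

        # Identify the USB stick first - /dev/sdX is NOT a real device name, check with lsblk.
        lsblk -o NAME,SIZE,MODEL
        # Write the installer image raw to the stick (this destroys everything on it).
        sudo dd if=ubuntu-12.04-desktop-i386.iso of=/dev/sdX bs=4M
        sync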

    Read the article

  • New EFI Laptop: After restart, Ubuntu boots to black screen?

    - by Henson S
    So, I've got this new laptop (Acer Aspire 5560 15") that's doing a funny thing. If I cold boot the machine, everything is great: I see the GRUB menu, and Ubuntu 12.04 loads just fine. If I reboot from within Ubuntu, I see the BIOS screen, but then nothing: no GRUB menu, and no hard drive activity except for just a blip. I noticed that when installing Ubuntu this time I had to create an EFI boot partition (something I'm not used to), and I'm guessing that it has something to do with the issue. Could be totally wrong. Any ideas?
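
    A hedged first diagnostic step is to look at the firmware's EFI boot entries and their order, since on some firmware a warm reboot falls back to a different entry than a cold boot does. The entry numbers below are examples, not values taken from the question:

        sudo efibootmgr -v        # list EFI boot entries, the current order and the active entry
        # If "ubuntu" is not first in BootOrder, promote it (0003,0000 is a hypothetical order):
        sudo efibootmgr -o 0003,0000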

    Read the article

  • Anti aliasing problem

    - by byronyasgur
    I am auditioning fonts on Google Web Fonts, and one that I was discounting was Ubuntu because it looked a bit jagged (screenshot below taken straight from Google); however, I afterwards read an article where it was mentioned as a good choice, and there was a screenshot where it looked really good (to me, anyway). I am using Windows 7 and have tried looking at it in Chrome and Firefox. I notice the same thing with some other fonts, but this one is a good example because it looks perfect in the article's screenshot but not so good when I look at it on the Google site. I know this is essentially a question about my computer's settings, but I thought that this would be the best place to pose it: is there something wrong with the settings on my machine, seeing as it's obviously not showing the font the same way on my computer as it did when the article writer downloaded it and used it in an image? The screenshot from Google ... The screenshot from the article above ...

    Read the article

  • A Patent for Workload Management Based on Service Level Objectives

    - by jsavit
    I'm very pleased to announce that after a tiny :-) wait of about 5 years, my patent application for a workload manager was finally approved.

    Background
    Many operating systems have a resource manager which lets you control machine resources. For example, Solaris provides controls for CPU with several options: shares for proportional CPU allocation (if you have twice as many shares as me, and we are competing for CPU, you'll get about twice as many CPU cycles); dedicated CPU allocation, in which a number of CPUs are exclusively dedicated to an application's use (you can say that a zone or project "owns" 8 CPUs on a 32-CPU machine, for example); and capped CPU, in which you specify the upper bound, or cap, of how much CPU an application gets (for example, you can throttle an application to 0.125 of a CPU). (This isn't meant to be an exhaustive list of Solaris RM controls.)

    Workload management
    Useful as that is (and tragic that some other operating systems have so little resource management and isolation that they frighten people into running only one app per OS instance, and into wastefully sizing every server for the peak workload it might experience), that's not really workload management. With resource management one controls the resources and hopes that's enough to meet application service objectives. In fact, we hold resource distribution constant, see if that was good enough, and adjust resource distribution if it didn't meet service level objectives. Here's an example of what happens today: Let's try 30% dedicated CPU. Not enough? Let's try 80%. Oh, that's too much - we're achieving much better response time than the objective, but other workloads are starving. Let's back that off and try again. It's not the process I object to - it's that we too often do this manually. Worse, we sometimes identify and adjust the wrong resource and fiddle with that to no useful result. Back in my days as a customer managing large systems, one of my users would call me up to beg for a "CPU boost": Me: "It won't make any difference - there's plenty of spare CPU to be had, and your application is completely I/O bound." User: "Please do it anyway." Me: "Oh, all right, but it won't do you any good." (I did, because he was a friend, but it didn't help.)

    Prior art
    There are some operating environments that take a stab at workload management (rather than resource management), but I find them lacking. I know of one that uses synthetic "service units" composed of the sum of CPU, I/O and memory allocations multiplied by weighting factors. A workload is set to meet a target rate of service units consumed per second. But this seems to be missing a key point: what is the relationship between artificial 'service units' and actually meeting a throughput or response time objective? What if I get plenty of one of the components (so I am getting enough service units), but not enough of the resource that's needed to remove the bottleneck?

    Actual workload management
    That's not really the answer either. What is needed is to specify a workload's service levels in terms of externally visible metrics that are meaningful to a business, such as response times or transactions per second, and have the workload manager figure out which resources are not being adequately provided and then adjust them as needed. If an application is not meeting its service level objectives and the reason is that it's not getting enough CPU cycles, adjust its CPU resource accordingly. If the reason is that the application isn't getting enough RAM to keep its working set in memory, then adjust its RAM assignment appropriately so it stops swapping. Simple idea, but that's a task we keep dumping on system administrators. In other words: don't hold the number of CPU shares constant and watch the achievement of the service level vary. Instead, hold the service level constant, and dynamically adjust the number of CPU shares (or the amount of other resources like RAM or I/O bandwidth) in order to meet the objective.

    Instrumenting non-instrumented applications
    There's one little problem here: how do I measure application performance in a way that relates to a service level? I don't want to do it based on internal resources like the number of CPU seconds received per minute - we need to make resource decisions based on externally visible and meaningful measures of performance, not synthetic items or internal resource counters. If I have a way of marking the beginning and end of a transaction, I can then measure whether or not the application is meeting an objective based on it. If I can observe the delay factors for an application, I can see which resource shortages are slowing an application enough to keep it from meeting its objectives, and I can then adjust resource allocations to relieve those shortages. Fortunately, Solaris provides facilities for both marking application progress and determining what factors cause application latency. The Solaris DTrace facility lets me introspect on application behavior: in particular, I can see events like "receive a web hit" and "respond to that web hit", so I can get transaction rate and response time. DTrace (and tools like prstat) let me see where latency is being added to an application, so I know which resource to adjust.

    Summary
    After a delay of a mere few years, I am the proud creator of a patent (advice to anyone interested in going through the process: don't hold your breath!). The fundamental idea is fairly simple: instead of holding resources constant and suffering variable levels of success in meeting service level objectives, properly characterise the service level objective in meaningful terms, instrument the application to see if it's meeting the objective, and then have a workload manager change resource allocations to remove the delays preventing service level attainment. I've done it by hand for a long time - I think that's what a computer should do for me.
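
    For concreteness, a hedged sketch of the Solaris CPU controls described above; the zone names are hypothetical, and exact syntax should be checked against the zonecfg(1M) and prctl(1) man pages for the release in use:

        # Proportional allocation: give a zone 20 CPU shares under the Fair Share Scheduler.
        zonecfg -z webzone 'set cpu-shares=20'
        # Dedicated allocation: reserve 8 whole CPUs for a zone.
        zonecfg -z dbzone 'add dedicated-cpu; set ncpus=8; end'
        # Capped allocation: throttle a zone to 0.125 of a CPU.
        zonecfg -z batchzone 'add capped-cpu; set ncpus=0.125; end'
        # Adjust the shares of a running zone on the fly.
        prctl -n zone.cpu-shares -r -v 30 -i zone webzone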

    Read the article

  • How to use different php.ini files for different VirtualHosts?

    - by gsingh2011
    I have my site and its staging subdomain running on the same CentOS machine running Apache. The subdomain is created using a VirtualHost, and I use it to find bugs before I push to production. I want the php.ini file for the staging VirtualHost to be a development one, while the production site uses a production php.ini. How can I configure Apache to use different php.ini files? I don't want to use php_value/php_flag for everything; I'd rather just use the php.ini files I already have available. I've tried creating an .htaccess file that looks like this: SetEnv PHPRC /path/to/php.ini/directory This has no effect, as phpinfo() tells me it's still using /etc/php.ini. I've also tried setting PHPIniDir for both virtual hosts (www and staging), and it complains about seeing the directive twice.
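
    For context (general mod_php behaviour, not specific to this server): mod_php reads a single php.ini when Apache starts, which is why PHPRC is ignored (it is only honoured by the CGI/FastCGI SAPIs) and why PHPIniDir may appear only once. Short of moving the staging vhost to FastCGI or PHP-FPM, the usual fallback is per-vhost overrides for just the settings that differ; a minimal sketch (paths and values are examples):

        # Add to the staging VirtualHost block (e.g. in /etc/httpd/conf.d/staging.conf):
        #   php_admin_flag  display_errors on
        #   php_admin_value error_reporting 32767
        # then check the config and reload Apache:
        sudo apachectl configtest && sudo apachectl graceful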

    Read the article

  • Code review process when using GIT as a repository?

    - by Sid
    What is the best process for code review when using Git? Current process: we have a Git server with a master branch to which everyone commits; devs work off the local master mirror or a local feature branch; devs commit to the server's master branch; devs request code review on the last commit. Problem: any bug found in code review is already in master by the time it's caught. Worse, usually someone has burnt a few hours trying to figure out what happened... So, we would like to do code review BEFORE delivery into 'master', and to have a process that works with a global team (no over-the-shoulder reviews!), something that doesn't require an individual dev's desk/machine to be powered up so someone else can remote in (remove the human dependency; devs go home in different time zones). We use TortoiseGit for a visual representation of the list of files changed, diffing files, etc. Some of us drop into a Git shell when the GUI isn't enough, but ideally we'd like the workflow to be simple and GUI-based (I want the tool to lift any burden, not my devs).
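
    A hedged sketch of a pre-merge review flow using plain Git; the branch names and remote are examples, and a server-side review tool (Gerrit, a GitLab merge request, a GitHub pull request, etc.) would satisfy the "global team, nobody's desktop involved" requirement because the review happens against the shared server:

        # Developer: branch from the shared master and publish the branch for review.
        git checkout -b feature/issue-123 origin/master
        # ...commit work locally...
        git push origin feature/issue-123
        # Reviewer (any time zone, any machine): fetch and inspect exactly what would land in master.
        git fetch origin
        git diff origin/master...origin/feature/issue-123
        # Only after approval does the change reach master.
        git checkout master && git pull --ff-only origin master
        git merge --no-ff feature/issue-123
        git push origin master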

    Read the article

  • How do I X forward a Windows application to a Linux system using ssh?

    - by triunenature
    OK, so if I have two Linux machines (A and B) and I have a program on B that I want to display on A, I do: user@LinuxA:~$ ssh -X LinuxB user@LinuxB:~$ programName (displays on the LinuxA machine) The same thing works with WindowsA and LinuxB (program on Linux): start the Xming X server on Windows, run PuTTY with X11 forwarding to display :0.0, and after connecting to LinuxB, run the program and it loads on Windows! Now here is the question: WindowsA and LinuxB, with the program on Windows. How do I run a Windows program, displayed on Linux, while using the Windows machine's resources? BTW, I know it can be done because years ago I read a white paper on it, but I never actually tested it out.
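
    For what it's worth, a native Win32 program is not an X client, so it cannot be X-forwarded the way the Linux-to-Linux example above works; the usual routes are exporting the Windows desktop (or a single application) over RDP, or using an X server product on Windows for programs that were actually built against X. A hedged sketch of the RDP route from the Linux side (host name and user are examples):

        sudo apt-get install rdesktop
        # Open the Windows machine's desktop in a window on the Linux display;
        # the program runs entirely on WindowsA, using that machine's resources.
        rdesktop -u winuser -g 1280x800 WindowsA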

    Read the article

  • Disable shutdown/suspend if another user is logged in via SSH

    - by Denwerko
    I remember that in Ubuntu versions around 9.04 it was possible to prevent a user from shutting down (and maybe suspending) the system if another user was logged in - something like PolicyKit or similar. Is it possible to do this in 11.04? Thanks.

    Edit: if someone needs it (at their own risk), a little change in /usr/lib/pm-utils/bin/pm-action will allow a user to suspend the machine only when they are the only user logged in, or when they run sudo pm-suspend. Probably not the best piece of code, but it works for now.

        diff -r 805887c5c0f6 pm-action
        --- a/pm-action Wed Jun 29 23:32:01 2011 +0200
        +++ b/pm-action Wed Jun 29 23:37:23 2011 +0200
        @@ -47,6 +47,14 @@
             exit 1
         fi

        +if [ "$(id -u)" == 0 -o `w -h | cut -f 1 -d " " | sort | uniq | wc -l` -eq 1 ]; then
        +    echo "either youre root or root isnt here and youre only user, continuing" 1>&2
        +else
        +    echo "Not suspending, root is here or there is more users" 1>&2
        +    exit 2
        +fi
        +
        +
         remove_suspend_lock()
         {
             release_lock "${STASHNAME}.lock"

    The question still stands: is it possible to forbid shutdown or suspend when there is more than one user logged in (without rewriting a system file)?
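
    A hedged answer to the PolicyKit part: on 11.04 the "shut down even though other users are logged in" behaviour is governed by ConsoleKit's polkit actions, and a local-authority file can deny them without modifying any shipped file. The action names below are the standard ConsoleKit ones but should be verified on the machine first:

        # Confirm the action names on this release:
        pkaction | grep -i consolekit
        # Then create /etc/polkit-1/localauthority/50-local.d/disable-multiuser-shutdown.pkla
        # with contents along these lines:
        #   [Disallow shutdown/restart while other users are logged in]
        #   Identity=unix-user:*
        #   Action=org.freedesktop.consolekit.system.stop-multiple-users;org.freedesktop.consolekit.system.restart-multiple-users
        #   ResultAny=no
        #   ResultInactive=no
        #   ResultActive=no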

    Read the article

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging me out with all my programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome, etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and GNOME System Monitor open as user and as root. I notice that console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice would be helpful. In the meantime I have ordered 2 x 4 GB RAM sticks to try and just throw hardware at the issue.
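
    Two hedged checks from a terminal (generic commands, nothing assumed about this particular machine): a large virtual size (VIRT) for console-kit-daemon is common and mostly harmless; resident memory (RES/RSS) is what actually consumes RAM, so sorting by that is more telling.

        # Overall memory picture, including swap.
        free -m
        # Top consumers by resident memory rather than virtual size.
        ps aux --sort=-rss | head -n 15
        # Check whether the kernel's OOM killer has been terminating processes.
        dmesg | grep -i "out of memory"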

    Read the article

  • SSL Certificate is Untrusted... sometimes

    - by dragonmantank
    The web designer I'm working with signed up a new client that needed an SSL certificate. We went to namecheap.com and purchased one from Comodo, got all the needed files, and set it up in ISPConfig. To test, we used Windows 7 running IE8, Firefox 3.6 and Chrome 12, and then OS X with Firefox 4, Safari 5 and Chrome 13; all of them worked fine. The client, however, is getting 'This connection is untrusted' in Firefox 4 and 5, while Safari works fine on their machine. On my machines and the designer's machines everything works with no errors. I had the client forward me the certificate info that Firefox shows, and the fingerprints match up. I also have an old Windows 2000 VM with IE6 and Chrome, and those work just fine as well. Any ideas on what else to check or do? The server is running Debian 5.0, up to date, with Apache 2 and ISPConfig 3.3.
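
    The "untrusted only on some machines" pattern is classically caused by a missing intermediate (chain) certificate: browsers that have already cached the Comodo intermediate trust the site, while fresh profiles do not. That is an assumption here, but it is cheap to verify from any shell (the host name below is a placeholder):

        # Show the chain the server actually sends; with a complete chain you should see the site
        # certificate followed by the Comodo intermediate(s).
        openssl s_client -connect www.example.com:443 -showcerts </dev/null
        # If only one certificate is sent, add the CA bundle from Comodo/Namecheap to the vhost
        # (SSLCertificateChainFile in Apache, or the chain/bundle field in ISPConfig) and reload.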

    Read the article

  • My laptop hangs a lot

    - by Salahuddin
    My laptop has 1 GB of RAM, a 120 GB hard disk and a 1.68 GHz processor, and I have both Ubuntu and Windows 7 on the machine. For the last few days, my laptop, running Ubuntu 11.10, has been hanging a lot. A lot of the time the touchpad won't work for unknown reasons, and I have to connect a USB mouse to be able to move the pointer on the screen. When browsing the web, the browser hangs a lot, and I have to restart the laptop to refresh it. I know that there is a hotkey to force Ubuntu to refresh when it hangs, but it doesn't work here. These problems happen only on Ubuntu, not on Windows 7... How do I make Ubuntu light and fast? Help me, please.

    Read the article

  • Xen 4.1 + NVidia driver = Unity has no window decorations

    - by Shade
    I am running Ubuntu 11.10 with Unity. I installed Xen and booted its kernel, and the machine booted normally. However, when I started Unity (3D), there were no window decorations. I tried Unity 2D and the window decorations are present there. I then decided to remove the driver (installed through Additional Drivers) and install it manually like this: IGNORE_XEN_PRESENCE=y CC="gcc -DNV_VMAP_4_PRESENT -DNV_SIGNAL_STRUCT_RLIM" ./NVIDIA* ...but the window decorations were still missing in Unity 3D. The window decorations are there when I boot the regular Ubuntu kernel (without Xen support). What could be the problem, and how do I fix it?
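
    Missing decorations in Unity 3D usually mean compiz, and therefore GPU acceleration, is not actually running under the Xen kernel. A hedged way to confirm whether the NVIDIA driver and 3D support are working before digging further:

        lsmod | grep nvidia                    # is the kernel module loaded under Xen?
        glxinfo | grep "direct rendering"      # should say "Yes" when the driver is active (needs mesa-utils)
        /usr/lib/nux/unity_support_test -p     # Unity's own 3D capability check on 11.10
        dmesg | grep -i nvidia                 # any module build/load errors under the Xen kernel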

    Read the article

  • Why can't I ping the server? VMware set to 'Bridged' loses IP address on 10.04.

    - by Dave
    I have installed a fresh 10.04 server as a VMware machine on a laptop on a home network, set the network connection to 'Bridged: connect directly to the physical network' from within VMware, and rebooted the server. It then loses its IP address; 'dhclient eth0' says "No working leases in persistent database - sleeping". DHCP is working fine on the Wi-Fi router. The laptop is wired to a wireless router, and from there connects wirelessly to a desktop. The desktop and laptop can ping each other from Windows, and I can ping the VM from Windows on the same laptop, but not from the desktop. Strangely, ping has started to resolve hostnames to IPv6 addresses instead of IPv4; I don't know whether that's connected. A kick in the right direction would be greatly appreciated. I've been an Ubuntu desktop user for a few years, but I'm new to Ubuntu servers.
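
    A hedged place to start: make sure VMware's bridge is attached to the host adapter that actually carries DHCP (automatic bridging can pick the wrong NIC, and bridging over a wireless adapter is often unreliable), then watch the DHCP exchange from inside the guest:

        # In the guest: release the lease and re-run the client in the foreground so the
        # DHCPDISCOVER/OFFER exchange is visible (Ctrl+C when done).
        sudo dhclient -r eth0 && sudo dhclient -d eth0
        ip addr show eth0
        # If no DHCPOFFER ever arrives, re-check which host NIC the VM bridges to
        # (Virtual Network Editor on Windows hosts, vmware-netcfg on Linux hosts),
        # or switch to NAT networking as a cross-check.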

    Read the article

  • web services, J2EE, spring, DB integration project ideas- maybe data mining related?

    - by sj88
    Hey guys, I am a graduate CS student (data mining and machine learning) with good exposure to core Java (3 years). I have read up on a bunch of things: design patterns, J2EE, web services (SOAP and REST), Spring and Hibernate, and Java concurrency, including advanced features like tasks and executors. I would now like to do a project combining this stuff (over my free time, of course) to get a better understanding of these things and to build an end-to-end piece of software (learning the best design principles etc., plus SVN and Maven along the way). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus (it fits with my research), but it is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

  • Does JavaFX still have a chance of establishing itself against Flash, Silverlight and the emergence of HTML 5? Or

    Does JavaFX still have a chance of establishing itself against Flash, Silverlight and the emergence of HTML 5? JavaFX was launched three years ago for developing thick-client (desktop) applications. Very quickly, developers used it for multimedia applications and for Java on the web (notably Rich Internet Applications, or RIAs). The platform - which consists of the JavaFX scripting language, a thick-client platform and integration with the Java virtual machine - was intended to meet the needs of a market where competition now rages between, among others, players as important as Adobe's Flash and Microsoft's Silverlight. According to the

    Read the article

  • Ubuntu install and boot failure 11.10

    - by Robert Moody
    I installed Ubuntu 11.10 on my machine alongside Vista, and then upgraded to 12.04. I decided I liked 11.10 better, so I tried to install that again as my only OS, except I increased the size of the swap partition to 2 GB. It boots up fine off the CD, but when I install, it gives me a non-specific error and returns me to the desktop. When attempting to boot off the hard drive, I get a black screen with a blinking underscore that starts in the corner, drops down a couple of spaces, and stays there. I managed to install 9.04, and am currently using that. The computer is a little outdated, but it was fired up for the very first time last week, so the hard drive is in new condition and the CD-ROM drive is fine too. It's running a 3 GHz X2 processor. I ran a memory test, which came back fine, and being new to the Linux environment, I've been scratching my head for the last couple of days. How can I fix this?
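
    Two hedged, low-effort checks before reinstalling anything else: verify the 11.10 CD itself, and capture the installer's actual error rather than the generic dialog. The log location is the usual one for a live session, but may vary:

        # From the CD boot menu, run "Check disc for defects", or verify the ISO that was burned:
        md5sum ubuntu-11.10-desktop-i386.iso      # compare against the published checksum
        # After a failed install attempt, while still in the live session, read the system log
        # and look for the first error logged by the installer (ubiquity):
        less /var/log/syslog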

    Read the article

  • How to get the correct battery status?

    - by GUI Junkie
    Ever since I installed Ubuntu on this machine, the battery status has said: not present. Looking at this answer, however, I find that /proc/acpi/battery/BAT1/info (sometimes it's /proc/acpi/battery/BAT0/info; use tab completion to help) has the following info:

        present:                 yes
        design capacity:         4400 mAh
        last full capacity:      4400 mAh
        battery technology:      rechargeable
        design voltage:          11100 mV
        design capacity warning: 300 mAh
        design capacity low:     132 mAh
        cycle count:             0
        capacity granularity 1:  32 mAh
        capacity granularity 2:  32 mAh
        model number:            BAT1
        serial number:           11
        battery type:            11
        OEM info:                11

    Following that same answer, I've also checked the /proc/acpi/battery/BAT1/state file:

        present:            yes
        capacity state:     ok
        charging state:     charged
        present rate:       unknown
        remaining capacity: unknown
        present voltage:    10000 mV

    The acpi -b command returns:

        Battery 0: Unknown, 0%, rate information unavailable

    Any suggestions on getting the battery info updated?
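
    A hedged cross-check is to ask the newer battery interfaces what they report, since the desktop battery indicator reads upower and /sys/class/power_supply rather than the legacy /proc/acpi files (the exact device path in the last command may differ):

        ls /sys/class/power_supply/               # is there a BAT0/BAT1 device at all?
        cat /sys/class/power_supply/BAT*/uevent   # raw values the kernel exposes
        upower -e                                 # list the power devices upower knows about
        upower -i /org/freedesktop/UPower/devices/battery_BAT1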

    Read the article

  • How to prevent WLAN connection from dropping permanently?

    - by Chris
    I have a desktop with a Fritz USB WLAN N stick and tried Ubuntu 12.04. Installation went fine and WLAN is working; however, the connection keeps dropping. Reconnecting manually fixes it, but after a few minutes it drops again. It's connected to a Vodafone 802 box with WLAN N mode fixed on. It seems to work when I switch off N mode, but I need to test more. Can someone confirm this issue, or is there another solution? I have another machine running 12.04 (an HP 625 laptop) where the connection is stable.
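
    Until the N-mode suspicion is confirmed, a couple of hedged things to gather and to try; the commands are generic, the interface name wlan0 is an assumption, and the exact driver for the Fritz stick isn't known from the question:

        # Identify the USB ID, then look at the kernel log right after a drop for disconnect reasons.
        lsusb
        dmesg | tail -n 50
        # Power saving on the Wi-Fi interface is a common cause of periodic drops; try turning it off.
        sudo iwconfig wlan0 power off
        iwconfig wlan0        # should now report "Power Management:off"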

    Read the article

  • Turning off XON/XOFF when SSHing via PuTTY

    - by Oddthinking
    I have a fresh install of Ubuntu 9.10 on a rented dedicated server. When I SSH to it using PuTTY (on a Windows machine), I find it responds to Ctrl+S and Ctrl+Q as XON/XOFF transmission control (i.e. the terminal freezes every time I type Ctrl+S until I type Ctrl+Q). This hasn't been a problem on other remote servers, and I realise I don't really have much idea about how this is determined. Is this something that is negotiated at the start of the terminal session, something that is set by the choice of terminal emulation (TERM=xterm, if that helps) or, as I suspect, some setting on the server that I am not aware of? How do I tell Ubuntu that it is 2011, and no one has terminals that rely on XON/XOFF any more?
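
    Flow control is a property of the server-side tty and is set with stty, so it can be switched off per session or permanently in the shell startup file. A minimal sketch:

        # Inspect the current settings: "ixon" means flow control is on, "-ixon" means it is off.
        stty -a
        # Turn it off for the current session; Ctrl+S / Ctrl+Q stop freezing the terminal.
        stty -ixon
        # Make it permanent for interactive shells on the server.
        echo 'stty -ixon' >> ~/.bashrc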

    Read the article

  • Solved: Chrome v18, self signed certs and &ldquo;signed using a weak signature algorithm&rdquo;

    - by David Christiansen
    So Chrome has just updated itself automatically and you are now running v18 – great. Or is it… If, like me, you run sites using a self-signed SSL certificate (i.e. when running a site on a developer machine), you may come across the following lovely message. Fear not: this is likely the result of following the instructions found on the Apache/OpenSSL site, which produce a self-signed cert using the MD5 signature hashing algorithm. Using OpenSSL, the simple fix is to generate a new certificate specifying the SHA512 signature hashing algorithm, like so: openssl req -new -x509 -sha512 -nodes -out server.crt -keyout server.key Simples! You should now be able to confirm that the signature algorithm used is sha512 by looking at the Details tab of the certificate. Note: if you change your certificate, be sure to reapply any private key permissions you require – such as allowing access to the application pool user.
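
    A hedged way to confirm the result from the command line rather than the browser UI (file names follow the command above):

        # Print the signature algorithm of the newly generated certificate; it should now report
        # sha512WithRSAEncryption rather than md5WithRSAEncryption.
        openssl x509 -in server.crt -noout -text | grep "Signature Algorithm"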

    Read the article
