Search Results

Search found 48985 results on 1960 pages for 'system reserved partition'.


  • How do you upgrade/remove a side-by-side installation?

    - by d3vid
    I've hit some snags in the last two upgrades (which I've been able to resolve with time, patience and AskUbuntu), so come 12.04 I'm considering a side-by-side installation, perhaps even installing a pre-release before that (because virtual-machine testing can't reveal hardware-related issues). So, let's say I installed a side-by-side version. As far as I can tell, this splits my existing partition and installs a brand-new Ubuntu on partition 2. If all goes well, there are no hardware issues, and my favorite apps seem to be working, how do I switch back to a single ("one-sided") installation? If I can't, how do I do a side-by-side installation the next time? (And am I crazy to consider using a pre-release version for a side-by-side installation?)

    Read the article

  • How to create a watcher using FSEvents in Mac OS X 10.6

    - by mathan
    I'm trying to get file event notifications using the fsevents.h file. I'm working with Mac OS X 10.6 and Xcode 3.1.4, in which I found fsevents.h in the following four locations:
    - /System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/Headers/FSEvents.h
    - /Xcode3.1.4/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/Headers
    - /Developer/SDKs/MacOSX10.5.sdk/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/Headers
    - /Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/Headers
    I have the following issues in accessing fsevents.h:
    1) Out of the above four locations, which one should be included? The header is not found unless I use the following include syntax: #include <../../../../Developer/SDKs/MacOSX10.6.sdk/System/Library/Frameworks/CoreServices.framework/Versions/A/Frameworks/CarbonCore.framework/Versions/A/Headers/fsevents.h>
    2) Where can I find the function definitions whose prototypes are declared in fsevents.h with the "extern" keyword?

    Read the article

  • How to improve my LDAP schema?

    - by asmaier
    Hello, I have an OpenLDAP database and it holds some project objects that look like this:

        dn: cn=Proj1,ou=Project,ou=ua,dc=org
        cn: Proj1
        objectClass: top
        objectClass: posixGroup
        member: 001ag
        member: 002ag
        System: ABEL
        System: PCx
        Budget: ABEL:1000000:0.3
        Budget: PCx:300000:0.3

    One can see that the Budget attribute is a ":"-separated string, where the first part holds the name of the system the budget is for, the second part holds some budget (which may change every month), and the last entry is a conversion factor for the budget of that system. Seeing this, I thought this is bad database design, since attribute values should always be atomic. But how can I improve that in LDAP, so that I can do a direct ldapsearch or a direct ldapmodify of the budget of system "ABEL" instead of writing a script that has to parse and split the ":"-separated string?
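    One common way to make those values atomic, sketched here purely as an illustration and not as a tested schema, is to move each budget into its own child entry beneath the project. The objectClass and attribute names below (budgetEntry, systemName, budgetAmount, conversionFactor) are hypothetical and would have to be defined in your own custom schema:

        # 'budgetEntry', 'systemName', 'budgetAmount' and 'conversionFactor' are
        # hypothetical names for a custom objectClass and attributes you would define yourself.
        dn: cn=ABEL,cn=Proj1,ou=Project,ou=ua,dc=org
        cn: ABEL
        objectClass: budgetEntry
        systemName: ABEL
        budgetAmount: 1000000
        conversionFactor: 0.3

        dn: cn=PCx,cn=Proj1,ou=Project,ou=ua,dc=org
        cn: PCx
        objectClass: budgetEntry
        systemName: PCx
        budgetAmount: 300000
        conversionFactor: 0.3

    With a layout like this, an ldapsearch under cn=Proj1 with a filter such as (&(objectClass=budgetEntry)(systemName=ABEL)) returns the ABEL budget directly, and an ldapmodify can replace budgetAmount on that single entry, with no string parsing at all.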

    Read the article

  • Users being forced to re-login randomly, before session and auth ticket timeout values are reached

    - by Don
    I'm getting reports and complaints from my users that they will be working in a screen and then get kicked back to the login screen on their very next request. It doesn't happen all the time, but randomly. After looking at the web server, the error that shows up in the application event log is:

        Event code: 4005
        Event message: Forms authentication failed for the request.
        Reason: The ticket supplied has expired.

    Everything that I read starts out with people asking about web gardens or load balancing. We are not using either of those. We're a single Windows 2003 (32-bit OS, 64-bit hardware) server with IIS6, and this is the only website on this server. This behavior does not generate any application exceptions or visible issues for the user. They just get booted back to the login screen and are forced to log in again. As you can imagine, this is extremely annoying and counter-productive for our users. Here's what I have set in my web.config for the application in the root:

        <authentication mode="Forms">
          <forms name=".TcaNet"
                 protection="All"
                 timeout="40"
                 loginUrl="~/Login.aspx"
                 defaultUrl="~/MyHome.aspx"
                 path="/"
                 slidingExpiration="true"
                 requireSSL="false" />
        </authentication>

    I have also read that you can have issues if you have some <location> entries set up that no longer exist or are bogus. My path attributes are all valid directories, so that shouldn't be the problem:

        <location path="js">
          <system.web><authorization><allow users="*" /></authorization></system.web>
        </location>
        <location path="images">
          <system.web><authorization><allow users="*" /></authorization></system.web>
        </location>
        <location path="anon">
          <system.web><authorization><allow users="*" /></authorization></system.web>
        </location>
        <location path="App_Themes">
          <system.web><authorization><allow users="*" /></authorization></system.web>
        </location>
        <location path="NonSSL">
          <system.web><authorization><allow users="*" /></authorization></system.web>
        </location>

    The only thing I'm not clear on is whether my timeout value in the forms element (the auth ticket) has to be the same as my session timeout value (defined in the app's configuration in IIS). I've read some things that say you should have the authentication timeout shorter (40) than the session timeout (45) to avoid possible complications. Either way, we have users that get kicked to the login screen a minute or two after their last action, so the session definitely should not be expiring.

    Update 2/23/09: I've since set the session timeout and authentication ticket timeout values to both be 45 and the problem still seems to be happening. The only other web.config in the application is in one virtual directory that hosts Community Server. That web.config's authentication settings are as follows:

        <authentication mode="Forms">
          <forms name=".TcaNet"
                 protection="All"
                 timeout="40"
                 loginUrl="~/Login.aspx"
                 defaultUrl="~/MyHome.aspx"
                 path="/"
                 slidingExpiration="true"
                 requireSSL="true" />
        </authentication>

    And while I don't believe it applies unless you're in a web garden, I have the machineKey values set to be the same in both web.config files (keys removed for convenience):

        <machineKey validationKey="<MYVALIDATIONKEYHERE>" decryptionKey="<MYDECRYPTIONKEYHERE>" validation="SHA1" />
        <machineKey validationKey="<MYVALIDATIONKEYHERE>" decryptionKey="<MYDECRYPTIONKEYHERE>" validation="SHA1" />

    Any help with this would be greatly appreciated. This seems to be one of those problems that yields a ton of Google results, none of which seem to fit my situation so far.

    Read the article

  • Bless doesn't fix white boot screen boot delay for single-boot Xubuntu 14.04 on Macbook 4,1

    - by elephant
    I still have a 30-second delay on the white boot-up screen before Xubuntu loads, after trying various combinations of bless --device as recommended here: https://help.ubuntu.com/community/MactelSupportTeam/AppleIntelInstallation#Avoid_long_EFI_wait_before_GRUB I wonder if anyone has experienced this before, or can point me to some good steps for troubleshooting this issue. I have cycled my MacBook dozens of times, and it would be great to be able to boot more quickly. I am single-booting Xubuntu 14.04 (no Mac OS X partitions or any other OS; just a GRUB partition at sda1, a main partition at sda2, and a swap at the end of the drive). Suggestions are very much appreciated.

    Read the article

  • SharePoint Variations Error

    - by marcocampos
    I'm getting some crazy errors when trying to create variations in SharePoint. Has anybody seen this error?

        PublishingPage::AttemptPairUpWithPage() Ends. this: http://wseasp05/PT/Paginas/Destaque1.aspx, destPageUrl: /ES/Paginas/Destaque1.aspx
        Begin DeploymentWrapper::SynchronizePeerPages(), sourcePage = Paginas/Destaque1.aspx
        DeploymentWrapper::SynchronizePeerPages(), synchronizeDestUrl = /ES/Paginas/Destaque1.aspx
        Access to the path 'C:\Windows\TEMP\11c7c12e-030d-4860-a942-f5ab71f0930d\ExportSettings.xml' is denied.
        at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
        at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy)
        at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy)
        at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options)
        at System.IO.FileInfo.Create()
        at Microsoft.SharePoint.Deployment.Ex... ...portDataFileManager.Initialize()
        at Microsoft.SharePoint.Deployment.SPExport.InitializeExport()
        at Microsoft.SharePoint.Deployment.SPExport.Run()
        Export Completed.
        DeploymentWrapper.SynchronizePeerPages() catches UnauthorizedAccessException.
        Spawn failed for /ES/Paginas/Destaque1.aspx
        End of DeploymentWrapper.SynchronizePeerPages()

    Thanks in advance.

    Read the article

  • Invalid character in a Base-64 string

    - by swetha
    I am getting this error when validating the user with the SQL membership provider:

        this.provider.ValidateUser(userName, password);

    The password I used is "freetrial". I tried trimming the spaces, but still no luck! The call stack is as follows:

        [FormatException: Invalid character in a Base-64 string.]
        System.Convert.FromBase64String(String s) +0
        System.Web.Security.MembershipProvider.EncodePassword(String pass, Int32 passwordFormat, String salt) +54
        System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved, String& salt, Int32& passwordFormat) +169
        System.Web.Security.SqlMembershipProvider.CheckPassword(String username, String password, Boolean updateLastLoginActivityDate, Boolean failIfNotApproved) +42
        System.Web.Security.SqlMembershipProvider.ValidateUser(String username, String password) +78

    Read the article

  • How to be sure that my MVC project is running on the correct version after upgrading to VS2010?

    - by Stephane
    I just installed Visual Studio 2010 and upgraded my MVC project (which was running on MVC RC2 in Visual Studio 2008). Visual Studio 2010 updated every project file to target framework 4.0, but the System.Web.Mvc.dll reference is pointing to C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET MVC 2\Assemblies\System.Web.Mvc.dll. In the VS2010 object browser, every DLL shows up in multiple versions as expected (3.5.0.0 and 4.0.0.0) except for the System.Web.Mvc DLL, which doesn't show any version and points to the path I mentioned above. Shouldn't this assembly point to the Framework folder, like System.Web does? C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.0\System.Web.dll

    Read the article

  • Setting the paper size

    - by rajaneesh
    Please help me set the paper size in C# code. I am using the PrintDocument API. My code is:

        ppvw = new PrintPreviewDialog();
        ppvw.Document = printDoc;
        ppvw.PrintPreviewControl.StartPage = 0;
        ppvw.PrintPreviewControl.Zoom = 1.0;
        ppvw.PrintPreviewControl.Columns = 10;
        // Showing the Print Preview Page
        printDoc.BeginPrint += new System.Drawing.Printing.PrintEventHandler(PrintDoc_BeginPrint);
        printDoc.PrintPage += new System.Drawing.Printing.PrintPageEventHandler(PrintDoc_PrintPage);
        if (ppvw.ShowDialog() != DialogResult.OK)
        {
            printDoc.BeginPrint -= new System.Drawing.Printing.PrintEventHandler(PrintDoc_BeginPrint);
            printDoc.PrintPage -= new System.Drawing.Printing.PrintPageEventHandler(PrintDoc_PrintPage);
        }
        printDoc.PrinterSettings.DefaultPageSettings.PaperSize = new System.Drawing.Printing.PaperSize("a2", 5.0, 5.0);
        printDoc.Print();

    Read the article

  • Security exception with ASP.NET AJAX toolkit

    - by Rod
    I've got an ASP.NET WebForms app that I've written, which uses the ASP.NET AJAX Toolkit. I put the MultiView control onto the web form, and it worked fine when I had it under Vista. Well, I had to replace my machine (the HD failed) and I went to Windows 7 Ultimate. I copied the ASP.NET app from the old system (before it finally failed for good) and put it onto the Windows 7 machine. I can bring up the app fine and go to all pages except the one with these controls on it. When I open that page, I get the following error:

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Request for the permission of type 'System.Web.AspNetHostingPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.

    What's going on? How do I fix it?

    Read the article

  • ntfsresize volume and size information

    - by antonio
    I am going to resize my sda2 NTFS partition. When gathering info with ntfsresize, I get:

        ntfsresize --info /dev/sda2
        ntfsresize v2013.1.13 (libntfs-3g)
        Device name        : /dev/sda2
        NTFS volume version: 3.1
        Cluster size       : 4096 bytes
        Current volume size: 21999993344 bytes (22000 MB)
        Current device size: 23622320128 bytes (23623 MB)
        Checking filesystem consistency ...
        Accounting clusters ...
        Space in use       : 10673 MB (48.5%)
        Collecting resizing constraints ...
        You might resize at 10672590848 bytes or 10673 MB (freeing 11327 MB).
        Please make a test run using both the -n and -s options before real resizing!

    Can you tell me what the difference is between the volume size and the device size? As for the device size, 23622320128 bytes / 1000^2 = 23622.3 MB. Why is 23623 MB reported instead of 23622? Note that parted confirms this value:

        parted /dev/sda2 unit MB p
        Model: Unknown (unknown)
        Disk /dev/sda2: 23622MB
        Sector size (logical/physical): 512B/512B
        Partition Table: loop
        Disk Flags:
        Number  Start   End      Size     File system  Flags
         1      0.00MB  23622MB  23622MB  ntfs

    Read the article

  • Change from tri-boot to dual-boot

    - by Andrew Robinson
    I have been tri-booting Windows 7, Windows 8 Release Candidate and Ubuntu 12.04 LTS for a few months now. I have decided that, since I have no touch screen, I will not purchase Win 8. I now want to get rid of the Win 8 RC, then add that partition space to my Ubuntu partition, but have no idea how to accomplish this. Do I need to uninstall Win 8 RC from within Windows first? The grub loader sends me to the Win 8 loader, where I have Win 7 as the default. Does that complicate things? Any assistance anyone can give would be greatly appreciated.

    Read the article

  • A rather long question about installation/uninstall

    - by user2364312
    OK, so here's the deal: last week I decided I wanted to install Ubuntu again because I really missed it. (The last time I had Ubuntu was 7.something.) I downloaded 12.04 and installed it via a bootable USB device. Knowing how dual boots work, I cleared up some space on my hard disk beforehand using the Windows 7 built-in disk manager. During Ubuntu's installation I thought that choosing "Install Ubuntu alongside Windows 7" would automatically use the space I had cleared, but apparently it did not, since that partition is still 100% free space. On which partition is Ubuntu installed when using that method? And how can I uninstall it so I can re-install it onto the space I cleared up for it? Thank you for your time reading and helping!

    Read the article

  • How to install ubuntu-12.10-desktop alongside Windows 8 64-bit

    - by Priyesh
    I have an HP Pavilion 15 n204tx, which came with Ubuntu pre-installed. I formatted it, installed Windows 8 64-bit, and created 3 + 1 (System Reserved) partitions: 50 GB for Windows 8, 50 GB for Ubuntu, the remainder for my files, and the other one is the System Reserved partition. But now I need ubuntu-12.10-desktop as well, alongside Windows 8. Is there a way to install ubuntu-12.10-desktop on the second 50 GB partition without affecting my files and Windows 8? Is the installation method the same as in other similar questions here? Please note that I don't know anything about the commands posted in other answers here; I have just started to learn UNIX. So kindly tell me where and how to use the commands, if any. Thank you.

    Read the article

  • Triple boot problem with Windows 7, Ubuntu 12.04 & Fedora 17

    - by daniel
    I just installed Fedora 17 after Ubuntu 12.04, but now I can't boot into either of the two Linux installations. I also have Windows 7 installed, and I can boot into it; I edited the boot menu with EasyBCD. During the installation of Fedora 17 I used the standard partition creation and used separate "/boot", "/", "swap" and "/home" partitions for Fedora 17. Is this fixable, or do I have to reinstall both OSes? Also, is it possible to share one swap partition? I am currently on an Ubuntu live CD. Thanks for any guidance.

    Read the article

  • Computer Networks UNISA - Chap 15 – Network Management

    - by MarkPearl
    After reading this section you should be able to:
    - Understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network's health
    - Manage a network's performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques
    - Identify the reasons for and elements of an asset management system
    - Plan and follow regular hardware and software maintenance routines

    Fundamentals of Network Management
    Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All sub-topics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss.

    Documentation
    The way documentation is stored may vary, but to adequately manage a network one should at least record the following:
    - Physical topology (types of LAN and WAN topologies: ring, star, hybrid)
    - Access method (does it use Ethernet 802.3, token ring, etc.)
    - Protocols
    - Devices (switches, routers, etc.)
    - Operating systems
    - Applications
    - Configurations (which version of operating system, and config files for server/client software)

    Baseline Measurements
    A baseline is a report of the network's current state of operation. Baseline measurements might include the utilization rate of your network backbone, the number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network.

    Policies, Procedures, and Regulations
    Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures and regulations make for sound network management:
    - Media installations and management (includes designing the physical layout of cable, etc.)
    - Network addressing policies (includes choosing and applying an addressing scheme)
    - Resource sharing and naming conventions (includes rules for logon IDs)
    - Security-related policies
    - Troubleshooting procedures
    - Backup and disaster recovery procedures
    In addition to internal policies, a network manager must consider external regulatory rules.

    Fault and Performance Management
    After documenting every aspect of your network and following policies and best practices, you are ready to assess your network's status on an ongoing basis. This process includes both performance management and fault management.

    Network Management Software
    To accomplish both fault and performance management, organizations often use enterprise-wide network management software. Various software packages do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base). Agents communicate information about managed devices via any of several Application-layer protocols. On modern networks most agents use SNMP, which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of their flexibility, sophisticated network management applications are a challenge to configure and fine-tune. One needs to be careful to collect only relevant information and not cause performance issues (for example, polling a device every 5 seconds can be a problem with thousands of devices). A minimal polling sketch follows at the end of this summary. MRTG (Multi Router Traffic Grapher) is a simple command-line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux.

    System and Event Logs
    Virtually every condition recognized by an operating system can be recorded, typically using event logs. In Windows there is a GUI event log viewer; similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning, so it is important to filter your logs appropriately to reduce the noise.

    Traffic Shaping
    When a network must handle high volumes of traffic, users benefit from a performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist, including prioritizing traffic according to any of the following characteristics:
    - Protocol
    - IP address
    - User group
    - DiffServ
    - VLAN tag in a Data Link layer frame
    - Service or application

    Caching
    In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just a convenience: it prevents a significant volume of WAN traffic, thus improving performance and saving money.

    Asset Management
    Another key component in managing networks is identifying and tracking hardware. This is called asset management. The first step in asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network, chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software.

    Change Management
    Networks are always in a state of flux, with changes in various areas including:
    - Software changes and patches
    - Client upgrades
    - Shared application upgrades
    - NOS upgrades
    - Hardware and physical plant changes
    - Cabling upgrades
    - Backbone upgrades
    For a detailed explanation of each of these, read the textbook (pages 750–761).
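    To make the polling idea concrete, below is a minimal sketch of a single SNMP GET poll in Java using the open-source SNMP4J library. This is an illustration only: it assumes SNMP4J is on the classpath, and the agent address (192.0.2.10) and community string ("public") are placeholders to replace with your own.

        import org.snmp4j.CommunityTarget;
        import org.snmp4j.PDU;
        import org.snmp4j.Snmp;
        import org.snmp4j.event.ResponseEvent;
        import org.snmp4j.mp.SnmpConstants;
        import org.snmp4j.smi.GenericAddress;
        import org.snmp4j.smi.OID;
        import org.snmp4j.smi.OctetString;
        import org.snmp4j.smi.VariableBinding;
        import org.snmp4j.transport.DefaultUdpTransportMapping;

        public class SnmpPollSketch {
            public static void main(String[] args) throws Exception {
                // Open a UDP transport and an SNMP session (agents typically listen on UDP 161).
                DefaultUdpTransportMapping transport = new DefaultUdpTransportMapping();
                Snmp snmp = new Snmp(transport);
                transport.listen();

                // Describe the managed device to poll (placeholder address and community string).
                CommunityTarget target = new CommunityTarget();
                target.setAddress(GenericAddress.parse("udp:192.0.2.10/161"));
                target.setCommunity(new OctetString("public"));
                target.setVersion(SnmpConstants.version2c);
                target.setRetries(2);
                target.setTimeout(1500);

                // Ask for a single MIB object: sysName.0 (1.3.6.1.2.1.1.5.0).
                PDU pdu = new PDU();
                pdu.setType(PDU.GET);
                pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.5.0")));

                ResponseEvent response = snmp.send(pdu, target);
                if (response.getResponse() != null) {
                    System.out.println("sysName = " + response.getResponse().get(0).getVariable());
                } else {
                    System.out.println("Request timed out.");
                }
                snmp.close();
            }
        }

    A real manager would run requests like this on a schedule against many agents and record the results over time, which is essentially what MRTG and the enterprise packages described above do.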

    Read the article

  • Big Data – Buzz Words: What is HDFS – Day 8 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what MapReduce is. In this article we will take a quick look at another of the four most important buzz words that go around Big Data: HDFS.

    What is HDFS?
    HDFS stands for Hadoop Distributed File System and it is the primary storage system used by Hadoop. It provides high-performance access to data across Hadoop clusters. It is usually deployed on low-cost commodity hardware. In commodity hardware deployments, server failures are very common, and for that reason HDFS is built to have high fault tolerance. The data transfer rate between compute nodes in HDFS is very high, which leads to a reduced risk of failure. HDFS splits big data into smaller pieces and distributes them across different nodes. It also copies each smaller piece multiple times onto different nodes. Hence, when any node holding the data crashes, the system is automatically able to use the data from a different node and continue the process. This is the key feature of the HDFS system.

    Architecture of HDFS
    The architecture of HDFS is a master/slave architecture. An HDFS cluster always consists of a single NameNode. This single NameNode is a master server; it manages the file system and regulates access to the various files. In addition to the NameNode there are multiple DataNodes; there is always one DataNode for each data server. In HDFS a big file is split into one or more blocks, and those blocks are stored in a set of DataNodes. The primary task of the NameNode is to open, close or rename files and directories and to regulate access to the file system, whereas the primary task of the DataNode is to read from and write to the file systems. The DataNode is also responsible for the creation, deletion or replication of data based on instructions from the NameNode. In reality, the NameNode and the DataNode are software designed to run on commodity machines, built in the Java language.

    Visual Representation of HDFS Architecture
    Let us understand how HDFS works with the help of the diagram. The Client App (HDFS client) connects to the NameNode as well as to the DataNodes. Client App access to the DataNodes is regulated by the NameNode: the NameNode allows the Client App to connect to a DataNode by permitting the connection to that DataNode directly. A big data file is divided into multiple data blocks (let us assume that those data chunks are A, B, C and D). The Client App will later write data blocks directly to the DataNodes. The Client App does not have to write directly to all the nodes; it just has to write to any one of them, and the NameNode will decide onto which other DataNodes it will have to replicate the data. In our example the Client App directly writes to DataNode 1 and DataNode 3. However, data chunks are automatically replicated to other nodes. All the information, such as which data block is placed on which DataNode, is written back to the NameNode.

    High Availability During Disaster
    Since multiple DataNodes hold the same data blocks, if any DataNode faces a disaster the entire process will continue, as another DataNode will assume the role of serving the specific data block that was on the failed node. This system provides very high tolerance to disaster and provides high availability. If you notice, there is only a single NameNode in our architecture. If that node fails, our entire Hadoop application will stop performing, as it is the single node where we store all the metadata. As this node is very critical, it is usually replicated on another cluster as well as on another data rack. Though that replicated node is not operational in the architecture, it has all the necessary data to perform the task of the NameNode in case the NameNode fails. The entire Hadoop architecture is built to function smoothly even when there are node failures or hardware malfunctions. It is built on the simple idea that the data is so big that it is impossible to come up with a single piece of hardware which can manage it properly. We need lots of commodity (cheap) hardware to manage our big data, and hardware failure is part of running commodity servers. To reduce the impact of hardware failure, the Hadoop architecture is built to overcome the limitation of non-functioning hardware.

    Tomorrow
    In tomorrow's blog post we will discuss the importance of the relational database in Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
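    To see what "writing to HDFS through the NameNode" looks like from a client's point of view, here is a minimal, illustrative Java sketch using the standard Hadoop FileSystem API. It assumes the Hadoop client libraries are on the classpath, and the NameNode URI and file path below are placeholders rather than values from the article:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsWriteSketch {
            public static void main(String[] args) throws Exception {
                // Point the client at the (placeholder) NameNode; the client asks the
                // NameNode where blocks should go and then streams data to DataNodes.
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

                FileSystem fs = FileSystem.get(conf);

                // Create a file; HDFS splits it into blocks and replicates each block
                // onto several DataNodes, as described above.
                Path file = new Path("/user/demo/sample.txt");
                try (FSDataOutputStream out = fs.create(file, true)) {   // true = overwrite
                    out.writeBytes("hello hdfs\n");
                }

                System.out.println("Wrote " + fs.getFileStatus(file).getLen()
                        + " bytes, replication factor " + fs.getFileStatus(file).getReplication());
                fs.close();
            }
        }

    The point of the sketch is the division of labor: the application only names a path, while block placement and replication are decided by the NameNode and carried out by the DataNodes.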

    Read the article

  • How to fix 'grub error file not found' when installing 12.04?

    - by Tomasz Grabowski
    I'm trying to install Ubuntu. I don't know if it is important, but I'm trying to install it on an external HDD. In the end I have an external bootable HDD which only displays:

        error: file not found
        grub recovery>

    From the beginning:
    - I downloaded ubuntu-12.04-desktop-i386.iso.
    - I used LiLi USB Creator (LinuxLive) to create a bootable pendrive from that image.
    - I booted from it; it works.
    - I clicked "Try Ubuntu"; it works too.
    - I used GParted to look over the drives (disks). My primary embedded disk is seen as /dev/sda, my attached external disk as /dev/sdb, and my pendrive as /dev/sdc.
    - I created partitions on /dev/sdb: the first partition for the system (over 200 GiB); the second was there already (it's xfs, and I don't want to touch it :P); the third is an extended partition with one logical partition (10 GiB) for swap.
    - I started the installation and chose "something else" in (I believe) the second screen, then selected /dev/sdb as the boot disk. For the first partition of /dev/sdb I set the ext3 file system, checked the "format" checkbox, and set the mount path to "/". The first logical partition was set as the swap partition.

    After the installation finished, I restarted my computer. When I boot from my primary disk it works OK; my previous operating system (Vista) works fine. When I set my BIOS to boot from my external disk, I only get that message:

        error: file not found
        grub recovery>

    I tried to reinstall, but it didn't help... In desperation, I tried to read a bit about that "grub recovery" command line and experimented a bit... I'm not sure if this had any point, or if it gives you some information (notice that I don't know what I'm doing :P). When I typed the command:

        insmod (hd1,1)/boot/grub/linux.mod

    I got the message "unknown filesystem". The same with:

        insmod (hd1,msdos1)/boot/grub/linux.mod

    and the same with:

        insmod ext3

    but I get no message after the command:

        insmod ext2

    Notice that I don't really know what this command does, but then I thought that maybe if I reinstalled Ubuntu with the ext2 filesystem it would work. I did that, but the symptoms are the same. I went back to the live version of Ubuntu; the filesystem and basic directories seem to be present on /dev/sdb1. I'm completely unfamiliar with GRUB. I also don't know which version of GRUB it is; I hope there is only one version on ubuntu-12.04-desktop-i386.iso. Any help? Thanks.

    Read the article

  • Unable to boot either Ubuntu or Windows after kernel panic

    - by Josh Taylor
    Hi, today I have been unable to boot into my Ubuntu (10.10) or Windows (7) partition. The Ubuntu kernel panics on boot with the error:

        init: hash.c:296: Assertion failed in nih_hash_search: hash != NULL

    I can boot into a LiveUSB environment, and from there I can access all my files on my 3 partitions (1 ext4, 2 NTFS). I have also run fsck on the ext4 partition and ntfsfix on the 2 NTFS partitions, with neither finding any errors at all. GRUB is intact, and I have also tried reinstalling it. So at the moment I'm stuck using a LiveUSB, and would like to see if there are any other options besides reinstalling. Thanks.

    Update: I've now run chkdsk using my Windows recovery disk, and it found errors and fixed them, but I am still unable to boot into either Windows or Ubuntu.

    Update #2: I've decided to just re-install Ubuntu and start again, as I didn't really want to spend any more time looking around while I need this computer for work. Thanks for all your help though.

    Read the article

  • Watching a variable for changes without polling.

    - by milkfilk
    I'm using a framework called Processing, which is basically a Java applet. It has the ability to do key events because Applet can. You can also roll your own callbacks of sorts into the parent. I'm not doing that right now, and maybe that's the solution. For now, I'm looking for a more POJO solution. So I wrote some examples to illustrate my question. Please ignore using key events on the command line (console). Certainly this would be a very clean solution, but it's not possible on the command line, and my actual app isn't a command-line app. In fact, a key event would be a good solution for me, but I'm trying to understand events and polling beyond just keyboard-specific problems. Both these examples flip a boolean. When the boolean flips, I want to fire something once. I could wrap the boolean in an Object so that if the Object changes, I could fire an event too. I just don't want to poll with an if() statement unnecessarily.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        /*
         * Example of checking a variable for changes.
         * Uses dumb if() and polls continuously.
         */
        public class NotAvoidingPolling {
            public static void main(String[] args) {
                boolean typedA = false;
                String input = "";
                System.out.println("Type 'a' please.");
                while (true) {
                    InputStreamReader isr = new InputStreamReader(System.in);
                    BufferedReader br = new BufferedReader(isr);
                    try {
                        input = br.readLine();
                    } catch (IOException ioException) {
                        System.out.println("IO Error.");
                        System.exit(1);
                    }
                    // contrived state change logic
                    if (input.equals("a")) {
                        typedA = true;
                    } else {
                        typedA = false;
                    }
                    // problem: this is polling.
                    if (typedA) System.out.println("Typed 'a'.");
                }
            }
        }

    Running this outputs:

        Type 'a' please.
        a
        Typed 'a'.

    On some forums people suggested using an Observer. And although this decouples the event handler from the class being observed, I still have an if() in a forever loop.

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.util.Observable;
        import java.util.Observer;

        /*
         * Example of checking a variable for changes.
         * This uses an observer to decouple the handler feedback
         * out of the main() but still is polling.
         */
        public class ObserverStillPolling {
            boolean typedA = false;

            public static void main(String[] args) {
                ObserverStillPolling o = new ObserverStillPolling();
                final MyEvent myEvent = new MyEvent(o);
                final MyHandler myHandler = new MyHandler();
                myEvent.addObserver(myHandler); // subscribe

                // watch for event forever
                Thread thread = new Thread(myEvent);
                thread.start();

                System.out.println("Type 'a' please.");
                String input = "";
                while (true) {
                    InputStreamReader isr = new InputStreamReader(System.in);
                    BufferedReader br = new BufferedReader(isr);
                    try {
                        input = br.readLine();
                    } catch (IOException ioException) {
                        System.out.println("IO Error.");
                        System.exit(1);
                    }
                    // contrived state change logic
                    // but it's decoupled now because there's no handler here.
                    if (input.equals("a")) {
                        o.typedA = true;
                    }
                }
            }
        }

        class MyEvent extends Observable implements Runnable {
            // boolean typedA;
            ObserverStillPolling o;

            public MyEvent(ObserverStillPolling o) {
                this.o = o;
            }

            public void run() {
                // watch the main forever
                while (true) {
                    // event fire
                    if (this.o.typedA) {
                        setChanged();
                        // in reality, you'd pass something more useful
                        notifyObservers("You just typed 'a'.");
                        // reset
                        this.o.typedA = false;
                    }
                }
            }
        }

        class MyHandler implements Observer {
            public void update(Observable obj, Object arg) {
                // handle event
                if (arg instanceof String) {
                    System.out.println("We received:" + (String) arg);
                }
            }
        }

    Running this outputs:

        Type 'a' please.
        a
        We received:You just typed 'a'.

    I'd be OK if the if() were a no-op on the CPU, but it's really comparing on every pass, and I see real CPU load. This is as bad as polling. I can maybe throttle it back with a sleep, or compare the elapsed time since the last update, but this is not event-driven; it's just less polling. So how can I do this smarter? How can I watch a POJO for changes without polling? In C# there seems to be something interesting called properties. I'm not a C# guy, so maybe this isn't as magical as I think:

        private void SendPropertyChanging(string property)
        {
            if (this.PropertyChanging != null)
            {
                this.PropertyChanging(this, new PropertyChangingEventArgs(property));
            }
        }
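    One event-driven pattern available in plain Java (sketched here as an illustration of the idea, not as the definitive answer) is to stop exposing the boolean field directly and instead fire a listener callback from its setter, for example with java.beans.PropertyChangeSupport. The listener then runs only when the value actually changes, with no watching thread and no polling loop at all; the class and field names below are made up for the example:

        import java.beans.PropertyChangeListener;
        import java.beans.PropertyChangeSupport;

        public class WatchedFlag {
            private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
            private boolean typedA;

            public void addListener(PropertyChangeListener listener) {
                pcs.addPropertyChangeListener(listener);
            }

            // The "event" happens inside the setter: no thread has to watch the field.
            public void setTypedA(boolean value) {
                boolean old = this.typedA;
                this.typedA = value;
                // fires only when old != value, so the handler runs once per flip
                pcs.firePropertyChange("typedA", old, value);
            }

            public static void main(String[] args) {
                WatchedFlag flag = new WatchedFlag();
                flag.addListener(evt ->
                        System.out.println(evt.getPropertyName() + " changed to " + evt.getNewValue()));
                flag.setTypedA(true);   // handler fires
                flag.setTypedA(true);   // no change, nothing fires
                flag.setTypedA(false);  // handler fires again
            }
        }

    The trade-off is that whoever mutates the flag has to go through the setter; that is essentially what the C# property snippet above is doing as well.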

    Read the article

  • Why did Ubuntu and Windows start hanging mysteriously after I took a vacation?

    - by Ashrey Goel
    I installed Ubuntu alongside my Windows 7, after partitioning my HDD using the EaseUS partition manager. It was working perfectly: no problems, no data loss or corruption. Then I went away for 2 days, and I don't know what happened in that period, but now both Windows 7 and Ubuntu keep hanging continuously. For example, when I paint and change a brush it will hang; I mean it hangs on very simple commands, and I know my computer does not hang on such petty things. I use it for developing music, and the specifications are:

        Model: DELL-XPS
        Processor: Intel i5, 2.53 GHz
        RAM/Memory: 4 GB
        Hard disk size: 500 GB HDD
        Windows 7 partition: 417 GB
        Ubuntu partition: 50 GB

    Please help.

    Read the article

  • Overheating on Dell Studio XPS 1645

    - by pjtatlow
    So I was wondering if anyone else has come upon this problem and/or has come up with a solution. When I use my Ubuntu partition, my computer becomes extremely hot and the fan runs very noisily for a very long time. If I reboot into Windows while this is happening, my computer actually begins to cool down while doing the exact same tasks. Thinking this might just be a bug in Ubuntu, I installed Fedora on another partition, and the same problem occurs. Is this a problem with the kernel? cpufreq tells me that my CPU is running at 933 MHz out of a possible 1.6 GHz on my Intel Core i7 CPU Q70. For anyone who wants more information, I have 8 GB of memory and an ATI Mobility Radeon HD 5730 graphics card. I'm open to any ideas anyone might have. Thanks in advance!

    Read the article

  • Triple boot install with Windows MBR

    - by Andre Doria
    I have 2 hard drives, each 1 TB. The first drive has only Windows 7. The second drive has Kali installed on logical partitions #5 (/boot), #6 (/), #7 (/home), and #8 (swap); its bootloader is installed in /dev/sdb5. It also has Ubuntu installed on logical partitions #9 (/boot), #10 (/), #11 (/home), and #12 (swap). I want to use the Windows bootloader, so I use EasyBCD to configure the boot menu. EasyBCD sees my second drive's partitions as #1, #2, #3, ..., #8. I then add Kali, selecting the second drive's #1 (/boot) partition, and Ubuntu, selecting its #5 (/boot) partition. After this my menu has choices of Windows 7 (default), Kali, and Ubuntu. The problem is that whether I select Kali or Ubuntu, I always boot into Kali! Any idea how to enable booting Ubuntu while also keeping the Windows bootloader in the MBR?

    Read the article

  • Ubuntu 12.10 "fakeRAID" RAID0 installation

    - by João André
    I have two 80 GB HDDs in a RAID 0 motherboard configuration (Intel Z77, fakeRAID), with a 100 GB partition running Windows 7 and a 60 GB partition where I would like to install Ubuntu 12.10. However, even though the installer seems to correctly detect the RAID 0 array, GRUB2 is not installed and the computer boots into Windows normally. The same thing does not happen when installing Fedora 17: its installer (Anaconda) also detects the disk array, but the GRUB2 installation is successful. What exactly are the differences between Ubiquity and Anaconda? And is there a way to correctly install GRUB2 on a fakeRAID system, since there are no alternate Ubuntu CDs?

    Read the article

  • Why is it necessary to install EFI/rEFInd/UEFI/... on an SD card since the MacBook Pro seems to already have it?

    - by user170794
    Dear Ask Ubuntu members, I own a MacBook Pro (late 2009), and when I boot the laptop while holding the Alt key there is an EFI screen, so EFI is installed on... the firmware? I had a few troubles with my hard disk, so I had to change it, but I haven't installed OS X; I have only installed Ubuntu, and still the EFI screen is there, which is surely a good thing. As the new hard disk is making trouble again, I am using Puppy Linux, booting from a CD each time, which is uncomfortable. So I am trying to have Ubuntu installed on an SD card. After having spent many months on the internet grabbing information anywhere I can and trying several things, I applied this method: http://www.weihermueller.de/mac/ I succeeded in making one SD card recognizable by the EFI of my laptop (holding the Alt key at boot), but nothing is installed on it yet, as I fear losing the recognizable-by-EFI part. I haven't succeeded in producing the same result on another SD card. I have a bootable USB key of Ubuntu (yippee) which works like a live CD, made with the help of Universal Linux UDF Creator, found here: http://www.pendrivelinux.com/universal-usb-installer-easy-as-1-2-3/ on which I have put Ubuntu 13.04 64-bit, retrieved from the official repositories. Even though I have to add the "nouveau.noaccel=1" option to the GRUB command line launching Linux, it works (yippee again) properly as a live CD. When installing Ubuntu I come across the "where do I wanna put Ubuntu" window, and I partition another SD card into:
    - the EFI part (40 MB)
    - the Linux part (between 15 GB and 16 GB)
    The installation works fine and finishes with no problem. But at reboot, the SD card where Linux is installed is not recognized by the EFI; the icons are: the CD (Puppy Linux), the USB stick (from Linux UDF Creator), the hard drive (the formerly-working Ubuntu 12), but no fourth icon for the SD card whatsoever. As the title of this thread suggests, I am wondering:
    - why is there a need for EFI to be installed on the SD card, since EFI seems to be on my laptop anyway?
    - why does EFI have to be on a different partition than the Linux one? How do both parts communicate?
    - why isn't the EFI part on the SD card made with the help of the live-USB key recognized?
    - on the EFI partition, there is a folder named "EFI" which contains another folder named "ubuntu" which contains a file named "grubx64.efi"; why is there a thing called GRUB? Is it the Linux GRUB where one can choose either to boot, to boot in safe mode, etc.?
    Thank you for your patience; looking forward to any kind of answer. Julien

    Read the article
