Search Results

Search found 4177 results on 168 pages for 'lost in the sauce'.


  • Exalogic 2.0.1 Tea Break Snippets - Creating and using Distribution Groups

    - by The Old Toxophilist
    By default, running your Exalogic virtualised provides what, to Cloud Users, looks like a single large resource: they can just create vServers and not care about how they are laid down on the underlying infrastructure. All the Cloud Users will know is that they can create vServers. For example, if we have a Quarter Rack (8 nodes) and our Cloud User creates 8 vServers, those 8 vServers may run on 8 distinct nodes or may all run on the same node. Although in many cases we, as Cloud Users, may not be too worried about how the Virtualisation Algorithm decides where to place our vServers, there are cases where it is extremely important that vServers run on distinct physical compute nodes. For example, if we have a WebLogic Cluster we will want the servers within the cluster to run on distinct physical nodes to cover for the situation where one physical node is lost. To achieve this the Exalogic Virtualised implementation provides Distribution Groups, which define an anti-affinity policy that the underlying Virtualisation Algorithm will take into account when placing vServers. It should be noted that Distribution Groups must be created before you create vServers, because a vServer can only be added to a Distribution Group at creation time. Creating a Distribution Group: to create a Distribution Group we first need to select the Account in which we want the Distribution Group to be created. Once we have selected the account we will see the interface update, and Account-specific Actions will be displayed within the Action Panes. From the Action pane (or by right-clicking on the Account) select the "Create Distribution Group" action. This will initiate the create wizard as follows. Distribution Group Details: within the first step of the wizard we can specify the name of the distribution group, which should be unique. In addition we can provide a detailed description of the group. Distribution Group Configuration: the second step of the wizard allows you to specify the number of elements required within this group; the maximum is the number of nodes within your Exalogic. At this point it is always better to specify a group with spare capacity, allowing for future expansion. As vServers are added to the group the available slots decrease. Summary: finally, the last step of the wizard displays a summary of the information entered.
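
    The anti-affinity idea is easy to illustrate in code. The short Java sketch below is purely conceptual (it is not the Exalogic placement algorithm or any Exalogic API): it checks whether a proposed vServer-to-node placement honours a distribution group by rejecting any layout that puts two group members on the same physical node.

        import java.util.*;

        public class DistributionGroupCheck {

            // Returns true only if no two vServers in the group share a physical node.
            static boolean honoursAntiAffinity(Map<String, String> vServerToNode,
                                               Set<String> groupMembers) {
                Set<String> usedNodes = new HashSet<>();
                for (String vServer : groupMembers) {
                    String node = vServerToNode.get(vServer);
                    if (node != null && !usedNodes.add(node)) {
                        return false; // two group members landed on the same node
                    }
                }
                return true;
            }

            public static void main(String[] args) {
                Map<String, String> placement = new HashMap<>();
                placement.put("wls-1", "node-1");
                placement.put("wls-2", "node-2");
                placement.put("wls-3", "node-2"); // violates the policy

                Set<String> cluster = new HashSet<>(Arrays.asList("wls-1", "wls-2", "wls-3"));
                System.out.println(honoursAntiAffinity(placement, cluster)); // prints false
            }
        }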

    Read the article

  • 5 ways to stop code thrashing…

    - by MarkPearl
    A few days ago I was programming on a personal project and hit a roadblock. I was applying the MVVM pattern and for some reason my view model was not updating the view when the state changed. I had applied this pattern many times before and had never had this problem. It just didn’t make sense. So what did I do… I did what anyone would have done in my situation and looked to pass the blame to someone or something else. I tried to blame one of the inherited base classes, but it looked fine; then Visual Studio, but it seemed to be fine; and eventually any random segment of code I came across. My elementary problem had now mushroomed into one that had lost any logical basis and I was in thrashing mode! So what to do when you begin to thrash? 1) Do a general code cleanup – Now there is a difference between cleaning code and changing code. When you thrash you change code and you want to avoid this. What you really want to do is things like rename variables to have better meaning and go over your comments. 2) Do a proof of concept – If cleaning code doesn’t help, then you want to isolate the problem and identify the key concepts. When you isolate code you ideally want it to be in a totally separate project with as little complexity as possible. Make the building blocks and try to replicate the functionality that you are getting in your current application. 3) Phone a friend – I have found speaking to someone else about the problem generally helps me solve any thrashing issues I am having. Usually they don’t even have to say anything to solve the problem; just talking them through the problem helps you get clarity of mind. 4) Let the dust settle – Sometimes time is the best solution. I have had a few problems that no matter who I discussed them with and no matter how much code cleaning I had done I just couldn’t seem to fix. My brain just seemed to be going in circles. A good night’s rest has always helped, and often just the break away from the problem has helped me find a solution. 5) Stack Overflow it – Similar to phone a friend. I am really surprised to see what a melting pot Stack Overflow has been and what a help it has been in solving technology-specific problems. Just be considerate to those using the site and explain clearly exactly what problem you are having and the technologies you are using, or else you will probably not get any useful help…

    Read the article

  • Java JRE 1.6.0_37 Certified with Oracle E-Business Suite

    - by Steven Chan (Oracle Development)
    My apologies: this certification announcement got lost in the OpenWorld maelstrom. Better late than never. The section below entitled "All JRE 1.6 releases are certified with EBS upon release" should obviate the need for these announcements, but I know that people have gotten used to seeing these certifications referenced explicitly. The latest Java Runtime Environment 1.6.0_37 (a.k.a. JRE 6u37-b06) is now certified with Oracle E-Business Suite Release 11i and 12 desktop clients. What's new in Java 1.6.0_37? See the 1.6.0_37 Update Release Notes for details about what has changed in this release. This release is available for download from the usual Sun channels and through the 'Java Automatic Update' mechanism. 32-bit and 64-bit versions certified: this certification includes both the 32-bit and 64-bit JRE versions. 32-bit JREs are certified on: Windows XP Service Pack 3 (SP3); Windows Vista Service Pack 1 (SP1) and Service Pack 2 (SP2); Windows 7 and Windows 7 Service Pack 1 (SP1). 64-bit JREs are certified only on 64-bit versions of Windows 7 and Windows 7 Service Pack 1 (SP1). Worried about the 'mismanaged session cookie' issue? No need to worry -- it's fixed. To recap: JRE releases 1.6.0_18 through 1.6.0_22 had issues with mismanaging session cookies that affected some users in some circumstances. The fix for those issues was first included in JRE 1.6.0_23, and these fixes carry forward into all future JRE releases. In other words, if you wish to avoid the mismanaged session cookie issue, you should apply any release after JRE 1.6.0_22. All JRE 1.6 releases are certified with EBS upon release: our standard policy is that all E-Business Suite customers can apply all JRE updates to end-user desktops from JRE 1.6.0_03 and later updates on the 1.6 codeline. We test all new JRE 1.6 releases in parallel with the JRE development process, so all new JRE 1.6 releases are considered certified with the E-Business Suite on the same day that they're released by our Java team. You do not need to wait for a certification announcement before applying new JRE 1.6 releases to your EBS users' desktops. Important: for important guidance about the impact of the JRE Auto Update feature on JRE 1.6 desktops, see: Planning Bulletin for JRE 7: What EBS Customers Can Do Today. References: Recommended Browsers for Oracle Applications 11i (Metalink Note 285218.1); Upgrading Sun JRE (Native Plug-in) with Oracle Applications 11i for Windows Clients (Metalink Note 290807.1); Recommended Browsers for Oracle Applications 12 (MetaLink Note 389422.1); Upgrading JRE Plugin with Oracle Applications R12 (MetaLink Note 393931.1). Related Articles: Mismanaged Session Cookie Issue Fixed for EBS in JRE 1.6.0_23; Roundup: Oracle JInitiator 1.3 Desupported for EBS Customers in July 2009

    Read the article

  • Nvidia GT218 repository drivers don't work

    - by user1042840
    I upgraded all packages with sudo apt-get upgrade command on my Ubuntu 10.04 box and I have Ubuntu 12.04 3.2.0-29-generic-pae now. I have two monitors and the following GPU: 01:00.0 VGA compatible controller: NVIDIA Corporation GT218 [NVS 300] (rev a2) After upgrading to 12.04, I somehow lost my previous setup with one common workspace stretched across two monitors. When Ubuntu starts only one monitor is on. I can see the message on the active monitor: Not optimum mode. Recommended mode: 1680x1050 60Hz I used Nvidia proprietary drivers on 10.04 but now jockey-text --list shows: xorg:nvidia_current - NVIDIA accelerated graphics driver (Proprietary, Disabled, Not in use) xorg:nvidia_current_updates - NVIDIA accelerated graphics driver (post-release updates) (Proprietary, Enabled, Not in use) When I run sudo nvidia-settings it says You do not appear to be using the NVIDIA X driver. Please edit your X configuration file (just run `nvidia-xconfig` as root), and restart the X server.' I typed nvidia-xconfig and rebooted, but jockey-text --list says the same after the reboot: Not in use. The same with nvidia-current - Enabled but Not in use. I also tried nvidia-173 but I ended up in tty immediately at startup so I removed it. I used to have some problems with Nvidia proprietary drivers on 10.04, I had to put paths to EDID files in /etc/X11/xorg.conf explicitly, but the resolution was as recommended and both monitors were working. If I understand correctly, nouveau drivers are used now by default because the resolution is still quite high, definitely not 800x600, xrandr showed: xrandr: Failed to get size of gamma for output default Screen 0: minimum 320 x 400, current 1600 x 1200, maximum 1600 x 1200 default connected 1600x1200+0+0 0mm x 0mm 1600x1200 66.0* 1280x1024 76.0 1024x768 76.0 800x600 73.0 640x480 73.0 640x400 0.0 320x400 0.0 1680x1050_60.00 (0x4f) 146.2MHz h: width 1680 start 1784 end 1960 total 2240 skew 0 clock 65.3KHz v: height 1050 start 1053 end 1059 total 1089 clock 60.0Hz However, colors seem a bit faded and blurry with nouveau drivers. Mouse cursor is invisible if it's placed inside Firefox window, and only one monitor is working. I like open source and if it's possible I'd prefer to use nouveau drivers but a few things should be fixed. I'm curious why nvidia-current drivers from the repository don't work now. I read it has something to do with the new X11 server in Ubuntu 12.04, is it true? How can I get it back to work?

    Read the article

  • Why does my ID3DXSprite appear to be incorrectly scaled?

    - by Bjoern
    I am using D3D9 for rendering some simple things (a movie) as the backmost layer, then on top of that some text messages, and now wanted to add some buttons to that. Before adding the buttons everything seemed to have worked fine, and I was using an ID3DXSprite for the text as well (ID3DXFont); now I am loading some graphics for the buttons, but they seem to be scaled to something like 1.2 times their original size. In my test window I centered the graphic, but it being too big it just doesn't fit well. For example the client area is 640x360 and the graphic is 440, so I expect 100 pixels on the left and right; the left side is fine (I took a screenshot and "counted" the pixels in Photoshop), but on the right there are only about 20 pixels. My rendering code is very simple (I am omitting error checks, et cetera, for brevity): // initially viewport was set to width/height of client area // clear device m_d3dDevice->Clear( 0, NULL, D3DCLEAR_TARGET|D3DCLEAR_STENCIL|D3DCLEAR_ZBUFFER, D3DCOLOR_ARGB(0,0,0,0), 1.0f, 0 ); // begin scene m_d3dDevice->BeginScene(); // render movie surface (just two triangles to which the movie is rendered) m_d3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE,false); m_d3dDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR ); // bilinear filtering m_d3dDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR ); // bilinear filtering m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE ); m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE ); //Ignored m_d3dDevice->SetTextureStageState( 0, D3DTSS_COLOROP, D3DTOP_SELECTARG1 ); m_d3dDevice->SetTexture( 0, m_movieTexture ); m_d3dDevice->SetStreamSource(0, m_displayPlaneVertexBuffer, 0, sizeof(Vertex)); m_d3dDevice->SetFVF(Vertex::FVF_Flags); m_d3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2); // render sprites m_sprite->Begin(D3DXSPRITE_ALPHABLEND | D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DO_NOT_ADDREF_TEXTURE); // text drop shadow m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(), &m_playerFontRectDropShadow, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorDropShadow ); // text m_font->DrawText( m_playerSprite, m_currentMessage.c_str(), m_currentMessage.size(), &m_playerFontRect, DT_RIGHT|DT_TOP|DT_NOCLIP, m_playerFontColorMessage ); // control object m_sprite->Draw( m_texture, 0, 0, &m_vecPos, 0xFFFFFFFF ); // draws a few objects like this m_sprite->End(); // end scene m_d3dDevice->EndScene(); What did I forget to do here? Except for the control objects (play button, pause button, etc., which are placed on a "panel" which is about 440 pixels wide) everything seems fine: the objects are positioned where I expect them, but just too big. By the way, I loaded the images using D3DXCreateTextureFromFileEx (resizing the window, and reacting to a lost device, etc., works fine too). For experimenting, I added some code to take an identity matrix and scale it down on the x/y axis to 0.75f, which then gave me the expected result for the controls (but also made the text smaller and out of position), but I don't know why I would need to scale anything. My rendering code is so simple, I just wanted to draw my 2D objects 1:1 at the size they came from the file... I am really very inexperienced in D3D, so the answer might be very simple...

    Read the article

  • JSR 355 Final Release, and moves JCP to version 2.9

    - by heathervc
    JSR 355, JCP EC Merge, passed the JCP EC Final Approval Ballot on 13 August 2012, with 14 Yes votes, 1 abstain (1 member did not vote) on the SE/EE EC, and 12 yes votes (2 members were not eligible to vote) on the ME EC.  JSR 355 posted a Final Release this week, moving the JCP program version to JCP 2.9.  The transition to a merged EC will happen after the 2012 EC Elections, as defined in the Appendix B of the JCP (pasted below), and the EC will operate under the new EC Standing Rules. In the previous version (2.8) of this Process Document there were two separate Executive Committees, one for Java ME and one for Java SE and Java EE combined. The single Executive Committee described in this version of the Process Document will be implemented through the following process: The 2012 annual elections will be held as defined in JCP 2.8, but candidates will be informed that if they are elected their term will be for only a single year, since all candidates must stand for re-election in 2013. Immediately after the 2012 election the two ECs will be merged. Oracle and IBM's second seats will be eliminated, resulting in a single EC with 30 members. All subsequent JSR ballots (even for in-progress JSRs) will then be voted on by the merged EC. For the 2013 annual elections three Ratified and two Elected Seats will be eliminated, thereby reducing the EC to 25 members. All 25 seats will be up for re-election in 2013. Members elected in 2013 will be ranked to determine whether their initial term will be one or two years. The 50% of Ratified and 50% of Elected members who receive the most votes will serve an initial two-year term, while all others will serve an initial one year term. All members elected in 2014 and subsequently will serve a two-year term. For clarity, note that the provisions specified in this version of the Process Document regarding a merged EC will apply to subsequent ballots on all existing JSRs, whether or not the Spec Leads of those JSRs chose to adopt this version of the Process Document in its entirety. <end of Appendix> Also of note:  the materials and minutes from the July EC meeting and the June EC Meeting are now available--following the July EC Meeting, Samsung and SK Telecom lost their EC seats. The June EC meeting also had a public portion--the audio from the public portion of the EC meeting are now posted online.  For Spec Leads there is also the recording of the EG Nominations call.

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color") compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows, etc.). The following can be assumed to be in place: 1: A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created. 2: A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort). 3: Drivers for the graphics card that support 10-bit output. If applications that support 10-bit output and color profiles were used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an adobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, Rawtherapee, Darktable, RawStudio, Photivo etc.? Does Ubuntu differ from other operating systems (Linux and others) on this point?
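
    One way to see the quantisation argument above is to count how many distinct output levels survive the gamut correction when it is applied at 10-bit versus 8-bit precision. The Java sketch below is only an illustration: the scale factor of 987/1023 is the example number from the question, not a value taken from any real ICC profile.

        import java.util.HashSet;
        import java.util.Set;

        public class BitDepthDemo {
            public static void main(String[] args) {
                double scale = 987.0 / 1023.0; // illustrative saturation correction from the question

                Set<Integer> tenBitOut = new HashSet<>();
                Set<Integer> eightBitOut = new HashSet<>();

                for (int v = 0; v <= 255; v++) {
                    // 10-bit path: expand the 8-bit value to 10 bits first, then correct
                    tenBitOut.add((int) Math.round(v * (1023.0 / 255.0) * scale));
                    // 8-bit path: apply the correction while still in 8 bits
                    eightBitOut.add((int) Math.round(v * scale));
                }

                System.out.println("Distinct levels via 10-bit path: " + tenBitOut.size()); // 256
                System.out.println("Distinct levels via 8-bit path:  " + eightBitOut.size()); // fewer than 256
            }
        }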

    Read the article

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email, through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies that have tight budgets, are short of technical skills, or don't have the means to provide long-term IT support. Essentially, you can outsource your services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for your internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry without the usual high buy-in costs, or the task of finding and hiring the right specialists, it would seem the way to go, particularly when their salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services". It sounds too good to be true, and so it is. Whereas the costs will have initially been calculated on the annual renewal fees and service fees for ongoing support, there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs and the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse than that is that you can then lose your power to determine your priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous. There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn’t deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But, Sales doesn’t contact them quickly enough, while the lead is still interested." According to Sirius Decisions: Up to 90% of leads never make it to closure Sales works on only 11% of the leads supplied by Marketing Only 18% of the leads Sales accepts convert to opportunities Yet, 45% of prospects typically buy a product from someone within 12 months The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late. Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It’s time to connect the silos of marketing and sales pipelines and analytics. It’s time for integrated Sales and Marketing automation. Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect–self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process. Integrated lead to revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It’s a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue. Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post before, but then could only find the question by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I am just trying to get answers to my problem. Anyway, I have reworded things a bit. This seems to be a number one requirement for lots of people, and two months on from setting up my Ubuntu PC I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy... just a few clicks and you get on with using it all. Using command-line approaches and modifying configuration files is hardly user friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks. So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu, and it even shows this under the XP workgroup name. But instructions on setting the connection up seem to relate to an earlier version and nothing seems to work there either. (I unshared most of those test folders but they still show up here, but that is another question.) When I click on mount (I can only click on ones on the Ubuntu machine; there is one with no name, which I assume to be my attempt to add an XP shared drive using its IP address), I get errors: (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap", mount error(6): No such device or address Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) OK, I tried to find the manual referred to... only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume so? Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?

    Read the article

  • The partition table is corrupt

    - by Tim
    I have a corrupt partition table on a laptop that is running Ubuntu 10.04. Before the partition table was corrupted I had the following partitions: 2 primary partitions: 1st - NTFS, 2nd - Extended; and 4 logical partitions built within the 2nd (extended) one: 1st NTFS (68 GiB), 2nd Linux (19 GiB), 3rd Swap (1.4 GiB), 4th Linux (24 GiB). The physical order of these partitions was the following: ( 4th Linux ) - ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ). The logical order of the partitions was different: ( 1st NTFS ) - ( 2nd Linux ) - ( 3rd Swap ) - ( 4th Linux ). The NTFS partition was big and it resided between the 2 Linux partitions, and neither of those partitions had enough space to install Oracle 11g. Therefore, I decided to either a) move the NTFS partition to the left or b) remove it completely and extend the partition where Linux resides. As a tool I chose GParted. But unfortunately it was not able to move the partition, because it found that in the NTFS partition there are some blocks that are referenced multiple times. It was not able to remove the partition either, because in that case the partitions that follow it, ( 2nd Linux ) - ( 3rd Swap ), would in its opinion also have to be removed, because the organization of an extended partition is a linked list. Since GParted was not able to do such a thing I tried to find another tool. I found the diskdrake tool in the PCLinuxOS distribution of Linux. That tool silently deleted the ( 1st NTFS ) partition and I thought that everything was fine. But diskdrake damaged the partition table in a way that I am able neither to boot from the hard disk nor to see the partitions with GParted, or even with diskdrake itself! Fortunately I have a live CD of Ubuntu 8.10 and I am able to boot and see the hard disk. I have 2 ideas how I can solve the problem: 1) Manually change the disk partition table entries and point them to the correct partitions. 2) Create a partition table with GParted that is as close as possible to the previous one. I find the 2nd approach less time consuming, but some data will be lost because it is not possible to place the borders of the partitions exactly where they were before. Moreover, I am not sure if such an approach would work, for example whether the OS would be able to locate files after repartitioning. I feel that it will, but I am not 100% sure. Are there any ideas how the problem may be solved?

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don’t even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order: Better Review. It allows multiple recipients to review the file and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients. Always up to date. The attachment becomes a fork instead of an always up to date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that now I am missing because you decided to share an attachment instead of a link. Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser or my collection of links in my OneNote or from the recent/pinned links of the office app on my task bar, etc. Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you’d prefer not to have access to it, the location of the document can be protected with specific access control. Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached - instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions. Enable Discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document. Save on storage. So this doesn't apply to me given my opening statement, but if in your company you do have such limitations, attaching files eats up storage on all recipients' accounts and will also get "lost" when those people archive email (and be lost completely at some point if they follow the company retention policy). Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job [1]. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en-masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary? [1] For the record, here's a quick summary of the various standard directories that should be used instead of "Documents": RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here though, because they slow down login/logout in such environments. LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed. ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network. GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble. "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
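
    For illustration, here is a small Java sketch of how a program might resolve those standard locations instead of writing into Documents. It assumes only that the usual Windows environment variables (APPDATA, LOCALAPPDATA, PROGRAMDATA, TEMP) are set; the application and file names are made up for the example.

        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class AppFolders {
            private static Path env(String name) {
                String value = System.getenv(name);
                if (value == null) {
                    throw new IllegalStateException(name + " is not set");
                }
                return Paths.get(value);
            }

            public static void main(String[] args) {
                String app = "ExampleApp"; // hypothetical application name

                // Roaming, per-user settings that should follow the user between machines
                Path roamingSettings = env("APPDATA").resolve(app).resolve("settings.ini");
                // Per-user, per-machine data such as large caches of user data
                Path localData = env("LOCALAPPDATA").resolve(app);
                // Machine-wide data shared by all users
                Path machineData = env("PROGRAMDATA").resolve(app);
                // Disposable temporary files and caches
                Path scratch = env("TEMP").resolve(app);

                System.out.println(roamingSettings);
                System.out.println(localData);
                System.out.println(machineData);
                System.out.println(scratch);
            }
        }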

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web developing and I'd love some clarification. Although I have read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I encounter problems with this issue when trying to use CSS and pixel units for design that fits different screen sizes. To my understanding a pixel is the most basic unit used by a monitor in order to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to increase my screen resolution, 3 things would occur: A. The basic image building block (the pixel) would shrink in size. B. The pixels would move closer together. C. More pixels would now be available. All these combined lead to a sharper (depending on the viewing distance) and more detailed image. Well, so far so good. Here is where I start getting lost: to my knowledge a pixel is not a physical, real object. Monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change his screen's resolution, making a pixel on his screen bigger or smaller, and adding or subtracting the amount of total pixels on screen. Adding to that, I have heard that different monitors have different pixel densities. For example, Apple's Retina monitors. Taking all of the above as my knowledge base, these are my questions: If a pixel has no real-world constant size, what does comparing different pixel densities matter? Each screen company can define its own pixel concept and declare the higher density. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions, but with a different pixel density: am I to assert that the main difference would be the larger-density screen being able to display a higher max resolution? Or am I to assert that given the same resolution on both monitors, the higher-density one would display a sharper, smaller image? If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of the same size and quantity of pixels? I'd love some help (:
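
    The piece that ties these questions together is pixel density (pixels per inch), which depends on both the resolution and the physical size of the panel. Here is a quick Java sketch of the standard PPI calculation, using made-up example screens:

        public class PixelDensity {
            // Pixels per inch from the pixel dimensions and the physical diagonal in inches.
            static double ppi(int widthPx, int heightPx, double diagonalInches) {
                double diagonalPx = Math.sqrt((double) widthPx * widthPx + (double) heightPx * heightPx);
                return diagonalPx / diagonalInches;
            }

            public static void main(String[] args) {
                // Two hypothetical 24-inch monitors at different native resolutions
                System.out.printf("24\" at 1920x1080: %.0f PPI%n", ppi(1920, 1080, 24.0)); // ~92 PPI
                System.out.printf("24\" at 3840x2160: %.0f PPI%n", ppi(3840, 2160, 24.0)); // ~184 PPI
            }
        }

    With the same resolution on a physically smaller panel the PPI figure goes up, which is all that "higher pixel density" means in this context.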

    Read the article

  • Homepage issue on Google [closed]

    - by nico
    We have recently updated our website www.blinds4uk.co.uk with a new homepage containing additional features and more on-page content, but since then we have lost primary keyword positions and the homepage has disappeared completely. The only time it appears is for an exact search 'blinds4uk'. Today I took snippets of 'unique content' from the homepage and put them into Google search, but our homepage was nowhere to be found. When I did the same in Yahoo, the homepage came up. Are we missing something? We ranked in the top 4 for the primary kw terms 'blinds uk' and 'uk blinds', but now we don't show anywhere for these terms. Our homepage has never ranked well for the primary kw 'blinds', yet our internal pages rank very well, with many pages on page 1 of Google UK. We employed an SEO firm for 9 months to help us establish issues with the homepage, but they never could, so we got rid of them. We have been trying to get to the root cause of why the homepage ranks so poorly for a number of years, and only yesterday we established that we had the meta tag directly below the tag and our title & meta description were further down the page; we have corrected this today. Not sure what effect this would have on the way Google reads the homepage, but we are trying everything to get the homepage ranking for those primary kw's. Our current developers and ex-SEO guys are all part of the same company and cannot pin-point anything, other than saying carry on with their SEO team because it will take time, but it just comes across as a milking exercise. Another thing which I have found very strange is the data from our 'traffic audience'. We are a UK-based website yet our traffic stats were showing as UK 36.6%, Denmark 35.8% and India 27.6% - this doesn't make sense to me! Is there anybody out there that could simply point us in the right direction to the problem(s), so we can fix it once and for all? Could there be anything within the code that is causing the homepage not to display within Google for our primary kw terms such as blinds, window blinds etc.? I would appreciate any advice at all that may help us in our quest to sort this homepage issue once and for all

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (as a regex, that would be core\.\d+). I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX) but there was this block of readable text, which says: This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values. Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below. Scan not valid for mh mailboxes Bogus character 0x%x in news state Can't rewrite news state %.80s Error closing backup news state %.80s No state for newsgroup %.80s found Now, a few concerns: Am I under attack? The messages seem to be about my webmail but I don't use my personal webmail that much---only for a vanity email address and an inbox for an outdated comments system. However, lately, I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha but every now and then some get through. My vanity email has a spam filter but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only 150MB so you see why I'm fretting over a 50MB spike. Some final details: my only server-side scripts are in PHP. The directory which accumulated the greatest number of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through CPanel. Lastly, I decided to delete these files, and after some checking nothing seems amiss on my websites or in my mail. They are indeed the ones responsible for the ~50MB spike, as my disk space usage is back to the expected level.
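
    For anyone doing the same kind of investigation, here is a short Java sketch that walks a directory tree, lists files matching core.\d+ and totals their size. The public_html default is just a placeholder path.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.concurrent.atomic.AtomicLong;
        import java.util.stream.Stream;

        public class CoreFileAudit {
            public static void main(String[] args) throws IOException {
                // Placeholder path: point this at the directory you want to audit.
                Path root = Paths.get(args.length > 0 ? args[0] : "public_html");
                AtomicLong total = new AtomicLong();

                try (Stream<Path> files = Files.walk(root)) {
                    files.filter(p -> p.getFileName().toString().matches("core\\.\\d+"))
                         .forEach(p -> {
                             try {
                                 long size = Files.size(p);
                                 total.addAndGet(size);
                                 System.out.printf("%10d bytes  %s%n", size, p);
                             } catch (IOException e) {
                                 System.err.println("Could not read size of " + p);
                             }
                         });
                }
                System.out.printf("Total: %.1f MB%n", total.get() / (1024.0 * 1024.0));
            }
        }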

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation was that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged with a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment provides a management nightmare to all commercial users responsible for complying with all regulations that control the conduct of business: such as tracking, retaining, and recording company documents. SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: It has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Previous Outlook PST files actually blew up without warning when they reached the 2 Gig limit and became corrupted and inaccessible, leading to a thriving industry of 3rd party tools to clear up the mess. Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails into mobile devices, or USB keys for off-line working. The result is that the System Administrator is faced by a complex hybrid system where backups have to be taken from Servers, and PCs scattered around the network, where duplication of emails causes storage issues, and document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell. If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • 'Unable to mount Filesystem' Error

    - by Charles
    I am trying to extract data from a 'bricked' Western Digital MyBook Live 2TB drive. I came across a forum that advised using Ubuntu (booted from a CD) on my MacBook. I managed to download and create a boot CD for Ubuntu (I like this little operating system, by the way). I booted the machine with the CD and plugged in the drive (which I had extracted from its casing, placed into an external USB SATA case and plugged into the laptop). The drive is seen by Ubuntu, but each time I click on the drive it gives me the following error: Unable to mount 2.0 TB Filesystem Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb4, missing codepage or helper program, or other error In some cases useful info is found in syslog -try dmesg | tail or so I am new to this and spent quite some time searching this site to see if I could find a solution to this problem without troubling anyone. I came up with a few that came close, but some of the questioners mentioned that they had lost data... which scared me from going further. I basically need to extract 1 particular folder from the drive. If I can get this volume 'sdb4' to mount, there is a folder called 'My_Work' which I need to back up. The rest I have/had a copy of. When I typed in dmesg | tail I got several lines, but I think the relevant ones are: [ 406.864677] EXT4-fs (sdb4): bad block size 65536 [ 429.098776] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only [ 439.786365] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only [ 445.982692] EXT4-fs (sdb4): bad block size 65536 [ 1565.841690] EXT4-fs (sdb4): bad block size 65536 I read somewhere to try/check 'sudo fdisk -l /dev/sdb4'. It gave me the following result: Disk /dev/sdb44: 1995.8 GB, 1995774623744 bytes 255 heads, 63 sectors/track, 242639 cylinders, total 3897997312 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/sdb4 doesn't contain a valid partition table This is where I got to before I got frustrated and decided to try to get help on this without digging myself deeper into a hole! I understand that the answer may already be out there. If so, could someone please point me in the right direction? And if not, could someone please resolve (if possible) my situation!

    Read the article

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source controlled and where you might have build options for different environments.  I recently encountered this where the same code, built on different configurations, would have the website at a different URL.  If you are working with ASP.NET as I am, you will have to do something a bit crazy but worthwhile.  If someone has a more efficient solution, please share. Here is what I came up with to make sure the client-side script for Google Analytics was placed into the HEAD element.  GA wants to be the last thing in the HEAD element, so if you are adding others in Page_Load then you should do theirs last. The settings object below is an instance of a class that holds information I collect.  You could read from different sources depending on where you stored your unique ID for Google Analytics. *** This has been formatted to fit this screen. *** if (!IsPostBack) { if (!string.IsNullOrEmpty(settings.GoogleAnalyticsID)) { string str = @"//<![CDATA[ var _gaq = _gaq || []; _gaq.push(['_setAccount', '"  + settings.GoogleAnalyticsID + "']); _gaq.push(['_trackPageview']);  (function () {  var ga = document.createElement('script');  ga.type = 'text/javascript';  ga.async = true;  ga.src = ('https:' == document.location.protocol  ? 'https://ssl' :  'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0];  s.parentNode.insertBefore(ga, s);})(); //]]>"; System.Web.UI.HtmlControls.HtmlGenericControl si =  new System.Web.UI.HtmlControls.HtmlGenericControl(); si.TagName = "script"; si.Attributes.Add("type", @"text/javascript"); si.InnerHtml = str; this.Page.Header.Controls.Add(si); } } The code above prevents the script from being emitted if the request is a PostBack, or if the ID could not be read or something accidentally caused the settings to be lost. If you have a larger function to declare, you can use a StringBuilder to assemble the lines. This is the most compact I wished to go while keeping it readable.

    Read the article

  • How do you turn on the customizable gnome-panel features (like gnome-applets) in Precise?

    - by chriv
    I resurrected a broken laptop today. I took out the HDD, put it in a USB 3.0 enclosure, and created a VM that would use it. It was running lucid. I took a screenshot of the desktop before I started "do-release-upgrade", because from experience, I will never have my GUI back the way I want it again. I know how to install gnome-panel to get back the "Gnome Classic" session option. I know how to put my minimize, maximize, and close buttons back in the upper-right hand corner of windows (where they belong). I know how to use gdm instead of lightdm. Unity gets worse in every version (and the other desktop OS is going to be even worse with Metro). Here's what I don't know (in order of importance): 1. How do you make the panels in gnome (gnome-panel, to be precise) customizable again (like they were in older versions of Ubuntu)? 2. How do you install applets in the panels now (right-click is now ignored)? 3. How can you customize all of the window elements (like you could in older versions of Ubuntu)? I can't remember much about maverick, natty, or oneiric (except their names), so I don't know exactly when I lost these capabilities. Edit: (no screenshot), my StackExchange reputation (on other StackExchange sites) doesn't carry over to this site, so I can't post the screenshot. Take a look at the panels in the screen hot. They are nice, compact, and VERY functional (disk mounter applet, frequently used shortcuts, workspaces, show desktop, kill window, and trash icons, etc.) Notice how small the fonts (and how little real estate they waste). You can't notice the compact title bars, fonts, and window icons in this screen shot (since I redacted the rest of the desktop), but it's the same story there. Please help. I don't want to learn another distro, but Ubuntu gets less customizable with every "upgrade." Screenshot (not an inline image, since I don't have the reputation yet)... i.stack.imgur.com/puoUT.png

    Read the article

  • Object Oriented Design of a Small Java Game

    - by user2733436
    This is the problem I am dealing with. I have to make a simple game of NIM. I am learning Java using a book; so far I have only coded programs that deal with 2 classes. This program would have about 4 classes, I guess, including the main class. My problem is that I am having a difficult time designing classes and how they will interact with each other. I really want to think in and use an object-oriented approach. So the first thing I did was design the Pile class, as it seemed the easiest and made the most sense to me in terms of what methods go in it. Here is what I have so far for the Pile class. package Nim; import java.util.Random; public class Pile { private int initialSize; public Pile(){ } Random rand = new Random(); public void setPile(){ initialSize = (rand.nextInt(100-10)+10); } public void reducePile(int x){ initialSize = initialSize - x; } public int getPile(){ return initialSize; } public boolean hasStick(){ if(initialSize>0){ return true; } else { return false; } } } Now I need help in designing the Player class. By that I mean I am not asking for anyone to write code for me, as that defeats the purpose of learning; I was just wondering how I would design the Player class and what would go in it. My guess is that the Player class would contain a method for choosing the computer's move and also for receiving the move the human user makes. Lastly, I am guessing that the turns would be handled in the Game class. I am really lost right now, so if someone can help me think through this problem it would be great, starting with the Player class. I know there are some solutions for this problem online, but I refuse to look at them because I want to develop my own approach to such problems, and I am confident that if I can get through this problem I can solve other problems. I apologize if this question is a bit poor, but specifically I need help in designing the Player class.
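
    Since the question asks for help thinking through the Player class rather than a finished answer, here is one hedged sketch of a shape it could take against the Pile class above; the names, the 1-to-3-stick rule, and the naive random strategy are illustrative assumptions, not the only reasonable design.

        import java.util.Random;
        import java.util.Scanner;

        public class Player {
            private final String name;
            private final boolean isComputer;
            private final Random rand = new Random();
            private final Scanner in = new Scanner(System.in);

            public Player(String name, boolean isComputer) {
                this.name = name;
                this.isComputer = isComputer;
            }

            // Decides how many sticks to take from the pile (assumes a take-1-to-3 rule).
            public int chooseMove(Pile pile) {
                int max = Math.min(3, pile.getPile());
                if (isComputer) {
                    return 1 + rand.nextInt(max); // naive strategy; a smarter one can replace this later
                }
                int move;
                do {
                    System.out.print(name + ", take how many sticks (1-" + max + ")? ");
                    move = in.nextInt();
                } while (move < 1 || move > max);
                return move;
            }

            public String getName() { return name; }
        }

    A Game class could then hold the Pile and two Players and simply alternate calls to pile.reducePile(current.chooseMove(pile)) until pile.hasStick() returns false.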

    Read the article

  • Rolling With the Punches

    - by D'Arcy Lussier
    So I’ve been tweeting the last little while “Rolling with the punches” and I’ve had some people ask me what that meant. Whether you’re running a conference (like I am this week), or a project, or a birthday party for a 2 year old, you need to be ready to handle those things that are unexpected. Risk mitigation can only go so far and its at those times that you need to become resourceful. So let me tell you what the last few days have been like. Today is the first day of Prairie Dev Con Winnipeg, a conference that I run. On Friday I was informed that my keynote speaker had lost his voice, one of my speakers had a family emergency and had to back out, and I got a warning from another that he was travelling over the weekend and if there was a storm or something he may not be able to get back by Monday for his talk. A storm didn’t happen, but their car did break down and he was delayed. Finally, Saturday night I took my printing order to Staples. It was at 5 and they closed at 6, and I had a bunch of surveys to be printed and cut. The girl working said that she’d have it ready by the next day (Sunday). Her intent was to come in the next morning and finish the job. Unfortunately, she had to be hospitalized that night and never made it into work…and never informed anyone of the remaining work. They found out at 3pm when I came to pick it up and there was no way they’d be able to cut everything in time. So how did we roll with these punches? - Miguel, my keynote speaker, was a trooper and was able to do the keynote but asked that his session get moved from Monday to Tuesday. This is why I wait until the last day before printing out schedules, they can change up to the event and even later. - I was able to move some sessions around to accommodate my stranded speaker and fill the empty slot from the speaker that couldn’t make it. - Staples was able to get me half the cut surveys so I took those and my wife will pick up the rest today. I altered how we’d collect session surveys, and actually I think it’ll work better. So all of this is to say, plan but also plan for what you can’t plan for – there will be things that happen that blindside you, that you’re not sure how to handle or solve. Stop, take a deep breath, and don’t feel that you need to limit yourself to the boundaries that you initially set for yourself. Roll with the punch and learn from it so that you can avoid the blow next time. Now, back to the conference! D

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it, or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question that I was asking myself at the end of 2010. How can I get my Silverlight code organized? My requirements for a snippet manager were: Needs to be FREE. An easy way to view XAML/C# code behind together in one “view”. The ability to store the code snippets in the cloud in case my HDD dies. Searchable keywords to quickly find code snippets. I started looking for a snippet manager that would allow me to do just that and finally found Snippet Manager. Before going any further, I think that one of the most important things to note here is that this software supports 37 languages. It’s not just for Silverlight developers or C#-only guys. The software supports Java, SQL and even COBOL. Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections. The top part is my XAML and the bottom is my C# code behind. I’ve included a sample below of my code snippets so that you can get an idea of how I organized it. Another thing that’s great about this software is that it supports plain text. I added some connection strings in the TEXT section below. Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called “snippets” on my FTP server and hit the upload button once I had finished adding my new code snippets. This will allow me to use the code snippets on another computer with this application on my USB key. See screenshots below: Enter your FTP credentials below: Hit the Uploads button on the Toolbar: Log in to your FTP server and verify that the following files are now there: Another great feature of the Snippet Manager is that you can also integrate it into VS2010 by clicking Tools –> External Tools and setting up the external tool to point to the executable: You can now launch it by going to Tools –> Snippet Manager. If you want, you could also add a shortcut to launch the program with HotKeys. As you can see, this is a nice little program that includes everything needed to keep your code snippets organized very cleanly. I didn’t go over every feature, but this is something that you might want to download and give a shot. Subscribe to my feed CodeProject

    Read the article

  • dual boot ubuntu installation mishap

    - by user590849
    I have a Windows 7 PC where I had 2 partitions: a C drive for my system files and a D drive for my data. I decided to install Ubuntu 11.10 a couple of days ago and thought of installing it in a separate partition of its own, so I made a separate Linux partition of 30GB. I downloaded Ubuntu onto my USB stick and installed it. During the installation process I was asked where to install Ubuntu, so I opened up a screen that was similar to this one. There were six partitions present (I had made only 3 partitions via Windows). Their names were totally different from the ones that I had given in Windows. So I selected a drive which had the same size as the Linux partition that I had made in Windows (no other partition had the same size). I clicked on Install Now and got an error message saying that "There was no root folder set". I set the newly made partition as my root folder and clicked Install Now. Now, out of the 6 partitions that were created, 3 were logical (I had only created 3 partitions in Windows). As soon as I clicked Install Now, the system asked me where I wanted to put my "swap space". I selected one of the logical drives and hit Install. Ubuntu successfully installed on my system and at the end it asked me to reboot. I did, and got the following error message: "missing operating system". I was shocked. I tried my Windows recovery disk (that I had gotten when I purchased my laptop) and there I went into Startup Repair. In the Startup Repair option I was not able to locate Windows. The system asked me to click the "Load drivers" button to load the drivers for the hard drive where Windows was installed, but I could not locate any drivers for my hard drive. I tried this several times but with no success. I panicked and installed Ubuntu again, this time clicking "OK" at every step (not worrying about the partitioning at all). The OS installed correctly and I am now able to access my hard drive. No data within the C drive is lost. All the Windows system files are intact. I wish to recover my Windows installation. How do I go about it? Thank you in advance. I do not want to format my computer and install Windows again.

    Read the article
