Search Results

Search found 3760 results on 151 pages for 'multiple entries'.


  • ASP.NET MVC localization DisplayNameAttribute alternatives: a better way

    - by Brian Schroer
    In my last post, I talked about creating a custom class inheriting from System.ComponentModel.DisplayNameAttribute to retrieve display names from resource files:

        [LocalizedDisplayName("RememberMe")]
        public bool RememberMe { get; set; }

    That's a lot of work to put an attribute on all of my model properties though. It would be nice if I could intercept the ASP.NET MVC code that analyzes the model metadata to retrieve display names, and make it automatically get localized text from my resource files. That way, I could just set up resource file entries where the keys are the property names, and not have to put attributes on all of my properties. That's done by creating a custom class inheriting from System.Web.Mvc.DataAnnotationsModelMetadataProvider:

         1: public class LocalizedDataAnnotationsModelMetadataProvider :
         2:     DataAnnotationsModelMetadataProvider
         3: {
         4:     protected override ModelMetadata CreateMetadata(
         5:         IEnumerable<Attribute> attributes,
         6:         Type containerType,
         7:         Func<object> modelAccessor,
         8:         Type modelType,
         9:         string propertyName)
        10:     {
        11:         var meta = base.CreateMetadata
        12:             (attributes, containerType, modelAccessor, modelType, propertyName);
        13:
        14:         if (string.IsNullOrEmpty(propertyName))
        15:             return meta;
        16:
        17:         if (meta.DisplayName == null)
        18:             GetLocalizedDisplayName(meta, propertyName);
        19:
        20:         if (string.IsNullOrEmpty(meta.DisplayName))
        21:             meta.DisplayName = string.Format("[[{0}]]", propertyName);
        22:
        23:         return meta;
        24:     }
        25:
        26:     private static void GetLocalizedDisplayName(ModelMetadata meta, string propertyName)
        27:     {
        28:         ResourceManager resourceManager = MyResource.ResourceManager;
        29:         CultureInfo culture = Thread.CurrentThread.CurrentUICulture;
        30:
        31:         meta.DisplayName = resourceManager.GetString(propertyName, culture);
        32:     }
        33: }

    Line 11 calls the base CreateMetadata method. Line 17 checks whether the metadata DisplayName property has already been populated by a DisplayNameAttribute (or my LocalizedDisplayNameAttribute). If so, it respects that and doesn't use my custom localized text lookup. The GetLocalizedDisplayName method checks for the property name as a resource file key. If found, it uses the localized text from the resource files. If the key is not found in the resource file then, as with my LocalizedDisplayNameAttribute, I return a formatted string containing the property name (e.g. "[[RememberMe]]") so I can tell by looking at my web pages which resource keys I haven't defined yet. It's hooked up with this code in the Application_Start method of Global.asax:

        ModelMetadataProviders.Current = new LocalizedDataAnnotationsModelMetadataProvider();
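
    For reference, the attribute from that last post might look something like this - a minimal sketch of my own, assuming the same MyResource resource class used above and the same "[[key]]" fallback convention:

        using System.ComponentModel;

        public class LocalizedDisplayNameAttribute : DisplayNameAttribute
        {
            public LocalizedDisplayNameAttribute(string resourceKey)
                : base(GetDisplayName(resourceKey)) { }

            private static string GetDisplayName(string resourceKey)
            {
                // Look the key up in the resource file; fall back to a marker
                // string so missing keys are easy to spot on rendered pages.
                // MyResource is the generated resource class from the article.
                string text = MyResource.ResourceManager.GetString(resourceKey);
                return string.IsNullOrEmpty(text)
                    ? string.Format("[[{0}]]", resourceKey)
                    : text;
            }
        }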


  • Part 2: The Customization Lifecycle

    - by volker.eckardt(at)oracle.com
    To understand the challenges when working with customizations better, please allow me to explain my understanding of the customization lifecycle. The starting point is the functional GAP list. Any GAP can lead to a customization (but does not have to). The decision is driven by priority, gain, costs, future functionality, accepted workarounds etc. Let's assume the customization has been accepted as such - including estimation. (Otherwise this blog would not have any value.) Now the customization lifecycle starts, and could look like this:

    - Functional specification
    - Technical specification
    - Technical development
    - Functional setup
    - Module Test
    - System Test
    - Integration Test (if required)
    - Acceptance Test
    - Production mode
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 2 x Upgrade
    - 2 x Upgrade Test
    - Usage
    - 10 x Rework
    - 10 x Retest
    - 1 x Upgrade
    - 1 x Upgrade Test
    - Usage
    - Review for Retirement
    - Accepted Retirement
    - De-installation

    What I would like to highlight herewith is that any material and documentation you create upfront or during the first phases will usually be used multiple times, partially or completely, and will be enhanced, reviewed and retested. The better the quality is right from the beginning, the better we can perform the next steps.

    What I see very often is the wish to remove a customization: our customers are upgrading and they would like to get at least some of the customizations replaced with standard functionality. To support this process best, the customization documentation should contain at least the following key information:

    - What is/are the business process(es) where this customization is used or linked to?
    - Who was involved in the different customization phases?
    - What are the objects comprising the customization?
    - What is the setup necessary for the customization?
    - What setup comes with the customization, and what has to be done via other tools or manually?
    - What are the test steps and test results (in all test areas)?
    - What are linked customizations?
    - What is the customization complexity?
    - How is this customization classified?
    - Which technologies were used?
    - How many days were needed to create/test/upgrade the customization?
    - Etc.

    If all this is available, a replacement / retirement can be done much more efficiently and precisely, and an estimation or the upgrade itself can be executed with much better support. In the following blog entries I will explain in more detail why we suggest tracking such information, by whom this task shall be done, and how.

    Volker Eckardt


  • TSAM 11gR1

    - by todd.little
    The Tuxedo System and Application Monitor (TSAM) 11gR1 release provides powerful new application monitoring capabilities, as well as significant improvements in ease of use. The first thing users will notice is the completely redesigned user interface in the TSAM console. Based on Oracle ADF, the console is much easier to navigate, provides a Web 2.0 style interface with dynamically updating panels, and has a look and feel familiar to those who have used Oracle Enterprise Manager. Monitoring data can be viewed in both tabular and graphical form and exported to Excel for further analysis.

    A number of new metrics are collected and displayed in this release. Call path monitoring now displays CPU time, message size, total transport time, and client address, giving even more end-to-end information about a specific Tuxedo request. As well, the call path display has been completely revamped to make it much easier to see the branches of the call path. The call pattern display now provides statistics on successful vs. failed calls, system and application failures, and end-to-end average elapsed time. Service monitoring now displays minimum and maximum message size, CPU usage, and client address. System server monitoring now includes the SALT gateway servers, providing detailed performance metrics about those servers.

    Perhaps the most significant new feature is the consolidation of alert definitions and policy management. In previous versions of TSAM, some alerts were defined and checked on the monitored systems while others were defined and checked in the console. Policy management could be performed both on the monitored node, via environment variable or command, and from the console. Now all alert and policy definitions are made only in the console. For alerts this means that regardless of where the alert is evaluated, it is defined in one and only one place. Thus the plug-in alert mechanism of previous releases can now be managed using the TSAM console, making SLA alert definition much easier and cleaner.

    Finally, there is support in TSAM for monitoring rehosted mainframe applications. The newly announced Oracle Tuxedo Application Runtime for CICS and Batch can be monitored in the TSAM console using traditional mainframe views of the application, such as regions. Look for a future blog entry with more details on this, as well as some entries providing a glimpse of the console. TSAM gives users a single point for monitoring the performance of all of their Tuxedo applications.


  • Save Links for Later Reading in Firefox

    - by Asian Angel
    Do you want a simple way to save and manage links for reading later? The Save-To-Read extension for Firefox makes it easy to do, without an account.

    Using Save-To-Read

    As soon as you install the extension you will notice two new additions to your UI: a small plus sign in the address bar and a new toolbar button (which opens and closes the sidebar shown here). Your bookmarks menu will also have a new folder entry. For our example we chose to save three pages for later reading. Each time you want to save a website, click on the small plus sign and it is automatically added to your read-later list. Our second article… and finally the third article. Notice that the small plus sign has become a minus sign after adding the article to our list.

    Opening the sidebar shows our three entries waiting to be read. Checking the bookmarks menu shows the same articles available there. When you are ready to read your articles, simply click on the link in the sidebar, bookmarks menu, etc. Notice that the entry is still available at the moment… there are no automatic deletions until you are finished with an article. This is great if you accidentally click the wrong link before you are ready for it. Removing an article from the list is as simple as clicking on the address bar minus sign. It will revert to a plus sign and the entry is no longer visible in your list.

    For those who want to avoid using a sidebar, there is a different toolbar button available too. The alternate toolbar button provides access to a drop-down article list. Choose the access style that best suits your needs.

    Preferences

    The preferences are simple to work with and focus on appearance/ease-of-use.

    Conclusion

    If you have been looking for a simpler alternative to other "read later" extensions, then Save-To-Read could be just what you have been waiting for. For another cool option for reading posts later, even on eReaders, check out our article on saving articles to read later with Instapaper.

    Links

    Download the Save-To-Read extension (Mozilla Add-ons)


  • Install of AppFabric RC stops AppFabric Monitoring (some traps for young players)

    - by Rob Addis
    I uninstalled AppFabric Beta 2 and installed AppFabric RC. The AppFabricEventCollection service is started (running under Local Service, which is a db_owner on the Monitoring database, to prove this wasn't the issue). The SQLServerAgent service is started. Nothing is being written to the Monitoring DB staging table, and thus nothing is being written to the event tables or seen in the AppFabric Dashboard. Nothing has been written to the following event logs:

    - Microsoft-Windows-Application Server-System Services\Admin
    - Microsoft-Windows-Application Server-System Services\Operational

    The Microsoft-Windows-Application Server-System Services\Debug event log is not shown in the event viewer. The WCF configuration appears fine; the connection string to the Monitoring DB is correct. Monitoring is set to "Troubleshooting" and no errors are shown on the "Configure WCF and WF for Application" dialog. So the problem seems to lie with either AppFabric, which writes to the event log, or the AppFabricEventCollection service. I thought I was flummoxed... However, one of my colleagues asked: have you checked the etwProviderId? I was using a config which was created under AppFabric Beta 2, which had a different etwProviderId. So I deleted the following section and all other references to AppFabric monitoring from the web.config, then recreated them using the IIS "Configure WCF and WF for Application" dialog and set the level to Troubleshooting.

        <diagnostics etwProviderId="6b44a7ff-9db4-4723-b8cf-1b584bf1591b">
            <endToEndTracing propagateActivity="true" messageFlowTracing="true" />
        </diagnostics>

    I then called a service to create some log entries. Still nothing was written to the Monitoring DB staging table... I checked the Microsoft-Windows-Application Server-System Services\Admin event log. It had the following entry:

        Configuration error. Please see the details to correct the problem.
        Detailed information:
        Filename: \\?\C:\Users\xxx\Documents\dotnetdev\Frameworks\SOA\xxx.SOA.Framework\xxx.SOA.Framework.MockServices\SimpleServiceParent\web.config
        Error: Cannot read configuration file due to insufficient permissions
        System.UnauthorizedAccessException: Filename: \\?\C:\Users\xxx\Documents\dotnetdev\Frameworks\SOA\xxx.SOA.Framework\IAG.SOA.Framework.MockServices\SimpleServiceParent\web.config
        Error: Cannot read configuration file due to insufficient permissions

    And guess who the user was... Local Service. Yes, yes - I should have used a better user in the AppFabric RC setup to run the AppFabricEventCollection service under! So I changed the user to a more appropriate one, removed Local Service as a dbo, and hey presto!


  • What micro web-framework has the lowest overhead but includes templating?

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website that, besides a contact form, could be written in pure HTML. It is currently built with classic ASP and Dreamweaver templates. The reason I'm not simply writing 10 HTML pages is that I want to keep the layout all in one place, so I would need either includes or a masterpage.

    I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using Notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to use jQuery to filter). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files.

    Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself, which is again way over the top. I aim to get a text file with the club details (probably in JSON or XML format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated, but I'm happy to have it triggered on a visit to the clubs page so I don't need to worry about scheduling a job (see the sketch after my requirements below). I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site.

    Ideally I'd like to use Mercurial (or git) to publish. I know Bitbucket (and GitHub) both serve static page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing. My requirements are:

    - Simple templating system: one place to define headers, footers, menus etc., that can be edited using just Notepad.
    - Very minimal / lightweight framework. I don't need a monster for 10 pages.
    - Must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.
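
    A minimal sketch of the on-visit refresh idea for the clubs file - the HQ URL, cache path and one-day staleness window are all hypothetical placeholders, and real parsing of the scraped page would replace the raw save:

        using System;
        using System.IO;
        using System.Net;

        public static class ClubListUpdater
        {
            private const string HqUrl = "http://example.org/dojo-locator"; // hypothetical
            private const string CacheFile = "App_Data/clubs.json";         // hypothetical

            // Call this from the clubs page; it only hits the HQ site
            // when the local file is missing or older than a day.
            public static void RefreshIfStale()
            {
                if (File.Exists(CacheFile) &&
                    DateTime.UtcNow - File.GetLastWriteTimeUtc(CacheFile) < TimeSpan.FromDays(1))
                    return; // still fresh, nothing to do

                using (var client = new WebClient())
                {
                    string html = client.DownloadString(HqUrl);
                    // Real scraping/parsing of the HQ page would go here;
                    // this sketch just persists the raw response.
                    File.WriteAllText(CacheFile, html);
                }
            }
        }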


  • Gparted Partition Mount Points Alternate Between 2 Physical Disk Drives

    - by California Ken
    I'm running Ubuntu Server 14.04 on a system with 2 physical disk drives. I am frequently seeing mount errors on startup. When I check the drive partitions using GParted, I see that my two "non-system created" data partitions have the wrong disk assignments (i.e. sda1 vs sdb1), or vice versa. If I hand edit /etc/fstab to match GParted, the system will boot error-free one time. On the second restart I will get the "serious mount problem" error for the 2 data partitions, and when I check GParted, the disk assignments have changed again (again, GParted and fstab don't match). A listing of my /etc/fstab is:

        # /etc/fstab: static file system information.
        #
        # Use 'blkid' to print the universally unique identifier for a device; this may
        # be used with UUID= as a more robust way to name devices that works even if
        # disks are added and removed. See fstab(5).
        #
        # / was on /dev/sdb2 during installation
        UUID=766a06a4-e5af-484a-adf0-fa1e88da7212 /  ext4  errors=remount-ro,user_xattr,acl,barrier=1  0  1
        # swap was on /dev/sda6 during installation
        UUID=8c42f835-ead3-43fb-88d8-196f5dfc3aa7 none  swap  sw  0  0
        # swap was on /dev/sdb3 during installation
        UUID=2214deec-ba98-47da-aea7-4e46998f3e57 none  swap  sw  0  0
        /dev/fd0   /media/floppy0         auto  rw,user,noauto,exec,utf8  0  0
        /dev/sda1  /media/ken/Linux-Data  ext3  defaults  0  2
        /dev/sda5  /media/ken/Data2       ext4  defaults  0  2

    The device designations in the last 2 lines are the ones in question. The fstab entries do NOT change between system restarts, but the mount points in the GParted display do. Does anyone have a fix for this? Thanks

    Mr. Young and Mr. Gedak: following is my fstab file and two blkid outputs. The fstab output is correct. The first blkid output was after a reboot and is WRONG! The sda and sdb device partition data is reversed. The 2nd blkid output was after a second reboot (fstab not changed). It shows the sda and sdb partition data CORRECTLY. I didn't see any duplicate UUIDs. Does anyone have any idea why the GParted and blkid outputs alternate on consecutive reboots? The alternating partition data is real, since when the partition assignments are reversed, the boot sequence halts with disk mounting errors (I have to press "s" to skip the mounts). Thanks again. Ken

    I copied the contents of a text file showing my fstab and 2 blkid outputs. The text file contents show up in the text entry box but do not appear in the main body of the question. Is there a way I can attach the text file, or edit this question so that the text is displayed for question viewers?


  • Set Custom Reload Times for Individual Webpages in Chrome

    - by Asian Angel
    Do you have a webpage that needs to be reloaded every so often, or perhaps multiple webpages that each need their own individual reload time? Now you can have the best of both with the AutoReloader extension for Google Chrome.

    Using AutoReloader

    When you first look at the drop-down window, everything will be in a neutral "waiting" state. You can start using the extension immediately by simply entering the desired time frame for reloading a webpage. Notice for the "Repeat Option" that "0 = Continuous"… You may want to have a quick look through the "Options" to see if there are any operational changes that you would like to make. Once you enter a time, click on the "Set" link to start the timer. Notice that you can view the time remaining on the toolbar button, unless you disabled that feature in the "Options". Clicking on the toolbar button will show a larger version of the timer in the drop-down window, along with a "Cancel Current Timer" link.

    Here is the best part of all with AutoReloader… you can set up your own customized list of reload times and then access them through the drop-down window. Using the two times shown here, we were able to set the Productive Geek webpage up for 30 second reloads and the TinyHacker webpage up for 1 minute reloads at the same time. There was no conflict whatsoever in running both reload times simultaneously. This is a really terrific feature!

    Conclusion

    Whether you have only one webpage or multiple pages that need periodic reloading (such as tracking a Woot-Off or an eBay auction), the AutoReloader extension is the perfect tool for the job. Running custom reload times simultaneously has never been easier.

    Links

    Download the AutoReloader extension (Google Chrome Extensions)


  • TFS Rant *WARNING* negative opinions are being expressed.

    - by ryanabr
    It has happened several times now that I end up installing TFS "over the shoulder" of the system admin guy whose job it will be to "own" the server when I am gone. TFS is challenging enough to stand up when doing it myself on a completely open platform, but at these locations networks are locked down, machines are locked down, and the unexpected always seems to pop up. I personally have the tolerance for these things as a software developer, but as we are installing I have to listen to all the 'colorful' remarks being made: "why is it like this" or "this is a piece of crap". Generally the issues center around SharePoint integration. TFS on its own is straightforward, but the last flavor in everyone's mouth is the SharePoint piece.

    As a product I like SharePoint, but installation is a nightmare. In this particular case, we are going to use WSS, since the customer would like this separate from their corporate SharePoint 2010 installations - their dev team is really small (1 developer) and it is being used as a VSS replacement more than a full-blown ALM tool. The server where it is being installed has a Cisco Security Agent on it that seems to block 'suspicious' activity and, as far as I can tell, is preventing WSS from installing properly. The most confounding thing is that we can find no meaningful log entries to help diagnose the issue. It didn't help matters that when we tried to contact Microsoft for support, because we mentioned TFS in the list of things that we were trying to install, after waiting 2 hours we got a TFS support person, NOT the SharePoint person that we really needed. After another 2 hours, the SharePoint support person that we did get managed to corrupt the registry sufficiently with his 'tools' that we ended up starting over from scratch the next day anyway, after going home at midnight.

    My point to this is: the System Administrator who is going to own this now thinks it is a piece of crap because SharePoint wouldn't install properly. Perception is everything. Everyone today is conditioned that software installs and works in a very simple manner. When looking at the different options to install TFS with the different "modes", there is inconsistency in the information being presented, which leads to choices that cause headaches and this bad perception before the product is even installed. I am highlighting this because I love TFS as a product, but I HATE installing it, and would like it to install as simply and elegantly as the product operates once it is installed.


  • JRockit Virtual Edition Debug Key

    - by changjae.lee
    There are a few keys that can help with debugging of the JRVE environment in the console. You can type each key in the JRVE console to see what's happening under the hood.

    key '1' : System information
    key '5' : Enable shutdown
    key '7' : Start JRockit Management Server (port 7091)
    key '8' : Statistics Counters
    key '9' : Full Thread Dump
    key '0' : Status of Debug-keys

    Below is sample output from each key.

    Debug-key '1' pressed
    ============ JRockitVE System Information ============
    JRockitVE version  : 11.1.1.3.0-67-131044
    Kernel version     : 6.1.0.0-97-131024
    JVM version        : R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32
    Hypervisor version : Xen 3.4.0
    Boot state         : 0x007effff
    Uptime             : 0 days 02:04:31
    CPU                : uniprocessor @2327 Mhz
    CPU usage          : 0% ctx/s: 285 preempt/s: 0 migrations/s: 0
    Physical pages     : 82379/261121 (321/1020 MB)
    Network info       : 10.179.97.64 (10.179.97.64/255.255.254.0)
    GateWay            : 10.179.96.1
    MAC address        : 00:16:3e:7e:dc:78
    Boot options       :
      vfsCwd        : /application/user_projects/domains/wlsve_domain
      mainArgs      : java -javaagent:/jrockitve/services/sshd/sshd.jar -cp /jrockitve/jrockit/lib/tools.jar:/jrockitve/lib/common.jar:/application/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/application/wlserver_10.3/server/lib/weblogic.jar -Dweblogic.Name=WlsveAdmin -Dweblogic.Domain=wlsve_domain -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 -Dweblogic.management.GenerateDefaultConfig=true weblogic.Server
      consLog       : /jrockitve/log/jrockitve.log
      mounts        : ext2 / dev0;
      posixLocale   : en_US
      posixTimezone : Asia/Seoul
      posixEncoding : ISO-8859-1
    Local disk         : Size: 1024M, Used: 728M, Free: 295M
    ========================================================

    Debug-key '5' pressed
    Shutdown enabled.

    Debug-key '7' pressed
    [JRockit] Management server already started. Ignoring request.

    Debug-key '8' pressed
    Starting stat recording

    Debug-key '8' pressed
    ========= Statistics Counters for the last second =========
    dev.eth0_rx.cnt               : 22 packets
    dev.eth0_rx_bytes.cnt         : 2704 bytes
    dev.net_interrupts.cnt        : 22 interrupts
    evt.timer_ticks.cnt           : 123 ticks
    hyper.priv_entries.cnt        : 144 entries
    schedule.context_switches.cnt : 271 switches
    schedule.idle_cpu_time.cnt    : 997318849 nanoseconds
    schedule.idle_cpu_time_0.cnt  : 997318849 nanoseconds
    schedule.total_cpu_time.cnt   : 1000031757 nanoseconds
    time.system_time.cnt          : 1000 ns
    time.timer_updates.cnt        : 123 updates
    time.wallclock_time.cnt       : 1000 ns
    =======================================

    Debug-key '9' pressed
    ===== FULL THREAD DUMP ===============
    Fri Jun 4 08:22:12 2010
    BEA JRockit(R) R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32

    "Main Thread" id=1 idx=0x4 tid=1 prio=5 alive, in native, waiting
    -- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
    at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
    at java/lang/Object.wait(J)V(Native Method)
    at java/lang/Object.wait(Object.java:485)
    at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:919)
    ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock]
    at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:479)
    at weblogic/Server.main(Server.java:67)
    at jrockit/vm/RNI.c2java(IIIII)V(Native Method)
    -- end of trace

    "(Signal Handler)" id=2 idx=0x8 tid=2 prio=5 alive, in native, daemon

    Open lock chains
    ================
    Chain 1:
    "ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=23 idx=0x50 tid=20
      waiting for java/lang/String@0x630c588
      held by:
    "ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=24 idx=0x54 tid=21 (active)
    ===== END OF THREAD DUMP ===============

    Debug-key '0' pressed
    Debug-keys enabled

    Happy Cloud Walking :)


  • Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI

    - by Rohan Dhruva
    I bought a new Sony Vaio S series laptop. It uses an Insyde H2O EFI BIOS, and trying to install Linux on it is driving me crazy.

        root@kubuntu:~# parted /dev/sda print
        Model: ATA Hitachi HTS72756 (scsi)
        Disk /dev/sda: 640GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name                          Flags
         1      1049kB  274MB   273MB   fat32        EFI system partition          hidden
         2      274MB   20.8GB  20.6GB  ntfs         Basic data partition          hidden, diag
         3      20.8GB  21.1GB  273MB   fat32        EFI system partition          boot
         4      21.1GB  21.3GB  134MB                Microsoft reserved partition  msftres
         5      21.3GB  342GB   320GB   ntfs         Basic data partition
         6      342GB   358GB   16.1GB  ext4         Basic data partition
         7      358GB   374GB   16.1GB  ntfs         Basic data partition
         8      374GB   640GB   266GB   ntfs         Basic data partition

    What is surprising is that there are 2 EFI system partitions on the disk. The sda2 partition is a 20 GB recovery partition which loads Windows with a basic recovery interface. This is accessible by pressing the "ASSIST" button as opposed to the normal power button. I presume that the sda1 EFI System Partition (ESP) loads into this recovery. The sda3 ESP has more fleshed-out entries for Microsoft Windows, which actually goes into Windows 7 (as confirmed by bcdedit.exe on Windows). Ubuntu is installed on sda6, and during installation I chose sda3 as my boot partition. The installer correctly created an sda3/EFI/ubuntu/grubx64.efi application.

    The real problem: for the life of me, I can't set it to be the default! I tried creating an sda3/startup.nsh which called grubx64.efi, but it didn't help - on rebooting, the system still boots into Windows. I tried using efibootmgr, and that shows as if it worked:

        root@kubuntu:~# efibootmgr
        BootCurrent: 0000
        BootOrder: 0000,0001
        Boot0000* EFI USB Device
        Boot0001* Windows Boot Manager
        root@kubuntu:~# efibootmgr --create --gpt --disk /dev/sda --part 3 --write-signature --label "GRUB2" --loader "\\EFI\\ubuntu\\grubx64.efi"
        BootCurrent: 0000
        BootOrder: 0002,0000,0001
        Boot0000* EFI USB Device
        Boot0001* Windows Boot Manager
        Boot0002* GRUB2
        root@kubuntu:~# efibootmgr
        BootCurrent: 0000
        BootOrder: 0002,0000,0001
        Boot0000* EFI USB Device
        Boot0001* Windows Boot Manager
        Boot0002* GRUB2

    However, on rebooting, as you guessed, the machine rebooted directly back into Windows. The only things I can think of are:

    - The sda1 partition is somehow being used.
    - Overwrite /EFI/Boot/bootx64.efi and /EFI/Microsoft/Boot/bootmgfw.efi with grubx64.efi [but this seems really radical].

    Can anyone please help me out? Thanks - any help is greatly appreciated, as this issue is driving me crazy!


  • How to repair an external hard drive?

    - by dodohjk
    I would like to reformat my hard disk, and recover the (somewhat unimportant) contents if possible. I have a Western Digital 1TB hard drive which had an NTFS partition. I unplugged the drive without safely removing it first. At first a pop-up was asking me to use a Windows OS to run the chkdsk /f command; however, in the effort to keep using a Linux OS, I used the ntfsfix command in the Ubuntu terminal. Now, when I try to access the hard drive, it doesn't show up anymore in Nautilus. I tried reformatting it using Disk Utility, but it gives me an error message, and GParted would hang on the "Scanning devices" step indefinitely. Please comment with any output that you would like to see and I will add it to my question.

    EDIT: Disk Utility tells me it is on /dev/sdb. The command sudo fdisk -l gives:

        dodohjk@DodosPC:~$ sudo fdisk -l
        [sudo] password for dodohjk:

        Disk /dev/sda: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0006fa8c

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        4094   482344959   241170433    5  Extended
        /dev/sda2       482344960   488396799     3025920   82  Linux swap / Solaris
        /dev/sda5            4096    31461127    15728516   83  Linux
        /dev/sda6        31463424    52434943    10485760   83  Linux
        /dev/sda7        52436992    62923320     5243164+  83  Linux
        /dev/sda8        62924800   482344959   209710080   83  Linux

        Disk /dev/sdb: 1000.2 GB, 1000202043392 bytes
        255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x6e697373

        This doesn't look like a partition table
        Probably you selected the wrong device.

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   ?    1936269394  3772285809   918008208   4f  QNX4.x 3rd part
        /dev/sdb2   ?    1917848077  2462285169   272218546+  73  Unknown
        /dev/sdb3   ?    1818575915  2362751050   272087568   2b  Unknown
        /dev/sdb4   ?    2844524554  2844579527       27487   61  SpeedStor

        Partition table entries are not in disk order

    Here is the output of fsck /dev/sdb:

        dodohjk@DodosPC:~$ sudo fsck /dev/sdb
        fsck from util-linux 2.20.1
        e2fsck 1.42.5 (29-Jul-2012)
        ext2fs_open2: Bad magic number in super-block
        fsck.ext2: Superblock invalid, trying backup blocks...
        fsck.ext2: Bad magic number in super-block while trying to open /dev/sdb

        The superblock could not be read or does not describe a correct ext2
        filesystem. If the device is valid and it really contains an ext2
        filesystem (and not swap or ufs or something else), then the superblock
        is corrupt, and you might try running e2fsck with an alternate superblock:
            e2fsck -b 8193 <device>


  • Vote of Disconfidence to Entity Framework

    - by Ricardo Peres
    A friend of mine has found the following problem with Entity Framework 4:

    - two simple classes and one association between them (one to many);
    - one condition to filter out soft-deleted entities (WHERE Deleted = 0);
    - 100 records in the database;
    - a simple query:

        var l = ctx.Person.Include("Address").Where(x =>
            (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926) ||
            (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497));

    It will produce the following SQL:

        SELECT
            [Extent1].[Id] AS [Id],
            [Extent1].[FullName] AS [FullName],
            [Extent1].[AddressId] AS [AddressId],
            [Extent202].[Id] AS [Id1],
            [Extent202].[Name] AS [Name],
            [Extent202].[Number] AS [Number]
        FROM [dbo].[Person] AS [Extent1]
        LEFT OUTER JOIN [dbo].[Address] AS [Extent2] ON ([Extent2].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent2].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent3] ON ([Extent3].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent3].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent4] ON ([Extent4].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent4].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent5] ON ([Extent5].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent5].[Id])
        LEFT OUTER JOIN [dbo].[Address] AS [Extent6] ON ([Extent6].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent6].[Id])
        ...
        WHERE ((N'317 Oak Blvd.' = [Extent2].[Name]) AND (926 = [Extent3].[Number]))
        ...

    And it will result in 680 MB of memory being taken! Now, Entity Framework has been historically known for producing less than optimal SQL, but 680 MB for 100 entities?!

    According to Microsoft, the problem will be addressed in the following version; there is a Connect issue open. There is even a whitepaper, Performance Considerations for Entity Framework 5, which talks about some of the changes and optimizations coming in version 5, but by reading it, I got even more concerned: "Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute) sweeps the cache." Say what?! The next version of Entity Framework will spawn timer threads?!

    When Code First came along, I thought it was a step in the right direction. Sure, it didn't include some things that NHibernate had done for quite some time - for example, different strategies for Id generation that do not rely on IDENTITY columns (which makes INSERT batching impossible), or support for enumerated types - but I thought these would come with time. Now, enumerated types have, but so did… timer threads! I'm afraid Entity Framework is becoming a monster.
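
    In the meantime, one possible way to sidestep the join explosion for this particular query shape - a sketch on my part, not from the original report, assuming entity set and property names matching the generated SQL above - is to filter the Address set once and fetch the matching people by foreign key, so the association is only referenced a single time:

        // Hypothetical reworking of the query; assumes the same ctx with
        // Person and Address sets as in the example above.
        var addressIds = ctx.Address
            .Where(a => (a.Name == "317 Oak Blvd." && a.Number == 926) ||
                        (a.Name == "891 White Milton Drive" && a.Number == 497))
            .Select(a => a.Id);

        // Translates to a single IN subquery instead of one join per predicate.
        var people = ctx.Person
            .Include("Address")
            .Where(p => addressIds.Contains(p.AddressId))
            .ToList();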


  • Enhance Your Browsing with Hyperwords in Firefox

    - by Asian Angel
    While browsing, it is easy to find information that you would like to know more about, convert, or translate. The Hyperwords extension provides access to these types of resources and more in Firefox.

    Using Hyperwords

    Once the extension has finished installing, you will be presented with a demo video that will let you learn more about how Hyperwords works. For our first example we chose to look for more information concerning "WASP-12b" using Wikipedia. Notice the small bluish circle on the lower right of the highlighted term… it is the default access point for the Hyperwords menu (hover your mouse over it). If you hover over the Wikipedia (or other) link, you can access the information in a scrollable popup window. Or, if you prefer, click on the link to view the information in a new tab. Choose the style that best suits your needs.

    Hyperwords is extremely useful for quick unit conversions. Suppose you want to share a news story that you have found while browsing. Highlight the title, access Hyperwords, and choose your preferred sharing source. You may need to authorize access for Hyperwords to post to your account. Once you have authorized access you can start sharing those links very easily. This is just a small sampling of Hyperwords' many useful features.

    Preferences

    Hyperwords has a nice set of preferences available to help you customize it. Alter the menu popup style, add or remove menu entries, and modify other functions.

    Conclusion

    Hyperwords makes a nice addition to Firefox for anyone needing quick access to search, reference, translation, and other services while browsing.

    Links

    Download the Hyperwords extension (Mozilla Add-ons)
    Download the Hyperwords extension (Extension Homepage)


  • How to Name Linked Servers

    - by Bill Graziano
    I did another SQL Server migration over the weekend that dealt with linked servers. I've seen all kinds of odd naming schemes, and there are a few I like and a few I suggest you avoid.

    Don't name your linked server for its IP address. At some point whatever is on the other end of that IP address will move. You'll probably need to point your linked server to a new IP address but not change the name of the linked server. And then you've completely lost any context around this. Bonus points if a new SQL Server eventually ends up at the old IP address, further adding confusion when you're trying to troubleshoot.

    Don't name your linked server based on its instance name. This one is less obvious. It sounds nice to have a linked server named [VSRV1\SQLTRAN01]. You know what it is and it's easy to use. It's less nice when you've got 200 stored procedures that all reference this linked server, but the database they reference has moved to a new instance. Now when you query this you're actually querying a different instance. (Please note: I'm not saying it's a good idea to have 200 stored procedures that all reference a linked server. I'm just saying it's not all that uncommon.)

    Consider naming your linked server something that you can easily search on. See my note above. You can also get around this by always enclosing the name in brackets. That is harder to enforce unless you use some odd characters in it.

    Consider naming your linked server based on its function. For example, I've had some luck having a linked server named [DW] that points to our data warehouse server. That server can change names or physically move, and all I need to do is update the linked server to point to the new destination. The descriptive name of the linked server is still accurate. No code needs to change and people still know what it is just by looking at it.

    Consider naming your linked server for the database. I'm still thinking through this one. It may mean you have multiple linked servers that point to the same instance. I've found that database names rarely change. It also makes it easier to move individual databases to new servers.

    Consider pointing your linked servers to DNS entries and not IP addresses. I've done this for reporting databases and had some success, especially for read-only snapshots that can get created on the main database or on the mirror.

    What issues have you had with linked server names? What has worked for you? Where are the holes in my approach?


  • Good approach for hundreds of consumers and big files

    - by ????? ???????
    I have several files (nearly 1GB each) with data. The data is a set of string lines. I need to process each of these files with several hundred consumers. Each of these consumers does some processing that differs from the others. Consumers do not write anywhere concurrently; they only need the input string. After processing they update their local buffers. Consumers can easily be executed in parallel.

    Important: with one specific file, each consumer has to process all lines (without skipping) in the correct order (as they appear in the file). The order of processing different files doesn't matter. Processing of a single line by one consumer is comparably fast - I expect less than 50 microseconds on a Core i5.

    So now I'm looking for a good approach to this problem. This is going to be part of a .NET project, so please let's stick with .NET only (C# is preferable). I know about TPL and TPL Dataflow. I guess that the most relevant would be BroadcastBlock, but I think the problem there is that with each line I'll have to wait for all consumers to finish in order to post the new one, and I guess that would not be very efficient. I think the ideal situation would be something like this:

    - One thread reads from the file and writes to a buffer.
    - Each consumer, when it is ready, reads the line from the buffer concurrently and processes it.
    - The entry from the buffer shouldn't be deleted as one consumer reads it. It can be deleted only when all consumers have processed it.
    - TPL schedules consumer threads itself.
    - If one consumer outperforms the others, it shouldn't wait and can read more recent entries from the buffer (see the sketch below).

    Am I right with this kind of approach? Whether yes or no, how can I implement a good solution? A bit was already discussed on StackOverflow: link
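
    A minimal sketch of that shape using TPL Dataflow: one bounded ActionBlock per consumer instead of a shared BroadcastBlock. Every consumer sees each file's lines in order, and a fast consumer can run ahead of a slow one by up to its buffer capacity. The consumer count, capacity and file names here are illustrative assumptions, not tuned values.

        using System;
        using System.IO;
        using System.Linq;
        using System.Threading.Tasks;
        using System.Threading.Tasks.Dataflow;

        class Program
        {
            static async Task Main()
            {
                // One independent, bounded pipeline per consumer.
                ActionBlock<string>[] consumers = Enumerable.Range(0, 200)
                    .Select(id => new ActionBlock<string>(
                        line => Process(id, line),
                        new ExecutionDataflowBlockOptions { BoundedCapacity = 10000 }))
                    .ToArray();

                foreach (string file in new[] { "data1.txt", "data2.txt" }) // hypothetical files
                {
                    foreach (string line in File.ReadLines(file))
                    {
                        // SendAsync only waits for consumers whose buffers are full,
                        // so fast consumers keep draining while slow ones catch up.
                        // Awaiting all sends before the next line preserves order.
                        await Task.WhenAll(consumers.Select(c => c.SendAsync(line)));
                    }
                }

                foreach (var c in consumers) c.Complete();
                await Task.WhenAll(consumers.Select(c => c.Completion));
            }

            static void Process(int consumerId, string line)
            {
                // Each consumer updates only its own local state here.
            }
        }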


  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
    I just uploaded the RC of IdentityServer to CodePlex. This release is feature complete, and if I don't get any bug reports this is also pretty much the final V1.

    Changes from B1:

    - The configuration data access is now based on EF 4.1 code first. This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios).
    - I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) - so I made them optional. If you want to use them, go to web.config and uncomment the new providers.
    - The relying party registration entries now have added fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint how the login experience should look per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
    - WS-Federation single sign-out now conforms to the spec.
    - Certificate-based endpoint identities for SSL endpoints are now optional.
    - Added an initial configuration "wizard". This sets up the signing certificate, issuer URI and site title on the first run.

    Installation: this is still a "developer" release - that means it ships with source code that you have to build, etc. But from that point it should be a little more straightforward than it used to be:

    1. Make sure SSL is configured correctly for IIS.
    2. Map the WebSite directory to a vdir in IIS.
    3. Run the web site. This should bring up the initial configuration.
    4. Make sure the worker process account has access to the signing certificate private key.
    5. Make sure all your users are in the "IdentityServerUsers" role in your role store. Administrators need the "IdentityServerAdministrators" role.

    That should be it. Proper documentation will hopefully be available soon (any volunteers?). Please provide feedback! Thanks!


  • AJAX event prevents other page actions

    - by cobaltduck
    Here's a fairly average scenario, using JSF as an example, but I have observed this same concept in ASP.NET, Apache Wicket, and other frameworks with AJAX capabilities.

        <h:inputText id="text1" value="#{myBacker.myBean.myStringVar}" styleClass="goodCSS">
            <f:ajax event="change" listener="#{myBacker.text1ChangeEventMethod}" update="someOtherField" />
        </h:inputText>
        <h:selectBooleanCheckbox id="check1" value="#{myBacker.myBean.myBoolVar}" />

    Let's suppose that the 'text1ChangeEventMethod' listener is essential to 'someOtherField' and perhaps toggles its disabled attribute, or changes its available options, based on the value of 'myStringVar'. The particulars aren't important; let's just accept that for some reason we need an AJAX call when the 'text1' value is changed.

    So Jane User is working her way down the form. She arrives at the 'text1' field and types some value. The cursor focus is still in the text field as she moves her mouse to the 'check1' box and clicks. It appears to her that nothing has happened. She clicks again, and this time the checkbox highlights and the icon indicating a selection appears in the box. Jane has to do several entries in the form today, sees this happen every time, and it becomes very frustrating for her.

    Likewise, Jeff Admin is also perusing this form, and begins to type in 'text1'. He then realizes he doesn't really want to enter this data, and so moves his mouse to the "cancel" button elsewhere on the page, and clicks. Nothing seems to happen. Jeff clicks again, and after confirming he really does want to cancel, is returned to the home page. Jeff scratches his head.

    The problem is simply that the first thing the system does after 'text1' loses focus is run the listener and perform the AJAX operation. It may only take a fraction of a second, but still: you can click other buttons all you want, but until that AJAX call has finished, everything else is ignored.

    I've spent the morning searching and reading, and it seems no one else has even noticed this. I could find not one article, blog, or past question here or at SO that addresses this obvious and glaring deficiency in AJAX. So first of all, am I truly alone in thinking this is a big problem? Second, does anyone have a solution?


  • Programming Windows 8 Apps with HTML, CSS, and JavaScript - All you need in one title

    It took me a while to work through the 800+ pages of this title. And yes, I really mean working, not reading... Since the release of Windows 8 it should be obvious to any Windows software developer that there are new ways to develop, deploy and market applications for a broader audience. Interestingly, Microsoft has started to narrow the technological gap between its various platforms - desktop, web, smartphone and Xbox - and development of modern apps with HTML, CSS and JavaScript couldn't be easier.

    Kraig covers all facets of modern Windows 8 apps, from the basic building blocks and project templates in Visual Studio 2012, over the thoughtful use of specific APIs, to proper deployment in the App Store and potential monetization. The book is laid out like step-by-step instructions or a tutorial. Kraig literally takes the reader by the hand and explains in detail, in his examples, the reasons, the pros, and the cons of a certain way of implementation. Thanks to cross-references to other chapters, he leaves the reader the choice to dig deeper right now or to catch up later.

    Personally, I have to admit that I really enjoyed the relaxed writing style. App development is not dust-dry rocket science, and it should be joyful to learn about new technologies. And thanks to the richness of the various chapters and samples, you could easily adapt and transfer the knowledge gained in this title to other platforms like Windows Phone 8.

    And last but not least: the ebook is freely available at Amazon, Microsoft Press and O'Reilly. Don't think about it, just get the book. Now.

    Update: I already mentioned this title in other blog entries related to Microsoft certification. Feel free to read on and discover more online resources:

    - Learning content for MCSDs: Web Applications and Windows Store Apps using HTML5
    - More content for MCSDs: Web Applications and Windows Store Apps using HTML5

    O'Reilly offers free webcasts on their site, too. And in case you would like to know more about Kraig's book and his experience with various development teams, please check out this one: Zero to App in Two Weeks: Programming Windows 8 Apps in HTML, CSS, and JavaScript. The recording should be available soon.


  • MIX 2010 Covert Operations Day 1

    - by GeekAgilistMercenary
    Portland Departure - Farewell Stumptown

    Off I go on a plane from Portland, Oregon to Las Vegas, Nevada for the MIX 2010 Conference. Before I even boarded the plane I met Paul Gomes, a Senior Software Engineer, and Andrew Saylor, the Director of Business Development. Both of these SoftSource employees were en route to MIX themselves. Stoked to already be bumping into some top-tier people, I bid them adieu and headed for my seat on the plane.

    I boarded, having opted for an upgrade before boarding. I have to advise that if you get a chance on Alaska to upgrade at the last minute, take it. It is usually only about $50 or so, and the additional space makes working on the ole' laptop actually possible (even on my monstrous 17" laptop). So take it from me: click that upgrade button and fork over that $50 for anything over an hour flight - the comfort and ability to work is usually worth it!

    Las Vegas Arrival - Welcome to Sin City

    Got into Las Vegas and swung out of the airport. My comrade Beth and I then attempted to get Internet access for the next 3 hours. Las Vegas is not the most friendly Internet access town. I will just say it: I am not sure why any Internet-related company (a la Microsoft) would hold a conference here. There are more than a dozen other cities that would be better. But I digress - I did manage to get Internet access after checking into the Circus Circus. Don't ask why I ended up staying here; if you run into me in person, ask then, because there is a whole story to it.

    At this point I started checking out each session further on the MIX10 site. There are a number I deemed necessary to check out. However, you'll have to read my pending entries to see which sessions I jumped into.

    With this juncture in time reached, I have a ton of work to wrap up, some code to write and some sleep to get. Until tomorrow, adieu.

    For more of my writing, thoughts, and other topics check out my other blog, where the original entry is posted.


  • Oracle Utilities Application Framework V4.2.0.0.0 Released

    - by ACShorten
    The Oracle Utilities Application Framework V4.2.0.0.0 has been released with Oracle Utilities Customer Care and Billing V2.4. This release includes new functionality and updates to existing functionality, and will be progressively released across the Oracle Utilities applications. The release is quite substantial, with lots of new and exciting changes. The release notes shipped with the product include a summary of the changes implemented in V4.2.0.0.0, including:

    - Configuration Migration Assistant (CMA) - A new data management capability that allows you to export and import configuration data from one environment to another, with support for approval/rejection of individual changes.
    - Database Connection Tagging - Additional tags have been added to the database connection to allow database administrators, Oracle Enterprise Manager and other Oracle technology to monitor and use individual database connection information.
    - Native Support for Oracle WebLogic - In the past the Oracle Utilities Application Framework used Oracle WebLogic in embedded mode; now, to support advanced configuration and the ExaLogic platform, we are adding native support for Oracle WebLogic as a configuration option.
    - Native Web Services Support - In the past the Oracle Utilities Application Framework supplied a servlet to handle Web Services calls; now we offer an alternative that uses the native Web Services capability of Oracle WebLogic. This allows for enhanced clustering, a greater level of Web Service standards support, enhanced security options, and the ability to use the Web Services management capabilities in Oracle WebLogic to implement higher levels of management, including defining additional security rules to control access to individual Web Services.
    - XML Data Type Support - The Oracle Utilities Application Framework now allows implementors to use XML data types in Oracle in the definition of custom objects, to take advantage of XQuery and other XML features.
    - Fuzzy Operator Support - The Oracle Utilities Application Framework supports the use of the fuzzy operator in conjunction with Oracle Text to take advantage of the fuzzy searching capabilities within the database.
    - Global Batch View - A new JMX-based API has been implemented to allow JSR120-compliant consoles to view batch execution across all threadpools in the Coherence-based named cache cluster.
    - Portal Personalization - It is now possible to store runtime customizations of query zones, such as preferred sorting, field order and filters, to reuse as personal preferences each time that zone is used.

    These are just the major changes; quite a few more have been delivered (and more are to come in the service packs!). Over the next few weeks we will be publishing new whitepapers and new entries in this blog outlining new facilities that you will want to take advantage of.



  • Playing a video logs me out

    - by Kartick Vaddadi
    When I try to play a video in VLC, Totem or Banshee, it immediately logs me out. Sometimes this happens when I try to make the video full screen. This seems to happen only after upgrading to Ubuntu 11, and happens for multiple kinds of files, like .avi and .m4v. The motherboard is an Asus A8V-MX. Please help me fix my Ubuntu installation. Thanks.

    Here are the relevant entries from syslog:

        21:12:27 enlightenment kernel: [  488.157457] powernow-k8: Hardware error - pending bit very stuck - no further pstate changes possible
        May 1 21:12:27 enlightenment kernel: [  488.158634] powernow-k8: transition frequency failed
        May 1 21:12:27 enlightenment kernel: [  488.264015] powernow-k8: failing targ, change pending bit set
        May 1 21:12:27 enlightenment kernel: [  488.306466] agpgart-amd64 0000:00:00.0: AGP 3.0 bridge
        May 1 21:12:27 enlightenment kernel: [  488.306489] agpgart-amd64 0000:00:00.0: putting AGP V3 device into 8x mode
        May 1 21:12:27 enlightenment kernel: [  488.306562] pci 0000:01:00.0: putting AGP V3 device into 8x mode
        May 1 21:12:27 enlightenment kernel: [  488.372044] powernow-k8: error - out of sync, fix 0x2 0xa, vid 0x4 0x4
        May 1 21:12:27 enlightenment kernel: [  488.372055] powernow-k8: ph2 null fid transition 0xa
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Successfully made thread 1987 of process 1987 (n/a) owned by '105' high priority at nice level -11.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Supervising 1 threads of 1 processes of 1 users.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Successfully made thread 1988 of process 1987 (n/a) owned by '105' RT at priority 5.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Supervising 2 threads of 1 processes of 1 users.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Successfully made thread 1989 of process 1987 (n/a) owned by '105' RT at priority 5.
        May 1 21:12:30 enlightenment rtkit-daemon[1304]: Supervising 3 threads of 1 processes of 1 users.
        May 1 21:12:32 enlightenment gdm-simple-greeter[1975]: Gtk-WARNING: /build/buildd/gtk+2.0-2.24.4/gtk/gtkwidget.c:5687: widget not within a GtkWindow
        May 1 21:12:32 enlightenment gdm-simple-greeter[1975]: WARNING: Unable to load CK history: no seat-id found
        May 1 21:12:34 enlightenment gdm-session-worker[1978]: GLib-GObject-CRITICAL: g_value_get_boolean: assertion `G_VALUE_HOLDS_BOOLEAN (value)' failed
        May 1 21:12:38 enlightenment gdm-session-worker[1978]: pam_sm_authenticate: Called
        May 1 21:12:38 enlightenment gdm-session-worker[1978]: pam_sm_authenticate: username = [rama]
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Successfully made thread 2108 of process 2108 (n/a) owned by '1000' high priority at nice level -11.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Supervising 4 threads of 2 processes of 2 users.
        May 1 21:12:39 enlightenment pulseaudio[2108]: pid.c: Stale PID file, overwriting.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Successfully made thread 2111 of process 2108 (n/a) owned by '1000' RT at priority 5.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Supervising 5 threads of 2 processes of 2 users.
        May 1 21:12:39 enlightenment rtkit-daemon[1304]: Successfully made thread 2112 of process 2
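    The powernow-k8 "Hardware error" lines immediately before GDM restarts point at CPU frequency scaling rather than the video players themselves. A minimal sketch for testing that theory is below; the package, governor and module names are the stock ones on Ubuntu of this era, so adjust if yours differ.

        # Pin the CPU to one frequency so powernow-k8 never changes p-states,
        # then try playing the same video again.
        sudo apt-get install cpufrequtils
        sudo cpufreq-set -g performance

        # If the logouts stop, unload the driver to confirm the diagnosis
        # (only possible if powernow-k8 is built as a module on your kernel).
        sudo modprobe -r powernow_k8

    If the crash disappears with a fixed governor, a BIOS update for the A8V-MX or disabling Cool'n'Quiet in the BIOS setup may be the longer-term fix.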

    Read the article

  • Configurable Objects - Introduction

    - by Anthony Shorten
    One of the interesting facilities in the framework is Configurable Object functionality (also known as Task Optimization and as Cool Tools). The idea is that any implementation can create its own views of the base product objects and services and implement functionality against those new views.

    For example, in Oracle Utilities Customer Care and Billing there is a Person object. That object is used to store and manage information about individuals as well as companies. In the base product you would use the Person Maintenance screen, filling in some parts of the screen when you wanted to register or maintain an individual and other parts when you wanted to register or maintain a company. This can be somewhat confusing to some customers. Using Configurable Objects, this can be simplified. A business object can be created that is a view of any object. For example, you could create a Human business object to cover the aspects of the Person object pertaining to an individual, and a Company business object to cover the aspects unique to a company. Even the tag names (i.e. field names) in the object can be changed to match what the implementation is familiar with. The new object can also restructure the base object. For example, a common identifier for an individual in the USA is the Social Security number; in the base product this value is a Person Identifier (as identifiers vary by country). In the new Human object you can remap the Person Identifier as a Social Security number.

    To define a business object you use a schema editor built into the browser user interface, along with a mapping language to set up the business object. The schema for the Human business object, for example, contains mapping tags as well as formatting and other tags. This information can be built manually or with a wizard which generates the base structure for you to alter, and it is all stored as metadata when saved. Once a business object is built it can be used as the basis for code or for other business objects (we support inheritance), called from a screen (a UI Map), or even exposed as a Web Service.

    This is just a start with Configurable Objects, as you can also create views of base services called Business Services, Service Scripts used for non-object or complex object processing (as well as other things), UI Maps used for screens, and Data Areas to reuse definitions across multiple objects. Configurable Objects are powerful and I have only really touched on them here. Over the next few months I hope to add lots more entries about them.

    Read the article

  • What does the `dmesg` error: "composite sync not supported" mean?

    - by M. Tibbits
    Question: I see

        [   20.473125] composite sync not supported

    and several such entries when I run dmesg. What do they mean?

    Background: I'm trying to debug a problem where my laptop won't suspend. Since ACPI seems happy and I can suspend easily from the command line, I've turned to tracking down all boot-up errors/warnings. So I run dmesg | grep not and, amongst other stuff, I get:

        728:[   17.267120] composite sync not supported
        733:[   18.009061] composite sync not supported
        740:[   18.159289] registered panic notifier
        749:[   18.162500] vga16fb: not registering due to another framebuffer present
        757:[   18.598251] composite sync not supported
        776:[   20.473125] composite sync not supported
        777:[   20.932266] composite sync not supported
        778:[   28.350231] composite sync not supported
        779:[   28.924913] composite sync not supported
        780:[   35.480658] composite sync not supported

    The full log for the few lines right around that first appearance (line 728) is listed at the bottom of my post (I'd happily include anything else). Any ideas what could be causing this? I've read several sites (Ubuntu Forums, an IRC chat log); one post talks about Adobe Flash causing this error? Some others suggest that it might be an nvidia-related problem, but I've got a Dell Latitude D630 with integrated Intel graphics -- so nvidia isn't the problem.

        [   17.207142] phy0: Selected rate control algorithm 'minstrel'
        [   17.207833] Registered led device: b43-phy0::tx
        [   17.207849] Registered led device: b43-phy0::rx
        [   17.207865] Registered led device: b43-phy0::radio
        [   17.207927] Broadcom 43xx driver loaded [ Features: PL, Firmware-ID: FW13 ]
        [   17.267120] composite sync not supported
        [   17.415795] EXT4-fs (sda2): mounted filesystem with ordered data mode
        [   17.602131] [drm] initialized overlay support
        [   17.620201] input: DualPoint Stick as /devices/platform/i8042/serio1/input/input7
        [   17.641192] input: AlpsPS/2 ALPS DualPoint TouchPad as /devices/platform/i8042/serio1/input/input8
        [   18.009061] composite sync not supported
        [   18.106042] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3af: clean.
        [   18.108115] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x3e0-0x4ff: clean.
        [   18.108941] pcmcia_socket pcmcia_socket0: cs: IO port probe 0x820-0x8ff: clean.
        [   18.109676] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcf7: clean.
        [   18.110356] pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
        [   18.159286] fb0: inteldrmfb frame buffer device
        [   18.159289] registered panic notifier
        [   18.160218] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:01/input/input9
        [   18.160286] ACPI: Video Device [VID1] (multi-head: yes rom: no post: no)
        [   18.160334] ACPI Warning for \_SB_.PCI0.VID2._DOD: Return Package has no elements (empty) (20090903/nspredef-433)
        [   18.160432] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/LNXVIDEO:02/input/input10
        [   18.160491] ACPI: Video Device [VID2] (multi-head: yes rom: no post: no)
        [   18.160539] [drm] Initialized i915 1.6.0 20080730 for 0000:00:02.0 on minor 0
        [   18.162494] vga16fb: initializing
        [   18.162497] vga16fb: mapped to 0xc00a0000
        [   18.162500] vga16fb: not registering due to another framebuffer present
        [   18.176091] HDA Intel 0000:00:1b.0: PCI INT A -> GSI 21 (level, low) -> IRQ 21
        [   18.176123] HDA Intel 0000:00:1b.0: setting latency timer to 64
        [   18.285752] input: HDA Digital PCBeep as /devices/pci0000:00/0000:00:1b.0/input/input11
        [   18.312497] input: HDA Intel Mic at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input12
        [   18.312586] input: HDA Intel HP Out at Ext Left Jack as /devices/pci0000:00/0000:00:1b.0/sound/card0/input13
        [   18.328043] usbcore: registered new interface driver ndiswrapper
        [   18.460909] Console: switching to colour frame buffer device 180x56
        [   18.598251] composite sync not supported
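    Since the machine has integrated Intel graphics, the "composite sync not supported" lines almost certainly come from the i915 driver probing display outputs (TV-out) and are unrelated to suspend. A minimal sketch for checking that is below; it assumes the standard drm debug module parameter and that pm-utils handles suspend on this release (both are assumptions about this particular install).

        # Turn up DRM debug output so each message can be attributed to the
        # code path that printed it (drm.debug is a standard module parameter).
        echo 0x06 | sudo tee /sys/module/drm/parameters/debug
        dmesg | grep -i -B1 -A1 "composite sync"

        # If the messages track output detection rather than suspend attempts,
        # they are noise for this problem; look at the suspend log instead.
        cat /var/log/pm-suspend.log    # assumes pm-utils; path may differ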

    Read the article

< Previous Page | 95 96 97 98 99 100 101 102 103 104 105 106  | Next Page >