Search Results

Search found 45987 results on 1840 pages for 'copy files'.


  • Clone a Windows Installation to a 3TB Hard Drive; MBR to GPT

    - by DanBlakemore
    I have Windows 7 Professional 64-bit installed on my desktop. Unfortunately for me and my wallet, my hard drive is failing. I have purchased a 3TB hard drive as a replacement for my current 2TB drive. I would like to avoid as much hassle as possible in moving to this new drive, so I would like to copy my current partition to the new drive using GParted. The problem is that I suspect my current partition table is MBR, and I need GPT on my new drive since it is 3TB. Can I simply copy the MBR partition onto the new disk and then convert the disk to GPT after the fact (can you even convert a partition table in place)? Or would I need to somehow copy the contents of the partition into a GPT partition on the new drive? How do I go about making this transition? Also, are there any issues I should be wary of when booting from a GPT disk? If it matters, my motherboard is 1 year old as of May, 2012.

    Edit: My motherboard is 1 day old. My old one did not have UEFI support, so I upgraded to an Intel board today, given that I would need a UEFI motherboard to use my new HDD. How much can I use a dying hard drive (bad sectors according to Hitachi Drive Fitness Test)? I have assumed not at all, to be safe.
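
    A minimal sketch of one possible approach from a live environment: clone the disk once, then convert the copied table in place with gdisk. The device names are assumptions (verify with lsblk before running anything), and note that booting Windows from GPT additionally requires UEFI firmware, which matches the motherboard edit above.

        # clone the old disk onto the new one; ddrescue (package gddrescue)
        # tolerates bad sectors far better than plain dd
        sudo ddrescue -f /dev/sda /dev/sdb rescue.map

        # convert the copied MBR table to GPT in place: gdisk detects the MBR
        # layout, builds an equivalent GPT, and commits it when you type 'w'
        sudo gdisk /dev/sdb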

    Read the article

  • How can I extract data from Toshiba Satellite with a dead Windows installation?

    - by msanford
    I've got a Toshiba Satellite (unknown model number, but bought early 2010) running Windows Vista which throws a kernel error on boot. We don't have the restore/recovery CD any more to restore the Windows partition. I have managed to boot to a live CD version of Ubuntu 10.10 and have mounted the internal hard drive (which takes nearly 8 minutes). I suspect that the hard drive is malfunctioning, however, because copying even 30 megs of data to an attached and mounted USB flash drive takes over an hour, and some files are mysteriously inaccessible (not a permissions issue). When browsing folders, it takes many minutes to populate the folder window, even for a folder with a single tiny file. During the copy tasks, the hard disk sounds as if it tries to spin down several times in rapid succession, then resumes accessing at what sounds like full throughput. I initially tried using scp (from the shell) to copy data, but I encountered the same local problems. I don't know the S.M.A.R.T. status of the hard disk, either. Is there a more effective way of going about recovering the data on the internal disk, assuming that I can't use a recovery CD and am too cheap to bring it in (for now, at least)?
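
    A minimal sketch of the usual approach for a disk in this state: image the partition once with GNU ddrescue, which logs and retries around bad sectors, and then pull files from the image instead of hammering the ailing drive. The paths and partition name here are assumptions; check sudo fdisk -l first.

        sudo apt-get install gddrescue smartmontools
        sudo smartctl -a /dev/sda                # read the S.M.A.R.T. status first
        # -d: direct disc access, -r3: retry bad areas three times
        sudo ddrescue -d -r3 /dev/sda2 /media/usb/win.img /media/usb/win.map
        # recover files from the image, read-only
        sudo mkdir -p /mnt/img && sudo mount -o loop,ro /media/usb/win.img /mnt/img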

    Read the article

  • Duplicate forwarded messages in Blackberry when using BIS

    - by Avery Payne
    Our Setup: External email arrives at a Postfix server, is scanned, and is then forwarded via settings in transport (using RELAY:[{ip-address}] for a given address) to an Exchange 2007 server. Some users are on Exchange, but a few are still on the Postfix server (they will be moved in the near future). IMAPS is provided for external connections via Dovecot; in-house, IMAP is provided for the gateway and native MAPI is used for Exchange/Outlook. Blackberries are connected via BIS, which uses Dovecot as a reverse-proxy IMAPS service to connect to Exchange (when the mailbox exists on Exchange; otherwise it connects to the mailbox on the gateway).

    The Issue: We have a user who, when they forward an email from their Outlook client, gets a duplicate of the original message on their Blackberry. By duplicate, I mean that they receive the forwarded version of the message (i.e. the version they produced by hitting the Forward button) and a copy of the original message, both arriving at the same time. The expected behavior is to see just the forwarded message, not the forwarded message plus a second copy of the original. We've only seen this with Outlook users who also have a Blackberry. Other IMAP clients, such as OS X Mail or Thunderbird, do not exhibit this behavior when connecting to the Exchange server; forwarded messages work as expected.

    The Questions: What is causing this to happen? Why does it only affect Outlook/Blackberry setups, and not TBird/Blackberry or OSX-Mail/Blackberry? How do we get it to stop, before people go insane and never forward messages again?

    Read the article

  • Cheated!! Please help

    - by Rohit K
    I was experiencing some hard disk problems with my HP laptop. At bootup, when the system diagnostics ran, it showed "SMART Drive attribute failed". I also repeatedly got a warning of imminent hard drive failure telling me to back up my data. So I gave my laptop to an authorized HP service center for repair. They formatted my hard disk and installed a pirated copy of Win 7 Ultimate (I earlier had genuine Win 7 Home Premium running on my laptop); they told me they cover out-of-warranty issues and even charged me for it. What's worse is that the hard drive problem is still present, and all I am left with is an illegal copy of Windows, which I think also voids my warranty. What should I do? I mean, I did purchase a genuine Windows license with my laptop, so there must be some way to reinstall it even though I don't have a genuine copy on my machine now. Can't I get legitimate keys to reinstall Win 7 from Microsoft, since I did pay for the software when I purchased my machine? And if that's not possible, how can I claim warranty and get my hard disk replaced by HP?

    Read the article

  • vmware server 64 bit on ubuntu 9.10 64 bit with P2V windows 2003 SBS poor network speed

    - by RobertHC
    Configuration: Ubuntu 2.6.31-21, 64-bit; VMware Server 2.0.2, 64-bit (latest release). Hardware is a Core 2 Quad with 8GB RAM; the guest is Windows 2003 Server SBS, 32-bit.

    Dear friends, we have a Windows SBS 2003 machine converted from physical to virtual with the latest converter available today, vCenter Converter (http://www.vmware.com/products/converter/). Running the P2V 2K3 SBS on VMware Server, it boots fine, but we noticed abnormal CPU activity and poor LAN speed. We tried the following: we removed all unneeded peripherals, removed one NIC (the physical server had two NICs), changed the .vmx to get the NIC recognized as Intel instead of AMD, removed one CPU (the physical machine had two), and removed anything reported as a failed driver in the system event monitor. Nothing worked, and the results were odd.

    Here are some test results, all made with the same file copied from different source folders. Copying from the client side (both directions, to/from the server) takes about 10 seconds. Copying the same file from the server side (again from and to the server) is different: in the client-to-server direction the speed is again roughly 10 seconds, but the server-to-client direction is slower: double the time. Launching a simultaneous copy from server to client and from client to server, both initiated from the server side, results in stuck traffic: 45 seconds to do the copy. VMware Tools are installed and the e1000 driver has been updated. With one processor, CPU activity still goes up and down, but much less than with two.

    As a test, we installed Windows 2008 Standard 64-bit and repeated all of the above tests with exactly the same file. The result is always 5 seconds, which matches the LAN speed. Any idea about this issue is welcome; thank you in advance. Kind regards, R.
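
    For reference, the .vmx change mentioned above (presenting the virtual NIC as an Intel e1000 instead of the default AMD PCnet device) is a one-line edit made while the guest is powered off; the ethernet0 index is an assumption, so use whichever index your adapter actually has:

        ethernet0.virtualDev = "e1000"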

    Read the article

  • Performance of external USB disk with ESXi5

    - by PeterMmm
    I have a new HP DL120 G7 server with ESXi5. One VM is a Win2003 installation, and I have an external USB 2.0 drive attached via USB Controller and USB Device. I copy a 4GB file from the external USB drive to the server disk. In the VM that takes up to 10 minutes. On a native Win2003 box it takes approx. 3 minutes. I have no explanation for that difference: in either case the bottleneck is the USB connection, which is much slower than the disks (SAS, RAID1). If the USB connection in the VM were USB 1.1 rather than USB 2.0, it would take much more time. (The disk performance between server partitions within the VM is fine; see the update.) Could it be that my native box is extremely fast and the VM is the normal case?

    Update: I tried with passthrough, and a first run copied the same data in approx. 7 minutes. Still 2 times slower than the native connection. I also did another measurement: the copy between partitions on the same VM takes 3 minutes.

    Read the article

  • Strange network issue (ZIP file fails CRC test over VPN)

    - by Joe Schmoe
    We have a server in the office running Windows Server 2003. Our office is connected to our datacenter via a hardware VPN (a Linksys RV082 router in the office to a Cisco router in the datacenter). A job runs on the office server that does the following: ZIP certain files from the server using 7-Zip; copy the ZIP file to a network share in the office and verify ZIP integrity; copy the ZIP file to a network share in the datacenter and verify ZIP integrity. The problem is that verifying ZIP integrity for the file in the datacenter always fails. However, if I run 7-Zip on the datacenter server that exposes that share, the ZIP file verifies just fine, so it is not actually corrupted during the copy operation. Additionally, I tried running 7-Zip on other computers in the office to verify the ZIP file on the datacenter file share, and it verifies OK. I tried plugging the server into the same network port where my workstation is connected, using a different cable (my workstation doesn't exhibit this problem), and ZIP verification still fails. So the problem is local to that specific server. In the network adapter properties for the server in question, there is no "Advanced" tab where one can usually configure a lot of network settings. The network card driver is up to date (Windows Update doesn't find anything newer, and the Lenovo website doesn't have any Windows 2003 drivers for this computer model). Is there any other way to configure network settings via the command line? What settings could be relevant to this problem?
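
    A hedged sketch of one command-line avenue on Windows Server 2003: a single machine corrupting data on the wire is a classic symptom of NIC checksum/task offload, which can be turned off via the registry even when the driver exposes no "Advanced" tab. This is a common culprit to test, not a confirmed diagnosis, and a reboot is required afterwards:

        rem inspect current IP configuration from the command line
        netsh interface ip show config
        rem disable NIC task offload (checksum offload) system-wide
        reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v DisableTaskOffload /t REG_DWORD /d 1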

    Read the article

  • Install Quartz.Net as a windows service and Test installation

    - by Tarun Arora
    In this blog post I'll be covering:

    01: Where to download Quartz.net from
    02: How to install Quartz.net as a Windows service
    03: Testing the Quartz.net installation

    If you are new to Quartz.net I would recommend reading my blog post giving a brief introduction to Quartz.net.

    01 – Where to download Quartz.net?

    http://sourceforge.net/projects/quartznet/files/quartznet/ – currently Quartz.Net 2.0.1 is the recommended download version.

    02 – How to install Quartz.net as a Windows service

    1. Go to the download location and unzip the Quartz.net package.
    2. Navigate to the folder Quartz.Net \ Server \ bin; this is where you will find the installers for the different .NET versions of the Quartz.net packages, for example the Quartz.net .NET 3.5 and .NET 4 packages.
    3. Open up the Quartz.net .NET 4.0 folder; this folder contains the files you need to install Quartz.net as a Windows service.
    4. Copy the contents of the folder Downloads\Quartz.NET-2.0.1\server\bin\4.0 to the folder %program files%\Quartz.net.
    5. Open up a new CMD prompt as an administrator and run the command below to install Quartz.net as a Windows service:

        Quartz.Server.exe install

    6. How do I know that the Quartz.Net service has been installed as a Windows service? Go to the run prompt and type 'services.msc'; you should now see all the Windows services installed on your machine. Navigate down to look for Quartz.Net. The service installs itself with an automatic startup type and logs on as 'Local System'. You can easily change this to whichever account you would prefer to run the service as.

    If you wanted to name the Quartz service something else, that's also possible. Can I change the default display name of the quartz.net Windows service? Yes, you can! Navigate to C:\Program Files (x86)\Quartz.Net\ and open up the config file 'quartz.config': you can change the instance name, the default thread count of 10, and the port that the service listens on (by default this is port 555). A blog post with more configuration details can be found here.

    03 – Test the Quartz.Net Windows service installation

    So, I have installed Quartz.Net as a Windows service; how do I test whether my installation has been successful? Open up CMD as an administrator and run the command below:

        C:\Program Files (x86)\Quartz.Net> Quartz.Server.exe -i

    Since by default the Quartz.net Windows service writes INFO-level diagnostics (this can be changed in Quartz.Server.exe.config), you should see the service information show up on the console. For instance, in the example above I can see that the service is running in NON-CLUSTERED mode, is currently not started, and is in standby mode with 0 jobs executed so far.

    This was the second in a series of posts on enterprise scheduling using Quartz.net; in the next post I'll be covering how to run your first scheduled task using the Quartz.net Windows service. Thank you for taking the time to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
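
    For reference, the quartz.config mentioned above is a plain properties file; the relevant lines look roughly like the sketch below. The values shown are the defaults named in the post, and the exact keys are taken from the Quartz.NET 2.x server distribution, so double-check them against your own copy:

        # scheduler identity
        quartz.scheduler.instanceName = ServerScheduler
        # worker threads available to the scheduler
        quartz.threadPool.threadCount = 10
        # remoting exporter: the TCP port the service listens on
        quartz.scheduler.exporter.port = 555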

    Read the article

  • How To Rip an Audio CD to FLAC with Foobar2000

    - by Mysticgeek
    Foobar2000 is a great audio player that is fully customizable, is light on system resources, and contains a lot of tools and features. Today we show you how to use it to rip an audio CD to FLAC format.

    Note: For this tutorial we're going to assume this is the first time you're ripping a disc with Foobar2000. We're running it on Windows 7 Ultimate 64-bit.

    Install Foobar2000 and FLAC

    First download and install Foobar2000 (link below). The main things you'll want to make sure to enable during the install process are Audio CD Support and the freedb Tagger, which are located under Optional Features; then continue through the rest of the install wizard. Next you need to install the latest version of the FLAC codec (link below), following the defaults.

    Rip Audio CD

    To rip a CD, place it in your CD-ROM drive, launch Foobar2000, and click File \ Open Audio CD. Select the appropriate CD drive and click the Rip button. Next you'll want to look up the disc information with freedb… or you can manually enter the track data if it's a custom disc. Select the proper tag information in the freedb tagger window, then click Update files. The data will be entered in; make sure the radio button next to "Go to the Converter Setup dialog" is selected, and click the Rip button. In the Converter Setup screen you can select the output format, which in our case is FLAC. In this window you can also choose several other options, like the output path, merging the tracks into one file or keeping individual files, etc. When you have those settings completed, click OK. Next you'll need to point it at flac.exe, which is located wherever you installed it; on our 64-bit Windows 7 system the default path is C:\Program Files (x86)\FLAC. Now wait while your CD is ripped and converted to FLAC. You'll get a Converter Status Report… after you've checked it over you can close out of it. If you set the option to show the output files after conversion, you can take a look, make sure all tracks were converted, and play them right away if you want.

    You can play the tracks in Foobar2000 or any player that supports FLAC. If you want to use WMC or WMP, see our article on how to play FLAC files in Windows 7 Media Center or Player. That's all there is to it! If you're a fan of Foobar2000 and enjoy your music converted to FLAC format, Foobar2000 does the job quite well. There are a lot of customizations and tools you can use in Foobar2000 that we'll be taking a look at in future articles. For more information check out our look at this fully customizable music player. Foobar2000 runs on XP, Vista, and Windows 7.

    Links: Download Foobar2000 | Download FLAC
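
    As an aside, the conversion step simply drives flac.exe, and the same encoder works stand-alone from a command prompt if you ever need to convert a ripped WAV by hand. A minimal sketch, with the file names assumed:

        rem -8: highest compression, -V: verify the encode against the input
        "C:\Program Files (x86)\FLAC\flac.exe" -8 -V -o "01 - Track.flac" "01 - Track.wav"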

    Read the article

  • OBIEE 11.1.1 - Introduction to OBIEE 11g Full Sample App

    - by user809526
    Isn't it nice to discover OBIEE 11g through a nice "How To" catalog of features? To observe OBIEE and Essbase relationships at work? To discover TimesTen? The OBIEE 11g Full Sample App (FSA) is a comprehensive collection of examples designed to demonstrate the latest Oracle BIEE 11g capabilities and design best practices: enhanced visualizations such as geo-spatial maps and interactive dashboards, the Action Framework, BI Publisher, Scorecard and Strategy Management, mobile style sheets, semantic layer modeling, multi-source federation, integration with products such as Essbase, Oracle OLAP, ODM, TimesTen, ODI, and more. The FSA is intended to be comprehensive, and it is big (see CAVEAT below). The FSA is not an Oracle product; it is a goodwill free deployment of OBIEE/Essbase designed to exemplify OBIEE features, infrastructure, and security around the Fusion Middleware components. Its contents and code are distributed free for demonstrative purposes only. It is neither maintained nor supported by Oracle as a licensed product. The OBIEE Full Sample App is independent of the default Sample App that comes with the OBIEE product.

    BENEFITS: The FSA serves as a demonstrator of OBIEE 11g best practices, a tutorial, a "test & scrap" environment, an SR bench (regression, conflicts), a tuning bench, a ready-made POC seed for projects, a security options environment, and more. The FSA is organized around a catalog of functional features and has been deployed over 1000 times, so it should be stable.

    RELEASE: The Full Sample App (V107) is bound to OBIEE 11.1.1.5 and Essbase 11.1.2.1 (November 2011). The FSA release dates are independent of the product GA date (OBIEE). In early December 2011, a new functional patch (V110) was released. It is easily applied (in less than 15 minutes) on top of OBIEE SampleApp 11.1.1.5 (V107). The patch (V110) includes additional functional examples:

    1. Web Catalog Statistics Application: provides detailed insight into your web catalog content, dormant catalog objects, webcat impact analysis for metadata changes, and more
    2. Data Inflation Scripts: a set of simple SQL procedures to quickly inflate SampleApp fact and dimension data to millions of records in a few minutes
    3. Public Content Extensions Framework: a patching framework for public examples and contributions leveraging SampleApp
    4. Additional report examples (including a bridge report and external chart integrations) and bug fixes

    DISTRIBUTION as a VBox image (November 2011): The ready-made VBox image is designed to run on VirtualBox. It can be converted to VMware (see another blog post).

    1. http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html – VBox Image Deployment Guide; Sampleapp_v107_GA.ovf is the VBox image key file. This http URL provides the user:password for the ftp URLs below.
    2. ftp://user:[email protected]/static/SampleAppV107/ – twelve 7-Zip files, Sampleapp_v107_GA_7_20.7z.001 -> .012. We recommend the 7-Zip file manager for unzipping (http://www.7-zip.org/). Select the "Unzip here" option; it will create the contents under a directory named "SampleApp_10722". On Windows, it is important to download and save the zip files under the root directory (e.g. C:\ or D:\) because of possible long pathnames.
    3. ftp://user:[email protected]/static/SampleAppV107/Unzipped_Version/ – 4 files, Sampleapp_v107_GA-disk[1234].vmdk.

    Important note: check the provided checksums (md5sum). Please do it!

    DISTRIBUTION as installation files for an existing OBI 11.1.1.5 (November 2011): http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html – install files and Deployment Guide, SampleApp_10722_1.zip (198 MB).

    CAVEAT: Many computers have RAM chip problems that often stay silent... until you manipulate big files. It is strongly advised that you run a memory check program, e.g. MEMTEST from the GRUB boot manager. Running md5sum repeatedly on the very same big file must give consistent results (the same hash every time); otherwise a hardware memory problem should be suspected. For VirtualBox, you should most likely enable VT-x (Vanderpool) hardware virtualization in the BIOS. Free disk space of 80 GB is required to perform the VBox image installation safely. A virtual machine with a minimum of 6 to 7 GB of memory fits the needs of running OBIEE and Essbase together.
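
    A minimal sketch of the checksum discipline the CAVEAT describes, run in the download directory (the file name is assumed): three identical hashes suggest healthy RAM and an intact download, while any variation between runs points at hardware.

        for i in 1 2 3; do md5sum Sampleapp_v107_GA_7_20.7z.001; done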

    Read the article

  • FluentPath: a fluent wrapper around System.IO

    - by Bertrand Le Roy
    .NET is now more than eight years old, and some of its APIs got old with more grace than others. System.IO in particular has always been a little awkward. It's mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo). In these APIs, paths are plain strings. Since .NET v1, lots of good things have happened to C#: lambda expressions, extension methods, and optional parameters, to name just a few. Outside of .NET, other interesting things happened as well. For example, you might have heard about this JavaScript library that had some success introducing a fluent API to handle the hierarchical structure of the HTML DOM. You know? jQuery.

    Knowing all that, every time I need to use the stuff in System.IO, I cringe. So I thought I'd just build a more modern wrapper around it. I used a fluent API based on an essentially immutable Path type and an enumeration of such path objects. To achieve the fluent style, a healthy dose of lambda expressions is used to act on the objects. Without further ado, here's an example of what you can do with the new API. In this example, I'm using a Media Center extension that wants all video files to be in their own folder. For that, I need a small tool that creates a directory for each video file and moves the file in there. Here's the code for it:

        Path.Get(args[0])
            .Select(p => p.Extension == ".avi" || p.Extension == ".m4v" ||
                         p.Extension == ".wmv" || p.Extension == ".mp4" ||
                         p.Extension == ".dvr-ms" || p.Extension == ".mpg" ||
                         p.Extension == ".mkv")
            .CreateDirectory(p => p.Parent
                                   .Combine(p.FileNameWithoutExtension))
            .Previous()
            .Move(p => p.Parent
                        .Combine(p.FileNameWithoutExtension)
                        .Combine(p.FileName));

    This code creates a Path object pointing at the path given by the first command-line argument of my executable. It then selects all video files. After that, it creates directories that have the same names as each of the files, but without their extension. The result of that operation is the set of created directories. We can now get back to the previous set using the Previous method, and finally we can move each of the files in the set to the corresponding freshly created directory, whose name is the combination of the parent directory and the filename without extension.

    The new fluent path library covers a fair part of what's in System.IO in a single, convenient API. Check it out, I hope you'll enjoy it. Suggestions are more than welcome. For example, should I make this its own project on CodePlex, or is this informal style just OK? Anything missing that you'd like to see? Is there a specific example you'd like to see expressed with the new API? Bugs? The code can be downloaded from here (this is under a new BSD license): http://weblogs.asp.net/blogs/bleroy/Samples/FluentPath.zip

    Read the article

  • Silverlight Cream for January 26, 2011 -- #1036

    - by Dave Campbell
    In this all-submittal issue: XamlNinja, Kevin Dockx, Steve Wortham, Andrea Boschin, Mick Norman, Colin Eberhardt, and Rudi Grobler (-2-, -3-, -4-, -5-).

    Above the Fold: Silverlight: "Getting an invalid cross-thread exception in Silverlight?" Kevin Dockx. WP7: "WP7 Contrib – the last messenger" XamlNinja. ISO: "How many files are too many files for isolated storage?" Mick Norman.

    Shoutouts: Telerik announced a free WP7 webinar series that you probably don't want to miss: Join Us for the Special Free Windows Phone 7 Webinars Series. Guest lecturers: Shawn Wildermuth and Mark Arteaga.

    From SilverlightCream.com:

    WP7 Contrib – the last messenger: XamlNinja has a great post up extending Laurent's IMessenger to deal with a tricky issue of trying to fire a message from one VM to another, even if the 2nd VM isn't alive yet... oh, and this is in WP7Contrib, so go grab it!

    Getting an invalid cross-thread exception in Silverlight?: Kevin Dockx has a solution to a problem we've all had... the 'invalid cross-thread exception'... and the solution even works for those of us trying to do this in a VM... cool and easy solution, Kevin!

    Mastering Storyboards One Mistake at a Time: Steve Wortham is back with a tutorial with a great title :) ... check out the progression from one success to another in this picture/title viewer... don't miss the very end, where he has the control rolled up into a CaptionedImageHyperlink, and a link to download it!

    Windows Phone 7 - Part #2: Your First Application: Andrea Boschin has part 2 of his SilverlightShow WP7 series up. Lots of good intro material here on the manifest file and app.xaml... he even gets into the ApplicationBar, phone orientation, and the Metro theme.

    How many files are too many files for isolated storage?: Mick Norman alerted me to his blog early this morning, and this is his latest post... interesting tests of how many files are too many for ISO on your WP7... and I have to admit, he's stuffing a boatload of them out there in these tests! Great info, Mick, and thanks for the links.

    A Navigator Control For Visiblox Time Series Charts: Colin Eberhardt's latest post is about creating an interactive navigator for large time-series datasets in Visiblox charts... check the images at the top of the post and it'll be obvious :) ... very cool stuff.

    MVVM Frameworks with WP7 support: Rudi Grobler has been very busy, and if you check the dates, these posts are all within a day or two! This first one highlights two contenders for MVVM on WP7: Caliburn and MVVM Light... both well supported... a quick intro to each, followed by good links out to the authors' sites.

    Reading barcodes from your WP7 device: Rudi Grobler also has a cool post up on reading barcodes with your WP7... he's using the ZXing barcode scanning library, and makes quick work of the job.

    Taking Sterling for a Test-Drive: Rudi Grobler has a quick intro to Sterling, Jeremy Likness' ISO database for Silverlight, up... quickly taking care of writing and reading back data.

    SQLite on WP7: After his discussion of Sterling, Rudi Grobler demonstrates the use of SQLite, which has been ported to WP7. Check out his demo code... looks pretty easy to use.

    Hacking the WP7 Camera (The basics): Rudi Grobler's latest post is on getting direct access to the camera on WP7... be sure to do all the downloads and check out the external links he has.

    Stay in the 'Light!

    Read the article

  • Can't boot 12.04 installed alongside Windows 7

    - by PalaceChan
    I realize there are other questions like this one here, but I have visited them and tried several things, and nothing is helping. One of them had a suggestion to boot the live CD, sudo mount /dev/sda* /mnt, and then chroot and reinstall GRUB. I did this and it did not help. Then, on the Windows side, I downloaded a free version of EasyBCD and chose to add a Grub2 Ubuntu 12.04 entry. On restart I saw this entry, but when I click on it, it takes me to a "Windows failed to boot" error, as if it wasn't even trying to boot Ubuntu. I have booted from the Ubuntu live CD once again and have a snapshot of my GParted layout. I ran the bootinfoscript tool from the live CD; here are my results. It seems GRUB is on sda. I just want to be able to boot into my Ubuntu on startup.

    Boot Info Script 0.61 [1 April 2012]

    ============================= Boot Info Summary: ===============================

    => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1041658947 of the same hard drive for core.img. core.img is at this location and looks for (,gpt7)/boot/grub on this drive.

    sda1: File system: vfat. Boot sector type: Windows 7: FAT32. Boot sector info: no errors found in the Boot Parameter Block. Boot files: /efi/Boot/bootx64.efi

    sda2: File system: unknown. Mounting failed: mount: unknown filesystem type ''

    sda3: File system: ntfs. Boot sector type: Windows Vista/7: NTFS. Boot sector info: no errors found in the Boot Parameter Block. Operating System: Windows 7. Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe

    sda4: File system: ntfs. Boot sector type: Windows Vista/7: NTFS. Boot sector info: no errors found in the Boot Parameter Block.

    sda5: File system: ntfs. Boot sector type: Windows Vista/7: NTFS. Boot sector info: no errors found in the Boot Parameter Block. Boot files: /bootmgr /boot/bcd

    sda6: File system: BIOS Boot partition. Boot sector type: Grub2's core.img

    sda7: File system: ext4. Boot sector type: Grub2 (v1.99). Boot sector info: Grub2 (v1.99) is installed in the boot sector of sda7 and looks at sector 1046637581 of the same hard drive for core.img. core.img is at this location and looks for (,gpt7)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS. Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

    sda8: File system: swap

    ============================ Drive/Partition Info: =============================

    Drive: sda. Disk /dev/sda: 750.2 GB, 750156374016 bytes; 255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors. Units = sectors of 1 * 512 = 512 bytes. Sector size (logical/physical): 512 bytes / 4096 bytes.

    Partition   Boot   Start Sector   End Sector      # of Sectors    Id   System
    /dev/sda1          1              1,465,149,167   1,465,149,167   ee   GPT

    GUID Partition Table detected.

    Partition    Start Sector    End Sector       # of Sectors   System
    /dev/sda1    2,048           411,647          409,600        EFI System partition
    /dev/sda2    411,648         673,791          262,144        Microsoft Reserved Partition (Windows)
    /dev/sda3    673,792         533,630,975      532,957,184    Data partition (Windows/Linux)
    /dev/sda4    533,630,976     1,041,658,946    508,027,971    Data partition (Windows/Linux)
    /dev/sda5    1,412,718,592   1,465,147,391    52,428,800     Windows Recovery Environment (Windows)
    /dev/sda6    1,041,658,947   1,041,660,900    1,954          BIOS Boot partition
    /dev/sda7    1,041,660,901   1,396,174,572    354,513,672    Data partition (Windows/Linux)
    /dev/sda8    1,396,174,573   1,412,718,591    16,544,019     Swap partition (Linux)

    blkid output:

    Device       UUID                                   TYPE       LABEL
    /dev/loop0                                          squashfs
    /dev/sda1    B498-319E                              vfat       SYSTEM
    /dev/sda3    820C0DA30C0D92F9                       ntfs       OS
    /dev/sda4    168410AB84108EFD                       ntfs       DATA
    /dev/sda5    AC7A43BA7A438056                       ntfs       Recovery
    /dev/sda7    42a5b598-4d8b-471b-987c-5ce8a0ce89a1   ext4
    /dev/sda8    5732f1c7-fa51-45c3-96a4-7af3bff13278   swap
    /dev/sr0                                            iso9660    Ubuntu 12.04 LTS i386

    ================================ Mount points: =================================

    Device       Mount_Point   Type       Options
    /dev/loop0   /rofs         squashfs   (ro,noatime)
    /dev/sr0     /cdrom        iso9660    (ro,noatime)

    =========================== sda7/boot/grub/grub.cfg: ===========================

    How can I get this option? When I was using EasyBCD, it kept saying I had no entries at all, so I did the "add entry" thing for Ubuntu many times, and I now see several of those on the boot screen. I'd love to get rid of all those unusable options.
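
    The script output shows a BIOS-boot setup on GPT (sda6 is a BIOS Boot partition and GRUB's core.img points at (,gpt7)), so the usual repair is a full chroot reinstall of GRUB from the live CD. A minimal sketch, assuming the Ubuntu root really is /dev/sda7 as reported:

        sudo mount /dev/sda7 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub
        # then clean the stray EasyBCD entries from the Windows side:
        #   bcdedit /enum   (to find the duplicate entry IDs)
        #   bcdedit /delete {id}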

    Read the article

  • Build Your Own CE6 Kernel

    - by Kate Moss' Big Fan
    The Shared Source program in Windows CE provides many modules in the %_WINCEROOT%\Private\ tree, and the kernel is one of them! Although it is not the full source of the kernel, it is good enough for tracing it, and even for tweaking the kernel. Tracing the kernel to see how it works is lots of fun, but it is fascinating to modify it and verify the change you made. So, first things first: where is the source of the kernel? It's in %_WINCEROOT%\private\winceos\COREOS\nk\.

    The next question will be "How do I build it?" Some of you may say just "build -c" there and it should be good. If you are the owner of the kernel and have the full source, that is definitely the right answer, but neither applies to our case. So what should I do? Let's dig deeper into the coreos\nk folder; there are a couple of subfolders: CELOG, KDSTUB, KERNEL, etc. KERNEL\ is the main component of kernel.dll; in other words, most modifications to the kernel are going to happen here. And the good thing is, you can "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\ with no errors at all. But before doing that, remember to back up everything you are going to modify, including the source and binaries; remember, this is not something that belongs to you, and if you don't restore it later, it could end up confusing subsequent QFE updates!

    Here are the steps:

    1. Back up the source code; I suggest the whole %_WINCEROOT%\private\winceos\COREOS\nk\.
    2. Back up the binaries in common\oak\lib\; again, if you are not sure which files, backing up the whole %_WINCEROOT%\common\oak\lib\ is the safest way.
    3. Make whatever modifications you want in %_WINCEROOT%\private\winceos\COREOS\nk\kernel\.
    4. Run "build -c" in %_WINCEROOT%\private\winceos\COREOS\nk\kernel.
    5. If everything went well so far, you should get new nkmain.lib, nkmain.pdb, nkprmain.lib and nkprmain.pdb files in %_WINCEROOT%\public\common\oak\lib\%_TGTCPU%\%WINCEDEBUG%\.

    Basically, you have just rebuilt your kernel; the rest is to run "blddemo clean -q" to have your new kernel SYSGEN'd and included in your OS image. Or just "set WINCEREL=1" then "sysgen -p common nk nkprof" and "makeimg" if you can't wait another minute for "blddemo clean -q".

    That sounds good, but some of you may not like the idea of altering any code in the private folder, not to mention how annoying it is to back up and restore files every time. A better idea? Yes, Microsoft provides a tool, SYSGEN_CAPTURE (see http://msdn.microsoft.com/en-us/library/ee504678.aspx for details and usage), that creates SOURCES files for public drivers that you want to modify and build in your platform directory. In fact, not only public drivers: virtually anything in the %_WINCEROOT%\public\<project name>\cesysgen\makefile can be captured, and of course that includes the kernel. So I am going to introduce a second way to build your own kernel, using the SYSGEN_CAPTURE tool.

    Again, the steps:

    1. Create a folder in your BSP for building the kernel, say %_TARGETPLATROOT%\SRC\Kernel.
    2. Run "SYSGEN_CAPTURE -p common nk" and you will get a SOURCES.KERN; you can also run "SYSGEN_CAPTURE -p common nkprof" to generate a profiler-enabled kernel.
    3. Rename SOURCES.KERN to SOURCES and copy one of the sample makefiles into your kernel directory, for example the one in PRIVATE\WINCEOS\COREOS\NK\KERNEL\NKNORMAL.
    4. Copy the source files you want to modify from private\winceos\coreos\nk\kernel\ into your kernel directory.
    5. Modify the SOURCES= macro to list the source files you added in step 4. For example, if you copied vm.c, it is going to be SOURCES=vm.c.
    6. Refer to private\winceos\COREOS\nk\kernel\sources.inc and add the macro definitions and proper include paths to your SOURCES file.
    7. "set WINCEREL=1", "build -c" in your kernel directory and "makeimg", voila!

    Here are the macros you need to add for x86:

        CDEFINES=$(CDEFINES) -DIN_KERNEL -DWINCEMACRO -DKERN_CORE
        # Machine independent defines
        CDEFINES=$(CDEFINES) -DDBGSUPPORT
        _COREOSROOT=$(_WINCEROOT)\private\winceos\coreos
        INCLUDES=$(_COREOSROOT)\inc;$(_COREOSROOT)\nk\inc
        !IFDEF DP_SETTINGS
        CDEFINES=$(CDEFINES) -DDP_SETTINGS=$(DP_SETTINGS)
        !ENDIF
        ASM_SAFESEH=1
        CDEFINES=$(CDEFINES) -Gs100000 -DENCODE_GS_COOKIE

    Read the article

  • OS Development. Only Few Particular Questions

    - by Total Anime Immersion
    I am new to this site as a member but have consulted its answers quite a lot. My questions regarding OS development haven't been answered on any forum.

    In OS dev we make a bootloader. The org point is 7C00h. Why so? Why not 0000h? What are the last two signature bytes in the bootloader used for? People on every forum have answered that they are important for the system to recognize the media as bootable, but I want a specific answer: what does each of those bytes do?

    I have the basic concept of a kernel. The point is, it relates to the different files required in a system; it sort of binds up everything that is individually developed. Now, I have floating ideas in my mind regarding different aspects like the keyboard, mouse, etc. How do I put them all together? Which should I start with first? If possible, please provide a step-by-step procedure for the startup of the kernel.

    Suppose I have developed my OS entirely in C and assembly. Now the question is: will exe files work on my system? If they don't, then I have to create my own file format and publish it, which is a bad idea. The next step would be for me to write a compiler for a language I have designed myself. Now the point is: how do I implement the compiler in my OS?

    After all this, my final questions are: how do you go about multitasking and multithreading? And I don't want to use int 21h, as it's DOS-specific; how do I go about making files, renaming them, etc.? All assembly books teach 16-bit programming; how do I go about doing 32-bit or 64-bit with the knowledge I have? If the basics and instructions are the same, I don't mind, but how do I go about it otherwise?

    Don't tell me to give up the idea, because I WON'T. And don't tell me it's too complex, because I have a sharp knowledge of the workings of a system, plus C, Java, assembly, C++, Python, C#, and Visual Basic, and not just the basics but full-fledged API development. But I really want to go deep into the systems side, so I want professional help. I have gone through many OS project files, but I want help particularly from this site, as there are people here with depth of knowledge who can guide me the right way. And please don't suggest any books above $20, and they should be available on Flipkart, as Amazon charges massively for shipping and I prefer free shipping from Flipkart.

    Read the article

  • Bash completion doesn't work, or is ignoring what I've typed; but works for commands

    - by Neil Traft
    Bash completion seems to be ignoring what I've typed (it tries to complete, but acts as if there's nothing under the cursor). I know I saw it work on this machine earlier today, but I'm not sure what has changed. Some examples:

    cd shows all directories under my current folder:

        $ cd co<tab><tab>
        cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/

    Commands like ls and less show all files and directories under my current folder:

        $ ls co<tab><tab>
        cmake/ config/ .cproject Doxyfile.in include/ programs/ README.txt src/ tests/
        CMakeLists.txt COPYING.txt doc/ examples/ mainpage.dox .project sandbox/ .svn/

    Even when I try to complete things from a different folder, it gives me only the results for my current folder (telling me that it is completely ignoring what I've typed):

        $ cd ~/D<tab><tab>
        cmake/ config/ doc/ examples/ include/ programs/ sandbox/ src/ .svn/ tests/

    But it seems to be working fine for commands and variables:

        $ if<tab><tab>
        if ifconfig ifdown ifnames ifquery ifup
        $ echo $P<tab><tab>
        $PATH $PIPESTATUS $PPID $PS1 $PS2 $PS4 $PWD $PYTHONPATH

    I do have this bit in my .bashrc, and I have confirmed that my .bashrc is indeed getting sourced:

        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    I've even tried manually executing that file, but it doesn't fix the problem:

        $ . /etc/bash_completion

    There was even one point in time when it was working for ls but not for cd... but I can't replicate that result now.

    Update: I also just discovered that I have terminals open from earlier that still work. I ran source .bashrc in one of them, and afterwards completion was broken. Here is my .bashrc:

        # ~/.bashrc: executed by bash(1) for non-login shells.
        # see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
        # for examples
        #
        # Modified by Neil Traft
        #source ~/.profile

        # Allow globs to expand hidden files
        shopt -s dotglob nullglob

        # If not running interactively, don't do anything
        [ -z "$PS1" ] && return

        # don't put duplicate lines or lines starting with space in the history.
        # See bash(1) for more options
        HISTCONTROL=ignoreboth

        # append to the history file, don't overwrite it
        shopt -s histappend

        # for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
        HISTSIZE=1000
        HISTFILESIZE=2000

        # check the window size after each command and, if necessary,
        # update the values of LINES and COLUMNS.
        shopt -s checkwinsize

        # If set, the pattern "**" used in a pathname expansion context will
        # match all files and zero or more directories and subdirectories.
        #shopt -s globstar

        # make less more friendly for non-text input files, see lesspipe(1)
        [ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"

        # set variable identifying the chroot you work in (used in the prompt below)
        if [ -z "$debian_chroot" ] && [ -r /etc/debian_chroot ]; then
            debian_chroot=$(cat /etc/debian_chroot)
        fi

        # Color the prompt
        export PS1="\[$(tput setaf 2)\]\u@\h:\[$(tput setaf 5)\]\W\[$(tput setaf 2)\] $\[$(tput sgr0)\] "

        # enable color support of ls and also add handy aliases
        if [ -x /usr/bin/dircolors ]; then
            test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
            alias ls='ls --color=auto'
            #alias dir='dir --color=auto'
            #alias vdir='vdir --color=auto'
            alias grep='grep --color=auto'
            alias fgrep='fgrep --color=auto'
            alias egrep='egrep --color=auto'
        fi

        # Add an "alert" alias for long running commands. Use like so:
        #   sleep 10; alert
        alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'

        # Alias definitions.
        # You may want to put all your additions into a separate file like
        # ~/.bash_aliases, instead of adding them here directly.
        # See /usr/share/doc/bash-doc/examples in the bash-doc package.
        if [ -f ~/.bash_aliases ]; then
            . ~/.bash_aliases
        fi

        # enable programmable completion features (you don't need to enable
        # this, if it's already enabled in /etc/bash.bashrc and /etc/profile
        # sources /etc/bash.bashrc).
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi
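
    A hedged debugging sketch for a case like this: compare shell options between a working and a broken terminal, since the completion helpers rely on glob behavior. nullglob in particular, set near the top of this .bashrc, is known to confuse some completion functions; these checks are assumptions to verify, not a confirmed diagnosis.

        shopt -p | grep glob     # compare the output in a working vs. broken shell
        complete -p cd ls        # which completion functions are bound to cd/ls?
        type _filedir            # the helper most file completions go through
        shopt -u nullglob        # unset it, then retry a completion in the same shell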

    Read the article

  • Old School Wizardry Tip: Batch File Comments

    - by jkauffman
    Johnny, the Endangered Keyboard-Driven Windows User

    Some of my proudest, obscure Windows tricks are losing their relevance. I know I'm not alone. Keyboard shortcuts are going the way of the dodo. I used to induce fearful awe by slapping Ctrl+Shift+Esc in front of lowly, pedestrian Windows users. No Windows key on the keyboard? No problem: Ctrl+Esc. No menu key on the keyboard? Shift+F10. I am also firmly planted in the habit of closing windows with the Alt+Space menu (Alt+Space, C), and I harbor a brooding, slow-growing list of programs that fail to support this correctly (that means you, Paint.NET). Every time a new version of Windows comes out, support for some of these minor time-saving habits gets pared away. Will I complain publicly? Nope; I know my old ways should be axed to conserve precious design energy. In fact, I disapprove of fierce unintuitiveness for the sake of alleged productivity. Like vim, for example. If you approach a program after being away for 5 years, having to recall encyclopedic knowledge is a flaw. The RTFM disciples have lost. Anyway, some of the items in my arsenal of goofy time-saving tricks are still relevant today. I wanted to draw attention to one that's stood the test of time.

    Remember Batch Files?

    Yes, it's true, batch files are fading faster than the world of print. But they're not dead yet. I still run into situations where I opt to use batch files. They are still relevant for build processes and various development workflow tools. Sure, there's PowerShell, but there's that stupid Set-ExecutionPolicy speed bump standing in your way; can you really spare the time to (a) hunt down that setting on all machines affected and/or (b) make futile efforts to convince your coworkers/boss that the hassle was worth it? When possible, I prefer the batch file wild card. And whenever I return to batch files, I end up researching some of their unintuitive aspects, such as parameters, quote handling, and ERRORLEVEL. But I never have to remember to use "REM" for comment lines, because there's a cleaner way to do them!

    Double Colon For Eye-Friendly Comments

    Here is a very simple batch file, with pretty much minimal content:

        @ECHO OFF
        SETLOCAL
        REM This is a comment
        ECHO This batch file doesn't do much

    If you code on a daily basis, this may be more suitable to your eyes:

        @ECHO OFF
        SETLOCAL
        :: This is a comment
        ECHO This batch file doesn't do much

    Works great! I imagine I find it preferable due to its similarity to comments in other contexts: // or ; or #. I often make visual pseudo-line-breaks in my code, and this colon-based syntax works wonders:

        @ECHO OFF
        SETLOCAL

        :: Do stuff
        ECHO Doing Stuff

        ::::::::::::::::::::::::::::

        :: Do more stuff
        ECHO This batch file doesn't do much

    Not only is it more readable, but there's a slight performance benefit: the batch file engine sees this as an invalid line label and immediately reads the following line. Use that fact to your advantage if this trick leads you into a heated nerd debate.

    Two Pitfalls to Avoid

    Be aware that there are a couple of situations where this hack will fail you. They most likely won't be a problem unless you're getting really sophisticated with your batch files.

    Pitfall #1: Inline comments

        @ECHO OFF
        SETLOCAL
        IF EXIST C:\SomeFile.txt GOTO END ::This will fail
        :END

    Unfortunately, this fails. You can only have whitespace to the left of your comments.

    Pitfall #2: Code Blocks

        @ECHO OFF
        SETLOCAL
        IF EXIST C:\SomeFile.txt (
            :: This will fail
            ECHO HELLO
        )

    Code blocks, such as if statements and for loops, cannot contain these comments. This is ultimately due to the fact that entire code blocks are processed as a single line. I originally learned this from Rob van der Woude's site. He goes into more depth about the behavior behind the pitfalls as well, if you are interested in further details. I hope this trick earns you serious geek rep!

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care about) XPDL and process modeling.

    Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise to be able to work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models.

    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application, such as BizAgi, and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, so users can add their own XML files to convert additional XPDL models from other vendors to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Converter manual.

    Let's take a visual look at how this works. Here is an example of a model developed in BizAgi. This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee.

    End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.

    References:
    - Wikipedia: XPDL
    - Workflow Management Coalition: XPDL Support and Resources
    - Oracle Business Process Converter manual, Oracle Tutor 14
    - Oracle Business Process Management 11g

    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year. Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • Deploying an SSL Application to Windows Azure &ndash; The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time but was about to have a shopping cart added to it, the need for SSL certificates came up. When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider's (http://register.com) web interface, and within a day we had our certificate. At first, I thought that the certification process would be the hard part… Little did I know that my fun was just beginning…

    The Problem

    I'll be honest: I had never secured a site with SSL before. This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure. I already understood a bit about SSL and the mechanisms behind how it works (the secure handshake, CAs, chains, etc.). What I didn't realize was the importance of the CSR in the whole process. When the CSR is created, a public key is generated along with a private key that is stored locally on the PC that made the request. When the certificate comes back and you import it into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added to your store.

    Since the online interface had been used to generate the CSR when the certificate was ordered for our site, the certificate came back to us as 5 separate files:

    - A root certificate (*.crt file)
    - An intermediate certificate (*.crt file)
    - Another intermediate certificate (*.crt file)
    - The SSL certificate for our site (*.crt file)
    - The private key for our certificate (*.key file)

    In case you don't know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package, securable by a password. Also, in the case of our SSL certificate, you need to include the private key in the file. As you can see, we didn't have a PFX file to upload. If you don't get a simple PFX from your hosting provider, but rather multiple files, you will soon find out that the process has turned from something that should be simple into one that borders on a circle of hell… probably between the fifth and seventh somewhere…

    The Solution

    The solution is to take the files that make up the certificate's chain and key and combine them into a single file that can be imported into your local computer's store as well as uploaded to Windows Azure. I cannot take the credit for this information, as I simply researched a while before finding out how to do it.

    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command:

        openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}

    From this command you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key (keyfile.key), and as many CA/chain files as you need (ca1.crt, ca2.crt). Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually. You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you're good to go!

    Conclusion

    When I first looked around for a solution to this problem, there were not many places online that had the information I was looking for. While what I ended up having to do may seem obvious, it isn't for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!
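
    A couple of optional sanity checks on the resulting package before uploading it, using the same hypothetical file names as the command above:

        rem dump the PFX contents (prompts for the export password)
        openssl pkcs12 -info -in outcert.pfx -noout
        rem the certificate and private key match if these two hashes agree
        openssl x509 -noout -modulus -in sslcert.crt | openssl md5
        openssl rsa -noout -modulus -in keyfile.key | openssl md5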

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I used SSDT and SSIS fairly extensively while SQL Server 2012 was in the beta phase, I usually find that you don't learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I'm going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT.

    The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project, a folder structure was provided with "Schema Objects" and "Scripts" in the root and a series of subfolders for each schema. Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer, and hence SSDT has gone in completely the opposite direction: now no folders are created, new objects get created in the root, and it is at your discretion where they get moved to.

    After using SSDT for a few weeks I can safely say that I preferred the older way, because I never used Solution Explorer to navigate my schema objects anyway, so it didn't bother me how many folders it created. Having said that, the thought of a single long list of files in Solution Explorer without any folders makes me shudder, so on this project I have been manually creating folders in which to organise files, and I have tried to mimic the old way as much as possible by creating two folders in the root: one for all schema objects and another for pre/post-deployment scripts.

    This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me, this is going to grate on you eventually, and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover, new files get created with a filename of the object name + ".sql", and often people like to have an extra identifier in the filename to indicate the object type.

    The overall point is this: files and folders in your solution are going to change. Some version control systems (VCSs) don't take kindly to files being moved around or renamed, because they recognise the renamed/moved file simply as a new file, and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS), and while it pains me to say it (as I am no great fan of TFS's version control system), it has proved invaluable when dealing with the SSDT problems that I outlined above, because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is:

    If you are using SSDT, consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves.

    I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames and file moves quite satisfactorily, and if that's the case… great… let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

  • First Impressions of a MacBook (from a PC guy)

    - by dgreen
    Disclaimer: I've been a PC guy my entire working career. I'd probably characterize myself as a power user, never afraid to bust out the console line. But working with a Mac is totally foreign to me. So for those Mac guys who are curious, this is how your world appears from the outside to a computer-literate person :)

    My MacBook Air has arrived! And it's a thing of beauty.

    First, the specs: 13" MacBook Air, 2.0GHz Core i7 processor, upgraded to 8GB of RAM for an additional $100, SSD flash storage = 256GB. The plan is ultimately to use this baby for some iOS development but also some decent lifting in Windows with Visual Studio. Done a lot of reading, and between VMWare Fusion, Parallels and Bootcamp... I'm going to go with VMWare Fusion for $49.99.

    And now my impressions (please re-read the disclaimer before proceeding!):

    I open the box and am trying to understand exactly how the MagSafe connector works (and how to disconnect it). Why does it have two socket outlet plugs? Who knows. I feel like Hansel in Zoolander. The files are "in" the computer.

    Stuck in my external hard drive (USB). So how do I get to the files? To the Googles!

    Argh... it can't read my external NTFS drive, and FAT32 can't support files over 4GB… problematic since some of my existing VMWare image files are much larger than 4GB. Didn't see this coming.

    Three-year-old loves iPhoto. Super easy to use. Don't even know what I'm doing, but I've already (accidentally) discovered the image filtering options. Fun stuff.

    First thing I downloaded ever => Chrome. I need something to ground me, something familiar. My token, if you will (sorry, gratuitous Inception joke).

    Ok, I get it… Finder == Windows Explorer. But where is my hierarchical structure? I miss the tree :(

    On that note, yeah… how do I see what "path" my files reside in? I'm afraid to know the answer. You know what scares me more though… this notion of a smart folder. Feel like the Godfather – just get the job done, I don't care how you handle it, I don't want to know... just get it done.

    What the hell is AirDrop?

    Mail… just worked. Still in shock that they have a free client for Yahoo mail (please no Yahoo jokes).

    Mail -> deleting a message takes 5 seconds. Have they heard of async?

    "Command" key instead of "Control"… ok, then what the $%&^! is the Control key for, then?

    "Aliases" == shortcuts, I think.

    I don't see the file system. And I'm scared. All these things I'm downloading… these .dmg files (bad name) – where are they going? Can't seem to delete them when they're done.

    Ugh... realized I need to buy a mini-to-VGA adaptor if I want to use my external monitor ($13 on eBay, $39 in the Apple store).

    Window docking is trickiest for me… this notion of detached windows with a menu bar at the top. I don't like this paradigm; it's confusing. But maybe that's because I've been using Windows for too long.

    The Evernote and Dropbox desktop clients seem almost identical… a few quirks here and there I need to get used to.

    iTunes is still a bit gross. In a weird way it's actually worse on a Mac, if that's possible. This is not the MacBook's fault… this is a software design issue.

    Overall: the UI will take some getting used to. Can't decide if this represents the future and I'm stuck in the past… or this is the past and I've been spoiled by the future (which would be Windows… don't be hating, I happen to be very productive in Win7).

    So there you go – my 90-minute first impression of the MacBook universe.

    Read the article

  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In smaller dev teams of fewer than about 5 people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life there is gets created this way: one person messes it up for the rest of us. Rules are then put in place to try and prevent it from happening again.

    Breaking the rules is in our nature. In this example it is for good cause and a necessity to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own method to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged to is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window.

    Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only person who knows how to fix it. The problem with only giving lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people to help solve application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day.

    Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, the system admin being forced to help, and most importantly your customers, who are not happy about the situation. This process is terribly inefficient.

    Production database access is also important for solving application problems, but presents a lot of risk if developers are given access. They could see data they shouldn't. They could write queries by accident that update data, delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data-driven, it can be very difficult to track down application bugs without access to the production databases.

    Besides it being against the rules, why don't all developers have access? Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem, forget what they changed in the process, and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages being created!

    Matt Watson
    Founder & CEO, Stackify
    Agile Support for Agile Developers

    Read the article

  • Utility to Script SQL Server Configuration

    - by Bill Graziano
    I wrote a small utility to script some key SQL Server configuration information. I had two goals for this utility:

    • Assist with disaster recovery preparation
    • Identify configuration changes

    I've released the application as open source through CodePlex. You can download it from CodePlex at the Script SQL Server Configuration project page. The application is a .NET 2.0 console application that uses SMO. It writes its output to a directory that you specify.

    Disaster Planning

    ScriptSqlConfig generates scripts for logins, jobs and linked servers. It writes the properties and configuration from the instance to text files. The scripts are designed so they can be run against a DR server in the case of a disaster; the properties and configuration will need to be manually compared. Each job is scripted to its own file. Each linked server is scripted to its own file. The linked servers don't include the password if you use a SQL Server account to connect to the linked server – you'll need to store those somewhere secure. All the logins are scripted to a single file. This file includes Windows logins, SQL Server logins and any server role membership. The SQL Server logins are scripted with the correct SID and hashed passwords. This means that when you create the login it will automatically match up to the users in the database and have the correct password. This is the only script that I programmatically generate rather than using SMO. The SQL Server configuration and properties are scripted to text files. These will need to be manually reviewed in the event of a disaster, or you could DIFF them with the configuration on the new server.

    Configuration Changes

    These scripts and files are all designed to be checked into a version control system. The scripts themselves don't include any date-specific information. In my environments I run this every night and check in the changes. I call the application once for each server and script each server to its own directory. The process will delete any existing files before writing new ones; this solved the problem I had where the scripts for deleted jobs and linked servers would continue to show up. To see any changes I just need to query the version control system to show me any changes to the files.

    Database Scripting

    Utilities that script database objects are plentiful. CodePlex has at least a dozen of them, including one I wrote years ago. The code is so easy to write it's hard not to include that functionality. This functionality wasn't high on my list because it's included in a database backup. Unless you specify the /nodb option, the utility will script out many user database objects. It will script one object per file. It will script tables, stored procedures, user-defined data types, views, triggers, table types and user-defined functions. I know there are more I need to add but haven't gotten around to it yet. If there's something you need, please log an issue and get it added. Since it scripts one object per file, these really aren't appropriate to recreate an empty database. They are really good for checking into source control every night and then seeing what changed. I know everyone tells me all their database objects are in source control, but a little extra insurance never hurts.

    Conclusion

    I hope this utility will help a few of you out there. My goal is to have it script all server objects that aren't contained in user databases. This should help with configuration changes and especially disaster recovery.
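    To make the nightly check-in routine concrete, here is a minimal sketch of the loop described above, using git as the example VCS. Apart from the /nodb switch mentioned in the post, the command-line arguments are assumptions – the CodePlex project page documents the real syntax:

        # script one server's configuration to its own directory, then
        # commit the result so VCS history records what changed overnight
        ScriptSqlConfig.exe SQLPROD01 C:/SqlConfig/SQLPROD01
        cd C:/SqlConfig
        git add -A
        git commit -m "Nightly SQL Server configuration snapshot"

    Diffing any two nightly commits then shows exactly which jobs, logins, linked servers or settings changed in between.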

    Read the article

  • git | error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied [SOLVED]

    - by Corbin Tarrant
    I am having a strange issue that I can't seem to resolve. Here is what happened: I had some log files in a GitHub repository that I didn't want there. I found this script that removes files completely from git history, like so:

    #!/bin/bash
    set -o errexit

    # Author: David Underhill
    # Script to permanently delete files/folders from your git repository. To use
    # it, cd to your repository's root and then run the script with a list of paths
    # you want to delete, e.g., git-delete-history path1 path2

    if [ $# -eq 0 ]; then
        exit 0
    fi

    # make sure we're at the root of git repo
    if [ ! -d .git ]; then
        echo "Error: must run this script from the root of a git repository"
        exit 1
    fi

    # remove all paths passed as arguments from the history of the repo
    files=$@
    git filter-branch --index-filter "git rm -rf --cached --ignore-unmatch $files" HEAD

    # remove the temporary history git-filter-branch otherwise leaves behind for a long time
    rm -rf .git/refs/original/ && git reflog expire --all && git gc --aggressive --prune

    I, of course, made a backup first and then tried it. It seemed to work fine. I then did a git push -f and was greeted with the following messages:

    error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
    error: Cannot update the ref 'refs/remotes/origin/master'.

    Everything seems to have pushed fine though, because the files seem to be gone from the GitHub repository. If I try to push again I get the same thing:

    error: Unable to append to .git/logs/refs/remotes/origin/master: Permission denied
    error: Cannot update the ref 'refs/remotes/origin/master'.
    Everything up-to-date

    EDIT

    $ sudo chgrp {user} .git/logs/refs/remotes/origin/master
    $ sudo chown {user} .git/logs/refs/remotes/origin/master
    $ git push
    Everything up-to-date

    Thanks!

    EDIT

    Uh oh. Problem. I've been working on this project all night and just went to commit my changes:

    error: Unable to append to .git/logs/refs/heads/master: Permission denied
    fatal: cannot update HEAD ref

    So I:

    sudo chown {user} .git/logs/refs/heads/master
    sudo chgrp {user} .git/logs/refs/heads/master

    I try the commit again and I get:

    error: Unable to append to .git/logs/HEAD: Permission denied
    fatal: cannot update HEAD ref

    So I:

    sudo chown {user} .git/logs/HEAD
    sudo chgrp {user} .git/logs/HEAD

    And then I try the commit again:

    16 files changed, 499 insertions(+), 284 deletions(-)
    create mode 100644 logs/DBerrors.xsl
    delete mode 100644 logs/emptyPHPerrors.php
    create mode 100644 logs/trimXMLerrors.php
    rewrite public/codeCore/Classes/php/DatabaseConnection.php (77%)
    create mode 100644 public/codeSite/php/init.php

    $ git push
    Counting objects: 49, done.
    Delta compression using up to 2 threads.
    Compressing objects: 100% (27/27), done.
    Writing objects: 100% (27/27), 7.72 KiB, done.
    Total 27 (delta 15), reused 0 (delta 0)
    To [email protected]:IAmCorbin/MooKit.git
    59da24e..68b6397 master -> master

    Hooray. I jump on http://GitHub.com and check out the repository, and my latest commit is nowhere to be found. ::scratch head:: So I push again:

    Everything up-to-date

    Umm... it doesn't look like it. I've never had this issue before. Could this be a problem with GitHub? Or did I mess something up with my git project?

    EDIT

    Never mind, I did a simple git push origin master and it pushed fine.
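    For anyone hitting the same wall: chasing these permission errors file by file gets tedious. Root-owned files under .git are typically left behind by an earlier git command run with sudo, and a single recursive ownership fix covers them all – a minimal sketch, substituting your own user and group for the placeholders:

        # reclaim everything under .git in one pass instead of per-file
        sudo chown -R {user}:{group} .git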

    Read the article

  • Forbidden Patterns Check-In Policy in TFS 2010

    - by Jaxidian
    I've been trying to use the Forbidden Patterns part of the TFS 2010 Power Tools and I'm just not understanding something – I simply cannot get anything to change as I try to use this! I'm using the version that was released recently (I believe April 23, 2010), so it's not an old version. First off, yes, I know it's regex-based, so let's clear that doubt... I have tried to block the following scenarios:

    1) I have modified all of my T4 EF templates to generate files named EntityName.gen.cs. I then attempted to prevent TFS from wanting to check those files in. I used the regular expression \.gen\.cs\z and it didn't change a single thing! I even tried it without the \z and nada!

    2) I don't want app.config and web.config files to be checked in by default, because we have these things stored in app.config.base and web.config.base files that our build scripts use to generate our per-environment app.config and web.config files. As such, I tried the following regexes and again, nothing worked! web\.config\z, app\.config\z, web\.release\.config\z and web\.debug\.config\z.

    What is it that I am screwing up with this?

    Read the article

< Previous Page | 449 450 451 452 453 454 455 456 457 458 459 460  | Next Page >