Search Results

Search found 15030 results on 602 pages for 'money format'.

  • I have a bad install of Windows on another hard drive and it won't let me install a fresh copy. How do I fix it in Ubuntu 12.04?

    - by Dana LaBerge
    Basically, there was a security issue in the drivers for my graphics card. It was a 64-bit card and I installed 32-bit Windows. Apparently, before SP1 (which fixed that issue) was available, 6 trojan horses got in. They stopped SP1 from installing. After going through the wringer several times, I finally talked to a person who knew the problem. It was something about how the drivers tried to transfer between the 32-bit OS and the 64-bit card that left me open. Ever since, my computer has been slow and has had weird issues: tinypic would never load, and certain programs wouldn't install. So I eventually talked to the dude who knew the problem, and he took the reins and ran some diagnostics. He told me that to fix it I have to format the hard drive and do a fresh install. I'm okay with that because I was planning on it anyway, to upgrade to the 64-bit version. The problem is, how do I do that? I have the disk to install the new copy, but when I go to install it, it tells me I can't and to check the log file. However, I don't know where that log file is, and it wiped out my install of Windows. How do I find the file, and, as an alternative route to the same goal, how do I zero out the drive from Ubuntu 12.04? (I installed the 64-bit version just the other day)
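    For reference, one commonly used way to zero a whole drive from an Ubuntu 12.04 session is sketched below. This is not from the original question; /dev/sdX is a placeholder for the Windows disk, so double-check the device name first.

        # List the disks and identify the one to wipe; the name below is only an example.
        sudo fdisk -l

        # Overwrite the entire disk with zeros -- this destroys everything on /dev/sdX.
        sudo dd if=/dev/zero of=/dev/sdX bs=4M
        sync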

    Read the article

  • I can't install Ubuntu on my Dell Inspiron 15R at all

    - by Kieran Rimmer
    I'm attempting to install Ubuntu 12.04LTS, 64 bit onto a Dell Inspiron 15R laptop. I've shrunk down one of the windows partitions and even used gparted to format the vacant space as ext4. However, the install disk simply does not present any options when it comes to the partitioning step. What I get is a non-responsive blank table As well as the above, I've changed the BIOS settings so that USB emulation is disabled (as per Can't install on Dell Inspiron 15R), and changed the SATA Operation setting to all three possible options. Anyway, the install CD will bring up the trial version of ubuntu, and if I open terminal and type sudo fdisk -l, I get this: Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0xb4fd9215 Device Boot Start End Blocks Id System /dev/sda1 63 80324 40131 de Dell Utility Partition 1 does not start on physical sector boundary. /dev/sda2 * 81920 29044735 14481408 7 HPFS/NTFS/exFAT /dev/sda3 29044736 1005142015 488048640 7 HPFS/NTFS/exFAT /dev/sda4 1005154920 1953520064 474182572+ 83 Linux Disk /dev/sdb: 32.0 GB, 32017047552 bytes 255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xb4fd923d Device Boot Start End Blocks Id System /dev/sdb1 2048 16775167 8386560 84 OS/2 hidden C: drive If I type 'sudo parted -l', I get: Model: ATA WDC WD10JPVT-75A (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/4096B Partition Table: msdos Number Start End Size Type File system Flags 1 32.3kB 41.1MB 41.1MB primary fat16 diag 2 41.9MB 14.9GB 14.8GB primary ntfs boot 3 14.9GB 515GB 500GB primary ntfs 4 515GB 1000GB 486GB primary ext4 Model: ATA SAMSUNG SSD PM83 (scsi) Disk /dev/sdb: 32.0GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 1 1049kB 8589MB 8588MB primary Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only. Error: Can't have a partition outside the disk! I've also tried a Kubuntu 12.04 and Linuxmint install disks, wityh the same problem. I'm completely lost here. Cheers, Kieran

    Read the article

  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?

    Read the article

  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a “Gallery” which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages to Windows Azure Virtual Machines is that it is stored in a standard Hyper-V format – with the base hard-disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this then leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the “SYSPREP” process on it. SYSPREP is a Windows utility that essentially strips the identity from a system, and when you re-start that system it asks a few details on what you want to call it and so on. By doing this, you can essentially create your own gallery of systems, either for testing, development servers, demo systems and more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx   But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.  

    Read the article

  • Ubuntu 14.04 ATI Radeon open source driver with distorted video playback

    - by Bwog
    Video in VLC or SMplayer is often in black and white with washed-out colors in the wrong place (translated considerably). Moreover, the last video image is often visible when a new video is started and persist as long as the new video is running. Colors have a recognizable shape (e.g. a persons clothes or face), but can be obviously incorrect (e.g. green or purples faces). This is independent of the format of the videos (mp4, mkv, wmv). Sometimes all problems disappear when a new video is started, but often only a reboot restores normal video. Ubuntu was upgraded to 14.04 and is fully updated. Processor intel core i5-2500K cpu. gpu: amd/ati Radeon HD 7950. graphics: gallium 0.4 on AMD Tahiti. xorg xserver amd/ati display driver wrapper from xserver-xorg-video-ati. :~$ Xorg -version X.Org X Server 1.15.1 Release Date: 2014-04-13 X Protocol Version 11, Revision 0 Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu Current Operating System: Linux Mare 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64 Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=number ro Build Date: 16 April 2014 01:36:29PM xorg-server 2:1.15.1-0ubuntu2 Current version of pixman: 0.30.2 ~$ lspci | grep VGA 01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti PRO [Radeon HD 7950/8950 OEM / R9 280] Question: how to restore regular video playback?

    Read the article

  • MyPaint is an Open-Source Graphics App for Digital Painters

    - by Asian Angel
    Are you looking for a terrific graphics app to use for original painting and artwork creation on your computer? Whether it is for you or the kids, MyPaint is an app that you should definitely have on hand for when those artistic moods come along. For our example we chose to install MyPaint on Ubuntu 10.10…you can easily find it in the Ubuntu Software Center by doing a quick search. Once you have it installed, all that is left to do is decide if you want to add additional brushes (link provided below) and then start having fun creating your next work of art. Here are some of MyPaint’s wonderful features: Exists for several platforms (Linux, Windows, and Mac OS X) Supports pressure sensitive graphics tablets Extensive brush creation and configuration options Unlimited canvas (you never have to resize) Basic layer support Comes with a large brush collection including charcoal and ink to emulate real media MyPaint is fun to use and can quickly become very addicting as you experiment during the creation process! Links MyPaint Homepage Download Additional Brushes for MyPaint Download the GIMP Plugin for the OpenRaster File Format
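    If you prefer the terminal to the Software Center, the same package can be installed from the command line (a small sketch; mypaint is the package name used in the Ubuntu repositories):

        sudo apt-get update
        sudo apt-get install mypaint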

    Read the article

  • Where to find and install ASUS motherboard drivers for Linux

    - by Dan
    This is my second day ever with Linux, and I had one heck of a time getting the nVidia drivers installed and working. Please, keep in mind I am very new and just starting out. I currently have an ASUS P8Z68-V LE motherboard and I'm not sure if the drivers are installed. Where would I go to find that out? I am using Gnome as my UI. If I don't have the drivers installed, where would I go? The ASUS site only gives me options to download for various Windows OS, DOS and "other" (in .ROM format). Which should I take and how should I install? I'm mostly looking for audio drivers. A lot of music I play, either on YouTube or with VLC has a faint crackling in the background on Ubuntu, which gets much worse the higher I turn the volume up. Could this be something other than the drivers? I doubt it's the hardware since the sound seems fine on Windows. I am currently running 12.04.
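    For reference (not from the original question), on Ubuntu the motherboard's audio and chipset drivers ship with the kernel rather than being downloaded from the vendor, and one way to check which kernel driver is handling the audio device is to query it from a terminal:

        # Show the audio controller and the kernel driver currently bound to it.
        lspci -k | grep -iA3 audio

        # List the playback devices ALSA can see.
        aplay -l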

    Read the article

  • Won't boot after installing Ubuntu 12.04 successfully

    - by Matt
    I installed 12.04 successfully and rebooted (I took out my installation CD), and selected the newly installed Linux partition to boot from rEFIt. Then it just comes up with this error message: Error loading operating system which could not be more vague. Take that back. I guess it could say just "error." I don't even get to the boot prompt which limits what I can do. I cannot boot into rescue mode. I tried boot-repair, but it took more than 24 hours to check the system configuration, so I gave up on that. I'm running a Mac Mini with its main OS being Mac OS X 10.5.8. I have an alternate OS Windows XP installed, which was virtually destroyed by this Linux installation. I sacrificed my working, speedy Windows partition for something that won't even boot up. What was I thinking. My Mac partition is slow as crap. I've tried installing 12.04 many times with two different disks. The first time, I had one partition for Linux, then I had 2 (swap+main), then 3 (swap, main and BIOS), then 4 which is what I have now (swap, main, BIOS, and boot/grub). The only way I could get through the install without GRUB giving up was if I created a separate partition for it. Which was pointless, because it did install successfully, but it still doesn't boot up at all. Could rEFIt be booting off of the BIOS or one of the other partitions? Because if that's the case, there is no alternative, because Mac itself without rEFIt refuses to recognize a Linux ext4 (or 2 or 3) format partition. Apple always has to make everything so difficult. If I'm not mistaken, rEFIt is the only application of its kind for Mac. I can boot off of the CD back to the install/try screen. This is extremely upsetting, can you guys help? Please?

    Read the article

  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my Windows virtual machine and did the product registration using the application present on it. I have a few questions on how best to use it. The drive by default has some files and folders - setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files? It has a dashboard for Windows which allows you to tune power options, test the drive, etc. Will I be able to use the dashboard if I put back all these files and mount it on Windows again? I want to partition and format the hard disk. The data I'd like to store is: around 10 to 20 GB files (VirtualBox images); around 4 GB files (DVD images); other movies and personal files. What is the best filesystem for storing very huge files like 10 to 20 GB, so that they are written and accessed fast and the drive's capacity is best used? If I leave one of the partitions as NTFS and format the others with different filesystems, will the drive still mount on Windows, and will I be able to use the device's dashboard? Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.
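    As an illustration only (not from the question; /dev/sdX1 and /dev/sdX2 are placeholder partition names), formatting one partition as NTFS for Windows and another as ext4 from an Ubuntu terminal looks roughly like this:

        # Quick-format the first partition as NTFS (readable by both Windows and Linux).
        sudo mkfs.ntfs -Q -L SeagateNTFS /dev/sdX1

        # Format the second partition as ext4 (Linux only, without extra Windows drivers).
        sudo mkfs.ext4 -L SeagateEXT4 /dev/sdX2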

    Read the article

  • "Failed to create swap space" error during installation

    - by Welsh Heron
    I've been trying to install Ubuntu for the past two days or so, but I've been running into a problem: every time I run the installation program on the LiveCD, I always get the same (or a very similar) error: "Failed to create Swap space The creation of swap space in partition #3 of SCSI5 (0,0,0)(sda) failed." So far, I've run DBAN (Darik's Boot and Nuke) on my HDD once, to make absolutely sure that everything on it had been erased. Then, I simply put in the LiveCD, and let it run the automated install. I get the above error directly after I tell it to automatically partition the HDD (it will work for a second or so, then this will pop up), forcing me back to the screen that lets me choose whether I want to automatically or manually partition the HDD. Well, after failing to install the software manually, I did a little research and learned enough about partitioning Linux to use the 'Manual partitioning' option. I partitioned the HDD as follows (it's a 1TB drive): /home - (the rest)- ext2, / - 20GB - ext2, /boot - 100MB - ext2, /swap - 8GB /EFIboot - 40MB The only difference when I tried this method was that I got THIS message: "Failed to create Swap space The creation of swap space in partition #2 of SCSI5 (0,0,0)(sda) failed." Basically, the only difference was that there was now a '2' instead of a '3'. If I may ask, what exactly am I doing wrong? I've tried looking around the internet (that's basically all I've done for the last two days), but no one seems to have the same problem that I have, and I've tried most of the solutions for similar problems (DBAN, formatting partitions in ext2 format, etc). The only thing I haven't tried is using the terminal to manually partition the HDD...and I actually DID try to do this, but I wasn't able to get past 'su' 's password demand, so I wasn't able to use the terminal. Thank you for your help in advance. ~Welsh
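    One workaround people use when the installer cannot create swap (a sketch, not from the question; /dev/sda3 is a placeholder for whichever partition you set aside for swap) is to initialise the swap area manually from the live session before re-running the installer:

        # Initialise the partition as swap and enable it.
        sudo mkswap /dev/sda3
        sudo swapon /dev/sda3

        # Confirm the kernel is now using it.
        cat /proc/swaps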

    Read the article

  • "Accumulate" buffer results in XNA4?

    - by Utkarsh Sinha
    I'm trying to simulate a "heightmap" buffer in XNA4.0 but the results don't look correct. Here's what I'm hoping to achieve: http://www.youtube.com/watch?feature=player_detailpage&v=-Q6ISVaM5Ww#t=517s (8:38). From what I understand, here are the steps to reach there: Pass height buffer + current entity's heightmap Generate a stencil and update the height buffer Render sprite+stencil For now, I'm just trying to get the height buffer thing to work. So here's the problem. Inside the draw loop, I do the following: Create a new render target & set it Draw the heightmap with a sprite batch(no shaders) graphicsDevice.SetRenderTarget(null) Draw the rendertarget with SpriteBatch I expected to see all entities' heightmaps. But only the last entity's heightmap is visible. Any hints on what I'm doing wrong? Here's the code inside the draw loop: RenderTarget2D tempDepthStencil = new RenderTarget2D(graphicsDevice, graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height, false, graphicsDevice.DisplayMode.Format, DepthFormat.None); graphicsDevice.SetRenderTarget(tempDepthStencil); // Gather depth information SpriteBatch depthStencilSpriteBatch = new SpriteBatch(graphicsDevice); depthStencilSpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullCounterClockwise); depthStencilSpriteBatch.Draw(texHeightmap, pos, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1); depthStencilSpriteBatch.End(); graphicsDevice.SetRenderTarget(null); SpriteBatch b1 = new SpriteBatch(graphicsDevice); b1.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null); b1.Draw((Texture2D)tempDepthStencil, Vector2.Zero, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1); b1.End();

    Read the article

  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error: W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. ) I am running Ubuntu 10.10 installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment was updated with an http_proxy variable in the format http://my_proxy:port/, but with no authentication data. I checked this with Firefox: the browser asked me for a login and password and everything worked fine. That was unfortunately not the case for apt-get. I have also tried to do as described here. Unfortunately it does not work. Could it be somehow related to the fact that the proxy is in a Windows domain? Any ideas? EDIT: My proxy name is http-proxy. Is '-' a special character here?
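    For reference, apt reads its own proxy setting, which can include credentials; a minimal sketch (the username, password, host and port below are placeholders) is to drop a file into /etc/apt/apt.conf.d. If the ISA server only accepts NTLM authentication, a local helper proxy such as cntlm is usually needed instead.

        # Write an apt-specific proxy setting (replace the placeholder credentials).
        echo 'Acquire::http::Proxy "http://username:password@http-proxy:8080/";' | \
            sudo tee /etc/apt/apt.conf.d/95proxies

        sudo apt-get update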

    Read the article

  • SD card reader does not show up in Ubuntu

    - by shantanu
    I bought Acer asipre 4250. It have built-in SD card reader. But it is not working. Nothing show in /media or fdisk but something in dmesg. dmesg: new high-speed USB device number 3 using ehci_hcd [ 127.396733] scsi5 : usb-storage 2-2:1.0 [ 128.526562] scsi 5:0:0:0: Direct-Access Multiple Card Reader 1.00 PQ: 0 ANSI: 0 [ 128.532512] sd 5:0:0:0: Attached scsi generic sg2 type 0 [ 129.008110] ohci_hcd 0000:00:12.0: PCI INT A disabled [ 129.032083] ohci_hcd 0000:00:13.0: PCI INT A disabled [ 129.056411] ohci_hcd 0000:00:16.0: PCI INT A disabled [ 129.338026] sd 5:0:0:0: [sdb] Attached SCSI removable disk [ 129.808328] ohci_hcd 0000:00:14.5: PCI INT C disabled [ 167.728616] usb 2-2: USB disconnect, device number 3 [ 169.872284] ehci_hcd 0000:00:13.2: PCI INT B disabled [ 169.872340] ehci_hcd 0000:00:13.2: PME# enabled fdisk -l: Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0006bc6d Device Boot Start End Blocks Id System /dev/sda1 * 2048 48828415 24413184 7 HPFS/NTFS/exFAT /dev/sda2 48828416 50829311 1000448 82 Linux swap / Solaris /dev/sda3 50829312 99657727 24414208 83 Linux /dev/sda4 99659774 625141759 262740993 5 Extended Partition 4 does not start on physical sector boundary. /dev/sda5 99659776 275439615 87889920 7 HPFS/NTFS/exFAT /dev/sda6 275441664 451221503 87889920 7 HPFS/NTFS/exFAT /dev/sda7 451223552 625141759 86959104 7 HPFS/NTFS/exFAT I found another problem just right now. I format last three drives as EXT4 with disk utility. But they are showing as NTFS/exFAT in fdisk. :-(

    Read the article

  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new Pivot Grid capabilities, and implemented one-click Related Actions accessible from multiple places – all with the goal of improving every Manager’s self service experience. Manager Dashboard These new capabilities have the potential to significantly impact an organization’s bottom line, and here is why. Increased Efficiency The Manager Dashboard provides a ‘one-stop shop’ for your Managers with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and Managers can personalize it to fit within their personal way of conducting their tasks. The Related Actions feature further improves the ease with which Managers get their work done by providing one-click access to Manager Self Service transactions.  Increased Job Satisfaction The streamlined Manager transactions, related actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of their day to day HR tasks. Enhanced Decision Support Administrators can ensure the information and analytics they want their Managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, Managers can make the right business decisions needed to improve their team and their team’s productivity. For more details on the Manager Dashboard and some of the other newly posted features, such as a new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts

    Read the article

  • RTFMobile

    - by ultan o'broin
    It may seem obvious but it’s worth stating again. The idea that mobile users are going to read lots of user assistance on their devices is just wrong. So, Jakob Nielsen’s post Mobile Content Is Twice as Difficult serves as a timely reminder for anyone thinking of putting manuals as a form of user assistance onto mobile phones. There is also an excellent post on UXMag.com, explaining that one of the ways to screw up with your iPhone app is to throw an old-style user manual into the user experience: 10 Surefire Ways to Screw Up Your iPhone App.   (Image copyright and referenced from UX Magazine 2010)   Instead, user assistance  alternatives—if any at all—include one-time tours, graphics, in-context instructions, and so on. Not so sure that importing “humor” and “personality” work so well in the enterprise app space, myself. However, the message is clear: iPhone users don’t read manuals. Great message. Users will figure it out, and if they can’t, well then your app’s UX is a problem and the app will fail. Shame some teams are obsessed with figuring out ways to port existing manuals to mobile platforms without any thought for the UX. Razorfish’s Scatter/Gather blog says it all: One thing that is particularly discouraging, most material currently available on “Creating Content for the iPad” or similar themes turns out to be about getting traditional content onto, or into, the iPad. Now, manuals for non-end users in PDF format on eReaders is a different matter. I have research on that, but it’s for another post. Technorati Tags: mobile,user assistance,UX,user experience,manuals,documentation

    Read the article

  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2.  This process is sometimes also referred to as transportable tablespaces (TTS).What is the Cross-Platform Transportable Tablespace Feature?The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires the copying of datafiles from source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.

    Read the article

  • Form Validation Options

    The steps involved in transmitting form data from the client to the web server:
    1. The user loads the web form.
    2. The user enters data into the web form fields.
    3. The user clicks submit.
    4. On submit, the page validates the fields using JavaScript. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    5. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    6. The server reads the posted data from the query string and validates it again, both to ensure data consistency and to prevent non-validated data (for example, because JavaScript was turned off in the client's browser) from being inserted into a database or passed on to another process.
    7. If the data passes the second validation check, the server-side code continues with the requested processes.
    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a failover process. Client-side validation allows users to correct any errors before the data is sent to the web server for processing, and this gives the user an immediate response about data that is incorrect or not in the desired format. In addition, it prevents unnecessary interaction between the user and the web server and will free up the server over time compared to doing only server-side validation. Server-side validation is the last line of defense because you can check that the user's data is correct before it is used in a business process or stored in a database. Honestly, I cannot foresee a scenario where I would only want to use one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.

    Read the article

  • How to distribute python GTK applications?

    - by Nik
    This is related to the previous question I asked here. My aim is to create and package an application for easy installation on Ubuntu and other Debian distributions. I understand that the best way to do this is by creating a .deb file with which users can easily install my application on their system. However, I would also like to make sure my application is available in multiple languages, which is why I raised the earlier question that you can read here. In the answers that were provided, I was asked to use distutils for my packaging. I am, however, missing the bigger picture here. Why is there a need to include a setup.py file when I distribute my application in .deb format? My goal is to ensure that users do not need to run python setup.py to install my application, but rather just click on the .deb file. I already know how to create a .deb file from the excellent tutorial available here. It clearly shows how to edit the rules, the changelog, and everything else required to create a clean .deb file. You can look at my application's source code and folder structure on GitHub if it helps you better understand my situation. Please note I have glanced through the official Python documentation found here. But I am hoping for an answer that would help even a layman understand, since my knowledge is pretty poor in this regard.
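    For context, a sketch under common Debian-packaging assumptions (not taken from the question): setup.py is normally invoked by the debian/rules file while the package is being built, so end users never run it themselves; they only see the finished .deb. Building and installing such a package from a source tree that already has a debian/ directory typically looks like this (the package name and version in the file name are placeholders):

        # Build tools for Debian/Ubuntu packaging.
        sudo apt-get install devscripts debhelper

        # From the root of the source tree containing the debian/ directory:
        debuild -us -uc          # produces ../yourapp_1.0-1_all.deb (unsigned)

        # What end users then do with the result:
        sudo dpkg -i ../yourapp_1.0-1_all.deb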

    Read the article

  • How do I boot Ubuntu Cloud images in vmware?

    - by Graham
    I want to run an Ubuntu cloud image on on VMWare. I've gotten pretty far but want to know how to set the OVF properties in a way that VMWare understands in order to pass parameters to cloud-init. This is what I've done: Install VMWare Player 4.0.4, using the vmware workstation 8.0.2 / player 4.0.2 fix for linux 3.2+ patch to get around the compilation failure for virtual ethernet module. Download precise-server-cloudimg-amd64.ovf, and also the QCOW2 .img file (220MB) Run qemu-img convert -O vmdk precise-server-cloudimg-amd64-disk1.img precise-server-cloudimg-amd64-disk1.vmdk to convert the image from QCOW2 to VMDK Edit the OVF to change the extension and ovf:size on the <File> element and set ovf:format="http://www.vmware.com/interfaces/specifications/vmdk.html#sparse" on the <Disk> element. Edit the OVF to remove all the <Property> elements because vmplayer was complaining about unrecognised elements. Run the OVF in vmplayer. Alternatively, run ovftool to convert to vmx and run the vmx. Unfortunately I can't log in as "ubuntu" at the prompt because the OVF properties haven't been provided to cloud-init. How should I do this?
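    One commonly used alternative to passing OVF properties (a sketch, not from the original post, and it assumes the image's cloud-init supports the NoCloud datasource) is to feed cloud-init a small seed ISO labelled cidata attached to the VM as a CD-ROM; the password, hostname and file names below are placeholders:

        # Create the two NoCloud seed files (contents written inline via printf).
        printf '%s\n' '#cloud-config' 'password: passw0rd' 'chpasswd: { expire: False }' 'ssh_pwauth: True' > user-data
        printf '%s\n' 'instance-id: iid-local01' 'local-hostname: precise-cloudimg' > meta-data

        # Build the seed ISO; the volume label "cidata" is what the NoCloud datasource looks for.
        genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data

        # Attach seed.iso to the VM as a CD-ROM drive in VMware Player, then boot the image.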

    Read the article

  • How To Back Up a MySQL Database Using phpMyAdmin

    - by Jyoti
    It is very important to back up your MySQL database; you will probably only realize this when it is too late. A lot of web applications use MySQL for storing their content - blogs, and many other things. When all your content consists of HTML files on your web server it is very easy to keep it safe from crashes: you just keep a copy on your own PC and upload it again after the web server is restored. All the content in the MySQL database must also be backed up. If you have spent a lot of time creating the content and it is only stored on the MySQL server, you will feel very bad if it is lost forever. Backing it up once a month or so makes sure you never lose too much of your work in case of a server crash, and it will make you sleep better at night. It is easy and fast, so there is no reason not to do it.
    Step 1: Log into phpMyAdmin on your server.
    Step 2: Select the database that you would like to back up from the drop-down menu called Database.
    Step 3: A new page will load in phpMyAdmin showing the selected database. To proceed with the backup, click on the Export tab.
    Step 4: The options you should select, apart from the default ones, are "Save as file", which saves the file locally to your computer in .sql format, and "Add DROP TABLE", which adds DROP TABLE statements to the backup so that existing tables are replaced when the backup is restored.
    Step 5: Click on the Go button to start the export/backup procedure for your database. A download window will pop up prompting for the exact place where you would like to save the file on your local computer. It is possible that the download starts automatically; this depends on your browser's settings.
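    If you also have shell access to the server, the same backup can be taken from the command line with mysqldump (a sketch; the user, database and file names are placeholders):

        # Dump one database to a .sql file (you will be prompted for the password).
        mysqldump -u username -p database_name > database_backup.sql

        # Restore it later into an (empty) database.
        mysql -u username -p database_name < database_backup.sql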

    Read the article

  • CodePlex Daily Summary for Thursday, June 28, 2012

    CodePlex Daily Summary for Thursday, June 28, 2012Popular ReleasesDesigning Windows 8 Applications with C# and XAML: Chapters 1 - 7 Release Preview: Source code for all examples from Chapters 1 - 7 for the Release PreviewDataBooster - Extension to ADO.NET Data Provider: DataBooster Library for Oracle + SQL Server Beta2: This is a derivative library of dbParallel project http://dbparallel.codeplex.com. All above binaries releases require .NET Framework 4.0 or later. SQL Server support is always build-in (can't be unplugged). The first download (DLL) also requires ODP.NET to connect Oracle; The second download (DLL) also requires DataDirect(3.5) to connect Oracle; The third download (DLL) doesn't support Oracle. Please download the source code if the provider need to be replaced by others. For example ODP.NE...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.57: Fix for issue #18284: evaluating literal expressions in the pattern c1 * (x / c2) where c1/c2 is an integer value (as opposed to c2/c1 being the integer) caused the expression to be destroyed.Visual Studio ALM Quick Reference Guidance: v2 - Visual Studio 2010 (Japanese): Rex Tang (?? ??) http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/12/08/introducing-the-visual-studio-alm-rangers-rex-tang.aspx, Takaho Yamaguchi (?? ??), Masashi Fujiwara (?? ??), localized and reviewed the Quick Reference Guidance for the Japanese communities, based on http://vsarquickguide.codeplex.com/releases/view/52402. The Japanese guidance is available in AllGuides and Everything packages. The AllGuides package contains guidances in PDF file format, while the Everything packag...Visual Studio Team Foundation Server Branching and Merging Guide: v1 - Visual Studio 2010 (Japanese): Rex Tang (?? ??) http://blogs.msdn.com/b/willy-peter_schaub/archive/2011/12/08/introducing-the-visual-studio-alm-rangers-rex-tang.aspx, Takaho Yamaguchi (?? ??), Hirokazu Higashino (?? ??), localized and reviewed the Branching Guidance for the Japanese communities, based on http://vsarbranchingguide.codeplex.com/releases/view/38849. The Japanese guidance is available in AllGuides and Everything packages. The AllGuides package contains guidances in PDF file format, while the Everything packag...SQL Server FineBuild: Version 3.1.0: Top SQL Server FineBuild Version 3.1.0This is the stable version of FineBuild for SQL Server 2012, 2008 R2, 2008 and 2005 Documentation FineBuild Wiki containing details of the FineBuild process Known Issues Limitations with this release FineBuild V3.1.0 Release Contents List of changes included in this release Please DonateFineBuild is free, but please donate what you think FineBuild is worth as everything goes to charity. 
Tearfund is one of the UK's leading relief and de...EasySL: RapidSL V2: Rewrite RapidSL UI Framework, Using Silverlight 5.0 EF4.1 Code First Ria Service SP2 + Lastest Silverlight Toolkit.SOLID by example: All examples: All solid examplesSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1726.406): Use of new version of connection controls for a full support of OSDP authentication mechanism for CRM Online.StreamInsight Samples: StreamInsight Product Team Samples V2.1: These samples correspond to the new StreamInsight APIs introduced with V2.1.Umbraco CMS: Umbraco CMS 5.2: Development on Umbraco v5 discontinued After much discussion and consultation with leaders from the Umbraco community it was decided that work on the v5 branch would be discontinued with efforts being refocused on the stable and feature rich v4 branch. For full details as to why this decision was made please watch the CodeGarden 12 Keynote. What about all that hard work?!?? We are not binning everything and it does not mean that all work done on 5 is lost! we are taking all of the best and m...IIS Express Manager: IIS Express 0.31 B: V0.1B - 04 May, 2012 Initiated Project. V0.2B - 05May, 2012 1. Fixed small bug. Threw error when stop button was pressed in an already stopped application. 2. Removed start and stop button. Double clicking on list items will now stop / start the websites. 3. Improved code readability. 4. Changed Orientation of Buttons in UI. V0.3B - 06May, 2012 1. Complete modification of IISEM and process ID handling 2. IISEM is now capable of reflecting the existing IISExpress processes right from startup...CodeGenerate: CodeGenerate Alpha: The Project can auto generate C# code. Include BLL Layer、Domain Layer、IDAL Layer、DAL Layer. Support SqlServer And Oracle This is a alpha program,but which can run and generate code. Generate database table info into MS WordXDA ROM HUB: XDA ROM HUB v0.9: Kernel listing added -- Thanks to iONEx Added scripts installer button. Added "Nandroid On The Go" -- Perform a Nandroid backup without a PC! Added official Android app!ExtAspNet: ExtAspNet v3.1.8.2: +2012-06-24 v3.1.8 +????Grid???????(???????ExpandUnusedSpace????????)(??)。 -????MinColumnWidth(??????)。 -????AutoExpandColumn,???????????????(ColumnID)(?????ForceFitFirstTime??ForceFitAllTime,??????)。 -????AutoExpandColumnMax?AutoExpandColumnMin。 -????ForceFitFirstTime,????????????,??????????(????????????)。 -????ForceFitAllTime,????????????,??????????(??????????????????)。 -????VerticalScrollWidth,????????(??????????,0?????????????)。 -????grid/grid_forcefit.aspx。 -???????????En...AJAX Control Toolkit: June 2012 Release: AJAX Control Toolkit Release Notes - June 2012 Release Version 60623June 2012 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - The current version of the AJAX Control Toolkit is not compatible with ASP.NET 2.0. The latest version that is compatible with ASP.NET 2.0 can be found here: 11121. - Pages using ...WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.5: Version: 2.5.0.5 (Milestone 5): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. 
Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Add IsInDesignMode property to the WafConfiguration class. WAF: Introduce the IModuleController interface. WAF: Add ...Windows 8 Metro RSS Reader: Metro RSS Reader.v7: Updated for Windows 8 Release Preview Changed background and foreground colors Used VariableSizeGrid layout to wrap blog posts with images Sort items with Images first, text-only last Enabled Caching to improve navigation between framesXDesigner.Development: First release: First releaseBlackJumboDog: Ver5.6.5: 2012.06.22 Ver5.6.5  (1) FTP??????? EPSV ?? EPRT ???????New ProjectsAzure Log Viewer: Simple viewer for Windows Azure Diagnostics logs table.ChsoftDemo: chsoftDemoCodeDDD: CodeDDD is a set of lightweight Application Blocks with the goal of help on the development of Nlayered DDD ApplicationsCOM32: Serial communication componentCustom Email Template SharePoint: This solution customizes the Email text (Subject and the Message) sent to users on granting permssions to various sites.DBD::IngresII: DBD::IngresII is Ingres database driver for Perl.Dure: just for simplifying developments....FormAbstraction: FormAbstration allows you to add data objects to the session variable. You just have to name your fields the same as your properties and you're editing data.HD44780 Protocol Analyzer: HD44780 Protocol Analyzer for the Saleae Logic and Logic 16 analyzers. Supports 8 and 4 bit data transfer modes.IDisposable Azure Service Bus: A utility class that wraps interaction with a ServicBus inside an IDisposable object.IonoWumpus 2012: It's an awesome project! For the Hunt the Wumpus competition!Its My Story: It is All About Me and My Relatives, friends and followers..Kinect Gestures for Mayhem: Kinect Gesture for Mayhem is a module written for Mayhem. It implements hand gestures as events.KinectComposite: Using advanced Image Processing techniques along with environment information collected by Kinect this project attempt to create real-time composites.Lm.Common: aaaaaaOrBUO SVN: OrBUO SA/HS SVN ProjectOutlook Contact Category Correction Tool: Corrects contacts where the categories have been removed and you have a backup of your contacts at a time when your categories were still intact.Persian (Jalali-Shamsi) Calendar for Windows 8: This project is a Persian calendar for Metro UI on windows 8. It is a sample of using live tile on Metro UI.Schwazzle: New activity feed based on MS' famous CMSSDLGmud: ...SimpleWorkflow: Yet another simple workflow in .Net. The primary goal is to provide a quick lightweight dynamically constructed & reusable workflow API. SmartMeterParamSetting: Thread WPF UDPSPWikiTree 0.1.0.a: Purpose: Create an implementation of a JQuery Tree View Derive tree view content from wiki page library NOT rely on installed feature. (Created client-side)Sucdri: This is an Project Management ProjectUltimate Awesomeness, Inc.: It's cool!WallpaperDeleter: Simple program to delete the current background wallpaper image.zwp: wwww

    Read the article

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a format-and-reinstall; all the accounts and config were preserved. The basic setup is a local network with 2 Ubuntu machines (let's call them A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everything is connected with wired RJ-45 Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network and its name is no longer in the machine list in the D-Link router; we can only see its IP address. I can't access A-Ubuntu from the other two by name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC. But from B-Ubuntu and C-MAC, I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of Avahi and mDNS. Those packages are running. I checked multiple config files for Samba, Avahi and mDNS to see whether they look like the examples on the web and similar to what I find on the working B-Ubuntu machine; they are the same. I restarted the Samba and Avahi services multiple times and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the changes I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it could be? Where should I look next?
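    A few checks that are often useful in this situation (a sketch, not from the original post; the hostname A-Ubuntu is taken from it): NetBIOS/Windows-style name lookups are handled by nmbd, while .local names are handled by Avahi, so you can restart and then probe both from another machine.

        # On A-Ubuntu: restart the NetBIOS name daemon and Avahi (Ubuntu 12.04 manages them as services).
        sudo service nmbd restart
        sudo service avahi-daemon restart

        # From B-Ubuntu: check whether the name is resolvable again over each mechanism.
        nmblookup A-Ubuntu
        avahi-resolve --name A-Ubuntu.local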

    Read the article

  • Combine 3D objects in XNA 4

    - by Christoph
    Currently I am writing my thesis for university; the topic I am working on is 3D visualization of hierarchical structures using cone trees. What I want to do is draw a cone and arrange a number of spheres at the bottom of the cone. The spheres should be spaced correctly according to the cone's radius and the number of spheres. As you can imagine, I need a lot of these cone/sphere combinations.

    First attempt: I was able to find some tutorials that helped with drawing cones and spheres.

    Cone:

    public Cone(GraphicsDevice device, float height, int tessellation, string name, List<Sphere> children)
    {
        // prepare children and calculate the children spacing and radius of the cone
        if (children == null || children.Count == 0)
        {
            throw new ArgumentNullException("children");
        }

        this.Height = height;
        this.Name = name;
        this.Children = children;

        // create the cone
        if (tessellation < 3)
        {
            throw new ArgumentOutOfRangeException("tessellation");
        }

        // create a ring of triangles around the outside of the cone's bottom
        for (int i = 0; i < tessellation; i++)
        {
            Vector3 normal = this.GetCircleVector(i, tessellation);

            // add the vertices for the top of the cone
            base.AddVertex(Vector3.Up * height, normal);

            // add the bottom circle
            base.AddVertex(normal * this.radius + Vector3.Down * height, normal);

            // add indices
            base.AddIndex(i * 2);
            base.AddIndex(i * 2 + 1);
            base.AddIndex((i * 2 + 2) % (tessellation * 2));
            base.AddIndex(i * 2 + 1);
            base.AddIndex((i * 2 + 3) % (tessellation * 2));
            base.AddIndex((i * 2 + 2) % (tessellation * 2));
        }

        // create flat triangles to seal the bottom
        this.CreateCap(tessellation, height, this.Radius, Vector3.Down);

        base.InitializePrimitive(device);
    }

    Sphere:

    public void Initialize(GraphicsDevice device, Vector3 qi)
    {
        int verticalSegments = this.Tesselation;
        int horizontalSegments = this.Tesselation * 2;

        // single vertex on the bottom
        base.AddVertex((qi * this.Radius) + this.lowering, Vector3.Down);

        for (int i = 0; i < verticalSegments; i++)
        {
            float latitude = ((i + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;
            float dy = (float)Math.Sin(latitude);
            float dxz = (float)Math.Cos(latitude);

            // create a single ring of latitudes
            for (int j = 0; j < horizontalSegments; j++)
            {
                float longitude = j * MathHelper.TwoPi / horizontalSegments;
                float dx = (float)Math.Cos(longitude) * dxz;
                float dz = (float)Math.Sin(longitude) * dxz;
                Vector3 normal = new Vector3(dx, dy, dz);
                base.AddVertex(normal * this.Radius, normal);
            }
        }

        // finish with a single vertex at the top of the sphere
        AddVertex((qi * this.Radius) + this.lowering, Vector3.Up);

        // create a fan connecting the bottom vertex to the bottom latitude ring
        for (int i = 0; i < horizontalSegments; i++)
        {
            AddIndex(0);
            AddIndex(1 + (i + 1) % horizontalSegments);
            AddIndex(1 + i);
        }

        // fill the sphere body with triangles joining each pair of latitude rings
        for (int i = 0; i < verticalSegments - 2; i++)
        {
            for (int j = 0; j < horizontalSegments; j++)
            {
                int nextI = i + 1;
                int nextJ = (j + 1) % horizontalSegments;

                base.AddIndex(1 + i * horizontalSegments + j);
                base.AddIndex(1 + i * horizontalSegments + nextJ);
                base.AddIndex(1 + nextI * horizontalSegments + j);
                base.AddIndex(1 + i * horizontalSegments + nextJ);
                base.AddIndex(1 + nextI * horizontalSegments + nextJ);
                base.AddIndex(1 + nextI * horizontalSegments + j);
            }
        }

        // create a fan connecting the top vertex to the top latitude ring
        for (int i = 0; i < horizontalSegments; i++)
        {
            base.AddIndex(CurrentVertex - 1);
            base.AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
            base.AddIndex(CurrentVertex - 2 - i);
        }

        base.InitializePrimitive(device);
    }

    The tricky part now is to arrange the spheres at the bottom of the cone. What I tried first was to draw just the cone and then draw the spheres, but since I need a lot of these cones it would be pretty hard to calculate all the positions correctly.

    Second attempt: The second try was to build one object that generates all vertices of the cone and of all the spheres at once, so I could render a cone with all its spheres arranged correctly. After a short debugging session I found that the cone and the first sphere are created, but when it is the second sphere's turn I run into an out-of-range exception at ushort.MaxValue.

    Cone and Spheres:

    public ConeWithSpheres(GraphicsDevice device, float height, float coneDiameter, float sphereDiameter, int coneTessellation, int sphereTessellation, int numberOfSpheres)
    {
        if (coneTessellation < 3)
        {
            throw new ArgumentException(string.Format("{0} is too small for the tessellation of the cone. The number must be greater or equal to 3", coneTessellation));
        }

        if (sphereTessellation < 3)
        {
            throw new ArgumentException(string.Format("{0} is too small for the tessellation of the sphere. The number must be greater or equal to 3", sphereTessellation));
        }

        // set properties
        this.Height = height;
        this.ConeDiameter = coneDiameter;
        this.SphereDiameter = sphereDiameter;
        this.NumberOfChildren = numberOfSpheres;
        // end set properties

        // generate the cone
        this.GenerateCone(device, coneTessellation);

        // generate the spheres
        // vector that defines the Y position of the spheres on the cone's bottom
        Vector3 lowering = new Vector3(0, 0.888f, 0);
        this.GenerateSpheres(device, sphereTessellation, numberOfSpheres, lowering);
    }

    // ------ GENERATE CONE ------
    private void GenerateCone(GraphicsDevice device, int coneTessellation)
    {
        int doubleTessellation = coneTessellation * 2;

        // create a ring of triangles around the outside of the cone's bottom
        for (int index = 0; index < coneTessellation; index++)
        {
            Vector3 normal = this.GetCircleVector(index, coneTessellation);

            // add the vertices for the top of the cone
            base.AddVertex(Vector3.Up * this.Height, normal);

            // add the bottom of the cone
            base.AddVertex(normal * this.ConeRadius + Vector3.Down * this.Height, normal);

            // add indices
            base.AddIndex(index * 2);
            base.AddIndex(index * 2 + 1);
            base.AddIndex((index * 2 + 2) % doubleTessellation);
            base.AddIndex(index * 2 + 1);
            base.AddIndex((index * 2 + 3) % doubleTessellation);
            base.AddIndex((index * 2 + 2) % doubleTessellation);
        }

        // create flat triangles to seal the bottom
        this.CreateCap(coneTessellation, this.Height, this.ConeRadius, Vector3.Down);

        base.InitializePrimitive(device);
    }

    // ------ GENERATE SPHERES ------
    private void GenerateSpheres(GraphicsDevice device, int sphereTessellation, int numberOfSpheres, Vector3 lowering)
    {
        int verticalSegments = sphereTessellation;
        int horizontalSegments = sphereTessellation * 2;

        for (int childCount = 1; childCount < numberOfSpheres; childCount++)
        {
            // single vertex at the bottom of the sphere
            base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Down);

            for (int verticalSegmentsCount = 0; verticalSegmentsCount < verticalSegments; verticalSegmentsCount++)
            {
                float latitude = ((verticalSegmentsCount + 1) * MathHelper.Pi / verticalSegments) - MathHelper.PiOver2;
                float dy = (float)Math.Sin(latitude);
                float dxz = (float)Math.Cos(latitude);

                // create a single ring of latitudes
                for (int horizontalSegmentsCount = 0; horizontalSegmentsCount < horizontalSegments; horizontalSegmentsCount++)
                {
                    float longitude = horizontalSegmentsCount * MathHelper.TwoPi / horizontalSegments;
                    float dx = (float)Math.Cos(longitude) * dxz;
                    float dz = (float)Math.Sin(longitude) * dxz;
                    Vector3 normal = new Vector3(dx, dy, dz);
                    base.AddVertex((normal * this.SphereRadius) + lowering, normal);
                }
            }

            // finish with a single vertex at the top of the sphere
            base.AddVertex((this.GetCircleVector(childCount, this.NumberOfChildren) * this.SphereRadius) + lowering, Vector3.Up);

            // create a fan connecting the bottom vertex to the bottom latitude ring
            for (int i = 0; i < horizontalSegments; i++)
            {
                base.AddIndex(0);
                base.AddIndex(1 + (i + 1) % horizontalSegments);
                base.AddIndex(1 + i);
            }

            // fill the sphere body with triangles joining each pair of latitude rings
            for (int i = 0; i < verticalSegments - 2; i++)
            {
                for (int j = 0; j < horizontalSegments; j++)
                {
                    int nextI = i + 1;
                    int nextJ = (j + 1) % horizontalSegments;

                    base.AddIndex(1 + i * horizontalSegments + j);
                    base.AddIndex(1 + i * horizontalSegments + nextJ);
                    base.AddIndex(1 + nextI * horizontalSegments + j);
                    base.AddIndex(1 + i * horizontalSegments + nextJ);
                    base.AddIndex(1 + nextI * horizontalSegments + nextJ);
                    base.AddIndex(1 + nextI * horizontalSegments + j);
                }
            }

            // create a fan connecting the top vertex to the top latitude ring
            for (int i = 0; i < horizontalSegments; i++)
            {
                base.AddIndex(this.CurrentVertex - 1);
                base.AddIndex(this.CurrentVertex - 2 - (i + 1) % horizontalSegments);
                base.AddIndex(this.CurrentVertex - 2 - i);
            }

            base.InitializePrimitive(device);
        }
    }

    Any ideas how I could fix this?
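    It is hard to say exactly what overflows without seeing the base class, but assuming it follows the XNA GeometricPrimitive sample these tutorials are usually based on (an assumption on my part), two things stand out: that sample stores indices as 16-bit ushort values, so a combined mesh cannot address more than 65,535 vertices, and the per-sphere index loops in GenerateSpheres start again at 0 and 1 + i without adding the number of vertices already in the buffer, so the second sphere's triangles point back into the cone's vertices. Below is a minimal sketch of a builder that keeps int indices and hands out a base-vertex offset per sub-shape; the MeshBuilder name and StartShape method are made up for illustration, and 32-bit indices require the HiDef graphics profile.

    // Sketch only: accumulates vertices and 32-bit indices so a cone plus many
    // spheres can live in one mesh without the 16-bit index ceiling.
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class MeshBuilder
    {
        private readonly List<VertexPositionNormalTexture> vertices = new List<VertexPositionNormalTexture>();
        private readonly List<int> indices = new List<int>();

        public int CurrentVertex
        {
            get { return this.vertices.Count; }
        }

        // Call this before emitting a sub-shape and add the returned offset
        // to every locally computed index of that shape.
        public int StartShape()
        {
            return this.CurrentVertex;
        }

        public void AddVertex(Vector3 position, Vector3 normal)
        {
            this.vertices.Add(new VertexPositionNormalTexture(position, normal, Vector2.Zero));
        }

        public void AddIndex(int index)
        {
            this.indices.Add(index);   // kept as int, no cast to ushort
        }

        public VertexBuffer CreateVertexBuffer(GraphicsDevice device)
        {
            VertexBuffer buffer = new VertexBuffer(device, typeof(VertexPositionNormalTexture), this.vertices.Count, BufferUsage.None);
            buffer.SetData(this.vertices.ToArray());
            return buffer;
        }

        public IndexBuffer CreateIndexBuffer(GraphicsDevice device)
        {
            // 32-bit indices need GraphicsProfile.HiDef; under Reach the mesh must
            // instead stay below 65,535 vertices (e.g. one primitive per sphere).
            IndexBuffer buffer = new IndexBuffer(device, IndexElementSize.ThirtyTwoBits, this.indices.Count, BufferUsage.None);
            buffer.SetData(this.indices.ToArray());
            return buffer;
        }
    }

    With something like this, GenerateSpheres would call int baseVertex = builder.StartShape(); at the top of the childCount loop, emit builder.AddIndex(baseVertex + 0), builder.AddIndex(baseVertex + 1 + i) and so on, and build the buffers once after the loop instead of calling InitializePrimitive once per sphere.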

    Read the article

  • Where to put code documentation?

    - by Patrick
    I am currently using two systems to write code documentation (I am using C++):
    Documentation about methods and class members is added next to the code, using the Doxygen format; on a server, Doxygen is run over the sources so the output can be viewed in a web browser.
    Overview pages (describing a set of classes, the structure of the application, example code, ...) are added to a wiki.
    I personally think this approach works well because the documentation about members and classes is really close to the code, while the overview pages are really easy to edit in the wiki (and it's also easy to add images, tables, ...). A web browser lets you see both kinds of documentation.
    My co-worker now suggests putting everything in Doxygen, because we could then create one big help file with everything in it (using either Microsoft's HTML Help Workshop or Qt Assistant). My concern is that editing Doxygen-style documentation is much harder than editing a wiki, especially when you want to add tables, images, and so on (or is there a 'preview' tool for Doxygen that doesn't require you to generate the documentation before you can see the result?).
    What do big open-source (or closed-source) projects use to write their code documentation? Do they also split it between Doxygen-style comments and a wiki, or do they use another system? What is the most appropriate way to expose the documentation: via a web server/browser, or via a big (several hundred MB) help file? Which approach do you take when writing code documentation?

    Read the article

  • Do you know the minimum builds to create on any branch?

    - by Martin Hinshelwood
    You should always have three builds on your team project. These should be set up and tested using an empty solution before you write any code at all.
    Figure: Three builds named in the format [TeamProject].[AreaPath]_[Branch].[Gate|CI|Nightly] for every branch.
    These builds should use the same XAML build workflow; however, you may set them up to run different sets of tests depending on the time it takes to run a full build.
    Gate – Only needs to run the smallest set of tests, but should run most if not all of the unit tests. This build runs before developers are allowed to check in.
    CI – Should run all unit tests and all of the automated UI tests. It runs after a developer checks in.
    Nightly – Should run all of the unit tests, all of the automated UI tests, and all of the load and performance tests. The nightly build is time-consuming and runs only once a night. Packaging of your product for testing the next day may be done at this stage as well.
    Figure: You can control which tests are run and what data is collected while they are running.
    Note: We do not run all the tests every time because some of them take a long time to run, but ALL tests should be run overnight.
    Note: If you have a really large project with thousands of tests, including long-running load tests, you may need to add a weekly build to the mix.
    Figure: Bad example – you can't tell what these builds do if they are in a larger list.
    Figure: Good example – you know exactly what project, branch and type of build these are for.
    Technorati Tags: SSW, SSW Rules, VS2010, VS ALM, Team Build 2010, Team Build

    Read the article
