Search Results

Search found 12765 results on 511 pages for 'format'.


  • Convert MP3 to AAC, FLAC to AAC (.NET/C#) FREE :)

    - by PearlFactory
    So I was tasked with looking at converting 10 million tracks from MP3 320k to AAC, and also converting from MP3 320k to MP3 128k. After a bit of hunting around, the tool you need to use is FFMPEG (download the x64 Windows build). Also, for the best results, get the Nero AAC Encoder. Now the command lines.

    STEP 1 (from FLAC):

        ffmpeg -i input.flac -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    or (from MP3):

        ffmpeg -i input.mp3 -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of output.m4a

    Now the output.m4a is an intermediate state that we put an AAC wrapper on via FFMPEG.

    STEP 2:

        ffmpeg -i output.m4a -vn -acodec copy final.aac

    Done :) There are a couple of options with the FFMPEG library, in that we can import the libraries and drive the API directly for the same result; FFMPEG has this support. You can get the relevant libraries from here. They even have the source if you are that keen :-) In this case I am going to wrap the command lines in C# external process threads. (For the app that I am building to convert the 10 million tracks, there is a complex multithreaded app to support this novel code.)

        // Arrange metadata about the call.
        // Note: the pipe (|) is a shell feature, so the command line is run via cmd.exe.
        Process myProcess = new Process();
        ProcessStartInfo p = new ProcessStartInfo();
        string sArgs = string.Format(
            "/c ffmpeg -i {0} -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of {1}",
            inputfile, outputfile);
        p.FileName = "cmd.exe";
        p.CreateNoWindow = true;
        p.RedirectStandardOutput = true;
        p.UseShellExecute = false;
        // Execute
        p.Arguments = sArgs;
        myProcess.StartInfo = p;
        myProcess.Start();
        // Read details about the call (drain stdout before waiting, to avoid a deadlock)
        myProcess.StandardOutput.ReadToEnd();
        myProcess.WaitForExit();

    Now in this case we would execute a second call using the same code but with different sArgs to put the AAC wrapper on the m4a file. That's it. So if you need to do some conversions of any kind for your ASP.NET sites/apps, this is a great start and super fast. With conversion times of around 2-3 seconds, all of this can be done on the fly :)

    Justin Oehlmann

    ref: StackOverflow.com
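
    As an aside, for bulk runs the two steps can be wrapped into one small helper. A minimal sketch, not from the original post; it assumes ffmpeg.exe and neroAacEnc.exe are on the PATH, and the names TrackConverter, ConvertTrack, and RunCommand are invented for the example:

        using System.Diagnostics;
        using System.IO;

        static class TrackConverter
        {
            // Step 1 (transcode to .m4a) followed by step 2 (rewrap as .aac).
            public static void ConvertTrack(string input, string finalAac)
            {
                string m4a = Path.ChangeExtension(finalAac, ".m4a");
                RunCommand(string.Format(
                    "/c ffmpeg -i \"{0}\" -f wav - | neroAacEnc -ignorelength -q 0.5 -if - -of \"{1}\"",
                    input, m4a));
                RunCommand(string.Format("/c ffmpeg -i \"{0}\" -vn -acodec copy \"{1}\"", m4a, finalAac));
            }

            static void RunCommand(string args)
            {
                var psi = new ProcessStartInfo("cmd.exe", args)
                {
                    CreateNoWindow = true,
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };
                using (var proc = Process.Start(psi))
                {
                    proc.StandardOutput.ReadToEnd(); // drain output so the pipe never fills up
                    proc.WaitForExit();
                }
            }
        }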


  • Best option for PDF viewer embedded in web app

    - by RationalGeek
    I have a web app that needs to be able to display a PDF. It needs to allow the user to page through the PDF, and my application needs to know which page is currently being viewed, because other aspects of the web app will change based on the current page. Ideally it would not depend on the client having Adobe Reader, but I could probably support that dependency. What are my best options for this? My application stack consists of ASP.NET 4, along with, optionally, Silverlight 5. I could also use something client-side, built with JavaScript / HTML, if such a thing exists. I found ComponentOne's offering for this, and that seems like the leading candidate at this point, but I want to know if there are other options I should consider. Edit: Per Fosco's comment, converting the PDF to another format (such as HTML) might be an option, as long as I could tie parts of the converted document back to the original PDF page numbers. Another note: this has to run entirely on our servers. It would not be acceptable to use a third-party service to view the PDFs.


  • SSL Certificate Works in Monit - But Not in Keystore

    - by Bart Silverstrim
    I have a situation where there's a keystore file with the various root/intermediate certificates stored in it in a way that seems to work for most browsers. The problem is that when mobile browsers hit it, there's a break in the chain and they complain. I used the SSL checker at http://www.sslshopper.com/ssl-checker.html and it states that "The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate." So I'm assuming the desktop browsers already have the intermediate certs and can make the chain connections, while the mobile browsers can't. The thing is that I had used Portecle to export certificates from the keystore and cobble them together into a .PEM certificate to run the Monit utility. When I check that application with the SSL checker, it works fine! The person who originally created the keystore said he couldn't follow the SSL provider's directions for creating the keystore, because he created the CSR using openssl, so the cert and private key had to be converted to DER format and imported with importkey. The directions he found online had importkey write to a fixed keystore file, erasing anything already in that file if it existed. So is there a way to take the certificate I created for Monit and create a working keystore for the Tomcat website? What would be causing the chain to be broken in the current keystore, but work for Monit? I have the SSL cert provider's intermediate and cross certificates, and the website's certificate, but what else would I need to create a working chain of certs for a keystore?


  • CodeGolf : Find the Unique Paths

    - by st0le
    Here's a pretty simple idea: in this pastebin I've posted some pairs of numbers. These represent nodes of a directed connected graph. The input to stdin will be of the form (they'll be numbers; I'm using letters in this example):

        c d
        q r
        a b
        d e
        p q

    so x y means x is connected to y (not vice versa). There are 2 paths in that example: a->b->c->d->e and p->q->r. You need to print all the unique paths from that graph. The output should be of the format

        a->b->c->d->e
        p->q->r

    Notes: You can assume the numbers are chosen such that one path doesn't intersect another (one node belongs to one path). The pairs are in random order. There can be more than one path, and the paths can be of different lengths. All numbers are less than 1000. If you need more details, please leave a comment. I'll amend as required.

    Shameless plug: for those who enjoy code golf, please commit at Area51 for its very own site :) (for those who don't enjoy it, please support it as well, so we'll stay out of your way...)
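
    Back to the task itself: an ungolfed reference sketch in C#, assuming whitespace-separated pairs on stdin, one pair per line:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class UniquePaths
        {
            static void Main()
            {
                var next = new Dictionary<string, string>();   // x -> y for every "x y" pair
                var hasIncoming = new HashSet<string>();       // every y that appears as a destination
                string line;
                while ((line = Console.ReadLine()) != null && line.Trim() != "")
                {
                    var parts = line.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                    next[parts[0]] = parts[1];
                    hasIncoming.Add(parts[1]);
                }
                // A path starts at a node that never appears as a destination.
                foreach (var start in next.Keys.Where(k => !hasIncoming.Contains(k)))
                {
                    var path = new List<string> { start };
                    var cur = start;
                    string nxt;
                    while (next.TryGetValue(cur, out nxt))
                    {
                        path.Add(nxt);
                        cur = nxt;
                    }
                    Console.WriteLine(string.Join("->", path));
                }
            }
        }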


  • How to facilitate code reviews in a small team for embedded software?

    - by Adam Lewis
    Short Question

    Does a cost-effective tool / workflow exist to facilitate code reviews in a small team? More specifically, a small team that relies on post-commit code reviews.

    Background

    Our team currently consists of 3 full-time and 1 part-time software engineers, with plans to hire more in the near future. Due to our team size and the volume of projects we all must juggle, the pre-commit workflow that major tools (such as Review Board and Code Collaborator) use is not attainable for us right now. The best we can do at the moment is to perform post-commit reviews before major releases or as time permits. Nearly all of our projects are hosted on RepositoryHosting.com (which I highly recommend) and contain a mixture of SVN and Git repositories.

    Current Thoughts

    Since I cannot find a tool that fits our needs right now, I am turning to the TRAC instance that is built into our repository's site. At the moment we use TRAC to file tickets and track milestones, so to me this seems like a natural fit for code review results as well. The direction I am heading in right now is to log all of the bugs and comments in a spreadsheet, do some macro magic to get them into a format that works with TRAC's ticket-import method, and use TRAC's ticketing system to create the action items / bug reports automatically. The automatic ticket generation is darn near a must-have; adding bugs and comments one at a time from a web GUI is really painful.

    Secondary Question

    If this workflow makes sense, is there a good / standard template to use as a code review log?
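
    On the spreadsheet-to-TRAC step above: the massaging can be a tiny script rather than macros. A rough C# sketch, assuming a CSV export with file, line, and comment columns; the output column names are an assumption, so check what your TRAC ticket-import plugin actually expects:

        using System.IO;
        using System.Linq;

        class ReviewLogToTickets
        {
            static void Main(string[] args)
            {
                // Naive CSV handling: fine for a simple review log without embedded commas.
                var rows = File.ReadAllLines(args[0]).Skip(1);   // skip the header row
                using (var output = new StreamWriter("tickets.csv"))
                {
                    output.WriteLine("summary,description,type,component");
                    foreach (var fields in rows.Select(r => r.Split(',')))
                    {
                        // fields: 0 = file, 1 = line, 2 = reviewer comment
                        output.WriteLine("\"Review: {0}:{1}\",\"{2}\",defect,review",
                            fields[0], fields[1], fields[2]);
                    }
                }
            }
        }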


  • I have a bad install of Windows on another hard drive and it won't let me install a fresh copy. How do I fix it in Ubuntu 12.04?

    - by Dana LaBerge
    Basically, there was a security issue in the drivers for my graphics card. It was a 64-bit card and I installed 32-bit Windows. Apparently, before SP1 was available, which fixed that issue, 6 trojan horses got in. They stopped SP1 from installing. After going through the wringer several times, I finally talked to a person who knew the problem. It was something about how the drivers tried to transfer between the 32-bit OS and the 64-bit card that left me open. Ever since, my computer has been slow and has had weird issues. Like tinypic wouldn't ever load. Also, certain programs wouldn't install. So I eventually talked to the dude who knew the problem, and he took the reins and did some diagnostics. He told me that to fix it I have to format the hard drive and do a fresh install. I'm okay with that, because I was planning on it anyway, to upgrade to the 64-bit version. The problem is, how do I do that? I have the disk to install the new copy, but when I go to install it, it tells me I can't and to check the log file. However, I don't know where that log file is, and it wiped out my install of Windows. How do I find the file, and, as a different route to the same goal, how do I zero out the drive from Ubuntu 12.04? (I installed the 64-bit version just the other day.)


  • Link to article on website libraries

    - by acidzombie24
    I just started another website, and it has taken me 30 minutes to copy/paste my other website and delete stuff, because I don't have a template. There are lots of features I copied over that I haven't seen in libraries/templates, but I don't really know any libraries/templates. This site is ASP.NET. Some things I have: a string.Format-style helper that escapes strings for HTML (so <hi> is text instead of a tag); utilities for adding or removing items in the URL query; a class to take an ASP.NET error and log it or convert it into a row in a db (I know about ELMAH, but during development on my last site it wasn't Mono compatible); a mini AJAX library for success/fail/redirect/etc.; and anything else I would use in every site. I don't like my (library) design, because I wasn't expecting to do more than 2-3 websites, and I am on my 5th. I don't know proper ASP.NET either, so what is an article that explains how to make a great library/template for websites?
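
    For the HTML-escaping format helper mentioned above, a minimal sketch of the idea; the class and method names are invented here, and the assumed behavior is that every format argument is HTML-encoded before substitution:

        using System;
        using System.Linq;
        using System.Web;

        static class Html
        {
            // Like string.Format, but HTML-encodes each argument first.
            public static string Format(string format, params object[] args)
            {
                object[] encoded = args
                    .Select(a => (object)HttpUtility.HtmlEncode(Convert.ToString(a)))
                    .ToArray();
                return string.Format(format, encoded);
            }
        }

        // Usage: Html.Format("<p>{0}</p>", "<hi>") returns "<p>&lt;hi&gt;</p>"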


  • I can't install Ubuntu on my Dell Inspiron 15R at all

    - by Kieran Rimmer
    I'm attempting to install Ubuntu 12.04 LTS, 64-bit, onto a Dell Inspiron 15R laptop. I've shrunk down one of the Windows partitions and even used GParted to format the vacant space as ext4. However, the install disk simply does not present any options when it comes to the partitioning step. What I get is a non-responsive blank table.

    As well as the above, I've changed the BIOS settings so that USB emulation is disabled (as per "Can't install on Dell Inspiron 15R"), and changed the SATA Operation setting to all three possible options. Anyway, the install CD will bring up the trial version of Ubuntu, and if I open a terminal and type sudo fdisk -l, I get this:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xb4fd9215

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1              63       80324       40131   de  Dell Utility
        Partition 1 does not start on physical sector boundary.
        /dev/sda2   *       81920    29044735    14481408    7  HPFS/NTFS/exFAT
        /dev/sda3        29044736  1005142015   488048640    7  HPFS/NTFS/exFAT
        /dev/sda4      1005154920  1953520064   474182572+  83  Linux

        Disk /dev/sdb: 32.0 GB, 32017047552 bytes
        255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xb4fd923d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1            2048    16775167     8386560   84  OS/2 hidden C: drive

    If I type sudo parted -l, I get:

        Model: ATA WDC WD10JPVT-75A (scsi)
        Disk /dev/sda: 1000GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      32.3kB  41.1MB  41.1MB  primary  fat16        diag
         2      41.9MB  14.9GB  14.8GB  primary  ntfs         boot
         3      14.9GB  515GB   500GB   primary  ntfs
         4      515GB   1000GB  486GB   primary  ext4

        Model: ATA SAMSUNG SSD PM83 (scsi)
        Disk /dev/sdb: 32.0GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      1049kB  8589MB  8588MB  primary

        Warning: Unable to open /dev/sr0 read-write (Read-only file system).
        /dev/sr0 has been opened read-only.
        Error: Can't have a partition outside the disk!

    I've also tried Kubuntu 12.04 and Linux Mint install disks, with the same problem. I'm completely lost here. Cheers, Kieran


  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?
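
    Purely as an illustration of the Resync idea described above (the article's actual implementation is a BizTalk orchestration, and all the names below are invented for this sketch), the watchdog pattern in C# might look like:

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class SyncRecord
        {
            public string IdA, IdB, EntityType, Operation;
            public bool IsSyncStarted, IsSyncFinished;
            public int Attempts;
        }

        class ResyncWatchdog
        {
            const int MaxAttempts = 3;
            readonly List<SyncRecord> storage = new List<SyncRecord>(); // stands in for the Resync database

            public void Run()
            {
                while (true)
                {
                    foreach (var rec in storage)
                    {
                        if (rec.IsSyncStarted && !rec.IsSyncFinished && rec.Attempts < MaxAttempts)
                        {
                            rec.Attempts++;
                            Resubmit(rec); // re-send id.A so B can answer with id.A+id.B again
                        }
                    }
                    Thread.Sleep(TimeSpan.FromMinutes(1)); // periodic scan for "unfinished" records
                }
            }

            void Resubmit(SyncRecord rec)
            {
                // B must tolerate duplicate id.A messages, so resubmitting is safe.
                Console.WriteLine("Resubmitting {0} {1} for id.A={2}",
                    rec.Operation, rec.EntityType, rec.IdA);
            }
        }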


  • Ubuntu 14.04 ATI Radeon open source driver with distorted video playback

    - by Bwog
    Video in VLC or SMPlayer is often in black and white, with washed-out colors in the wrong place (translated considerably). Moreover, the last video image is often visible when a new video is started, and it persists as long as the new video is running. Colors have a recognizable shape (e.g. a person's clothes or face) but can be obviously incorrect (e.g. green or purple faces). This is independent of the format of the videos (mp4, mkv, wmv). Sometimes all problems disappear when a new video is started, but often only a reboot restores normal video. Ubuntu was upgraded to 14.04 and is fully updated.

    Processor: Intel Core i5-2500K CPU. GPU: AMD/ATI Radeon HD 7950. Graphics: Gallium 0.4 on AMD Tahiti. X server: the AMD/ATI display driver wrapper from xserver-xorg-video-ati.

        :~$ Xorg -version
        X.Org X Server 1.15.1
        Release Date: 2014-04-13
        X Protocol Version 11, Revision 0
        Build Operating System: Linux 3.2.0-37-generic x86_64 Ubuntu
        Current Operating System: Linux Mare 3.13.0-29-generic #53-Ubuntu SMP Wed Jun 4 21:00:20 UTC 2014 x86_64
        Kernel command line: BOOT_IMAGE=/boot/vmlinuz-3.13.0-29-generic.efi.signed root=UUID=number ro
        Build Date: 16 April 2014 01:36:29PM
        xorg-server 2:1.15.1-0ubuntu2
        Current version of pixman: 0.30.2

        ~$ lspci | grep VGA
        01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Tahiti PRO [Radeon HD 7950/8950 OEM / R9 280]

    Question: how do I restore regular video playback?


  • Windows Azure Virtual Machines - Make Sure You Follow the Documentation

    - by BuckWoody
    To create a Windows Azure Infrastructure-as-a-Service Virtual Machine you have several options. You can simply select an image from a "Gallery", which includes Windows or Linux operating systems, or even a Windows Server with pre-installed software like SQL Server. One of the advantages of Windows Azure Virtual Machines is that the VM is stored in a standard Hyper-V format, with the base hard disk as a VHD. That means you can move a Virtual Machine from on-premises to Windows Azure, and then move it back again. You can even use a simple series of PowerShell scripts to do the move, or automate it with other methods. And this leads to another very interesting option for deploying systems: you can create a server VHD, configure it with the software you want, and then run the "SYSPREP" process on it. SYSPREP is a Windows utility that essentially strips the identity from a system; when you restart that system, it asks for a few details, such as what you want to call it, and so on. By doing this, you can essentially create your own gallery of systems, whether for testing, development servers, demo systems, or more. You can learn more about how to do that here: http://msdn.microsoft.com/en-us/library/windowsazure/gg465407.aspx

    But there is a small issue you can run into that I wanted to make you aware of. Whenever you deploy a system to Windows Azure Virtual Machines, you must meet certain password complexity requirements. However, when you build the machine locally and SYSPREP it, you might not choose a strong password for the account you use to Remote Desktop to the machine. In that case, you might not be able to reach the system after you deploy it. Once again, the key here is reading through the instructions before you start. Check out the link I showed above, and this link: http://technet.microsoft.com/en-us/library/cc264456.aspx to make sure you understand what you want to deploy.


  • MyPaint is an Open-Source Graphics App for Digital Painters

    - by Asian Angel
    Are you looking for a terrific graphics app to use for original painting and artwork creation on your computer? Whether it is for you or the kids, MyPaint is an app that you should definitely have on hand for when those artistic moods come along. For our example we chose to install MyPaint on Ubuntu 10.10; you can easily find it in the Ubuntu Software Center by doing a quick search. Once you have it installed, all that is left to do is decide whether you want to add additional brushes (link provided below) and then start having fun creating your next work of art. Here are some of MyPaint's wonderful features:

    - Exists for several platforms (Linux, Windows, and Mac OS X)
    - Supports pressure-sensitive graphics tablets
    - Extensive brush creation and configuration options
    - Unlimited canvas (you never have to resize)
    - Basic layer support
    - Comes with a large brush collection, including charcoal and ink, to emulate real media

    MyPaint is fun to use and can quickly become very addictive as you experiment during the creation process!

    Links:
    - MyPaint Homepage
    - Download Additional Brushes for MyPaint
    - Download the GIMP Plugin for the OpenRaster File Format


  • "Failed to create swap space" error during installation

    - by Welsh Heron
    I've been trying to install Ubuntu for the past two days or so, but I've been running into a problem: every time I run the installation program on the LiveCD, I always get the same (or a very similar) error:

        Failed to create swap space
        The creation of swap space in partition #3 of SCSI5 (0,0,0) (sda) failed.

    So far, I've run DBAN (Darik's Boot and Nuke) on my HDD once, to make absolutely sure that everything on it had been erased. Then I simply put in the LiveCD and let it run the automated install. I get the above error directly after I tell it to automatically partition the HDD (it will work for a second or so, then this will pop up), forcing me back to the screen that lets me choose whether I want to automatically or manually partition the HDD. Well, after failing to install that way, I did a little research, learned enough about partitioning Linux systems, and used the 'Manual partitioning' option. I partitioned the HDD as follows (it's a 1TB drive):

        /home    - (the rest) - ext2
        /        - 20GB       - ext2
        /boot    - 100MB      - ext2
        /swap    - 8GB
        /EFIboot - 40MB

    The only difference when I tried this method was that I got THIS message:

        Failed to create swap space
        The creation of swap space in partition #2 of SCSI5 (0,0,0) (sda) failed.

    Basically, the only difference was that there was now a '2' instead of a '3'. If I may ask, what exactly am I doing wrong? I've tried looking around the internet (that's basically all I've done for the last two days), but no one seems to have the same problem that I have, and I've tried most of the solutions for similar problems (DBAN, formatting partitions in ext2 format, etc.). The only thing I haven't tried is using the terminal to manually partition the HDD... and I actually DID try to do this, but I wasn't able to get past su's password prompt, so I wasn't able to use the terminal. Thank you for your help in advance. ~Welsh


  • Where to find and install ASUS motherboard drivers for Linux

    - by Dan
    This is my second day ever with Linux, and I had one heck of a time getting the nVidia drivers installed and working. Please keep in mind I am very new and just starting out. I currently have an ASUS P8Z68-V LE motherboard and I'm not sure if the drivers are installed. Where would I go to find that out? I am using GNOME as my UI. If I don't have the drivers installed, where would I go to get them? The ASUS site only gives me options to download for various Windows OSes, DOS, and "other" (in .ROM format). Which should I take, and how should I install it? I'm mostly looking for audio drivers. A lot of the music I play, either on YouTube or with VLC, has a faint crackling in the background on Ubuntu, which gets much worse the higher I turn the volume up. Could this be something other than the drivers? I doubt it's the hardware, since the sound seems fine on Windows. I am currently running 12.04.


  • Won't boot after installing Ubuntu 12.04 successfully

    - by Matt
    I installed 12.04 successfully and rebooted (I took out my installation CD), and selected the newly installed Linux partition to boot from rEFIt. Then it just comes up with this error message:

        Error loading operating system

    which could not be more vague. Take that back. I guess it could say just "error." I don't even get to the boot prompt, which limits what I can do. I cannot boot into rescue mode. I tried Boot-Repair, but it took more than 24 hours to check the system configuration, so I gave up on that. I'm running a Mac Mini whose main OS is Mac OS X 10.5.8. I had an alternate OS, Windows XP, installed, which was virtually destroyed by this Linux installation. I sacrificed my working, speedy Windows partition for something that won't even boot up. What was I thinking? My Mac partition is slow as crap. I've tried installing 12.04 many times with two different disks. The first time, I had one partition for Linux, then I had 2 (swap + main), then 3 (swap, main, and BIOS), then 4, which is what I have now (swap, main, BIOS, and boot/grub). The only way I could get through the install without GRUB giving up was if I created a separate partition for it. Which was pointless, because it did install successfully, but it still doesn't boot up at all. Could rEFIt be booting off of the BIOS or one of the other partitions? Because if that's the case, there is no alternative, because Mac itself without rEFIt refuses to recognize a Linux ext4 (or ext2 or ext3) format partition. Apple always has to make everything so difficult. If I'm not mistaken, rEFIt is the only application of its kind for Mac. I can boot off of the CD back to the install/try screen. This is extremely upsetting; can you guys help? Please?


  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my virtual Windows and did the product registration using the application present on it. I have a few questions on how best to use it.

    The drive by default has some files and folders: setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files from the drive? It has a dashboard for Windows which allows you to tune power options, test the drive, etc. Will I be able to use the dashboard if I put all these files back and mount the drive on Windows again?

    I want to partition and format the hard disk. The data I'd like to store:

    - Around 10 to 20 GB files - VirtualBox images
    - Around 4 GB files - DVD images
    - Other movies and personal files

    What is the best filesystem for storing very large files, like 10 to 20 GB, so that they are written and accessed fast and the drive's capacity is used well? If I leave one of the partitions as NTFS and format the others with different filesystems, will the drive still mount on Windows, and will I be able to use the device's dashboard?

    Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.


  • "Accumulate" buffer results in XNA4?

    - by Utkarsh Sinha
    I'm trying to simulate a "heightmap" buffer in XNA 4.0, but the results don't look correct. Here's what I'm hoping to achieve: http://www.youtube.com/watch?feature=player_detailpage&v=-Q6ISVaM5Ww#t=517s (8:38). From what I understand, here are the steps to reach there:

    1. Pass the height buffer + the current entity's heightmap
    2. Generate a stencil and update the height buffer
    3. Render sprite + stencil

    For now, I'm just trying to get the height buffer to work. So here's the problem. Inside the draw loop, I do the following:

    1. Create a new render target & set it
    2. Draw the heightmap with a SpriteBatch (no shaders)
    3. graphicsDevice.SetRenderTarget(null)
    4. Draw the render target with a SpriteBatch

    I expected to see all entities' heightmaps, but only the last entity's heightmap is visible. Any hints on what I'm doing wrong? Here's the code inside the draw loop:

        RenderTarget2D tempDepthStencil = new RenderTarget2D(graphicsDevice,
            graphicsDevice.Viewport.Width, graphicsDevice.Viewport.Height,
            false, graphicsDevice.DisplayMode.Format, DepthFormat.None);
        graphicsDevice.SetRenderTarget(tempDepthStencil);

        // Gather depth information
        SpriteBatch depthStencilSpriteBatch = new SpriteBatch(graphicsDevice);
        depthStencilSpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
            SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullCounterClockwise);
        depthStencilSpriteBatch.Draw(texHeightmap, pos, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1);
        depthStencilSpriteBatch.End();

        graphicsDevice.SetRenderTarget(null);

        SpriteBatch b1 = new SpriteBatch(graphicsDevice);
        b1.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, null);
        b1.Draw((Texture2D)tempDepthStencil, Vector2.Zero, null, Color.White, 0, Vector2.Zero, 1, spriteEffects, 1);
        b1.End();
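
    Not from the original question, but one likely culprit worth sketching: in XNA 4, a render target's contents are discarded each time it is set (RenderTargetUsage.DiscardContents is the default), and here a fresh target is also allocated per entity per frame. A hedged sketch of the alternative, written as members of the Game subclass: allocate one persistent height buffer once (e.g. in LoadContent) and draw every entity's heightmap into it. The Entity type with Heightmap/Position members is invented for the example:

        RenderTarget2D heightBuffer;

        protected override void LoadContent()
        {
            heightBuffer = new RenderTarget2D(GraphicsDevice,
                GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height,
                false, GraphicsDevice.DisplayMode.Format, DepthFormat.None,
                0, RenderTargetUsage.PreserveContents); // keep contents across SetRenderTarget calls
        }

        void DrawHeightmaps(SpriteBatch spriteBatch, IEnumerable<Entity> entities)
        {
            GraphicsDevice.SetRenderTarget(heightBuffer);
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
            foreach (var e in entities)            // every entity into the SAME target
                spriteBatch.Draw(e.Heightmap, e.Position, Color.White);
            spriteBatch.End();
            GraphicsDevice.SetRenderTarget(null);
        }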


  • apt-get does not work with proxy

    - by tommyk
    For the command sudo apt-get update I get the following error:

        W: Failed to fetch http://ch.archive.ubuntu.com/ubuntu/dists/maverick-updates/multiverse/binary-i386/Packages.gz
        407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )

    I am running Ubuntu 10.10, installed on Windows XP using VirtualBox. For internet connections I am using a proxy server with authentication. I tried to use the gnome-network-proxy tool to set the proxy settings system-wide. After that, /etc/environment had been updated with an http_proxy variable in the format http://my_proxy:port/; there were no authentication data. I checked this with Firefox. The browser asked me for a login and password, and everything was working fine. Unfortunately, that was not the case for apt-get. I have also tried to do as described here; unfortunately it does not work. Might it be somehow related to the fact that the proxy is in a Windows domain? Any ideas?

    EDIT: My proxy name is http-proxy. Is '-' a special character here?


  • SD card reader does not show up in Ubuntu

    - by shantanu
    I bought an Acer Aspire 4250. It has a built-in SD card reader, but it is not working. Nothing shows up in /media or fdisk, but something appears in dmesg.

    dmesg:

        new high-speed USB device number 3 using ehci_hcd
        [  127.396733] scsi5 : usb-storage 2-2:1.0
        [  128.526562] scsi 5:0:0:0: Direct-Access     Multiple Card  Reader     1.00 PQ: 0 ANSI: 0
        [  128.532512] sd 5:0:0:0: Attached scsi generic sg2 type 0
        [  129.008110] ohci_hcd 0000:00:12.0: PCI INT A disabled
        [  129.032083] ohci_hcd 0000:00:13.0: PCI INT A disabled
        [  129.056411] ohci_hcd 0000:00:16.0: PCI INT A disabled
        [  129.338026] sd 5:0:0:0: [sdb] Attached SCSI removable disk
        [  129.808328] ohci_hcd 0000:00:14.5: PCI INT C disabled
        [  167.728616] usb 2-2: USB disconnect, device number 3
        [  169.872284] ehci_hcd 0000:00:13.2: PCI INT B disabled
        [  169.872340] ehci_hcd 0000:00:13.2: PME# enabled

    fdisk -l:

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0x0006bc6d

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048    48828415    24413184    7  HPFS/NTFS/exFAT
        /dev/sda2        48828416    50829311     1000448   82  Linux swap / Solaris
        /dev/sda3        50829312    99657727    24414208   83  Linux
        /dev/sda4        99659774   625141759   262740993    5  Extended
        Partition 4 does not start on physical sector boundary.
        /dev/sda5        99659776   275439615    87889920    7  HPFS/NTFS/exFAT
        /dev/sda6       275441664   451221503    87889920    7  HPFS/NTFS/exFAT
        /dev/sda7       451223552   625141759    86959104    7  HPFS/NTFS/exFAT

    I just found another problem: I formatted the last three partitions as ext4 with Disk Utility, but they are showing as NTFS/exFAT in fdisk. :-(


  • Manager Self Service at your Fingertips

    - by Elaine Clement
    Last week we released new and improved Manager Self Service capabilities in PeopleSoft HCM 9.1. We delivered a new Manager Dashboard, streamlined many Manager Self Service transactions, provided new Pivot Grid capabilities, and implemented one-click Related Actions accessible from multiple places, all with the goal of improving every Manager's self service experience.

    (Image: Manager Dashboard)

    These new capabilities have the potential to significantly impact an organization's bottom line, and here is why.

    Increased Efficiency
    The Manager Dashboard provides a 'one-stop shop' for your Managers, with all of the key data they need consolidated into a single view. Alerts notifying managers of important tasks are immediately viewable and actionable. Administrators can configure the dashboard to include the most important pagelets needed for their organization, and Managers can personalize it to fit their personal way of conducting their tasks. The Related Actions feature further improves the ease with which Managers get their work done by providing one-click access to Manager Self Service transactions.

    Increased Job Satisfaction
    The streamlined Manager transactions, Related Actions, and the new Manager Dashboard provide an enhanced user experience. Managers are able to quickly get in, get the information they need, complete their transactions, and get out. Managers can spend their time focusing on getting the business results they need instead of on their day-to-day HR tasks.

    Enhanced Decision Support
    Administrators can ensure the information and analytics they want their Managers to use are available from the Manager Dashboard, establishing best business practices. Additional pivot grids relevant to your own organization can be added to the Manager Dashboard. With this easy access to the relevant information in an easily understood format, Managers can make the right business decisions needed to improve their team and their team's productivity.

    For more details on the Manager Dashboard and some of the other newly posted features, such as the new Talent Summary, check out this video and others: Oracle PeopleSoft Webcasts


  • 10gR2 Transportable Tablespaces Certified for EBS 11i

    - by Steven Chan
    Database migration across platforms of different "endian" (byte ordering) formats using the Cross-Platform Transportable Tablespaces (XTTS) process is now certified for Oracle E-Business Suite Release 11i (11.5.10.2) with Oracle Database 10g Release 2. This process is sometimes also referred to as transportable tablespaces (TTS).

    What is the Cross-Platform Transportable Tablespace feature?

    The Cross-Platform Transportable Tablespace feature allows users to move a user tablespace across Oracle databases. It's an efficient way to move bulk data between databases. If the source platform and the target platform are of different endianness, then an additional conversion step must be done on either the source or target platform to convert the tablespace being transported to the target format. If they are of the same endianness, then no conversion is necessary and tablespaces can be transported as if they were on the same platform.

    Moving data using transportable tablespaces can be much faster than performing either an export/import or unload/load of the same data. This is because transporting a tablespace only requires copying the datafiles from the source to the destination and then integrating the tablespace structural information. You can also use transportable tablespaces to move both table and index data, thereby avoiding the index rebuilds you would have to perform when importing or loading table data.


  • RTFMobile

    - by ultan o'broin
    It may seem obvious, but it's worth stating again: the idea that mobile users are going to read lots of user assistance on their devices is just wrong. So Jakob Nielsen's post "Mobile Content Is Twice as Difficult" serves as a timely reminder for anyone thinking of putting manuals, as a form of user assistance, onto mobile phones. There is also an excellent post on UXMag.com explaining that one of the ways to screw up with your iPhone app is to throw an old-style user manual into the user experience: 10 Surefire Ways to Screw Up Your iPhone App.

    (Image copyright and referenced from UX Magazine 2010)

    Instead, user assistance alternatives, if any at all, include one-time tours, graphics, in-context instructions, and so on. Not so sure that importing "humor" and "personality" works so well in the enterprise app space, myself. However, the message is clear: iPhone users don't read manuals. Great message. Users will figure it out, and if they can't, well then your app's UX is a problem and the app will fail. Shame some teams are obsessed with figuring out ways to port existing manuals to mobile platforms without any thought for the UX. Razorfish's Scatter/Gather blog says it all: one thing that is particularly discouraging is that most material currently available on "Creating Content for the iPad" or similar themes turns out to be about getting traditional content onto, or into, the iPad.

    Now, manuals for non-end users in PDF format on eReaders are a different matter. I have research on that, but it's for another post.


  • Form Validation Options

    The steps involved in transmitting form data from the client to the Web server:

    1. User loads the web form.
    2. User enters data into the web form fields.
    3. User clicks submit.
    4. On submit, the page validates the fields using JavaScript.
    5. If validation errors are found, the validation script stops the browser from posting the data to the web server and displays error messages as needed.
    6. If the form passes the data validation process, the browser URL-encodes the values of every field and posts them to the server.
    7. The server reads the posted data from the query string and then validates the data again, to ensure data consistency and to prevent any non-validated data (in case JavaScript was turned off in the client's browser) from being inserted into a database or passed on to another process.
    8. If the data passes the second validation check, the server-side code continues with the requested processes.

    In my opinion, it is mandatory to validate data using both client-side and server-side validation, as a fail-over process. Client-side validation allows users to correct any errors before the data is sent to the web server for processing, and it gives an immediate response back to the user regarding data that is not correct or not in the desired format. In addition, this prevents unnecessary interaction between the user and the web server and will, over time, free up the server compared to doing only server-side validation. Server validation is the last line of defense, because you can check that the user's data is correct before it is used in a business process or stored to a database. Honestly, I cannot foresee a scenario where I would want to use only one form of validation over the other, especially with the current cost of creating and maintaining data. In my opinion, the redundant validation is well worth the overhead.
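
    As a concrete illustration, here is a minimal sketch of the server-side half in ASP.NET Web Forms; the field name "email", the ErrorLabel control, and the validation rule are all invented for the example:

        // Server-side re-check of a field that was already validated by
        // client-side JavaScript; never trust the client-side pass alone.
        protected void Submit_Click(object sender, EventArgs e)
        {
            string email = Request.Form["email"] ?? "";

            bool valid = System.Text.RegularExpressions.Regex.IsMatch(
                email, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");

            if (!valid)
            {
                // Mirror the client-side error message so behavior is consistent.
                ErrorLabel.Text = "Please enter a valid e-mail address.";
                return; // stop before any database insert or business process
            }

            // Data passed the second validation check; continue with the
            // requested process (e.g. the database insert) here.
        }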


  • Mac OS needs Windows Live Writer - badly!

    - by digitaldias
    I recently bought a new MacBook Pro (the 13" one) to dive into a new world of programming challenges, as well as to get a more powerful netbook than my Packard Bell Dot, which I've been using since last year. I've had immense pleasure using the netbook format and its small size in meetings (taking notes with XMind), surfing "anywhere", and, of course, blogging with Windows Live Writer. So far the Mac is holding up; it's sleek and responsive, and I've even begun looking at coding in Objective-C with it. But in one arena it is severely lacking: blogging software. There is nothing that even comes close to Live Writer for getting your blog posts out. The few blogging applications that do exist on the Mac look and feel medieval in comparison, AND some even cost money! It looks like some Mac users actually install a virtual machine on their Mac to run Windows XP just so they can use WLW. I'm not that extreme; instead, I'm hoping that the WLW team will write its awesome application as a Silverlight 4 app. That way, it would run on Mac and Windows (as a desktop app). I wonder if it will ever happen though...

    PS: The image is of me. I took it with the built-in camera on the Mac and emailed it to the Windows PC that I am writing on :)


  • How to distribute python GTK applications?

    - by Nik
    This is in relation to the previous question I asked here. My aim is to create and package an application for easy installation on Ubuntu and other Debian distributions. I understand that the best way to do this is by creating a .deb file, with which users can easily install my application on their system. However, I would also like to make sure my application is available in multiple languages. This is why I raised the earlier question, which you can read here. In the answers that were provided, I was asked to use distutils for my packaging. I am, however, missing the bigger picture here. Why is there a need to include a setup.py file when I distribute my application in .deb format? My purpose is to ensure that users do not need to run python setup.py to install my application, but can instead just click on the .deb file. I already know how to create a .deb file from the excellent tutorial available here. It clearly shows how to edit the rules, the changelog, and everything else required to create a clean .deb file. You can look at my application's source code and folder structure on GitHub if it helps you better understand my situation. Please note I have glanced through the official Python documentation found here, but I am hoping to get an answer that would help even a layman understand, since my knowledge is pretty poor in this regard.

