Search Results

Search found 6266 results on 251 pages for 'skip lists'.


  • SQL Server 2008 Compression

    - by Peter Larsson
    Hi! Today I am going to talk about compression in SQL Server 2008. The data warehouse I currently design and develop holds historical data back to 1973. The data warehouse will get another blog post later due to its complexity. The server has 60GB of memory (of which 48GB is dedicated to the SQL Server service), so not all of the data fit in memory, and the SAN is not the fastest one around. So I decided to give compression a go, since we use Enterprise Edition anyway. This is the code I use to compress all tables with PAGE compression.

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curTables CURSOR FOR
            SELECT  'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                    + '.' + QUOTENAME(OBJECT_NAME(object_id))
                    + ' REBUILD PARTITION = ALL WITH (DATA_COMPRESSION = PAGE)'
            FROM    sys.tables

        OPEN    curTables
        FETCH   NEXT FROM curTables INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT FROM curTables INTO @SQL
            END

        CLOSE       curTables
        DEALLOCATE  curTables

    Copy and paste the result to a new code window and execute the statements. One thing I noticed when doing this is that the database grows by the same size as the table being rebuilt. If the database cannot grow by that amount, the operation fails. For me, I first ended up with orphaned connections. Not good. And this is the code I use to create the index compression statements:

        DECLARE @SQL VARCHAR(MAX)

        DECLARE curIndexes CURSOR FOR
            SELECT      'ALTER INDEX ' + QUOTENAME(name)
                        + ' ON '
                        + QUOTENAME(OBJECT_SCHEMA_NAME(object_id))
                        + '.'
                        + QUOTENAME(OBJECT_NAME(object_id))
                        + ' REBUILD PARTITION = ALL WITH (FILLFACTOR = 100, DATA_COMPRESSION = PAGE)'
            FROM        sys.indexes
            WHERE       OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                        AND OBJECTPROPERTY(object_id, 'IsTable') = 1
            ORDER BY    CASE type_desc
                            WHEN 'CLUSTERED' THEN 1
                            ELSE 2
                        END

        OPEN    curIndexes
        FETCH   NEXT FROM curIndexes INTO @SQL

        WHILE @@FETCH_STATUS = 0
            BEGIN
                IF @SQL IS NOT NULL
                    RAISERROR(@SQL, 10, 1) WITH NOWAIT

                FETCH   NEXT FROM curIndexes INTO @SQL
            END

        CLOSE       curIndexes
        DEALLOCATE  curIndexes

    When this was done, I noticed that the 90GB database was now only 17GB. And most important, the complete database could now reside in memory! After this I took care of the administrative tasks, backups. Here I copied the code from Management Studio because I didn't want to spend too much time on this. The code looks like this (notice the compression option).

        BACKUP DATABASE [Yoda]
        TO      DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    NOFORMAT,
                INIT,
                NAME = N'Yoda - Full Database Backup',
                SKIP,
                NOREWIND,
                NOUNLOAD,
                COMPRESSION,
                STATS = 10,
                CHECKSUM
        GO

        DECLARE @BackupSetID INT

        SELECT  @BackupSetID = Position
        FROM    msdb..backupset
        WHERE   database_name = N'Yoda'
                AND backup_set_id = (SELECT MAX(backup_set_id) FROM msdb..backupset WHERE database_name = N'Yoda')

        IF @BackupSetID IS NULL
            RAISERROR(N'Verify failed. Backup information for database ''Yoda'' not found.', 16, 1)

        RESTORE VERIFYONLY
        FROM    DISK = N'D:\Fileshare\Backup\Yoda.bak'
        WITH    FILE = @BackupSetID,
                NOUNLOAD,
                NOREWIND
        GO

    After running the backup, the file size was reduced even further thanks to the zip-like backup compression introduced in SQL Server 2008. The file size? Only 9GB. //Peso
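    For a rough idea of the savings before rebuilding everything, SQL Server 2008 also ships with sp_estimate_data_compression_savings. A minimal sketch (the table name dbo.FactSales is only an example, not from the post):

        -- Estimate PAGE compression savings for one table before rebuilding it
        EXEC sp_estimate_data_compression_savings
             @schema_name      = 'dbo',
             @object_name      = 'FactSales',
             @index_id         = NULL,   -- all indexes
             @partition_number = NULL,   -- all partitions
             @data_compression = 'PAGE'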

    Read the article

  • Add a non-Google Tasks List to Chrome

    - by Asian Angel
    Most people rely on a task list to help them remember what they need to do, but not everyone wants one that is tied to a Google account. If you have been wanting an independent task list, then join us as we look at the Tasks extension for Google Chrome. Tasks in Action As soon as you have finished installing the extension you are ready to start adding new tasks to your list. Enter your task into the “Text Area” and press “Enter” to add the task to the list. Note: Your task list will be retained (in the order you set) when you close and then reopen your browser. In just moments you can have your task list ready to go. Notice that there is also a “numerical indicator” attached to the “Toolbar Button” so that you will always know how many tasks you have left to complete. You can use the “drag and drop” function to rearrange your list into a more suitable order if needed. When you are finished with a task, all that you need to do is click on the “Checkmark” to remove it from the list. If you need to make a new entry similar to an existing one, simply right-click it and the text is automatically pasted into the “Text Area”. Make any desired changes and press “Enter” to add your new task to the list. Prefer to skip using the drop-down window? Click on “Tasks” at the top to open your list in a new tab instead. The task list looked very nice in our new tab. Being able to use the style that best suits your needs makes this a very convenient extension. Conclusion The Tasks extension is a perfect fit for anyone who needs a task list available but does not want to be tied down to an online account. Quick, simple, and best of all, hassle free. Links Download the Tasks extension (Google Chrome Extensions)

    Read the article

  • Visual Studio 2010 and Target Framework Version

    - by Scott Dorman
    Almost two years ago, I wrote about a Visual Studio macro that allows you to change the Target Framework version of all projects in a solution. If you don’t know, the Target Framework version is what tells the compiler which version of the .NET Framework to compile against (more information is available here) and can be set to one of the following values: .NET Framework 2.0 .NET Framework 3.0 .NET Framework 3.5 .NET Framework 3.5 Client Profile .NET Framework 4.0 .NET Framework 4.0 Client Profile This can be easily accomplished by editing the project properties: The problem with this approach is that if you need to change a lot of projects at one time it becomes rather unwieldy. One possible solution is to edit the project files by hand in a text editor and change the <TargetFrameworkVersion /> and <TargetFrameworkProfile /> properties to the correct values. For example, for the .NET Framework 4.0 Client Profile, these values would be: <TargetFrameworkVersion>v4.0</TargetFrameworkVersion> <TargetFrameworkProfile>Client</TargetFrameworkProfile> Again, this is not only time consuming but can also be error-prone. The better solution is to automate this through the use of a Visual Studio macro. Since I had already created a macro to do this for Visual Studio 2008, I updated that macro to work with Visual Studio 2010 and .NET 4.0. It prompts you for the target framework version you want to set for all of the projects and then loops through each project in the solution and makes the change. If you select one of the Framework versions that support a Client Profile, it will ask if you want to use the Client Profile or the Full Profile. It is smart enough to skip project types that don’t support this property and projects that are already at the correct version. This version also incorporates the changes suggested by George (in the comments). The macro is available on my SkyDrive account. Download it to your <UserProfile>\Documents\Visual Studio 2010\Projects\VSMacros80\MyMacros folder, open the Visual Studio Macro IDE (Alt-F11) and add it as an existing item to the “MyMacros” project. I make no guarantees or warranties on this macro. I have tested it on several solutions and projects and everything seems to work and not cause any problems, but, as always, use with caution. Since it is a macro, you have the full source code available to investigate and see what it’s actually doing. If you find any bugs or make any useful changes, please let me know and I’ll update the macro. Technorati Tags: Macros,Visual Studio
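    If you prefer not to use a macro, the same edit can also be scripted outside Visual Studio. A rough C# sketch (not the macro itself; the folder path, version, and profile values are assumptions you would adjust) that rewrites the two properties in every .csproj under a folder:

        using System;
        using System.IO;
        using System.Xml.Linq;

        class RetargetProjects
        {
            static void Main()
            {
                // Assumed values - adjust to your solution folder and desired target.
                var root = @"C:\Source\MySolution";
                var version = "v4.0";
                var profile = "Client";          // use "" for the full profile
                XNamespace msb = "http://schemas.microsoft.com/developer/msbuild/2003";

                foreach (var path in Directory.GetFiles(root, "*.csproj", SearchOption.AllDirectories))
                {
                    var doc = XDocument.Load(path);

                    // The framework properties normally live in the first PropertyGroup.
                    var group = doc.Root.Element(msb + "PropertyGroup");
                    if (group == null) continue;

                    group.SetElementValue(msb + "TargetFrameworkVersion", version);
                    if (!string.IsNullOrEmpty(profile))
                        group.SetElementValue(msb + "TargetFrameworkProfile", profile);

                    doc.Save(path);
                    Console.WriteLine("Updated " + path);
                }
            }
        }

    Unlike the macro, this sketch does not skip project types that ignore these properties, so treat it as a starting point rather than a drop-in replacement.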

    Read the article

  • Media keys play/pause globally worked in 12.10, not in 13.10

    - by Stéphane Gourichon
    Laptop media keys On the Asus N55SF laptop, there are dedicated keys for volume up, volume down, mute, [play/pause], stop, and launch (plus a dozen Fn-key combinations). In 12.10 most worked. (Overall it seems unrelated to the desktop environment used; stating it for the sake of completeness.) On Ubuntu 12.10 under XFCE they just worked. That is: when a player like Rhythmbox or Totem was started, it would alternate between play and pause. Interestingly, if several were started, they would alternate independently. E.g. use the mouse to pause Rhythmbox, launch Totem, and one hit on the [play/pause] key would pause one and resume the other. The Next, Previous and Stop keys worked as expected in any program. In 13.10 most still work, but the play/skip-related keys are ignored. On Xubuntu 13.10 (XFCE too) the volume keys work but [play/pause], stop, next and prev are ignored. Not tried on regular Ubuntu 13.10 (Unity). Search before you ask Here are a few facts: https://wiki.ubuntu.com/Hotkeys/Architecture is immutable and mentions Ubuntu 9.10. https://wiki.ubuntu.com/Hotkeys/Troubleshooting is also outdated as it mentions /usr/share/doc/udev/README.keymap.txt which no longer exists. On 12.10 and 13.10, at the XFCE level (as visible by xfconf-query or using xfce4-settings-manager) there are a couple of shortcuts for keys like XF86Calculator or XF86TouchpadToggle but nothing related to volume/prev/next/play/stop, which is okay. The XF86Audio substring doesn't appear in /etc (which is normal). Kernel-level test: "showkey -s" on the console shows that the keys Next, Play/Pause, Previous, Stop are keycodes 163, 164, 165, 166. Nothing relevant in /etc about that. Reports https://bugs.launchpad.net/ubuntu/+source/udev/+bug/1072371 and https://bugs.launchpad.net/ubuntu/+source/systemd/+bug/1012365 suggest adjusting things at the udev level. Alas, the udev tutorials I found (e.g. https://wiki.debian.org/udev) don't even mention the keyboard. A thread in French seems to deal with a similar issue: https://forum.ubuntu-fr.org/viewtopic.php?id=1395051. sudo evtest /dev/input/event3, in X as well as on the plain console, reports events on key presses and repeats, but nothing when pressing those media keys. Is udev a dead end? Questions How did it work in 12.10? Through udev? Something else? Any other hint?

    Read the article

  • Advice on designing a robust program to handle a large library of meta-information & programs

    - by Sam Bryant
    So this might be overly vague, but here it is anyway. I'm not really looking for a specific answer, but rather general design principles or direction towards resources that deal with problems like this. It's one of my first large-scale applications, and I would like to do it right. Brief Explanation My basic problem is that I have to write an application that handles a large library of meta-data, can easily modify the meta-data on the fly, is robust with respect to crashing, and is very efficient. (Sorta like the design parameters of iTunes, although sometimes iTunes performs more poorly than I would like.) If you don't want to read the details, you can skip the rest. Long Explanation Specifically, I am writing a program that creates a library of image files and meta-data about these files. There is a list of tags that may or may not apply to each image. The program needs to be able to add new images, add new tags, assign tags to images, and detect duplicate images, all while operating. The program contains an image Viewer which has tagging operations. The idea is that if a given image A is viewed while the library has tags T1, T2, and T3, then that image will have boolean flags for each of those tags (depending on whether the user tagged that image while it was open in the Viewer). However, prior to being viewed in the Viewer, image A would have no value for tags T1, T2, and T3. Instead it would have a "dirty" flag indicating that it is unknown whether or not A has these tags. The program can introduce new tags at any time (which would automatically set all images to "dirty" with respect to the new tag). This program must be fast. It must easily be able to pull up a list of images with or without a certain tag, as well as images which are "dirty" with respect to a tag. It has to be crash-safe, in that if it suddenly crashes, all of the tagging information done in that session is not lost (though perhaps it's okay to lose some of it). Finally, it has to work with a lot of images (10,000). I am a fairly experienced programmer, but I have never tried to write a program with such demanding needs and I have never worked with databases. With respect to the meta-data storage, there seem to be a few design choices: Choice 1: Individual meta-data vs centralized meta-data Individual Meta-Data: have a separate meta-data file for each image. This way, as soon as you change the meta-data for an image, it can be written to the hard disk without having to rewrite the information for all of the other images. Centralized Meta-Data: Have a single file to hold the meta-data for every file. This would probably require meta-data writes in intervals as opposed to after every change. The benefit here is that you could keep a centralized list of all images with a given tag, etc., making the task of pulling up all images with a given tag very efficient.
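    One way to picture the per-tag "dirty" state described above is a small tri-state structure on each image record. A minimal C# sketch of that idea (the class and member names are hypothetical, not from the question):

        using System.Collections.Generic;

        public enum TagState { Unknown, Present, Absent }   // Unknown == "dirty"

        public class ImageRecord
        {
            public string Path { get; set; }

            // A tag with no entry yet is implicitly Unknown ("dirty").
            private readonly Dictionary<string, TagState> tags = new Dictionary<string, TagState>();

            public TagState GetTag(string tagName)
            {
                TagState state;
                return tags.TryGetValue(tagName, out state) ? state : TagState.Unknown;
            }

            // Called when the user tags (or untags) the image in the Viewer.
            public void SetTag(string tagName, bool present)
            {
                tags[tagName] = present ? TagState.Present : TagState.Absent;
            }
        }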

    Read the article

  • Android, how important is deltaTime?

    - by iQue
    I'm making a game that is getting pretty big and sometimes my thread has to skip a frame. So far I'm not using deltaTime for setting the speed of my different objects in the game, because it's still not a big enough game for it to matter, imo. But it's getting bigger than I planned, so my question is: how important is deltaTime? If I should use deltaTime there is a problem, since speedX and speedY are integers (they have to be for Eclipse to let you make a Rect out of them), so I can't apply deltaTime very cleanly as far as I understand, but I might be wrong. I've tried adding deltaTime to the code below, and sometimes my enemies just don't move after spawning; they just stand there and run in place. Here is some code for how I set / use speed:

        public void update(int dx, int dy) {
            double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y), controls.pointerPosition.x - x);
            x += dx * Math.cos(Math.toRadians(theta));
            y += dy * Math.sin(Math.toRadians(theta));
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }

        public void draw(Canvas canvas) {
            int srcX = currentFrame * width;
            int srcY = 1 * height;
            Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
            Rect dst = new Rect(x, y, x + width, y + height);
            canvas.drawBitmap(bitmap, src, dst, null);
        }

    So if someone with some experience with this has any thoughts, please share. Thank you! Changed code:

        public void update(int dx, int dy, float delta) {
            double theta = 180.0 / Math.PI * Math.atan2(-(y - controls.pointerPosition.y), controls.pointerPosition.x - x);
            double speedX = delta * dx * Math.cos(Math.toRadians(theta));
            double speedY = delta * dy * Math.sin(Math.toRadians(theta));
            x += speedX;
            y += speedY;
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }

        public void draw(Canvas canvas) {
            int srcX = currentFrame * width;
            int srcY = 1 * height;
            Rect src = new Rect(srcX, srcY, srcX + width, srcY + height);
            Rect dst = new Rect(x, y, x + width, y + height);
            canvas.drawBitmap(bitmap, src, dst, null);
        }

    With this code my enemies move like before, except they won't move to the right (won't increment x); all other directions work.
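    A common fix for the integer problem, offered here only as a hedged sketch against the code above: keep the position in doubles and round to int only where the Rect needs it, so the small per-frame movements (delta * dx is often below 1 and truncates to 0 when added to an int, which is one reason x can stop increasing) are not lost. This fragment assumes it replaces the posted update(), with posX/posY as new fields initialised from x/y; it also drops the degrees round-trip, since Math.atan2 already returns radians:

        // New fields: the "real" position, kept as doubles.
        private double posX, posY;

        public void update(int dx, int dy, float delta) {
            double theta = Math.atan2(-(posY - controls.pointerPosition.y), controls.pointerPosition.x - posX);
            posX += delta * dx * Math.cos(theta);   // fractional movement accumulates instead of truncating
            posY += delta * dy * Math.sin(theta);
            x = (int) Math.round(posX);             // ints only where Rect needs them
            y = (int) Math.round(posY);
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }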

    Read the article

  • Wordpress Installation (on IIS and SQL Server)

    - by Davide Mauri
    To proceed with the installation of Wordpress on SQL Server and IIS, first of all you need to do the following steps: Create a database on SQL Server that will be used by Wordpress. Create a login that can access the newly created database and put the user into the ddladmin, db_datareader and db_datawriter roles. Download and unpack the Wordpress 3.3.2 (latest version as of 27 May 2012) zip file into a directory of your choice. Download the wp-db-abstraction 1.1.4 (latest version as of 27 May 2012) plugin from the wordpress.org website. Now that the basic actions have been done, you can start to set up and configure your Wordpress installation. Unpack and follow the instructions in the README.TXT file to install the Database Abstraction Layer. Mainly you have to: Upload wp-db-abstraction.php and the wp-db-abstraction directory to wp-content/mu-plugins. This should be parallel to your regular plugins directory. If the mu-plugins directory does not exist, you must create it. Put the db.php file from inside the wp-db-abstraction.php directory to wp-content/db.php. Now you can create an application pool in IIS like the following one. Create a website, using the above Application Pool, that points to the folder where you unpacked the Wordpress files. Be sure to give the “Write” permission to the IIS account, as pointed out in this (old, but still quite valid) installation manual: http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#iis Now you’re ready to go. Point your browser to the configured website and the Wordpress installation screen will be there for you. When you’re requested to enter information to connect to the MySQL database, simply skip that page, leaving the default values. If you have installed the Database Abstraction Layer, another database installation screen will appear after the MySQL one, and here you can enter the configuration information needed to connect to SQL Server. After having finished the installation steps, you should be able to access and navigate your Wordpress site. A final touch, and it’s done: just add the needed rewrite rules http://wordpress.visitmix.com/development/installing-wordpress-on-sql-server#urlrewrite and that’s it! Well. Not really. Unfortunately the current (as of 27 May 2012) version of the Database Abstraction Layer (1.1.4) has some bugs. Luckily they can be quickly fixed: Backslash Fix http://wordpress.org/support/topic/plugin-wp-db-abstraction-fix-problems-with-backslash-usage Select Top 0 Fix: make the change to the file “.\wp-content\mu-plugins\wp-db-abstraction\translations\sqlsrv\translations.php” suggested by “debettap” http://sourceforge.net/tracker/?func=detail&aid=3485384&group_id=315685&atid=1328061 And now you have a 100% working Wordpress installation on SQL Server! Since I also wanted to take advantage of SQL Server Full Text Search, I’ve created a very simple Wordpress plugin to set up full-text search and to use it as the website search engine: http://wpfts.codeplex.com/ Enjoy!
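    For reference, the rewrite rule the linked article adds (via the IIS URL Rewrite module) looks roughly like the sketch below; treat it as an illustration and compare it with the article rather than copying it blindly:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="WordPress permalinks" patternSyntax="Wildcard">
                <match url="*" />
                <conditions>
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Rewrite" url="index.php" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>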

    Read the article

  • How to Use Your Android Phone as a Modem; No Rooting Required

    - by Jason Fitzpatrick
    If your cellular provider’s mobile hotspot/tethering plans are too pricey, skip them and tether your phone to your computer without inflating your monthly bill. Read on to see how you can score free mobile internet. We recently received a letter from a How-To Geek reader, requesting help linking their Android phone to their laptop to avoid the highway robbery their cellular provider was insisting upon: Dear How-To Geek, I recently found out that my cellphone company charges $30 a month to use your smartphone as a data modem. That’s an outrageous price when I already pay an extra $15 a month charge just because they insist that because I have a smartphone I need a data plan, because I’ll be using so much more data than other users. They expect me to pay what amounts to a $45 a month premium just to do some occasional surfing and email checking from the comfort of my laptop instead of the much smaller smartphone screen! Surely there is a workaround? I’m running Windows 7 on my laptop and I have an Android phone running Android OS 2.2. Help! Sincerely, No Double Dipping! Well Double Dipping, this is a sentiment we can strongly relate to, as many of us on staff are in a similar situation. It’s absurd that so many companies charge you to use the data connection on the phone you’re already paying for. There is no difference in bandwidth usage if you stream Pandora to your phone or to your laptop, for example. Fortunately we have a solution for you. It’s not free-as-in-beer, but it only costs $16 which, over the first year of use alone, will save you $344. Let’s get started!

    Read the article

  • The Diabolical Developer: What You Need to Do to Become Awesome

    - by Tori Wieldt
    Wearing sunglasses and quite possibly hungover, Martijn Verburg's evil persona provided key tips on how to be a Diabolical Developer. His presentation at TheServerSide Java Symposium was heavy on the sarcasm and provided lots of laughter. Martijn insisted that developers take their power back and get rid of all the "modern fluff" that distracts developers. He provided several key tips to become a Diabolical Developer:
    * Learn only from yourself. Don't read blogs or books, and don't attend conferences. If you must go on forums, only do it to display your superiority, and answer as obscurely as possible.
    * Work alone. The best coding happens when you are alone in your room; lock yourself in for days. Make sure you have a gaming machine in with you.
    * Keep information to yourself. Knowledge is power. Think job security. Never provide documentation.
    * Make sure only you can read your code. Don't put comments in your code. Name your variables A, B, C... A1, B1, etc. If someone insists you format your code in a standard way, change a small section and revert it back as soon as they walk away from your screen.
    * Stick to what you know. Stay on Java 1.3. Don't bother learning abstractions. Write your application in a single file. Stuff as much code into one class as possible; a 30,000-line class is fine. It makes it easier for you to read and maintain.
    * Use Real Tools. No "fancy-pancy" IDEs. Real developers only use vi.
    * Ignore Fads. The cloud is massively overhyped. Mobile is a big fad for young kids. The big, clunky desktop computer (with a real keyboard) will return. Learn new stuff only to pad your resume. Ajax is great for that.
    * Skip Testing. Test-driven development is a complete waste of time. They sent men to the moon without unit tests. Just write your code properly in the first place and you don't need tests.
    * Compiled = Ship It. User acceptance testing is an absolute waste of time.
    * Use a Single Thread. Don't use multithreading. All you need to do is throw more hardware at the problem.
    * Don't waste time on SEO. If you've written the contract correctly, you are paid for writing code, not attracting users. You don't want a lot of users; they only report problems.
    * Avoid meetings. Fake being sick to avoid meetings. If you are forced into a meeting, play corporate bingo. Once you stand up and shout "bingo" you will be kicked out of the meeting. Job done.
    Follow these tips and you'll be well on your way to being a Diabolical Developer!

    Read the article

  • ASP.NET MVC 3 Hosting :: Deploying ASP.NET MVC 3 web application to server where ASP.NET MVC 3 is not installed

    - by mbridge
    You can build a sample application on ASP.NET MVC 3 first, for deploying it to your hosting. To try it out, first put it on a web server where ASP.NET MVC 3 is installed. In this posting I will tell you what files you need and where you can find them. Here are the files you need to upload to get the application running on a server where ASP.NET MVC 3 is not installed. You can also deploy an ASP.NET MVC 3 web application to a server where ASP.NET MVC 3 is not installed like this example: you can change the reference to System.Web.Helpers.dll to be the local one so it is copied to the bin folder of your application. The first file in this list is my web application dll and you don't need it to get ASP.NET MVC 3 running. All other files are located in the following folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\ If there are more files needed in some other scenarios then please leave me a comment here. And… don't forget to convert the folder in IIS to an application. While developing an application locally, this isn't a problem. But when you are ready to deploy your application to a hosting provider, this might well be a problem if the hoster does not have the ASP.NET MVC assemblies installed in the GAC. Fortunately, ASP.NET MVC is still bin-deployable. If your hosting provider has ASP.NET 3.5 SP1 installed, then you'll only need to include the MVC DLL. If your hosting provider is still on ASP.NET 3.5, then you'll need to deploy all three. It turns out that it's really easy to do so. Also, ASP.NET MVC runs in Medium Trust, so it should work with most hosting providers' Medium Trust policies. It's always possible that a hosting provider customizes their Medium Trust policy to be draconian. Deployment is easy when you know what to copy into the archive for publishing your web site on ASP.NET MVC 3 or later versions. What I like to do is use the Publish feature of Visual Studio to publish to a local directory and then upload the files to my hosting provider. If your hosting provider supports FTP, you can often skip this intermediate step and publish directly to the FTP site. The first thing I do in preparation is to go to my MVC web application project and expand the References node in the project tree. Select the aforementioned three assemblies and in the Properties dialog, set Copy Local to True. Now just right-click on your application and select Publish. This brings up the Publish wizard. Notice that in this example, I selected a local directory. When I hit Publish, all the files needed to deploy my app are available in the directory I chose, including the assemblies that were in the GAC. Another ASP.NET MVC 3 article: - New Features in ASP.NET MVC 3 - ASP.NET MVC 3 First Look
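    For reference, setting Copy Local to True in the IDE simply adds a <Private> element to the reference in the project file, so the same change can also be made by hand. A rough sketch of what that looks like in a .csproj (the exact Include string in your project may also carry version and key information):

        <ItemGroup>
          <!-- Repeat for each assembly you need to bin-deploy
               (e.g. System.Web.Helpers, System.Web.WebPages, ...) -->
          <Reference Include="System.Web.Mvc">
            <Private>True</Private>  <!-- equivalent of Copy Local = True -->
          </Reference>
        </ItemGroup>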

    Read the article

  • Video stutter when using external drive

    - by psion
    When using boxee to play video files off of an external western digital 1TB drive formatted NTFS, I notice a slight stutter in the video every 5-10 seconds. When using mplayer, it doesn't stutter as often, but it still stutters occasionally. If I play the video off of the local sata drive, it plays fine even in boxee. I use this computer as my HTPC and I just switched from windows to linux on it. In windows, I never had any sort of stutter playing movies from the drive. I am using the latest intel graphics drivers (for the intel GMA 950) root@eee-htpc:/home/htpc# grep wd /etc/mtab /dev/sdb1 /mnt/wd2 fuseblk rw,nosuid,nodev,allow_other,blksize=512 0 0 I notice that despite trying to use ntfs or ntfs-3g, ubuntu uses ntfs-fuse which I've heard is slower. /dev/sdb1: Timing buffered disk reads: 80 MB in 3.07 seconds = 26.08 MB/sec root@eee-htpc:/mnt/wd2# dd if=/dev/zero of=./120mb bs=1024 count=120000 root@eee-htpc:/mnt/wd2# time mv ./120mb /home/htpc real 0m2.095s user 0m0.016s sys 0m0.736s Even though fuse has a reputation for being slow, it should easily be fast enough for playing standard definition video files. So why the video stutter? edit: The issue seems to be overhead cpu usage from either playing off of a usb device or ntfs/fuse. Watching CPU usage with top, local files use 10-40% CPU. Watching the same video on the external formatted ntfs, it spikes to 170% (over 100% because of hyperthreading). To me it seems like it must be overhead from the fuse driver, though I don't know if it has more or less overhead than ntfs-3g. It's a EEEBox B202 that has an atom 270, so not exactly the most powerful out there. edit2: I believe the solution would be to use non-fuse drivers or different fuse drivers. so far I have not been able to. edit3: I've probably edited this more times than I should, but as an update I have upgraded ntfs drivers to ntfs-3g 2010.8.8 external FUSE 28 - Third Generation NTFS Driver using the following PPA - ppa:x3lectric/team-iquik-releases. When first opening a video file in boxee that's on ntfs there's still the same amount of lag. After a few minutes of video, the lag seems to go away and the cpu usage comes down to 10-40%. Every so often though, it begins to stutter again. Also, if I skip ahead/back in the file, it begins to stutter a lot.

    Read the article

  • How to load stacking chunks on the fly?

    - by Brettetete
    I'm currently working on an infinite world, mostly inspired by Minecraft. A Chunk consists of 16x16x16 blocks. A block (cube) is 1x1x1. This runs very smoothly with a ViewRange of 12 Chunks (12x16) on my computer. Fine. When I change the Chunk height to 256 this becomes - obviously - incredibly laggy. So what I basically want to do is stack chunks. That means my world could be [8,16,8] Chunks large. The question is now: how do I generate chunks on the fly? At the moment I generate not-yet-existing chunks in a circle around my position (near to far). Since I don't stack chunks yet, this is not very complex. As an important side note here: I also want to have biomes, with different min/max heights. So in the biome Flatlands the highest layer with blocks would be 8 (8x16) - in the biome Mountains the highest layer with blocks would be 14 (14x16). Just as an example. What I could do would be loading 1 Chunk above and below me, for example. But here the problem would be that transitions between different biomes could be larger than one chunk on y. My current chunk loading in action For completeness, here is my current chunk loading "algorithm":

        private IEnumerator UpdateChunks() {
            for (int i = 1; i < VIEW_RANGE; i += ChunkWidth) {
                float vr = i;

                for (float x = transform.position.x - vr; x < transform.position.x + vr; x += ChunkWidth) {
                    for (float z = transform.position.z - vr; z < transform.position.z + vr; z += ChunkWidth) {
                        _pos.Set(x, 0, z); // no y, yet
                        _pos.x = Mathf.Floor(_pos.x / ChunkWidth) * ChunkWidth;
                        _pos.z = Mathf.Floor(_pos.z / ChunkWidth) * ChunkWidth;

                        Chunk chunk = Chunk.FindChunk(_pos);

                        // If Chunk is already created, continue
                        if (chunk != null)
                            continue;

                        // Create a new Chunk..
                        chunk = (Chunk) Instantiate(ChunkFab, _pos, Quaternion.identity);
                    }
                }

                // Skip to next frame
                yield return 0;
            }
        }
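    One way to extend that loop to stacked chunks, offered as a rough sketch only: add a vertical loop per column, bounded by the biome's minimum and maximum layer at that x/z position. BiomeMinLayer, BiomeMaxLayer and ChunkHeight are assumed helpers/fields here, not part of the code above:

        // Inside the x/z loops, instead of a single chunk at y = 0:
        for (float y = BiomeMinLayer(x, z) * ChunkHeight; y <= BiomeMaxLayer(x, z) * ChunkHeight; y += ChunkHeight) {
            _pos.Set(x, y, z);
            _pos.x = Mathf.Floor(_pos.x / ChunkWidth) * ChunkWidth;
            _pos.y = Mathf.Floor(_pos.y / ChunkHeight) * ChunkHeight;
            _pos.z = Mathf.Floor(_pos.z / ChunkWidth) * ChunkWidth;

            if (Chunk.FindChunk(_pos) != null)
                continue;

            Instantiate(ChunkFab, _pos, Quaternion.identity);
        }

    Loading only the chunk above and below the player first, then filling the rest of each column over later frames, keeps the per-frame cost close to the current 2D version even across biome transitions.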

    Read the article

  • Upgrading Code from 2007 to 2010

    - by MOSSLover
    So I’ve been doing some upgrades just to see if things will work from 2007 to 2010.  So far most of the stuff I want works, but obviously there are some things that break.  Did you guys know that in 2007 you could add a webpart to the view pages for lists and libraries without losing the toolbar?  In 2010 the ribbon disappears every time you add a webpart.  So if you are using Scot Hillier’s Codeplex project to hide buttons it will not work the same way, because the ribbon is going to disappear altogether. I have also learned another reason why standalone installations are the bane of my existence.  Nine times out of ten the installation is done using Network Service as the application pool account.  You are wondering why is this bad?  Well, let’s just say the site collection administrator with local admin rights wants to attach the IIS Worker process and debug say a webpart.  Visual Studio 2010 will throw a nasty error that tells you that you are not an administrator.  You will say, but I am an administrator?  I have all the correct group permissions on the server and on SQL and in SharePoint.  Then you will go in and decide let’s add my own admin account just to see if I can attach the debugger and you will notice that works properly.  So the morale of the story is create a separate account on your development environment to run all the SharePoint Services and such.  You don’t need to go all out and create the best practices amount of accounts if it’s just your dev environment.  I would at least create one single account to run all your SharePoint process (Services, SQL, and App Pool).  Also, don’t run a standalone install unless you want to kill kittens (this is a quote from Todd Klindt).  We love kittens they are cute and awesome.  Besides you learn more if you click Complete and just skip standalone.  You will learn how to setup SQL Server 2008 and you will learn how to configure your environment.  It will help you in the long run.  So I have ranted enough for today I figure these are enough tidbits for you this time around.  The two of you who read my blog and I know some of you are friends who don’t understand SharePoint.  I might as well have just done “wahwahwahwah” in Charlie Brown adult speak.  Thanks for reading as usual.  I’ll catch you all when I complain more about the upgrade process and share more tidbits, which will inevitably become a presentation at a conference or two. Technorati Tags: Upgrade Code SharePoint 2007 to 2010,Visuaul Studio 2010,SharePoint 2010

    Read the article

  • Reinventing the Paged IEnumerable, Weigert Style!

    - by adweigert
    I am pretty sure someone else has done this (I've seen variations such as PagedList<T>), but this is my style of a paged IEnumerable collection. I just store a reference to the collection and generate the paged data when the enumerator is needed, so you could technically add to a list that I'm referencing and the properties and results would be adjusted accordingly. I don't mind reinventing the wheel when I can add some of my own personal flair ...

        // Extension method for easy use
        public static PagedEnumerable<T> AsPaged<T>(this IEnumerable<T> collection, int currentPage = 1, int pageSize = 0)
        {
            Contract.Requires(collection != null);
            Contract.Assume(currentPage >= 1);
            Contract.Assume(pageSize >= 0);

            return new PagedEnumerable<T>(collection, currentPage, pageSize);
        }

        public class PagedEnumerable<T> : IEnumerable<T>
        {
            public PagedEnumerable(IEnumerable<T> collection, int currentPage = 1, int pageSize = 0)
            {
                Contract.Requires(collection != null);
                Contract.Assume(currentPage >= 1);
                Contract.Assume(pageSize >= 0);

                this.collection = collection;
                this.PageSize = pageSize;
                this.CurrentPage = currentPage;
            }

            IEnumerable<T> collection;

            int currentPage;
            public int CurrentPage
            {
                get
                {
                    if (this.currentPage > this.TotalPages)
                    {
                        return this.TotalPages;
                    }

                    return this.currentPage;
                }
                set
                {
                    if (value < 1)
                    {
                        this.currentPage = 1;
                    }
                    else if (value > this.TotalPages)
                    {
                        this.currentPage = this.TotalPages;
                    }
                    else
                    {
                        this.currentPage = value;
                    }
                }
            }

            int pageSize;
            public int PageSize
            {
                get
                {
                    if (this.pageSize == 0)
                    {
                        return this.collection.Count();
                    }

                    return this.pageSize;
                }
                set
                {
                    this.pageSize = (value < 0) ? 0 : value;
                }
            }

            public int TotalPages
            {
                get
                {
                    return (int)Math.Ceiling(this.collection.Count() / (double)this.PageSize);
                }
            }

            public IEnumerator<T> GetEnumerator()
            {
                var pageSize = this.PageSize;
                var currentPage = this.CurrentPage;
                var startCount = (currentPage - 1) * pageSize;

                return this.collection.Skip(startCount).Take(pageSize).GetEnumerator();
            }

            IEnumerator IEnumerable.GetEnumerator()
            {
                return this.GetEnumerator();
            }
        }
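    A quick usage sketch (assuming the extension method lives in a static class that is in scope, and System.Linq is imported):

        var numbers = Enumerable.Range(1, 100).ToList();

        // Third page of ten items: 21..30
        var page = numbers.AsPaged(currentPage: 3, pageSize: 10);

        foreach (var n in page)
        {
            Console.WriteLine(n);
        }

        Console.WriteLine("Total pages: " + page.TotalPages);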

    Read the article

  • Informed TDD – Kata “To Roman Numerals”

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspxIn a comment on my article on what I call Informed TDD (ITDD) reader gustav asked how this approach would apply to the kata “To Roman Numerals”. And whether ITDD wasn´t a violation of TDD´s principle of leaving out “advanced topics like mocks”. I like to respond with this article to his questions. There´s more to say than fits into a commentary. Mocks and TDD I don´t see in how far TDD is avoiding or opposed to mocks. TDD and mocks are orthogonal. TDD is about pocess, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires “expensive” resource access you can´t avoid using mocks. Because you don´t want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That´s not what you usually see in TDD demonstrations. However, there´s a reason for that as I tried to explain. I don´t use mocks as proxies for “expensive” resource. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as “advanced” or if you don´t want to use a tool like JustMock, then you don´t need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata. ITDD for “To Roman Numerals” gustav asked for the kata “To Roman Numerals”. I won´t explain the requirements again. You can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is, how I would do this kata differently. 1. Analyse A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. “Formalization” in this case to me means describing the API of the required functionality. “[D]esign a program to work with Roman numerals” like written in this “requirement document” is not enough to start software development. Coding should only begin, if the interface between the “system under development” and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer – even if that´s you fellow developer. Designing the interface is a task of it´s own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that´s a violation of the Single Responsibility Principle (SRP) which not only should hold for software functional units but also for tasks or activities. In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but no specific test cases from the point of view of a customer. So I just “invent” some acceptance test cases by picking roman numerals from a wikipedia article. 
They are supposed to be just “typical examples” without special meaning. Given the acceptance test cases I then try to develop an understanding of the problem domain. I´ll spare you that. The domain is trivial and is explain in almost all kata descriptions. How roman numerals are built is not difficult to understand. What´s more difficult, though, might be to find an efficient solution to convert into them automatically. 2. Solve The usual TDD demonstration skips a solution finding phase. Like the interface exploration it´s mixed in with the implementation. But I don´t think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They´re simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code you better have a plan what to code. This does not mean you have to do “Big Design Up-Front”. It just means: Have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that´s limited your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand. Because it should mirror the process described by the logical or conceptual solution. Here´s my solution approach: The arabic “encoding” of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The “encoding” 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman “encoding” is different. There is no base (like 10 for arabic numbers), there are just digits of different value, and they have to be written in descending order. The “encoding” XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman “encoding” thus is simpler than the arabic. Each “digit” can be taken at face value. No multiplication with a base required. But what about IV which looks like a contradiction to the above rule? It is not – if you accept roman “digits” not to be limited to be single characters only. Usually I, V, X, L, C, D, M are viewed as “digits”, and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as “digits”. Then MCMLIV is just a sum: M+CM+L+IV which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V – which is more difficult because here some “digits” get subtracted. Here´s the list of roman “digits” with their values: {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M} Since I take IV, IX etc. as “digits” translating an arabic number becomes trivial. I just need to find the values of the roman “digits” making up the number, e.g. 1954 is made up of 1000, 900, 50, and 4. I call those “digits” factors. If I move from the highest factor (M=1000) to the lowest (I=1) then translation is a two phase process: Find all the factors Translate the factors found Compile the roman representation Translation is just a look-up. 
Finding, though, needs some calculation: Find the highest remaining factor fitting in the value Remember and subtract it from the value Repeat with remaining value and remaining factors Please note: This is just an algorithm. It´s not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test: Test the translation Test the compilation Test finding the factors Testing the translation primarily means to check if the map of factors and digits is comprehensive. That´s simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check, if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check, if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check, if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected I can slow down and repeat this process. 3. Implement First I write a test for the acceptance test cases. It´s red because there´s no implementation even of the API. That´s in conformance with “TDD lore”, I´d say: Next I implement the API: The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in. Because my goal is not to most quickly satisfy these tests, but to implement my solution in a stepwise manner. That I do by “faking” it: I just “assume” three functions to represent the transformation process of my solution: My hypothesis is that those three functions in conjunction produce correct results on the API-level. I just have to implement them correctly. That´s what I´m trying now – one by one. I start with a simple “detail function”: Translate(). And I start with all the test cases in the obvious equivalence partition: As you can see I dare to test a private method. Yes. That´s a white box test. But as you´ll see it won´t make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here´s the implementation to satisfy the test: It´s as simple as possible. Right how TDD wants me to do it: KISS. Now for the second equivalence partition: translating multiple factors. (It´a pattern: if you need to do something repeatedly separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science: Usually I would have implemented the final code right away. Splitting it in two steps is just for “educational purposes” here. How small your implementation steps are is a matter of your programming competency. Some “see” the final code right away before their mental eye – others need to work their way towards it. Having two tests I find more important. Now for the next low hanging fruit: compilation. It´s even simpler than translation. A single test is enough, I guess. 
And normally I would not even have bothered to write that one, because the implementation is so simple. I don´t need to test .NET framework functionality. But again: if it serves the educational purpose… Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions: Again, I´m faking the implementation first: I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That´s left for a drill down with a test of the fake function: There are two main equivalence partitions, I guess: either the first factor is appropriate or some next. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there´s always a matching factor. Which is the case since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied: Great, I can move on. Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must have done not the simplest thing possible. And I would reply: I don´t care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion of which I had thought briefly during the problem solving phase. And by the way: Also the acceptance tests went green: Mission accomplished. At least functionality wise. Now I´ve to tidy up things a bit. TDD calls for refactoring. Not uch refactoring is needed, because I wrote the code in top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren´t perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That´s why I rather call it “cleanup”. First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(): And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single “digit” tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman “digits”. This then is my final test class: And this is the final production code: Test coverage as reported by NCrunch is 100%: Reflexion Is this the smallest possible code base for this kata? Sure not. You´ll find more concise solutions on the internet. But LOC are of relatively little concern – as long as I can understand the code quickly. So called “elegant” code, however, often is not easy to understand. The same goes for KISS code – especially if left unrefactored, as it is often the case. That´s why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution. That´s the leaves of the functional decomposition tree. 
Where things became fuzzy, since the design did not cover any more details as with Find_factors(), I repeated the process in the small, so to speak: fake some top level, endure red high level tests, while first solving a simpler problem. Using scaffolding tests (to be thrown away at the end) brought two advantages: Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them. I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API. The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development – being a researcher, being an engineer, being a craftsman – are represented as different phases. First find what, what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatic. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus almost is inevitable – and not left to a refactoring step at the end which is skipped often for different reasons.   PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won´t skip them because of that. And there are no shortcuts during implementation because of that.
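    For readers who want the end result in one place, the factor-finding solution the article works towards can be condensed roughly as follows. This is only a sketch of the same idea, not the article's exact final class (which keeps Find_factors, Translate and Compile as separate methods):

        using System;
        using System.Text;

        public static class ArabicRomanConversions
        {
            // Roman "digits" including the subtractive pairs (IV, IX, ...),
            // ordered from the highest factor to the lowest.
            static readonly int[] Values = { 1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1 };
            static readonly string[] Digits = { "M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I" };

            public static string ToRoman(int arabic)
            {
                var roman = new StringBuilder();

                for (int i = 0; i < Values.Length; i++)
                {
                    // Find the highest remaining factor fitting in the value,
                    // translate it, subtract it, and repeat until nothing is left.
                    while (arabic >= Values[i])
                    {
                        roman.Append(Digits[i]);
                        arabic -= Values[i];
                    }
                }

                return roman.ToString();
            }
        }

    ArabicRomanConversions.ToRoman(1954), for example, yields "MCMLIV".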

    Read the article

  • Is ASP.NET MVC completely (and exclusively) based on conventions?

    - by Mike Valeriano
    --TL;DR Is there a "Hello World!" ASP.NET MVC tutorial out there that doesn't rely on conventions and "stock" projects? Is it even possible to take advantage of the technology without reusing the default file structure, and start from a single "hello_world.asp" file or something (like in PHP)? Am I completely mistaken and should I be looking somewhere else, maybe this? I'm interested in the MVC framework, not Web Forms. --Background I've played a bit with PHP in the past, just for fun, and now I'm back to it since web development became relevant for me once again. I'm no professional, but I try to gain as much knowledge of and control over the technology I'm working with as possible. I'm using Visual Studio 2012 for C# (my “desktop” language of choice) and since I got the Professional Edition from Dreamspark, the Web Development Tools are available, including ASP.NET MVC 4. I won't touch Web Forms, but the MVC Framework got my attention because the MVC pattern is something I can really relate to, since it provides the control I want but... not quite. Learning PHP was easy, and right from the start I could just create a “hello_world.php” file and do something like this for immediate results: <!-- file: hello_world.php --> <?php echo "Hello World!"; ?> But I couldn't find a single ASP.NET (MVC) tutorial out there (I'll be sure to buy one of the upcoming MVC 4 books, only a month away or so) that would start like that. They all start with a sample project, building up knowledge from the basics and heavily using conventions as they go along. Which is fine, I suppose, but it's not the best way for me to learn things. Even the “Empty” project template for a new ASP.NET MVC 4 Application in VS2012 is not empty at all: several files and folders are created for you, much like a new C# desktop application project, but with C# I can in fact start from scratch, creating the project structure myself. It is not the case with PHP: I can choose from a plethora of different MVC frameworks, I can just create my own framework, or I can skip frameworks altogether and toss random PHP along with my HTML in a single file and make it work. I understand the framework needs to establish some rules, but what if I just want to create a single-page website with some C# logic behind it? Do I really need to create a whole bloat of files and folders for the sake of convention? Also, please understand that I haven't gotten far in any of those tutorials, mainly because of this reason, but if that's the only way to do it, I'll go for it using one of the books I've mentioned before. This is my first contact with ASP.NET, but from the few comparisons I've read, I believe I should stay the hell away from Web Forms. Thank you. (Please forgive the broken English - it is not my primary language.)
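    As a point of comparison (a sketch, not a definitive answer): the smallest ASP.NET MVC "Hello World" is still a controller class rather than a single page, because the default route {controller}/{action}/{id} is what maps a URL like /Hello onto it:

        using System.Web.Mvc;

        public class HelloController : Controller
        {
            // GET /Hello
            public ActionResult Index()
            {
                return Content("Hello World!");
            }
        }

    No view, model or extra folders are required for this one action, but the route registration and web.config that the project templates generate still have to exist somewhere, which is essentially the convention the question is pushing against.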

    Read the article

  • Need help to install Ubuntu on Win7 Alienware M17xR3, installing not working

    - by Rodrigo
    Please, I need help installing Linux on my computer to start analyzing Next Generation Sequencing (molecular genetics) data for my work. I've tried to install it in all different ways and forms, and each gave me different errors. First I will describe my system: Alienware M17x R3, RAID drive 596GB (2x 298 GB), Windows 7 primary partition C:. The hard drive is divided into 3 primary partitions (can't do 4, don't know why). 1st: FAT16, no name, capacity 40MB, almost not used (has files like command.com, dellbio.bin, dellrmk.bin and oobedone.flg); I don't know what this is for, but it is blocked and non-system (Status: none). 2nd: Recovery, NTFS, 9GB (Status: system), and it seems to have boot files. 3rd: C:, Windows OS (Status: boot). And I have 2 logical drives: F:, data for backup and games :D (300GB), and G:, for trying to install the Linux on (already tried with FAT32, ext2, ext3 and ext4). Installing with Wubi on F: and G:: the installation looks good on Windows, it asks to restart, then a DOS-like screen appears: try (HD0,0) non-ms:skip try (HD0,1) NTFS5: No Wibildr try (HD0,2) NTFS5: Error: "Prefix" is not set. Installing with Wubi on C:: the installation looks good on Windows, it asks to restart, then the DOS-like screen appears: Mount denied because NTFS volume is already exclusively opened, the volume may be already mounted or another software may use it, which could be identified for example by the help of the 'FUSER' command. Mount devdm-4 Could not find installation files /ubuntu/install custom-installation Run 'Chkdsk /r' on Windows and restart... that should correct it. I did this and it didn't work. I also tried installing from a USB drive and after burning a CD with the ISO file downloaded from the official website: I've tried the option to install alongside Windows with the G: drive in FAT32, ext2, 3 and 4, and I've tried leaving unallocated space, without the drive G: at all... This appears every time: http://i.stack.imgur.com/eo2Ie.png Pressing OK, it asks me: Not possible to install the boot loader at the location; specify a new location. I've tried selecting all the drives; none is accepted. If I ask to continue without installing the boot loader, it asks me to install it later by myself, which I'm trying to figure out, but I'm not really good at understanding how people get to the programming part (like a DOS prompt to type into), because I'm a newb and I haven't worked on DOS or Linux before... I've read that the boot loader has to be installed on a primary partition, but I'm not sure what I can delete from those 2 partitions that would let me get 1 extra... and it seems that the one that Windows uses is write-protected, so the software is not touching it... Please, if someone has any idea to help me out... I've got a course tomorrow on how to use the software to analyze the data and I wanted to take my computer with me, so I've spent the last 3 days trying different ways to solve this. My question is also posted on bugs.launchpad.net, here: launchpad.net bug #882695

    Read the article

  • Fan not detected by lm-sensors

    - by OrangeTux
    My fan is blowing hard, while my CPU temperature is 32 degrees. I have tried a lot of things to control my fan. Changed the grub file from GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi= pci=noacpi" to GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi_osi=\"Linux\"". Ran sensors-detect: To load everything that is needed, add this to /etc/modules: #----cut here---- # Chip drivers coretemp #----cut here---- If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! Do you want to add these lines automatically to /etc/modules? (yes/NO) Unloading i2c-dev... OK Unloading i2c-i801... OK Unloading cpuid... OK Ran sensors: acpitz-virtual-0 Adapter: Virtual device temp1: +34.0°C (crit = +90.0°C) coretemp-isa-0000 Adapter: ISA adapter Core 0: +34.0°C (high = +80.0°C, crit = +90.0°C) Core 2: +34.0°C (high = +80.0°C, crit = +90.0°C) Ran sudo start module-init-tools, which returned: module-init-tools stop/waiting. As you can see my fan isn't detected. Running fancontrol gives me this: Loading configuration from /etc/fancontrol ... Error: Can't read configuration file Can you help me, please? I cannot use my laptop in class now. Thanks in advance. My system (lspci output): 00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02) 00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02) 00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02) 00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06) 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05) 00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05) 00:1c.2 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 3 (rev 05) 00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5) 00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05) 01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5400 Series] 01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series] 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8101E/RTL8102E PCI Express Fast Ethernet controller (rev 02) 03:00.0 Network controller: Broadcom Corporation BCM4313 802.11b/g/n Wireless LAN Controller (rev 01) 7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 05) 7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 05) 7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 05) 7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 05) 7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 05) 7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 05)

    Read the article

  • Pro SharePoint 2010 Business Intelligence Solutions

    - by Sahil Malik
    Oh yeah baby, it's out finally! This book is what I wanted to write for so long now, but never really got a chance to. For SharePoint 2007, I authored the SharePoint section of "Smart BI Solutions with SQL Server 2008" for MS Press. But I never really got the time to author the full book that this topic deserved. Now, with SharePoint 2010, we actually have a full book on this topic. So first things first, I didn't actually write it. My role was limited to the overall concept, the outline, the layout, completion of it, code samples, identifying what we need in here, vouching for technical accuracy, identifying authors etc. The real work was done by Srini (5 chapters), and Steve (1 chapter). So, credit given where it is due. But, with that said, this is a pretty good book. It has always been a challenge to find the superman who knows both data warehousing concepts and SharePoint concepts. The data warehousing concepts include basic stuff you need to know to work in the BI area, such as cubes, MDX queries, etc. So chapter 1 covers that – and if you're a hardcore DBA, feel free to skip Chapter 1. Then beyond that, we take every single SharePoint 2010 BI topic and slice and dice it in detail. The topics we deal with are Visio Services, Reporting Services, Business Connectivity Services, Excel Services, and PerformancePoint Services. And in covering each of these topics, we ensure that a general layout is followed, to ensure completeness of content. We make sure we cover setup-related issues and advice, point-and-click usage, code usage (i.e. extensibility using Visual Studio), and a walkthrough of the administration side of things, including PowerShell. (Yes, I insisted on that being there in every chapter.) Writing a book is always a lot of work, so we hope you find it useful. And it should go very well with the other book I just reviewed, which is Microsoft ADO.NET 4, step by step.

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-29

    - by Bob Rhubart
    ORCLville: OOW 2012 - Crystal Ball. Oracle ACE Director Floyd Teter cooks up some tongue-in-cheek predictions for news and announcements that might come out of Oracle OpenWorld 2012. What's your prediction? Oracle Optimized Solutions at Oracle OpenWorld 2012 | Oracle Hardware Hardware matters, too! The people behind the Oracle Hardware blog have put together a list of Oracle OpenWorld 2012 sessions focused on Oracle Optimized Solutions, "designed, pre-tested, tuned and fully documented architectures for optimal performance and availability." Just plug the session ID numbers into Schedule Builder and you're good to go. AIX Checklist for stable OBIEE deployment | Dick Dunbar "OBIEE is a complicated system with many moving parts and connection points," according to Oracle Business Intelligence escalation engineer Dick Dunbar. "The purpose of this article is to provide a checklist to discuss OBIEE deployment with your systems administrators." Demo for OPN: Coherence Management with EM Cloud Control 12c. Oracle Partner Network members can check out a new Coherence Management demo that showcases some of the key capabilities of Management Pack for Oracle Coherence and JVM Diagnostics. "The demo flow showcases the key enhancements made in Enterprise Manager 12c release which includes new customizable performance summary, cache data management and configuration management," according to the WebLogic Partner Community EMEA blog. The Pragmatic Architect: To Boldly Go Where No One Has Gone Before | Frank Buschmann "Many architects have technical knowledge that's both impressive and sound, which is indeed an inevitable basis for design success," says Frank Buschmann. "Yet, a lot of software projects fail or suffer due to severe challenges in their architecture. The key to mastery is how architects approach design, what they value, and where they focus their attention and work." As retail dies, whom will be the winners? | Peter Evans-Greenwood "The problem for many retailers is that how consumers shop has changed but the retailers haven't adapted," says Peter Evans-Greenwood. "Their sole virtue was to be the last step in a supply chain delivering somebody else's products to the consumer. However, being the last step in the supply chain is no longer a virtue when consumers skip across channels and can reach around the globe, no longer dependent on or limited to what they can find locally." Thought for the Day "Brains require stimulation. If you're locked into a pattern of work, work, and more work, your brain soon habituates - the same way that it lets you stop hearing a clock ticking. So, if you want to be more effective at work, you must, paradoxically, be less single-minded in your devotion to work. Anything you do—anything—that stimulates new segments of your brain will make you a more effective programmer or analyst. I promise, with a money-back guarantee." — Gerald M. Weinberg Source: SoftwareQuotes.com

    Read the article

  • NEW: Oracle Certification Exam Preparation Seminars

    - by Harold Green
    Hi Everyone, I am really excited about a new offering that we are announcing this week - Oracle Certification Exam Preparation Seminars. These are something that will make a big difference for many of you in your efforts to become certified and move your career forward. They have previously been available only to the limited number of customers who have attended our annual conferences in San Francisco (Oracle OpenWorld and JavaOne), where they have been very popular. These are the first in a series of offerings that we are releasing over the next few months. So for those of you either preparing for or considering Oracle certification - keep watching here on the blog, Facebook, Twitter and the Oracle Certification website for additional announcements related to our most popular certification areas. Details of the new Exam Preparation Seminars are found below: NEW: ORACLE CERTIFICATION EXAM PREPARATION SEMINARS Becoming Oracle certified is a great way to build your career, gain additional credibility and improve your earning power. We know that the decision to become certified is not trivial. Our surveys indicate that people consider their time investment a critical factor in their decision to become certified. Your time is important. In order to help candidates maximize the efficiency of their study time, we are releasing a new series of video-based seminars called Exam Preparation Seminars. These seminars are patterned after the extremely popular Exam Cram sessions that until now have only been available at our annual customer conferences (Oracle OpenWorld and JavaOne). Beginning today they are now available to anyone, anywhere as a part of this Exam Prep Seminar series. Features: Fast-paced, objective-by-objective review of the exam topics, led by top Oracle University instructors. 24/7 access through Oracle University's training-on-demand platform. Ability to re-watch all or part of the seminar. All the conveniences of video-based training: start, stop, fast-forward, skip, rewind, review. Tips that will help you better understand what you need to know to pass the exam. The Exam Preparation Seminars are meant to help anyone with a working knowledge of the technology get that extra boost to help them finalize their preparation, and will help anyone who wants a better understanding of the depth and breadth of the exam topics and objectives. Benefits: Save time by understanding what you should study. Makes you efficient because you will understand the breadth and depth of each of the exam topics. Helps you create a better, more efficient study plan. Improves your confidence in your skills and ability to pass the certification exam. Exam Preparation Seminars are available individually, or in convenient Value Packages (which include the Exam Preparation Seminar and an exam voucher which includes one free retake if you need it). Currently we are releasing two seminars - one for DBA SQL and one for DBA Administration I. Additional offerings are in process. Find out more: General: WEB: Oracle Certification Exam Preparation Seminars; VIDEO: Exam Preparation Seminars Promo (1:27). Oracle Database Administration I (11g, 10g): VIDEO: Instructor Introduction (1:08); VIDEO: Sample Video (2:16). Oracle Database SQL: VIDEO: Instructor Introduction (1:08); VIDEO: Sample Video (2:16)

    Read the article

  • C# Dev - I've tried Lisps, but I don't get it.

    - by Jonathan Mitchem
    After a few months of learning about and playing with Lisps, both CL and a bit of Clojure, I'm still not seeing a compelling reason to write anything in one of them instead of C#. I would really like some compelling reasons, or for someone to point out that I'm missing something really big. The strengths of a Lisp (per my research): Compact, expressive notation - More so than C#, yes... but I seem to be able to express those ideas in C# too. Implicit support for functional programming - C# with LINQ extension methods: mapcar = .Select( lambda ); mapcan = .Select( lambda ).Aggregate( (a,b) => a.Union(b) ); car/first = .First(); cdr/rest = .Skip(1); ... etc. Lambda and higher-order function support - C# has this, and the syntax is arguably simpler: "(lambda (x) ( body ))" versus "x => ( body )"; "#(" with "%", "%1", "%2" is nice in Clojure. Method dispatch separated from the objects - C# has this through extension methods. Multimethod dispatch - C# does not have this natively, but I could implement it as a function call in a few hours. Code is Data (and Macros) - Maybe I haven't "gotten" macros, but I haven't seen a single example where the idea of a macro couldn't be implemented as a function; it doesn't change the "language", but I'm not sure that's a strength. DSLs - Can only do it through function composition... but it works. Untyped "exploratory" programming - for structs/classes, C#'s autoproperties and "object" work quite well, and you can easily escalate into stronger typing as you go along. Runs on non-Windows hardware - Yeah, so? Outside of college, I've only known one person who doesn't run Windows at home, or at least a VM of Windows on *nix/Mac. (Then again, maybe this is more important than I thought and I've just been brainwashed...) The REPL for bottom-up design - Ok, I admit this is really really nice, and I miss it in C#. Things I'm missing in a Lisp (due to a mix of C#, .NET, Visual Studio, Resharper): Namespaces - even with static methods, I like to tie them to a "class" to categorize their context (Clojure seems to have this, CL doesn't seem to). Great compile- and design-time support: the type system allows me to determine "correctness" of the data structures I pass around; anything misspelled is underlined in real time, so I don't have to wait until runtime to know; code improvements (such as using an FP approach instead of an imperative one) are auto-suggested. GUI development tools: WinForms and WPF (I know Clojure has access to the Java GUI libraries, but they're entirely foreign to me.) GUI debugging tools: breakpoints, step-in, step-over, value inspectors (text, xml, custom), watches, debug-by-thread, conditional breakpoints, call-stack window with the ability to jump to the code at any level in the stack (to be fair, my stint with Emacs+Slime seemed to provide some of this, but I'm partial to the VS GUI-driven approach). I really like the hype surrounding Lisps and I gave it a chance. But is there anything I can do in a Lisp that I can't do as well in C#? It might be a bit more verbose in C#, but I also have autocomplete. What am I missing? Why should I use Clojure/CL?
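    To make the LINQ comparisons above concrete, here is a small illustrative sketch (not from the original post; the sample data and class name are invented) showing the Select / Aggregate-Union / First / Skip(1) equivalences the question lists:

    // Sketch of the Lisp-to-LINQ equivalences listed above; sample data is illustrative.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LinqAsLisp
    {
        static void Main()
        {
            var xs = new List<int> { 1, 2, 3, 4 };

            // mapcar -> Select
            var squares = xs.Select(x => x * x);                 // 1, 4, 9, 16

            // mapcan -> map then flatten; the Aggregate/Union form from the post works,
            // though Union also removes duplicates (SelectMany is the more literal flatten).
            var nested = xs.Select(x => new[] { x, -x });
            var flattened = nested.Aggregate((a, b) => a.Union(b).ToArray());

            // car/first -> First(), cdr/rest -> Skip(1)
            var head = xs.First();                               // 1
            var tail = xs.Skip(1);                               // 2, 3, 4

            Console.WriteLine(string.Join(", ", squares));
            Console.WriteLine(string.Join(", ", flattened));
            Console.WriteLine(head + " | " + string.Join(", ", tail));
        }
    }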

    Read the article

  • Finding a new programming language for web development?

    - by Xeoncross
    I'm wondering if there are any unbiased resources that give good, specific overviews of programming languages and their intended goals. I would like to learn a new language, but visiting the sites of each language isn't working. Each one talks about how great it is without much mention of its weaknesses or specific goals. For example: Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. Python is a programming language that lets you work more quickly and integrate your systems more effectively. Having been a PHP developer for years myself, I find that Vic Cherubini sums up my plight well: I knew PHP well, had my own framework, and could work quickly to get something up and running. I programmed like this throughout the MVC revolution. I got better and better jobs (read: better paying, better title) as a PHP developer, but all along the way realizing that the code I wrote on my own time was great, and the code I worked with at work was horrible. Like, worse than horrible. Atrocious. OS Commerce level bad. Having side projects kept me sane, because the code I worked with at work made me miserable. This is why I'm retiring from PHP for my side projects and new programming ventures. I'm spent with PHP. Exhausted, if you will. I've reached a level where I think I'm at the top with it as a language and if I don't move on to a new language soon, I'll be done completely with programming and I do not want that. Languages I've looked at include JavaScript (for node.js), Ruby, Python, & Erlang. I've even thought about Scala or C++. The problem is figuring out which ones are built to handle my needs the best. So where can I go to skip the hype and get real information about the maturity of a platform, the size of the community, and the strengths & weaknesses of that language? If I know these, then picking a language to continue my web development should be easy.

    Read the article

  • Disk drive for / not ready on boot after upgrade from 10.04 to 12.04

    - by Mathieu M-Gosselin
    After upgrading (using the Upgrade button from the update manager) from 10.04.4 to 12.04.1, I cannot boot anymore. Upon booting, I am greeted with the Ubuntu logo and the error "The disk drive for / is not ready yet or not present". I have the option to wait, to skip and to access a basic shell. Waiting overnight did nothing; skipping just gives me the same error for /tmp, /home, then for a UUID, and finally it just goes to a black screen with a white "_" in the top left corner. My setup is a dual-boot one with XP on a single hard drive; I use separate partitions for / and /home. Back in the day I installed 8.04 directly from the CD while leaving a partition for XP, which I installed after. This setup had never caused any such issues, even when upgrading from 8.04 to 10.04. I have done plenty of research regarding this issue, as many others seem to have had similar issues after doing the same upgrade as I did. However, while for most what fixed the problem was running: apt-get -f install after remounting / read-write, it didn't do it for me. I got dependency errors (see here), which I also investigated. I found https://bugs.launchpad.net/ubuntu/+source/python-defaults/+bug/990740 where most people say the solution that worked is (prior to running the above command) running: apt-get install -o APT::Immediate-Configure=false -f apt python-minimal but that also got me a lot of dependency errors as output (see here), similar to #34 in the above thread. I also read that running: dpkg --configure -a could help; at first it wouldn't run because it had trouble parsing /var/lib/dpkg/status since there was an extra blank line in a package description (see https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/916799) but I removed it using vim (and then reran the command). It still gives me output that looks like an error, though. Here it is: http://paste.ubuntu.com/1338074/. I also tried re-running the above apt-get commands after that, to no avail. I'm running out of things to try in the hope of getting this fixed; your help would be very much appreciated! Thank you in advance.

    Read the article
