Search Results

Search found 9447 results on 378 pages for 'str replace'.


  • Parallelism in .NET – Part 17, Think Continuations, not Callbacks

    - by Reed
    In traditional asynchronous programming, we’d often use a callback to handle notification of a background task’s completion. The Task class in the Task Parallel Library introduces a cleaner alternative to the traditional callback: continuation tasks.

    Asynchronous programming methods typically required callback functions. For example, MSDN’s Asynchronous Delegates Programming Sample shows a class that factorizes a number. The original method in the example has the following signature:

        public static bool Factorize(int number, ref int primefactor1, ref int primefactor2)
        {
            // ...
        }

    However, calling this is quite “tricky”, even if we modernize the sample to use lambda expressions via C# 3.0. Normally, we could call this method like so:

        int primeFactor1 = 0;
        int primeFactor2 = 0;
        bool answer = Factorize(10298312, ref primeFactor1, ref primeFactor2);
        Console.WriteLine("{0}/{1} [Succeeded {2}]", primeFactor1, primeFactor2, answer);

    If we want to make this operation run in the background, and report to the console via a callback, things get trickier. First, we need a delegate definition:

        public delegate bool AsyncFactorCaller(
            int number, ref int primefactor1, ref int primefactor2);

    Then we need to use BeginInvoke to run this method asynchronously:

        int primeFactor1 = 0;
        int primeFactor2 = 0;
        AsyncFactorCaller caller = new AsyncFactorCaller(Factorize);
        caller.BeginInvoke(10298312, ref primeFactor1, ref primeFactor2,
            result =>
            {
                int factor1 = 0;
                int factor2 = 0;
                bool answer = caller.EndInvoke(ref factor1, ref factor2, result);
                Console.WriteLine("{0}/{1} [Succeeded {2}]", factor1, factor2, answer);
            }, null);

    This works, but is quite difficult to understand from a conceptual standpoint. To combat this, the framework added the Event-based Asynchronous Pattern, but it isn’t much easier to understand or author.

    Using .NET 4’s new Task<T> class and a continuation, we can dramatically simplify the implementation of the above code, as well as make it much more understandable. We do this via the Task.ContinueWith method. This method will schedule a new Task upon completion of the original task, and provide the original Task (including its Result if it’s a Task<T>) as an argument. Using Task, we can eliminate the delegate, and rewrite this code like so:

        var background = Task.Factory.StartNew(() =>
        {
            int primeFactor1 = 0;
            int primeFactor2 = 0;
            bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
            return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
        });
        background.ContinueWith(task =>
            Console.WriteLine("{0}/{1} [Succeeded {2}]",
                task.Result.Factor1, task.Result.Factor2, task.Result.Result));

    This is much simpler to understand, in my opinion. Here, we’re explicitly asking to start a new task, then continue the task with a resulting task.
    In our case, our method used ref parameters (this was from the MSDN sample), so there is a little bit of extra boilerplate involved, but the code is at least easy to understand. That being said, this isn’t dramatically shorter when compared with our C# 3 port of the MSDN code above. However, if we were to extend our requirements a bit, we can start to see more advantages to the Task-based approach.

    For example, suppose we need to report the results in a user interface control instead of reporting them to the Console. This would be a common operation, but now we have to think about marshaling our calls back to the user interface. This is probably going to require calling Control.Invoke or Dispatcher.Invoke within our callback, forcing us to specify a delegate within the delegate. The maintainability and ease of understanding drops. However, just as a standard Task can be created with a TaskScheduler that uses the UI synchronization context, so too can we continue a task with a specific context. There are Task.ContinueWith method overloads which allow you to provide a TaskScheduler. This means you can schedule the continuation to run on the UI thread, by simply doing:

        Task.Factory.StartNew(() =>
        {
            int primeFactor1 = 0;
            int primeFactor2 = 0;
            bool result = Factorize(10298312, ref primeFactor1, ref primeFactor2);
            return new { Result = result, Factor1 = primeFactor1, Factor2 = primeFactor2 };
        }).ContinueWith(task =>
            textBox1.Text = string.Format("{0}/{1} [Succeeded {2}]",
                task.Result.Factor1, task.Result.Factor2, task.Result.Result),
            TaskScheduler.FromCurrentSynchronizationContext());

    This is far more understandable than the alternative. By using Task.ContinueWith in conjunction with TaskScheduler.FromCurrentSynchronizationContext(), we get a simple way to push any work onto a background thread, and update the user interface on the proper UI thread. This technique works with Windows Presentation Foundation as well as Windows Forms, with no change in methodology.
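    As a quick aside, continuations also compose nicely for error handling, something that gets awkward with callbacks. This is not part of the MSDN sample; it is a minimal sketch assuming Factorize could throw:

        // Minimal sketch (not from the MSDN sample): observing success and
        // failure of the antecedent task via continuation options.
        var work = Task.Factory.StartNew(() =>
        {
            int f1 = 0, f2 = 0;
            bool ok = Factorize(10298312, ref f1, ref f2); // may throw in this sketch
            return new { Ok = ok, F1 = f1, F2 = f2 };
        });

        // Runs only if the task threw; the exception arrives wrapped in an AggregateException.
        work.ContinueWith(
            t => Console.WriteLine("Failed: {0}", t.Exception.InnerException.Message),
            TaskContinuationOptions.OnlyOnFaulted);

        // Runs only if the task completed successfully.
        work.ContinueWith(
            t => Console.WriteLine("{0}/{1} [Succeeded {2}]", t.Result.F1, t.Result.F2, t.Result.Ok),
            TaskContinuationOptions.OnlyOnRanToCompletion);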

    Read the article

  • Getting Started With Sinatra

    - by Liam McLennan
    Sinatra is a Ruby DSL for building web applications. It is distinguished from its peers by its minimalism. Here is hello world in Sinatra:

        require 'rubygems'
        require 'sinatra'

        get '/hi' do
          "Hello World!"
        end

    A haml view is rendered by:

        get '/' do
          haml :name_of_your_view
        end

    Haml is also new to me. It is a ruby-based view engine that uses significant white space to avoid having to close tags. A hello world web page in haml might look like:

        %html
          %head
            %title Hello World
          %body
            %div Hello World

    You see how the structure is communicated using indentation instead of opening and closing tags. It makes views more concise and easier to read.

    Based on my syntax highlighter for Gherkin I have started to build a sinatra web application that publishes syntax-highlighted gherkin feature files. I have found that there is a need to have features online so that customers can access them, and so that they can be linked to project management tools like Jira, Mingle, trac etc.

    The first thing I want my application to be able to do is display a list of the features that it knows about. This will happen when a user requests the root of the application. Here is my sinatra handler:

        get '/' do
          feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
          @features = feature_service.features(settings.feature_path, settings.feature_extensions)
          haml :index
        end

    The handler and the view are in the same scope so the @features variable will be available in the view. This is the same way that rails passes data between actions and views. The view to render the result is:

        %h2 Features
        %ul
          - @features.each do |feature|
            %li
              %a{:href => "/feature/#{feature.name}"}= feature.name

    Clearly this is not a complete web page. I am using a layout to provide the basic html page structure. This view renders an <li> for each feature, with a link to /feature/#{feature.name}.

    When the user clicks on one of the links I want to display the contents of that feature file. The required handler is:

        get '/feature/:feature' do
          @feature_name = params[:feature]
          feature_service = Finding::FeatureService.new(Finding::FeatureFileFinder.new, Finding::FeatureReader.new)
          # TODO replace with feature_service.feature(name)
          @feature = feature_service.features(settings.feature_path, settings.feature_extensions).find do |feature|
            feature.name == @feature_name
          end
          haml :feature
        end

    and the view:

        %h2= @feature.name
        %pre{:class => "brush: gherkin"}= @feature.description
        %div= partial :_back_to_index
        %script{:type => "text/javascript", :src => "/scripts/shCore.js"}
        %script{:type => "text/javascript", :src => "/scripts/shBrushGherkin.js"}
        %script{:type => "text/javascript"} SyntaxHighlighter.all();

    Now when I click on the Search link I get a nicely formatted feature file. If you would like to see the full source it is available on bitbucket.

    Read the article

  • SOCharts: Charts by Tags

    - by abhin4v
    I created this small app as a weekend hack. It shows the reputations, upvotes, downvotes and accepted answers for a user against the tags for the answers.

    About: I wanted to know how many upvotes I was away from getting the bronze badge for the clojure tag. But I could not find any straightforward way of doing that. So I wrote this app (in Clojure, of course). The SO API is used for the data and the charts are created using the Google Chart API. The charts are opened in the default browser.

    License: Licensed under EPL 1.0.

    Download: If you have Clojure and Leiningen installed, you can simply get the code from https://gist.github.com/725331, save it as socharts.clj and then run

        lein repl -e "(load \"socharts\")(refer 'socharts.socharts)(-main)"

    to launch the Swing UI. If you don't have Clojure installed, but have Java, then download the standalone jar from http://dl.dropbox.com/u/5247/socharts-1.0.0-standalone.jar and run it as

        javaw -jar socharts-1.0.0-standalone.jar

    Once the UI is launched, just type your user id in the input box and press <ENTER>. It will take some time to download the data from the SO API (the progress bar shows the download progress) and then it will open the charts in your default browser. You can also run it as a command line app by running

        lein repl -e "(load \"socharts\")(refer 'socharts.socharts)(-main <userid>)"

    or

        java -jar socharts-1.0.0-standalone.jar <userid>

    where you replace <userid> with your user id. Be warned that because of a missing feature in the SO API, it will fetch the data for each question you have answered. So the maximum limit is 10000 answers (the SO API call limit).

    Platform: All platforms with Java 1.6.

    Contact: You can reach me at abhinav [at] abhinavsarkar [dot] net. Please report bugs/comments/suggestions as answers to this post.

    Code: Code was written in Clojure with the UI in Swing. It is available at https://gist.github.com/725331. It's a public gist, so you can fork it if you'd like to make some changes.

    Read the article

  • Oracle AutoVue Key Highlights from Oracle OpenWorld 2012

    - by Celine Beck
    We closed another successful Oracle OpenWorld for AutoVue. Thanks to everyone who joined us this year. As usual, from customer presentations to evening networking activities, there was enough to keep us busy during the entire event. Here is a summary of some of the key highlights of the conference:

    Sessions: We had two AutoVue-specific sessions during Oracle OpenWorld this year. The first session was part of the Product Lifecycle Management track and covered how AutoVue can be used to help drive effective decision making and streamline design-for-manufacturing processes. Attendees had the opportunity to learn from customer speaker GLOBALFOUNDRIES how they have been leveraging Oracle AutoVue within Agile PLM to enable a high degree of collaboration during the exceptionally creative phases of their product development processes, securely, without risking valuable intellectual property. If you are interested, you can download the presentation by visiting launch.oracle.com/?plmopenworld2012.

    AutoVue was also featured as part of the Utilities track. This session focused on how visualization solutions play a critical role in the effective plant optimization and configuration strategies defined by owners and operators of power generation facilities. Attendees learnt how, integrated with document management systems and enterprise applications like Oracle Primavera and Asset Lifecycle Management, AutoVue improves change management processes; minimizes risks by providing access to accurate engineering drawings which capture and reflect the as-maintained status of assets; and allows customers to drive complex maintenance projects to successful completion.

    Augmented Business Visualization for Agile PLM: During Oracle OpenWorld, we also showcased an Augmented Business Visualization-based solution for Oracle Agile PLM. An Augmented Business Visualization (ABV) solution is one where your structured data (from Oracle Agile PLM, for instance) and your unstructured data (documents, designs, 3D models, etc.) come together to allow you to make better decisions (check out our blog posts on the topic: Augment the Value of Your Data (or Time to replace the “attach” button) and Context is Everything). As part of Agile PLM, the idea is to support more effective decision-making by turning 3D assemblies into color-coded reports, and streamlining business processes like Engineering Change Management by enabling the automatic creation of engineering change requests in Agile PLM directly from documents being viewed in AutoVue.

    More on this coming soon... probably during the Oracle Value Chain Summit, to be held Feb. 4-6, 2013 in San Francisco! Mark your calendars and stay tuned for more information! And thanks again for joining us at Oracle OpenWorld!

    Read the article

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15” laptop has been around for quite a while, and one I watched closely. After more than three years with a 17” MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still doing 6 or 7 hours after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750.

    The appeal of higher resolution goes way back, to when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it’s bigger. While I have mostly indifference for iOS, I have to admit that a higher dot pitch on the iPhone and iPad is pretty to look at. So I ordered the new 15” “Retina” model as soon as the Apple Store went live with it, and got it seven days later.

    I’ve been primarily using Parallels as my VM of choice from OS X for about five years. They recently put out an update for compatibility with the display, though I’m not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: set the display in OS X to “more room,” which is roughly the equivalent of the 1920x1200 that my 17” did. It’s not as stunning as the text at the default 1440x900 equivalent (in OS X), but it’s still quite readable. Parallels still doesn’t entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It’s not really taking advantage of the display for sharpness, but hopefully that’s something that Parallels will figure out.

    Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that’s in a VM. It’s hard to believe there’s something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • NoSQL is not about object databases

    - by Bertrand Le Roy
    NoSQL as a movement is an interesting beast. I kinda like that it’s negatively defined (I happen to belong myself to at least one other such a-community). It’s not in its roots about proposing one specific new silver bullet to kill an old problem. It’s about challenging the consensus.

    Actually, blindly and systematically replacing relational databases with object databases would just replace one set of issues with another. No, the point is to recognize that relational databases are not a universal answer (although they have been used as one for so long) and recognize instead that there’s a whole spectrum of data storage solutions out there. Why is it so hard to recognize, by the way? You are already using some of those other data storage solutions every day. Let me cite a few:

        - The file system
        - Active Directory
        - XML / JSON documents
        - The Web
        - E-mail
        - Logs
        - Excel files
        - EXIF blobs in your photos
        - Relational databases
        - And yes, object databases

    It’s just a fact of modern life. Notice by the way that most of the data that you use every day is unstructured and thus mostly unsuitable for relational storage. It really is more a matter of recognizing it: you are already doing NoSQL.

    So what happens when for any reason you need to simultaneously query two or more of these heterogeneous data stores? Well, you build an index of sorts combining them, and that’s what you query instead. Of course, there’s not much distance to travel from that to realizing that querying is better done when completely separated from storage.

    So why am I writing about this today? Well, that’s something I’ve been giving lots of thought, on and off, over the last ten years. When I built my first CMS all that time ago, one of the main problems my customers were facing was to manage and make sense of the mountain of unstructured data that was constituting most of their business. The central entity of that system was the file system, because we were dealing with lots of Word documents, PDFs, OCR’d articles, photos and static web pages. We could have stored all that in SQL Server. It would have worked. Ew. I’m so glad we didn’t.

    Today, I’m working on Orchard (another CMS ;). It’s a pretty young project, but already one of the questions we get the most is how to integrate existing data. One of the ideas I’ll be trying hard to sell to the rest of the team in the next few months is to completely split the querying from the storage. Not only does this provide great opportunities for performance optimizations, it gives you homogeneous access to heterogeneous and existing data sources. For free.

    Read the article

  • Best advice for setting up Ubuntu on my mother's computer

    - by idealmachine
    Intended use: My mother had an old Compaq desktop computer running Windows 98, which she used for occasional Web browsing and playing cards. The name of her card game is Hoyle Card Games 3. Although I had to repair it several times over the last 10 years, it worked fine until it finally died at the end of last year.

    Hardware specifications: A relative brought up a newer computer soon afterward:

        - Operating system: Windows XP
        - Asus K8N motherboard (with broken on-board sound; getting a sound card)
        - Athlon 64? processor (don't remember the clock speed)
        - 512 MB RAM
        - Hope the graphics card works...
        - Replacement sound card will be one of:
            - Ensoniq ES1370 AudioPCI
            - Diamond Monster Sound MX300 (Aureal chipset)
            - Sound Blaster Audigy 2 SE

    Peripherals:

        - HP Scanjet 3400c scanner (USB connected)
        - HP LaserJet multi-function printer (parallel port connected, and printing works with a PCL driver)
        - Same serial mouse as old computer

    Question: I had set up an SSH/VNC connection to allow for remotely working out problems. Or so I thought. A month later, the computer would not boot, rendering the SSH connection useless and an OS reinstall necessary. Unfortunately, I have neither the original Windows disc nor the product key. Unless I were to pay $200 for a full Windows 7 Home Premium license for my computer, I would not be able to re-install Windows XP on hers. I consider myself an advanced Linux user, having used Debian for years. So here are my questions. I have only one day to decide whether to use Ubuntu or buy Windows:

        1. A quick search leads me to believe all the hardware listed above is supposed to work with Linux, but am I mistaken?
        2. Would Ubuntu/Xubuntu suffice (specify which one if it matters), or would I be better off paying the $200 necessary for Windows XP?
        3. Is the card game likely to run on Wine? I believe the minimum system requirement is Windows 95.
        4. Failing Wine compatibility, will VirtualBox run fast enough on such a computer (Windows 98 as the guest OS)?
        5. Are there any free card games just as good? She plays mainly Bridge, Poker, and Solitaire.
        6. Is there any "Large Fonts" option for those with poor vision? The lack of it would be a big disadvantage.
        BONUS: Although I would probably replace the old mouse upon a move to Ubuntu, is it even possible to get a serial mouse working?

    Read the article

  • Playing NSF music in FMOD.net

    - by Tesserex
    So, as the title says, I want to be able to play NSF files using FMOD, because my project already uses FMOD and I'd rather not replace it. This will involve figuring out how existing players and emulators work and porting it. I haven't yet found an existing player that uses FMOD. My starting point is the MyNes source from http://sourceforge.net/projects/mynes/. There are two big steps between here and what I'm looking for:

        - MyNes plays from a ROM, not NSF. So, I have to rip out the APU and get it to play NSF files.
        - The MyNes APU uses SlimDX, so I have to convert that to FMOD.NET.

    I am really stuck about how to go about either of these, because I'm not that familiar with audio formats and it's hard finding resources online. So here are a few questions:

        1. From what I can tell from the NSF spec at http://kevtris.org/nes/nsfspec.txt, it just contains the relevant memory section of the ROM, plus the header. If anyone can verify or correct this that would be great.
        2. The emulator APU uses data from the rest of the emulator to play, including things like cycle counts. I'm not sure what replaces this in a standalone player. Can't I just load all the music data at once into a stream and play it?
        3. Joining #1 and #2, does the header data from the NSF substitute for some of the ROM data in the emulator code?
        4. Using FMOD, will I be following the usercreatedsound example for loading a stream? And does this format count as PCM? Specifically MyNes says PCM8. Any tips on loading / playing the stream in FMOD are appreciated.

    As an aside, I don't really understand the loading / playing sections of the spec I linked at all. It seems to apply to 6502 systems / emulators only and not to my situation. I know it's a long shot for anyone here to have enough experience in this area to help, but anything you can provide is definitely appreciated. A link to an existing .NET library that does this would be even better, but I don't believe one exists.
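    Update: to make question 4 concrete, here is the rough shape I'm imagining, based on FMOD Ex's usercreatedsound example. This is an untested sketch; I'm treating the wrapper type and member names, buffer sizes, and sample rate as assumptions:

        // Untested sketch: a user-created 8-bit PCM stream in the FMOD Ex C#
        // wrapper, after the usercreatedsound example. The emulated APU would
        // fill the buffer inside the pcmreadcallback.
        using System;
        using System.Runtime.InteropServices;

        class NsfStreamSketch
        {
            static FMOD.System system;
            static FMOD.Sound sound;
            static FMOD.Channel channel;

            static FMOD.RESULT PcmRead(IntPtr soundraw, IntPtr data, uint datalen)
            {
                // TODO: copy datalen bytes of APU output into 'data'
                // (e.g. Marshal.Copy from a managed byte[] the APU fills).
                return FMOD.RESULT.OK;
            }

            static void Main()
            {
                FMOD.Factory.System_Create(ref system);
                system.init(32, FMOD.INITFLAGS.NORMAL, IntPtr.Zero);

                var exinfo = new FMOD.CREATESOUNDEXINFO();
                exinfo.cbsize = Marshal.SizeOf(exinfo);
                exinfo.numchannels = 1;                 // assumption: mono output
                exinfo.defaultfrequency = 44100;        // assumption: 44.1 kHz
                exinfo.format = FMOD.SOUND_FORMAT.PCM8; // matches MyNes' PCM8
                exinfo.decodebuffersize = 4410;         // assumption: ~100 ms chunks
                exinfo.length = 44100;                  // nominal 1-second buffer
                exinfo.pcmreadcallback = PcmRead;

                system.createSound((string)null,
                    FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL,
                    ref exinfo, ref sound);
                system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);
            }
        }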

    Read the article

  • How can I troubleshoot flash player/hardware conflict?

    - by sparthikas
    OBJECTIVE: Have a web browser on my Ubuntu install that can play youtube and hulu videos. Also would like to understand the problem so that I can fix it again if I change software. Workarounds welcome; technical understanding and solution preferable.

    SYMPTOMS:

        - Flash does not run: cannot make the right-click menu appear, and an empty box sits where video should be, changing to a black box when hovering over other links. The "The Adobe Flash plugin has crashed" message does not appear with its sad lego face.
        - Cannot activate the proprietary graphics driver: doing so causes the system to reboot to a prompt.

    SOLUTIONS TRIED:

        - Replaced the OS (tried slackware 13.37, fedora 17, linuxmint 13 maya, gentoo, lubuntu, and even winXP). Lubuntu confirmed to work; I don't remember how much tweaking, if any, this required. Slack, fedora, mint, and gentoo all failed to run flash just like ubuntu.
        - Many reinstalls of flash player via different methods, including cleaning up old installs first; also tried gnash and lightspark.
        - Replaced the graphics card (replaced HIS IceQ Radeon HD 4670 AGP with an older GeForce 5700 LE); no change in the problem.
        - Flash does successfully work on the winXP installation with the Catalyst AGP hotfix driver applied; however, I consider windows wholly unacceptable for web browsing due to lack of security. The Lubuntu install also works, but I do not want to be tied down to just using Lubuntu on this computer.

    SYSINFO: Latest versions of Ubuntu, Firefox, and Flash on a fresh Ubuntu install. Gigabyte 7s748 motherboard with Athlon XP 2800+ and 3 GB of RAM, Radeon HD 4670 AGP card, and a Dell soundblaster live series sound card (due to malfunction of the onboard sound on the motherboard). Wired internet connection, Maxtor 6Y120L3 HDD, Sony DVD RW AW-Q170A, Dell M993s monitor.

    NOTES: I do not know if the graphics driver issue and the flash troubles are linked. Substituting an older graphics card and seeing the same flash troubles seems to suggest they aren't. My troubleshooting method is rather reductionist, consisting mainly of "replace things until you find out what was causing the error by process of elimination", only it seems that this must be a conflict which arises when software decides how to configure itself on my hardware. That is, I know the hardware can run Flash, and I know that on other systems the same software can too, but somehow the combination fails. Consequently I feel out of my depth. I will keep trying things off and on, but I have spent probably about 30 man-hours in the last 4 months working on this problem with no joy other than the lubuntu workaround. Any help will be appreciated; I will be checking back and posting updates. Any pertinent questions regarding me or my computer will be answered, and outputs from config files can be accessed and posted (IDK which ones or what parts to post).

    Read the article

  • Finally, I have my HP 6910p laptop running with 8Gb RAM

    - by Liam Westley
    Today, I received two Corsair Value Select 4Gb DDR SO-DIMMs (from overclock.co.uk) for my aging HP 6910p, to give it the extra lease of life to keep it going until the end of 2010. Windows 7 64-bit happily sees all 8Gb.

    No 4Gb modules are officially supported for the HP 6910p (they didn’t exist when it was first built). I was taking a bit of a gamble, relying on the UK distance selling regulations, which meant that even if they didn’t work I’d be able to send them back, getting a full refund and only paying for the return postage.

    I’d read Keith Combs’ blog back in 2008 (http://blogs.technet.com/b/keithcombs/archive/2008/07/05/loading-a-hp-6910p-with-8gb-of-ram.aspx), where he mentioned ‘trying’ out 4Gb samples of SO-DIMMs in a HP 6910p laptop, but there still appears to be no mention of running this configuration in any other blog.

    Seeing how the 8Gb of memory is used is made easier with the new Resource Monitor available in Windows 7. With two copies of Visual Studio 2008, Outlook, Firefox (with 30+ tabs), TweetDeck (an infamous memory hog) and VMWare Workstation running a virtual machine allocated with 2Gb of memory, you might have no ‘free’ memory remaining, but the standby memory is an awesome 2.4Gb, and once the VM is up and running the Hard Faults/sec hovers around zero.

    It’s the page fault figure which really counts, because reducing that value means that you are preventing the Windows 7 system drive from being used for virtual memory paging operations. Even after only a few hours of use it’s noticeable that disc access has been reduced and applications feel more responsive and ‘snappy’.

    I did consider the option of purchasing an SSD to replace the main drive, rather than go for 8Gb of RAM, but I think I’ve probably made the correct decision. Given my hobby topic of virtualisation, I take the view that you can never have too much memory. It was also a decision made easier by the price differential between 8Gb of RAM and a decent size SSD. In the 18 months since Keith Combs tested the first 4Gb SO-DIMMs, they have plummeted in price; at just under £100 per 4Gb, they are around a fifth of the price at launch.

    So if you ever wondered if a HP 6910p can handle 8Gb, now you know.

    Read the article

  • Ranking Part III

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved

    In previous blogs, “Ranking an Introduction” and “Ranking Part II”, you have already praised me in “Rank the Author” and learned how to create a new element on a page and how to place it where you need it. For this installment, I just added code to keep the number of votes (you vote by clicking one of the stars) and the total vote. Using these two, we can compute the average rating. It’s a small step, but its purpose is to show that we do not need a detailed history in order to compute the average. A running total is sufficient. Please note that once you close the game, you will lose your previous total. In real life, we persist the totals in the list itself.

    We also keep a list of actual votes, but its purpose is to prevent double votes. If a person has already voted, his user id is already on the list and our program will check for it and bar the person from voting again. This is coded in an event receiver, which is a SharePoint server piece of code. I will show you how to do this part in a subsequent blog; a rough sketch of the idea follows below.

    Again, go to the page and look at the code. The gist of it is here. avg, votes, and stars are global variables that I defined before.

        function sendRate(sel) {
            // I hate long lines so I created pieces of the message in their own vars
            var s1 = "Your Rating Was: ";
            var s2 = ".. ";
            var s3 = "\nVotes = ";
            var s4 = "\nTotal Stars = ";
            var s5 = "\nAverage = ";
            var s;
            s = parseInt(sel.id.replace("_", '')); // Get the selected star number
            votes = parseInt(votes) + 1;
            stars = parseInt(stars) + s;
            avg = parseFloat(stars) / parseFloat(votes);
            alert(s1 + sel.id + s2 + sel.title + s3 + votes + s4 + stars + s5 + avg);
        }

    Click on the link to play and examine “Ranking with Stats”. That’s all folks!
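    For the curious, here is a minimal sketch of what that double-vote check could look like in a SharePoint item event receiver. This is not the code from the upcoming post; the list and field names ("Votes", "UserId") are hypothetical:

        // Minimal sketch: cancel the add when the current user has already voted.
        // The "Votes" list and "UserId" field are hypothetical names.
        using Microsoft.SharePoint;

        public class VoteEventReceiver : SPItemEventReceiver
        {
            public override void ItemAdding(SPItemEventProperties properties)
            {
                using (SPWeb web = properties.OpenWeb())
                {
                    SPQuery query = new SPQuery();
                    query.Query =
                        "<Where><Eq><FieldRef Name='UserId'/>" +
                        "<Value Type='Text'>" + properties.UserLoginName +
                        "</Value></Eq></Where>";

                    if (web.Lists["Votes"].GetItems(query).Count > 0)
                    {
                        properties.Cancel = true;
                        properties.ErrorMessage = "You have already voted.";
                    }
                }
            }
        }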

    Read the article

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
    The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well-known, I thought it would be good to repost, which I've done with permission from Earl Sullivan.

    This is an FYI for those who have UPK 3.6.1 and applied Enablement Pack 1. There is a manual database update that needs to be run. Here is the information:

    To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.

    To update a database located on an Oracle Database server manually:

        1. Run the Setup.exe to install the files for the Service Pack.
        2. Start SQL*Plus and log in with the system account.
        3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters:
           - schema_owner: there is a limit of 20 characters on the schema owner name. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
           - password: the existing schema owner password.
           A statement with generic parameters looks like this:
               @C:\AlterSchemaObjects.sql schema_owner password
        4. Run the AlterSchemaObjects.sql script.

    To update a database located on a Microsoft SQL server manually:

        1. Run the Setup.exe to install the files for the Service Pack.
        2. Log in to the database using the database administrator account.
        3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
        4. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script.

    Note: The database name is the initial catalog in the connection string in the web.config file.

    Editor's note: The database update fixes a problem with permissions where the permissions for a user will be incorrectly updated when a group that the user was removed from has their permissions changed.

    Read the article

  • Miracle Growth Of Organs From Our Own Cells

    - by Rekha
    Currently, there is a shortage of healthy organs. The donor and patient also have to be closely matched, and there is a chance the patient’s immune system may reject the transplant. Right now, researchers are seriously pursuing a new kind of solution: “bioartificial” organs grown from the patient’s own cells. A few people have already received lab-grown bladders.

    The bladder technique was developed by Anthony Atala of the Wake Forest Institute for Regenerative Medicine in Winston-Salem, North Carolina. Healthy cells are taken from the patient’s diseased bladder and caused to multiply profusely in petri dishes. The cells are layered one layer at a time, muscle cells on the outside, urothelial cells on the inside. The bladder-to-be is then incubated at body temperature until the cells form functioning tissue. This process could take six to eight months.

    Organs with lots of blood vessels, such as kidneys or livers, are harder to grow than hollow ones like bladders. Atala’s group works on 22 organs and tissues, including ears, and recently made a functioning piece of human liver. Others in the list include:

        - Columbia University: Jawbone
        - Yale University: Lung
        - University of Minnesota: Rat heart
        - University of Michigan: Artificial kidney

    Growing a copy of the patient’s organ is not always possible, for instance when the original is completely damaged by cancer. An alternative is a bank of stem cells collected, without harming human embryos, from amniotic fluid in the womb; those cells are coaxed into becoming heart, liver and other organ cells. A bank of 100,000 stem cell samples would have enough genetic variety to match nearly any patient. Surgeons could order organs grown as needed instead of waiting for the perfect donor.

    “There are few things as devastating for a surgeon as knowing you have to replace the tissue and you’re doing something that’s not ideal,” says Atala, a urologic surgeon himself. “Wouldn’t it be great if they had their own organ?” Great for the patient especially, he means.

    Via National Geographic.

    Read the article

  • Windows CE: Changing Static IP Address

    - by Bruce Eitman
    A customer contacted me recently and asked me how to change a static IP address at runtime. Of course this is not something that I knew how to do, but with a little bit of research I figured out how to do it.

    It turns out that the challenge is to request that the adapter update itself with the new IP address. Otherwise, the change in IP address is a matter of changing the address in the registry for the adapter. The registry entry is something like:

        [HKEY_LOCAL_MACHINE\Comm\LAN90001\Parms\TcpIp]
           "EnableDHCP"=dword:0
           "IpAddress"="192.168.0.100"
           "DefaultGateway"="192.168.0.1"
           "Subnetmask"="255.255.255.0"

    where LAN90001 would be replaced with your adapter name. I have written quite a few articles about how to modify the registry, including a registry editor that you could use.

    Requesting that the adapter update itself is a matter of getting a handle to the NDIS driver, and then asking it to refresh the adapter. The code is:

        #include <windows.h>
        #include "winioctl.h"
        #include "ntddndis.h"

        void RebindAdapter( TCHAR *adaptername )
        {
            HANDLE hNdis;
            BOOL fResult = FALSE;
            DWORD count;

            // Make this function easier to use - hide the need to have two null characters.
            int length = wcslen(adaptername);
            int AdapterSize = (length + 2) * sizeof( TCHAR );
            TCHAR *Adapter = (TCHAR *)malloc(AdapterSize);
            wcscpy( Adapter, adaptername );
            Adapter[ length ] = '\0';
            Adapter[ length + 1 ] = '\0';

            hNdis = CreateFile(DD_NDIS_DEVICE_NAME,
                        GENERIC_READ | GENERIC_WRITE,
                        FILE_SHARE_READ | FILE_SHARE_WRITE,
                        NULL,
                        OPEN_ALWAYS,
                        0,
                        NULL);

            if (INVALID_HANDLE_VALUE != hNdis)
            {
                // Ask NDIS to rebind the adapter so it picks up the new settings.
                fResult = DeviceIoControl(hNdis,
                            IOCTL_NDIS_REBIND_ADAPTER,
                            Adapter,
                            AdapterSize,
                            NULL,
                            0,
                            &count,
                            NULL);
                if( !fResult )
                {
                    RETAILMSG( 1, (TEXT("DeviceIoControl failed %d\n"), GetLastError() ));
                }
                CloseHandle(hNdis);
            }
            else
            {
                RETAILMSG( 1, (TEXT("Failed to open NDIS Handle\n")));
            }

            free( Adapter );
        }

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                           LPWSTR lpCmdLine, int nCmdShow)
        {
            RebindAdapter( TEXT("LAN90001") );
            return 0;
        }

    If you don’t want to write any code, but instead plan to use a registry editor to change the IP address, then there is a command line utility to do the same thing. NDISConfig.exe can be used:

        ndisconfig adapter rebind LAN90001

    Copyright © 2012 – Bruce Eitman All Rights Reserved

    Read the article

  • ArchBeat Link-o-Rama for 2012-03-20

    - by Bob Rhubart
    SOA! SOA! SOA!; OSB 11g Recipes and Author Interviews (www.oracle.com)
    Featured this week on the OTN Architect Homepage, along with the latest articles, white papers, blogs, events, and other resources for software architects.

    OTN Virtual Developer Day - Java - APAC
    Tuesday March 27th, 2012. 9:30 am to 2:00 pm IST / 12:00 pm to 4:30 pm SGT / 3:00 pm to 7:30 pm AEDT.

    Oracle Virtualization Newsletter - March Edition (www.oracle.com)
    News, white papers, webcasts, events, blogs, and more -- all focused on Oracle Virtualization products.

    7 Signs an Enterprise is getting the post-PC thing | Ron Tolido (www.capgemini.com)
    Capgemini's Ron Tolido shares "indicators for enterprises that actually understand the power of mobility and the post-PC era."

    Gartner: Personal Cloud Will Replace the Personal Computer as the Center of Users' Digital Lives (www.gartner.com)
    The change, says Gartner, "will require enterprises to fundamentally rethink how they deliver applications and services to users."

    Northeast Ohio Oracle Users Group 2 Day Seminar - May 14-15 - Cleveland, OH (www.neooug.org)
    More than 20 sessions over 4 tracks, featuring 18 speakers, including Oracle ACE Director Cary Millsap, Oracle ACE Director Rich Niemiec, and Oracle ACE Stewart Brand. Register before April 15 and save.

    Oracle Hardware Systems: The Extreme Performance Tour - Dates and Locations Worldwide (www.oracle.com)
    Get the inside track on Oracle's hardware strategy and product roadmap from the people who know Oracle hardware best. And be sure to meet our global experts in the Extreme Performance exhibition area. Click the link for dates and locations worldwide.

    Oracle's ZFS Storage Appliance Simulator | Steen Schmidt (blogs.oracle.com)
    Take a test drive.

    Oracle Access Manager 11g - useful links | Dmitry Nefedkin (blogs.oracle.com)
    Dmitry Nefedkin shares a list of links to useful resources for those interested in Oracle Access Manager 11g.

    Oracle Linux Online Forum - March 27 (event.on24.com)
    Tuesday, March 27, 2012, 9:30 AM PT / 12:30 PM ET. Leading Innovations in Enterprise Linux, hosted by Oracle executives Edward Screven and Wim Coekaerts. Customer presentation: How Oracle Helps Reduce Cost and Improve Performance of Database Applications at Progressive Insurance (speaker: John Dome). What's New in Oracle Linux (speakers: Waseem Daher, Chris Mason, Elena Zannoni, Lenz Grimmer). Get More Value from your Linux Vendor (speakers: Sergio Leunissen, Chris Mason, Monica Kumar).

    Thought for the Day: "I have yet to see any problem, however complicated, which, when looked at in the right way, did not become still more complicated." (Poul Anderson)

    Read the article

  • Switching the layout in Orchard CMS

    - by Bertrand Le Roy
    The UI composition in Orchard is extremely flexible, thanks in no small part to the usage of dynamic Clay shapes. Every notable UI construct in Orchard is built as a shape that other parts of the system can then party on and modify any way they want. Case in point today: modifying the layout (which is a shape) on the fly to provide custom page structures for different parts of the site. This might actually end up being built into Orchard 1.0, but for the moment it’s not in there. Plus, it’s quite interesting to see how it’s done.

    We are going to build a little extension that allows for specialized layouts in addition to the default layout.cshtml that Orchard understands out of the box. The extension will add the possibility to add the module name (or, in MVC terms, area name) to the template name, or module and controller names, or module, controller and action names. For example, the home page is served by the HomePage module, so with this extension you’ll be able to add an optional layout-homepage.cshtml file to your theme to specialize the look of the home page while leaving all other pages using the regular layout.cshtml.

    I decided to implement this sample as a theme with code. This way, the new overrides are only enabled as the theme is activated, which makes a lot of sense as this is going to be where you’ll be creating those additional layouts. The first thing I did was to create my own theme, derived from the default TheThemeMachine, with this command:

        codegen theme CustomLayoutMachine /CreateProject:true /IncludeInSolution:true /BasedOn:TheThemeMachine

    Once that was done, I worked around a known bug and moved the new project from the Modules solution folder into Themes (the code was already physically in the right place, this is just about Visual Studio editing). The CreateProject flag in the command line created a project file for us in the theme’s folder. This is only necessary if you want to run code outside of views from that theme.
    The code that we want to add is the following LayoutFilter.cs:

        using System.Linq;
        using System.Web.Mvc;
        using System.Web.Routing;
        using Orchard;
        using Orchard.Mvc.Filters;

        namespace CustomLayoutMachine.Filters {
            public class LayoutFilter : FilterProvider, IResultFilter {
                private readonly IWorkContextAccessor _wca;

                public LayoutFilter(IWorkContextAccessor wca) {
                    _wca = wca;
                }

                public void OnResultExecuting(ResultExecutingContext filterContext) {
                    var workContext = _wca.GetContext();
                    var routeValues = filterContext.RouteData.Values;

                    workContext.Layout.Metadata.Alternates.Add(
                        BuildShapeName(routeValues, "area"));
                    workContext.Layout.Metadata.Alternates.Add(
                        BuildShapeName(routeValues, "area", "controller"));
                    workContext.Layout.Metadata.Alternates.Add(
                        BuildShapeName(routeValues, "area", "controller", "action"));
                }

                public void OnResultExecuted(ResultExecutedContext filterContext) {
                }

                private static string BuildShapeName(
                    RouteValueDictionary values, params string[] names) {

                    return "Layout__" + string.Join("__",
                        names.Select(s => ((string)values[s] ?? "").Replace(".", "_")));
                }
            }
        }

    This filter is intercepting ResultExecuting, which is going to provide a context object out of which we can extract the route data. We are also injecting an IWorkContextAccessor dependency that will give us access to the current Layout object, so that we can add alternate shape names to its metadata. We are adding three possible shape names to the default, with different combinations of area, controller and action names.

    For example, a request to a blog post is going to be routed to the “Orchard.Blogs” module’s “BlogPost” controller’s “Item” action. Our filter will then add the following shape names to the default “Layout”:

        Layout__Orchard_Blogs
        Layout__Orchard_Blogs__BlogPost
        Layout__Orchard_Blogs__BlogPost__Item

    Those template names get mapped into the following file names by the system (assuming the Razor view engine):

        Layout-Orchard_Blogs.cshtml
        Layout-Orchard_Blogs-BlogPost.cshtml
        Layout-Orchard_Blogs-BlogPost-Item.cshtml

    This works for any module/controller/action of course, but in the sample I created Layout-HomePage.cshtml (a specific layout for the home page), Layout-Orchard_Blogs.cshtml (a layout for all the blog views) and Layout-Orchard_Blogs-BlogPost-Item.cshtml (a layout that is specific to blog posts).

    Of course, this is just an example, and this kind of dynamic extension of shapes that you didn’t even create in the first place is highly encouraged in Orchard. You don’t have to do it from a filter; we only did it this way because that was a good place where we could get the context that we needed. And of course, you can base your alternate shape names on something completely different from route values if you want. For example, you might want to create your own part that modifies the layout for a specific content item, or you might want to do it based on the raw URL (like it’s done in widget rules) or who knows what crazy custom rule. The point of all this is to show that extending or modifying shapes is easy, and the layout just happens to be a shape. In other words, you can do whatever you want. Ain’t that nice?

    The custom theme can be found here: Orchard.Theme.CustomLayoutMachine.1.0.nupkg

    Many thanks to Louis, who showed me how to do this.

    Read the article

  • Is the science of Computer Science dead?

    - by Veaviticus
    Question: Is the science and art of CS dead? By that I mean, the real requirements to think, plan and efficiently solve problems seem to be falling away from CS these days. The field seems to be lowering the entry barrier so more people can 'program' without having to learn how to truly program.

    Background: I'm a recent graduate with a BS in Computer Science. I'm working a starting position at a decent-sized company in the IT department. I mostly do .NET and other Microsoft technologies at my job, but before this I've done Java stuff through internships and the like. I personally am a C++ programmer for my own for-fun projects.

    In Depth: Through the work I've been doing, it seems to me that the intense disciplines of a real science don't exist in CS anymore. In the past, programmers had to solve problems efficiently in order for systems to be robust and quick. But now, with the prevailing technologies like .NET, Java and scripting languages, it seems like efficiency and robustness have been traded for ease of development.

    Most of the colleagues that I work with don't even have degrees in Computer Science. Most graduated with Electrical Engineering degrees, a few with Software Engineering, even some who came from tech schools without a 4-year program. Yet they get by just fine without having the technical background of CS, without having studied theories and algorithms, without having any regard for making an elegant solution (they just go for the easiest, cheapest solution). The company pushes us to use Microsoft technologies, which take all the real thought out of the matter and replace it with libraries and tools that can auto-build your project for you half the time. I'm not trying to hate on the languages; I understand that they serve a purpose and do it well, but when your employees don't know how a hash table works, and use the wrong sorting methods, or run SQL commands that are horribly inefficient (but get the job done in an acceptable time), it feels like more effort is being put into developing technologies that coddle new 'programmers' rather than actually teaching people how to do things right.

    I am interested in making efficient and, in my opinion, beautiful programs. If there is a better way to do it, I'd rather go back and refactor it than let it slide. But in the corporate world, they push me to complete tasks quickly rather than elegantly. And that really bugs me. Is this what I'm going to be looking forward to the rest of my life? Are there still positions out there for people who love the science and art of CS rather than just the paycheck? And on the same note, here's a good read if you haven't seen it before: The Perils of Java Schools.

    Read the article

  • Dynamically switching the theme in Orchard

    - by Bertrand Le Roy
    It may sound a little puzzling at first, but in Orchard CMS, more than one theme can be active at any given time. The reason for that is that we have an extensibility point that allows a module (or a theme) to participate in the choice of the theme to use, for each request. The motivation for building the theme engine this way was to enable developers to switch themes based on arbitrary criteria, such as user preferences or the user agent (if you want to serve a mobile theme for phones, for example). The choice is made between the active themes, which is why there is a difference between the default theme and the active themes.

    In order to have a say in the choice of the theme, all you have to do is implement IThemeSelector. That interface is quite simple, as it only has one method, GetTheme, that takes the current RequestContext and returns a ThemeSelectorResult, or null if the implementation of the interface does not want to participate in the current request (we'll see an example in a moment). ThemeSelectorResult itself is just a ThemeName string property and an integer Priority. We're using a priority so that an arbitrary number of implementations of IThemeSelector can contribute to the choice of a theme. If you look for existing implementations of the interface in Orchard, you'll find four:

        - AdminThemeSelector: selects the TheAdmin theme with a very high priority (100) if the current request is for a page that is part of the admin. Otherwise, null is returned, which enables other implementations to choose the theme.
        - PreviewThemeSelector: selects the preview theme if there is one, with a high priority (90), and null otherwise. This enables administrators to view the site under a different theme while everybody else continues to see the current default theme.
        - SiteThemeSelector: this is the implementation that is doing what you expect most of the time, which is to get the current theme from site settings and set it with a priority of -5.
        - SafeModeThemeSelector: this is the fallback implementation, which should almost never win. It sets the theme as the safe mode theme, which has no style and just uses the default templates for everything. The priority is very low (-100).

    While this extensibility mechanism is great to have, I wanted to bring that level of choice into the hands of the site administrator rather than just developers. In order to achieve that, I built the Vandelay Theme Picker module. The module provides administration UI to create rules for theme selection. It provides its own extensibility point (the IThemeSelectionRule interface) and one implementation of a rule: UserAgentThemeSelectorRule. This rule gets the current user agent from the context and tries to match it with a regular expression that the administrator can configure in the admin UI. You can, for example, configure a rule with a regular expression that matches IE6 and serve a different subtheme where the stylesheet has been tweaked for such an antique browser. Another possible configuration is to detect mobile devices from their agent string and serve the mobile theme. All those operations can be done with this module entirely from the admin UI, without writing a line of code.

    The module also offers the administrator the opportunity to inject a link into the front-end, in a specific zone and with a specific position, that enables the user to switch to the default theme if he wishes to. This is especially useful for sites that use a mobile theme but still want to allow users to use the full desktop site.
    While the module is nice and flexible, it may be overkill. On my own personal blog, I have only two active themes: the desktop theme and the mobile theme. I'm fine with going into code to change the criteria on which to switch the theme, so I'm not using my own Theme Picker module. Instead, I made the mobile theme a theme with code (in other words, there is a csproj file in the theme). The project includes a single C# file, my MobileThemeSelector, for which the code is the following:

        public class MobileThemeSelector : IThemeSelector {
            private static readonly Regex _Msie678 =
                new Regex(@"^Mozilla\/4\.0 \(compatible; MSIE [678]" +
                          @"\.0; Windows NT \d\.\d(.*)\)$",
                          RegexOptions.IgnoreCase);
            private ThemeSelectorResult _requestCache;
            private bool _requestCached;

            public ThemeSelectorResult GetTheme(RequestContext context) {
                if (_requestCached) return _requestCache;
                _requestCached = true;
                var userAgent = context.HttpContext.Request.UserAgent;
                if (userAgent.IndexOf("phone",
                        StringComparison.OrdinalIgnoreCase) != -1 ||
                    _Msie678.IsMatch(userAgent) ||
                    userAgent.IndexOf("windows live writer",
                        StringComparison.OrdinalIgnoreCase) != -1) {
                    _requestCache = new ThemeSelectorResult {
                        Priority = 10,
                        ThemeName = "VuLuMobile"
                    };
                }
                return _requestCache;
            }
        }

    The theme selector selects the current theme for Internet Explorer versions 6 to 8, for phones, and for Windows Live Writer (so that the theme that is used when I write posts is as simple as possible). What's interesting here is that it's the theme that selects itself, based on its own criteria.

    This should give you a good panorama of what's possible in terms of dynamic theme selection in Orchard. I hope you find some fun uses for it. As usual, I can't wait to see what you're going to come up with…

    Read the article

  • Rapid Planning: Next Generation MRP

    - by john.bermudez
    MRP has been a mainstay of manufacturing systems for 40 years. MRP evolved from simple inventory planning systems to become the heart of the MRPII systems which eventually became ERP. While the applications surrounding it have become broader, more sophisticated, and web-based, MRP continues to operate in the loneliness of the Saturday-night batch window, quietly exploding bills of materials and logging exceptions for hours.

During those same 40 years, manufacturing business processes have seen countless changes and improvements, including JIT, TQM, Six Sigma, Flow Manufacturing, Lean Manufacturing, and Supply Chain Management. Although much logic has been added to MRP to deal with new manufacturing processes, it has not been able to keep up with the real-time pace of today's supply chain. As a result, planners have devised ingenious ways to trick MRP into handling new processes, but often need to dump the output into spreadsheets of their own design in the hope of wrestling thousands of exceptions to the ground.

Oracle's new Rapid Planning application is just what companies still running MRP have been waiting for! The newest member of the Value Chain Planning product line, Rapid Planning is designed to empower planners with comprehensive supply planning that runs online in minutes, not hours. It enables a planner to simulate the incremental impact of a new order, or to re-run an entire plan in a separate sandbox. Rapid Planning does a complete multi-level bill-of-material explosion, like MRP, but plans orders considering material and capacity constraints. Considering material and capacity constraints in planning can help you quickly reduce inventory and improve on-time shipments.

Rapid Planning is an APS application that leverages years of Oracle development experience and customer feedback. Rather than rely exclusively on black-box heuristics, Rapid Planning is designed to give planners the computing power to use their industry experience and business knowledge to improve on MRP. For example, Rapid Planning has a powerful worksheet user interface with built-in query capability that allows the planner to locate the orders she is interested in and use a mass-update function to make quick work of large changes. The planner can save these queries, and this customized user interface, to personalize the planning environment.

Most importantly, Rapid Planning is designed to do supply planning in today's dynamic supply chain environment. It can be used to supplement MRP or to replace MRP entirely. It generates plans that provide order-by-order details with aggregate key performance indicators that enable planners to quickly assess the overall business impact of a plan. To find out more about how Rapid Planning can help improve your MRP, please contact me at [email protected] or your Oracle Account Manager.

    Read the article

  • Fixing the #mvvmlight code snippets in Visual Studio 11

    - by Laurent Bugnion
    If you installed the latest MVVM Light version for Windows 8, you may encounter an issue where code snippets are not displayed correctly in the Intellisense popup. I am working on a fix, but for now here is how you can solve the issue manually.

The code snippets

When installed correctly, MVVM Light installs a set of code snippets that are very useful for typing less code. As I like to say, code is where bugs are, so you want to type as little of it as possible ;) With code snippets, you can easily auto-insert segments of code and replace the keywords where needed. For instance, every coder who uses MVVM as his favorite UI pattern for XAML-based development is used to the INotifyPropertyChanged implementation, and to how boring it can be to type these "observable properties" (a sketch of what such a snippet expands to appears at the end of this post). Obviously a good fix would be something like an "Observable" attribute, but that is not supported in the language or the framework for the moment. Another fix involves "IL weaving", a post-build operation that modifies the generated IL code and inserts the "RaisePropertyChanged" instruction. I admire the inventiveness of those who developed that, but it feels a bit too much like magic to me. I prefer more "down to earth" solutions, and thus I use the code snippets.

Fixing the issue

Normally, you should see the code snippets in Intellisense when you position your cursor in a C# file and type mvvm. All MVVM Light snippets start with these four letters.

(Screenshot: normal MVVM Light code snippets)

However, in Windows 8 CP there is an issue that prevents them from appearing correctly, so you won't see them in the Intellisense window. To restore them, follow these steps:

1. In Visual Studio 11, open the menu Tools, Code Snippets Manager.
2. In the combobox, select Visual C#.
3. Press Add…
4. Navigate to C:\Program Files (x86)\Laurent Bugnion (GalaSoft)\Mvvm Light Toolkit\SnippetsWin8 and select the CSharp folder.
5. Press Select Folder.
6. Press OK to close the Code Snippets Manager.

Now if you type mvvm in a C# file, you should see the snippets in your Intellisense window.

Cheers, Laurent

Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
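For reference, here is roughly the shape of the "observable property" that such a snippet expands to. This is a sketch only: it assumes a class deriving from MVVM Light's ViewModelBase (which exposes RaisePropertyChanged), and the Title property is purely illustrative.

    // Sketch: the kind of "observable property" an MVVM Light snippet generates.
    // Assumes GalaSoft.MvvmLight.ViewModelBase; the property name is illustrative.
    public class MainViewModel : GalaSoft.MvvmLight.ViewModelBase
    {
        public const string TitlePropertyName = "Title";

        private string _title = string.Empty;

        public string Title
        {
            get { return _title; }
            set
            {
                if (_title == value) return; // only raise when the value changes
                _title = value;
                RaisePropertyChanged(TitlePropertyName);
            }
        }
    }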

    Read the article

  • Is my computer slow due to lack of swap

    - by Kristian Jensen
    A few months ago, I installed Ubuntu 12.04 alongside Windows 7 on my Asus EEE-PC 1015bx. It has a tendency to freeze, and when trying to investigate I found that a swap partition of only 256 MB had been created. The Asus EEE-PC 1015bx comes with only 1 GByte of RAM, and it is not possible to add more or to exchange the existing 1 GByte module for a larger one. When looking at the system monitor, it looks like all swap is being utilized, along with 70-75% of the RAM, even with very few applications running.

Can the lack of swap space be the reason for my computer running slowly and at times freezing? How can I add a swap partition? Or should I add a swap file instead? At the moment, I see two partitions when viewing the system monitor: one 28.6 GByte ext4 partition, which must be the one containing Ubuntu, and one 100 GByte fuseblk partition, which I assume is the one holding Windows. It shows that I have 18.6 GByte of free space on the ext4 partition. Can I "take a bite" from the ext4 partition and convert it into a swap partition? I was thinking of something like 3 GBytes for swap, considering my limited RAM. I hope that someone can guide me through. Thank you.

20th Oct 2012 - Further details

Thank you for the answer below, which I find very useful. I am certainly considering switching to one of the suggested shells, as I can see from the Internet that many have posted that these require far fewer resources than Ubuntu. It seems to me that Lubuntu is the perfect match for my very limited computer. I will have to wait a few days, though, as I am presently limited to a very slow and restricted Internet connection via satellite. But will Lubuntu install as simply another shell replacing Unity, or will it replace Ubuntu altogether? Will the software that I have installed under Ubuntu still be accessible in Lubuntu? And can I return to Ubuntu if required?

Regarding the actual question of swap: when I run GParted, it shows me one NTFS partition of 100 GBytes from which it boots, and the before-mentioned ext4 partition of 28.6 GBytes is not mentioned. Could it be that my Ubuntu installation resides inside this 100 GByte NTFS partition? And if so, can I take a bite of this for my swap partition? GParted is shown in Danish, but I hope that you can make out what I mean. The system monitor shows the details below:

Once again, I sincerely hope that you can help. Thank you.

    Read the article

  • New Wine in New Bottles

    - by Tony Davis
    How many people, when their car shows signs of wear and tear, would consider upgrading the engine and keeping the shell? Even if you're cash-strapped, you'll soon work out the subtlety of the economics: the cost of sudden breakdowns, the precious time lost coping with the hassle, and the low 'book value'. You'll generally buy a new car. The same philosophy should apply to database systems.

Mainstream support for SQL Server 2005 ends on April 12; many DBAs, if they haven't done so already, will be considering the migration to SQL Server 2008 R2. Hopefully, that upgrade plan will include a fresh install of the operating system on brand-new hardware. SQL Server 2008 R2 and Windows Server 2008 R2 are designed to work together. The improved architecture, processing power, and hyper-threading capabilities of modern processors will dramatically improve the performance of many SQL Server workloads, and allow consolidation opportunities.

Of course, there will be many DBAs smiling ruefully at the suggestion of such indulgence. This is nothing like the real world, this halcyon place where hardware and software budgets are limitless, development and testing resources are plentiful, and third-party vendors immediately certify their applications for the latest-and-greatest platform! As with cars, or any other technology, the justification for a complete upgrade is complex. With servers, the extra cost at the time of upgrade will generally pay you back through the increased performance of your business applications and reduced maintenance costs, training costs, and downtime. Also, if you plan and design carefully, it's possible to offset hardware costs with reduced SQL Server licence costs. In his forthcoming SQL Server Hardware book, Glenn Berry describes a recent case where he was able to replace four single-socket database servers with one two-socket server, saving about $90K in hardware costs and $350K in SQL Server licence costs.

Of course, there are exceptions. If you do have a stable, reliable, secure SQL Server 6.5 system that still admirably meets the needs of a specific business requirement, and has no security vulnerabilities, then by all means leave it alone. Why upgrade just for the sake of it? However, as soon as a system shows signs of being unfit for purpose, or is moving out of mainstream support, the ruthless DBA will make the strongest possible case for a belts-and-braces upgrade.

We'd love to hear what you think. What does your typical upgrade path look like? What are the major obstacles? Cheers, Tony.

    Read the article

  • Say goodbye to System.Reflection.Emit (any dynamic proxy generation) in WinRT

    - by mbrit
    tl;dr - Forget any form of dynamic code emitting in Metro-style. It's not going to happen.

Over the past week or so I've been trying to get Moq (the popular open-source TDD mocking framework) to work on WinRT. Irritatingly, the day before Release Preview was released, it was actually working on Consumer Preview. However, in Release Preview (RP) the System.Reflection.Emit namespace is gone. Forget any form of dynamic code generation and/or MSIL injection. This kills off any project based on the popular Castle Project DynamicProxy component, of which Moq is one example. At this point in time, you cannot perform any form of mocking using dynamic injection in your Metro-style unit-testing endeavours.

So let me take you through my journey on this, so that others don't have to...

The headline fact is that you cannot load any assembly that you create at runtime. WinRT supports one Assembly.Load method, and it takes the name of an assembly. The assembly has to be placed within the deployment folder of your app. You cannot give it a filename or a stream. The other methods are there, but private; try to invoke them using Reflection and you'll be met with a caspol exception.

You can, in theory, use Rotor to replace SRE. It's all there, but again, you can't load anything you create. You can't write to your deployment folder from within your Metro-style app. But can you use another service on the machine to move a file that you create into the deployment folder and load it? Not really. The networking stack in Metro-style is intentionally "damaged" to prevent socket communication from Metro-style to any endpoint on the local machine (it just times out). This militates against an approach where your Metro-style app signals a properly installed service on the machine to create proxies on its behalf. If you wanted to do this, you'd have to route the calls through a C&C server somewhere.

The reason why Microsoft has done this is obvious: taking out SRE now means they don't have to do it in an emergency later. The collateral damage in removing SRE is that you can't do mocking in test mode, but you also can't do any form of injection in production mode. There are plenty of reasons why enterprise apps might want that last capability in particular. At CP, the assumption was that their inspection tools would prevent SRE being used as a malware vector; it now seems they are less confident about that. (For clarity, the risk here is in allowing a nefarious program to download instructions from a C&C server and make up executable code on the fly to run, getting around the marketplace restrictions.)

So, two things:

- System.Reflection.Emit is gone in Metro-style/WinRT. Get over it: dynamic, on-the-fly code generation is not going to happen.
- I've more or less got a version of Moq working in Metro-style. This is based on the idea of "baking" the dynamic proxies before you use them. You can find more information here: https://github.com/mbrit/moqrt
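To make the constraint concrete, here is a minimal sketch of the one load path that does survive: loading, by name, an assembly that already ships inside the app package. The "MyPackagedAssembly" name is hypothetical, and the exact overload exposed (string versus AssemblyName) does not change the point: the assembly must already be deployed with the app.

    using System.Reflection;

    static class LoaderSketch {
        // The one route that still works in Metro-style, per the discussion
        // above: load an assembly that is already deployed inside the app
        // package, by name. "MyPackagedAssembly" is an illustrative name.
        public static Assembly LoadPackagedAssembly() {
            return Assembly.Load(new AssemblyName("MyPackagedAssembly"));
        }

        // By contrast, the System.Reflection.Emit types (AssemblyBuilder,
        // TypeBuilder, etc.) are absent from the profile, so a dynamic proxy
        // generator has nothing to call.
    }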

    Read the article

  • Sniffing out SQL Code Smells: Inconsistent use of Symbolic names and Datatypes

    - by Phil Factor
    It is an awkward feeling. You've just delivered a database application that seems to be working fine in production, and you run a few checks on it. You discover that there is a potential bug that, out of sheer good chance, hasn't kicked in to produce an error; but it lurks, like a smoking bomb. Worse, maybe you find that the bug has started its evil work of corrupting the data, but in ways that nobody has so far detected. You investigate, and find the damage. You are somehow going to have to repair it. Yes, it still very occasionally happens to me. It is not a nice feeling, and I do anything I can to prevent it happening. That's why I'm interested in SQL code smells. SQL code smells aren't necessarily bad practices, but they show you where to focus your attention when checking an application. Sometimes with databases the bugs can be subtle.

SQL is rather like HTML: the language does its best to try to carry out your wishes, rather than to be picky about your bugs. Most of the time this is a great benefit, but not always. One particular place where this can be detrimental is where you have implicit conversion between different data types. Most of the time it is completely harmless, but we're concerned about the occasional time it isn't. Let's give an example: string truncation (assign a twenty-character string to a VARCHAR(10) variable, and SQL Server will silently cut it to ten characters). Let's give another, even more frightening one: rounding errors on assignment to a number of different precision. Each requires a blog post to explain in detail, and I'm not going to try now. Just remember that it is not always a good idea to assign data to variables, parameters or even columns when they aren't the same datatype, especially if you are relying on implicit conversion to work its magic. For details of the problem and the consequences, see here: SR0014: Data loss might occur when casting from {Type1} to {Type2}. For any experienced database developer, this is a more frightening read than a vampire story.

This is why one of the SQL code smells that makes me edgy, in my own or other people's code, is to see parameters, variables and columns that have the same names and different datatypes. Whereas quite a lot of this is perfectly normal and natural, you need to check in case one of two things has gone wrong: either sloppy naming, or mixed datatypes. Sure, it is hard to remember whether you decided that the length of a log entry was 80 or 100 characters, or what the precision of a number was. That is why a little check like the one I'm going to show you is excellent for tidying up your code before you check it back into source control!

1/ Checking parameters only

If you were just going to check parameters, you might just do this. It simply groups all the parameters, either input or output, of all the routines (e.g. stored procedures or functions) by their name and checks, in the HAVING clause, whether their data types are all the same. If not, it lists all the examples and their origin (the routine). Even this little check can occasionally be scarily revealing.

    ;WITH userParameter AS
    ( SELECT
        c.name AS ParameterName,
        OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
        t.name + ' '
        + CASE --we may have to put in the length
            WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
              THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                              ELSE CONVERT(VARCHAR(4),
                                     CASE WHEN t.name IN ('nchar', 'nvarchar')
                                          THEN c.max_length / 2
                                          ELSE c.max_length
                                     END)
                         END + ')'
            WHEN t.name IN ('decimal', 'numeric')
              THEN '(' + CONVERT(VARCHAR(4), c.precision)
                   + ',' + CONVERT(VARCHAR(4), c.scale) + ')'
            ELSE ''
          END --we've done with putting in the length
        + CASE WHEN XML_collection_ID <> 0
            THEN --deal with object schema names
              '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
              + COALESCE(
                  (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                   FROM sys.xml_schema_collections sc
                   INNER JOIN sys.schemas ss ON sc.schema_ID = ss.schema_ID
                   WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
            ELSE ''
          END AS [DataType]
      FROM sys.parameters c
      INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
      WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
        AND parameter_id > 0)
    SELECT CONVERT(CHAR(80), ObjectName + '.' + ParameterName), DataType
    FROM userParameter
    WHERE ParameterName IN
      (SELECT ParameterName FROM userParameter
       GROUP BY ParameterName
       HAVING MIN(DataType) <> MAX(DataType))
    ORDER BY ParameterName

So, in a very small example here, we have a @ClosingDelimiter variable that is only CHAR(1) when, by the looks of it, it should be up to ten characters long; or, even worse, a function that should take a CHAR(1) and seems to let in a string of ten characters. Worth investigating. Then we have a @Comment variable that can't decide whether it is a VARCHAR(2000) or a VARCHAR(MAX).

2/ Columns and parameters

Actually, once we've cleared up the mess we've made of our parameter naming in the database we're inspecting, we're going to be more interested in listing both columns and parameters. We can do this by modifying the routine to list columns as well as parameters. Because of the slight complexity of creating the string version of the datatypes, we will create a fake table of both columns and parameters so that they can both be processed the same way. After all, we want the datatypes to match. Unfortunately, parameters do not expose all the attributes we are interested in, such as whether they are nullable (oh yes, subtle bugs happen if this isn't consistent for a datatype), so we'll have to leave those out for this check. Voila! A slight modification of the first routine:

    ;WITH userObject AS
    ( SELECT
        Name AS DataName, --the actual name of the parameter or column ('@' removed)
        --and the qualified object name of the routine
        OBJECT_SCHEMA_NAME(ObjectID) + '.' + OBJECT_NAME(ObjectID) AS ObjectName,
        --now the harder bit: the definition of the datatype
        TypeName + ' '
        + CASE --we may have to put in the length, e.g. CHAR(10)
            WHEN TypeName IN ('char', 'varchar', 'nchar', 'nvarchar')
              THEN '(' + CASE WHEN MaxLength = -1 THEN 'MAX'
                              ELSE CONVERT(VARCHAR(4),
                                     CASE WHEN TypeName IN ('nchar', 'nvarchar')
                                          THEN MaxLength / 2
                                          ELSE MaxLength
                                     END)
                         END + ')'
            WHEN TypeName IN ('decimal', 'numeric') --a BCD number!
              THEN '(' + CONVERT(VARCHAR(4), [Precision])
                   + ',' + CONVERT(VARCHAR(4), [Scale]) + ')'
            ELSE ''
          END --we've done with putting in the length
        + CASE WHEN XML_collection_ID <> 0 --tush tush. XML
            THEN --deal with object schema names
              '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
              + COALESCE(
                  (SELECT TOP 1 QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                   FROM sys.xml_schema_collections sc
                   INNER JOIN sys.schemas ss ON sc.schema_ID = ss.schema_ID
                   WHERE sc.xml_collection_ID = XML_collection_ID), 'NULL') + ')'
            ELSE ''
          END AS [DataType],
        DataObjectType
      FROM
        (SELECT t.name AS TypeName, REPLACE(c.name, '@', '') AS Name,
                c.max_length AS MaxLength, c.precision AS [Precision],
                c.scale AS [Scale], c.[object_id] AS ObjectID,
                XML_collection_ID, is_XML_Document, 'P' AS DataObjectType
         FROM sys.parameters c
         INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
            AND parameter_id > 0
         UNION ALL
         SELECT t.name AS TypeName, c.name AS Name,
                c.max_length AS MaxLength, c.precision AS [Precision],
                c.scale AS [Scale], c.[object_id] AS ObjectID,
                XML_collection_ID, is_XML_Document, 'C' AS DataObjectType
         FROM sys.columns c
         INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
         WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys'
        ) f)
    SELECT CONVERT(CHAR(80), ObjectName + '.'
             + CASE WHEN DataObjectType = 'P' THEN '@' ELSE '' END + DataName),
           DataType
    FROM userObject
    WHERE DataName IN
      (SELECT DataName FROM userObject
       GROUP BY DataName
       HAVING MIN(DataType) <> MAX(DataType))
    ORDER BY DataName

Hmm. I can tell you I found quite a few minor issues with the various databases I tested this on, and found some potential bugs that really leap out at you from the results. Here is the start of the result for AdventureWorks. Yes, AccountNumber is, for some reason, a VARCHAR(10) in the Customer table. Hmm, odd. Why is a city fifty characters long in that view? The idea of the description of a colour being 256 characters long seems over-ambitious. Go down the list and you'll spot other mistakes. There are no bugs, just mess.

We started out with a listing to examine parameters, then we mixed parameters and columns. Our last listing is a slightly more in-depth look at table columns. You'll notice that we've deliberately removed the indication of whether a column is persisted, or is an identity column, because that gives us false positives for our code smells. If you just want to browse your metadata for other reasons (and it can help quite a bit in some circumstances), then uncomment them!

    ;WITH userColumns AS
    ( SELECT
        c.name AS ColumnName,
        OBJECT_SCHEMA_NAME(c.object_ID) + '.' + OBJECT_NAME(c.object_ID) AS ObjectName,
        REPLACE(t.name + ' '
          + CASE WHEN is_computed = 1
              THEN ' AS ' + --do DDL for a computed column
                   (SELECT definition FROM sys.computed_columns cc
                    WHERE cc.object_id = c.object_id AND cc.column_ID = c.column_ID)
              --we may have to put in the length
              WHEN t.name IN ('char', 'varchar', 'nchar', 'nvarchar')
                THEN '(' + CASE WHEN c.max_length = -1 THEN 'MAX'
                                ELSE CONVERT(VARCHAR(4),
                                       CASE WHEN t.name IN ('nchar', 'nvarchar')
                                            THEN c.max_length / 2
                                            ELSE c.max_length
                                       END)
                           END + ')'
              WHEN t.name IN ('decimal', 'numeric')
                THEN '(' + CONVERT(VARCHAR(4), c.precision)
                     + ',' + CONVERT(VARCHAR(4), c.scale) + ')'
              ELSE ''
            END
          + CASE WHEN c.is_rowguidcol = 1 THEN ' ROWGUIDCOL' ELSE '' END
          + CASE WHEN XML_collection_ID <> 0
              THEN --deal with object schema names
                '(' + CASE WHEN is_XML_Document = 1 THEN 'DOCUMENT ' ELSE 'CONTENT ' END
                + COALESCE(
                    (SELECT QUOTENAME(ss.name) + '.' + QUOTENAME(sc.name)
                     FROM sys.xml_schema_collections sc
                     INNER JOIN sys.schemas ss ON sc.schema_ID = ss.schema_ID
                     WHERE sc.xml_collection_ID = c.XML_collection_ID), 'NULL') + ')'
              ELSE ''
            END
          + CASE WHEN is_identity = 1
              THEN CASE WHEN OBJECTPROPERTY(object_id, 'IsUserTable') = 1
                             AND COLUMNPROPERTY(object_id, c.name, 'IsIDNotForRepl') = 0
                             AND OBJECTPROPERTY(object_id, 'IsMSShipped') = 0
                        THEN ''
                        ELSE ' NOT FOR REPLICATION '
                   END
              ELSE ''
            END
          + CASE WHEN c.is_nullable = 0 THEN ' NOT NULL' ELSE ' NULL' END
          + CASE WHEN c.default_object_id <> 0
              THEN ' DEFAULT ' + OBJECT_DEFINITION(c.default_object_id)
              ELSE ''
            END
          + CASE WHEN c.collation_name IS NULL THEN ''
                 WHEN c.collation_name <>
                      (SELECT collation_name FROM sys.databases
                       WHERE name = DB_NAME()) COLLATE Latin1_General_CI_AS
                   THEN COALESCE(' COLLATE ' + c.collation_name, '')
                 ELSE ''
            END, '  ', ' ') AS [DataType]
      FROM sys.columns c
      INNER JOIN sys.types t ON c.user_Type_ID = t.user_Type_ID
      WHERE OBJECT_SCHEMA_NAME(c.object_ID) <> 'sys')
    SELECT CONVERT(CHAR(80), ObjectName + '.' + ColumnName), DataType
    FROM userColumns
    WHERE ColumnName IN
      (SELECT ColumnName FROM userColumns
       GROUP BY ColumnName
       HAVING MIN(DataType) <> MAX(DataType))
    ORDER BY ColumnName

If you take a look down the results against AdventureWorks, you'll see once again that there are things to investigate: mostly, in the illustration, discrepancies between null and non-null datatypes. So, I hear you ask, what about temporary variables within routines? If ever there was a source of elusive bugs, you'll find it there. Sadly, these temporary variables are not stored in the metadata, so we'll have to find a more subtle way of flushing them out, and that will, I'm afraid, have to wait!

    Read the article
