Search Results

Search found 1942 results on 78 pages for 'compact disc'.


  • ASP.NET WebAPI Security 4: Examples for various Authentication Scenarios

    - by Your DisplayName here!
    The Thinktecture.IdentityModel.Http repository includes a number of samples for the various authentication scenarios. All the clients follow a basic pattern: acquire a client credential (a single token, multiple tokens, or username/password), then call the service. The service simply enumerates the claims it finds on the request and returns them to the client. I won’t show that part of the code, but rather focus on steps 1 and 2.

Basic Authentication
This is the most basic (pun intended) scenario. My library contains a class that can create the Basic Authentication header value. Simply set username and password and you are good to go.

var client = new HttpClient { BaseAddress = _baseAddress };
client.DefaultRequestHeaders.Authorization =
    new BasicAuthenticationHeaderValue("alice", "alice");

var response = client.GetAsync("identity").Result;
response.EnsureSuccessStatusCode();

SAML Authentication
To integrate a Web API with an existing enterprise identity provider like ADFS, you can use SAML tokens. This is certainly not the most efficient way of calling a “lightweight service” ;) But very useful if that’s what it takes to get the job done.

private static string GetIdentityToken()
{
    var factory = new WSTrustChannelFactory(
        new WindowsWSTrustBinding(SecurityMode.Transport),
        _idpEndpoint);
    factory.TrustVersion = TrustVersion.WSTrust13;

    var rst = new RequestSecurityToken
    {
        RequestType = RequestTypes.Issue,
        KeyType = KeyTypes.Bearer,
        AppliesTo = new EndpointAddress(Constants.Realm)
    };

    var token = factory.CreateChannel().Issue(rst) as GenericXmlSecurityToken;
    return token.TokenXml.OuterXml;
}

private static Identity CallService(string saml)
{
    var client = new HttpClient { BaseAddress = _baseAddress };
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("SAML", saml);

    var response = client.GetAsync("identity").Result;
    response.EnsureSuccessStatusCode();

    return response.Content.ReadAsAsync<Identity>().Result;
}

SAML to SWT conversion using the Azure Access Control Service
Another possible option for integrating SAML-based identity providers is to use an intermediary service that converts the SAML token to the more compact SWT (Simple Web Token) format. This way you only need to round-trip the SAML token once and can use the SWT afterwards. The code for the conversion uses the ACS OAuth2 endpoint. The OAuth2Client class is part of my library.

private static string GetServiceTokenOAuth2(string samlToken)
{
    var client = new OAuth2Client(_acsOAuth2Endpoint);

    return client.RequestAccessTokenAssertion(
        samlToken,
        SecurityTokenTypes.Saml2TokenProfile11,
        Constants.Realm).AccessToken;
}

SWT Authentication
When you have an identity provider that directly supports a (simple) web token, you can acquire the token directly, without the conversion step. Thinktecture.IdentityServer, for example, supports the OAuth2 resource owner credential profile to issue SWT tokens.
private static string GetIdentityToken()
{
    var client = new OAuth2Client(_oauth2Address);
    var response = client.RequestAccessTokenUserName("bob", "abc!123", Constants.Realm);
    return response.AccessToken;
}

private static Identity CallService(string swt)
{
    var client = new HttpClient { BaseAddress = _baseAddress };
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", swt);

    var response = client.GetAsync("identity").Result;
    response.EnsureSuccessStatusCode();

    return response.Content.ReadAsAsync<Identity>().Result;
}

So you can see that it’s pretty straightforward to implement various authentication scenarios using WebAPI and my authentication library. Stay tuned for more client samples!
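The service side is skipped in the post, but for context, here is a minimal sketch of what such an “identity” endpoint could look like, assuming ASP.NET Web API with claims-based identity (System.Security.Claims); the ViewClaim type and controller name are illustrative, not the author’s actual code.

using System.Collections.Generic;
using System.Linq;
using System.Security.Claims;
using System.Web.Http;

public class ViewClaim
{
    public string Type { get; set; }
    public string Value { get; set; }
}

public class Identity
{
    public List<ViewClaim> Claims { get; set; }
}

public class IdentityController : ApiController
{
    // Enumerates the claims found on the current request and returns them.
    public Identity Get()
    {
        // Assumes authentication middleware has populated a ClaimsPrincipal.
        var principal = User as ClaimsPrincipal;

        return new Identity
        {
            Claims = principal.Claims
                .Select(c => new ViewClaim { Type = c.Type, Value = c.Value })
                .ToList()
        };
    }
}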

    Read the article

  • Blog Buzz - Devoxx 2011

    - by Janice J. Heiss
    Some day I will make it to Devoxx – for now, I’m content to vicariously follow the blogs of attendees and pick up on what’s happening. I’ve been doing more blog "fishing," looking for the best commentary on 2011 Devoxx. There’s plenty of food for thought – and the ideas are not half-baked. The bloggers are out in full, offering useful summaries and commentary on Devoxx goings-on.

Constantin Partac, a Java developer and a member of Transylvania JUG, a community from Cluj-Napoca/Romania, offers an excellent summary of the Devoxx keynotes. Here’s a sample:

“Oracle Opening Keynote and JDK 7, 8, and 9 Presentation
• Oracle is committed to Java and wants to provide support for it on any device.
• JSE 7 for Mac will be released next week.
• Oracle would like Java developers to be involved in the JCP, to adopt a JSR and to attend local JUG meetings.
• JEE 7 will be released next year.
• JEE 7 is focused on cloud integration; some of the features are already implemented in the GlassFish 4 development branch.
• JSE 8 will be released in the summer of 2013 due to “enterprise community request,” as they cannot keep pace with an 18-month release cycle.
• The main features included in JSE 8 are lambda support, project Jigsaw, a new Date/Time API, project Coin++ and added support for sensors.

JSE 9 will probably focus on some of these features:
1. self-tuning JVM
2. improved native language integration
3. processing enhancements for big data
4. reification (adding runtime class type info for generic types)
5. unification of primitives and the corresponding object classes
6. a meta-object protocol in order to use types and methods defined in other JVM languages
7. multi-tenancy
8. JVM resource management”

Thanks Constantin!

Ivan St. Ivanov, of SAP Labs Bulgaria, also commented on the keynotes, with a different focus. He summarizes Henrik Stahl’s look ahead to Java SE 8 and JavaFX 3.0; Cameron Purdy on Java EE and the cloud; celebrated Java Champion Josh Bloch on what’s good and bad about Java; Mark Reinhold’s quick look ahead to Java SE 9; and Brian Goetz on lambdas and default methods in Java SE 8. Here’s St. Ivanov’s account of Josh Bloch’s comments on the pluses of Java:

“He started with the virtues of the platform. To name a few:
· Tightly specified language primitives and evaluation order – int is always 32 bits and operations are always executed from left to right, without compilers messing around
· Dynamic linking – when you change a class, you need to recompile and rebuild just the jar that has it and not the whole application
· Syntax similarity with C/C++ – most existing developers at that time felt at home
· Object orientation – it was cool at that time, just as functional programming is today
· It was a statically typed language – helps in faster runtime, better IDE support, etc.
· No operator overloading – well, I’m not sure why it is good. Scala has it for example and that’s why it is far better for defining DSLs. But I will not argue with Josh.”

It’s worth checking out St. Ivanov’s summary of Bloch’s views on what’s not so great about Java as well.

What's Coming in JAX-RS 2.0
Marek Potociar, Principal Software Engineer at Oracle and currently specification lead of the Java EE RESTful web services API (JAX-RS), blogged on his talk about what's coming in JAX-RS 2.0, scheduled for final release in mid-2012. Here’s a taste:

“Perhaps the most wanted addition to JAX-RS is the Client API, which would complete the JAX-RS story that is currently server-side only.
In JAX-RS 2.0 we are adding a completely interface-based and fluent client API that blends nicely with the existing fluent response builder pattern on the server side. When we started with the client API, the first proposal contained around 30 classes. Thanks to the feedback from our Expert Group we managed to reduce the number of API classes to 14 (2 of them being exceptions)! The result is compact, while at the same time we still managed to create an API that reflects the method invocation context flow (e.g. once you decide on the target URI and start setting headers on the request, your IDE will not try to offer you a URI setter in the code completion). This is a subtle but very important usability aspect of an API…”

Obviously, Devoxx is a great Java conference, one that arrives this year at a time when much is brewing in the platform and much is beginning to be anticipated.

    Read the article

  • How to Tell If Your Computer is Overheating and What to Do About It

    - by Chris Hoffman
    Heat is a computer’s enemy. Computers are designed with heat dispersion and ventilation in mind so they don’t overheat. If too much heat builds up, your computer may become unstable or suddenly shut down. The CPU and graphics card produce much more heat when running demanding applications. If there’s a problem with your computer’s cooling system, an excess of heat could even physically damage its components.

Is Your Computer Overheating?
When using a typical computer in a typical way, you shouldn’t have to worry about overheating at all. However, if you’re encountering system instability issues like abrupt shut downs, blue screens, and freezes — especially while doing something demanding like playing PC games or encoding video — your computer may be overheating.

This can happen for several reasons. Your computer’s case may be full of dust, a fan may have failed, something may be blocking your computer’s vents, or you may have a compact laptop that was never designed to run at maximum performance for hours on end.

Monitoring Your Computer’s Temperature
First, bear in mind that different CPUs and GPUs (graphics cards) have different optimal temperature ranges. Before getting too worried about a temperature, be sure to check your computer’s documentation — or its CPU or graphics card specifications — and ensure you know the temperature ranges your hardware can handle.

You can monitor your computer’s temperatures in a variety of different ways. First, you may have a way to monitor temperature that is already built into your system. You can often view temperature values in your computer’s BIOS or UEFI settings screen. This allows you to quickly see your computer’s temperature if Windows freezes or blue screens on you — just boot the computer, enter the BIOS or UEFI screen, and check the temperatures displayed there. Note that not all BIOSes or UEFI screens will display this information, but it is very common.

There are also programs that will display your computer’s temperature. Such programs just read the sensors inside your computer and show you the temperature value they report, so there are a wide variety of tools you can use for this, from the simple Speccy system information utility to an advanced tool like SpeedFan. HWMonitor also offers this feature, displaying a wide variety of sensor information.

Be sure to look at your CPU and graphics card temperatures. You can also find other temperatures, such as the temperature of your hard drive, but these components will generally only overheat if it becomes extremely hot in the computer’s case. They shouldn’t generate too much heat on their own.

If you think your computer may be overheating, don’t just glance at these sensors once and ignore them. Do something demanding with your computer, such as running a CPU burn-in test with Prime95, playing a PC game, or running a graphical benchmark. Monitor the computer’s temperature while you do this, even checking a few hours later — does any component overheat after you push it hard for a while?

Preventing Your Computer From Overheating
If your computer is overheating, here are some things you can do about it:

Dust Out Your Computer’s Case: Dust accumulates in desktop PC cases and even laptops over time, clogging fans and blocking air flow. This dust can cause ventilation problems, trapping heat and preventing your PC from cooling itself properly. Be sure to clean your computer’s case occasionally to prevent dust build-up. Unfortunately, it’s often more difficult to dust out overheating laptops.
Ensure Proper Ventilation: Put the computer in a location where it can properly ventilate itself. If it’s a desktop, don’t push the case up against a wall so that the computer’s vents become blocked, or leave it near a radiator or heating vent. If it’s a laptop, be careful not to block its air vents, particularly when doing something demanding. For example, putting a laptop down on a mattress, allowing it to sink in, and leaving it there can lead to overheating — especially if the laptop is doing something demanding and generating heat it can’t get rid of.

Check if Fans Are Running: If you’re not sure why your computer started overheating, open its case and check that all the fans are running. It’s possible that a CPU, graphics card, or case fan failed or became unplugged, reducing air flow.

Tune Up Heat Sinks: If your CPU is overheating, its heat sink may not be seated correctly or its thermal paste may be old. You may need to remove the heat sink and re-apply new thermal paste before reseating the heat sink properly. This tip applies more to tweakers, overclockers, and people who build their own PCs, especially if they may have made a mistake when originally applying the thermal paste.

This is often much more difficult when it comes to laptops, which generally aren’t designed to be user-serviceable. That can lead to trouble if the laptop becomes filled with dust and needs to be cleaned out, especially if the laptop was never designed to be opened by users at all. Consult our guide to diagnosing and fixing an overheating laptop for help with cooling down a hot laptop.

Overheating is a definite danger when overclocking your CPU or graphics card. Overclocking will cause your components to run hotter, and the additional heat will cause problems unless you can properly cool your components. If you’ve overclocked your hardware and it has started to overheat — well, throttle back the overclock!

Image Credit: Vinni Malek on Flickr

    Read the article

  • I, Android

    - by andrewbrust
    I’m just back from the 2011 Consumer Electronics Show (CES). I go to CES to get a sense of what Microsoft is doing in the consumer space, and how people are reacting to it. When I first went to CES 2 years ago, Steve Ballmer announced the beta of Windows 7 at his keynote address, and the crowd went wild. When I went again last year, everyone was hoping for a Windows tablet announcement at the Ballmer keynote. Although they didn’t get one (unless you count the unreleased HP Slate running Windows 7), people continued to show anticipation around Project Natal (which became Xbox 360 Kinect) and around Windows Phone 7. On the show floor last year, there were machines everywhere running Windows 7, including lots of netbooks. Microsoft had a serious influence at the show both years.

But this year, one brand, one product, one operating system evidenced itself over and over again: Android. Whether in the multitude of tablet devices shown across the show, the burgeoning number of smartphones (including all four forthcoming 4G-LTE handsets at Verizon Wireless’ booth), the Google TV set-top box from Logitech, or the embedded implementation in new Sony TV models, Android was there.

There was excitement in the ubiquity of Android 2.2 (Froyo) and the emergence of Android 2.3 (Gingerbread). There was anticipation around the tablet-optimized Android 3.0 (Honeycomb). There were highly customized skins. There was even an official CES Android app for navigating the exhibit halls and planning events. Android was so ubiquitous, in fact, that it became surprising to find a device that was running anything else. It was as if Android had become the de facto Original Equipment Manufacturer (OEM) operating system.

Motorola’s booth was nothing less than an Android showcase. And it was large, and it was packed. Clearly Moto’s fortunes have improved dramatically in the last year and change. The fact that the company morphed from being a core Windows Mobile OEM to an Android poster child seems non-coincidental to its improved fortunes.

Even erstwhile WinMo OEMs who now do produce Windows Phone 7 devices were not pushing them. Perhaps I missed them, but I couldn’t find WP7 handsets at Samsung’s booth, nor at LG’s. And since the only carrier exhibiting at the show was Verizon Wireless, which doesn’t yet have WP7 devices, this left Microsoft’s booth as the only place to see the phones.

Why is Android so popular with consumer electronics manufacturers in Japan, South Korea, China and Taiwan? Yes, it’s free, but there’s more to it than that. Android seems to have succeeded as an OEM OS because it’s directed at OEMs who are permitted to personalize it and extend it, and it provides enough base usability and touch-friendliness that OEMs want it. In the process, it has become a de facto standard (which makes OEMs want it even more), and has done so in a remarkably short time: the OS was launched on a single phone in the US just 2 1/4 years ago.

Despite its success and popularity, Apple’s iOS would never be used by OEMs, because it’s not meant to be embedded and customized, but rather to provide a fully finished experience. Ironically, Windows Phone 7 is likewise disqualified from such embedded use. Windows Mobile (6.x and earlier) may have been a candidate had it not atrophied so much in its final 5 years of life.

What can Microsoft do? It could start by developing a true touch-centric OS for tablets, whether that be within Windows 8, or derived from Windows Phone 7.
It would then need to deconstruct that finished product into components, via a new or altered version of Windows Embedded or Windows Embedded Compact.  And if Microsoft went that far, it would only make sense to work with its OEMs and mobile carriers to make certain they showcase their products using the OS at CES, and other consumer electronics venues, prominently. Mostly though, Microsoft would need to decide if it were really committed to putting sustained time, effort and money into a commodity product, especially given the far greater financial return that it now derives from its core Windows and Office franchises. Microsoft would need to see an OEM OS for what it is: a loss leader that helps build brand and platform momentum for up-level products.  Is that enough to make the investment worthwhile?  One thing is certain: if that question is not acknowledged and answered honestly, then any investment will be squandered.

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
    Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don’t want recipients to receive the e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disk instead of sending them out.

SMTP settings for web application
I have seen many times code where all the SMTP information is kept in app settings, just to be read in code and handed to the SMTP client. It is not necessary, because we can define all these settings under the system.net => mailSettings node. If you are using web.config to keep SMTP settings, then all you have to do in your code is create an SmtpClient with the empty constructor.

var smtpClient = new SmtpClient();

The empty constructor means that all settings are read from the web.config file.

What is a pickup directory?
If you want to drastically raise the e-mail throughput of your SMTP server, it is not a very wise plan to communicate with it using the SMTP protocol. It only adds additional overhead to your network and SMTP server. Clients making connections and sending messages out is also overhead we can avoid. If clients write their e-mails to some folder that the SMTP server can access, then the SMTP server has e-mail forwarding as its only resource-hungry task. File operations are way faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. Exchange Server, for example, has support for pickup directories. And as there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out.

How to configure ASP.NET SMTP to use a pickup directory?
It is very easy. This is all you need.

<system.net>
  <mailSettings>
    <smtp deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
    </smtp>
  </mailSettings>
</system.net>

Now make sure you don’t miss some points:
The pickup directory must physically exist, because it is not created automatically.
IIS (or Cassini) must have write permissions to the pickup directory.
Go through your code and look for hardcoded SMTP settings.
Also take a look at all the places in your code where you send out e-mails, and make sure no custom SMTP settings are used there!

Also don’t forget that your mails will now be written to the pickup directory and are not sent out to recipients anymore.

Advanced scenario: configuring the SMTP client in code
In some advanced scenarios you may need to support multiple SMTP servers. If the configuration is dynamic, or it is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do.

var smtpClient = new SmtpClient();
smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
smtpClient.PickupDirectoryLocation = pickupFolder;

Easy, isn’t it? I like it when advanced scenarios end up with simple and elegant solutions rather than rocket science.

Note for the IIS SMTP service
The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following setting for the delivery method.

SmtpDeliveryMethod.PickupDirectoryFromIis

You can also set this in the web.config file.
<system.net>
  <mailSettings>
    <smtp deliveryMethod="PickupDirectoryFromIis" />
  </mailSettings>
</system.net>

Conclusion
If you were still using other methods to avoid sending e-mails out in development or testing environments, you can now remove all that bad code from your application and rely on the mail settings of ASP.NET. It is easy to configure, and you have less code to support e-mails when you use the built-in e-mail features wisely.
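To see the pickup directory in action, sending a message is no different from normal SMTP use; here is a minimal sketch (the addresses are illustrative):

using System.Net.Mail;

var smtpClient = new SmtpClient(); // delivery method comes from web.config
var message = new MailMessage(
    "noreply@example.com",       // illustrative sender
    "recipient@example.com",     // illustrative recipient
    "Test subject",
    "Test body");

// With SpecifiedPickupDirectory configured, this writes an .eml file
// into c:\temp\maildrop\ instead of contacting an SMTP server.
smtpClient.Send(message);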

    Read the article

  • Yippy – the F# MVVM Pattern

    - by MarkPearl
    I did a recent post on implementing WPF with F#. Today I would like to expand on that posting to give a simple implementation of the MVVM pattern in F#. A good read about this topic can also be found on Dean Chalk’s blog, although my example of the pattern is possibly simpler.

With the MVVM pattern one typically has 3 segments: the view, the viewmodel and the model. With the beauty of WPF binding one is able to link the state-based viewmodel to the view. In my implementation I have kept the same principles. I have a view (MainView.xaml) and a ViewModel (MainViewModel.fs).

What I would really like to illustrate in this posting is the binding between the View and the ViewModel, so I am going to jump to that… In Program.fs I have the following code…

module Program

open System
open System.Windows
open System.Windows.Controls
open System.Windows.Markup
open myViewModels

// Create the View and bind it to the View Model
let myView =
    Application.LoadComponent(
        new System.Uri("/FSharpWPF;component/MainView.xaml", System.UriKind.Relative)) :?> Window
myView.DataContext <- new MainViewModel() :> obj

// Application Entry point
[<STAThread>]
[<EntryPoint>]
let main(_) = (new Application()).Run(myView)

You can see that I have simply created the view (myView), then created an instance of my viewmodel (MainViewModel) and bound it to the data context with the code…

myView.DataContext <- new MainViewModel() :> obj

If I have a look at my viewmodel (MainViewModel) it looks like this…

module myViewModels

open System
open System.Windows
open System.Windows.Input
open System.ComponentModel
open ViewModelBase

type MainViewModel() =
    // private variables
    let mutable _title = "Bound Data to Textbox"

    // public properties
    member x.Title
        with get() = _title
        and set(v) = _title <- v

    // public commands
    member x.MyCommand =
        new FuncCommand(
            (fun d -> true),
            (fun e -> x.ShowMessage))

    // public methods
    member public x.ShowMessage =
        let msg = MessageBox.Show(x.Title)
        ()

I have exposed a few things, namely a property called Title that is mutable, a command, and a method called ShowMessage that simply pops up a message box when called. If I then look at my view, which I have created in XAML (MainView.xaml), it looks as follows…

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="F# WPF MVVM" Height="350" Width="525">
  <Grid>
    <Grid.RowDefinitions>
      <RowDefinition Height="Auto"/>
      <RowDefinition Height="Auto"/>
      <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <TextBox Text="{Binding Path=Title, Mode=TwoWay}" Grid.Row="0"/>
    <Button Command="{Binding MyCommand}" Grid.Row="1">
      <TextBlock Text="Click Me"/>
    </Button>
  </Grid>
</Window>

It is also very simple. It has a button whose command is bound to MyCommand and a textbox whose text is bound to the Title property. One other module that I have created is my ViewModelBase. Right now it is used to store my commanding function, but I would look to expand on it at a later stage to implement other commonly used functions…

module ViewModelBase

open System
open System.Windows
open System.Windows.Input
open System.ComponentModel

type FuncCommand (canExec:(obj -> bool), doExec:(obj -> unit)) =
    let cecEvent = new DelegateEvent<EventHandler>()
    interface ICommand with
        [<CLIEvent>]
        member x.CanExecuteChanged = cecEvent.Publish
        member x.CanExecute arg = canExec(arg)
        member x.Execute arg = doExec(arg)

Put this all together and you have a basic project that implements the MVVM pattern in F#.
For me this is quite exciting as it turned out to be a lot simpler to do than I originally thought possible. Also because I have my view in XAML I can use the XAML designer to design forms in F# which I believe is a much cleaner way to go rather than implementing it all in code. Finally if I look at my viewmodel code, it is actually quite clean and compact…

    Read the article

  • yield – Just yet another sexy C# keyword?

    - by George Mamaladze
    The yield operator (see the MSDN C# reference) came, I guess, with .NET 2.0, and my feeling is that it’s not as widely used as it could (or should) be.

I am not going to talk here about the necessity and advantages of using the iterator pattern when accessing custom sequences (just google it). Let’s look at it from the clean code point of view. Let's see if it really helps us to keep our code understandable, reusable and testable.

Let’s say we want to iterate a tree and do something with its nodes, for instance calculate a sum of their values. So the most elegant way would be to build a recursive method performing a classic depth-first traversal returning the sum.

private int CalculateTreeSum(Node top)
{
    int sumOfChildNodes = 0;
    foreach (Node childNode in top.ChildNodes)
    {
        sumOfChildNodes += CalculateTreeSum(childNode);
    }
    return top.Value + sumOfChildNodes;
}

“Do One Thing”
Nevertheless it violates one of the most important rules: “Do One Thing”. Our method CalculateTreeSum does two things at the same time. It travels inside the tree and performs some computation – in this case it calculates a sum. Doing two things in one method is definitely a bad thing, for several reasons:
· Understandability: readability / refactoring.
· Reusability: when overriding, there is no chance to override the computation without copying the iteration code, and vice versa.
· Testability: you are not able to test the computation without constructing the tree, and you are not able to test the correctness of the tree iteration.

I want to spend some more words on this last issue. How do you test the method CalculateTreeSum when it contains two in one: computation & iteration? The only chance is to construct a test tree and assert the result of the method call, in our case the sum, against our expectation. And if the test fails you do not know whether the computation algorithm was wrong or the iteration. To top it all off, I tell you: according to Murphy’s Law the iteration will have a bug as well as the calculation. Both bugs in combination will cause the sum to be accidentally exactly the one you expect, and the test will PASS. :)

Ok, let’s use yield!
That’s why it is generally a very good idea not to mix but to isolate “things”.

private int CalculateTreeSumClean(Node top)
{
    IEnumerable<Node> treeNodes = GetTreeNodes(top);
    return CalculateSum(treeNodes);
}

private int CalculateSum(IEnumerable<Node> nodes)
{
    int sumOfNodes = 0;
    foreach (Node node in nodes)
    {
        sumOfNodes += node.Value;
    }
    return sumOfNodes;
}

private IEnumerable<Node> GetTreeNodes(Node top)
{
    yield return top;
    foreach (Node childNode in top.ChildNodes)
    {
        foreach (Node currentNode in GetTreeNodes(childNode))
        {
            yield return currentNode;
        }
    }
}

The two methods do not know anything about each other. One contains the calculation logic, the other just the iteration logic. You can replace the tree iteration algorithm, switching from depth-first to breadth-first traversal, or use a stack or the visitor pattern instead of recursion. This will not influence your calculation logic.
And vice versa, you can replace the sum with a product or do whatever you want with the node values; the calculation algorithm is not aware of working on some tree or graph.

How about not using yield?
Now let’s ask the question – what if we did not have the yield operator? A brief look at the generated code gives us an answer. The compiler generates a 150-line class to implement the iteration logic.

[CompilerGenerated]
private sealed class <GetTreeNodes>d__0 : IEnumerable<Node>, IEnumerable, IEnumerator<Node>, IEnumerator, IDisposable
{
    ...
    150 lines of generated code
    ...
}

Often we compromise code readability, cleanness, testability, etc. – to reduce the number of classes, code lines, keystrokes and mouse clicks. This is human nature – we are lazy. Knowing and using such a sexy construct as yield allows us to be lazy, write very few lines of code and at the same time stay clean and do one thing per method. That's why I generally welcome using stuff like that.

Note: The recursive depth-first traversal algorithm used above is possibly the most compact one, but not the best one from the performance and memory utilization point of view. It was chosen to emphasize the other, primary aspects of this post.
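As the post notes, the traversal can be swapped without touching the calculation. For instance, a breadth-first version of GetTreeNodes can use an explicit queue instead of recursion; here is a minimal sketch, assuming the same Node type (Queue<T> comes from System.Collections.Generic):

private IEnumerable<Node> GetTreeNodesBreadthFirst(Node top)
{
    // Breadth-first traversal with an explicit queue instead of recursion.
    var queue = new Queue<Node>();
    queue.Enqueue(top);
    while (queue.Count > 0)
    {
        Node current = queue.Dequeue();
        yield return current;
        foreach (Node childNode in current.ChildNodes)
        {
            queue.Enqueue(childNode);
        }
    }
}

CalculateSum works unchanged with either iterator, which is exactly the point of the separation.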

    Read the article

  • R12.0 Cash Management Consolidated Patch Collection (CPC) And R12.1 Cash Management Recommended Patch Collection (RPC)

    - by user793553
    If you have Oracle E-Business Suite's Cash Management (CE) application installed, you'll want to be sure to install the latest CPC (Consolidated Patch Collection) if you are using a R12.0 version of the apps, or the latest RPC (Recommended Patch Collection) for the R12.1 version of the apps. These collections give you all the fixes currently available for known issues in the specified versions of the application, including all of the latest Root Cause Analysis Fixes (RCAs)!

What is an "RPC" (for R12.1 users)?
Since the release of 12.1, a number of recommended patches for Oracle Cash Management have been made available as standalone patches to help address important business process issues. Adoption of these patches was highly recommended at the time, but not always implemented, so to further facilitate adoption of these patches, Oracle consolidated them into product-specific Recommended Patch Collections (RPCs) - a collection of recommended patches. They were created by Oracle Development with the following goals in mind:

Stability: To address data integrity issues that have been identified by Oracle Development and Oracle Software Support as having the potential to interfere with the normal completion of important business processes (such as period close, etc.).
Root Cause Fixes (RCAs): To make available root cause fixes for known data integrity issues.
Compact: To keep the file footprint as small as possible to help facilitate the install process and minimize testing.
Granular: To compile the collection of patches based on functional areas, allowing a customer to apply multiple RPCs at once, or in phases (based on individual needs and goals).

Where to start
ALL R12 Cash Management users (R12.0 and R12.1 users) should start with the following Note on My Oracle Support (MOS):

Doc ID 1367845.1: R12: Cash Management Recommended Patch Collections

It's a great place for important implementation information about both sets of critical patch collections!

For R12.1x users
R12.1 users should also take a look at the documents below for even more information about the RPC for the R12.1.x versions of the Cash Management application, and other related available RPCs:

Doc ID 1489997.1: Master Troubleshooting Guide for CE: Reconciliation & Clearing [VIDEO]
Doc ID 954704.1: EBS: R12.1 Oracle Financials Recommended Patch Collections (RPCs)
Doc ID 1316506.1: R12: Oracle CE: Upgrading from R11i to R12.1: Latest Recommended Patches

Patch Wizard Utility
While a patch may contain several hundred files, the impact on your system may actually be minimal. Patches contain hard prerequisites that are intended to make a patch work on a very low code baseline. The Patch Wizard Utility will give you a detailed analysis of the patch’s impact on your instance BEFORE it’s applied, so you’ll know exactly what to expect from the application. Please refer to Doc ID 976188.1 for more information on this important utility.

    Read the article

  • Deadlock Analysis in NetBeans 8

    - by Geertjan
    Lock contention profiling is very important in multi-core environments. Lock contention occurs when a thread tries to acquire a lock while another thread is holding it, forcing it to wait. Lock contention can escalate into deadlock. Multi-core environments have even more threads to deal with, causing an increased likelihood of lock contention. In NetBeans 8, the NetBeans Profiler has new support for displaying detailed information about lock contention, i.e., the relationship between the threads that are locked. After all, whenever there's a deadlock, in any aspect of interaction, e.g., a political deadlock, it helps to be able to point to the responsible party or, at least, the order in which events happened resulting in the deadlock.

As an example, let's take the handy Deadlock sample code from the Java Tutorial and look at the tools in NetBeans IDE for identifying and analyzing the code. The description of the deadlock is nice: Alphonse and Gaston are friends, and great believers in courtesy. A strict rule of courtesy is that when you bow to a friend, you must remain bowed until your friend has a chance to return the bow. Unfortunately, this rule does not account for the possibility that two friends might bow to each other at the same time.

To help identify who bowed first or, at least, the order in which bowing took place, right-click the file and choose "Profile File". In the Profile Task Manager, make the choices below: When you have clicked Run, the Threads window shows the two threads are blocked, i.e., the red "Monitor" lines tell you that the related threads are blocked while trying to enter a synchronized method or block: But which thread is holding the lock? Which one is blocked by the other? The above visualization does not answer these questions. New in NetBeans 8 is that you can analyze the deadlock in the new Lock Contention window to determine which of the threads is responsible for the lock:

Here is the code that simulates the lock, very slightly tweaked at the end, where I use "setName" on the threads so that it's even easier to analyze the threads in the relevant NetBeans tools. Also, I converted the anonymous inner Runnables to lambda expressions.

package org.demo;

public class Deadlock {

    static class Friend {

        private final String name;

        public Friend(String name) {
            this.name = name;
        }

        public String getName() {
            return this.name;
        }

        public synchronized void bow(Friend bower) {
            System.out.format("%s: %s" + " has bowed to me!%n", this.name, bower.getName());
            bower.bowBack(this);
        }

        public synchronized void bowBack(Friend bower) {
            System.out.format("%s: %s" + " has bowed back to me!%n", this.name, bower.getName());
        }
    }

    public static void main(String[] args) {
        final Friend alphonse = new Friend("Alphonse");
        final Friend gaston = new Friend("Gaston");
        Thread t1 = new Thread(() -> {
            alphonse.bow(gaston);
        });
        t1.setName("Alphonse bows to Gaston");
        t1.start();
        Thread t2 = new Thread(() -> {
            gaston.bow(alphonse);
        });
        t2.setName("Gaston bows to Alphonse");
        t2.start();
    }
}

In the above code, it's extremely likely that both threads will block when they attempt to invoke bowBack. Neither block will ever end, because each thread is waiting for the other to exit bow.
Note: As you can see, it really helps to use "Thread.setName" wherever you're creating a Thread in your code, since the tools in the IDE become a lot more meaningful when you've defined the name of the thread. Otherwise the Profiler will be forced to use thread names like "thread-5" and "thread-6", i.e., based on the order of the threads, which is kind of meaningless. (Normally, except in a simple demo scenario like the above, you're not starting the threads in the same class, so you have no idea at all what "thread-5" and "thread-6" mean because you don't know the order in which the threads were started.)

Slightly more compact:

Thread t1 = new Thread(() -> {
    alphonse.bow(gaston);
}, "Alphonse bows to Gaston");
t1.start();

Thread t2 = new Thread(() -> {
    gaston.bow(alphonse);
}, "Gaston bows to Alphonse");
t2.start();

    Read the article

  • C++ strongly typed typedef

    - by Kian
    I've been trying to think of a way of declaring strongly typed typedefs, to catch a certain class of bugs in the compilation stage. It's often the case that I'll typedef an int into several types of ids, or a vector to position or velocity:

typedef int EntityID;
typedef int ModelID;
typedef Vector3 Position;
typedef Vector3 Velocity;

This can make the intent of code more clear, but after a long night of coding one might make silly mistakes like comparing different kinds of ids, or adding a position to a velocity perhaps.

EntityID eID;
ModelID mID;
if ( eID == mID ) // <- Compiler sees nothing wrong
{ /*bug*/ }

Position p;
Velocity v;
Position newP = p + v; // bug, meant p + v*s but compiler sees nothing wrong

Unfortunately, suggestions I've found for strongly typed typedefs include using boost, which at least for me isn't a possibility (I do have C++11 at least). So after a bit of thinking, I came upon this idea, and wanted to run it by someone.

First, you declare the base type as a template. The template parameter isn't used for anything in the definition, however:

template < typename T >
class IDType
{
    unsigned int m_id;
public:
    IDType( unsigned int const& i_id ): m_id {i_id} {};
    friend bool operator==<T>( IDType<T> const& i_lhs, IDType<T> const& i_rhs );
};

Friend functions actually need to be forward declared before the class definition, which requires a forward declaration of the template class. We then define all the members for the base type, just remembering that it's a template class. Finally, when we want to use it, we typedef it as:

class EntityT;
typedef IDType<EntityT> EntityID;

class ModelT;
typedef IDType<ModelT> ModelID;

The types are now entirely separate. Functions that take an EntityID will throw a compiler error if you try to feed them a ModelID instead, for example. Aside from having to declare the base types as templates, with the issues that entails, it's also fairly compact. I was hoping someone would have comments or critiques about this idea.

One issue that came to mind while writing this, in the case of positions and velocities for example, is that I can't convert between types as freely as before. Where before multiplying a vector by a scalar would give another vector, I could do:

typedef float Time;
typedef Vector3 Position;
typedef Vector3 Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t;

With my strongly typed typedef I'd have to tell the compiler that multiplying a Velocity by a Time results in a Position.

class TimeT;
typedef Float<TimeT> Time;

class PositionT;
typedef Vector3<PositionT> Position;

class VelocityT;
typedef Vector3<VelocityT> Velocity;

Time t = 1.0f;
Position p = { 0.0f };
Velocity v = { 1.0f, 0.0f, 0.0f };
Position newP = p + v*t; // Compiler error

To solve this, I think I'd have to specialize every conversion explicitly, which can be kind of a bother. On the other hand, this limitation can help prevent other kinds of errors (say, multiplying a Velocity by a Distance, perhaps, which wouldn't make sense in this domain). So I'm torn, and wondering if people have any opinions on my original issue, or my approach to solving it.

    Read the article

  • Very slow compile times on Visual Studio

    - by johnc
    We are getting very slow compile times, which can take upwards of 20+ minutes on dual-core 2GHz, 2GB RAM machines.

A lot of this is due to the size of our solution, which has grown to 70+ projects, as well as VSS, which is a bottleneck in itself when you have a lot of files. (Swapping out VSS is not an option unfortunately, so I don't want this to descend into a VSS bash.)

We are looking at combining projects (not nice, as we like the separation of concerns, but it is a good opportunity to refactor away some dead wood). We are also looking at having multiple solutions to achieve greater separation of concerns and quicker compile times for each element of the application. This, I can see, will become a DLL hell as we try to keep things in sync.

I am interested to know how other teams have dealt with this scaling issue. What do you do when your code base reaches a critical mass where you are wasting half the day watching the status bar deliver compile messages?

UPDATE
Apologies, I neglected to mention this is a C# solution. Thanks for all the cpp suggestions, but it's been a few years since I've had to worry about headers. At a distance I'd say I miss C++, but I'm not sure I want to go back.

EDIT: Nice suggestions that have helped so far (not saying there aren't other nice suggestions below, just what has helped):
New 3GHz laptop - the power of lost utilization works wonders when whinging to management
Disable Anti-Virus during compile
'Disconnecting' from VSS (actually the network) during compile - I may get us to remove VS-VSS integration altogether and stick to using the VSS UI

Still not rip-snorting through a compile, but every bit helps. Orion did mention in a comment that generics may have a play also. From my tests there does appear to be a minimal performance hit, but not high enough to be sure - compile times can be inconsistent due to disk activity. Due to time limitations, my tests didn't include as many generics, or as much code, as would appear in a live system, so that may accumulate. I wouldn't avoid using generics where they are supposed to be used, just for compile-time performance.

WORKAROUND
We are testing the practice of building new areas of the application in new solutions, importing in the latest dlls as required, then integrating them into the larger solution when we are happy with them. We may also do the same to existing code by creating temporary solutions that just encapsulate the areas we need to work on, and throwing them away after reintegrating the code. We need to weigh up the time it will take to reintegrate this code against the time we gain by not having Rip Van Winkle-like experiences with rapid recompiling during development.

    Read the article

  • Get specific value from JSON string using Json.NET

    - by dean nolan
    I am trying to get a value from a JSon formatted string. It was to get album info from a website called Freebase. My result is like this: { "a0": { "code": "/api/status/error", "messages": [ { "code": "/api/status/error/mql/result", "info": { "count": 20, "result": [ { "album": [ { "name": "Definitely Maybe", "release_date": "1994-08-30" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Most Wanted Rock 1", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Alternative 90s", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live Forever: Best of Britpop", "release_date": "2003-03-03" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best... In the World... Ever! Volume 5", "release_date": "1997-03-31" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live 4 Ever", "release_date": "1998-06-29" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "De Afrekening, Volume 8", "release_date": "1994" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 33", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best Anthems... Ever! Volume 2", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "1995 Mercury Music Prize: Ten Albums of the Year", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 1994", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Indie Top 20, Volume 21", "release_date": "1995" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Dad Rocks!", "release_date": "2006-06-05" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Untitled", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Greatest Hits of 1994", "release_date": "1994-10" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Top of the Pops 2", "release_date": "2000-03-27" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems (disc 1)", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Jamie Oliver's Cookin'", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Killer Buzz", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" } ] }, "message": "Unique query may have at most one result. 
Got 20", "path": "", "query": { "album": [ { "name": null, "release_date": null, "sort": "release_date" } ], "artist": "Oasis", "error_inside": ".", "name": "Live Forever", "type": "/music/track" } } ] }, "code": "/api/status/ok", "status": "200 OK", "transaction_id": "cache;cache04.p01.sjc1:8101;2010-03-30T18:04:20Z;0035" }

I am looking to get the first album title, Definitely Maybe, from this list. I have tried parsing the string like this:

JObject o = JObject.Parse(jsonString);
string album = (string)o[""];

But no matter what I have tried, I don't know what to put in those quotes. How would I get this specific value or be able to search for it? Thanks
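Going by the structure of the response shown above, one way to reach that value with Json.NET is SelectToken; here is a sketch assuming the response layout stays as posted:

JObject o = JObject.Parse(jsonString);

// Walk: root -> "a0" -> first message -> "info" -> first result -> first album -> "name"
string album = (string)o.SelectToken("a0.messages[0].info.result[0].album[0].name");
// album == "Definitely Maybe"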

    Read the article

  • How can I read individual lines of a CSV file into a string array, to then be selectively displayed

    - by Ryan
    I need your help, guys! :| I've got myself a CSV file with the following contents:

1,The Compact,1.8GHz,1024MB,160GB,440
2,The Medium,2.4GHz,1024MB,180GB,500
3,The Workhorse,2.4GHz,2048MB,220GB,650

It's a list of computer systems, basically, that the user can purchase. I need to read this file, line by line, into an array. Let's call this array csvline(). The first line of the text file would be stored in csvline(0). Line two would be stored in csvline(1). And so on. (I've started with zero because that's where VB starts its arrays.)

A drop-down list would then enable the user to select 1, 2 or 3 (or however many lines/systems are stored in the file). Upon selecting a number - say, 1 - csvline(0) would be displayed inside a textbox (textbox1, let's say). If 2 was selected, csvline(1) would be displayed, and so on.

It's not the formatting I need help with, though; that's the easy part. I just need someone to help teach me how to read a CSV file line by line, putting each line into a string array - csvline(count) - then incrementing count by one so that the next line is read into another slot.

So far, I've been able to paste the numbers of each system into a combobox:

Using csvfileparser As New Microsoft.VisualBasic.FileIO.TextFieldParser _
    ("F:\folder\programname\programname\bin\Debug\systems.csv")

    Dim csvalue As String()
    csvfileparser.TextFieldType = Microsoft.VisualBasic.FileIO.FieldType.Delimited
    csvfileparser.Delimiters = New String() {","}

    While Not csvfileparser.EndOfData
        csvalue = csvfileparser.ReadFields()
        combobox1.Items.Add(String.Format("{1}{0}", _
                            Environment.NewLine, _
                            csvalue(0)))
    End While

End Using

But this only selects individual values. I need to figure out how selecting one of these numbers in the combobox can trigger textbox1 to be appended with just that line (I can handle the formatting, using the String.Format stuff). If I try to do this using csvalue = csvfileparser.ReadLine, I get the following error message: "Error 1 Value of type 'String' cannot be converted to '1-dimensional array of String'." If I then put it as an array, i.e. csvalue() = csvfileparser.ReadLine, I then get a different error message: "Error 1 Number of indices is less than the number of dimensions of the indexed array."

What's the knack, guys? I've spent hours trying to figure this out. Please go easy on me - and keep any responses ultra-simple for my newbie brain - I'm very new to all this programming malarkey and just starting out! :)

    Read the article

  • SqlMetal, SQL Server 2008 database, table with HierarchyID - DAL cs file is created only sometimes?

    - by judek.mp
    I have 2 databases, each with a table that has a HierarchyID field. For one database I can get a DAL cs file; for the other database I cannot.

HBus is the database I can get the DAL cs for:

SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

This kicks out a file which works, but excludes the HierarchyID field for the table and includes all other fields for that table. This is OK; I do not mind. The above command line emits a warning but still produces a file, like so:

SqlMetal /server:.\SQLSERVER2008 /database:HBus /code:HBusDC.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
Microsoft (R) Database Mapping Generator 2008 version 1.00.30729 for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.HMsg' from SqlServer because the column's DbType is a user-defined type (UDT).
Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.vwHMsg' from SqlServer because the column's DbType is a user-defined type (UDT).

HMsg is a table with a HierarchyID field.

I have another database, Elf, almost the same thing, but I get a warning and an error when using SqlMetal, and I do not get a DAL cs file:

SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext

An error as well as the warning, and the cs file fails to appear on my disk :-(

SqlMetal /server:.\SQLSERVER2008 /database:Elf /code:ElfDataContextDal.cs /views /functions /sprocs /namespace:HBusDC /context:HBusDataContext
Microsoft (R) Database Mapping Generator 2008 version 1.00.30729 for Microsoft (R) .NET Framework version 3.5
Copyright (C) Microsoft Corporation. All rights reserved.
Warning : SQM1021: Unable to extract column 'OrgNode' of Table 'dbo.EntityLink' from SqlServer because the column's DbType is a user-defined type (UDT).
Error : Requested value 'ELF.SYS.HIERARCHYID' was not found.

The fields are declared the same way: in the Elf db, OrgNode [HierarchyID] null; in the HBus db, OrgNode [HierarchyID] null.

Both databases are in the same instance of SQL Server 2008, so HierarchyID is an inbuilt type; neither db has a HierarchyID UDT.

Cheers in advance for any replies ...

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: storing typed objects in memory.

Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects.

Idea: Every primitive is a semi-class (it can be used as if it were a normal class, but it's stored more compactly). Every class is represented by primitives and some meta-data (containing class properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context switches. For primitives, the meta-data is very small, compared to a "real" class, which is a lot bigger. This enables another idea, that "primitives are objects", in my language, which I found necessary.

Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 bytes (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, at bit level: it uses a sort of "packing" mechanism, which is read by an FSM at runtime when doing inspection of the type (like when passing the object to methods for checking, etc.).

For instance (read from left to right, top to bottom, remember the vertical position when going to the right, and check the nearest column header for the meaning of a switch):

Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ...

(After reaching "done" the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based data structures.)

For an array of 32 booleans containing all true values, the memory for this type would be (read top-down):

1 Primitive
1 Array
1 ArrayType: Primitive
0 Not-Array
0 Not-Integer
1 Boolean
0 Not-Byte (thus bit)
1 Integer Size: 1 Byte
00100000 Array size
11111111 11111111 11111111 11111111 Data

Thus, 8 bytes represent 32 booleans in an array:

11100101 00100000 11111111 11111111 11111111 11111111

Is this okay, or can it be done better?
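To make the example concrete, the encoding above can be reproduced with plain bit twiddling; here is a hypothetical C# sketch of packing the 32-boolean array (the header layout follows the bits listed above and is purely illustrative of the asker's scheme):

using System;

class TypeMetaDemo
{
    static void Main()
    {
        // Header bits from the example, most significant first:
        // Primitive(1) Array(1) ArrayType:Primitive(1) Not-Array(0)
        // Not-Integer(0) Boolean(1) Not-Byte(0) Size:1-byte(1) -> 11100101
        byte header = 0b1110_0101;
        byte count = 32;                  // array size, one byte

        var buffer = new byte[2 + 4];     // header + size + 32 bits of data
        buffer[0] = header;
        buffer[1] = count;

        // 32 true booleans -> four 0xFF data bytes
        for (int i = 0; i < 4; i++)
            buffer[2 + i] = 0xFF;

        Console.WriteLine(BitConverter.ToString(buffer));
        // Prints: E5-20-FF-FF-FF-FF
    }
}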

    Read the article

  • PerlIO in Windows PowerShell and CMD.exe

    - by Evan Carroll
    Apparently, a Perl script I have produces two different output files depending on whether I run it under Windows PowerShell or cmd.exe. The script can be found at the bottom of this question. The file handle is opened with IO::File; I believe that PerlIO is doing some screwy stuff. It seems as if under cmd.exe the encoding chosen is a much more compact encoding (4.09 KB), compared to PowerShell, which generates a file nearly twice the size (8.19 KB).

This script takes a shell script and generates a Windows batch file. It seems like the one generated under cmd.exe is just regular ASCII (1-byte characters), while the other one appears to be UTF-16 (first two bytes FF FE).

Can someone verify and explain why PerlIO works differently under Windows PowerShell than under cmd.exe? Also, how do I explicitly get an ASCII-magic PerlIO filehandle using IO::File? Currently, only the file generated with cmd.exe is executable. The UTF-16 .bat (I think that's the encoding) is not executable by either PowerShell or cmd.exe. BTW, we're using Perl 5.12.1 for MSWin32.

#!/usr/bin/env perl
use strict;
use warnings;

use File::Spec;
use IO::File;
use IO::Dir;
use feature ':5.10';

my $bash_ftp_script = File::Spec->catfile( 'bin', 'dm-ftp-push' );

my $fh = IO::File->new( $bash_ftp_script, 'r' ) or die $!;
my @lines = grep $_ !~ /^#.*/, <$fh>;

my $file = join '', @lines;
$file =~ s/ \\\n/ /gm;
$file =~ tr/'\t/"/d;
$file =~ s/ +/ /g;
$file =~ s/\b"|"\b/"/g;

my @singleLnFile = grep /ncftp|echo/, split $/, $file;
s/\$PWD\///g for @singleLnFile;

my $dh = IO::Dir->new( '.' );
my @files = grep /\.pl$/, $dh->read;

say 'echo off';
say "perl $_" for @files;
say for @singleLnFile;

1;

    Read the article

  • ASP.Net JSON Web Service Post Form Data

    - by Will D
    I have an ASP.NET web service decorated with System.Web.Script.Services.ScriptService() so it can return JSON-formatted data. This much is working for me, but ASP.NET requires that parameters to the web service be JSON in order to get JSON out. I'm using jQuery to run my ajax calls and there doesn't seem to be an easy way to create a nice JavaScript object from the form elements. I have looked at serializeArray in the json2 library, but it doesn't encode the field names as property names in the object. If you have two form elements like this:

        <input type="text" name="namefirst" id="namefirst" value="John"/>
        <input type="text" name="namelast" id="namelast" value="Doe"/>

    calling $("form").serialize() gets you the standard query string

        namefirst=John&namelast=Doe

    while calling JSON.stringify($("form").serializeArray()) gets you the (bulky) JSON representation

        [{"name":"namefirst","value":"John"},{"name":"namelast","value":"Doe"}]

    This works when passed to the web service, but it's ugly, because you have to have code like this to read it in:

        Public Class NameValuePair
            Public name As String
            Public value As String
        End Class

        <WebMethod()> _
        Public Function GetQuote(ByVal nvp As NameValuePair()) As String
        End Function

    You also have to wrap that JSON text inside another object named nvp to make the web service happy. Then it's more work, as all you have is an array of NameValuePair when you want an associative array. I might be kidding myself, but I imagined something more elegant when I started this project - more like this:

        Public Class Person
            Public namefirst As String
            Public namelast As String
        End Class

    which would require the JSON to look something like this:

        {"namefirst":"John","namelast":"Doe"}

    Is there an easy way to do this? Obviously it is simple for a form with two parameters, but when you have a very large form, concatenating strings gets ugly, and nested objects would complicate things further. The kludge I have settled on for the moment is to use the standard name-value-pair format stuffed inside a JSON object. This is compact and fast:

        {"q":"namefirst=John&namelast=Doe"}

    Then a web method like this on the server parses the query string into an associative array:

        <WebMethod()> _
        Public Function AjaxForm(ByVal q As String) As String
            Dim params As NameValueCollection = HttpUtility.ParseQueryString(q)
            'do stuff
            Return "Hello"
        End Function

    As far as kludges go, this one seems reasonably elegant in terms of the amount of code, but my question is: is there a better way? Is there a generally accepted way of passing form data to ASP.NET web/script services?
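    For what it's worth, ASMX script services already bind incoming JSON to typed parameters via JavaScriptSerializer, so the {"namefirst":"John","namelast":"Doe"} shape the poster wants is reachable by posting it wrapped under the parameter name, e.g. {"p":{"namefirst":"John","namelast":"Doe"}}. A C# sketch of that shape (the class and service names here follow the poster's VB sample and are otherwise invented):

        using System.Web.Services;
        using System.Web.Script.Services;

        public class Person
        {
            public string namefirst;
            public string namelast;
        }

        [WebService(Namespace = "http://example.com/")]
        [ScriptService]
        public class QuoteService : WebService
        {
            // A JSON body of {"p":{"namefirst":"John","namelast":"Doe"}}
            // is deserialized into the Person parameter automatically.
            [WebMethod]
            public string GetQuote(Person p)
            {
                return "Hello " + p.namefirst + " " + p.namelast;
            }
        }

    The client still has to assemble the nested object, but a small jQuery loop over serializeArray() can do that generically for any form.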

    Read the article

  • Oracle Schema Design: Separate Schema with I/O Overhead?

    - by Guru
    We are designing a database schema for a new system based on Oracle 11gR1. We have identified a main schema which would have close to 100 tables; these will be accessed from the front-end Java application. We have a requirement to audit the values which get changed in close to 50 tables, and this has to be done per row, which means that for a single row in MYSYS.T1 there might be 50 (or more) rows in the MYSYS_AUDIT.T1_AUD table. We would keep the old value of every column entry along with the new values available from T1. Our DBA advised against this method, saying that a separate schema means an extra I/O for every operation. The AUDIT schema would basically be used only for some analysis and for entering values (thus SELECT and INSERT). Is it true that "a separate schema means an extra I/O"? I could not find a justification. It appears logical to me, as the AUDIT data should not be tampered with, hence a separate schema. Also, we designed a separate schema for archiving some tables from MYSYS; from MYSYS_ARC the tables might be backed up to tape or deleted after sufficient time. A few stats: some tables (20 to 30) in the MYSYS schema could grow to around 50M rows. We have asked for a total disk space of 4 TB. The MYSYS_AUDIT schema might hold 10 times the data of MYSYS, but we won't keep it more than 3 months.

    Questions:
        1. Given all this, can you suggest any improvements?
        2. Does a separate schema affect disc I/O (one extra I/O for every schema)?
        3. Any general suggestions?

    Figure:

        +-------------------+           +-------------------+
        | MYSYS             |           | MYSYS_AUDIT       |
        |                   |           |                   |
        |   1. T1           |           |   1. T1_AUD       |
        |   2. T2           |           |   2. T2_AUD       |
        |   3. T3           |---------->|   3. T3_AUD       |
        |   4. T4           | (SELECT,  |   4. T4_AUD       |
        |   .               |  INSERT)  |   .               |
        |   .               |           |   .               |
        |   .               |           |   .               |
        | 100. T100         |           |  50. T50_AUD      |
        +-------------------+           +-------------------+
                 |
                 | (INSERT)
                 |
                 v
        +-------------------+
        | MYSYS_ARC         |
        |                   |
        |   1. T1_ARC       |
        |   2. T2_ARC       |
        |   3. T3_ARC       |
        |   4. T4_ARC       |
        |   .               |
        |   .               |
        |   .               |
        | 100. T100_ARC     |
        +-------------------+

    Apart from this, we have two more schemas with only read-only rights, but they are mainly for ad hoc purposes and we don't mind the performance on them.

    Read the article

  • Rails, MySQL and Snow Leopard

    - by coneybeare
    I upgraded to Snow Leopard using the disc we got at WWDC. Trying to run some of my rails apps now complains about sql (in /Users/coneybeare/Projects/Ambiance/ambiance-server):

        !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
        Importing all sounds in /Users/coneybeare/Projects/Ambiance/ambiance-sounds/Import 32/Compressed/
        -- AdirondackPeepers.caf
        !!! The bundled mysql.rb driver has been removed from Rails 2.2. Please install the mysql gem and try again: gem install mysql.
        rake aborted!
        dlopen(/opt/local/lib/ruby/gems/1.8/gems/mysql-2.7/lib/mysql.bundle, 9): Library not loaded: /usr/local/mysql/lib/libmysqlclient.16.dylib
          Referenced from: /opt/local/lib/ruby/gems/1.8/gems/mysql-2.7/lib/mysql.bundle
          Reason: image not found - /opt/local/lib/ruby/gems/1.8/gems/mysql-2.7/lib/mysql.bundle

        (See full trace by running task with --trace)

    I could have sworn I fixed this once before. The problem is that sudo gem install mysql does not work, and gives this error:

        Building native extensions. This could take a while...
        ERROR: Error installing mysql:
        ERROR: Failed to build gem native extension.

        /opt/local/bin/ruby extconf.rb install mysql
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lm... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lz... yes
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lsocket... no
        checking for mysql_query() in -lmysqlclient... no
        checking for main() in -lnsl... no
        checking for mysql_query() in -lmysqlclient... no

        Gem files will remain installed in /opt/local/lib/ruby/gems/1.8/gems/mysql-2.7 for inspection.
        Results logged to /opt/local/lib/ruby/gems/1.8/gems/mysql-2.7/gem_make.out

    Has anybody gotten mysql to work with rails on Snow Leopard yet? If so, what is your setup, and better yet, what can I do to reproduce it?

    Read the article

  • Multiple Connection Types for one Designer Generated TableAdapter

    - by Tim
    I have a Windows Forms application with a DataSet (.xsd) that is currently set to connect to a SQL CE database. Compact Edition is used so the users can run the application in the field without an internet connection, and then sync their data at day's end. I have been given a new project to create a supplemental web interface for displaying some of the same reports as the Windows Forms application, so certain users can obtain reports without installing the Windows app. What I've done so far is create a new Web Project and add it to my current solution. I have split both the reports (.rdlc) and DataSets out of the Windows Forms project into their own projects so they can be accessed by both the Windows and Web applications. So far, this is working fine. Here's my dilemma: as I said before, the DataSets are currently set up to connect to a local SQL CE database file. This is correct for the Windows app, but for the Web application I would like to use these same TableAdapters and queries to connect to the SQL Server 2005 database. I have found that the designer-generated, strongly typed TableAdapter classes have a ConnectionModifier property that allows you to make the TableAdapter's Connection public. This exposes the Connection property and allows me to set it; however, it is strongly typed as a SqlCeConnection, whereas I would like to set it to a SqlConnection for my Web project. I'm assuming the DataSet designer strongly types the Connection, Command, and DataAdapter objects based on the provider of the ConnectionString as indicated in the app.config file. Is there any way I can use some generic provider so that the DataSet designer will use object types that can connect to both a SQL CE database file AND the actual SQL Server 2005 database? I know that SqlCeConnection and SqlConnection both inherit from DbConnection, which implements IDbConnection; similarly, SqlCeCommand/SqlCommand : DbCommand : IDbCommand. It would be nice if I could just figure out a way for the designer to use the interface types rather than the concrete types, but I doubt that is possible. I hope my problem and question are clear. Any help is much appreciated; let me know if there's anything I can clarify.
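    One route worth sketching: ADO.NET 2.0 can create connections through the provider-agnostic base classes via DbProviderFactories, keyed off the providerName of a connection string, so the same code can yield either a SqlCeConnection or a SqlConnection. A minimal C# sketch (the connection-string name is a placeholder, and note this helps hand-written data access or partial-class extensions of the TableAdapters, not the designer's strongly typed Connection property itself):

        using System.Configuration;
        using System.Data.Common;

        static class ConnectionFactory
        {
            // app.config / web.config choose the provider per deployment, e.g.
            //   providerName="System.Data.SqlServerCe.3.5"  (Windows app)
            //   providerName="System.Data.SqlClient"        (web app)
            public static DbConnection Create(string name)
            {
                ConnectionStringSettings css =
                    ConfigurationManager.ConnectionStrings[name];
                DbProviderFactory factory =
                    DbProviderFactories.GetFactory(css.ProviderName);
                DbConnection conn = factory.CreateConnection();
                conn.ConnectionString = css.ConnectionString;
                return conn;
            }
        }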

    Read the article

  • WordPress: Display Online Users' Avatars

    - by Wade D Ouellet
    Hi, I'm using version 2.7.0 of this WordPress plugin to display which users are currently online (the latest version doesn't work): http://wordpress.org/extend/plugins/wp-useronline/ It's working great but I would love to be able to alter it quickly to display the users' avatars instead of their names. Hoping someone with pretty good knowledge of WordPress queries and functions can help. The part below seems to be the part that handles all this. If this isn't enough, here is the link to download the version I am using with the full php files: http://downloads.wordpress.org/plugin/wp-useronline.2.70.zip

        // If No Bot Is Found, Then We Check Members And Guests
        if ( !$bot_found ) {
            if ( $current_user->ID ) {
                // Check For Member
                $user_id = $current_user->ID;
                $user_name = $current_user->display_name;
                $user_type = 'member';
                $where = $wpdb->prepare("WHERE user_id = %d", $user_id);
            } elseif ( !empty($_COOKIE['comment_author_'.COOKIEHASH]) ) {
                // Check For Comment Author (Guest)
                $user_id = 0;
                $user_name = trim(strip_tags($_COOKIE['comment_author_'.COOKIEHASH]));
                $user_type = 'guest';
            } else {
                // Check For Guest
                $user_id = 0;
                $user_name = __('Guest', 'wp-useronline');
                $user_type = 'guest';
            }
        }

        // Check For Page Title
        if ( is_admin() && function_exists('get_admin_page_title') ) {
            $page_title = ' &raquo; ' . __('Admin', 'wp-useronline') . ' &raquo; ' . get_admin_page_title();
        } else {
            $page_title = wp_title('&raquo;', false);
            if ( empty($page_title) )
                $page_title = ' &raquo; ' . strip_tags($_SERVER['REQUEST_URI']);
            elseif ( is_singular() )
                $page_title = ' &raquo; ' . __('Archive', 'wp-useronline') . ' ' . $page_title;
        }
        $page_title = get_bloginfo('name') . $page_title;

        // Delete Users
        $delete_users = $wpdb->query($wpdb->prepare("
            DELETE FROM $wpdb->useronline
            $where OR timestamp < CURRENT_TIMESTAMP - %d
        ", self::$options->timeout));

        // Insert Users
        $data = compact('user_type', 'user_id', 'user_name', 'user_ip', 'user_agent', 'page_title', 'page_url', 'referral');
        $data = stripslashes_deep($data);
        $insert_user = $wpdb->insert($wpdb->useronline, $data);

        // Count Users Online
        self::$useronline = intval($wpdb->get_var("SELECT COUNT(*) FROM $wpdb->useronline"));
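    WordPress avatars are normally Gravatars: core's get_avatar() hashes the user's e-mail with MD5 and builds an image URL from it, so the usual change is to emit get_avatar($user_id) where the plugin currently prints $user_name. Purely to illustrate the mechanism behind that call, here is the same hash-to-URL computation as a C# sketch (the e-mail address and size are made-up examples):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        class GravatarUrl
        {
            static string For(string email, int size)
            {
                // Gravatar convention: MD5 of the trimmed, lower-cased address.
                byte[] hash;
                using (MD5 md5 = MD5.Create())
                    hash = md5.ComputeHash(
                        Encoding.UTF8.GetBytes(email.Trim().ToLowerInvariant()));

                StringBuilder hex = new StringBuilder();
                foreach (byte b in hash) hex.Append(b.ToString("x2"));

                return "https://www.gravatar.com/avatar/" + hex + "?s=" + size;
            }

            static void Main()
            {
                Console.WriteLine(For("someone@example.com", 32));
            }
        }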

    Read the article

  • How to remove music/videos DRM protection and convert to Mobile Devices such as iPod, iPhone, PSP, Z

    - by tonywesley
    The music/video files you purchase from online music stores like iTunes, Yahoo Music or Wal-Mart are under DRM protection, so you can't convert them to the formats supported by your own mobile devices, such as a Nokia phone, Creative Zen player, iPod, PSP, Walkman, Zune… You also can't share your purchased music/videos with your friends. The following step-by-step tutorial shows music lovers how to convert DRM-protected music/videos for mobile devices.

    Method 1: If you only want to remove DRM protection from your protected music, this method costs nothing.

        Step 1: Burn your protected music files to a CD-R/RW disc to make an audio CD.
        Step 2: Use a free CD ripper to convert the audio CD tracks back to MP3, WAV, WMA, M4A, AAC, RA…

    Method 2: This guide shows how to strip DRM from protected WMV, WMA, M4P, M4V, M4A, AAC files and convert them to unprotected WMV, MP4, MP3, WMA or any video and audio format you like, such as AVI, MP4, FLV, MPEG, MOV, 3GP, M4A, AAC, WMV, OGG, WAV... I have been using Media Converter software; it is the quickest and easiest solution to remove DRM from WMV, M4V, M4P, WMA, M4A, AAC, M4B, AA files by quick recording. It captures the audio and video streams at the bottom of the operating system, so the output quality is lossless and the conversion speed is fast. The process is as follows:

        Step 1: Download and install the software.
        Step 2: Run the software and click the "Add…" button to load WMA or M4A, M4B, AAC, WMV, M4P, M4V, ASF files.
        Step 3: Choose the output format. To convert protected audio files, select from the "Convert audio to" list; to convert protected video files, select from the "Convert video to" list.
        Step 4: Optionally click the "Settings" button to customize preferences for the output files - the "Settings" button below the "Convert audio to" list for protected audio files, or the one below the "Convert video to" list for protected video files.
        Step 5: Start removing DRM and converting your protected music and videos by clicking the "Start" button.

    What is DRM? DRM, most commonly found in movie and music files, doesn't mean just basic copy protection of video, audio and ebooks; it means full protection for digital content, covering everything from delivery to the ways an end user can use the content. We can remove the DRM from video and audio files legally by quick recording.

    Read the article

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I have created an ext4 fs with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (which already mounts ext4 filesystems with 4096-byte block sizes). According to my reading on ext4, it should allow such big block sizes. I want to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group

        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try dmesg | tail or so

    Read the article

  • Can this way of storing typed objects be improved?

    - by Pindatjuh
    This is an "can it be improved"-question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 Windows platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical position when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignment. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 01010101 01010101 01010101 01010101 Data (user defined) Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 01010101 01010101 01010101 01010101 How can I improve this? (Both performance- and memory-consumption wise)

    Read the article

  • Problem with continue in While Loop within Try/Catch in C# (2.0)

    - by csharpnoob
    Hi, I have this code in the code-behind of my ASPX page:

        try
        {
            while (...)
            {
                ...
                db.Open();
                readDataMoney = new OleDbCommand("SELECT * FROM Customer WHERE card = '" + customer.card + "';", db).ExecuteReader();
                while (readDataMoney.Read())
                {
                    try
                    {
                        if (!readDataMoney.IsDBNull(readDataMoney.GetOrdinal("Credit")))
                        {
                            customer.credit = Convert.ToDouble(readDataMoney[readDataMoney.GetOrdinal("Credit")]);
                        }
                        if (!readDataMoney.IsDBNull(readDataMoney.GetOrdinal("Bonus")))
                        {
                            customer.bonus = Convert.ToDouble(readDataMoney[readDataMoney.GetOrdinal("Bonus")]);
                        }
                    }
                    catch (Exception ex)
                    {
                        Connector.writeLog("Money: " + ex.StackTrace + " " + ex.Message + " " + ex.Source);
                        customer.credit = 0.0;
                        customer.bonus = 0.0;
                        continue;
                    }
                    finally
                    {
                    }
                }
                readDataMoney.Close();
                vsiDB.Close();
                ...
            }
        }
        catch
        {
            continue;
        }

    The whole page hangs when the read from the DB fails. I tried checking for !IsDBNull, but the problem remains. I have lots of different MDB files to process, which are read-only (I can't repair/compact them); some are fine and some are not, all with the same design/layout of tables. With good old Classic ASP 3.0 all of them processed fine, thanks to "On Error Resume Next". I know, I know - but that's how it is, and I can't change the source data. So the basic question: is there any way to tell .NET to continue the loop, whatever happens within the try block, whenever an exception occurs? After a lot of waiting time I get these exceptions:

        at System.Data.Common.UnsafeNativeMethods.IDBInitializeInitialize.Invoke(IntPtr pThis)
        at System.Data.OleDb.DataSourceWrapper.InitializeAndCreateSession(OleDbConnectionString constr, SessionWrapper& sessionWrapper)
        at System.Data.OleDb.OleDbConnectionInternal..ctor(OleDbConnectionString constr, OleDbConnection connection)
        at System.Data.OleDb.OleDbConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
        at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
        at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        at System.Data.OleDb.OleDbConnection.Open()
        at GetCustomer(String card)
        Thread was being aborted. (System.Data)

    and

        System.Runtime.InteropServices.Marshal.ReadInt16(IntPtr ptr, Int32 ofs)
        System.Data.ProviderBase.DbBuffer.ReadInt16(Int32 offset)
        System.Data.OleDb.ColumnBinding.Value_I2()
        System.Data.OleDb.ColumnBinding.Value()
        System.Data.OleDb.OleDbDataReader.GetValue(Int32 ordinal)
        System.Data.OleDb.OleDbDataReader.get_Item(Int32 index)
        Thread was terminated. (mscorlib)

    Thanks for any help.
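    A note on the first trace: "Thread was being aborted" is the typical signature of ASP.NET aborting the request thread when its execution timeout expires, and a ThreadAbortException is automatically re-raised at the end of any catch block unless Thread.ResetAbort() is called, so no try/catch can rescue a request whose Open() stalls for minutes on a corrupt file. What a tight per-file try/catch can do is keep one bad MDB from ending the whole run. A minimal C# 2.0 sketch of that structure (the Jet connection string and file loop are assumptions here; the Credit/Bonus handling is elided):

        using System;
        using System.Data.OleDb;

        static class MdbScan
        {
            public static void ReadCustomers(string[] mdbFiles, string card)
            {
                foreach (string mdb in mdbFiles)
                {
                    try
                    {
                        using (OleDbConnection db = new OleDbConnection(
                            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdb))
                        using (OleDbCommand cmd = new OleDbCommand(
                            "SELECT Credit, Bonus FROM Customer WHERE card = ?", db))
                        {
                            cmd.Parameters.AddWithValue("card", card);
                            db.Open();
                            using (OleDbDataReader r = cmd.ExecuteReader())
                            {
                                while (r.Read())
                                {
                                    // copy Credit/Bonus into the customer here
                                }
                            }
                        }   // Dispose closes reader and connection even on failure
                    }
                    catch (Exception ex)
                    {
                        // the poster's Connector.writeLog would go here
                        Console.Error.WriteLine("Money: " + mdb + ": " + ex.Message);
                        continue;   // skip the broken file, keep processing the rest
                    }
                }
            }
        }

    If the hang itself must be avoided, the open of a suspect file has to be bounded some other way (for example by sanity-checking the file up front), since in practice the Jet OLE DB provider does not honor a connect timeout.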

    Read the article

< Previous Page | 68 69 70 71 72 73 74 75 76 77 78  | Next Page >