Search Results

Search found 16948 results on 678 pages for 'static analysis'.

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all, I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine revision control with your statistical analysis workflow?

    Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element:
    http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs
    http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing

    A long update to the question: following some of the answers, and Dirk's question in the comments, I would like to direct my question a bit more. After reading the Wikipedia article about "revision control" (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure for one's code. That structure either leads to a "final product" or to several branches.

    When building something like, say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work is (to my view) different. Sometimes you know where you want to get to, but more often you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging).

    That is why the workflow question for statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical: which revision control software do you use (and why)? Which IDE do you use (and why)? The more interesting questions are about the work process: How do you structure your files? What do you keep as a separate file and what as a revision? Or, to ask it a different way: what should be a "branch" and what should be a "sub-project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you resolve this tension was my initial curiosity.

    The second question is "what might I be missing?". What rules of thumb should one follow to avoid common pitfalls when doing statistical programming with version control?

    In my intuition, I feel that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That is why I am unsure which of the lessons I have read here about version control would be applicable. Thanks a lot, Tal

  • Dynamically loaded jQuery with GreaseMonkey inconsistent on pages (refreshing seems to fix it)... do

    - by uprightnetizen
    Hi, I want a custom page-analysis footer on every site I visit, so I've used a method to attach jQuery to unsafeWindow. I then create a floating footer on the page. I want to be able to call commands from a menu, do some processing, then put the results in the footer. Unfortunately it sometimes works and sometimes doesn't. At least two alerts should fire in the printOutput function; sometimes only one fires and then it (crashes?) without error. On other pages both alerts fire and it finds the element, but it doesn't add the extra text (e.g. www.linode.com). Refreshing the page, then running the printOutput command again, seems to always work. Does anyone know what's going on??? The userscript can be installed at: http://www.captionwizard.com/test/page_analysis.user.js

    // ==UserScript==
    // @name page_analysis
    // @namespace markspace
    // @description Page Analysis
    // @include http://*/*
    // ==/UserScript==

    (function() {
        // Add jQuery
        var GM_JQ = document.createElement('script');
        GM_JQ.src = 'http://code.jquery.com/jquery-1.4.2.min.js';
        GM_JQ.type = 'text/javascript';
        document.getElementsByTagName('head')[0].appendChild(GM_JQ);

        var jqueryActive = false;

        // Check if jQuery's loaded
        function GM_wait() {
            if (typeof unsafeWindow.jQuery == 'undefined') {
                window.setTimeout(GM_wait, 100);
            } else {
                $ = unsafeWindow.jQuery;
                letsJQuery();
            }
        }
        GM_wait();

        function letsJQuery() {
            jqueryActive = true;
            setupOutputFooter();
        }

        /******************************
         Analysis FOOTER Functions
         ******************************/
        function printOutput(someText) {
            alert('printing output');
            if ($('div.analysis_footer').length) {
                alert('is here - appending');
                $('div.analysis_footer').append('<br>' + someText);
            } else {
                alert('not here - trying again');
                setupOutputFooter();
                $('div.analysis_footer').append('<br>' + someText);
            }
        }

        GM_registerMenuCommand("Test Output", testOutput, "k", "control", "k");

        function testOutput() {
            printOutput('testing this');
        }

        function setupOutputFooter() {
            $('<div class="analysis_footer">Page Analysis Footer:</div>').appendTo('body');
            $('div.analysis_footer').css('position','fixed').css('bottom','0px').css('background-color','#F8F8F8');
            $('div.analysis_footer').css('width','100%').css('color','#3B3B3B').css('font-size','0.8em');
            $('div.analysis_footer').css('font-family','"Myriad",Verdana,Arial,Helvetica,sans-serif').css('padding','5px');
            $('div.analysis_footer').css('border-top','1px solid black').css('text-align','left');
        }
    }());

  • What does the symbol ::: mean in R

    - by Milktrader
    I came across this in the following context, from B. Pfaff's "Analysis of Integrated and Cointegrated Time Series in R":

    ## Impulse response analysis of SVAR A-type model
    args(vars:::irf.svarest)
    irf.svara <- irf(svar.A, impulse = "y1",
                     response = "y2", boot = FALSE)
    args(vars:::plot.varirf)
    plot(irf.svara)

  • SQL Server (2012 Enterprise) Browser service failing

    - by Watki02
    I have a problem as described below: I have an instance of SQL Server 2012 Enterprise (thanks to MSDN) for local development on my PC. I try to start the SQL Server Browser service from SQL Server Configuration Manager; it takes a long time to fail, then fails with:

    The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details.

    I checked the event logs and found these errors in this order (all within the same 1-second time frame):

    The SQL Server Browser service port is unavailable for listening, or invalid.
    The SQL Server Browser service was unable to establish SQL instance and connectivity discovery.
    The SQL Server Browser is enabling SQL instance and connectivity discovery support.
    The SQL Server Browser service was unable to establish Analysis Services discovery.
    The SQL Server Browser service has started.
    The SQL Server Browser service has shutdown.

    I checked firewall rules, and both port 1433 (TCP) and 1434 (UDP) are wide open; likewise, the programs and service binaries have been "allowed through Windows Firewall". I started the "Analysis Services" service by hand and it works fine. Browser still won't start.

    Some history:

    Installed SQL 2008 R2 Express Advanced
    Installed SQL 2012 Express Advanced
    Uninstalled SQL 2008 R2 Express Advanced
    Installed 2012 SSDT and lots of features with Express install
    Installed a unique instance of SQL 2012 Enterprise with all features
    Uninstalled SSDT and reinstalled SSDT with Enterprise (solved a different problem)
    Uninstalled SQL 2012 Express
    Uninstalled SQL 2012 Enterprise
    Removed anything with "SQL" in the name from Control Panel "Programs and Features"
    Installed SQL 2012 Enterprise without Analysis Services (this is where I noticed the SQL Browser service was failing to start even during the install)
    Added the Analysis Services feature (and everything else) via the installer (Browser continued to fail to start during the install)

    Other interesting facts: opening a command window as administrator and trying to run sqlbrowser.exe manually yielded:

    Microsoft Windows [Version 6.1.7601]
    Copyright (c) 2009 Microsoft Corporation. All rights reserved.

    C:\Windows\system32>cd C:\Program Files (x86)\Microsoft SQL Server\90\Shared

    C:\Program Files (x86)\Microsoft SQL Server\90\Shared>sqlbrowser.exe -c
    SQLBrowser: starting up in console mode
    SQLBrowser: starting up SSRP redirection service
    SQLBrowser: failed starting SSRP redirection services -- shutting down.
    SQLBrowser: starting up OLAP redirection service
    SQLBrowser: Stopping the OLAP redirector

    C:\Program Files (x86)\Microsoft SQL Server\90\Shared>

    As I try to repair the install, it errors out saying:

    The following error has occurred: Service 'SQLBrowser' start request failed. Click 'Retry' to retry the failed action, or click 'Cancel' to cancel this action and continue setup. For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    Clicking Retry fails every time. When clicking Cancel, I get:

    The following error has occurred: SQL Server Browser configuration for feature 'SQL_Browser_Redist_SqlBrowser_Cpu32' was cancelled by user after a previous installation failure. The last attempted step: Starting the SQL Server Browser service 'SQLBrowser', and waiting for up to '900' seconds for the process to complete.
    For help, click: http://go.microsoft.com/fwlink?LinkID=20476&ProdName=Microsoft%20SQL%20Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=11.0.2100.60&EvtType=0x4F9BEA51%25400xD3BEBD98%25401211%25401

    When I go to uninstall the SQL Browser from "Programs and Features", it complains:

    Error opening installation log file. Verify that the specified log file location exists and is writable.

    Is there any way I can fix this short of re-imaging my computer and reinstalling from scratch? A possible approach would be to somehow really uninstall everything and delete all files related to SQL... is that a good idea, and how do I do that?

  • Windows 7 inbuilt and 3rd party (de)fragmentation related queries

    - by Karan
    I have a pretty good idea of how files end up getting fragmented. That said, I just copied ~3,200 files of varying sizes (from a few KB to ~20GB) from an external USB HDD to an internal, freshly formatted (under Windows 7 x64), NTFS, 2TB, 5400RPM, WD, SATA, non-system (i.e. secondary) drive, filling it up 57%. Since it should have been very much possible for each file to have been stored in one contiguous block, I expected the drive to be fragmented not more than 1-2% at most after this rather lengthy exercise (unfortunately this older machine doesn't support USB 3.0).

    Windows 7's inbuilt defrag utility told me after a quick analysis that the drive was fragmented only 1% or so, which dovetailed neatly with my expectations. However, just out of curiosity I downloaded and ran the latest portable x64 version of Piriform's Defraggler, and was shocked to see the drive being reported as being ~85% fragmented! The portable version of Auslogics Disk Defrag also agreed with Defraggler, and both clearly expected to grind away for ~10 hours to completely defragment the drive.

    1) How in blazes could the inbuilt and 3rd-party defrag utils disagree so badly? I mean, 10-20% variance is probably understandable, but 1% and 85% are miles apart! This Engineering Windows 7 blog post states:

    In Windows XP, any file that is split into more than one piece is considered fragmented. Not so in Windows Vista if the fragments are large enough – the defragmentation algorithm was changed (from Windows XP) to ignore pieces of a file that are larger than 64MB. As a result, defrag in XP and defrag in Vista will report different amounts of fragmentation on a volume. ... [Please read the entire post so the quote is not taken out of context.]

    Could it simply be that the 3rd-party defrag utils ignore this post-XP change and continue to use analysis algos similar to those XP used?

    2) Assuming that the 3rd-party utils aren't lying about the real extent of fragmentation (which Windows is downplaying post-XP), how could the files have even got fragmented so badly, given they were just copied over afresh to an empty drive?

    3) If vastly differing analysis algos explain the yawning gap, which do I believe? I'm no defrag fanatic for sure, but 85% is enough to make me seriously consider spending 10 hours defragging this drive. On the other hand, 1% reported by Windows' own defragger clearly implies that there is no cause for concern, and defragging would actually have negative consequences (as per the post). Is Windows' assumption valid and should I just let it be, or will there be any noticeable performance gains after running one of the 3rd-party utils for 10 hours straight?

    4) I see that out of the box Windows 7 defrag is scheduled to run weekly. Does anyone know whether it defrags every single time, or only if its analysis reveals a fragmentation percentage over a set threshold? If the latter, what is this threshold, and can it be changed, maybe via a Registry edit?

    Thanks for reading through (my first query on this wonderful site!) and for any helpful replies. Also, if you're answering question #3, please keep in mind that any speed increases post-defragging with 3rd-party utils vis-à-vis Windows' inbuilt program should not include pre-Vista (preferably pre-Win7) examples. Further, examples of programs that made your system boot faster won't help in this case, since this is a non-system drive (although one that'll still be used daily).

  • Use IIS Application Initialization for keeping ASP.NET Apps alive

    - by Rick Strahl
    I've been working quite a bit with Windows Services in recent months, and, well, it turns out that Windows Services are quite a bear to debug, deploy, update and maintain. The process of getting services set up, debugged and updated is a major chore that has to be extensively documented and/or automated specifically. On most projects, when a service is built, people end up scrambling for the right 'process' to use for administration. Web app deployment and maintenance, on the other hand, are common and well understood today, as we are constantly dealing with Web apps; there's plenty of infrastructure and tooling built into Web tools like Visual Studio to facilitate the process. By comparison, Windows Services, or anything self-hosted for that matter, seem convoluted.

    In fact, in a recent blog post I mentioned that on a recent project I'd been using self-hosting for SignalR inside of a Windows Service, because the application is in fact a 'service' that also needs to send out lots of messages via SignalR. But the reality is that it could just as well be an IIS application with a service component that runs in the background. Either way you look at it, it's either a Windows Service with a built-in Web server, or an IIS application running a service application, neither of which follows the standard service or Web app template.

    Personally, I much prefer Web applications. Running inside of IIS, I get all the benefits of the IIS platform, including service lifetime management (crash and restart), controlled shutdowns, the whole security infrastructure including easy certificate support, hot-swapping of code, and the ability to publish directly to IIS from within Visual Studio with ease. Because of these benefits, we set out to move from the self-hosted service into an ASP.NET Web app instead.

    The Missing Link for ASP.NET as a Service: Auto-Loading

    I've had moments in the past where I wanted to run a 'service-like' application in ASP.NET, because when you think about it, it's so much easier to control a Web application remotely. Services are locked into start/stop operations, but if you host inside of a Web app you can write your own ticket and control it from anywhere. In fact, nearly 10 years ago I built a background scheduling application that ran inside of ASP.NET; it worked great, and it's still running, doing its job today.

    The tricky part of running an app as a service inside of IIS, then and now, is how to get IIS and ASP.NET launched so your 'service' stays alive even after an Application Pool reset. Seven years ago I faked it by using a Web monitor (my own West Wind Web Monitor app) that I was running anyway to monitor my various web sites for uptime, and having the monitor ping my 'service' every 20 seconds to effectively keep ASP.NET alive or fire it back up after a reload. I used a simple scheduler class that also includes some logic for 'self-reloading'. Hacky for sure, but it worked reliably.

    Luckily, today it's much easier and more integrated to get IIS to launch ASP.NET as soon as an Application Pool is started, by using the Application Initialization Module. The Application Initialization Module basically allows you to turn on preloading on the Application Pool and the Site/IIS App, which essentially fires a request through the IIS pipeline as soon as the Application Pool has been launched. This means that your ASP.NET app effectively becomes active immediately, Application_Start is fired, and your app stays up and running at all times.
    All the other features, like Application Pool recycling and auto-shutdown after idle time, still work, but IIS will then always immediately re-launch the application.

    Getting started with Application Initialization

    As of IIS 8, Application Initialization is part of the IIS feature set; it is an optional install component in Windows or the Windows Server Role Manager, so make sure you explicitly select it. For IIS 7 and 7.5 there's a separate download available via the Web Platform Installer.

    IIS Configuration for Application Initialization

    Initialization needs to be applied on the Application Pool as well as on the IIS Application level. As of IIS 8, these settings can be made through the IIS Administration console.

    Start with the Application Pool. Here you need to set both Start Automatically, which is always set, and the Start Mode, which should be set to AlwaysRunning. Both have to be set: the Start Automatically flag is true by default and controls the starting of the application pool itself, while the AlwaysRunning flag is required in order to launch the application. Without the latter flag set, the site settings have no effect.

    Now, on the Site/Application level, you can specify whether the site should preload: set the Preload Enabled flag to true.

    At this point, ASP.NET apps should auto-load. This is all that's needed to pre-load the site if all you want is to get your site launched automatically.

    If you want a little more control over the load process, you can add a few more settings to your web.config file that allow you to show a static page while the app is starting up. This can be useful if startup is really slow, so rather than displaying a blank screen while users twiddle their thumbs, you can display a static HTML page instead:

    <system.webServer>
        <applicationInitialization remapManagedRequestsTo="Startup.htm" skipManagedModules="true">
            <add initializationPage="ping.ashx" />
        </applicationInitialization>
    </system.webServer>

    This allows you to specify a page to execute in a dry run. IIS basically fakes the request and pushes it directly into the IIS pipeline without hitting the network. You specify a page, and IIS will fake a request to that page - in this case ping.ashx, which just returns a simple OK string, i.e. a fast pipeline request. This request is run immediately after the Application Pool restart, and while this request is running and your app is warming up, IIS can display an alternate static page - Startup.htm above. So instead of showing users an empty loading page when they click a link on your site, you can optionally show some sort of static status page that says, "we'll be right back". I'm not sure that's such a brilliant idea, since it can be pretty disruptive in some cases. Personally, I think I prefer letting people wait, but at least get the response they were supposed to get back rather than a random page. But it's there if you need it.

    Note that the web.config stuff is optional. If you don't provide it, IIS hits the default site link (/), and even if there's no matching request at the end of that request, it'll still fire the request through the IIS pipeline.
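    The handler itself isn't shown in the post; as a rough sketch (the class name and details are illustrative, not taken from the original project), a ping.ashx that "just returns a simple OK string" could be as small as this:

    <%@ WebHandler Language="C#" Class="PingHandler" %>

    using System.Web;

    // Minimal warm-up endpoint: does no real work, it only proves that
    // the managed pipeline is up and returns a tiny response.
    public class PingHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write("OK");
        }

        // Stateless, so one instance can be reused across requests
        public bool IsReusable
        {
            get { return true; }
        }
    }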
    Ideally, though, you want to make sure that an ASP.NET endpoint is hit, either with your default page or by specifying the initializationPage, to ensure ASP.NET actually gets hit, since it's possible for IIS to fire unmanaged requests only for static pages (depending on how your pipeline is configured).

    What about AppDomain Restarts?

    In addition to full worker-process recycles at the IIS level, ASP.NET also has to deal with AppDomain shutdowns, which can occur for a variety of reasons:

    Files are updated in the BIN folder
    Web Deploy to your site
    web.config is changed
    Hard application crash

    These operations don't cause the worker process to restart, but they do cause ASP.NET to unload the current AppDomain and start up a new one. Because the features above apply only to Application Pool restarts, AppDomain restarts could also cause your 'ASP.NET service' to stop processing in the background.

    In order to keep the app running on AppDomain recycles, you can resort to a simple ping in the Application_End event:

    protected void Application_End()
    {
        var client = new WebClient();
        var url = App.AdminConfiguration.MonitorHostUrl + "ping.aspx";
        client.DownloadString(url);
        Trace.WriteLine("Application Shut Down Ping: " + url);
    }

    which fires any ASP.NET URL to the current site at the very end of the pipeline shutdown, which in turn ensures that the site immediately starts back up.

    Manual Configuration in ApplicationHost.config

    The above UI corresponds to the following ApplicationHost.config settings. If you're using IIS 7, there's no UI for these flags, so you'll have to edit them manually. When you install the Application Initialization component into IIS, it should auto-configure the module into ApplicationHost.config. Unfortunately for me, with Mr. Murphy in his best form, the module registration did not occur and I had to add it manually:

    <globalModules>
        <add name="ApplicationInitializationModule" image="%windir%\System32\inetsrv\warmup.dll" />
    </globalModules>

    Most likely you won't ever need to add this, but if things are not working it's worth checking whether the module is actually registered.

    Next, you need to configure the Application Pool and the Web site. The following are the two relevant entries in ApplicationHost.config:

    <system.applicationHost>
        <applicationPools>
            <add name="West Wind West Wind Web Connection" autoStart="true" startMode="AlwaysRunning"
                 managedRuntimeVersion="v4.0" managedPipelineMode="Integrated">
                <processModel identityType="LocalSystem" setProfileEnvironment="true" />
            </add>
        </applicationPools>
        <sites>
            <site name="Default Web Site" id="1">
                <application path="/MPress.Workflow.WebQueueMessageManager"
                             applicationPool="West Wind West Wind Web Connection"
                             preloadEnabled="true">
                    <virtualDirectory path="/" physicalPath="C:\Clients\…" />
                </application>
            </site>
        </sites>
    </system.applicationHost>

    On the Application Pool, make sure to set the autoStart and startMode flags to true and AlwaysRunning, respectively. On the site, make sure to set the preloadEnabled flag to true. And that's all you should need; you can still set the web.config settings described above as well.

    ASP.NET as a Service?

    In the particular application I'm working on currently, we have a queue manager that runs as a standalone service, polls a database queue, picks out jobs, and processes them on several threads. The service can spin up any number of threads and keep those threads alive in the background while IIS is running, doing its own thing. These threads are newly created threads, so they sit completely outside of the IIS thread pool.
    In order for this service to work, all it needs is a long-running reference that keeps it alive for the lifetime of the application. In this particular app there are two components that run in the background on their own threads: a scheduler that runs various scheduled tasks and handles things like picking up emails to send outside of IIS's scope, and the QueueManager. Here's what this looks like in global.asax:

    public class Global : System.Web.HttpApplication
    {
        private static ApplicationScheduler scheduler;
        private static ServiceLauncher launcher;

        protected void Application_Start(object sender, EventArgs e)
        {
            // Pings the service and ensures it stays alive
            scheduler = new ApplicationScheduler()
            {
                CheckFrequency = 600000
            };
            scheduler.Start();

            launcher = new ServiceLauncher();
            launcher.Start();

            // register so shutdown is controlled
            HostingEnvironment.RegisterObject(launcher);
        }
    }

    By keeping these objects around as static instances that are set only once on startup, they survive the lifetime of the application. The code in these classes is essentially unchanged from the Windows Service code, except that I could remove the various overrides required for the Windows Service interface (OnStart, OnStop, OnResume, etc.). Otherwise, the behavior and operation are very similar.

    In this application, ASP.NET serves two purposes: it acts as the host for SignalR, and it provides the administration interface which allows remote management of the 'service'. I can start and stop the service remotely by shutting down the ApplicationScheduler very easily. I can also very easily feed stats from the queue out directly via a couple of Web requests, or (as we do now) through the SignalR service.

    Registering a Background Object with ASP.NET

    Notice also the use of HostingEnvironment.RegisterObject(). This function registers an object with ASP.NET to let it know that it's a background task that should be notified if the AppDomain shuts down. RegisterObject() requires an interface with a Stop() method that's fired and allows your code to respond to a shutdown request. Here's what the IRegisteredObject.Stop() method looks like on the launcher:

    public void Stop(bool immediate = false)
    {
        LogManager.Current.LogInfo("QueueManager Controller Stopped.");

        Controller.StopProcessing();
        Controller.Dispose();

        Thread.Sleep(1500); // give background threads some time

        HostingEnvironment.UnregisterObject(this);
    }

    Implementing IRegisteredObject should help with reliability on AppDomain shutdowns. Thanks to Justin Van Patten for pointing this out to me on Twitter. RegisterObject() is not required, but I would highly recommend implementing it on whatever object controls your background processing, to allow clean shutdowns when the AppDomain shuts down.
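    To make the pattern concrete, here is a minimal sketch of a registered background object (purely illustrative - the class name and worker loop are hypothetical, not the ServiceLauncher from this project):

    using System.Threading;
    using System.Web.Hosting;

    // Sketch: a background 'service' object that ASP.NET can shut down cleanly.
    public class BackgroundLauncher : IRegisteredObject
    {
        private Thread worker;
        private volatile bool stopRequested;

        public void Start()
        {
            // Tell ASP.NET about this background object so Stop() is
            // called when the AppDomain shuts down
            HostingEnvironment.RegisterObject(this);

            worker = new Thread(() =>
            {
                while (!stopRequested)
                {
                    // ... poll a queue, run scheduled work, etc. ...
                    Thread.Sleep(1000);
                }
            });
            worker.IsBackground = true;
            worker.Start();
        }

        // Called by ASP.NET when the AppDomain is shutting down
        public void Stop(bool immediate)
        {
            stopRequested = true;
            worker.Join(1500); // give the worker a moment to finish
            HostingEnvironment.UnregisterObject(this);
        }
    }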
    Testing it out

    I'm still in the testing phase with this particular service to see if there are any side effects, but so far it doesn't look like it. With about 50 lines of code I was able to replace the Windows Service startup with Web startup - everything else just worked as is. An honorable mention goes to SignalR 2.0's OWIN hosting: with the new OWIN-based hosting, no code changes at all were required, merely a couple of configuration file settings and an assembly directive to point at the SignalR startup class. Sweet! It also seems like SignalR is noticeably faster running inside of IIS compared to self-host, and startup feels faster because of the preload.

    Starting and Stopping the 'Service'

    Because the application is running as a Web server, it's easy to have a Web interface for starting and stopping the services running inside of it. For our queue manager, the SignalR service and front monitoring app have play and stop buttons for toggling the queue. If you want more administrative control and want it to work more like a Windows Service, you can also stop the application pool explicitly from the command line, which is equivalent to stopping and restarting a service.

    To start and stop from the command line you can use the IIS appcmd tool. To stop:

    > %windir%\system32\inetsrv\appcmd stop apppool /apppool.name:"Weblog"

    and to start:

    > %windir%\system32\inetsrv\appcmd start apppool /apppool.name:"Weblog"

    Note that when you explicitly force the AppPool to stop running, either in the UI (on the Application Pools page use Start/Stop) or via command-line tools, the application pool will not auto-restart immediately; you have to start it back up manually.

    What's not to like?

    There are certainly a lot of benefits to running a background service in IIS, but ASP.NET applications do have more overhead in terms of memory footprint, and startup time is a little slower; generally, for server applications, this is not a big deal. If the application is stable, the service should fire up and stay running indefinitely. A lot of times this kind of service interface can simply be attached to an existing Web application, or, if scalability requires, be offloaded to its own Web server.

    Easier to work with

    But the ultimate benefit here is that it's much easier to work with a Web app as opposed to a service. While developing, I can simply turn off the auto-launch features and launch the service on demand through IIS, simply by hitting a page on the site. If I want to shut it down, an IISRESET -stop will shut down the service easily enough. I can then attach a debugger anywhere I want, and this works like any other ASP.NET application. Yes, you end up on a background thread for debugging, but Visual Studio handles that just fine, and if you stay on a single thread this is no different than debugging any other code.

    Summary

    Using ASP.NET to run background service operations is probably not a super-common scenario, but it should probably be considered carefully when building services. Many applications have service-like features, and with the auto-start functionality of the Application Initialization module it's easy to build this functionality into ASP.NET. Especially when combined with the notification features of SignalR, it becomes very easy to create rich services that can also communicate their status to the outside world.

    Whether it's an existing application that needs some background processing for scheduling-related tasks, or a separate site created just to host your service, it's easy to do, and you can leverage the same tool chain you're already using for other Web projects.
    If you have lots of service projects it's worth considering… give it some thought…

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in ASP.NET  SignalR  IIS

  • Oracle BI Server Modeling, Part 1- Designing a Query Factory

    - by bob.ertl(at)oracle.com
    Welcome to Oracle BI Development's BI Foundation blog, focused on helping you get the most value from your Oracle Business Intelligence Enterprise Edition (BI EE) platform deployments. In my first series of posts, I plan to show developers the concepts and best practices for modeling in the Common Enterprise Information Model (CEIM), the semantic layer of Oracle BI EE. In this segment, I will lay the groundwork for the modeling concepts. First, I will cover the big picture of how the BI Server fits into the system, and how the CEIM controls the query processing.

    Oracle BI EE Query Cycle

    The purpose of the Oracle BI Server is to bridge the gap between the presentation services and the data sources. There are typically a variety of data sources in a variety of technologies: relational, normalized transaction systems; relational star-schema data warehouses and marts; multidimensional analytic cubes and financial applications; flat files, Excel files, XML files, and so on. Business datasets can reside in a single type of source or, most of the time, are spread across various types of sources.

    Presentation services users are generally business people who need to be able to query that set of sources without any knowledge of technologies, schemas, or how sources are organized in their company. They think of business analysis in terms of measures with specific calculations, hierarchical dimensions for breaking those measures down, and detailed reports of the business transactions themselves. Most of them create queries without knowing it, by picking a dashboard page and some filters. Others create their own analyses by selecting metrics and dimensional attributes, and possibly creating additional calculations.

    The BI Server bridges that gap from simple business terms to technical physical queries by exposing just the business-focused measures and dimensional attributes that business people can use in their analyses and dashboards. After they make their selections and start the analysis, the BI Server plans the best way to query the data sources, writes the optimized sequence of physical queries to those sources, post-processes the results, and presents them to the client as a single result set suitable for tables, pivots and charts.

    The CEIM is a model that controls the processing of the BI Server. It provides the subject areas that presentation services exposes for business users to select simplified metrics and dimensional attributes for their analysis. It models the mappings to the physical data access, the calculations and logical transformations, and the data access security rules. The CEIM consists of metadata stored in the repository, authored by developers using the Administration Tool client.

    Presentation services and other query clients create their queries in BI EE's SQL-92 language, called Logical SQL or LSQL. The API simply uses ODBC or JDBC to pass the query to the BI Server. Presentation services writes the LSQL query in terms of the simplified objects presented to the users. The BI Server creates a query plan and rewrites the LSQL into fully detailed SQL or other languages suitable for querying the physical sources. For example, the first query below (Logical SQL) was rewritten into the physical SQL for an Oracle 11g database that follows it.
    Logical SQL:

    SELECT "D0 Time"."T02 Per Name Month" saw_0,
           "D4 Product"."P01 Product" saw_1,
           "F2 Units"."2-01 Billed Qty (Sum All)" saw_2
    FROM "Sample Sales"
    ORDER BY saw_0, saw_1

    Physical SQL:

    WITH SAWITH0 AS (
        select T986.Per_Name_Month as c1,
               T879.Prod_Dsc as c2,
               sum(T835.Units) as c3,
               T879.Prod_Key as c4
        from Product T879 /* A05 Product */,
             Time_Mth T986 /* A08 Time Mth */,
             FactsRev T835 /* A11 Revenue (Billed Time Join) */
        where (T835.Prod_Key = T879.Prod_Key
               and T835.Bill_Mth = T986.Row_Wid)
        group by T879.Prod_Dsc, T879.Prod_Key, T986.Per_Name_Month
    )
    select SAWITH0.c1 as c1,
           SAWITH0.c2 as c2,
           SAWITH0.c3 as c3
    from SAWITH0
    order by c1, c2

    Probably everybody reading this blog can write SQL or MDX. However, the trick in designing the CEIM is that you are modeling a query-generation factory. Rather than hand-crafting individual queries, you model behavior and relationships, thus configuring the BI Server machinery to manufacture millions of different queries in response to random user requests. This mass production requires a different mindset and approach than when you are designing individual SQL statements in tools such as Oracle SQL Developer, Oracle Hyperion Interactive Reporting (formerly Brio), or Oracle BI Publisher.

    The Structure of the Common Enterprise Information Model (CEIM)

    The CEIM has a unique structure specifically for modeling the relationships and behaviors that fill the gap from logical user requests to physical data-source queries and back to the result. The model divides the functionality into three specialized layers, called Presentation, Business Model and Mapping, and Physical, as shown below.

    Presentation services clients can generally only see the presentation layer, and the objects in the presentation layer are normally the only ones used in the LSQL request. When a request comes into the BI Server from presentation services or another client, the relationships and objects in the model allow the BI Server to select the appropriate data sources, create a query plan, and generate the physical queries. That's the left-to-right flow in the diagram below. When the results come back from the data-source queries, the right-to-left relationships in the model show how to transform the results and perform any final calculations and functions that could not be pushed down to the databases.

    Business Model

    Think of the business model as the heart of the CEIM you are designing. This is where you define the analytic behavior seen by the users, and the superset library of metric and dimension objects available to the user community as a whole. It also provides the baseline business-friendly names and user-readable dictionary. For these reasons, it is often called the "logical" model - it is a virtual database schema that persists no data, but can be queried as if it were a database. The business model always has a dimensional shape (more on this in future posts), and its simple shape and terminology hide the complexity of the source data models.

    Besides hiding complexity and normalizing terminology, this layer adds most of the analytic value as well. This is where you define the rich, dimensional behavior of the metrics and complex business calculations, as well as the conformed dimensions and hierarchies.
    It contributes to the ease of use for business users, since the dimensional metric definitions apply in any context of filters and drill-downs, and the conformed dimensions enable dashboard-wide filters and guided analysis links that bring context along from one page to the next. The conformed dimensions also provide a key to hiding the complexity of many sources, including federation of different databases, behind the simple business model.

    Note that the expression language in this layer is LSQL, so that any expression can be rewritten into any data source's query language at run time. This is important for federation, where a given logical object can map to several different physical objects in different databases. It is also important for portability of the CEIM to different database brands, which is a key requirement for Oracle's BI Applications products.

    Your requirements process with your user community will mostly affect the business model. This is where you will define most of the things they specifically ask for, such as metric definitions. For this reason, many of the best-practice methodologies of our consulting partners start with the high-level definition of this layer.

    Physical Model

    The physical model connects the business model that meets your users' requirements to the reality of the data sources you have available. In the query-factory analogy, think of the physical layer as the bill of materials for generating physical queries. Every schema, table, column, join, cube, hierarchy, etc., that will appear in any physical query manufactured at run time must be modeled here at design time.

    Each physical data source will have its own physical model, or "database" object, in the CEIM. The shape of each physical model matches the shape of its physical source. In other words, if the source is normalized relational, the physical model will mimic that normalized shape. If it is a hypercube, the physical model will have a hypercube shape. If it is a flat file, it will have a denormalized tabular shape.

    To aid in query optimization, the physical layer also tracks the specifics of the database brand and release. This allows the BI Server to make the most of each physical source's distinct capabilities, writing queries in its syntax and using its specific functions. This lets the BI Server push processing work as deep as possible into the physical source, which minimizes data movement and takes full advantage of the database's own optimizer. For most data sources, native APIs are used to further optimize performance and functionality.

    The value of having a distinct separation between the logical (business) and physical models is encapsulation of the physical characteristics. This encapsulation is another enabler of packaged BI applications and federation. It is also key to hiding the complex shapes and relationships in the physical sources from the end users. Consider a routine drill-down in the business model: physically, it can require a drill-through where the first query is MDX to a multidimensional cube, followed by the drill-down query in SQL to a normalized relational database. The only difference from the user's point of view is that the second query added a more detailed dimension-level column - everything else was the same.

    Mappings

    Within the Business Model and Mapping layer, the mappings provide the binding from each logical column and join in the dimensional business model to each of the objects that can provide its data in the physical layer.
    When there is more than one option for a physical source, rules in the mappings are applied to the query context to determine which of the data sources should be hit, and how to combine their results if more than one is used. These rules specify aggregate navigation, vertical partitioning (fragmentation), and horizontal partitioning, any of which can be federated across multiple, heterogeneous sources. These mappings are usually the most sophisticated part of the CEIM.

    Presentation

    You might think of the presentation layer as a set of very simple relational-like views into the business model. Over ODBC/JDBC, they present a relational catalog consisting of databases, tables and columns. For business users, presentation services interprets these as subject areas, folders and columns, respectively. (Note that in 10g, subject areas were called presentation catalogs in the CEIM. In this blog, I will stick to 11g terminology.) Generally speaking, presentation services and other clients can query only these objects (there are exceptions for certain clients such as BI Publisher and Essbase Studio).

    The purpose of the presentation layer is to specialize the business model for different categories of users. Based on a user's role, they will be restricted to specific subject areas, tables and columns for security. The breakdown of the model into multiple subject areas organizes the content for users, and subjects superfluous to a particular business role can be hidden from that set of users. Customized names and descriptions can be used to override the business model names for a specific audience. Variables in the object names can be used for localization.

    For these reasons, you are better off thinking of the tables in the presentation layer as folders than as strict relational tables. The real semantics of tables and how they function are in the business model, and any grouping of columns can be included in any table in the presentation layer. In 11g, an LSQL query can also span multiple presentation subject areas, as long as they map to the same business model.

    Other Model Objects

    There are some objects that apply to multiple layers. These include security-related objects, such as application roles, users, data filters, and query limits (governors). There are also variables you can use in parameters and expressions, and initialization blocks for loading their initial values on a static or user-session basis. Finally, there are Multi-User Development (MUD) projects for developers to check out units of work, and objects for the marketing feature used by our packaged customer relationship management (CRM) software.

    The Query Factory

    At this point, you should have a grasp of the query-factory concept. When developing the CEIM model, you are configuring the BI Server to automatically manufacture millions of queries in response to random user requests. You do this by defining the analytic behavior in the business model, mapping that to the physical data sources, and exposing it through the presentation layer's role-based subject areas. While configuring mass production requires a different mindset than when you hand-craft individual SQL or MDX statements, it builds on the modeling and query concepts you already understand.

    The following posts in this series will walk through the CEIM modeling concepts and best practices in detail.
We will initially review dimensional concepts so you can understand the business model, and then present a pattern-based approach to learning the mappings from a variety of physical schema shapes and deployments to the dimensional model.  Along the way, we will also present the dimensional calculation template, and learn how to configure the many additivity patterns.

  • RPi and Java Embedded GPIO: Writing Java code to blink LED

    - by hinkmond
    So, you've followed the previous steps to install Java Embedded on your Raspberry Pi, you went to Fry's and picked up some jumper wires, LEDs, and resistors, you hooked up the wires, LED, and resistor to the correct pins, and now you want to start programming in Java on your RPi? Yes? OK, then... here we go. You can use the following source code to blink your first LED on your RPi using Java. In the code you can see that I'm not using any complicated GPIO libraries like wiringpi or pi4j, and I'm not doing any low-level pin manipulation like you can in C. And I'm not using python (hell no!). This is Java programming, so we keep it simple (and more readable) than those other programming languages.

    In the Java code, I'm opening up the RPi Debian Wheezy well-defined file handles to control the GPIO ports. First I'm resetting everything using the unexport/export file handles. (On the RPi, if you open the well-defined file handles and write certain ASCII text to them, you can drive your GPIO to perform certain operations. See this GPIO reference.) Next, I write a "1" then a "0" to the value file handle of the GPIO0 port (see the previous pinout diagram). That makes the LED blink. Then, I loop to infinity. Easy, huh?

    /*
     * Java Embedded Raspberry Pi GPIO app
     */
    package jerpigpio;

    import java.io.FileWriter;

    /**
     *
     * @author hinkmond
     */
    public class JerpiGPIO {

        static final String GPIO_OUT = "out";
        static final String GPIO_ON = "1";
        static final String GPIO_OFF = "0";
        static final String GPIO_CH00 = "0";

        /**
         * @param args the command line arguments
         */
        public static void main(String[] args) {
            FileWriter commandFile;
            try {
                /*** Init GPIO port for output ***/

                // Open file handles to GPIO port unexport and export controls
                FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
                FileWriter exportFile = new FileWriter("/sys/class/gpio/export");

                // Reset the port
                unexportFile.write(GPIO_CH00);
                unexportFile.flush();

                // Set the port for use
                exportFile.write(GPIO_CH00);
                exportFile.flush();

                // Open file handle to port input/output control
                FileWriter directionFile = new FileWriter("/sys/class/gpio/gpio" + GPIO_CH00 + "/direction");

                // Set port for output
                directionFile.write(GPIO_OUT);
                directionFile.flush();

                /*--- Send commands to GPIO port ---*/

                // Open file handle to issue commands to GPIO port
                commandFile = new FileWriter("/sys/class/gpio/gpio" + GPIO_CH00 + "/value");

                // Loop forever
                while (true) {
                    // Set GPIO port ON
                    commandFile.write(GPIO_ON);
                    commandFile.flush();

                    // Wait for a while
                    java.lang.Thread.sleep(200);

                    // Set GPIO port OFF
                    commandFile.write(GPIO_OFF);
                    commandFile.flush();

                    // Wait for a while
                    java.lang.Thread.sleep(200);
                }
            } catch (Exception exception) {
                exception.printStackTrace();
            }
        }
    }

    Hinkmond

  • Improving WIF's Claims-based Authorization - Part 1

    - by Your DisplayName here!
    As mentioned in my last post, I made several additions to WIF's built-in authorization infrastructure to make it more flexible and easier to use. The foundation for all this work is that you have to be able to directly call the registered ClaimsAuthorizationManager. The following snippet is the universal way to get to the WIF configuration that is currently in effect:

    public static ServiceConfiguration ServiceConfiguration
    {
        get
        {
            if (OperationContext.Current == null)
            {
                // no WCF
                return FederatedAuthentication.ServiceConfiguration;
            }

            // search message property
            if (OperationContext.Current.IncomingMessageProperties.ContainsKey("ServiceConfiguration"))
            {
                var configuration = OperationContext.Current.IncomingMessageProperties["ServiceConfiguration"] as ServiceConfiguration;
                if (configuration != null)
                {
                    return configuration;
                }
            }

            // return configuration from configuration file
            return new ServiceConfiguration();
        }
    }

    From here you can grab ServiceConfiguration.ClaimsAuthorizationManager, which gives you direct access to the CheckAccess method (and thus control over claim types and values). I then created the following wrapper methods:

    public static bool CheckAccess(string resource, string action)
    {
        return CheckAccess(resource, action, Thread.CurrentPrincipal as IClaimsPrincipal);
    }

    public static bool CheckAccess(string resource, string action, IClaimsPrincipal principal)
    {
        var context = new AuthorizationContext(principal, resource, action);
        return AuthorizationManager.CheckAccess(context);
    }

    public static bool CheckAccess(Collection<Claim> actions, Collection<Claim> resources)
    {
        return CheckAccess(new AuthorizationContext(
            Thread.CurrentPrincipal.AsClaimsPrincipal(), resources, actions));
    }

    public static bool CheckAccess(AuthorizationContext context)
    {
        return AuthorizationManager.CheckAccess(context);
    }

    I also created the same set of methods but called DemandAccess. They internally use CheckAccess and will throw a SecurityException when false is returned. All the code is part of Thinktecture.IdentityModel on CodePlex - or via NuGet (Install-Package Thinktecture.IdentityModel).
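    The DemandAccess variants aren't shown in this post, but based on the description above (CheckAccess plus a SecurityException on failure), a minimal sketch of one overload might look like this (illustrative, not the actual Thinktecture.IdentityModel code):

    public static void DemandAccess(string resource, string action)
    {
        // Delegate to CheckAccess and turn a negative result into a
        // System.Security.SecurityException
        if (!CheckAccess(resource, action))
        {
            throw new SecurityException(string.Format(
                "Access denied for action '{0}' on resource '{1}'.", action, resource));
        }
    }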

  • The WaitForAll Roadshow

    - by adweigert
    OK, so I took for granted some imaginative uses of WaitForAll, but lacking that, here is how I am using it. First, I have a nice little class called Parallel that allows me to spin together a list of tasks (actions) and then use WaitForAll. So here it is - WaitForAll's 15 minutes of fame. First, Parallel, which allows me to spin together several Action delegates to execute, well, in parallel:

    public static class Parallel
    {
        public static ParallelQuery<Action> Task(Action action)
        {
            return new Action[] { action }.AsParallel();
        }

        public static ParallelQuery<Action<T>> Task<T>(Action<T> action)
        {
            return new Action<T>[] { action }.AsParallel();
        }

        public static ParallelQuery<Action> Task(this ParallelQuery<Action> actions, Action action)
        {
            var list = new List<Action>(actions);
            list.Add(action);
            return list.AsParallel();
        }

        public static ParallelQuery<Action<T>> Task<T>(this ParallelQuery<Action<T>> actions, Action<T> action)
        {
            var list = new List<Action<T>>(actions);
            list.Add(action);
            return list.AsParallel();
        }
    }

    Next, an example of its usage from an app I'm working on that renders some basic computer information via WMI and performance counters. The WMI calls can be expensive given the distance and link speed of some of the computers it will be trying to communicate with. This is the actual MVC action from my controller that returns the data for an individual computer:

    public PartialViewResult Detail(string computerName)
    {
        var computer = this.Computers.Get(computerName);
        var perf = Factory.GetInstance();
        var detail = new ComputerDetailViewModel() { Computer = computer };

        try
        {
            var work = Parallel
                .Task(delegate
                {
                    // Win32_ComputerSystem
                    var key = computer.Name + "_Win32_ComputerSystem";
                    var system = this.Cache.Get<Win32_ComputerSystem>(key);
                    if (system == null)
                    {
                        using (var impersonation = computer.ImpersonateElevatedIdentity())
                        {
                            system = computer.GetWmiContext().GetInstances<Win32_ComputerSystem>().Single();
                        }
                        this.Cache.Set(key, system);
                    }
                    detail.TotalMemory = system.TotalPhysicalMemory;
                    detail.Manufacturer = system.Manufacturer;
                    detail.Model = system.Model;
                    detail.NumberOfProcessors = system.NumberOfProcessors;
                })
                .Task(delegate
                {
                    // Win32_OperatingSystem
                    var key = computer.Name + "_Win32_OperatingSystem";
                    var os = this.Cache.Get<Win32_OperatingSystem>(key);
                    if (os == null)
                    {
                        using (var impersonation = computer.ImpersonateElevatedIdentity())
                        {
                            os = computer.GetWmiContext().GetInstances<Win32_OperatingSystem>().Single();
                        }
                        this.Cache.Set(key, os);
                    }
                    detail.OperatingSystem = os.Caption;
                    detail.OSVersion = os.Version;
                })
                // Performance counters
                .Task(delegate
                {
                    using (var impersonation = computer.ImpersonateElevatedIdentity())
                    {
                        detail.AvailableBytes = perf.GetSample(computer, "Memory", "Available Bytes");
                    }
                })
                .Task(delegate
                {
                    using (var impersonation = computer.ImpersonateElevatedIdentity())
                    {
                        detail.TotalProcessorUtilization = perf.GetValue(computer, "Processor", "% Processor Time", "_Total");
                    }
                })
                .WithExecutionMode(ParallelExecutionMode.ForceParallelism);

            if (!work.WaitForAll(TimeSpan.FromSeconds(15), task => task()))
            {
                return PartialView("Timeout");
            }
        }
        catch (Exception ex)
        {
            this.LogException(ex);
            return PartialView("Error.ascx");
        }

        return PartialView(detail);
    }
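    The WaitForAll extension itself isn't shown here (it comes from an earlier post). Judging only from the call site above - a timeout plus a delegate that invokes each task, returning false on timeout - a sketch of its shape might be (hypothetical, not the original implementation):

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    public static class WaitForAllExtensions
    {
        // Sketch: run 'invoker' for each element in parallel and report
        // whether everything finished within 'timeout'.
        public static bool WaitForAll<T>(this ParallelQuery<T> source,
                                         TimeSpan timeout, Action<T> invoker)
        {
            var tasks = source
                .Select(item => Task.Run(() => invoker(item)))
                .ToArray();
            return Task.WaitAll(tasks, timeout);
        }
    }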

  • Platform jumping problems with AABB collisions

    - by Vee
    See the diagram first:

    When my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "pushing out" the entity on that axis. Considering the "jumping moving left" example: if velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air). If velocityX is smaller than velocityY (not shown in the diagram), the program works as intended, because AABB pushes the entity out on the X axis. How can I solve this problem?

    Source code:

    public void Update()
    {
        Position += Velocity;
        Velocity += World.Gravity;

        List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);

        for (int i = 0; i < toCheck.Count; i++)
        {
            SSSPBody body = toCheck[i];
            body.Test.Color = Color.White;

            if (body != this && body.Static)
            {
                float left = (body.CornerMin.X - CornerMax.X);
                float right = (body.CornerMax.X - CornerMin.X);
                float top = (body.CornerMin.Y - CornerMax.Y);
                float bottom = (body.CornerMax.Y - CornerMin.Y);

                if (SSSPUtils.AABBIsOverlapping(this, body))
                {
                    body.Test.Color = Color.Yellow;
                    Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                    Position += overlapVector;
                }

                if (SSSPUtils.AABBIsCollidingTop(this, body))
                {
                    if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                        (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                    {
                        body.Test.Color = Color.Red;
                        Velocity = new Vector2(Velocity.X, 0);
                    }
                }
            }
        }
    }

    public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
    {
        if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X)
            return false;
        if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y)
            return false;
        return true;
    }

    public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
    {
        if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
            return false;
        if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
            return false;
        return true;
    }

    public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
    {
        if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
            return false;
        if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
            return false;
        if (mBody1.CornerMax.Y == mBody2.CornerMin.Y)
            return true;
        return false;
    }

    public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
    {
        Vector2 result = new Vector2(0, 0);

        if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0))
            return result;

        if (Math.Abs(mLeft) < mRight)
            result.X = mLeft;
        else
            result.X = mRight;

        if (Math.Abs(mTop) < mBottom)
            result.Y = mTop;
        else
            result.Y = mBottom;

        if (Math.Abs(result.X) < Math.Abs(result.Y))
            result.Y = 0;
        else
            result.X = 0;

        return result;
    }

  • Server 2003 SP2 BSOD caused by fltmgr.sys

    - by MasterMax1313
    I'm running into a problem where a Server 2003 SP2 box has started crashing roughly once an hour, BSODing out with the message that fltmgr.sys is probably the cause. I ran dumpchk.exe on the memory.dmp file, indicating the same thing. Any thoughts on typical root causes? The following is the error code I'm seeing: Error code 0000007e, parameter1 c0000005, parameter2 f723e087, parameter3 f78cea8c, parameter4 f78ce788. After running dumpchk on the memory.dmp file, I get the following note: Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) The full log is here: Microsoft (R) Windows Debugger Version 6.12.0002.633 X86 Copyright (c) Microsoft Corporation. All rights reserved. Loading Dump File [c:\windows\memory.dmp] Kernel Complete Dump File: Full address space is available Symbol search path is: *** Invalid *** **************************************************************************** * Symbol loading may be unreliable without a symbol search path. * * Use .symfix to have the debugger choose a symbol path. * * After setting your symbol path, use .reload to refresh symbol locations. * **************************************************************************** Executable search path is: ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name: Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 ********************************************************************* * Symbols can not be loaded because symbol path is not initialized. * * * * The Symbol Path can be set by: * * using the _NT_SYMBOL_PATH environment variable. * * using the -y <symbol_path> argument when starting the debugger. * * using .sympath and .sympath+ * ********************************************************************* *** ERROR: Symbol file could not be found. Defaulted to export symbols for ntkrnlpa.exe - Loading Kernel Symbols ............................................................... ................................................. Loading User Symbols Loading unloaded module list ... ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. *** ERROR: Symbol file could not be found. 
Defaulted to export symbols for fltmgr.sys - --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- ----- 32 bit Kernel Full Dump Analysis DUMP_HEADER32: MajorVersion 0000000f MinorVersion 00000ece KdSecondaryVersion 00000000 DirectoryTableBase 004e7000 PfnDataBase 81600000 PsLoadedModuleList 8089ffa8 PsActiveProcessHead 808a61c8 MachineImageType 0000014c NumberProcessors 00000001 BugCheckCode 0000007e BugCheckParameter1 c0000005 BugCheckParameter2 f723e087 BugCheckParameter3 f78dea8c BugCheckParameter4 f78de788 PaeEnabled 00000001 KdDebuggerDataBlock 8088e3e0 SecondaryDataState 00000000 ProductType 00000003 SuiteMask 00000110 Physical Memory Description: Number of runs: 3 (limited to 3) FileOffset Start Address Length 00001000 0000000000001000 0009e000 0009f000 0000000000100000 bfdf0000 bfe8f000 00000000bff00000 00100000 Last Page: 00000000bff8e000 00000000bffff000 KiProcessorBlock at 8089f300 1 KiProcessorBlock entries: ffdff120 Windows Server 2003 Kernel Version 3790 (Service Pack 2) UP Free x86 compatible Product: Server, suite: TerminalServer SingleUserTS Built by: 3790.srv03_sp2_gdr.101019-0340 Machine Name:*** ERROR: Module load completed but symbols could not be loaded for srv.sys Kernel base = 0x80800000 PsLoadedModuleList = 0x8089ffa8 Debug session time: Wed Oct 5 08:48:04.803 2011 (UTC - 4:00) System Uptime: 0 days 14:25:12.085 start end module name 80800000 80a50000 nt Tue Oct 19 10:00:49 2010 (4CBDA491) 80a50000 80a6f000 hal Sat Feb 17 00:48:25 2007 (45D69729) b83d4000 b83fe000 Fastfat Sat Feb 17 01:27:55 2007 (45D6A06B) b8476000 b84a1000 RDPWD Sat Feb 17 00:44:38 2007 (45D69646) b8549000 b8554000 TDTCP Sat Feb 17 00:44:32 2007 (45D69640) b8fe1000 b9045000 srv Thu Feb 17 11:58:17 2011 (4D5D53A9) b956d000 b95be000 HTTP Fri Nov 06 07:51:22 2009 (4AF41BCA) b9816000 b982d780 hgfs Tue Aug 12 20:36:54 2008 (48A22CA6) b9b16000 b9b20000 ndisuio Sat Feb 17 00:58:25 2007 (45D69981) b9cf6000 b9d1ac60 iwfsd Wed Sep 29 01:43:59 2004 (415A4B9F) b9e5b000 b9e62000 parvdm Tue Mar 25 03:03:49 2003 (3E7FFF55) b9e63000 b9e67860 lgtosync Fri Sep 12 04:38:13 2003 (3F6185F5) b9ed3000 b9ee8000 Cdfs Sat Feb 17 01:27:08 2007 (45D6A03C) b9f10000 b9f2e000 EraserUtilRebootDrv Thu Jul 07 21:45:11 2011 (4E166127) b9f2e000 b9f8c000 eeCtrl Thu Jul 07 21:45:11 2011 (4E166127) b9f8c000 b9f9d000 Fips Sat Feb 17 01:26:33 2007 (45D6A019) b9f9d000 ba013000 mrxsmb Fri Feb 18 10:22:23 2011 (4D5E8EAF) ba013000 ba043000 rdbss Wed Feb 24 10:54:03 2010 (4B854B9B) ba043000 ba0ad000 SPBBCDrv Mon Dec 14 23:39:00 2009 (4B2712E4) ba0ad000 ba0d7000 afd Thu Feb 10 08:42:18 2011 (4D53EB3A) ba0d7000 ba108000 netbt Sat Feb 17 01:28:57 2007 (45D6A0A9) ba108000 ba19c000 tcpip Sat Aug 15 05:53:38 2009 (4A8685A2) ba19c000 ba1b5000 ipsec Sat Feb 17 01:29:28 2007 (45D6A0C8) ba275000 ba288600 NAVENG Fri Jul 29 08:10:02 2011 (4E32A31A) ba289000 ba2ae000 SYMEVENT Thu Apr 15 21:31:23 2010 (4BC7BDEB) ba2ae000 ba42d300 NAVEX15 Fri Jul 29 08:07:28 2011 (4E32A280) ba42e000 ba479000 SRTSP Fri Mar 04 15:31:08 2011 (4D714C0C) ba485000 ba487b00 dump_vmscsi Wed Apr 11 13:55:32 2007 (461D2114) ba4e1000 ba540000 update Mon May 28 08:15:16 2007 (465AC7D4) ba568000 ba59f000 rdpdr Sat Feb 17 00:51:00 2007 (45D697C4) ba59f000 ba5b1000 raspptp Sat Feb 17 01:29:20 2007 (45D6A0C0) ba5b1000 ba5ca000 ndiswan Sat Feb 17 01:29:22 2007 (45D6A0C2) ba5da000 ba5e4000 dump_diskdump Sat Feb 17 01:07:44 2007 (45D69BB0) ba66a000 ba67e000 rasl2tp Sat Feb 17 01:29:02 2007 (45D6A0AE) ba67e000 ba69a000 VIDEOPRT Sat Feb 17 
01:10:30 2007 (45D69C56) ba69a000 ba6c1000 ks Sat Feb 17 01:30:40 2007 (45D6A110) ba6c1000 ba6d5000 redbook Sat Feb 17 01:07:26 2007 (45D69B9E) ba6d5000 ba6ea000 cdrom Sat Feb 17 01:07:48 2007 (45D69BB4) ba6ea000 ba6ff000 serial Sat Feb 17 01:06:46 2007 (45D69B76) ba6ff000 ba717000 parport Sat Feb 17 01:06:42 2007 (45D69B72) ba717000 ba72a000 i8042prt Sat Feb 17 01:30:40 2007 (45D6A110) baff0000 baff3700 CmBatt Sat Feb 17 00:58:51 2007 (45D6999B) bf800000 bf9d3000 win32k Thu Mar 03 08:55:02 2011 (4D6F9DB6) bf9d3000 bf9ea000 dxg Sat Feb 17 01:14:39 2007 (45D69D4F) bf9ea000 bf9fec80 vmx_fb Sat Aug 16 07:23:10 2008 (48A6B89E) bf9ff000 bfa4a000 ATMFD Tue Feb 15 08:19:22 2011 (4D5A7D5A) bff60000 bff7e000 RDPDD Sat Feb 17 09:01:19 2007 (45D70AAF) f7214000 f723a000 KSecDD Mon Jun 15 13:45:11 2009 (4A3688A7) f723a000 f725f000 fltmgr Sat Feb 17 00:51:08 2007 (45D697CC) f725f000 f7272000 CLASSPNP Sat Feb 17 01:28:16 2007 (45D6A080) f7272000 f7283000 symmpi Mon Dec 13 16:03:14 2004 (41BE0392) f7283000 f72a2000 SCSIPORT Sat Feb 17 01:28:41 2007 (45D6A099) f72a2000 f72bf000 atapi Sat Feb 17 01:07:34 2007 (45D69BA6) f72bf000 f72e9000 volsnap Sat Feb 17 01:08:23 2007 (45D69BD7) f72e9000 f7315000 dmio Sat Feb 17 01:10:44 2007 (45D69C64) f7315000 f733c000 ftdisk Sat Feb 17 01:08:05 2007 (45D69BC5) f733c000 f7352000 pci Sat Feb 17 00:59:03 2007 (45D699A7) f7352000 f7386000 ACPI Sat Feb 17 00:58:47 2007 (45D69997) f7487000 f7490000 WMILIB Tue Mar 25 03:13:00 2003 (3E80017C) f7497000 f74a6000 isapnp Sat Feb 17 00:58:57 2007 (45D699A1) f74a7000 f74b4000 PCIIDEX Sat Feb 17 01:07:32 2007 (45D69BA4) f74b7000 f74c7000 MountMgr Sat Feb 17 01:05:35 2007 (45D69B2F) f74c7000 f74d2000 PartMgr Sat Feb 17 01:29:25 2007 (45D6A0C5) f74d7000 f74e7000 disk Sat Feb 17 01:07:51 2007 (45D69BB7) f74e7000 f74f3000 Dfs Sat Feb 17 00:51:17 2007 (45D697D5) f74f7000 f7501000 crcdisk Sat Feb 17 01:09:50 2007 (45D69C2E) f7507000 f7517000 agp440 Sat Feb 17 00:58:53 2007 (45D6999D) f7517000 f7522000 TDI Sat Feb 17 01:01:19 2007 (45D69A2F) f7527000 f7532000 ptilink Sat Feb 17 01:06:38 2007 (45D69B6E) f7537000 f7540000 raspti Sat Feb 17 00:59:23 2007 (45D699BB) f7547000 f7556000 termdd Sat Feb 17 00:44:32 2007 (45D69640) f7557000 f7561000 Dxapi Tue Mar 25 03:06:01 2003 (3E7FFFD9) f7577000 f7580000 mssmbios Sat Feb 17 00:59:12 2007 (45D699B0) f7587000 f7595000 NDProxy Wed Nov 03 09:25:59 2010 (4CD162E7) f75a7000 f75b1000 flpydisk Tue Mar 25 03:04:32 2003 (3E7FFF80) f75b7000 f75c0080 SRTSPX Fri Mar 04 15:31:24 2011 (4D714C1C) f75d7000 f75e3000 vga Sat Feb 17 01:10:30 2007 (45D69C56) f75e7000 f75f2000 Msfs Sat Feb 17 00:50:33 2007 (45D697A9) f75f7000 f7604000 Npfs Sat Feb 17 00:50:36 2007 (45D697AC) f7607000 f7615000 msgpc Sat Feb 17 00:58:37 2007 (45D6998D) f7617000 f7624000 netbios Sat Feb 17 00:58:29 2007 (45D69985) f7627000 f7634000 wanarp Sat Feb 17 00:59:17 2007 (45D699B5) f7637000 f7646000 intelppm Sat Feb 17 00:48:30 2007 (45D6972E) f7647000 f7652000 kbdclass Sat Feb 17 01:05:39 2007 (45D69B33) f7657000 f7661000 mouclass Tue Mar 25 03:03:09 2003 (3E7FFF2D) f7667000 f7671000 serenum Sat Feb 17 01:06:44 2007 (45D69B74) f7677000 f7682000 fdc Sat Feb 17 01:07:16 2007 (45D69B94) f7687000 f7694b00 vmx_svga Sat Aug 16 07:22:07 2008 (48A6B85F) f7697000 f76a0000 watchdog Sat Feb 17 01:11:45 2007 (45D69CA1) f76a7000 f76b0000 ndistapi Sat Feb 17 00:59:19 2007 (45D699B7) f76b7000 f76c6000 raspppoe Sat Feb 17 00:59:23 2007 (45D699BB) f76c8000 f7707000 NDIS Sat Feb 17 01:28:49 2007 (45D6A0A1) f7707000 f770f000 kdcom Tue Mar 25 03:08:00 2003 
(3E800050) f770f000 f7717000 BOOTVID Tue Mar 25 03:07:58 2003 (3E80004E) f7717000 f771e000 intelide Sat Feb 17 01:07:32 2007 (45D69BA4) f771f000 f7726000 dmload Tue Mar 25 03:08:08 2003 (3E800058) f777f000 f7786000 dxgthk Tue Mar 25 03:05:52 2003 (3E7FFFD0) f7787000 f778e000 vmmemctl Tue Aug 12 20:37:25 2008 (48A22CC5) f77cf000 f77d6280 vmxnet Mon Sep 08 21:17:10 2008 (48C5CE96) f77d7000 f77df000 audstub Tue Mar 25 03:09:12 2003 (3E800098) f77ef000 f77f7000 Fs_Rec Tue Mar 25 03:08:36 2003 (3E800074) f77f7000 f77fe000 Null Tue Mar 25 03:03:05 2003 (3E7FFF29) f77ff000 f7806000 Beep Tue Mar 25 03:03:04 2003 (3E7FFF28) f7807000 f780f000 mnmdd Tue Mar 25 03:07:53 2003 (3E800049) f780f000 f7817000 RDPCDD Tue Mar 25 03:03:05 2003 (3E7FFF29) f7817000 f781f000 rasacd Tue Mar 25 03:11:50 2003 (3E800136) f7878000 f7897000 Mup Tue Apr 12 15:05:46 2011 (4DA4A28A) f7897000 f7899980 compbatt Sat Feb 17 00:58:51 2007 (45D6999B) f789b000 f789e900 BATTC Sat Feb 17 00:58:46 2007 (45D69996) f789f000 f78a1b00 vmscsi Wed Apr 11 13:55:32 2007 (461D2114) f79af000 f79b0280 vmmouse Mon Aug 11 07:16:51 2008 (48A01FA3) f79b1000 f79b2280 swenum Sat Feb 17 01:05:56 2007 (45D69B44) f7b4a000 f7bdf000 Ntfs Sat Feb 17 01:27:23 2007 (45D6A04B) Unloaded modules: ba65a000 ba668000 imapi.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 0000E000 ba1c4000 ba1d5000 vpc-8042.sys Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00011000 f77df000 f77e7000 Sfloppy.SYS Timestamp: unavailable (00000000) Checksum: 00000000 ImageSize: 00008000 ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* Use !analyze -v to get detailed debugging information. BugCheck 7E, {c0000005, f723e087, f78dea8c, f78de788} ***** Kernel symbols are WRONG. Please fix symbols to do analysis. --omitted-- Probably caused by : fltmgr.sys ( fltmgr!FltGetIrpName+63f ) Followup: MachineOwner --------- Finished dump check
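    Before trusting the fltmgr.sys attribution, note the repeated "Kernel symbols are WRONG" warnings above: without matching symbols the debugger blames whatever export is nearest to the faulting address (a large offset like FltGetIrpName+63f is itself a hint of symbol trouble). A typical WinDbg session to redo the analysis against the Microsoft public symbol server (the local cache path is illustrative):

    $$ Point the debugger at the Microsoft public symbol server, caching locally
    .symfix c:\symbols
    .reload
    $$ Re-run the bugcheck analysis now that symbols resolve
    !analyze -v
    $$ Inspect the version and timestamp of the blamed filter manager driver
    lmvm fltmgr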

    Read the article

  • Impersonation - Access is denied

    - by krisg
    I am having trouble using impersonation to delete a PerformanceCounterCategory from an MVC website. I have a static class and when the application starts it checks whether or not a PerformanceCounterCategory exists, and if it contains the correct counters. If not, it deletes the category and creates it again with the required counters. It works fine when running under the built-in webserver Cassini, but when I try to run it through IIS7 (Vista) I get the following error: Access is denied Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.ComponentModel.Win32Exception: Access is denied The code used is from an MS article, from memory... var username = "user"; var password = "password"; var domain = "tempuri.org"; WindowsImpersonationContext impersonationContext; // if impersonation fails - return if (!ImpersonateValidUser(username, password, domain, out impersonationContext)) { throw new AuthenticationException("Impersonation failed"); } PerformanceCounterCategory.Delete(PerfCategory); UndoImpersonation(impersonationContext); ... private static bool ImpersonateValidUser(string username, string password, string domain, out WindowsImpersonationContext impersonationContext) { const int LOGON32_LOGON_INTERACTIVE = 2; const int LOGON32_PROVIDER_DEFAULT = 0; WindowsIdentity tempWindowsIdentity; var token = IntPtr.Zero; var tokenDuplicate = IntPtr.Zero; if (RevertToSelf()) { if (LogonUserA(username, domain, password, LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) != 0) { if (DuplicateToken(token, 2, ref tokenDuplicate) != 0) { tempWindowsIdentity = new WindowsIdentity(tokenDuplicate); impersonationContext = tempWindowsIdentity.Impersonate(); if (impersonationContext != null) { CloseHandle(token); CloseHandle(tokenDuplicate); return true; } } } } if (token != IntPtr.Zero) CloseHandle(token); if (tokenDuplicate != IntPtr.Zero) CloseHandle(tokenDuplicate); impersonationContext = null; return false; } [DllImport("advapi32.dll")] public static extern int LogonUserA(String lpszUserName, String lpszDomain, String lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern int DuplicateToken(IntPtr hToken, int impersonationLevel, ref IntPtr hNewToken); [DllImport("advapi32.dll", CharSet = CharSet.Auto, SetLastError = true)] public static extern bool RevertToSelf(); [DllImport("kernel32.dll", CharSet = CharSet.Auto)] public static extern bool CloseHandle(IntPtr handle); The error is thrown when processing tries to execute the PerformanceCounterCategory.Delete command. Suggestions?
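    One hedged diagnostic step before anything else (the constants mirror the question's code; note the posted LogonUserA import lacks SetLastError = true, which this relies on): surface the underlying Win32 error code instead of letting every failure collapse into a generic "Access is denied":

    using System.ComponentModel;
    using System.Runtime.InteropServices;

    // Sketch: declare the import with SetLastError so GetLastWin32Error is meaningful.
    [DllImport("advapi32.dll", SetLastError = true)]
    public static extern int LogonUserA(String lpszUserName, String lpszDomain,
        String lpszPassword, int dwLogonType, int dwLogonProvider, ref IntPtr phToken);

    // ...then report why the logon actually failed:
    if (LogonUserA(username, domain, password,
            LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, ref token) == 0)
    {
        // Win32Exception turns the raw code into readable text, e.g. 1385 =
        // "Logon failure: the user has not been granted the requested logon
        // type at this computer", a common culprit for IIS app pool identities.
        throw new Win32Exception(Marshal.GetLastWin32Error());
    }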

    Read the article

  • How can I add a JScroll bar to NavigableImagePanel, which is an Image panel with a small navigation vi

    - by Sarah Kho
    Hi, I have the following NavigableImagePanel; it is under a BSD license and I found it on the web. What I want to do with this panel is as follows: I want to add a JScrollPane to it in order to show images at their full size and let users re-center the image using the small navigation panel. Right now, the panel resizes images to fit the current panel size. I want it to load the image at its real size and let users navigate to different parts of the image using the navigation panel. Source code for the panel: import java.awt.AWTEvent; import java.awt.BorderLayout; import java.awt.Color; import java.awt.Dimension; import java.awt.Graphics; import java.awt.Graphics2D; import java.awt.GraphicsEnvironment; import java.awt.Image; import java.awt.Point; import java.awt.Rectangle; import java.awt.RenderingHints; import java.awt.Toolkit; import java.awt.event.ComponentAdapter; import java.awt.event.ComponentEvent; import java.awt.event.MouseAdapter; import java.awt.event.MouseEvent; import java.awt.event.MouseMotionListener; import java.awt.event.MouseWheelEvent; import java.awt.event.MouseWheelListener; import java.awt.image.BufferedImage; import java.io.File; import java.io.IOException; import java.util.Arrays; import javax.imageio.ImageIO; import javax.swing.JFrame; import javax.swing.JOptionPane; import javax.swing.JPanel; import javax.swing.SwingUtilities; /** * @author pxt * */ public class NavigableImagePanel extends JPanel { /** * <p>Identifies a change to the zoom level.</p> */ public static final String ZOOM_LEVEL_CHANGED_PROPERTY = "zoomLevel"; /** * <p>Identifies a change to the zoom increment.</p> */ public static final String ZOOM_INCREMENT_CHANGED_PROPERTY = "zoomIncrement"; /** * <p>Identifies that the image in the panel has changed.</p> */ public static final String IMAGE_CHANGED_PROPERTY = "image"; private static final double SCREEN_NAV_IMAGE_FACTOR = 0.15; // 15% of panel's width private static final double NAV_IMAGE_FACTOR = 0.3; // 30% of panel's width private static final double HIGH_QUALITY_RENDERING_SCALE_THRESHOLD = 1.0; private static final Object INTERPOLATION_TYPE = RenderingHints.VALUE_INTERPOLATION_BILINEAR; private double zoomIncrement = 0.2; private double zoomFactor = 1.0 + zoomIncrement; private double navZoomFactor = 1.0 + zoomIncrement; private BufferedImage image; private BufferedImage navigationImage; private int navImageWidth; private int navImageHeight; private double initialScale = 0.0; private double scale = 0.0; private double navScale = 0.0; private int originX = 0; private int originY = 0; private Point mousePosition; private Dimension previousPanelSize; private boolean navigationImageEnabled = true; private boolean highQualityRenderingEnabled = true; private WheelZoomDevice wheelZoomDevice = null; private ButtonZoomDevice buttonZoomDevice = null; /** * <p>Defines zoom devices.</p> */ public static class ZoomDevice { /** * <p>Identifies that the panel does not implement zooming, * but the component using the panel does (programmatic zooming method).</p> */ public static final ZoomDevice NONE = new ZoomDevice("none"); /** * <p>Identifies the left and right mouse buttons as the zooming device.</p> */ public static final ZoomDevice MOUSE_BUTTON = new ZoomDevice("mouseButton"); /** * <p>Identifies the mouse scroll wheel as the zooming device.</p> */ public static final ZoomDevice MOUSE_WHEEL = new ZoomDevice("mouseWheel"); private String zoomDevice; private ZoomDevice(String zoomDevice) { this.zoomDevice = zoomDevice; } public String
toString() { return zoomDevice; } } //This class is required for high precision image coordinates translation. private class Coords { public double x; public double y; public Coords(double x, double y) { this.x = x; this.y = y; } public int getIntX() { return (int)Math.round(x); } public int getIntY() { return (int)Math.round(y); } public String toString() { return "[Coords: x=" + x + ",y=" + y + "]"; } } private class WheelZoomDevice implements MouseWheelListener { public void mouseWheelMoved(MouseWheelEvent e) { Point p = e.getPoint(); boolean zoomIn = (e.getWheelRotation() < 0); if (isInNavigationImage(p)) { if (zoomIn) { navZoomFactor = 1.0 + zoomIncrement; } else { navZoomFactor = 1.0 - zoomIncrement; } zoomNavigationImage(); } else if (isInImage(p)) { if (zoomIn) { zoomFactor = 1.0 + zoomIncrement; } else { zoomFactor = 1.0 - zoomIncrement; } zoomImage(); } } } private class ButtonZoomDevice extends MouseAdapter { public void mouseClicked(MouseEvent e) { Point p = e.getPoint(); if (SwingUtilities.isRightMouseButton(e)) { if (isInNavigationImage(p)) { navZoomFactor = 1.0 - zoomIncrement; zoomNavigationImage(); } else if (isInImage(p)) { zoomFactor = 1.0 - zoomIncrement; zoomImage(); } } else { if (isInNavigationImage(p)) { navZoomFactor = 1.0 + zoomIncrement; zoomNavigationImage(); } else if (isInImage(p)) { zoomFactor = 1.0 + zoomIncrement; zoomImage(); } } } } /** * <p>Creates a new navigable image panel with no default image and * the mouse scroll wheel as the zooming device.</p> */ public NavigableImagePanel() { setOpaque(false); addComponentListener(new ComponentAdapter() { public void componentResized(ComponentEvent e) { if (scale > 0.0) { if (isFullImageInPanel()) { centerImage(); } else if (isImageEdgeInPanel()) { scaleOrigin(); } if (isNavigationImageEnabled()) { createNavigationImage(); } repaint(); } previousPanelSize = getSize(); } }); addMouseListener(new MouseAdapter() { public void mousePressed(MouseEvent e) { if (SwingUtilities.isLeftMouseButton(e)) { if (isInNavigationImage(e.getPoint())) { Point p = e.getPoint(); displayImageAt(p); } } } public void mouseClicked(MouseEvent e){ if (e.getClickCount() == 2) { resetImage(); } } }); addMouseMotionListener(new MouseMotionListener() { public void mouseDragged(MouseEvent e) { if (SwingUtilities.isLeftMouseButton(e) && !isInNavigationImage(e.getPoint())) { Point p = e.getPoint(); moveImage(p); } } public void mouseMoved(MouseEvent e) { //we need the mouse position so that after zooming //that position of the image is maintained mousePosition = e.getPoint(); } }); setZoomDevice(ZoomDevice.MOUSE_WHEEL); } /** * <p>Creates a new navigable image panel with the specified image * and the mouse scroll wheel as the zooming device.</p> */ public NavigableImagePanel(BufferedImage image) throws IOException { this(); setImage(image); } private void addWheelZoomDevice() { if (wheelZoomDevice == null) { wheelZoomDevice = new WheelZoomDevice(); addMouseWheelListener(wheelZoomDevice); } } private void addButtonZoomDevice() { if (buttonZoomDevice == null) { buttonZoomDevice = new ButtonZoomDevice(); addMouseListener(buttonZoomDevice); } } private void removeWheelZoomDevice() { if (wheelZoomDevice != null) { removeMouseWheelListener(wheelZoomDevice); wheelZoomDevice = null; } } private void removeButtonZoomDevice() { if (buttonZoomDevice != null) { removeMouseListener(buttonZoomDevice); buttonZoomDevice = null; } } /** * <p>Sets a new zoom device.</p> * * @param newZoomDevice specifies the type of a new zoom device. 
*/ public void setZoomDevice(ZoomDevice newZoomDevice) { if (newZoomDevice == ZoomDevice.NONE) { removeWheelZoomDevice(); removeButtonZoomDevice(); } else if (newZoomDevice == ZoomDevice.MOUSE_BUTTON) { removeWheelZoomDevice(); addButtonZoomDevice(); } else if (newZoomDevice == ZoomDevice.MOUSE_WHEEL) { removeButtonZoomDevice(); addWheelZoomDevice(); } } /** * <p>Gets the current zoom device.</p> */ public ZoomDevice getZoomDevice() { if (buttonZoomDevice != null) { return ZoomDevice.MOUSE_BUTTON; } else if (wheelZoomDevice != null) { return ZoomDevice.MOUSE_WHEEL; } else { return ZoomDevice.NONE; } } //Called from paintComponent() when a new image is set. private void initializeParams() { double xScale = (double)getWidth() / image.getWidth(); double yScale = (double)getHeight() / image.getHeight(); initialScale = Math.min(xScale, yScale); scale = initialScale; //An image is initially centered centerImage(); if (isNavigationImageEnabled()) { createNavigationImage(); } } //Centers the current image in the panel. private void centerImage() { originX = (int)(getWidth() - getScreenImageWidth()) / 2; originY = (int)(getHeight() - getScreenImageHeight()) / 2; } //Creates and renders the navigation image in the upper let corner of the panel. private void createNavigationImage() { //We keep the original navigation image larger than initially //displayed to allow for zooming into it without pixellation effect. navImageWidth = (int)(getWidth() * NAV_IMAGE_FACTOR); navImageHeight = navImageWidth * image.getHeight() / image.getWidth(); int scrNavImageWidth = (int)(getWidth() * SCREEN_NAV_IMAGE_FACTOR); int scrNavImageHeight = scrNavImageWidth * image.getHeight() / image.getWidth(); navScale = (double)scrNavImageWidth / navImageWidth; navigationImage = new BufferedImage(navImageWidth, navImageHeight, image.getType()); Graphics g = navigationImage.getGraphics(); g.drawImage(image, 0, 0, navImageWidth, navImageHeight, null); } /** * <p>Sets an image for display in the panel.</p> * * @param image an image to be set in the panel */ public void setImage(BufferedImage image) { BufferedImage oldImage = this.image; this.image = image; //Reset scale so that initializeParameters() is called in paintComponent() //for the new image. scale = 0.0; firePropertyChange(IMAGE_CHANGED_PROPERTY, (Image)oldImage, (Image)image); repaint(); } /** * <p>resets an image to the centre of the panel</p> * */ public void resetImage() { BufferedImage oldImage = this.image; this.image = image; //Reset scale so that initializeParameters() is called in paintComponent() //for the new image. 
scale = 0.0; firePropertyChange(IMAGE_CHANGED_PROPERTY, (Image)oldImage, (Image)image); repaint(); } /** * <p>Tests whether an image uses the standard RGB color space.</p> */ public static boolean isStandardRGBImage(BufferedImage bImage) { return bImage.getColorModel().getColorSpace().isCS_sRGB(); } //Converts this panel's coordinates into the original image coordinates private Coords panelToImageCoords(Point p) { return new Coords((p.x - originX) / scale, (p.y - originY) / scale); } //Converts the original image coordinates into this panel's coordinates private Coords imageToPanelCoords(Coords p) { return new Coords((p.x * scale) + originX, (p.y * scale) + originY); } //Converts the navigation image coordinates into the zoomed image coordinates private Point navToZoomedImageCoords(Point p) { int x = p.x * getScreenImageWidth() / getScreenNavImageWidth(); int y = p.y * getScreenImageHeight() / getScreenNavImageHeight(); return new Point(x, y); } //The user clicked within the navigation image and this part of the image //is displayed in the panel. //The clicked point of the image is centered in the panel. private void displayImageAt(Point p) { Point scrImagePoint = navToZoomedImageCoords(p); originX = -(scrImagePoint.x - getWidth() / 2); originY = -(scrImagePoint.y - getHeight() / 2); repaint(); } //Tests whether a given point in the panel falls within the image boundaries. private boolean isInImage(Point p) { Coords coords = panelToImageCoords(p); int x = coords.getIntX(); int y = coords.getIntY(); return (x >= 0 && x < image.getWidth() && y >= 0 && y < image.getHeight()); } //Tests whether a given point in the panel falls within the navigation image //boundaries. private boolean isInNavigationImage(Point p) { return (isNavigationImageEnabled() && p.x < getScreenNavImageWidth() && p.y < getScreenNavImageHeight()); } //Used when the image is resized. private boolean isImageEdgeInPanel() { if (previousPanelSize == null) { return false; } return (originX > 0 && originX < previousPanelSize.width || originY > 0 && originY < previousPanelSize.height); } //Tests whether the image is displayed in its entirety in the panel. private boolean isFullImageInPanel() { return (originX >= 0 && (originX + getScreenImageWidth()) < getWidth() && originY >= 0 && (originY + getScreenImageHeight()) < getHeight()); } /** * <p>Indicates whether the high quality rendering feature is enabled.</p> * * @return true if high quality rendering is enabled, false otherwise. */ public boolean isHighQualityRenderingEnabled() { return highQualityRenderingEnabled; } /** * <p>Enables/disables high quality rendering.</p> * * @param enabled enables/disables high quality rendering */ public void setHighQualityRenderingEnabled(boolean enabled) { highQualityRenderingEnabled = enabled; } //High quality rendering kicks in when when a scaled image is larger //than the original image. In other words, //when image decimation stops and interpolation starts. private boolean isHighQualityRendering() { return (highQualityRenderingEnabled && scale > HIGH_QUALITY_RENDERING_SCALE_THRESHOLD); } /** * <p>Indicates whether navigation image is enabled.<p> * * @return true when navigation image is enabled, false otherwise. */ public boolean isNavigationImageEnabled() { return navigationImageEnabled; } /** * <p>Enables/disables navigation with the navigation image.</p> * <p>Navigation image should be disabled when custom, programmatic navigation * is implemented.</p> * * @param enabled true when navigation image is enabled, false otherwise. 
*/ public void setNavigationImageEnabled(boolean enabled) { navigationImageEnabled = enabled; repaint(); } //Used when the panel is resized private void scaleOrigin() { originX = originX * getWidth() / previousPanelSize.width; originY = originY * getHeight() / previousPanelSize.height; repaint(); } //Converts the specified zoom level to scale. private double zoomToScale(double zoom) { return initialScale * zoom; } /** * <p>Gets the current zoom level.</p> * * @return the current zoom level */ public double getZoom() { return scale / initialScale; } /** * <p>Sets the zoom level used to display the image.</p> * <p>This method is used in programmatic zooming. The zooming center is * the point of the image closest to the center of the panel. * After a new zoom level is set the image is repainted.</p> * * @param newZoom the zoom level used to display this panel's image. */ public void setZoom(double newZoom) { Point zoomingCenter = new Point(getWidth() / 2, getHeight() / 2); setZoom(newZoom, zoomingCenter); } /** * <p>Sets the zoom level used to display the image, and the zooming center, * around which zooming is done.</p> * <p>This method is used in programmatic zooming. * After a new zoom level is set the image is repainted.</p> * * @param newZoom the zoom level used to display this panel's image. */ public void setZoom(double newZoom, Point zoomingCenter) { Coords imageP = panelToImageCoords(zoomingCenter); if (imageP.x < 0.0) { imageP.x = 0.0; } if (imageP.y < 0.0) { imageP.y = 0.0; } if (imageP.x >= image.getWidth()) { imageP.x = image.getWidth() - 1.0; } if (imageP.y >= image.getHeight()) { imageP.y = image.getHeight() - 1.0; } Coords correctedP = imageToPanelCoords(imageP); double oldZoom = getZoom(); scale = zoomToScale(newZoom); Coords panelP = imageToPanelCoords(imageP); originX += (correctedP.getIntX() - (int)panelP.x); originY += (correctedP.getIntY() - (int)panelP.y); firePropertyChange(ZOOM_LEVEL_CHANGED_PROPERTY, new Double(oldZoom), new Double(getZoom())); repaint(); } /** * <p>Gets the current zoom increment.</p> * * @return the current zoom increment */ public double getZoomIncrement() { return zoomIncrement; } /** * <p>Sets a new zoom increment value.</p> * * @param newZoomIncrement new zoom increment value */ public void setZoomIncrement(double newZoomIncrement) { double oldZoomIncrement = zoomIncrement; zoomIncrement = newZoomIncrement; firePropertyChange(ZOOM_INCREMENT_CHANGED_PROPERTY, new Double(oldZoomIncrement), new Double(zoomIncrement)); } //Zooms an image in the panel by repainting it at the new zoom level. //The current mouse position is the zooming center. private void zoomImage() { Coords imageP = panelToImageCoords(mousePosition); double oldZoom = getZoom(); scale *= zoomFactor; Coords panelP = imageToPanelCoords(imageP); originX += (mousePosition.x - (int)panelP.x); originY += (mousePosition.y - (int)panelP.y); firePropertyChange(ZOOM_LEVEL_CHANGED_PROPERTY, new Double(oldZoom), new Double(getZoom())); repaint(); } //Zooms the navigation image private void zoomNavigationImage() { navScale *= navZoomFactor; repaint(); } /** * <p>Gets the image origin.</p> * <p>Image origin is defined as the upper, left corner of the image in * the panel's coordinate system.</p> * @return the point of the upper, left corner of the image in the panel's coordinates * system. 
*/ public Point getImageOrigin() { return new Point(originX, originY); } /** * <p>Sets the image origin.</p> * <p>Image origin is defined as the upper, left corner of the image in * the panel's coordinate system. After a new origin is set, the image is repainted. * This method is used for programmatic image navigation.</p>
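    As a starting point for the scrolling half of the question, here is a minimal hosting sketch (plain Swing, not a patch to the panel above; for scroll bars to actually appear, the panel would also have to report the unscaled image size as its preferred size instead of painting scaled-to-fit):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;
    import javax.imageio.ImageIO;
    import javax.swing.JFrame;
    import javax.swing.JScrollPane;
    import javax.swing.SwingUtilities;

    // Sketch: wrap the panel in a JScrollPane; the viewport scrolls whenever
    // the panel's preferred size exceeds the visible area.
    public class ScrollableImageDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    try {
                        BufferedImage image = ImageIO.read(new File("picture.png")); // illustrative path
                        NavigableImagePanel panel = new NavigableImagePanel(image);
                        // Hypothetical change inside the panel: preferred size = real image size,
                        // e.g. new Dimension(image.getWidth(), image.getHeight()).
                        JFrame frame = new JFrame("Scrollable image");
                        frame.add(new JScrollPane(panel));
                        frame.setSize(800, 600);
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.setVisible(true);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            });
        }
    }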

    Read the article

  • Ninject 3.0 MVC kernel.bind error Auto Registration

    - by user295734
    Getting an error on kernel.Bind(scanner => ...; "scanner" has the little error line under it in VS 2010: Cannot convert lambda expression to type 'System.Type[]' because it is not a delegate type. Trying to auto-register like the old kernel.Scan in 2.0. I cannot figure out what I am doing wrong; I have added and removed so many Ninject packages that I am completely lost, and it is getting to be a big waste of time.

    using System;
    using System.Web;
    using Microsoft.Web.Infrastructure.DynamicModuleHelper;
    using Ninject;
    using Ninject.Web.Common;
    //using Ninject.Extensions.Conventions;
    using Ninject.Web.WebApi;
    using Ninject.Web.Mvc;
    using CommonServiceLocator.NinjectAdapter;
    using System.Reflection;
    using System.IO;
    using LR.Repository;
    using LR.Repository.Interfaces;
    using LR.Service.Interfaces;
    using System.Web.Http;

    public static class NinjectWebCommon
    {
        private static readonly Bootstrapper bootstrapper = new Bootstrapper();

        /// <summary>
        /// Starts the application
        /// </summary>
        public static void Start()
        {
            DynamicModuleUtility.RegisterModule(typeof(OnePerRequestHttpModule));
            DynamicModuleUtility.RegisterModule(typeof(NinjectHttpModule));
            bootstrapper.Initialize(CreateKernel);
        }

        /// <summary>
        /// Stops the application.
        /// </summary>
        public static void Stop()
        {
            bootstrapper.ShutDown();
        }

        /// <summary>
        /// Creates the kernel that will manage your application.
        /// </summary>
        /// <returns>The created kernel.</returns>
        private static IKernel CreateKernel()
        {
            var kernel = new StandardKernel();
            kernel.Bind<Func<IKernel>>().ToMethod(ctx => () => new Bootstrapper().Kernel);
            kernel.Bind<IHttpModule>().To<HttpApplicationInitializationHttpModule>();
            RegisterServices(kernel);
            return kernel;
        }

        /// <summary>
        /// Load your modules or register your services here!
        /// </summary>
        /// <param name="kernel">The kernel.</param>
        private static void RegisterServices(IKernel kernel)
        {
            kernel.Bind(scanner => scanner.FromAssembliesInPath(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location))
                .Select(IsServiceType)
                .BindToDefaultInterface()
                .Configure(binding => binding.InSingletonScope())
            );
        }

        private static bool IsServiceType(Type type)
        {
            // temp return true.
            // .Any() is not recognized either.
            return true; // type.IsClass && type.GetInterfaces().Any(intface => intface.Name == "I" + type.Name);
        }
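    For context on that compiler message (a sketch, assuming the Ninject.Extensions.Conventions 3.0 package is referenced): Bind(scanner => ...) is an extension method supplied by Ninject.Extensions.Conventions, and with that using commented out, as above, overload resolution falls back to the core Bind(params Type[]) overload, which is exactly the "cannot convert lambda expression to type 'System.Type[]'" complaint. The same applies to .Any(), which needs System.Linq. A minimal conventions registration might look like:

    using System.Linq;                      // brings .Any() into scope
    using Ninject;
    using Ninject.Extensions.Conventions;   // supplies the Bind(scanner => ...) overload

    private static void RegisterServices(IKernel kernel)
    {
        kernel.Bind(scan => scan
            .FromThisAssembly()             // or FromAssembliesInPath(...) as in the question
            .SelectAllClasses()
            .BindDefaultInterface()
            .Configure(b => b.InSingletonScope()));
    }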

    Read the article

  • Need help with fixing Genetic Algorithm that's not evolving correctly

    - by EnderMB
    I am working on a maze solving application that uses a Genetic Algorithm to evolve a set of genes (within Individuals), producing a Population of Individuals that powers an Agent through a maze. The majority of the code appears to be working fine, but when the code runs it is not correctly selecting the best Individuals for the new Population. When I run the application it outputs the following:
    Total Fitness: 380.0 - Best Fitness: 11.0
    Total Fitness: 406.0 - Best Fitness: 15.0
    Total Fitness: 344.0 - Best Fitness: 12.0
    Total Fitness: 373.0 - Best Fitness: 11.0
    Total Fitness: 415.0 - Best Fitness: 12.0
    Total Fitness: 359.0 - Best Fitness: 11.0
    Total Fitness: 436.0 - Best Fitness: 13.0
    Total Fitness: 390.0 - Best Fitness: 12.0
    Total Fitness: 379.0 - Best Fitness: 15.0
    Total Fitness: 370.0 - Best Fitness: 11.0
    Total Fitness: 361.0 - Best Fitness: 11.0
    Total Fitness: 413.0 - Best Fitness: 16.0
    As you can clearly see, the fitnesses are not improving and neither are the best fitnesses. The main code responsible for this problem is here, and I believe the problem to be within the main method, most likely where the selection methods are called: package GeneticAlgorithm; import GeneticAlgorithm.Individual.Action; import Robot.Robot.Direction; import Maze.Maze; import Robot.Robot; import java.util.ArrayList; import java.util.Random; public class RunGA { protected static ArrayList tmp1, tmp2 = new ArrayList(); // Implementation of Elitism protected static int ELITISM_K = 5; // Population size protected static int POPULATION_SIZE = 50 + ELITISM_K; // Max number of Iterations protected static int MAX_ITERATIONS = 200; // Probability of Mutation protected static double MUTATION_PROB = 0.05; // Probability of Crossover protected static double CROSSOVER_PROB = 0.7; // Instantiate Random object private static Random rand = new Random(); // Instantiate Population of Individuals private Individual[] startPopulation; // Total Fitness of Population private double totalFitness; Robot robot = new Robot(); Maze maze; public void setElitism(int result) { ELITISM_K = result; } public void setPopSize(int result) { POPULATION_SIZE = result + ELITISM_K; } public void setMaxIt(int result) { MAX_ITERATIONS = result; } public void setMutProb(double result) { MUTATION_PROB = result; } public void setCrossoverProb(double result) { CROSSOVER_PROB = result; } /** * Constructor for Population */ public RunGA(Maze maze) { // Create a population of population plus elitism startPopulation = new Individual[POPULATION_SIZE]; // For every individual in population fill with x genes from 0 to 1 for (int i = 0; i < POPULATION_SIZE; i++) { startPopulation[i] = new Individual(); startPopulation[i].randGenes(); } // Evaluate the current population's fitness this.evaluate(maze, startPopulation); } /** * Set Population * @param newPop */ public void setPopulation(Individual[] newPop) { System.arraycopy(newPop, 0, this.startPopulation, 0, POPULATION_SIZE); } /** * Get Population * @return */ public Individual[] getPopulation() { return this.startPopulation; } /** * Evaluate fitness * @return */ public double evaluate(Maze maze, Individual[] newPop) { this.totalFitness = 0.0; ArrayList<Double> fitnesses = new ArrayList<Double>(); for (int i = 0; i < POPULATION_SIZE; i++) { maze = new Maze(8, 8); maze.fillMaze(); fitnesses.add(startPopulation[i].evaluate(maze, newPop)); //this.totalFitness += startPopulation[i].evaluate(maze, newPop); } //totalFitness = (Math.round(totalFitness / POPULATION_SIZE)); StringBuilder sb = new
StringBuilder(); for(Double tmp : fitnesses) { sb.append(tmp + ", "); totalFitness += tmp; } // Progress of each Individual //System.out.println(sb.toString()); return this.totalFitness; } /** * Roulette Wheel Selection * @return */ public Individual rouletteWheelSelection() { // Calculate sum of all chromosome fitnesses in population - sum S. double randNum = rand.nextDouble() * this.totalFitness; int i; for (i = 0; i < POPULATION_SIZE && randNum > 0; ++i) { randNum -= startPopulation[i].getFitnessValue(); } return startPopulation[i-1]; } /** * Tournament Selection * @return */ public Individual tournamentSelection() { double randNum = rand.nextDouble() * this.totalFitness; // Get random number of population (add 1 to stop nullpointerexception) int k = rand.nextInt(POPULATION_SIZE) + 1; int i; for (i = 1; i < POPULATION_SIZE && i < k && randNum > 0; ++i) { randNum -= startPopulation[i].getFitnessValue(); } return startPopulation[i-1]; } /** * Finds the best individual * @return */ public Individual findBestIndividual() { int idxMax = 0; double currentMax = 0.0; double currentMin = 1.0; double currentVal; for (int idx = 0; idx < POPULATION_SIZE; ++idx) { currentVal = startPopulation[idx].getFitnessValue(); if (currentMax < currentMin) { currentMax = currentMin = currentVal; idxMax = idx; } if (currentVal > currentMax) { currentMax = currentVal; idxMax = idx; } } // Double check to see if this has the right one //System.out.println(startPopulation[idxMax].getFitnessValue()); // Maximisation return startPopulation[idxMax]; } /** * One Point Crossover * @param firstPerson * @param secondPerson * @return */ public static Individual[] onePointCrossover(Individual firstPerson, Individual secondPerson) { Individual[] newPerson = new Individual[2]; newPerson[0] = new Individual(); newPerson[1] = new Individual(); int size = Individual.SIZE; int randPoint = rand.nextInt(size); int i; for (i = 0; i < randPoint; ++i) { newPerson[0].setGene(i, firstPerson.getGene(i)); newPerson[1].setGene(i, secondPerson.getGene(i)); } for (; i < Individual.SIZE; ++i) { newPerson[0].setGene(i, secondPerson.getGene(i)); newPerson[1].setGene(i, firstPerson.getGene(i)); } return newPerson; } /** * Uniform Crossover * @param firstPerson * @param secondPerson * @return */ public static Individual[] uniformCrossover(Individual firstPerson, Individual secondPerson) { Individual[] newPerson = new Individual[2]; newPerson[0] = new Individual(); newPerson[1] = new Individual(); for(int i = 0; i < Individual.SIZE; ++i) { double r = rand.nextDouble(); if (r > 0.5) { newPerson[0].setGene(i, firstPerson.getGene(i)); newPerson[1].setGene(i, secondPerson.getGene(i)); } else { newPerson[0].setGene(i, secondPerson.getGene(i)); newPerson[1].setGene(i, firstPerson.getGene(i)); } } return newPerson; } public double getTotalFitness() { return totalFitness; } public static void main(String[] args) { // Initialise Environment Maze maze = new Maze(8, 8); maze.fillMaze(); // Instantiate Population //Population pop = new Population(); RunGA pop = new RunGA(maze); // Instantiate Individuals for Population Individual[] newPop = new Individual[POPULATION_SIZE]; // Instantiate two individuals to use for selection Individual[] people = new Individual[2]; Action action = null; Direction direction = null; String result = ""; /*result += "Total Fitness: " + pop.getTotalFitness() + " - Best Fitness: " + pop.findBestIndividual().getFitnessValue();*/ // Print Current Population System.out.println("Total Fitness: " + pop.getTotalFitness() + " - Best 
Fitness: " + pop.findBestIndividual().getFitnessValue()); // Instantiate counter for selection int count; for (int i = 0; i < MAX_ITERATIONS; i++) { count = 0; // Elitism for (int j = 0; j < ELITISM_K; ++j) { // This one has the best fitness newPop[count] = pop.findBestIndividual(); count++; } // Build New Population (Population size = Steps (28)) while (count < POPULATION_SIZE) { // Roulette Wheel Selection people[0] = pop.rouletteWheelSelection(); people[1] = pop.rouletteWheelSelection(); // Tournament Selection //people[0] = pop.tournamentSelection(); //people[1] = pop.tournamentSelection(); // Crossover if (rand.nextDouble() < CROSSOVER_PROB) { // One Point Crossover //people = onePointCrossover(people[0], people[1]); // Uniform Crossover people = uniformCrossover(people[0], people[1]); } // Mutation if (rand.nextDouble() < MUTATION_PROB) { people[0].mutate(); } if (rand.nextDouble() < MUTATION_PROB) { people[1].mutate(); } // Add to New Population newPop[count] = people[0]; newPop[count+1] = people[1]; count += 2; } // Make new population the current population pop.setPopulation(newPop); // Re-evaluate the current population //pop.evaluate(); pop.evaluate(maze, newPop); // Print results to screen System.out.println("Total Fitness: " + pop.totalFitness + " - Best Fitness: " + pop.findBestIndividual().getFitnessValue()); //result += "\nTotal Fitness: " + pop.totalFitness + " - Best Fitness: " + pop.findBestIndividual().getFitnessValue(); } // Best Individual Individual bestIndiv = pop.findBestIndividual(); //return result; } } I have uploaded the full project to RapidShare if you require the extra files, although if needed I can add the code to them here. This problem has been depressing me for days now and if you guys can help me I will forever be in your debt.

    Read the article

  • Injecting jms resource in servlet & best practice for MDB

    - by kislo_metal
    Using EJB 3.1 and Servlet 3.0 (GlassFish Server v3). Scenario: I have an MDB that listens for JMS messages and hands processing off to some other session bean (Stateless). The servlet injects JMS resources. Question 1: Why can't the servlet inject JMS resources when they use a static declaration? @Resource(mappedName = "jms/Tarturus") private static ConnectionFactory connectionFactory; @Resource(mappedName = "jms/StyxMDB") private static Queue queue; private Connection connection; and @PostConstruct public void postConstruct() { try { connection = connectionFactory.createConnection(); } catch (JMSException e) { e.printStackTrace(); } } @PreDestroy public void preDestroy() { try { connection.close(); } catch (JMSException e) { e.printStackTrace(); } } The error that I get is : [#|2010-05-03T15:18:17.118+0300|WARNING|glassfish3.0|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=35;_ThreadName=Thread-1;|StandardWrapperValve[WorkerServlet]: PWC1382: Allocate exception for servlet WorkerServlet com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class ua.co.rufous.server.services.WorkerServiceImpl at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:312) at com.sun.enterprise.web.WebContainer.createServletInstance(WebContainer.java:709) at com.sun.enterprise.web.WebModule.createServletInstance(WebModule.java:1937) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1252) Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Unresolved Message-Destination-Ref ua.co.rufous.server.services.WorkerServiceImpl/[email protected]@null into class ua.co.rufous.server.services.WorkerServiceImpl at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:614) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.inject(InjectionManagerImpl.java:384) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectInstance(InjectionManagerImpl.java:141) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectInstance(InjectionManagerImpl.java:127) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:306) ... 27 more Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Illegal use of static field private static javax.jms.Queue ua.co.rufous.server.services.WorkerServiceImpl.queue on class that only supports instance-based injection at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:532) ...
31 more |#] my MDB : /** * asadmin commands * asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/Tarturus * asadmin create-jms-resource --restype javax.jms.Queue jms/StyxMDB * asadmin list-jms-resources */ @MessageDriven(mappedName = "jms/StyxMDB", activationConfig = { @ActivationConfigProperty(propertyName = "connectionFactoryJndiName", propertyValue = "jms/Tarturus"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"), @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") }) public class StyxMDB implements MessageListener { @EJB private ActivationProcessingLocal aProcessing; public StyxMDB() { } public void onMessage(Message message) { try { TextMessage msg = (TextMessage) message; String hash = msg.getText(); GluttonyLogger.getInstance().writeInfoLog("geted jms message hash = " + hash); } catch (JMSException e) { } } } Everything works fine without the static declaration: @Resource(mappedName = "jms/Tarturus") private ConnectionFactory connectionFactory; @Resource(mappedName = "jms/StyxMDB") private Queue queue; private Connection connection; Question 2: what is the best practice for working with an MDB: processing the full request in onMessage(), or calling another bean (a Stateless bean in my case) from onMessage() to do the processing? The processing includes a few calls to SOAP services, so the full processing time could be around 3 seconds. Thank you.
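    The stack trace actually names the rule being broken ("Illegal use of static field ... on class that only supports instance-based injection"), so a minimal sketch of the instance-based form looks like this (the servlet name mirrors the trace; the rest is illustrative):

    import javax.annotation.Resource;
    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.servlet.http.HttpServlet;

    // Sketch: @Resource injection into a servlet must target instance fields;
    // the container rejects static fields, as the trace above states.
    public class WorkerServlet extends HttpServlet {

        @Resource(mappedName = "jms/Tarturus")
        private ConnectionFactory connectionFactory;   // non-static

        @Resource(mappedName = "jms/StyxMDB")
        private Queue queue;                           // non-static
    }

    Since the container shares one servlet instance across concurrent requests, a per-request Connection (or careful synchronization) is also safer than caching a single one in a field.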

    Read the article

  • Restful Services, oData, and Rest Sharp

    - by jkrebsbach
    After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it. RestSharp is a .Net framework for consuming restful data sources via either Json or XML. My first step was to put together a Restful data source for RestSharp to consume.  Staying entirely withing .Net, I decided to use Microsoft's oData implementation, built on System.Data.Services.DataServices.  Natively, these support Json, or atom+pub xml.  (XML with a few bells and whistles added on) There are three main steps for creating an oData data source: 1)  override CreateDSPMetaData This is where the metadata data is returned.  The meta data defines the structure of the data to return.  The structure contains the relationships between data objects, along with what properties the objects expose.  The meta data can and should be somehow cached so that the structure is not rebuild with every data request. 2) override CreateDataSource The context contains the data the data source will publish.  This method is the conduit which will populate the metadata objects to be returned to the requestor. 3) implement static InitializeService At this point we can set up security, along with setting up properties of the web service (versioning, etc)   Here is a web service which publishes stock prices for various Products (stocks) in various Categories. namespace RestService {     public class RestServiceImpl : DSPDataService<DSPContext>     {         private static DSPContext _context;         private static DSPMetadata _metadata;         /// <summary>         /// Populate traversable data source         /// </summary>         /// <returns></returns>         protected override DSPContext CreateDataSource()         {             if (_context == null)             {                 _context = new DSPContext();                 Category utilities = new Category(0);                 utilities.Name = "Electric";                 Category financials = new Category(1);                 financials.Name = "Financial";                                 IList products = _context.GetResourceSetEntities("Products");                 Product electric = new Product(0, utilities);                 electric.Name = "ABC Electric";                 electric.Description = "Electric Utility";                 electric.Price = 3.5;                 products.Add(electric);                 Product water = new Product(1, utilities);                 water.Name = "XYZ Water";                 water.Description = "Water Utility";                 water.Price = 2.4;                 products.Add(water);                 Product banks = new Product(2, financials);                 banks.Name = "FatCat Bank";                 banks.Description = "A bank that's almost too big";                 banks.Price = 19.9; // This will never get to the client                 products.Add(banks);                 IList categories = _context.GetResourceSetEntities("Categories");                 categories.Add(utilities);                 categories.Add(financials);                 utilities.Products.Add(electric);                 utilities.Products.Add(electric);                 financials.Products.Add(banks);             }             return _context;         }         /// <summary>         /// Setup rules describing published data structure - relationships between data,         /// key field, other searchable fields, etc.         
/// </summary>
/// <returns></returns>
protected override DSPMetadata CreateDSPMetadata()
{
    if (_metadata == null)
    {
        _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");

        // Define entity type product
        ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");
        _metadata.AddKeyProperty(product, "ProductID");

        // Only add properties we wish to share with end users
        _metadata.AddPrimitiveProperty(product, "Name");
        _metadata.AddPrimitiveProperty(product, "Description");

        EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",
            SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
        product.AddEntityPropertyMappingAttribute(att);

        att = new EntityPropertyMappingAttribute("Description",
            SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
        product.AddEntityPropertyMappingAttribute(att);

        // Define products as a set of product entities
        ResourceSet products = _metadata.AddResourceSet("Products", product);

        // Define entity type category
        ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");
        _metadata.AddKeyProperty(category, "CategoryID");
        _metadata.AddPrimitiveProperty(category, "Name");
        _metadata.AddPrimitiveProperty(category, "Description");

        // Define categories as a set of category entities
        ResourceSet categories = _metadata.AddResourceSet("Categories", category);

        att = new EntityPropertyMappingAttribute("Name",
            SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
        category.AddEntityPropertyMappingAttribute(att);

        att = new EntityPropertyMappingAttribute("Description",
            SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
        category.AddEntityPropertyMappingAttribute(att);

        // A product has a category, a category has products
        _metadata.AddResourceReferenceProperty(product, "Category", categories);
        _metadata.AddResourceSetReferenceProperty(category, "Products", products);
    }
    return _metadata;
}

/// <summary>
/// Based on the requesting user, can set up permissions to Read, Write, etc.
/// </summary>
/// <param name="config"></param>
public static void InitializeService(DataServiceConfiguration config)
{
    config.SetEntitySetAccessRule("*", EntitySetRights.All);
    config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    config.DataServiceBehavior.AcceptProjectionRequests = true;
}
    }
}

The objects prefixed with DSP come from the samples on the OData site: http://www.odata.org/developers. The Product and Category objects are POCO business objects with no special modifiers. Three main options are available for defining the metadata of data sources in .NET:

1) Generate an Entity Data Model (potentially directly from a SQL Server database). This requires the least amount of manual interaction and uses the edmx WYSIWYG editor to generate a data model. It can be tied directly to the SQL Server database and generated from it if you want a data access layer tightly coupled with your database.

2) Object model decorations. If you already have a POCO data layer, you can decorate your objects with attributes to statically inform the compiler how the objects are related. The disadvantage is that there are now tags strewn about your business layer that need to be updated as the business rules change.

3) Programmatically construct a metadata object. This is the approach illustrated above in CreateDSPMetadata. It puts all relationship information into one central programmatic location; business rules are applied when the DSPMetadata response object is constructed and returned.

Once you have your service up and running, you can consume it with RestSharp or with the native Microsoft client library. There are currently some differences between the XML that RestSharp expects and the atom+pub format, so I found better results with the JSON implementation; modifying the RestSharp XML parser into an atom+pub parser is fairly trivial though, so use whichever implementation works best for you.

I put together a sample console app which calls the RestServiceImpl.svc service defined above (and assumes it to be running on port 2000). I used RestSharp as a client, as well as the default Microsoft OData client tools.

namespace RestConsole
{
    class Program
    {
        private static DataServiceContext _ctx;

        private enum DemoType
        {
            Xml,
            Json
        }

        static void Main(string[] args)
        {
            // Microsoft implementation
            _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));
            var msProducts = RunQuery<Product>("Products").ToList();
            var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();
            var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();

            // RestSharp implementation
            DemoType demoType = DemoType.Json;
            var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");
            client.ClearHandlers(); // Remove all available handlers

            // Set up handler depending on what situation dictates
            if (demoType == DemoType.Json)
                client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());
            else if (demoType == DemoType.Xml)
            {
                client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());
            }

            var request = new RestRequest();
            if (demoType == DemoType.Json)
                request.RootElement = "d"; // service root element for json
            else if (demoType == DemoType.Xml)
            {
                request.XmlNamespace = "http://www.w3.org/2005/Atom";
            }

            // Return all products
            request.Resource = "/Products?$orderby=Name";
            RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);
            List<Product> products = productsResp.Data;

            // Find category for product with ProductID = 1
            request.Resource = string.Format("/Products(1)/Category");
            RestResponse<Category> categoryResp = client.Execute<Category>(request);
            Category category = categoryResp.Data;

            // Specialized queries
            request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);
            RestResponse<Product> productResp = client.Execute<Product>(request);
            Product product = productResp.Data;

            request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");
            productResp = client.Execute<Product>(request);
            product = productResp.Data;
        }

        private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)
        {
            try
            {
                return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));
            }
            catch (Exception ex)
            {
                throw ex;
            }
        }
    }
}

Feel free to step through the code a few times, and attach a debugger to the service as well, to see how and where the context and metadata objects are constructed and returned. Pay special attention to the response object returned by the OData service; there are several properties on the RestRequest that can help troubleshoot when the structure of the response is not exactly what you expect.
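As a point of comparison for option 2 above, here is a minimal sketch of the attribute-decoration approach. The [DataServiceKey] attribute comes from System.Data.Services.Common; the Product shape shown here is assumed from the article rather than copied from it:

using System.Data.Services.Common;

// A POCO decorated so the reflection provider can infer the entity key.
// The property names mirror the Product entity defined in CreateDSPMetadata.
[DataServiceKey("ProductID")]
public class Product
{
    public int ProductID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

The trade-off is exactly the one the article describes: the relationship and key information now lives on the business objects themselves instead of in one central metadata method.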

    Read the article

  • Glassfish v2 alternatedocroot - will DAS sync it?

    - by ring bearer
Using Sun GlassFish Enterprise Server v2.1.1, I am using "alternatedocroot" via sun-web.xml for my web application, to separate static content from the actual deployable code (EAR/WAR). What I have is a cluster of two server instances distributed across two physical hosts, HOST1 and HOST2. "alternatedocroot" points to /data/static-content/ on both HOST1 and HOST2. Would the DAS (Domain Administration Server) take care of syncing /data/static-content between HOST1 and HOST2 if I use the syncinstances=true option while starting up the cluster? Thanks!
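For reference, an alternate docroot declaration in sun-web.xml looks roughly like the following; the from URL pattern is an assumption, since the question does not show the actual descriptor:

<sun-web-app>
  <!-- Serve requests under /static/* from the shared filesystem path
       instead of from inside the deployed WAR -->
  <property name="alternatedocroot_1"
            value="from=/static/* dir=/data/static-content"/>
</sun-web-app>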

    Read the article

  • Compile time Meta-programming, with string literals.

    - by Hassan Syed
I'm writing some code which could really do with some simple compile time metaprogramming. It is common practice to use empty-struct tags as compile time symbols. I need to decorate the tags with some run-time config elements. Static variables seem the only way to go (to enable meta-programming); however, static variables require global declarations. To side-step this, Scott Meyers' suggestion (from the third edition of Effective C++) about sequencing the initialization of static variables, by declaring them inside a function instead of as class variables, came to mind. So I came up with the following code; my hypothesis is that it will let me have compile-time symbols with string literals usable at runtime. I'm not missing anything, I hope.

template<class Instance>
class TheBestThing
{
public:
    void set_name(const char * name_in)
    {
        get_name() = std::string(name_in);
    }
    void set_fs_location(const char * fs_location_in)
    {
        get_fs_location() = std::string(fs_location_in);
    }
    std::string & get_fs_location()
    {
        static std::string fs_location;
        return fs_location;
    }
    std::string & get_name()
    {
        static std::string name;
        return name;
    }
};

struct tag {};

int main()
{
    TheBestThing<tag> x;
    x.set_name("xyz");
    x.set_fs_location("/etc/lala");
    ImportantObject<x> SinceSlicedBread;
}
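For what it's worth, a small sketch (mine, not from the question) of what the function-local statics buy here: each tag instantiation of TheBestThing keeps fully independent state.

#include <iostream>
#include <string>

struct tag_a {};
struct tag_b {};

int main()
{
    TheBestThing<tag_a> a;
    TheBestThing<tag_b> b;
    a.set_name("alpha");
    b.set_name("beta");
    // Each instantiation of the class template owns its own statics,
    // so the two names do not collide.
    std::cout << a.get_name() << ' ' << b.get_name() << '\n'; // alpha beta
}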

    Read the article

  • Silverlight ComboBox Attached Behavior

    - by Mark Cooper
I am trying to create an attached behavior that can be applied to a Silverlight ComboBox. My behavior is this:

using System.Windows.Controls;
using System.Windows;
using System.Windows.Controls.Primitives;

namespace AttachedBehaviours
{
    public class ConfirmChangeBehaviour
    {
        public static bool GetConfirmChange(Selector cmb)
        {
            return (bool)cmb.GetValue(ConfirmChangeProperty);
        }

        public static void SetConfirmChange(Selector cmb, bool value)
        {
            cmb.SetValue(ConfirmChangeProperty, value);
        }

        public static readonly DependencyProperty ConfirmChangeProperty =
            DependencyProperty.RegisterAttached("ConfirmChange", typeof(bool), typeof(Selector),
                new PropertyMetadata(true, ConfirmChangeChanged));

        public static void ConfirmChangeChanged(DependencyObject d, DependencyPropertyChangedEventArgs args)
        {
            Selector instance = d as Selector;
            if (args.NewValue is bool == false) return;
            if ((bool)args.NewValue)
                instance.SelectionChanged += OnSelectorSelectionChanged;
            else
                instance.SelectionChanged -= OnSelectorSelectionChanged;
        }

        static void OnSelectorSelectionChanged(object sender, RoutedEventArgs e)
        {
            Selector item = e.OriginalSource as Selector;
            MessageBox.Show("Unsaved changes. Are you sure you want to change teams?");
        }
    }
}

This is used in XAML as this:

<UserControl x:Class="AttachedBehaviours.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:this="clr-namespace:AttachedBehaviours"
    mc:Ignorable="d">
    <Grid x:Name="LayoutRoot">
        <StackPanel>
            <ComboBox ItemsSource="{Binding Teams}"
                      this:ConfirmChangeBehaviour.ConfirmChange="true">
            </ComboBox>
        </StackPanel>
    </Grid>
</UserControl>

I am getting an error: Unknown attribute ConfirmChangeBehaviour.ConfirmChange on element ComboBox. [Line: 13 Position: 65] Intellisense is picking up the behavior, why is this failing at runtime? Thanks, Mark EDIT: Register() changed to RegisterAttached(). Same error appears.
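One observation, offered as a sketch rather than a verified fix: RegisterAttached is passed typeof(Selector) as the owner type, while the XAML parser resolves this:ConfirmChangeBehaviour.ConfirmChange against the declaring class, so registering the property against the behaviour class itself is worth trying:

// Hypothetical correction: the owner type should be the class that
// declares the attached property, not the type it attaches to.
public static readonly DependencyProperty ConfirmChangeProperty =
    DependencyProperty.RegisterAttached(
        "ConfirmChange",
        typeof(bool),
        typeof(ConfirmChangeBehaviour),  // was typeof(Selector)
        new PropertyMetadata(true, ConfirmChangeChanged));

// Caveat: with a default of true, setting ConfirmChange="true" in XAML
// may never fire the callback, since the effective value does not change.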

    Read the article

  • C# rounding DateTime objects

    - by grenade
I want to round dates/times to the nearest interval for a charting application. I'd like an extension method with a signature like the following, so that the rounding can be achieved for any level of accuracy:

static DateTime Round(this DateTime date, TimeSpan span);

The idea is that if I pass in a timespan of ten minutes, it will round to the nearest ten-minute interval. I can't get my head around the implementation and am hoping one of you will have written or used something similar before. I think either a floor, ceiling or nearest implementation is fine. Any ideas?

Edit: Thanks to @tvanfosson & @ShuggyCoUk, the implementation looks like this:

public static class DateExtensions
{
    public static DateTime Round(this DateTime date, TimeSpan span)
    {
        long ticks = (date.Ticks + (span.Ticks / 2) + 1) / span.Ticks;
        return new DateTime(ticks * span.Ticks);
    }

    public static DateTime Floor(this DateTime date, TimeSpan span)
    {
        long ticks = date.Ticks / span.Ticks;
        return new DateTime(ticks * span.Ticks);
    }

    public static DateTime Ceil(this DateTime date, TimeSpan span)
    {
        long ticks = (date.Ticks + span.Ticks - 1) / span.Ticks;
        return new DateTime(ticks * span.Ticks);
    }
}

And is called like so:

DateTime nearestHour = DateTime.Now.Round(new TimeSpan(1, 0, 0));
DateTime minuteCeiling = DateTime.Now.Ceil(new TimeSpan(0, 1, 0));
DateTime weekFloor = DateTime.Now.Floor(new TimeSpan(7, 0, 0, 0));

Cheers!
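A quick sanity check of the rounding arithmetic (my example, assuming the parenthesization shown above is the intended one; the +1 makes exact halves round up):

// 10:37 rounded to a 10-minute span lands on 10:40;
// 10:33 lands on 10:30.
var up   = new DateTime(2010, 1, 1, 10, 37, 0).Round(TimeSpan.FromMinutes(10)); // 10:40
var down = new DateTime(2010, 1, 1, 10, 33, 0).Round(TimeSpan.FromMinutes(10)); // 10:30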

    Read the article

  • Demystifying Silverlight Dependency Properties

    - by dwahlin
I have the opportunity to teach a lot of people about Silverlight (amongst other technologies) and one of the topics that definitely confuses people initially is the concept of dependency properties. I confess that when I first heard about them my initial thought was "Why do we need a specialized type of property?" While you can certainly use standard CLR properties in Silverlight applications, Silverlight relies heavily on dependency properties for just about everything it does behind the scenes. In fact, dependency properties are an essential part of the data binding, template, style and animation functionality available in Silverlight; they simply back standard CLR properties. In this post I wanted to put together a (hopefully) simple explanation of dependency properties and why you should care about them if you're currently working with Silverlight or looking to move to it.

What are Dependency Properties?

XAML provides a great way to define layout controls, user input controls, shapes, colors and data binding expressions in a declarative manner. There's a lot that goes on behind the scenes in order to make XAML work, and an important part of that magic is the use of dependency properties. If you want to bind data to a property, style it, animate it or transform it in XAML then the property involved has to be a dependency property to work properly. If you've ever positioned a control in a Canvas using Canvas.Left or placed a control in a specific Grid row using Grid.Row then you've used an attached property, which is a specialized type of dependency property (a short XAML reminder appears at the end of this section). Dependency properties play a key role in XAML and the overall Silverlight framework. Any property that you bind, style, template, animate or transform must be a dependency property in Silverlight applications. You can programmatically bind values to controls and work with standard CLR properties, but if you want to use the built-in binding expressions available in XAML (one of my favorite features) or the Binding class available through code then dependency properties are a necessity.

Dependency properties aren't needed in every situation, but if you want to customize your application very much you'll eventually end up needing them. For example, if you create a custom user control and want to expose a property that consumers can use to change the background color, you have to define it as a dependency property if you want bindings, styles and other features to be available for use. Now that the overall purpose of dependency properties has been discussed, let's take a look at how you can create them.

Creating Dependency Properties

When .NET first came out you had to write backing fields for each property that you defined, as shown next:

Brush _ScheduleBackground;
public Brush ScheduleBackground
{
    get { return _ScheduleBackground; }
    set { _ScheduleBackground = value; }
}

Although .NET 2.0 added auto-implemented properties (for example: public Brush ScheduleBackground { get; set; }) where the compiler automatically generates the backing field used by the get and set blocks, the concept is still the same as in the above code: a property acts as a wrapper around a field. Silverlight dependency properties replace the _ScheduleBackground field shown in the previous code and act as the backing store for a standard CLR property.
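As promised above, here is what attached-property usage looks like in XAML; this snippet is standard Silverlight markup rather than code from the post:

<Canvas>
    <!-- Canvas.Left and Canvas.Top are attached properties: defined by
         Canvas, but set on the child element being positioned. -->
    <Rectangle Canvas.Left="40" Canvas.Top="20"
               Width="100" Height="50" Fill="LightGray" />
</Canvas>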
The following code shows an example of defining a dependency property named ScheduleBackgroundProperty:

public static readonly DependencyProperty ScheduleBackgroundProperty =
    DependencyProperty.Register("ScheduleBackground", typeof(Brush),
        typeof(Scheduler), null);

Looking through the code, the first thing that may stand out is that the definition for ScheduleBackgroundProperty is marked as static and readonly and that the property appears to be of type DependencyProperty. This is a standard pattern that you'll use when working with dependency properties. You'll also notice that the property explicitly adds the word "Property" to the name, which is another standard you'll see followed. In addition to defining the property, the code also makes a call to the static DependencyProperty.Register method and passes the name of the property to register (ScheduleBackground in this case) as a string. The type of the property, the type of the class that owns the property and a null value (more on the null value later) are also passed. In this example a class named Scheduler acts as the owner.

The code handles registering the property as a dependency property with the call to Register(), but there's a little more work that has to be done to allow a value to be assigned to and retrieved from the dependency property. The following code shows the complete code that you'll typically use when creating a dependency property. You can find code snippets that greatly simplify the process of creating dependency properties out on the web. The MVVM Light download available from http://mvvmlight.codeplex.com comes with built-in dependency property snippets as well.

public static readonly DependencyProperty ScheduleBackgroundProperty =
    DependencyProperty.Register("ScheduleBackground", typeof(Brush),
        typeof(Scheduler), null);

public Brush ScheduleBackground
{
    get { return (Brush)GetValue(ScheduleBackgroundProperty); }
    set { SetValue(ScheduleBackgroundProperty, value); }
}

The standard CLR property code shown above should look familiar since it simply wraps the dependency property. However, you'll notice that the get and set blocks call the GetValue and SetValue methods respectively to perform the appropriate operation on the dependency property. GetValue and SetValue are members of the DependencyObject class, which is another key component of the Silverlight framework. Silverlight controls and classes (TextBox, UserControl, CompositeTransform, DataGrid, etc.) ultimately derive from DependencyObject in their inheritance hierarchy so that they can support dependency properties. Dependency properties defined in Silverlight controls and other classes tend to follow the pattern of registering the property by calling Register() and then wrapping the dependency property in a standard CLR property (as shown above). They have a standard property that wraps a registered dependency property and allows a value to be assigned and retrieved. If you need to expose a new property on a custom control that supports data binding expressions in XAML then you'll follow this same pattern. Dependency properties are extremely useful once you understand why they're needed and how they're defined.

Detecting Changes and Setting Defaults

When working with dependency properties there will be times when you want to assign a default value or detect when a property changes so that you can keep the user interface in sync with the property value.
Silverlight's DependencyProperty.Register() method provides a fourth parameter that accepts a PropertyMetadata object instance. PropertyMetadata can be used to hook a callback method to a dependency property; the callback method is called when the property value changes. PropertyMetadata can also be used to assign a default value to the dependency property. By assigning a value of null for the final parameter passed to Register() you're telling the property that you don't care about any changes and don't have a default value to apply. Here are the different constructor overloads available on the PropertyMetadata class:

PropertyMetadata(Object) - used to assign a default value to a dependency property.
PropertyMetadata(PropertyChangedCallback) - used to assign a property changed callback method.
PropertyMetadata(Object, PropertyChangedCallback) - used to assign a default property value and a property changed callback.

There are many situations where you need to know when a dependency property changes or where you want to apply a default. Performing either task is easily accomplished by creating a new instance of the PropertyMetadata class and passing the appropriate values to its constructor. The following code shows an enhanced version of the initial dependency property code shown earlier that demonstrates these concepts:

public Brush ScheduleBackground
{
    get { return (Brush)GetValue(ScheduleBackgroundProperty); }
    set { SetValue(ScheduleBackgroundProperty, value); }
}

public static readonly DependencyProperty ScheduleBackgroundProperty =
    DependencyProperty.Register("ScheduleBackground", typeof(Brush), typeof(Scheduler),
        new PropertyMetadata(new SolidColorBrush(Colors.LightGray), ScheduleBackgroundChanged));

private static void ScheduleBackgroundChanged(DependencyObject d,
    DependencyPropertyChangedEventArgs e)
{
    var scheduler = d as Scheduler;
    scheduler.Background = e.NewValue as Brush;
}

The code wires ScheduleBackgroundProperty to a property change callback method named ScheduleBackgroundChanged. What's interesting is that this callback method is static (as is the dependency property), so it gets passed the instance of the object that owns the property that has changed (otherwise we wouldn't be able to get to the object instance). In this example the dependency object is cast to a Scheduler object and its Background property is assigned the new value of the dependency property. The code also handles assigning a default value of LightGray to the dependency property by creating a new instance of a SolidColorBrush.

To Sum Up

In this post you've seen the role of dependency properties and how they can be defined in code. They play a big role in XAML and the overall Silverlight framework. You can think of dependency properties as being replacements for the fields that you'd normally use with standard CLR properties. In addition to a discussion on how dependency properties are created, you also saw how to use the PropertyMetadata class to define default dependency property values and hook a dependency property to a callback method. The most important thing to understand with dependency properties (especially if you're new to Silverlight) is that they're needed if you want a property to support data binding, animations, transformations and styles properly. Any time you create a property on a custom control or user control that has these types of requirements, you'll want to pick a dependency property over a standard CLR property with a backing field.
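For instance, with ScheduleBackground registered as above, a consumer could bind the property directly in markup; the local prefix here is an assumed xmlns mapping to the assembly containing the Scheduler control:

<!-- e.g. xmlns:local="clr-namespace:SchedulerControls" (hypothetical) -->
<local:Scheduler ScheduleBackground="{Binding SelectedBrush}" />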
There's more that can be covered with dependency properties, including a related kind of property called an attached property... more to come.

    Read the article

  • Should this immutable struct be a mutable class?

    - by ChaosPandion
I showed this struct to a fellow programmer and they felt that it should be a mutable class. They felt it is inconvenient not to have null references and the ability to alter the object as required. I would really like to know if there are any other reasons to make this a mutable class.

[Serializable]
public struct PhoneNumber : ICloneable, IEquatable<PhoneNumber>
{
    private const int AreaCodeShift = 54;
    private const int CentralOfficeCodeShift = 44;
    private const int SubscriberNumberShift = 30;
    private const int CentralOfficeCodeMask = 0x000003FF;
    private const int SubscriberNumberMask = 0x00003FFF;
    private const int ExtensionMask = 0x3FFFFFFF;

    private readonly ulong value;

    public int AreaCode { get { return UnmaskAreaCode(value); } }
    public int CentralOfficeCode { get { return UnmaskCentralOfficeCode(value); } }
    public int SubscriberNumber { get { return UnmaskSubscriberNumber(value); } }
    public int Extension { get { return UnmaskExtension(value); } }

    public PhoneNumber(ulong value)
        : this(UnmaskAreaCode(value), UnmaskCentralOfficeCode(value),
               UnmaskSubscriberNumber(value), UnmaskExtension(value), true)
    {
    }

    public PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber)
        : this(areaCode, centralOfficeCode, subscriberNumber, 0, true)
    {
    }

    public PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber, int extension)
        : this(areaCode, centralOfficeCode, subscriberNumber, extension, true)
    {
    }

    private PhoneNumber(int areaCode, int centralOfficeCode, int subscriberNumber, int extension, bool throwException)
    {
        value = 0;
        if (areaCode < 200 || areaCode > 989)
        {
            if (!throwException) return;
            throw new ArgumentOutOfRangeException("areaCode", areaCode,
                @"The area code portion must fall between 200 and 989.");
        }
        else if (centralOfficeCode < 200 || centralOfficeCode > 999)
        {
            if (!throwException) return;
            throw new ArgumentOutOfRangeException("centralOfficeCode", centralOfficeCode,
                @"The central office code portion must fall between 200 and 999.");
        }
        else if (subscriberNumber < 0 || subscriberNumber > 9999)
        {
            if (!throwException) return;
            throw new ArgumentOutOfRangeException("subscriberNumber", subscriberNumber,
                @"The subscriber number portion must fall between 0 and 9999.");
        }
        else if (extension < 0 || extension > 1073741824)
        {
            if (!throwException) return;
            throw new ArgumentOutOfRangeException("extension", extension,
                @"The extension portion must fall between 0 and 1073741824.");
        }
        else if (areaCode.ToString()[1] - 48 > 8)
        {
            if (!throwException) return;
            throw new ArgumentOutOfRangeException("areaCode", areaCode,
                @"The second digit of the area code cannot be greater than 8.");
        }
        else
        {
            value |= ((ulong)(uint)areaCode << AreaCodeShift);
            value |= ((ulong)(uint)centralOfficeCode << CentralOfficeCodeShift);
            value |= ((ulong)(uint)subscriberNumber << SubscriberNumberShift);
            value |= ((ulong)(uint)extension);
        }
    }

    public object Clone()
    {
        return this;
    }

    public override bool Equals(object obj)
    {
        return obj != null && obj.GetType() == typeof(PhoneNumber) && Equals((PhoneNumber)obj);
    }

    public bool Equals(PhoneNumber other)
    {
        return this.value == other.value;
    }

    public override int GetHashCode()
    {
        return value.GetHashCode();
    }

    public override string ToString()
    {
        return ToString(PhoneNumberFormat.Separated);
    }

    public string ToString(PhoneNumberFormat format)
    {
        switch (format)
        {
            case PhoneNumberFormat.Plain:
                return string.Format(@"{0:D3}{1:D3}{2:D4} {3:#}",
                    AreaCode, CentralOfficeCode, SubscriberNumber, Extension).Trim();
            case PhoneNumberFormat.Separated:
                return string.Format(@"{0:D3}-{1:D3}-{2:D4} {3:#}",
                    AreaCode, CentralOfficeCode, SubscriberNumber, Extension).Trim();
            default:
                throw new ArgumentOutOfRangeException("format");
        }
    }

    public ulong ToUInt64()
    {
        return value;
    }

    public static PhoneNumber Parse(string value)
    {
        var result = default(PhoneNumber);
        if (!TryParse(value, out result))
        {
            throw new FormatException(string.Format(
                @"The string ""{0}"" could not be parsed as a phone number.", value));
        }
        return result;
    }

    public static bool TryParse(string value, out PhoneNumber result)
    {
        result = default(PhoneNumber);
        if (string.IsNullOrEmpty(value))
        {
            return false;
        }
        var index = 0;
        var numericPieces = new char[value.Length];
        foreach (var c in value)
        {
            if (char.IsNumber(c))
            {
                numericPieces[index++] = c;
            }
        }
        if (index < 9)
        {
            return false;
        }
        var numericString = new string(numericPieces);
        var areaCode = int.Parse(numericString.Substring(0, 3));
        var centralOfficeCode = int.Parse(numericString.Substring(3, 3));
        var subscriberNumber = int.Parse(numericString.Substring(6, 4));
        var extension = 0;
        if (numericString.Length > 10)
        {
            extension = int.Parse(numericString.Substring(10));
        }
        result = new PhoneNumber(areaCode, centralOfficeCode, subscriberNumber, extension, false);
        return result.value != 0; // valid input always packs to a non-zero value
    }

    public static bool operator ==(PhoneNumber left, PhoneNumber right)
    {
        return left.Equals(right);
    }

    public static bool operator !=(PhoneNumber left, PhoneNumber right)
    {
        return !left.Equals(right);
    }

    private static int UnmaskAreaCode(ulong value)
    {
        return (int)(value >> AreaCodeShift);
    }

    private static int UnmaskCentralOfficeCode(ulong value)
    {
        return (int)((value >> CentralOfficeCodeShift) & CentralOfficeCodeMask);
    }

    private static int UnmaskSubscriberNumber(ulong value)
    {
        return (int)((value >> SubscriberNumberShift) & SubscriberNumberMask);
    }

    private static int UnmaskExtension(ulong value)
    {
        return (int)(value & ExtensionMask);
    }
}

public enum PhoneNumberFormat
{
    Plain,
    Separated
}

    Read the article

< Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >