Search Results

Search found 16704 results on 669 pages for 'ews managed api'.


  • Can't handle it (jQuery handler understanding needed)

    - by chainwork
    I'm embarrassed to even ask, but could someone help me understand what a "handler" is? I am new to jQuery, and the API constantly has references similar to the following: toggle( handler(eventObject), handler(eventObject), [ handler(eventObject) ] ). I scratch my head and say to myself, "what the hell is a handler?" Then I check my two jQuery books and don't really see anything specific there. I get what an event handler does (it handles an event), but the word "handler" in the above context confuses me, including "eventObject". I tried to Google it but could not find a really clear definition of what exactly a handler is as it relates to jQuery. Thanks for your help =]
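
    For context, a handler is nothing jQuery-specific: it is simply a function you pass as an argument, and jQuery invokes it when the event fires, handing it a normalized event object (the eventObject in the signature). The two-handler form of toggle quoted above just means "one function per alternating click", along these lines (the #box selector is a hypothetical element): $("#box").toggle(function(eventObject) { /* odd clicks */ }, function(eventObject) { /* even clicks */ });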

    Read the article

  • Java - Counting occurrences of a word in a huge text file

    - by Naveen
    I have a text file of size 115MB. It consists of about 20 million words. I have to use the file as a word collection and search it for the occurrences of each user-given word. I am using this process as a small part of my project. I need a method for finding the number of occurrences of a given word quickly and correctly, since I may use it in iterations. I am in need of suggestions about any API I can make use of, or some other approach that performs the task faster. Any recommendations are appreciated.
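
    A minimal sketch of the usual approach, in case it helps (the file name and the tokenization rule are illustrative assumptions, not from the question): read the file once, build a HashMap of word frequencies, and then every lookup afterwards is a constant-time map query no matter how many iterations follow.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.HashMap;
    import java.util.Map;

    public class WordCounter {
        private final Map<String, Integer> counts = new HashMap<>();

        // Single pass over the file; afterwards every query is a map lookup.
        public WordCounter(String path) throws IOException {
            try (BufferedReader reader = Files.newBufferedReader(Paths.get(path))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    // Naive tokenization on non-letter runs; adjust to the corpus.
                    for (String word : line.toLowerCase().split("[^a-z']+")) {
                        if (!word.isEmpty())
                            counts.merge(word, 1, Integer::sum);
                    }
                }
            }
        }

        public int occurrences(String word) {
            return counts.getOrDefault(word.toLowerCase(), 0);
        }

        public static void main(String[] args) throws IOException {
            WordCounter wc = new WordCounter("words.txt"); // hypothetical file name
            System.out.println(wc.occurrences("example"));
        }
    }

    Note that the map holds one entry per distinct word, not per word occurrence, so for typical text it fits easily in memory even though the file has 20 million words.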

    Read the article

  • I can't upload a file with CGIHTTPServer

    - by sdemingo
    Hi all, I'm using CGIHTTPServer to implement a simple CGI server. I'm trying to upload a file with a form using the POST method and the multipart/form-data enctype, but I have problems when I recover the values of the fields in the CGI script. When the script catches the form fields, the value of the file is a MiniFieldStorage with two fields only (key and file name), and I can't recover the content of the file. As the API doc shows, this content should be in the value field of a FieldStorage, but in a MiniFieldStorage this field doesn't exist. How can I recover a FieldStorage with the content of the file instead of a MiniFieldStorage? Is there another method to upload a file using CGIHTTPServer? Thanks a lot
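
    One hedged observation rather than a confirmed fix: in Python's cgi module, MiniFieldStorage is the class used for fields parsed from a query string or a URL-encoded body, while genuine multipart/form-data parts come back as FieldStorage instances whose .file attribute exposes the uploaded content. Seeing a MiniFieldStorage for the file field therefore suggests the request was never parsed as multipart at all, so it is worth verifying with a traffic capture that the browser really sent Content-Type: multipart/form-data with a boundary, and that the script reads the form via cgi.FieldStorage() before touching any of the input.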

    Read the article

  • Bootstrap site mobile view not using full viewport

    - by jbarnett
    I'm currently making a responsive blog using the Bootstrap 3.0 framework. I'm using the 1197px max-width container for my content, and it renders fine on desktops. However, when I try opening the site on my phone (an Android Galaxy Note 2), there is a lot of extra padding on the right and left sides of the viewport. I've tried following the API docs and guides as closely as possible, but still can't get this to work. Here is the site I am working on: http://www.justinbar.net Does anyone know what is going on here? Am I doing something wrong, or do I need to override the default behavior (which sounds a bit hacky to me)?
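
    One common culprit worth ruling out (an assumption, since the page source isn't quoted here): Bootstrap 3 requires the viewport meta tag, <meta name="viewport" content="width=device-width, initial-scale=1">, in the page head. Without it, mobile browsers lay the page out at a desktop width and scale it down, which shows up as exactly this kind of stray space around the container; content that escapes the .container (for example, a .row placed outside one) produces a similar symptom.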

    Read the article

  • Cron library for Java

    - by nutsiepully
    I am looking for a cron expression library in Java: something that can parse cron expressions and return future fire times for the trigger, with an API along the lines of:

    CronExpression cronExpression = new CronExpression("0 30 4 * * *");
    List<Date> fireTimes = cronExpression.getFireTimes(todaysDate, nextWeekDate);

    I don't want to use something as complicated as Quartz. The purpose is basically to use cron like a regex for timings, that's all; I do not want a background scheduler. I tried googling but wasn't able to find anything very helpful. Any suggestions would be appreciated. Regards, Pulkit P.S. I looked at using the CronExpression class out of Quartz. It wasn't very helpful, failing some tests.
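
    For reference, a rough sketch of the desired API built on the org.quartz.CronExpression class mentioned above (getNextValidTimeAfter() is its real entry point, though this of course still drags in the Quartz dependency the asker is trying to avoid):

    import java.text.ParseException;
    import java.util.ArrayList;
    import java.util.Date;
    import java.util.List;
    import org.quartz.CronExpression;

    public class CronFireTimes {
        // Collect every fire time of the cron expression inside [from, to).
        public static List<Date> getFireTimes(String cron, Date from, Date to)
                throws ParseException {
            CronExpression expression = new CronExpression(cron);
            List<Date> fireTimes = new ArrayList<>();
            Date next = expression.getNextValidTimeAfter(from);
            while (next != null && next.before(to)) {
                fireTimes.add(next);
                next = expression.getNextValidTimeAfter(next);
            }
            return fireTimes;
        }
    }

    One caveat: Quartz insists that one of the day-of-month/day-of-week fields be '?', so the "0 30 4 * * *" example above would itself throw a ParseException ("0 30 4 * * ?" is the Quartz spelling), which may explain the failing tests mentioned in the question.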

    Read the article

  • RESTful client on CodeIgniter issue

    - by user1852837
    This is weird. I don't know what the problem is with my website. It works on my local server but not on the live server. The login page works on the first sign-in, but after logging out and logging in again the message says "invalid username and password", even though the same credentials worked on the first attempt. While debugging I found that http://xxxxx.com/api/authentication/sign is not found; it displays a 404 page. Sometimes you can log in and sometimes not, yet on my local it always works. I contacted the web server admin and asked what the status of sessions on the server is and how it executes its web requests (sockets, file_get_contents, curl?). They said that no problems were reproduced with server sessions and that PHP curl works fine. I know it's weird, but can somebody here figure out the problem behind it?

    Read the article

  • Determining smallest number of samples for 99% accuracy

    - by test
    I'm trying to compare 100,000 records in a local database (L) with 100,000 records in a remote database (R). Basically I want to know if an element in L exists in R. To determine that, I have to make a request against R for each element of L, which takes a long time (I know there should be a better way; there isn't, that's the API I've got). So I would like to test a small sample of L against R, and then infer with some level of confidence how many are present in the whole of R. How many do I have to test to have a 99% confidence level?
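
    For what it's worth, the standard sample-size formula for estimating a proportion applies here (a sketch, assuming a simple random sample and the worst-case proportion p = 0.5): n = z^2 * p(1 - p) / E^2, where z is about 2.576 for 99% confidence and E is the acceptable margin of error. With E = 0.05 that gives n of roughly 664 requests; tightening E to 0.01 pushes n to roughly 16,600, nearly a sixth of the whole set. In other words, the answer is driven by the margin of error you can live with, not by the 99% confidence level alone.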

    Read the article

  • ASP.NET MVC 2 Released

    - by ScottGu
    I'm happy to announce that the final release of ASP.NET MVC 2 is now available for VS 2008/Visual Web Developer 2008 Express with ASP.NET 3.5. You can download and install it from the following locations:

    - Download ASP.NET MVC 2 using the Microsoft Web Platform Installer
    - Download ASP.NET MVC 2 from the Download Center

    The final release of VS 2010 and Visual Web Developer 2010 will have ASP.NET MVC 2 built in, so you won't need an additional install in order to use ASP.NET MVC 2 with them.

    ASP.NET MVC 2

    We shipped ASP.NET MVC 1 a little less than a year ago. Since then, almost 1 million developers have downloaded and used the final release, and its popularity has steadily grown month over month. ASP.NET MVC 2 is the next significant update of ASP.NET MVC. It is a compatible update to ASP.NET MVC 1, so all the knowledge, skills, code, and extensions you already have with ASP.NET MVC continue to work and apply going forward. Like the first release, we are also shipping the source code for ASP.NET MVC 2 under an OSI-compliant open-source license. ASP.NET MVC 2 can be installed side-by-side with ASP.NET MVC 1 (meaning you can have some apps built with V1 and others built with V2 on the same machine). We have instructions on how to update your existing ASP.NET MVC 1 apps to use ASP.NET MVC 2 using VS 2008 here. Note that VS 2010 has an automated upgrade wizard that can automatically migrate your existing ASP.NET MVC 1 applications to ASP.NET MVC 2 for you.

    ASP.NET MVC 2 Features

    ASP.NET MVC 2 adds a bunch of new capabilities and features. I've started a blog series about some of the new features, and will be covering them in more depth in the weeks ahead. Some of the new features and capabilities include:

    - New strongly typed HTML helpers
    - Enhanced model validation support across both server and client
    - Auto-scaffold UI helpers with template customization
    - Support for splitting up large applications into "Areas"
    - Asynchronous controllers support that enables long-running tasks in parallel
    - Support for rendering sub-sections of a page/site using Html.RenderAction
    - Lots of new helper functions, utilities, and API enhancements
    - Improved Visual Studio tooling support

    You can learn more about these features in the "What's New in ASP.NET MVC 2" document on the www.asp.net/mvc website. We are going to be posting a lot of new tutorials and videos shortly on www.asp.net/mvc that cover all the features in the ASP.NET MVC 2 release. We will also post an updated end-to-end tutorial built entirely with ASP.NET MVC 2 (much like the NerdDinner tutorial that I wrote that covers ASP.NET MVC 1).

    Summary

    The ASP.NET MVC team delivered regular V2 preview releases over the last year to get feedback on the feature set. I'd like to say a big thank you to everyone who tried out the previews and sent us suggestions/feedback/bug reports. We hope you like the final release!

    Scott

    Read the article

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
    Over the last few weeks I've been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request: URL, headers and any content posted by the client. The result of what I ended up creating is this semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

    - FiddlerCore Download
    - FiddlerCore on NuGet
    - Show me the Code (WebSurge Integration code from GitHub)
    - Download the WinForms Sample Form
    - West Wind WebSurge (example implementation in live app)

    Note that FiddlerCore is bound by a license for commercial usage; see license.txt in the FiddlerCore distribution for details.

    Integrating FiddlerCore

    FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

    PM> Install-Package FiddlerCore

    The library consists of the FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post, but first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

    Capturing HTTP Content

    Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on a form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
    Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

    void Start()
    {
        if (tbIgnoreResources.Checked)
            CaptureConfiguration.IgnoreResources = true;
        else
            CaptureConfiguration.IgnoreResources = false;

        string strProcId = txtProcessId.Text;
        if (strProcId.Contains('-'))
            strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
        strProcId = strProcId.Trim();

        int procId = 0;
        if (!string.IsNullOrEmpty(strProcId))
        {
            if (!int.TryParse(strProcId, out procId))
                procId = 0;
        }
        CaptureConfiguration.ProcessId = procId;
        CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

        FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
        FiddlerApplication.Startup(8888, true, true, true);
    }

    The key lines for FiddlerCore are just the last two lines of code: the event hookup and the Startup() method call. Here I only hook up the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle, including BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options for capturing it. AfterSessionComplete is the last event that fires in the request sequence and it's the most common choice for capturing all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and the response data, so this will be the most common place to hook into if you're capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:

    private void FiddlerApplication_AfterSessionComplete(Session sess)
    {
        // Ignore HTTPS connect requests
        if (sess.RequestMethod == "CONNECT")
            return;

        if (CaptureConfiguration.ProcessId > 0)
        {
            if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
                return;
        }

        if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
        {
            if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
                return;
        }

        if (CaptureConfiguration.IgnoreResources)
        {
            string url = sess.fullUrl.ToLower();

            var extensions = CaptureConfiguration.ExtensionFilterExclusions;
            foreach (var ext in extensions)
            {
                if (url.Contains(ext))
                    return;
            }

            var filters = CaptureConfiguration.UrlFilterExclusions;
            foreach (var urlFilter in filters)
            {
                if (url.Contains(urlFilter))
                    return;
            }
        }

        if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
            return;

        string headers = sess.oRequest.headers.ToString();
        var reqBody = sess.GetRequestBodyAsString();

        // if you wanted to capture the response
        //string respHeaders = session.oResponse.headers.ToString();
        //var respBody = session.GetResponseBodyAsString();

        // replace the HTTP line to inject the full URL
        string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
        int at = headers.IndexOf("\r\n");
        if (at < 0)
            return;
        headers = firstLine + "\r\n" + headers.Substring(at + 1);

        string output = headers + "\r\n" +
                        (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                        Separator + "\r\n\r\n";

        BeginInvoke(new Action<string>((text) =>
        {
            txtCapture.AppendText(text);
            UpdateButtonStatus();
        }), output);
    }

    The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in, which is very useful for narrowing down the requests that are captured for playback based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as to filter out some request types like static resources (images, css, scripts etc.). This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature. AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI it has to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form. As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports: raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or save it directly to a file. This raw capture format is the same format WebSurge, and also Fiddler, use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful: you get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

    Stopping the FiddlerCore Proxy

    Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

    void Stop()
    {
        FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

        if (FiddlerApplication.IsStarted())
            FiddlerApplication.Shutdown();
    }

    As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here; I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
    Adding Fiddler Certificates with FiddlerCore

    One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler with HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn't ideal. Fortunately, FiddlerCore actually includes the tools to register the Fiddler certificate directly.

    Why does Fiddler need a Certificate in the first Place?

    Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that's why most clients automatically work once Fiddler, or FiddlerCore/WebSurge, is running. For plain HTTP requests this just works: Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple. Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won't be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence's blog post on Certificates. If you're using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store; Fiddler proper does this from the Options menu. This operation does several things:

    - It installs the Fiddler Root Certificate
    - It sets trust to this Root Certificate
    - A new client certificate is generated for each HTTPS site monitored

    Certificate Installation with FiddlerCore

    You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
    CertMaker is straightforward to use and provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;

            if (!CertMaker.trustRootCert())
                return false;
        }
        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }
        return true;
    }

    InstallCertificate() works by first checking whether the root certificate is already installed and, if it isn't, goes ahead and creates a new one. Creating the certificate is a two-step process: first the actual certificate is created, and then it's moved into the certificate store to become trusted. I'm not sure why you'd ever split these operations up, since a cert created without trust isn't going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you're about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It's quite safe to use this generated root certificate, because it's been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there's no useful way to hijack this certificate and use it for nefarious purposes (see Eric's post for more details). Once the root certificate has been installed, FiddlerCore/Fiddler creates new certificates for each site that is connected to with HTTPS, so you can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the Remove Fiddler certificates button, or you can use FiddlerCore's CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application's operation by doing so as well.

    When to check for an installed Certificate

    Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that takes a few seconds even on a fast machine. Further, the trust operation pops up a message box, so you probably don't want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code, in which case the certificate installation only triggers when a certificate is in fact not installed. Personally, I like to make certificate installation explicit, just like Fiddler does, so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate and UninstallCertificate functions respectively; the experience is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
    Once the cert is installed you can then capture SSL requests. There's a gotcha, however...

    Gotcha: FiddlerCore Certificates don't stick by Default

    When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate and, immediately after installation, was able to capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate a new certificate would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

    CertMaker and BcMakeCert create non-sticky Certificates

    It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, as the certificates created with them are not cached as they are in Fiddler proper. To get certificates to 'stick' you have to explicitly cache the certificates in Fiddler's internal preferences. A cache-aware version of InstallCertificate looks something like this:

    public static bool InstallCertificate()
    {
        if (!CertMaker.rootCertExists())
        {
            if (!CertMaker.createRootCert())
                return false;

            if (!CertMaker.trustRootCert())
                return false;

            App.Configuration.UrlCapture.Cert =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
            App.Configuration.UrlCapture.Key =
                FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
        }
        return true;
    }

    public static bool UninstallCertificate()
    {
        if (CertMaker.rootCertExists())
        {
            if (!CertMaker.removeFiddlerGeneratedCerts(true))
                return false;
        }
        App.Configuration.UrlCapture.Cert = null;
        App.Configuration.UrlCapture.Key = null;
        return true;
    }

    In this code I store the Fiddler cert and private key in an application configuration setting that's stored with the application settings (the App.Configuration.UrlCapture object); these settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you have to also load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
    I do this in the capture form's constructor:

    public FiddlerCapture(StressTestForm form)
    {
        InitializeComponent();

        CaptureConfiguration = App.Configuration.UrlCapture;
        MainForm = form;

        if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
        {
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key", App.Configuration.UrlCapture.Key);
            FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert", App.Configuration.UrlCapture.Cert);
        }
    }

    This is kind of a drag to do, and it's not documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

    MakeCert provides sticky Certificates and the same functionality as Fiddler

    But there's actually an easier way. If you want to skip the above Fiddler preference configuration code in your application, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices, it's simply easier to deal with. To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) from your project and instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None, and Copy to Output Directory to Copy if newer. Note that the CertMaker.dll reference is removed from the project, along with the CertMaker.dll and BCMakeCert.dll files on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

    Summary

    FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up, because it's a big job to get HTTP right, especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily. The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can't find the answer. The best documentation you can find is Eric's Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
    The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP; it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference, as it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

    Resources

    - FiddlerCore Download
    - FiddlerCore NuGet
    - Fiddler Capture Sample Form
    - Fiddler Capture Form in West Wind WebSurge (GitHub)
    - Eric Lawrence's Fiddler Book

    © Rick Strahl, West Wind Technologies, 2005-2014
    Posted in .NET, HTTP

    Read the article

  • IIS7 Failure after installing Advanced Logging

    - by Guy Harwood
    I came across a nasty issue when I installed the Advanced Logging feature for IIS7 via the Web Platform Installer on my Windows 2008 server. Basically, after installation and reboot none of my sites were working and all returned HTTP 503 errors. Snooping around in the Event Viewer I found the following error reported by the W3SVC:

    The Module DLL C:\Program Files\IIS\Advanced Logging\AdvancedLoggingModule.dll failed to load. The data is the error

    Even though the DLLs are there, it is not picking them up. I managed to find a fix via Google that involves editing the applicationHost.config file in the C:\Windows\System32\inetsrv\config\ directory.

    1. Copy AdvancedLoggingModule.dll and ClientLoggingHandler.dll to %windir%\system32 (C:\Windows\System32 on a default setup).

    2. Locate the file C:\Windows\System32\inetsrv\config\applicationHost.config and make a backup, then open it in a text editor (I recommend Notepad++).

    3. Search for the following two lines (mine are located on line 570):

    <add name="ClientLoggingHandler" image="%ProgramFiles%\IIS\Advanced Logging\ClientLoggingHandler.dll" />
    <add name="AdvancedLoggingModule" image="%ProgramFiles%\IIS\Advanced Logging\AdvancedLoggingModule.dll" />

    and alter them to:

    <add name="ClientLoggingHandler" image="%windir%\system32\ClientLoggingHandler.dll" />
    <add name="AdvancedLoggingModule" image="%windir%\system32\AdvancedLoggingModule.dll" />

    4. Open a command prompt and run iisReset.

    5. All sites should now be working.

    Read the article

  • BizTalk and IBM WebSphere MQ Errors

    - by Christopher House
    The project I'm currently working on is going to make heavy use of IBM WebSphere MQ to send messages from BizTalk to the client's iSeries box. I'd never previously worked with WebSphere MQ, so I didn't really have any idea what it would take to get this to work. I was pleasantly surprised that it wasn't too difficult to configure a send port and pass messages through it to a queue. Or so I thought... A couple of weeks ago, the client gave me the name of a host, queue manager and queue that I'd been using for my development. Everything was going great: I was able to put messages onto the queue, I was happy, the client was happy. Life was good. Then the client tells me that the host I've been connecting to is actually a Solaris box and that in prod, we'll actually be sending to an iSeries. We both agree that it would behoove us to start pointing my dev environment to their dev iSeries box in order to flush out any weirdness there might be. As it turns out, it was a good thing we made the change. As soon as I reconfigured my BRE policy that sets endpoint information to point to the iSeries queue, we started seeing failures in the event log. An example from the event log:

    Event Type: Error
    Event Source: BizTalk Server 2009
    Event Category: BizTalk Server 2009
    Event ID: 5754
    Date: 6/9/2010
    Time: 10:16:41 AM
    User: N/A
    Computer: WINDOWS2003
    Description: A message sent to adapter "MQSC" on send port "<my dynamic sendport name>" with URI "mqsc://client/tcp/<hostname>(1414)/<queue manager name>/<queue name>" is suspended. Error details: Failure encountered while attempting to open queue. queue = <queue name> queueManager = <queue manager name>, reasonCode = 6124
    MessageId: {76825C7C-611A-4A56-8A6F-35E1124BDB5C}
    InstanceID: {BA389103-DF9B-493F-8C61-44574822AAD6}

    The key piece of information in the event entry is the reasonCode, 6124. A quick Google search shows that reasonCode 6124 is the code for MQRC_NOT_CONNECTED. According to IBM's docs, this means that you've tried to send a message without first opening a connection to the queue manager. Obviously, in the context of BizTalk, this is an unexpected error, since this sort of thing should be managed entirely by the send adapter. Perusing IBM's documentation a bit more, I came across some info on how to turn on tracing for MQ. With tracing enabled, I tried sending a message again, then went and reviewed the trace files. The bulk of the information in the trace files didn't mean a thing to me, but at the end of one of the files, I did notice this:

    00006257 15:40:20.327795   3500.4      RSESS:000009 ------{  reqReleaseConn
    00006258 15:40:20.328714   3500.4      RSESS:000009 ------}  reqReleaseConn (rc=OK)
    00006259 15:40:20.328727   3500.4      RSESS:000009 ------{  xcsClearTraceIdent
    0000625A 15:40:20.328739   3500.4           :       ------}  xcsClearTraceIdent (rc=OK)
    0000625B 15:40:20.328752   3500.4           :       -----}! trmzstMQCONNX (rc=MQRC_NOT_AUTHORIZED)
    0000625C 15:40:20.328765   3500.4           :       ----}! MQCONNX (rc=MQRC_NOT_AUTHORIZED)
    0000625D 15:40:20.328766   3500.4           :       ---}! ImqQueueManager::connect (rc=MQRC_NOT_AUTHORIZED)
    0000625E 15:40:20.328767   3500.4           :       --}! ImqObject::open (rc=MQRC_NOT_CONNECTED)
    0000625F 15:40:20.328768   3500.4           :       --{  ImqQueue::lock
    00006260 15:40:20.328769   3500.4           :       --}! ImqQueue::lock (rc=Unknown(1))
    00006261 15:40:20.328769   3500.4           :       --{  ImqQueue::unlock
    00006262 15:40:20.328769   3500.4           :       --}! ImqQueue::unlock (rc=Unknown(1))

    It seemed like the MQRC_NOT_CONNECTED error was being caused by a security-related issue (MQRC_NOT_AUTHORIZED). I did notice something earlier in the log where it appeared that MQ was passing a field named UID with a value equal to the account name that my BizTalk service was running under. I ended up creating a new local account on the BizTalk server that had the same name as a user which had access to the queue manager on the iSeries. I then created a new host instance that ran under this new account, created a send handler for the MQSC adapter on this new host instance, and reconfigured my orchestration to run on the new host instance. After bouncing all my host instances, I was now able to send messages to the iSeries. It's still not clear to me why we were able to connect to the Solaris server. I ended up contacting IBM's support and they did confirm that the process sending to MQ does in fact pass the identity to the queue manager it's connecting to.

    Read the article

  • 12.04 Unity 3D not working whereas Unity 2D works fine

    - by Stephen Martin
    I updated from 11.10 to 12.04 using the distribution upgrade. After that I couldn't log in using the Unity 3D desktop: after logging in I would either never get Unity's launcher, or I would get the launcher and, once I tried to do anything, the windows lost their decoration and nothing would respond. If I use Unity 2D it works fine, and in fact I'm using it to type this. I managed to get some info out of dmesg that looks like it's the root of what's happening.

    dmesg:

    [ 109.160165] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung
    [ 109.160180] [drm] capturing error event; look for more information in /debug/dri/0/i915_error_state
    [ 109.167587] [drm:i915_wait_request] *ERROR* i915_wait_request returns -11 (awaiting 1226 at 1218, next 1227)
    [ 109.672273] [drm:i915_reset] *ERROR* Failed to reset chip.

    Output of lspci | grep vga:

    00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)

    Output of /usr/lib/nux/unity_support_test -p in 12.04:

    OpenGL vendor string: VMware, Inc.
    OpenGL renderer string: Gallium 0.4 on llvmpipe (LLVM 0x300)
    OpenGL version string: 2.1 Mesa 8.0.2
    Not software rendered: no
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: no

    The same command in 11.10:

    stephenm@mcr-ubu1:~$ /usr/lib/nux/unity_support_test -p
    OpenGL vendor string: Tungsten Graphics, Inc
    OpenGL renderer string: Mesa DRI Mobile Intel® GM45 Express Chipset
    OpenGL version string: 2.1 Mesa 7.11
    Not software rendered: yes
    Not blacklisted: yes
    GLX fbconfig: yes
    GLX texture from pixmap: yes
    GL npot or rect textures: yes
    GL vertex program: yes
    GL fragment program: yes
    GL vertex buffer object: yes
    GL framebuffer object: yes
    GL version is 1.4+: yes
    Unity 3D supported: yes
    stephenm@mcr-ubu1:~$

    Output of /var/log/Xorg.0.log:

    [ 11.971] (II) intel(0): EDID vendor "LPL", prod id 307
    [ 11.971] (II) intel(0): Printing DDC gathered Modelines:
    [ 11.971] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz)
    [ 12.770] (II) intel(0): Allocated new frame buffer 2176x800 stride 8704, tiled
    [ 15.087] (II) intel(0): EDID vendor "LPL", prod id 307
    [ 15.087] (II) intel(0): Printing DDC gathered Modelines:
    [ 15.087] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz)
    [ 33.310] (II) XKB: reuse xkmfile /var/lib/xkb/server-93A39E9580D1D5B855D779F4595485C2CC66E0CF.xkm
    [ 34.900] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.900] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.900] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.900] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.913] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.913] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.913] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.913] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.926] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.926] (WW) intel(0): Page flip failed: Invalid argument
    [ 34.926] (WW) intel(0): flip queue failed: Invalid argument
    [ 34.926] (WW) intel(0): Page flip failed: Invalid argument
    [ 35.501] (WW) intel(0): flip queue failed: Invalid argument
    [ 35.501] (WW) intel(0): Page flip failed: Invalid argument
    [ 35.501] (WW) intel(0): flip queue failed: Invalid argument
    [ 35.501] (WW) intel(0): Page flip failed: Invalid argument
    [ 41.519] [mi] Increasing EQ size to 512 to prevent dropped events.
    [ 42.079] (EE) intel(0): Detected a hung GPU, disabling acceleration.
    [ 42.079] (EE) intel(0): When reporting this, please include i915_error_state from debugfs and the full dmesg.
    [ 42.598] (II) intel(0): EDID vendor "LPL", prod id 307
    [ 42.598] (II) intel(0): Printing DDC gathered Modelines:
    [ 42.598] (II) intel(0): Modeline "1280x800"x0.0 69.30 1280 1328 1360 1405 800 803 809 822 -hsync -vsync (49.3 kHz)
    [ 51.052] (II) AIGLX: Suspending AIGLX clients for VT switch

    I know I'm using the beta version so I'm not expecting it to work, but does anyone know what the problem may be, or even why the Unity compatibility test is describing my video card as VMware when it's an Intel i915 and Ubuntu is running on the metal, not virtualised? Unity 3D worked fine in 11.10.

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

    - Data source parameters: these parameters will be used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side, and only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on queried data only; as a result, only the queried data needs to be stored in memory rather than the whole dataset, which was the case with the old approach.
    - Support for stored procedures: these will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures will also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries are handled through the data source parameters and evaluated at run time. Using parameterized SQL queries will improve performance and decrease the memory footprint of your application, because they are applied directly on the database server and only the necessary data is downloaded to the middle tier or client machine.
    - Calculated fields through expressions: with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.
    - Improved performance and optimized in-memory OLAP engine: the new data source will come with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.
    - Full design-time support through wizards: declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design time.

    More features will be revealed on the product's What's New page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th, which will be dedicated to the updates in Telerik Reporting Q1 2010.

    Read the article

  • Wine 1.6.2 - Trying to switch to a 32-bit Wineprefix from 64-bit Wine (Trusty 14.04). Can anyone help me out?

    - by AlternateSteve90
    Hello fellow Ubuntu users, I'm having a little trouble with Wine 1.6.1 and I was wondering if someone could help me out. I recently downloaded some 32-bit games that I'd wanted to try (BeamNG Drive and Bugbear's Next Car Game demo) and ran into some trouble trying to get either of these games to run. So I came across a couple pieces of advice on the 'Net, one here on the Ubuntu community site and the other at BeamNG's forums, on how to create a 32-bit wineprefix on a 64-bit setup. I managed to create the wine32 folder, but now I'm having trouble making it my default Wine setup. Anybody have any idea how I can do that? I'll post the URLs for said advice, btw:

    http://www.beamng.com/threads/1788-Installing-DRIVE-under-Linux-via-Wine
    How do I create a 32-bit WINE prefix?

    Here's what I've tried so far in the Terminal:

    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot'
    wine: chdir to /home/user/wine32 : No such file or directory
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    wine: created the configuration directory '/home/steven/wine32'
    fixme:storage:create_storagefile Storage share mode not implemented.
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    fixme:storage:create_storagefile Storage share mode not implemented.
    fixme:iphlpapi:NotifyAddrChange (Handle 0x10ee890, overlapped 0x10ee89c): stub
    wine: configuration in '/home/steven/wine32' has been updated.
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=$HOME/.wine32 wine dxsetup.exe
    wine: created the configuration directory '/home/steven/.wine32'
    fixme:storage:create_storagefile Storage share mode not implemented.
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    err:mscoree:LoadLibraryShim error reading registry key for installroot
    fixme:storage:create_storagefile Storage share mode not implemented.
    fixme:iphlpapi:NotifyAddrChange (Handle 0x103e2b8, overlapped 0x103e2d0): stub
    fixme:storage:create_storagefile Storage share mode not implemented.
    fixme:iphlpapi:NotifyAddrChange (Handle 0x10fe890, overlapped 0x10fe89c): stub
    wine: configuration in '/home/steven/.wine32' has been updated.
    wine: cannot find L"C:\windows\system32\dxsetup.exe"
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win64 winecfg
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEARCH=win32 winecfg
    wine: WINEARCH set to win32 but '/home/steven/.wine' is a 64-bit installation.
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/user/wine32' WINEARCH='win32' wine 'wineboot'
    wine: chdir to /home/user/wine32 : No such file or directory
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX='/home/steven/wine32' WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH='win32' wine 'wineboot'
    steven@steven-HP-Pavilion-17-Notebook-PC:~$ WINEPREFIX=/home/steven/wine32 WINEARCH=win32 wine wineboot
    steven@steven-HP-Pavilion-17-Notebook-PC:~$

    So, yeah. TBH, though, I'm far from an expert and perhaps I've been going about it all the wrong way. In the meantime, I'll try to keep looking for solutions on my own, but if anybody can help me solve this dilemma, especially anyone who happens to own either of these two games, I'd appreciate it. :)
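
    A hedged note for anyone stuck at the same point: Wine has no persistent "default prefix" setting to switch. It simply uses $WINEPREFIX when that variable is set and falls back to ~/.wine otherwise, and WINEARCH only matters at the moment a prefix is first created. So once /home/steven/wine32 exists, the usual approach is to run each program with the variable set, e.g. WINEPREFIX=$HOME/wine32 wine setup.exe (setup.exe standing in for whatever installer or game binary is involved), or to export WINEPREFIX=$HOME/wine32 in the shell session, rather than trying to convert the 64-bit ~/.wine in place.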

    Read the article

  • Oracle Announces Oracle Exadata X3 Database In-Memory Machine

    - by jgelhaus
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point

    ORACLE OPENWORLD, SAN FRANCISCO – October 1, 2012

    News Facts

    - During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine, the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud.
    - Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing.
    - In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks.
    - With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications.

    Next-Generation Technologies Deliver Dramatic Performance Improvements

    Oracle Exadata X3 Database In-Memory Machines use a combination of scale-out servers and storage, InfiniBand networking, smart storage, PCI Flash, smart memory caching, and Hybrid Columnar Compression to deliver extreme performance and availability for all Oracle Database workloads. Oracle Exadata X3 Database In-Memory Machine systems leverage next-generation technologies to deliver significant performance enhancements, including:

    - Four times the Flash memory capacity of the previous generation, with up to 40 percent faster response times and 100 GB/second data scan rates. Combined with Exadata's unique Hybrid Columnar Compression capabilities, hundreds of terabytes of user data can now be managed entirely within Flash.
    - 20 times more capacity for database writes through updated Exadata Smart Flash Cache software. The new Exadata Smart Flash Cache software also runs on previous-generation Exadata systems, increasing their capacity for writes tenfold.
    - 33 percent more database CPU cores in the Oracle Exadata X3-2 Database In-Memory Machine, using the latest 8-core Intel® Xeon E5-2600 series of processors.
    - Expanded 10Gb Ethernet connectivity to the data center in the Oracle Exadata X3-2, providing 40 10Gb network ports per rack for connecting users and moving data.
    - Up to 30 percent reduction in power and cooling.

    Configured for Your Business, Available Today

    Oracle Exadata X3-2 Database In-Memory Machine systems are available in a Full-Rack, Half-Rack, Quarter-Rack, and the new low-cost Eighth-Rack configuration to satisfy the widest range of applications. Oracle Exadata X3-8 Database In-Memory Machine systems are available in a Full-Rack configuration, and both X3 systems enable multi-rack configurations for virtually unlimited scalability. Oracle Exadata X3-2 and X3-8 Database In-Memory Machines are fully compatible with prior Exadata generations, and existing systems can also be upgraded with Oracle Exadata X3-2 servers. Oracle Exadata X3 Database In-Memory Machine systems can be used immediately with any application certified with Oracle Database 11g R2 and Oracle Real Application Clusters, including SAP, Oracle Fusion Applications, Oracle's PeopleSoft, Oracle's Siebel CRM, the Oracle E-Business Suite, and thousands of other applications.

    Supporting Quotes

    "Forward-looking enterprises are moving towards Cloud Computing architectures," said Andrew Mendelsohn, senior vice president, Oracle Database Server Technologies. "Oracle Exadata's unique ability to run any database application on a fully scale-out architecture using a combination of massive memory for extreme performance and low-cost disk for high capacity delivers the ideal solution for Cloud-based database deployments today."

    Supporting Resources

    - Oracle Press Release
    - Oracle Exadata Database Machine
    - Oracle Exadata X3-2 Database In-Memory Machine
    - Oracle Exadata X3-8 Database In-Memory Machine
    - Oracle Database 11g
    - Follow Oracle Database via Blog, Facebook and Twitter
    - Oracle OpenWorld 2012
    - Oracle OpenWorld 2012 Keynotes
    - Like Oracle OpenWorld on Facebook
    - Follow Oracle OpenWorld on Twitter
    - Oracle OpenWorld Blog
    - Oracle OpenWorld on LinkedIn
    - Mark Hurd's keynote with Andy Mendelsohn and Juan Loaiza: watch for the replay to be available soon at http://www.youtube.com/user/Oracle or http://www.oracle.com/openworld/live/on-demand/index.html

    Read the article

  • JavaScript Data Binding Frameworks

    - by dwahlin
    Data binding is where it's at nowadays when it comes to building client-centric Web applications. Developers experienced with desktop frameworks like WPF, or web frameworks like ASP.NET, Silverlight, or others, are used to being able to take model objects containing data and bind them to UI controls quickly and easily. When moving to client-side Web development the data binding story hasn't been great, since neither HTML nor JavaScript natively supports data binding. This means that you have to write code to place data in a control and write code to extract it. Although it's certainly feasible to do it from scratch (many of us have done it this way for years), it's definitely tedious and not exactly the best solution when it comes to maintenance and re-use. Over the last few years several different script libraries have been released to simplify the process of binding data to HTML controls. In fact, the subject of data binding is becoming so popular that it seems like a new script library is being released nearly every week. Many of the libraries provide MVC/MVVM pattern support in client-side JavaScript apps, and some even integrate directly with server frameworks like Node.js. Here's a quick list of a few of the available libraries that support data binding (if you like any others please add a comment and I'll try to keep the list updated):

    - AngularJS: MVC framework for data binding (although it closely follows the MVVM pattern).
    - Backbone.js: MVC framework with support for models, key/value binding, custom events, and more.
    - Derby: Provides a real-time environment that runs in the browser and in Node.js. The library supports data binding and templates.
    - Ember: Provides support for templates that automatically update as data changes.
    - JsViews: Data binding framework that provides "interactive data-driven views built on top of JsRender templates".
    - jQXB Expression Binder: Lightweight jQuery plugin that supports bi-directional data binding.
    - KnockoutJS: MVVM framework with robust support for data binding. For an excellent look at using KnockoutJS check out John Papa's course on Pluralsight.
    - Meteor: End-to-end framework that uses Node.js on the server and provides support for data binding on the client.
    - Simpli5: JavaScript framework that provides support for two-way data binding.
    - WinRT with HTML5/JavaScript: If you're building Windows 8 applications using HTML5 and JavaScript, there's built-in support for data binding in the WinJS library.

    I won't have time to write about each of these frameworks, but in the next post I'm going to talk about my (current) favorite when it comes to client-side JavaScript data binding libraries: AngularJS. AngularJS provides an extremely clean way, in my opinion, to extend HTML syntax to support data binding while keeping model objects (the objects that hold the data) free from custom framework method calls or other weirdness. While I'm writing up the next post, feel free to visit the AngularJS developer guide if you'd like additional details about the API and want to get started using it.
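    To give a quick taste ahead of that post, here is a minimal sketch of the declarative binding style AngularJS encourages. This is my own illustration rather than code from the developer guide, and it assumes angular.min.js sits next to the page:

        <!DOCTYPE html>
        <html ng-app>
          <head>
            <script src="angular.min.js"></script>
          </head>
          <body>
            <!-- ng-model binds the input to a "name" model property; the
                 {{name}} expression below re-renders as the user types,
                 with no manual DOM code in between -->
            <input type="text" ng-model="name" placeholder="Enter a name" />
            <p>Hello, {{name}}!</p>
          </body>
        </html>

    The model here is just a plain scope property; nothing framework-specific leaks into it, which is exactly the quality I like about AngularJS.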

    Read the article

  • Required Skill Sets Of A Software Architect

    The question has been asked: what are the required skill sets of a software architect? The answer is that it truly depends. It depends on the organization, the industry, and the skill sets available on the open market and internally within a company. With open-ended skill sets, even Napoleon Dynamite could be an architect.

    Napoleon Dynamite's Skills

    Pedro: Have you asked anybody yet?
    Napoleon Dynamite: No, but who would? I don't even have any good skills.
    Pedro: What do you mean?
    Napoleon Dynamite: You know, like nunchuck skills, bow hunting skills, computer hacking skills... Girls only want boyfriends who have great skills.
    Pedro: Aren't you pretty good at drawing, like animals and warriors and stuff?

    This example might be a little off base, but it does illustrate a point. What are the real required skills of a software architect? In my opinion, an architect needs to demonstrate knowledge of the following three main skill set categories in order to be successful.

    General Skill Sets of an Architect

    - Basic Engineering Skills
    - Organizational Skills
    - Interpersonal Skills

    Basic engineering skills are a very large part of what a software architect deals with on a daily basis when designing or updating systems. Think about it: how good would a lead mechanic be if they did not know how to fix or repair cars? They would not be, and that is my point: architects need to have at least some basic engineering skills. The skills listed below are generic in nature because they change from job to job, so in this discussion I am trying to focus on generalities so that anyone can apply this information to their individual situation.

    Common Basic Engineering Skills

    - Data Modeling
    - Code Creation
    - Configuration
    - Testing
    - Deployment/Publishing
    - System and Environment Knowledge

    Organizational Skills

    If an architect works for or with an organization, then they will need strong organizational skills to survive. An architect is of no use to a project if the project is mismanaged. Additionally, budgets and timelines can really affect a company and its products when established deadlines are repeatedly not met. By missing these timelines a company is forced either to cancel the project and waste all the money and time spent, or to keep spending money until it is completed, if it is ever completed.

    Common Organizational Skills

    - Project Management
    - Estimation (Cost and Time)
    - Creation and Maintenance of Accepted Standards

    Interpersonal Skills

    For me personally, interpersonal skills rank above the other two skill sets, because an architect can quickly pick up the other two by communicating with other team/project members, getting up to speed on a project quickly. Additionally, in order for an architect to manage a project or even derive rough estimates, they will more than likely have to consult with the people actually working on the code (programmers/software engineers) to get their estimates, since they will be the ones actually implementing the changes.

    Common Interpersonal Skills

    - Good communicator
    - Focus on the project's success over personal success
    - Honors roles within a team

    Reference: Taylor, R. N., Medvidovic, N., & Dashofy, E. M. (2009). Software Architecture: Foundations, Theory, and Practice. Hoboken, NJ: John Wiley & Sons.

    Read the article

  • Raspberry Pi entrance signed backed by Umbraco - Part 1

    - by Chris Houston
    Being experts on all things Umbraco, we jumped at the chance to help our client, QV Offices, with their pressing signage predicament. They needed to display a sign in the entrance to their building and approached us for our advice. Of course it had to be electronic: displaying multiple names of their serviced office clients, meeting room bookings and on-the-pulse promotions. But with a winding Victorian staircase and minimal storage space, how could the monitor be run, updated and managed? That's where we came in. The ingredients:

    - Raspberry Pi
    - Umbraco CMS
    - Automatic updates
    - Automated monitoring of the sign
    - Power saving when the screen is not in use

    Mounting the screen

    The screen we used is a standard LED low-energy Full HD screen, mounted on the wall using its VESA mounting points. As the wall is a stud wall, we were able to add an access panel behind the screen to feed through the mains, HDMI and sensor cables. The Raspberry Pi is tucked away out of sight in the main electrical cupboard, which just happens to be next to the sign; we had an electrician add a power point inside this cupboard to allow us to power the screen and the Raspberry Pi.

    Designing the interface and editing the content

    Although a room sign was the initial requirement from QV Offices, their medium-term goal has always been to add online meeting booking to their website, so we suggested adding information about the current and next day's meetings to the sign, pulled directly from their online booking system. We produced the design and built the web page to fit exactly on a 1920 x 1080 screen (Full HD in portrait). As you would expect, all the information can be edited via an Umbraco CMS: they are able to add floors, rooms, clients and virtual clients, as well as add meeting bookings to their meeting diary.

    How we configured the Raspberry Pi

    After receiving a new Raspberry Pi we downloaded the latest release of the Raspbian operating system and followed the official guide, which shows how to copy the OS onto an SD card from a Mac. We then followed the majority of the steps in this useful guide: 10 Things to Do After Buying a Raspberry Pi.

    Installing Chromium

    We chose to use the Chromium web browser, which, for those who do not know, is the open-sourced version of Google Chrome. You can install it from the terminal with the following command:

        sudo apt-get install chromium-browser

    Installing Unclutter

    We found this little application, which automatically hides the mouse pointer. It is used in the script below and is installed using the following command:

        sudo apt-get install unclutter

    Auto-starting Chromium and disabling the screen saver, power saving and mouse

    Once installed, the Raspberry Pi has no keyboard or mouse, so if there were a power cut we needed it to always boot and reload Chromium with the correct URL. Our preferred command line text editor is Nano; I have assumed you know how to use this editor or will be able to work it out pretty quickly. Using the following command:

        sudo nano /etc/xdg/lxsession/LXDE/autostart

    we changed the autostart file content to:

        @lxpanel --profile LXDE
        @pcmanfm --desktop --profile LXDE
        @xscreensaver -no-splash
        @xset s off
        @xset -dpms
        @xset s noblank
        @chromium --kiosk --incognito http://www.qvoffices.com/someURL
        @unclutter -idle 0

    The first few commands turn off the screen saver and power saving; we then open Chromium in kiosk mode (full screen with no menus etc.) and pass in the URL to use (I have changed the URL in this example). Finally we also open an application called Unclutter, which auto-hides the mouse after 0 seconds, so you will never see a mouse on the sign. We found a useful blog post with the Chromium command line switches. We also had to edit the following file:

        sudo nano /etc/lightdm/lightdm.conf

    and add the following line under the [SeatDefault] section:

        xserver-command=X -s 0 -dpms

    Refreshing the screen

    We decided to add a scheduled task that triggers Chromium to reload the page; at some point in the future we might change this to using JavaScript to update the content, but for now this works fine. First we installed xdotool, which enables you to script keyboard commands:

        sudo apt-get install xdotool

    We used the "Refreshing Chromium Browser by Shell Script" post as a reference and created the following shell script (which we called refreshing.sh):

        export DISPLAY=":0"
        WID=$(xdotool search --onlyvisible --class chromium | head -1)
        xdotool windowactivate ${WID}
        xdotool key ctrl+F5

    This selects the correct display and then sends a Ctrl+F5 to refresh Chromium. You will need to give this file execute permissions:

        chmod a=rwx refreshing.sh

    Now that we have the script set up, we just need to schedule it to run periodically, which is done using crontab. To edit it, use the following command:

        crontab -e

    and add the following:

        */5 * * * * DISPLAY=":0" /home/pi/scripts/refreshing.sh >/home/pi/cronlog.log 2>&1

    This calls our script every 5 minutes to refresh the display, and it logs any errors to the cronlog.log file.

    Summary

    QV Offices now have a richer and more manageable booking system than they did before we started, and a great new sign to boot. How could we make sure that the sign was running smoothly downstairs in a busy office centre? A second post will follow outlining exactly how Vizioz enabled QV Offices to monitor their sign simply and remotely, from the comfort of their desks.

    Read the article

  • Oracle WebCenter Portal: Pagelet Producer – What’s New in 11.1.1.6.0 Release

    - by kellsey.ruppel
    Igor Plyakov, Sr. Principal Product Marketing Manager, is back to share what's new in Oracle WebCenter Portal: Pagelet Producer. In February 2012 Oracle released 11g Release 1 (11.1.1.6.0) for WebCenter Portal. The Pagelet Producer (aka Ensemble) that came out with this release added support for several new capabilities, which are described in this post.

    As of the 11.1.1.5.0 release, the Pagelet Producer can expose WSRP and JPDK portlets as pagelets that can then be consumed in any portal, or in any third-party application that does not have a WSRP consumer. The Pagelet Producer team is now working on simplifying the use of pagelets in WebCenter Sites. To expose WSRP portlets, a new producer should be registered with the Pagelet Producer, which can be done using Enterprise Manager, WLST or the Pagelet Producer Administration Console (for details see Section 25.9 of the Administrator's Guide for Oracle WebCenter Portal). If the producer requires authentication, the Pagelet Producer allows you to select and use one of the standard WSS token profiles. After registration is finished, a new resource is created and automatically populated with pagelets that represent the portlets associated with the WSRP endpoint. For the 11.1.1.6.0 release we completed extensive testing of consuming all WebCenter Services that are exposed as WSRP portlets by the E2.0 Producer and delivering them as pagelets to the WebCenter Interaction portal.

    In the Pagelet Producer 11.1.1.6.0 release we added an OpenSocial container that allows gadgets from other OpenSocial containers, e.g. iGoogle, to be consumed and exposed as pagelets. You can also use the Pagelet Producer to host OpenSocial gadgets that leverage the OpenSocial APIs it supports: People, Activities, Appdata and Pub-Sub. Note that People and Activities expose the People Connections and Activity Stream from WebCenter Portal, i.e. to use these features the Pagelet Producer requires a connection to the WebCenter Portal schema. Pub-Sub allows leveraging the OpenAJAX Hub API for inter-gadget communication.

    In addition to these major additions, the Pagelet Producer 11.1.1.6.0 release also extends several functional modules:

    - The Clipping module now supports clipping multiple regions on a web resource page and re-assembling the separately clipped regions into a single pagelet.
    - The auto-login feature can now be applied to web resources protected with Kerberos authentication; you will find this new functionality handy for consuming SharePoint web parts.
    - The logging module now supports capturing the full HTTP traffic between the Pagelet Producer and the proxied web resource.

    Finally, like the rest of the WebCenter Portal stack, the Pagelet Producer 11.1.1.6.0 can run on IBM WebSphere Application Server.

    Read the article

  • Effectively implementing a game view using java

    - by kdavis8
    I am writing a 2D game in Java. The game mechanics are similar to the Pokémon Game Boy Advance series, e.g. FireRed, Ruby, Diamond and so on. I need a way to draw a huge map, maybe 5000 by 5000 pixels, and then load individual in-game sprites across the entirety of the map, like rendering a scene. Game sprites would be things like terrain objects (trees, rocks, bushes), and also houses, castles, NPCs and so on. I also need to implement some kind of camera view class that focuses on the player. The camera view class needs to follow the character's movements throughout the game map, but it also needs to clip the rest of the map away from the user's field of view, so that the user can only see the proximity adjacent to the player's sprite. The proximity's range could be something like 500 pixels in every direction around the player's sprite. On top of this, I need to implement an independent resolution for the game world so that the game view will be uniform across all screen sizes and screen resolutions. I know this sounds like a handful and may fall under the category of multiple questions, but the questions are all related, and any advice would be very much appreciated. I don't need a full source code listing, but maybe some pointers to effective Java API classes that could make what I need to do a lot simpler. Any algorithmic/design advice would greatly benefit me as well. Here is an example of what I am trying to do, in source code form:

        package myPackage;

        /**
         * The purpose of GameView is to: render a scene using the Scene class,
         * create a clipping pane using the CameraView class, and finally
         * instantiate a coordinate grid using the Path class.
         *
         * Once all of these things have been done, GameView should be
         * instantiated and used jointly with its helper classes. CameraView
         * should be used as the main drawing image; it is the window to the
         * game world. Scene passes data constantly to CameraView so that the
         * entire map flows smoothly. Path uses the x and y coordinates from
         * CameraView to construct cells for pathfinding algorithms.
         */
        public class GameView {

            // Scene is a helper class to GameView. It renders the entire map
            // to memory for the camera view.
            Scene scene;

            // CameraView is a helper class to GameView. It clips the Scene
            // into a small image that follows the player's coordinates.
            CameraView camera;

            // Path is a helper class to GameView. It observes the coordinates
            // of the camera view and divides them into grids/cells for
            // pathfinding.
            Path path;

            // Represents the player; has a getSprite() method that returns
            // the current frame (column/row combination) of the passed
            // sprite sheet.
            Sprite player;
        }
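    As a starting point, here is a minimal sketch (my own illustration, not code from the question) of the clipping math such a CameraView could use: centre the view on the player, clamp it to the map edges, and draw only the visible sub-image. Class names and sizes are hypothetical, and it assumes the map image is at least as large as the view:

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;

        public class CameraView {
            // 500 px of visible proximity in every direction around the player
            private static final int VIEW_W = 1000;
            private static final int VIEW_H = 1000;

            private final BufferedImage world; // the pre-rendered map, e.g. 5000 x 5000

            public CameraView(BufferedImage world) {
                this.world = world;
            }

            /** Draws the portion of the world centred on the player's coordinates. */
            public void render(Graphics2D g, int playerX, int playerY) {
                // centre on the player, then clamp so the view never leaves the map
                int x = Math.max(0, Math.min(playerX - VIEW_W / 2, world.getWidth() - VIEW_W));
                int y = Math.max(0, Math.min(playerY - VIEW_H / 2, world.getHeight() - VIEW_H));

                // getSubimage shares the backing raster, so this clip copies no pixels
                g.drawImage(world.getSubimage(x, y, VIEW_W, VIEW_H), 0, 0, null);
            }
        }

    Rendering this fixed-size view to an off-screen BufferedImage and then scaling that single image to the actual window size is also one common route to the resolution independence asked about.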

    Read the article

  • NHibernate 2 Beginner's Guide Review

    - by Ricardo Peres
    OK, here's the review I promised a while ago. This is a beginner's introduction to NHibernate, so if you already have some experience with NHibernate, you will notice it lacks a lot of concepts and information. It starts with a good description of NHibernate and why we would use it. It goes on to describe basic mapping scenarios with primary keys generated by the HiLo or Identity algorithms, without actually explaining why we would choose one over the other. As for mapping, the book talks about XML mappings and provides a simple example of Fluent NHibernate, comparing it to its XML counterpart. When it comes to relations, it covers one-to-many/many-to-one and many-to-many, but not one-to-one relations, and only talks briefly about lazy loading, which is, IMO, an important concept. Only Bags are described, not any of the other collection types. The log4net configuration description gets its own chapter, which I find excessive. The chapter on configuration merely lists the most common properties for configuring NHibernate, both in XML and in code. Querying only talks about loading by ID (using Get, not Load) and using the Criteria API, on which a paging example is presented as well as some common filtering options (property equals/like/between; no examples of conjunction/disjunction, however). There's a chapter fully dedicated to ASP.NET, which explains how we can use NHibernate in web applications; it basically talks about ASP.NET concepts, though. Following it, another chapter explains how we can build our own ASP.NET providers (Membership, Role) using NHibernate. The available entity generators for NHibernate are referred to and evaluated in a chapter of their own. The list is fine (CodeSmith, nhib-gen, AjGenesis, Visual NHibernate, MyGeneration, NGen, NHModeler, Microsoft T4 (?) and hbm2net) and examples are provided whenever possible; however, I have some problems with some of the evaluations: for example, Visual NHibernate scores 5 out of 5 on Visual Studio integration, which simply does not exist! I suspect the author means to say that it can be launched from inside Visual Studio, but then, what can't? Finally, there's a chapter I really don't understand. It seems like a bag into which a lot of things are thrown, like NHibernate Burrow (which actually isn't explained at all), Blog.Net components, CSS template conversion and web.config settings related to the maximum request length for file uploads, ending with XML configuration with the help of GhostDoc. Like I said, the book is only good for absolute beginners; it does a fair job of explaining the very basics, but lacks a lot of not-so-basic concepts. Among other things, it lacks:

    - Inheritance mapping strategies (table per class hierarchy, table per class, table per concrete class)
    - Load versus Get usage
    - Other useful ISession methods
    - First level cache (Identity Map pattern)
    - Collection types other than Bag (Set, List, Map, IdBag, etc.)
    - Fetch options
    - User Types
    - Filters
    - Named queries
    - LINQ examples
    - HQL examples

    And that's it! I hope you find this review useful. The link to the book site is https://www.packtpub.com/nhibernate-2-x-beginners-guide/book
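    Since Load versus Get usage is one of the gaps listed above, here is a quick illustration of the difference. This is my own sketch, not an example from the book, and it assumes an open ISession and a mapped Product entity:

        using NHibernate;

        public class Product
        {
            public virtual int Id { get; set; }
        }

        public static class GetVsLoadDemo
        {
            public static void Run(ISession session)
            {
                // Get queries the database (or the first-level cache)
                // immediately and returns null when no row with that id exists.
                Product eager = session.Get<Product>(42);

                // Load returns a lazy proxy without touching the database; a
                // missing row only surfaces later, as an ObjectNotFoundException,
                // when the proxy is first initialised.
                Product proxy = session.Load<Product>(42);
            }
        }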

    Read the article

  • Oracle UCM GET_SEARCH_RESULTS service with full text search

    - by Lyudmil Pelov
    Recently I was working on a portlet which should be able to do full text search through the UCM documents, and I was experimenting with RIDC and also with the CIS APIs. There are some tricks you need to take care of; for example, using quotes is a very special case, and in most situations UCM will throw an exception if you do not use them well. During my tests I was able to develop one solution which works very well for me doing full text search, and here it is:

        final IdcClientManager idcManager = new IdcClientManager();
        final IdcClient idcClient = idcManager.createClient("idc://127.0.0.1:4444");
        final IdcContext idcContext = new IdcContext("sysadmin");
        final DataBinder binder = idcClient.createBinder();

        // populate the binder with the parameters
        binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
        binder.putLocal("QueryText", "dDocFullText <substring> <qsch>" + yourSearchWordOrWords + "</qsch>");
        binder.putLocal("SearchEngineName", "databasefulltext");
        binder.putLocal("ResultCount", "20");

        // execute the request
        ServiceResponse response = idcClient.sendRequest(idcContext, binder);

        // get the binder
        DataBinder serverBinder = response.getResponseAsBinder();
        DataResultSet resultSet = serverBinder.getResultSet("SearchResults");

        // loop over the results
        for (DataObject dataObject : resultSet.getRows()) {
            System.out.println("Title is: " + dataObject.get("dDocTitle"));
            System.out.println("Author is: " + dataObject.get("dDocAuthor"));
        }

    Nothing special so far, except the line which declares the full text search. To be able to proceed with the full text search you have to use the dDocFullText attribute inside the search query. The tag <substring> is the same as 'like'. You also have to put your search string or words in quotes, which can be a problem sometimes, so I used the tag <qsch>. Using this tag you can now have quotes inside your search string without breaking the code and getting parsing exceptions. To be able to test the example, you have to enable full text search inside UCM. To do this, follow the steps from the blog linked here, for example, and then re-index the documents in UCM. There is also one very nice article about how to define UCM queries if you want to replace the full text search with something more specific; you can read this article on Kyle's blog here.

    Read the article

  • 'Make' command compiling errors

    - by G_T
    I'm trying to locally install a program which is written in C++. I have downloaded the program and am attempting to use the "make" command to compile it, as the program's instructions dictate. However when I do, I get this error:

        /usr/include/stdc-predef.h:30:26: fatal error: bits/predefs.h: No such file or directory
        compilation terminated.

    Looking around on the internet, some people seem to address this problem with

        sudo apt-get install libc6-dev-i386

    I checked to see if this package was installed, and it was not. When I try to install it I get

        E: Unable to locate package libc6-dev-i386

    I have already run sudo apt-get update. I'm sure this is a rookie question but any help is appreciated; I'm running 13.10 32-bit.

    UPDATE: I've tried other suggestions I've found on similar errors. All I have managed is a different but similar error. Here is what I get:

        Geoffrey@Geoffrey-Latitude-E6400:/usr/local/src/trinityrnaseq_r2013_08_14$ make
        Using gnu compiler for Inchworm and Chrysalis
        cd Inchworm && (test -e configure || autoreconf) \
        && ./configure --prefix=`pwd` && make install
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... mawk
        checking whether make sets $(MAKE)... yes
        checking for g++... g++
        checking for C++ compiler default output file name... a.out
        checking whether the C++ compiler works... yes
        checking whether we are cross compiling... no
        checking for suffix of executables...
        checking for suffix of object files... o
        checking whether we are using the GNU C++ compiler... yes
        checking whether g++ accepts -g... yes
        checking for style of include used by make... GNU
        checking dependency style of g++... gcc3
        checking for library containing cos... none required
        configure: creating ./config.status
        config.status: creating Makefile
        config.status: creating src/Makefile
        config.status: creating config.h
        config.status: config.h is unchanged
        config.status: executing depfiles commands
        make[1]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm'
        Making install in src
        make[2]: Entering directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src'
        if g++ -DHAVE_CONFIG_H -I. -I. -I.. -pedantic -fopenmp -Wall -Wextra -Wno-long-long -Wno-deprecated -m64 -g -O2 -MT Fasta_entry.o -MD -MP -MF ".deps/Fasta_entry.Tpo" -c -o Fasta_entry.o Fasta_entry.cpp; \
        then mv -f ".deps/Fasta_entry.Tpo" ".deps/Fasta_entry.Po"; else rm -f ".deps/Fasta_entry.Tpo"; exit 1; fi
        In file included from Fasta_entry.hpp:4:0,
                         from Fasta_entry.cpp:1:
        /usr/include/c++/4.8/string:38:28: fatal error: bits/c++config.h: No such file or directory
         #include <bits/c++config.h>
                                    ^
        compilation terminated.
        make[2]: *** [Fasta_entry.o] Error 1
        make[2]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm/src'
        make[1]: *** [install-recursive] Error 1
        make[1]: Leaving directory `/usr/local/src/trinityrnaseq_r2013_08_14/Inchworm'
        make: *** [inchworm] Error 2

    Read the article

  • BizTalk Cross Reference Data Management Strategy

    - by charlie.mott
    Article Source: http://geekswithblogs.net/charliemott

    This article describes an approach to the management of cross reference data for BizTalk. Some articles about the BizTalk Cross Referencing features can be found here:

    - http://home.comcast.net/~sdwoodgate/xrefseed.zip
    - http://geekswithblogs.net/michaelstephenson/archive/2006/12/24/101995.aspx
    - http://geekswithblogs.net/charliemott/archive/2009/04/20/value-vs.id-cross-referencing-in-biztalk.aspx

    Options

    Current options for managing this data include:

    - Maintaining xml files in the format that can be used by the out-of-the-box BTSXRefImport.exe utility.
    - Use of user interfaces that have been developed to manage this data: the BizTalk Cross Referencing Tool and the XRef XML Creation Tool.

    However, there are the following issues with the above options:

    - The BizTalk Cross Referencing Tool requires a separate database to manage. The XRef XML Creation tool has no means of persisting the data settings.
    - The BizTalk Cross Referencing Tool generates integers in the common id field. I prefer to use a string (e.g. acme.country.uk), which is more readable (see naming conventions below).
    - Both UI tools continue to use BTSXRefImport.exe. This utility replaces all xref data, which can be a problem in continuous integration environments that support multiple clients or BizTalk target instances. If you upload the data for one client, it would destroy the data for another client. Yet in TFS, where builds run concurrently, this would break unit tests.

    Alternative Approach

    In response to these issues, I instead use simple SQL scripts to directly populate the BizTalkMgmtDb xref tables, combined with a data namespacing strategy to isolate client data.

    Naming Conventions

    All data keys use namespace prefixing. The pattern will be <companyName>.<dataType>. The naming convention is to use lower casing for all items. The data must follow this pattern to isolate it from other company cross-reference data. The table below shows some sample data. (Note: this data uses the 'ID' cross-reference tables; the same principles apply for the 'value' cross-referencing tables.)

        Table.Field                  | Description                                                              | Sample Data
        -----------------------------|--------------------------------------------------------------------------|------------------------------------------------------------
        xref_AppType.appType         | Application types                                                        | acme.erp, acme.portal, acme.assetmanagement
        xref_AppInstance.appInstance | Application instances (each will have a corresponding application type)  | acme.dynamics.ax, acme.dynamics.crm, acme.sharepoint, acme.maximo
        xref_IDXRef.idXRef           | Holds the cross reference data types                                     | acme.taxcode, acme.country
        xref_IDXRefData.CommonID     | Holds each cross reference type value used by the canonical schemas      | acme.vatcode.exmpt, acme.vatcode.std, acme.country.usa, acme.country.uk
        xref_IDXRefData.AppID        | Holds the value for each application instance and each xref type         | GBP, USD

    SQL Scripts

    The data to be stored in the BizTalkMgmtDb xref tables will be managed by SQL scripts stored in a database project in the Visual Studio solution.

        File(s)                                   | Description
        ------------------------------------------|------------------------------------------------------------
        Build.cmd                                 | A sqlcmd script to deploy data by running the SQL scripts below. (This can be run as part of the MSBuild process.)
        acme.purgexref.sql                        | SQL script to clear acme.* data from the xref tables. As such, this will not impact data for any other company.
        acme.applicationInstances.sql             | SQL script to insert application type and application instance data.
        acme.vatcode.sql, acme.country.sql, etc.  | There will be a separate SQL script to insert each cross-reference data type and the application-specific values for these types.
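    To illustrate the pattern, here is a minimal sketch of what acme.purgexref.sql might contain. The article names the file but does not show its body, so the exact statements (and the child-first delete order, in case foreign keys link the tables) are assumptions:

        -- Remove only acme.* rows, leaving cross-reference data that
        -- belongs to any other company in BizTalkMgmtDb untouched.
        USE BizTalkMgmtDb;

        DELETE FROM xref_IDXRefData  WHERE CommonID    LIKE 'acme.%';
        DELETE FROM xref_IDXRef      WHERE idXRef      LIKE 'acme.%';
        DELETE FROM xref_AppInstance WHERE appInstance LIKE 'acme.%';
        DELETE FROM xref_AppType     WHERE appType     LIKE 'acme.%';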

    Read the article
