  • rails error on create action

    - by ash34
    SQL (2.0ms) SELECT task_report_requests_seq.NEXTVAL id FROM dual
    TaskReportRequest Create (2.2ms) INSERT INTO task_report_requests (location, created_at, updated_at, id, freq, login, task_dt) VALUES('020', TO_DATE('2010-05-25 05:02:38','YYYY-MM-DD HH24:MI:SS'), TO_DATE('2010-05-25 05:02:38','YYYY-MM-DD HH24:MI:SS'), 10023, 'M', NULL, TO_DATE('2010-05-30 00:00:00','YYYY-MM-DD HH24:MI:SS'))
    NoMethodError (You have a nil object when you didn't expect it! The error occurred while evaluating nil.call):
    app/controllers/task_report_requests_controller.rb:45:in `create'

    It says there was an error evaluating nil.call. Can someone tell me when I would get such an error? I am not able to figure it out from this information. Thanks, ash

  • Visual Studio and .NET programming

    - by Vit
    Hi, I just want to ask whether I am right or not about .NET. So, .NET is a new framework that enables you to easily use new and old Windows functions. It is similar to Java in the way that it is also compiled into "bytecode", but its name is Common Language Infrastructure, or CLI. This language is interpreted by the .NET Framework, so code generated by programming with .NET cannot be executed directly by the CPU. Now, a few languages can be compiled to CLI. First it was the Microsoft-developed C#, then J#, C++ and others. I suspect that this is in general right; at least I hope I understand it correctly. But what I am still missing is: can you compile C# to machine code? And, if using Visual Studio 2005, when I select a Win32 project, it is compiled into machine code, so the only things you need to run these apps are the Windows dynamic-link libraries, since static library code is embedded into the app during the linking phase. And those dynamic-link libraries come with every Windows installation, or are provided by DirectX installations. But when I select CLR in Visual Studio 2005, the app is compiled into CLI code; it first starts the .NET Framework, and then the .NET Framework executes the program, since it's not in machine code. So, am I right? I ask because you can read this information on the internet, but I have no one to tell me whether I understand it correctly or not. Thanks.

  • svn+apache per directory access control: weird permissions issue (403 Forbidden error)

    - by gveda
    Hi, I had a perfectly working svn+apache install where I was using per directory access control to restrict access to various parts of the repository. In particular, no one had access to the top level in the repository [/]. People had access to folders like [/www] etc. I was specifying these permissions in a file (svn-access-file). I had to move to a new machine. So I installed subversion-1.6.3 and httpd-2.2.11 on it, and modified the conf file to mimic the conf file on the old machine (and I copied the svn-access-file and the svn-auth-file). Then I took an svn dump and did a load to put stuff back in the new repository. Now I can check stuff out, modify stuff, and commit. However, as soon as I try to do an 'svn up' on an already checked out copy of some sub-folder [/www/people], I get the following error: svn: Server sent unexpected return value (403 Forbidden) in response to OPTIONS request for 'https://[servername]/svn' It seems the problem is that it is trying to access the top level directory [/] even though really it should only be trying to access [/www]. If I temporarily give the user access to [/], it works. Can someone please tell me how to fix this? Everything worked on the old machine. Thanks! Gaurav

  • HTTP Negotiate windows vs. Unix server implementation using python-kerberos

    - by ondra
    I tried to implement simple single sign-on in my Python web server. I have used the python-kerberos package, which works nicely. I have tested it from my Linux box (authenticating against Active Directory) and it worked without problems. However, when I tried to authenticate using Firefox from a Windows machine (no special setup, just having the user logged into the domain, plus my server added to negotiate-auth.trusted-uris), it doesn't work. I have looked at what is sent, and it doesn't even resemble what the Linux machine sends. The Microsoft description of the process pretty much matches the way my interaction from Linux works, but the Windows machine generally sends a very short string that doesn't resemble what the Microsoft documentation states; when base64-decoded, it is something like 12 zero bytes followed by 3 or 4 non-zero bytes (the GSS functions then return that they don't support such a scheme). Either there is something wrong with the client Firefox settings, or there is some protocol I am supposed to follow for Negotiate but for which I cannot find any reference anywhere. Any ideas what's wrong? Do you have any idea what protocol I should be trying to find? It doesn't look like SPNEGO, at least from the MS documentation.

  • Apache redirection problem

    - by vikas
    Hi guys, I am setting up a pre-built website written in PHP. The site was originally hosted on a Linux server. Now I am trying to set it up on a Windows machine with WAMP server. In this website almost every page request passes through a particular file called redirect (which is basically a PHP file without an extension). The problem is that when I inspected the configuration (httpd.conf, apache.conf, .htaccess, vhost.conf etc.) of the Apache server on the Linux machine, I nowhere found the redirect rules for doing so. Neither mod_rewrite nor mod_alias rules for this redirection were found there, but it still redirects requests properly. I also noticed that the Zend Framework library is in the exact same directory as the redirect file. This library is included in the include_path in php.ini. However, the website is not built on the Zend MVC and I have seen NO proof of Zend being used there. So I am really confused how this redirection works. I am unable to set this up on the Windows machine without rewrite rules for mod_rewrite or mod_alias. Do you guys know any alternative to the two said modules for redirection? I know the site is really weird, but I have to set it up. :) Thanks in advance for your help.

  • What is the procedure for debugging a production-only error?

    - by Lord Torgamus
    Let me say upfront that I'm so ignorant on this topic that I don't even know whether this question has objective answers or not. If it ends up being "not," I'll delete or vote to close the post. Here's the scenario: I just wrote a little web service. It works on my machine. It works on my team lead's machine. It works, as far as I can tell, on every machine except for the production server. The exception that the production server spits out upon failure originates from a third-party JAR file, and is skimpy on information. I search the web for hours, but don't come up with anything useful. So what's the procedure for tracking down an issue that occurs only on production machines? Is there a standard methodology, or perhaps a category/family of tools, for this? The error that inspired this question has already been fixed, but that was due more to good fortune than a solid approach to debugging. I'm asking this question for future reference. Some related questions:

    Test accounts and products in a production system
    Running test on Production Code/Server
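
    Not from the original thread, but one concrete habit that helps regardless of stack: wrap the suspect third-party call so the production box records the complete exception chain plus the environment it ran under, since production-only failures usually hinge on exactly those details. The idea, sketched here in C# with hypothetical names (the service in the question happens to be Java, but the pattern translates directly):

    using System;
    using System.IO;

    static class ProductionDiagnostics
    {
        // Hypothetical helper: run a suspect call and persist everything a
        // production-only failure tends to hide - the full exception chain
        // and the environment it ran under.
        public static T Run<T>(string label, Func<T> suspectCall)
        {
            try
            {
                return suspectCall();
            }
            catch (Exception ex)
            {
                File.AppendAllText("diagnostics.log", string.Format(
                    "{0:u} {1}\nMachine={2} 64bit={3} CLR={4}\n{5}\n\n",
                    DateTime.UtcNow, label, Environment.MachineName,
                    Environment.Is64BitProcess, Environment.Version,
                    ex)); // Exception.ToString() includes inner exceptions
                throw;    // rethrow so behavior is otherwise unchanged
            }
        }
    }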

  • Deploying a Rails app on an Ubuntu server using Git

    - by NudeCanalTroll
    I'm completely new to Linux, but today I find myself setting up a server (Ubuntu 10.04 LTS Lucid) from scratch to host a Rails application. Anyway, I managed to get a Rails app up and running on the server itself, but I had to scrap that because I want to use Git. So I set up a git repository on the server, then pushed all the code from my local machine to the repository. Buuuut, of course Git doesn't actually store the files themselves in the repository -- all the code for my Rails app is now only on my local machine. How am I supposed to tell the server to host that? Right now my solution is to have the server use git to pull the code from its own repository. That's the code I'll host for all the world to see. In order to update the code, I guess I'll have to do something like this:

    1. Update the code on my local machine.
    2. Do some git adds, git commits, and a git push.
    3. On the server, do a git pull to update the code.

    So my question is, am I doing this the right way?

  • rails rollback updates when task fails

    - by ash34
    Hi, I have the following generate_report method, which is called from a rake task. It gets as input a hash containing the hours each user reported on a task, and outputs the data as a .csv report.

    desc "Task reporting"
    task :report, [:inp_dt] => [:environment] do |t, args|
      h = select_data(args.inp_dt) # not shown here
      generate_report(h)
    end

    def generate_report(h)
      out_dir = File.dirname(__FILE__) + '/../../output'
      myfile = "#{out_dir}" + "/monthly_#{Date.today.strftime("%m%d%Y")}.csv"
      writer = CSV.open(myfile, 'w')
      h.each do |h, v|
        v.each do |key, val|
          writer << val
        end
      end
      writer.close
    end

    where h = {:BILL=>{:PROJA=>["CYR", "00876", "2", 24], :PROJB=>["EPR", "00876", "2", 16]}, :JANE=>{:PROJA=>["TRB", "049576", "2", 16]}}

    I would like to set/update a 'processed' flag for each reported transaction and only commit the updates when the file is written correctly, or roll back the updates when the task fails. How can I accomplish this? Thanks, ash

  • rails data aggregation

    - by ash34
    Hi, I have to create a hash of the form h[:bill] = ["Billy", "NA", 20, "PROJ_A"], keyed by login, where 20 is the cumulative number of hours reported by that login across all task transactions returned by the query (each login has multiple reported transactions). Did I do this in a bad way, or does this seem alright?

    h = Hash.new
    Task.find_each(:include => [:user], :joins => :user,
                   :conditions => ["from_date >= ? AND from_date <= ? AND category = ?",
                                   Date.today - 30, Date.today + 30, 'PROJ1']) do |t|
      h[t.login.intern] = [t.user.name, 'NA',
                           h[t.login.intern].nil? ? (t.hrs_per_day * t.num_days)
                                                  : h[t.login.intern][2] + (t.hrs_per_day * t.num_days),
                           t.category]
    end

    Also, if I have to aggregate this data not just by login but by login and category, how do I accomplish this? Thanks, ash

  • MySQL Can Connect Remotely but not Locally

    - by A Wizard Did It
    This is a weird problem and I'm not sure what's going on. I installed MySQL on a Linux box running Ubuntu 10.04 LTS. I can access mysql via SSH (mysql -p) and perform all my commands that way. I added a user, and I can use AddedUser to connect remotely from my machine, but not from the local machine. It makes no sense to me... SELECT host, user FROM mysql.user yields:

    +-----------+------------------+
    | host      | user             |
    +-----------+------------------+
    | %         | AddedUser        |
    | 127.0.0.1 | root             |
    | li241-255 | root             |
    | localhost | debian-sys-maint |
    | localhost | root             |
    +-----------+------------------+

    The problem is I'm developing on this machine using Node.js, and I can't connect locally from the server using the same username. I've tried FLUSH PRIVILEGES but that seems to have no effect. I know it's not Node.js, because I'm using the same code on another database and it's working in that environment.

    Edit: This is the error node is giving me.

    node.js:50
        throw e; // process.nextTick error, or 'error' event on first tick
        ^
    Error: ECONNREFUSED, Connection refused
        at Stream._onConnect (net.js:687:18)
        at IOWatcher.onWritable [as callback] (net.js:284:12)

    Edit 2: I have the right port & server as best I can tell. My /etc/mysql/my.cnf contains this:

    port = 3306
    socket = /var/run/mysqld/mysqld.sock

    My MySQL object contains:

    { host: 'localhost',
      port: 3306,
      user: 'removed',
      password: 'removed',
      database: '',
      typeCast: true,
      flags: 260047,
      maxPacketSize: 16777216,
      charsetNumber: 192,
      debug: false,
      ending: false,
      connected: false,
      _greeting: null,
      _queue: [],
      _connection: null,
      _parser: null,
      server: 'ExternalIpAddress' }

    Possibly useful?

    netstat -ln | grep mysql
    unix  2  [ ACC ]  STREAM  LISTENING  1016418  /var/run/mysqld/mysqld.sock

  • Model value not being set on return from View to Controller

    - by sagesky36
    I have a boolean model property whose value is supposed to be set to TRUE in order to perform a process on return to the controller. It works absolutely fine on my local machine, but not on the remote web server. Can somebody PLEASE tell me what I am missing? Below is the "proof of the pudding". The boolean value in question is ShouldGeneratePdf.

    MODEL:

    namespace PDFConverterModel.ViewModels
    {
        public partial class ViewModelTemplate_Guarantors
        {
            public ViewModelTemplate_Guarantors()
            {
                Templates = new List<PDFTemplate>();
                Guarantors = new List<tGuarantor>();
            }

            public int SelectedTemplateId { get; set; }
            public List<PDFTemplate> Templates { get; set; }
            public int SelectedGuarantorId { get; set; }
            public List<tGuarantor> Guarantors { get; set; }
            public string LoanId { get; set; }
            public string DepartmentId { get; set; }
            public bool isRepeat { get; set; }
            public string ddlDept { get; set; }
            public string SelectedDeptText { get; set; }
            public string LoanTypeId { get; set; }
            public string LoanType { get; set; }
            public string Error { get; set; }
            public string ErrorT { get; set; }
            public string ErrorG { get; set; }
            public bool ShowGeneratePDFBtn { get; set; }
            public bool ShouldGeneratePdf { get; set; }
        }
    }

    MasterPage:

    <!DOCTYPE html>
    <html>
    <head>
        <title>@ViewBag.Title</title>
        <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
        <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.common.min.css")" rel="stylesheet" type="text/css" />
        <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.dataviz.min.css")" rel="stylesheet" type="text/css" />
        <link href="@Url.Content("~/Content/kendo/2012.2.913/kendo.blueopal.min.css")" rel="stylesheet" type="text/css" />
        <script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/modernizr-2.5.3.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/kendo/2012.2.913/kendo.all.min.js")"></script>
        <script src="@Url.Content("~/Scripts/kendo/2012.2.913/kendo.aspnetmvc.min.js")"></script>
    </head>
    <body>
        <div class="page">
            <header>
                <div id="title">
                    <h1>BHG :: PDF Service Generator</h1>
                </div>
            </header>
            <section id="main">
                @RenderBody()
            </section>
            <footer>
            </footer>
        </div>
    </body>
    </html>

    View:

    @model PDFConverterModel.ViewModels.ViewModelTemplate_Guarantors
    @using (Html.BeginForm("ProcessForm", "Home", new AjaxOptions { HttpMethod = "POST" }))
    {
        <table style="width: 1000px">
            @Html.HiddenFor(x => x.ShouldGeneratePdf)
            <tr>
                <td><img alt="BHG Logo" src="~/Images/logo.gif" /></td>
            </tr>
            <tr>
                <td>
                    @(Html.Kendo().IntegerTextBox()
                        .Placeholder("Enter Loan Id")
                        .Name("LoanId")
                        .Format("{0:#######}")
                        .Value(Convert.ToInt32(Model.LoanId)))
                </td>
            </tr>
            <tr>
                <td>@Html.Label("Loan Type: ") @Html.DisplayFor(model => Model.LoanType)</td>
                <td>
                    <label for="ddlDept">Department:</label>
                    @(Html.Kendo().DropDownListFor(model => Model.ddlDept)
                        .Name("ddlDept")
                        .DataTextField("DepartmentName")
                        .DataValueField("DepartmentID")
                        .Events(e => e.Change("Refresh"))
                        .DataSource(source =>
                        {
                            source.Read(read => { read.Action("GetDepartments", "Home"); });
                        })
                        .Value(Model.ddlDept.ToString()))
                </td>
            </tr>
            @if (Model.ShowGeneratePDFBtn == true)
            {
                if (Model.ErrorT == string.Empty)
                {
                    <tr>
                        <td><u><b>@Html.Label("Templates:")</b></u></td>
                    </tr>
                    <tr>
                        @for (int i = 0; i < Model.Templates.Count; i++)
                        {
                            <td>
                                @Html.CheckBoxFor(model => Model.Templates[i].IsChecked)
                                @Html.DisplayFor(model => Model.Templates[i].TemplateId)
                            </td>
                        }
                    </tr>
                }
                else
                {
                    <tr>
                        <td><b>@Html.DisplayFor(model => Model.ErrorT)</b></td>
                    </tr>
                }
                if (Model.ErrorG == string.Empty)
                {
                    <tr>
                        <td><u><b>@Html.Label("Guarantors:")</b></u></td>
                    </tr>
                    <tr>
                        @for (int i = 0; i < Model.Guarantors.Count; i++)
                        {
                            <td>
                                @Html.CheckBoxFor(model => Model.Guarantors[i].isChecked)
                                @Html.DisplayFor(model => Model.Guarantors[i].GuarantorFirstName)&nbsp;@Html.DisplayFor(model => Model.Guarantors[i].GuarantorLastName)
                            </td>
                        }
                    </tr>
                }
                else
                {
                    <tr>
                        <td><b>@Html.DisplayFor(model => Model.ErrorG)</b></td>
                    </tr>
                }
            }
            <tr>
                <td>
                    <input type="submit" name="submitbutton" id="btnRefresh" value='Refresh' />
                </td>
                @if (Model.ShowGeneratePDFBtn == true)
                {
                    <td>
                        <input type="submit" name="submitbutton" id="btnGeneratePDF" value='Generate PDF' />
                    </td>
                }
            </tr>
            <tr>
                <td style="color: red; font: bold">@Model.Error</td>
            </tr>
        </table>
    }

    <script type="text/javascript">
        $('#btnRefresh').click(function () {
            Refresh();
        });

        function Refresh() {
            var LoanID = $("#LoanID").val();
            if (parseInt(LoanID) != 0) {
                $('#ShouldGeneratePdf').val(false)
                document.forms[0].submit();
            }
            else {
                alert("Please enter a LoanId");
            }
        }

        //$(function () {
        //    //DOM loaded
        //    $('#btnGeneratePDF').click(function () {
        //        DisableGeneratePDF();
        //        $('#ShouldGeneratePdf').val(true)
        //    });
        //});

        //function DisableGeneratePDF() {
        //    $('#btnGeneratePDF').attr("disabled", true);
        //    $('#btnRefresh').attr("disabled", true);
        //}

        $('#btnGeneratePDF').click(function () {
            alert("inside click function");
            DisableGeneratePDF();
            $('#ShouldGeneratePdf').val(true)
            tof = $('#ShouldGeneratePdf').val();
            alert("ShouldGeneratePdf set to " + tof);
        });

        function DisableGeneratePDF() {
            alert("begin DisableGeneratePDF function");
            $('#btnGeneratePDF').attr("disabled", true);
            $('#btnRefresh').attr("disabled", true);
            alert("end DisableGeneratePDF function");
        }
    </script>

    Controller:

    [HttpPost]
    public ActionResult ProcessForm(string submitbutton, ViewModelTemplate_Guarantors model, FormCollection collection)
    {
        if ((submitbutton == "Refresh") || (submitbutton == null) && (model.ShouldGeneratePdf == false))
        {
        }
        else if ((submitbutton == "Generate PDF") || (model.ShouldGeneratePdf == true))
        {
        }
    }

    The alerts in the script above come out to exactly what they should be on the remote server; the last alert shows that the value of the bool variable is "true". However, when I view the page source for the hidden variable, the value on my local machine is set to true when the process executes, while on the remote machine it remains false, so the process doesn't execute. Why isn't the value in the model being returned as TRUE on the remote machine?
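
    For what it's worth, a quick way to narrow this down (my addition, not from the original post): log the raw posted value next to the bound value at the top of the action. MVC's default model binder turns a posted "true"/"false" string into the bool property, so if the bound value is false on the remote machine, the hidden field most likely still held the "False" that HiddenFor rendered at page load - i.e. the client-side click handler never ran there (a cached page or a script that fails to load on the server would do it).

    [HttpPost]
    public ActionResult ProcessForm(string submitbutton, ViewModelTemplate_Guarantors model, FormCollection collection)
    {
        // Hypothetical diagnostic: compare what the browser actually posted
        // with what the default model binder produced from it.
        System.Diagnostics.Trace.WriteLine(string.Format(
            "ShouldGeneratePdf raw='{0}', bound={1}",
            collection["ShouldGeneratePdf"], model.ShouldGeneratePdf));

        // ... original Refresh / Generate PDF branching goes here ...
        return View(model);
    }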

  • Why are we getting a WCF "Framing error" on some machines but not others

    - by Ian Ringrose
    We have just found we are getting "framing errors" (as reported by the WCF logs) when running our system on some customer test machines. It all works OK on our development machines. We have an abstract base class with KnownType attributes for all its subclasses. One of its subclasses is missing its DataContract attribute. However, it all worked on our test machine! On the customer's test machine, we got "framing errors" showing up in the WCF logs; this is not the error message I have seen in the past when missing a DataContract attribute or a KnownType attribute. I wish to get to the bottom of this, as we can no longer have confidence in our ability to test the system before giving it to the customer until we can make our machines behave the same as the customer's machines. Code that tries to show what I am talking about (not the real code):

    [DataContract()]
    [KnownType(typeof(SubClass1))]
    [KnownType(typeof(SubClass2))]
    // other subclasses with data members
    public abstract class Base
    {
        [DataMember]
        public int LotsMoreItemsThenThisInRealLife;
    }

    /// <summary>
    /// This works on some machines (not others) when passed to Contract::DoIt,
    /// note the missing [DataContract()]
    /// </summary>
    public class SubClass1 : Base
    {
        // has no data members
    }

    /// <summary>
    /// This works in all cases when passed to Contract::DoIt
    /// </summary>
    [DataContract()]
    public class SubClass2 : Base
    {
        // has no data members
    }

    public interface IContract
    {
        void DoIt(Base[] items);
    }

    public static class MyProgram
    {
        public static IContract ConntectToServerOverWCF()
        {
            // lots of code ...
            return null;
        }

        public static void Startup()
        {
            IContract server = ConntectToServerOverWCF();

            // this works all of the time
            server.DoIt(new Base[] { new SubClass2() { LotsMoreItemsThenThisInRealLife = 2 } });

            // this works "in development" e.g. on our machines, but not on the customer's test machines!
            server.DoIt(new Base[] { new SubClass1() { LotsMoreItemsThenThisInRealLife = 2 } });
        }
    }
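
    For what it's worth, the usual fix here (a sketch of my own, not from the original post): give every subclass that crosses the wire its own attribute. DataContractAttribute is declared with Inherited = false, so SubClass1 does not inherit [DataContract] from Base, and the KnownType entries alone don't make it a data contract.

    // Sketch of the likely fix: each serialized type carries its own
    // [DataContract]; the attribute is marked Inherited = false, so
    // subclasses never pick it up from Base. How the missing attribute
    // surfaces - a clean serialization fault vs. a low-level framing
    // error - may plausibly differ between framework patch levels, which
    // would match the machine-dependent symptom described above.
    [DataContract]
    public class SubClass1 : Base
    {
        // still no data members
    }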

  • C++/Win32 : XP Visual Styles - no controls are showing up?

    - by mrl33t
    Okay, so I'm pretty new to C++ & the Windows API and I'm just writing a small application. I wanted my application to make use of visual styles in XP, Vista and Windows 7, so I added this line to the top of my code:

    #pragma comment(linker,"\"/manifestdependency:type='win32' name='Microsoft.Windows.Common-Controls' version='6.0.0.0' processorArchitecture='*' publicKeyToken='6595b64144ccf1df' language='*'\"")

    It seemed to work perfectly on my Windows 7 machine and also my Vista machine. But when I tried the application on XP, it wouldn't load any controls (e.g. buttons, labels etc.) - not even message boxes would display. This image shows a small test application which I've just put together to demonstrate what I'm trying to explain: http://img704.imageshack.us/img704/2250/myapp.png In this test application I'm not using any particularly fancy or complicated code. I've effectively just taken the most basic sample code from the MSDN Library (http://msdn.microsoft.com/en-us/library/ff381409.aspx) and added a section to the WM_CREATE message handler to create a button:

    MyBtn = CreateWindow(L"Button", L"My Button", BS_PUSHBUTTON | WS_CHILD | WS_VISIBLE, 25, 25, 100, 30, hWnd, NULL, hInst, 0);

    But I just can't figure out what's going on and why it's not working. Any ideas, guys? Thank you in advance. (By the way, the application works on XP if I remove the manifest section from the top - obviously without visual styles, though. I should also probably mention that the app was built using Visual C++ 2010 Express on a Windows 7 machine - if that makes a difference?)

  • Why is CoRegisterClassObject creating two extra threads?

    - by Stijn Sanders
    I'm trying to fix a problem that only recently started happening on a number of machines on a VPN. They each run a client application I wrote that exposes a COM automation object. For some strange reason I haven't been able to discover yet, one thread in the application takes up all of the available CPU time, slowing other operations on the machine. In observing the application's strange behaviour, I've noticed it's the third thread started, and if I debug on my machine I notice the first call to CoRegisterClassObject creates two extra threads. If the second of these two threads is the one that gets into an infinite loop, I'm not at all sure how to fix this. Where could I check next for what's wrong? Could it have been started by one of the recent patches rolled out by Microsoft this last 'patch Tuesday'? I had a go with Process Explorer to extract a stack trace of the thread:

    ntoskrnl.exe!ExReleaseResourceLite+0x1a3
    ntoskrnl.exe!PsGetContextThread+0x329
    WLDAP32.dll!Ordinal325+0x1231
    WLDAP32.dll!Ordinal325+0x129e
    WLDAP32.dll!Ordinal325+0x1178
    ntdll.dll!LdrInitializeThunk+0x24
    ntdll.dll!LdrShutdownThread+0xe9
    kernel32.dll!ExitThread+0x3e
    kernel32.dll!FreeLibraryAndExitThread+0x1e
    ole32.dll!StringFromGUID2+0x65d
    kernel32.dll!GetModuleFileNameA+0x1ba

  • Trouble tunneling my local Wordpress install to the mysql database on appfog

    - by alanmoo
    I've set up a WordPress install on AppFog (using Rackspace), and cloned the install to my local machine for development. I know the install works (using MAMP) because I created a local MySQL database and changed wp-config.php to point to it. However, I want to develop without having to change wp-config.php every time I commit. After doing some research, it seems the AppFog service Caldecott lets me tunnel into the MySQL database on the server, using af tunnel. Unfortunately, I'm having issues getting it working. Even if I change my MAMP MySQL port to something like 8889 and tunnel MySQL through port 3306, it looks like it's connected, but I still get "Error establishing a database connection" when loading my localhost WordPress. When I quit the MySQL monitor (using ctrl+x, ctrl+c), I get the message "Error: 'mysql' execution failed; is it in your $PATH?". Originally, no, it wasn't, but I've fixed the PATH variable on my local machine so that when I go to Terminal and just type mysql, it loads up. So I guess my question is two parts: 1) Am I going about WordPress development on my local machine the right way, and 2) If so, why is the tunnel not working?

  • Compiling a C++ application on Windows 7, but executing it on Win2003 Server

    - by dabs
    I have a C++ application (quite complex, multiple projects) in Visual Studio 2008 that produces a single DLL. Recently I switched to Windows 7, having previously compiled under Windows XP. Suddenly the DLL in question cannot be loaded by another application, i.e. on a machine running Windows 2003 Server. I've been trying various things:

    I've installed the VC 9.0 redistributable package on the server
    Also copied various DLLs from that package to the application folder
    The project is of course compiled in release mode

    When I run depends.exe on the client machine, I get the following error: "Error: The Side-by-Side configuration information for "my_dll.dll" contains errors. This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem (14001). Warning: At least one module has an unresolved import due to a missing export function in a delay-load dependent module." and the icon for shlwapi.dll has a red overlay icon. This didn't happen when I was compiling under WinXP, so I'm guessing that there really is no problem with the DLLs on the client machine, but somewhere there is a reference to that particular version of some DLL. Does anyone know what would be the best way to resolve this? Regards, Daníel

  • MySQL Config File for Large System

    - by Jonathon
    We are running MySQL on a Windows 2003 Server Enterprise Edition box. MySQL is about the only program running on the box. We have approx. 8 slaves replicating from it, but my understanding is that having multiple slaves connecting to the same master does not significantly slow down performance, if at all. The master server has 16G RAM, 10 Terabyte drives in RAID 10, and four dual-core processors. From what I have seen on other sites, we have a really robust machine as our master DB server. We just upgraded from a machine with only 4G RAM but similar hard drives, RAID, etc. It also ran Apache, so it was our DB server and our application server. It was getting a little slow, so we split the DB server onto this new machine and kept the application server on the first machine. We also distributed the application load amongst a few of our other slave servers, which also run the application. The problem is that the new DB server has mysqld.exe consuming 95-100% of CPU almost all the time, and this is really causing the app to run slowly. I know we have several queries and table structures that could be better optimized, but since they worked okay on the older, smaller server, I assume that our my.ini (MySQL config) file is not properly configured. Most of what I see on the net is about setting up config files for small machines, so can anyone help me get the my.ini file correct for a large dedicated machine like ours? I just don't see how mysqld could get so bogged down! FYI: We have about 100 queries per second. We only use MyISAM tables, so skip-innodb is set in the ini file. And yes, I know it is reading the ini file correctly, because if I change some settings (like the server-id) it will kill the server at startup. Here is the my.ini file:

    #MySQL Server Instance Configuration File
    # ----------------------------------------------------------------------
    # Generated by the MySQL Server Instance Configuration Wizard
    #
    # Installation Instructions
    # ----------------------------------------------------------------------
    #
    # On Linux you can copy this file to /etc/my.cnf to set global options,
    # mysql-data-dir/my.cnf to set server-specific options
    # (@localstatedir@ for this installation) or to
    # ~/.my.cnf to set user-specific options.
    #
    # On Windows you should keep this file in the installation directory
    # of your server (e.g. C:\Program Files\MySQL\MySQL Server X.Y). To
    # make sure the server reads the config file use the startup option
    # "--defaults-file".
    #
    # To run the server from the command line, execute this in a
    # command line shell, e.g.
    # mysqld --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    #
    # To install the server as a Windows service manually, execute this in a
    # command line shell, e.g.
    # mysqld --install MySQLXY --defaults-file="C:\Program Files\MySQL\MySQL Server X.Y\my.ini"
    #
    # And then execute this in a command line shell to start the server, e.g.
    # net start MySQLXY
    #
    # Guidelines for editing this file
    # ----------------------------------------------------------------------
    #
    # In this file, you can use all long options that the program supports.
    # If you want to know the options a program supports, start the program
    # with the "--help" option.
    #
    # More detailed information about the individual options can also be
    # found in the manual.
    #
    # CLIENT SECTION
    # ----------------------------------------------------------------------
    #
    # The following options will be read by MySQL client applications.
    # Note that only client applications shipped by MySQL are guaranteed
    # to read this section. If you want your own MySQL client program to
    # honor these values, you need to specify it as an option during the
    # MySQL client library initialization.
    #
    [client]
    port=3306

    [mysql]
    default-character-set=latin1

    # SERVER SECTION
    # ----------------------------------------------------------------------
    #
    # The following options will be read by the MySQL Server. Make sure that
    # you have installed the server correctly (see above) so it reads this
    # file.
    #
    [mysqld]

    # The TCP/IP Port the MySQL Server will listen on
    port=3306

    # Path to installation directory. All paths are usually resolved relative to this.
    basedir="D:/MySQL/"

    # Path to the database root
    datadir="D:/MySQL/data"

    # The default character set that will be used when a new schema or table is
    # created and no character set is defined
    default-character-set=latin1

    # The default storage engine that will be used when creating new tables
    default-storage-engine=MYISAM

    # Set the SQL mode to strict
    #sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
    # we changed this because there are a couple of queries that can get blocked otherwise
    sql-mode=""

    # performance configs
    skip-locking
    max_allowed_packet = 1M
    table_open_cache = 512

    # The maximum amount of concurrent sessions the MySQL server will
    # allow. One of these connections will be reserved for a user with
    # SUPER privileges to allow the administrator to login even if the
    # connection limit has been reached.
    max_connections=1510

    # Query cache is used to cache SELECT results and later return them
    # without actually executing the same query once again. Having the query
    # cache enabled may result in significant speed improvements if you
    # have a lot of identical queries and rarely changing tables. See the
    # "Qcache_lowmem_prunes" status variable to check if the current value
    # is high enough for your load.
    # Note: In case your tables change very often, or if your queries are
    # textually different every time, the query cache may result in a
    # slowdown instead of a performance improvement.
    query_cache_size=168M

    # The number of open tables for all threads. Increasing this value
    # increases the number of file descriptors that mysqld requires.
    # Therefore you have to make sure to set the amount of open files
    # allowed to at least 4096 in the variable "open-files-limit" in
    # section [mysqld_safe]
    table_cache=3020

    # Maximum size for internal (in-memory) temporary tables. If a table
    # grows larger than this value, it is automatically converted to a disk
    # based table. This limitation is for a single table. There can be many
    # of them.
    tmp_table_size=30M

    # How many threads we should keep in a cache for reuse. When a client
    # disconnects, the client's threads are put in the cache if there aren't
    # more than thread_cache_size threads from before. This greatly reduces
    # the amount of thread creations needed if you have a lot of new
    # connections. (Normally this doesn't give a notable performance
    # improvement if you have a good thread implementation.)
    thread_cache_size=64

    #*** MyISAM Specific options

    # The maximum size of the temporary file MySQL is allowed to use while
    # recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
    # If the file size would be bigger than this, the index will be created
    # through the key cache (which is slower).
    myisam_max_sort_file_size=100G

    # If the temporary file used for fast index creation would be bigger
    # than using the key cache by the amount specified here, then prefer the
    # key cache method. This is mainly used to force long character keys in
    # large tables to use the slower key cache method to create the index.
    myisam_sort_buffer_size=64M

    # Size of the Key Buffer, used to cache index blocks for MyISAM tables.
    # Do not set it larger than 30% of your available memory, as some memory
    # is also required by the OS to cache rows. Even if you're not using
    # MyISAM tables, you should still set it to 8-64M as it will also be
    # used for internal temporary disk tables.
    key_buffer_size=3072M

    # Size of the buffer used for doing full table scans of MyISAM tables.
    # Allocated per thread, if a full scan is needed.
    read_buffer_size=2M
    read_rnd_buffer_size=8M

    # This buffer is allocated when MySQL needs to rebuild the index in
    # REPAIR, OPTIMIZE, ALTER table statements as well as in LOAD DATA INFILE
    # into an empty table. It is allocated per thread so be careful with
    # large settings.
    sort_buffer_size=2M

    #*** INNODB Specific options ***
    innodb_data_home_dir="D:/MySQL InnoDB Datafiles/"

    # Use this option if you have a MySQL server with InnoDB support enabled
    # but you do not plan to use it. This will save memory and disk space
    # and speed up some things.
    skip-innodb

    # Additional memory pool that is used by InnoDB to store metadata
    # information. If InnoDB requires more memory for this purpose it will
    # start to allocate it from the OS. As this is fast enough on most
    # recent operating systems, you normally do not need to change this
    # value. SHOW INNODB STATUS will display the current amount used.
    innodb_additional_mem_pool_size=11M

    # If set to 1, InnoDB will flush (fsync) the transaction logs to the
    # disk at each commit, which offers full ACID behavior. If you are
    # willing to compromise this safety, and you are running small
    # transactions, you may set this to 0 or 2 to reduce disk I/O to the
    # logs. Value 0 means that the log is only written to the log file and
    # the log file flushed to disk approximately once per second. Value 2
    # means the log is written to the log file at each commit, but the log
    # file is only flushed to disk approximately once per second.
    innodb_flush_log_at_trx_commit=1

    # The size of the buffer InnoDB uses for buffering log data. As soon as
    # it is full, InnoDB will have to flush it to disk. As it is flushed
    # once per second anyway, it does not make sense to have it very large
    # (even with long transactions).
    innodb_log_buffer_size=6M

    # InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
    # row data. The bigger you set this the less disk I/O is needed to
    # access data in tables. On a dedicated database server you may set this
    # parameter up to 80% of the machine physical memory size. Do not set it
    # too large, though, because competition for the physical memory may
    # cause paging in the operating system. Note that on 32bit systems you
    # might be limited to 2-3.5G of user level memory per process, so do not
    # set it too high.
    innodb_buffer_pool_size=500M

    # Size of each log file in a log group. You should set the combined size
    # of log files to about 25%-100% of your buffer pool size to avoid
    # unneeded buffer pool flush activity on log file overwrite. However,
    # note that a larger logfile size will increase the time needed for the
    # recovery process.
    innodb_log_file_size=100M

    # Number of threads allowed inside the InnoDB kernel. The optimal value
    # depends highly on the application, hardware as well as the OS
    # scheduler properties. A too high value may lead to thread thrashing.
    innodb_thread_concurrency=10

    # replication settings (this is the master)
    log-bin=log
    server-id = 1

    Thanks for all the help. It is greatly appreciated.

  • Can the size of a structure change after it is compiled?

    - by Sarah Altiva
    Hi, suppose you have the following structure:

    #include <windows.h> // BOOL is here.
    #include <stdio.h>

    typedef struct {
        BOOL someBool;
        char someCharArray[100];
        int someIntValue;
        BOOL moreBools, anotherOne, yetAgain;
        char someOthercharArray[23];
        int otherInt;
    } Test;

    int main(void)
    {
        printf("Structure size: %d, BOOL size: %d.\n", sizeof(Test), sizeof(BOOL));
    }

    When I compile this piece of code on my machine (32-bit OS), the output is the following:

    Structure size: 148, BOOL size: 4.

    I would like to know if, once compiled, these values may change depending on the machine which runs the program. E.g.: if I ran this program on a 64-bit machine, would the output be the same? Or once it's compiled, will it always be the same? Thank you very much, and forgive me if the answer to this question is obvious...

  • c# STILL returning wrong number of cores

    - by Justin
    Ok, so I posted in "In C# GetEnvironmentVariable("NUMBER_OF_PROCESSORS") returns the wrong number" asking how to get the correct number of cores in C#. Some helpful people directed me to a couple of questions where similar questions were asked, but I have already tried those solutions. My question was then closed as being the same as another question, which is true, it is, but the solution given there didn't work. So I'm opening another one, hoping that someone may be able to help, realising that the other solutions DID NOT work. That question was "How to find the Number of CPU Cores via .NET/C#?", which used WMI to try to get the correct number of cores. Well, here's the output from the code given there:

    Number Of Cores: 32
    Number Of Logical Processors: 32
    Number Of Physical Processors: 4

    As per my last question, the machine is a 64-core AMD Opteron 6276 (4x16 cores) running Windows Server 2008 R2 HPC edition. Regardless of what I do, Windows always seems to return 32 cores even though 64 are available. I have confirmed the machine is only using 32, and if I hardcode 64 cores, then the machine uses all of them. I'm wondering if there might be an issue with the way the AMD CPUs are detected. FYI, in case you haven't read the last question, if I type echo %NUMBER_OF_PROCESSORS% at the command line, it returns 64. It just won't do it in a programming environment. Thanks, Justin

    UPDATE: Outputting PROCESSOR_ARCHITECTURE returns AMD64 from the command line, but x86 from the program. The program is 32-bit running on 64-bit hardware. I was asked to compile it as 64-bit, but it still shows 32 cores.
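
    For what it's worth, one angle not covered in the question (my addition): since PROCESSOR_ARCHITECTURE comes back as x86, the program appears to still run as a 32-bit (WOW64) process, and on Windows 7 / Server 2008 R2 and later logical processors are organized into processor groups of up to 64, with a 32-bit process limited to a 32-processor view. A minimal sketch that asks for the machine-wide count across all groups, assuming Server 2008 R2 or later:

    using System;
    using System.Runtime.InteropServices;

    class CoreCount
    {
        // ALL_PROCESSOR_GROUPS (0xffff) asks for the total across every
        // processor group rather than just the caller's group.
        const ushort ALL_PROCESSOR_GROUPS = 0xFFFF;

        // Exported by kernel32 on Windows 7 / Server 2008 R2 and later.
        [DllImport("kernel32.dll")]
        static extern uint GetActiveProcessorCount(ushort GroupNumber);

        static void Main()
        {
            // What the CLR reports for the current process (group-limited).
            Console.WriteLine("Environment.ProcessorCount:   " + Environment.ProcessorCount);

            // Machine-wide logical processor count across all groups.
            Console.WriteLine("GetActiveProcessorCount(ALL): " + GetActiveProcessorCount(ALL_PROCESSOR_GROUPS));
        }
    }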

  • Java socket bug on linux (0xFF sent, -3 received)

    - by Marius
    While working on a WebSocket server in Java I came across this strange bug. I've reduced it down to two small Java files: one is the server, the other is the client. The client simply sends 0x00, the string Hello and then 0xFF (per the WebSocket specification). On my Windows machine, the server prints the following:

    Listening
    byte: 0
    72 101 108 108 111 recieved: 'Hello'

    While on my Unix box the same code prints the following:

    Listening
    byte: 0
    72 101 108 108 111 -3

    Instead of receiving 0xFF it gets -3, never breaks out of the loop and never prints what it has received. The important part of the code looks like this:

    byte b = (byte)in.read();
    System.out.println("byte: "+b);
    StringBuilder input = new StringBuilder();
    b = (byte)in.read();
    while((b & 0xFF) != 0xFF){
        input.append((char)b);
        System.out.print(b+" ");
        b = (byte)in.read();
    }
    inputLine = input.toString();
    System.out.println("recieved: '" + inputLine+"'");
    if(inputLine.equals("bye")){
        break;
    }

    I've also uploaded the two files to my server: Server.java Client.java. My Windows machine is running Windows 7 and my Linux machine is running Debian.

  • Is it possible to get a truly unique id for a particular JVM instance?

    - by Uri
    I need a way to uniquely and permanently identify an instance of the JVM from within Java code running in that JVM. That is, if I have two JVMs running at the same time on the same machine, each is distinguishable. It is also distinguishable from running JVMs on other machines and from future executions on the same machine even if the process id is reused. I figure I could implement something like this by identifying the start time, the machine MAC, and the process id, and combining them in some way. I'm wondering if there is some standard way to achieve this. Update: I see that everyone recommended a UUID for the entire session. That seems like a good idea though possibly a little too heavyweight. Here is my problem though: I want to use the JVM id to create multiple unique identifiers in each JVM execution that somehow incorporate the JVM instance. My understanding is that you shouldn't really mix other numbers into a UUID because uniqueness is no longer guaranteed. An alternative is to make the UUID into a string and chain it, but then it becomes too long. Any ideas on overcoming this?

  • Using FiddlerCore to capture HTTP Requests with .NET

    - by Rick Strahl
Over the last few weeks I've been working on my Web load testing utility West Wind WebSurge. One of the key components of a load testing tool is the ability to capture URLs effectively so that you can play them back later under load. One of the options in WebSurge for capturing URLs is to use its built-in capture tool, which acts as an HTTP proxy to capture any HTTP and HTTPS traffic from most Windows HTTP clients, including Web browsers as well as standalone Windows applications and services. To make this happen, I used Eric Lawrence's awesome FiddlerCore library, which provides most of the functionality of his desktop Fiddler application, all rolled into an easy-to-use library that you can plug into your own applications. FiddlerCore makes it almost too easy to capture HTTP content! For WebSurge I needed to capture all HTTP traffic in order to capture the full HTTP request – URL, headers and any content posted by the client. The result of what I ended up creating is a semi-generic capture form. In this post I'm going to demonstrate how easy it is to use FiddlerCore to build this HTTP capture form. If you want to jump right in, here are the links to get Telerik's FiddlerCore and the code for the demo provided here:

FiddlerCore Download
FiddlerCore on NuGet
Show me the Code (WebSurge Integration code from GitHub)
Download the WinForms Sample Form
West Wind WebSurge (example implementation in live app)

Note that FiddlerCore is bound by a license for commercial usage – see license.txt in the FiddlerCore distribution for details.

Integrating FiddlerCore

FiddlerCore is a library that simply plugs into your application. You can download it from the Telerik site and manually add the assemblies to your project, or you can simply install the NuGet package via:

PM> Install-Package FiddlerCore

The library consists of FiddlerCore.dll as well as a couple of support libraries (CertMaker.dll and BCMakeCert.dll) that are used for installing SSL certificates. I'll have more on SSL captures and certificate installation later in this post. But first let's see how easy it is to use FiddlerCore to capture HTTP content by looking at how to build the above capture form.

Capturing HTTP Content

Once the library is installed it's super easy to hook up Fiddler functionality. Fiddler includes a number of static class methods on the FiddlerApplication object that can be called to hook up callback events as well as actually start monitoring HTTP URLs. In the following code, directly lifted from WebSurge, I configure a few filter options on the form-level object from the user inputs shown on the form, by assigning them to a capture options object. In the live application these settings are persisted configuration values, but in the demo they are one-time values initialized and set on the form.
Once these options are set, I hook up the AfterSessionComplete event to capture every URL that passes through the proxy after the request is completed, and start up the proxy service:

void Start()
{
    if (tbIgnoreResources.Checked)
        CaptureConfiguration.IgnoreResources = true;
    else
        CaptureConfiguration.IgnoreResources = false;

    string strProcId = txtProcessId.Text;
    if (strProcId.Contains('-'))
        strProcId = strProcId.Substring(strProcId.IndexOf('-') + 1).Trim();
    strProcId = strProcId.Trim();

    int procId = 0;
    if (!string.IsNullOrEmpty(strProcId))
    {
        if (!int.TryParse(strProcId, out procId))
            procId = 0;
    }
    CaptureConfiguration.ProcessId = procId;
    CaptureConfiguration.CaptureDomain = txtCaptureDomain.Text;

    FiddlerApplication.AfterSessionComplete += FiddlerApplication_AfterSessionComplete;
    FiddlerApplication.Startup(8888, true, true, true);
}

The key lines for FiddlerCore are just the last two lines of code, which include the event hookup code as well as the Startup() method call. Here I only hook up to the AfterSessionComplete event, but there are a number of other events that hook various stages of the HTTP request cycle you can also hook into. Other events include BeforeRequest, BeforeResponse, RequestHeadersAvailable, ResponseHeadersAvailable and so on. In my case I want to capture the request data, and I actually have several options to capture this data. AfterSessionComplete is the last event that fires in the request sequence and it's the most common choice to capture all request and response data. I could have used several other events, but AfterSessionComplete is one place where you can look at both the request and response data, so this will be the most common place to hook into if you're capturing content. The implementation of AfterSessionComplete is responsible for capturing all HTTP request headers, and it looks something like this:
private void FiddlerApplication_AfterSessionComplete(Session sess)
{
    // Ignore HTTPS connect requests
    if (sess.RequestMethod == "CONNECT")
        return;

    if (CaptureConfiguration.ProcessId > 0)
    {
        if (sess.LocalProcessID != 0 && sess.LocalProcessID != CaptureConfiguration.ProcessId)
            return;
    }

    if (!string.IsNullOrEmpty(CaptureConfiguration.CaptureDomain))
    {
        if (sess.hostname.ToLower() != CaptureConfiguration.CaptureDomain.Trim().ToLower())
            return;
    }

    if (CaptureConfiguration.IgnoreResources)
    {
        string url = sess.fullUrl.ToLower();

        var extensions = CaptureConfiguration.ExtensionFilterExclusions;
        foreach (var ext in extensions)
        {
            if (url.Contains(ext))
                return;
        }

        var filters = CaptureConfiguration.UrlFilterExclusions;
        foreach (var urlFilter in filters)
        {
            if (url.Contains(urlFilter))
                return;
        }
    }

    if (sess == null || sess.oRequest == null || sess.oRequest.headers == null)
        return;

    string headers = sess.oRequest.headers.ToString();
    var reqBody = sess.GetRequestBodyAsString();

    // if you wanted to capture the response
    //string respHeaders = session.oResponse.headers.ToString();
    //var respBody = session.GetResponseBodyAsString();

    // replace the HTTP line to inject full URL
    string firstLine = sess.RequestMethod + " " + sess.fullUrl + " " + sess.oRequest.headers.HTTPVersion;
    int at = headers.IndexOf("\r\n");
    if (at < 0)
        return;
    headers = firstLine + "\r\n" + headers.Substring(at + 1);

    string output = headers + "\r\n" +
                    (!string.IsNullOrEmpty(reqBody) ? reqBody + "\r\n" : string.Empty) +
                    Separator + "\r\n\r\n";

    BeginInvoke(new Action<string>((text) =>
    {
        txtCapture.AppendText(text);
        UpdateButtonStatus();
    }), output);
}

The code starts by filtering out some requests based on the CaptureOptions I set before the capture is started. These options/filters are applied when requests actually come in. This is very useful to help narrow down the requests that are captured for playback, based on options the user picked. I find it useful to limit requests to a certain domain for captures, as well as filtering out some request types like static resources – images, css, scripts etc. This is of course optional, but I think it's a common scenario and WebSurge makes good use of this feature.

AfterSessionComplete, like other FiddlerCore events, provides a Session object parameter which contains all the request and response details. There are oRequest and oResponse objects to hold their respective data. In my case I'm interested in the raw request headers and body only; as you can see in the commented code, you can also retrieve the response headers and body. Here the code captures the request headers and body and simply appends the output to the textbox on the screen. Note that the Fiddler events are asynchronous, so in order to display the content in the UI they have to be marshaled back to the UI thread with BeginInvoke, which here simply takes the generated headers and appends them to the existing textbox text on the form.

As each request is processed, the headers are captured and appended to the bottom of the textbox, resulting in a session HTTP capture in the format that WebSurge internally supports, which is basically raw request headers with a customized first HTTP header line that includes the full URL rather than a server-relative URL. When the capture is done, the user can either copy the raw HTTP session to the clipboard or directly save it to file. This raw capture format is the same format WebSurge and also Fiddler use to import/export request data. While this code is application specific, it demonstrates the kind of logic that you can easily apply to the request capture process, which is one of the reasons why FiddlerCore is so powerful. You get to choose what content you want to look at as part of your own application logic, and you can then decide how to capture or use that data as part of your application. The actual captured data in this case is only a string. The user can edit the data by hand or, in the case of WebSurge, save it to disk and automatically open the captured session as a new load test.

Stopping the FiddlerCore Proxy

Finally, to stop capturing requests you simply disconnect the event handler and call the FiddlerApplication.Shutdown() method:

void Stop()
{
    FiddlerApplication.AfterSessionComplete -= FiddlerApplication_AfterSessionComplete;

    if (FiddlerApplication.IsStarted())
        FiddlerApplication.Shutdown();
}

As you can see, adding HTTP capture functionality to an application is very straightforward. FiddlerCore offers tons of features I'm not even touching on here – I suspect basic captures are the most common scenario, but a lot of different things can be done with FiddlerCore's simple API interface. Sky's the limit! The source code for this sample capture form (WinForms) is provided as part of this article.
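
As an aside (my own condensation, not code from the article): if you don't need a UI at all, the same AfterSessionComplete-plus-Startup() pattern boils down to a few lines. The port and console output here are arbitrary choices:

using System;
using Fiddler;

class MinimalCapture
{
    static void Main()
    {
        // Log the request line of every completed session.
        FiddlerApplication.AfterSessionComplete += sess =>
        {
            if (sess.RequestMethod != "CONNECT")
                Console.WriteLine("{0} {1}", sess.RequestMethod, sess.fullUrl);
        };

        // Same Startup() call the article uses: port 8888, register as
        // the system proxy, decrypt SSL, allow remote clients.
        FiddlerApplication.Startup(8888, true, true, true);

        Console.WriteLine("Capturing on port 8888 - press Enter to stop.");
        Console.ReadLine();

        FiddlerApplication.Shutdown();
    }
}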
Adding Fiddler Certificates with FiddlerCore

One of the sticking points in West Wind WebSurge has been that if you wanted to capture HTTPS/SSL traffic, you needed to have the full version of Fiddler and have HTTPS decryption enabled. Essentially you had to use Fiddler to configure HTTPS decryption and the associated installation of the Fiddler local client certificate that is used for local decryption of incoming SSL traffic. While this works just fine, requiring Fiddler to be installed and then using a separate application to configure the SSL functionality isn't ideal. Fortunately FiddlerCore actually includes the tools to register the Fiddler certificate directly using FiddlerCore.

Why does Fiddler need a Certificate in the first Place?

Fiddler and FiddlerCore are essentially HTTP proxies, which means they inject themselves into the HTTP conversation by re-routing HTTP traffic to a special HTTP port (8888 by default for Fiddler) and then forwarding the HTTP data to the original client. Fiddler injects itself as the system proxy using the WinInet Windows settings, which are the same settings that Internet Explorer uses and that are configured in the Windows and Internet Explorer Internet Settings dialog. Most HTTP clients running on Windows pick up and apply these system-level proxy settings before establishing new HTTP connections, and that's why most clients automatically work once Fiddler – or FiddlerCore/WebSurge – is running. For plain HTTP requests this just works – Fiddler intercepts the HTTP requests on the proxy port and then forwards them to the original port (80 for HTTP and 443 for SSL typically, but it could be any port). For SSL, however, this is not quite as simple – Fiddler can easily act as an HTTPS/SSL client to capture inbound requests from the server, but when it forwards the request to the client it has to also act as an SSL server and provide a certificate that the client trusts. This won't be the original certificate from the remote site, but rather a custom local certificate that effectively simulates an SSL connection between the proxy and the client. If there is no custom certificate configured for Fiddler, the SSL request fails with a certificate validation error. The key for this to work is that a custom certificate has to be installed that the HTTPS client trusts on the local machine. For a much more detailed description of the process you can check out Eric Lawrence's blog post on certificates.

If you're using the desktop version of Fiddler, you can install a local certificate into the Windows certificate store; Fiddler proper does this from the Options menu. This operation does several things:

It installs the Fiddler Root Certificate
It sets trust to this Root Certificate
A new client certificate is generated for each HTTPS site monitored

Certificate Installation with FiddlerCore

You can also provide this same functionality using FiddlerCore, which includes a CertMaker class.
Using CertMaker is straightforward, and it provides an easy way to create some simple helpers that can install and uninstall a Fiddler root certificate:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;

        if (!CertMaker.trustRootCert())
            return false;
    }
    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }
    return true;
}

InstallCertificate() works by first checking whether the root certificate is already installed, and if it isn't, goes ahead and creates a new one. The process of creating the certificate is a two-step process – first the actual certificate is created, and then it's moved into the certificate store to become trusted. I'm not sure why you'd ever split these operations up, since a cert created without trust isn't going to be of much value, but there are two distinct steps. When you trigger the trustRootCert() method, a message box will pop up on the desktop that lets you know that you're about to trust a local private certificate. This is a security feature to ensure that you really want to trust the Fiddler root, since you are essentially installing a man-in-the-middle certificate. It's quite safe to use this generated root certificate, because it's been specifically generated for your machine and thus is not usable from external sources; the only way to use this certificate in a trusted way is from the local machine. IOW, unless somebody has physical access to your machine, there's no useful way to hijack this certificate and use it for nefarious purposes (see Eric's post for more details). Once the root certificate has been installed, FiddlerCore/Fiddler create new certificates for each site that is connected to with HTTPS. You can end up with quite a few temporary certificates in your certificate store. To uninstall, you can either use Fiddler and simply uncheck the Decrypt HTTPS traffic option followed by the remove Fiddler certificates button, or you can use FiddlerCore's CertMaker.removeFiddlerGeneratedCerts(), which removes the root cert and any of the intermediary certificates Fiddler created. Keep in mind that when you uninstall, you uninstall the certificate for both FiddlerCore and Fiddler, so use UninstallCertificate() with care and realize that you might affect the Fiddler application's operation by doing so as well.

When to check for an installed Certificate

Note that the check to see if the root certificate exists is pretty fast, while the actual process of installing the certificate is a relatively slow operation that even on a fast machine takes a few seconds. Further, the trust operation pops up a message box, so you probably don't want to install the certificate repeatedly. Since the check for the root certificate is fast, you can easily put a call to InstallCertificate() in any capture startup code – in which case the certificate installation only triggers when a certificate is in fact not installed. Personally I like to make certificate installation explicit – just like Fiddler does – so in WebSurge I use a small drop-down option on the menu to install or uninstall the SSL certificate. This code calls the InstallCertificate and UninstallCertificate functions respectively; the experience with this is similar to what you get in Fiddler, with the extra dialog box popping up to prompt confirmation for installation of the root certificate.
Once the cert is installed you can capture SSL requests. There's a gotcha, however…

Gotcha: FiddlerCore Certificates don't stick by Default

When I originally tried to use the Fiddler certificate installation I ran into an odd problem. I was able to install the certificate, and immediately after installation I could capture HTTPS requests. Then I would exit the application, come back in, try the same HTTPS capture again, and it would fail due to a missing certificate. CertMaker.rootCertExists() would return false after every restart, and if I re-installed the certificate, a new one would get added to the certificate store, resulting in a bunch of duplicated root certificates with different keys. What the heck?

CertMaker and BcMakeCert create non-sticky Certificates

It turns out that FiddlerCore by default uses different components from what the full version of Fiddler uses. Fiddler uses a Windows utility called MakeCert.exe to create the Fiddler root certificate. FiddlerCore, however, installs the CertMaker.dll and BCMakeCert.dll assemblies, which use a different crypto library (Bouncy Castle) for certificate creation than MakeCert.exe, which uses the Windows Crypto API. The assemblies provide support for non-Windows operation for Fiddler under Mono, as well as support for some non-Windows certificate platforms like iOS and Android for decryption. The bottom line is that the FiddlerCore-provided Bouncy Castle assemblies are not sticky by default, because the certificates created with them are not cached as they are in Fiddler proper.

To get certificates to 'stick' you have to explicitly cache them in Fiddler's internal preferences. A cache-aware version of InstallCertificate() looks something like this:

public static bool InstallCertificate()
{
    if (!CertMaker.rootCertExists())
    {
        if (!CertMaker.createRootCert())
            return false;
        if (!CertMaker.trustRootCert())
            return false;

        App.Configuration.UrlCapture.Cert =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.cert", null);
        App.Configuration.UrlCapture.Key =
            FiddlerApplication.Prefs.GetStringPref("fiddler.certmaker.bc.key", null);
    }
    return true;
}

public static bool UninstallCertificate()
{
    if (CertMaker.rootCertExists())
    {
        if (!CertMaker.removeFiddlerGeneratedCerts(true))
            return false;
    }
    App.Configuration.UrlCapture.Cert = null;
    App.Configuration.UrlCapture.Key = null;
    return true;
}

In this code I store the Fiddler cert and private key in application configuration settings that are persisted with the rest of the application settings (the App.Configuration.UrlCapture object); these settings automatically persist when WebSurge is shut down. The values are read out of Fiddler's internal preferences store, which is set after a new certificate has been created. Likewise, I clear out the configuration settings when the certificate is uninstalled. In order for these settings to be used, you also have to load the configuration settings into the Fiddler preferences *before* a call to rootCertExists() is made.
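FiddlerCore doesn't care where you stash the values – App.Configuration.UrlCapture is WebSurge-specific. As a rough sketch, the relevant part of such a settings class might look like this (everything beyond the Cert and Key properties is assumed):

// Hypothetical sketch of the persisted capture settings object -
// only the Cert and Key properties come from the code above; how they
// get serialized to disk is up to your own configuration layer.
public class UrlCaptureConfiguration
{
    // Fiddler root certificate as handed back by FiddlerApplication.Prefs
    // under the "fiddler.certmaker.bc.cert" key
    public string Cert { get; set; }

    // Matching private key from the "fiddler.certmaker.bc.key" preference
    public string Key { get; set; }

    // ...other capture settings persisted with the application...
}

However the values are persisted, they have to make the round trip back into Fiddler's preference store before rootCertExists() is called.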
I do this in the capture form's constructor:

public FiddlerCapture(StressTestForm form)
{
    InitializeComponent();

    CaptureConfiguration = App.Configuration.UrlCapture;
    MainForm = form;

    if (!string.IsNullOrEmpty(App.Configuration.UrlCapture.Cert))
    {
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.key",
            App.Configuration.UrlCapture.Key);
        FiddlerApplication.Prefs.SetStringPref("fiddler.certmaker.bc.cert",
            App.Configuration.UrlCapture.Cert);
    }
}

This is kind of a drag to do and isn't documented anywhere that I could find, so hopefully this will save you some grief if you want to work with the stock certificate logic that installs with FiddlerCore.

MakeCert provides sticky Certificates and the same functionality as Fiddler

There's actually an easier way. If you want to skip the Fiddler preference configuration code above, you can choose to distribute MakeCert.exe instead of CertMaker.dll and BCMakeCert.dll. When you use MakeCert.exe, the certificate settings are stored in Windows, so they are available without any custom configuration inside of your application. It's easier to integrate, and as long as you run on Windows and don't need to support iOS or Android devices, it's simply easier to deal with.

To integrate it into your project, remove the reference to CertMaker.dll (and the BCMakeCert.dll assembly) and instead copy MakeCert.exe into your output folder. To make sure MakeCert.exe gets pushed out, include MakeCert.exe in your project and set the Build Action to None and Copy to Output Directory to Copy if newer. The CertMaker.dll reference disappears from the project, and the CertMaker.dll and BCMakeCert.dll files no longer ship on disk. Keep in mind that these DLLs are resources of the FiddlerCore NuGet package, so updating the package may end up pushing those files back into your project. Once MakeCert.exe is distributed, FiddlerCore checks for it first before using the assemblies, so as long as MakeCert.exe exists it'll be used for certificate creation (at least on Windows).

Summary

FiddlerCore is a pretty sweet tool, and it's absolutely awesome that we get to plug most of the functionality of Fiddler right into our own applications. A few years back I tried to build this sort of functionality myself for an app and ended up giving up, because it's a big job to get HTTP right – especially if you need to support SSL. FiddlerCore now provides that functionality as a turnkey solution that can be plugged into your own apps easily.

The only downside is FiddlerCore's documentation for more advanced features like certificate installation, which is pretty sketchy. While for the most part FiddlerCore's feature set is easy to work with without any documentation, advanced features are often not intuitive to glean by just using IntelliSense or the FiddlerCore help file reference (which is not terribly useful). While Eric Lawrence is very responsive on his forum and on Twitter, there simply isn't much useful documentation on Fiddler/FiddlerCore available online. If you run into trouble, the forum is probably the first place to look, and then ask a question if you can't find the answer. The best documentation you can find is Eric's Fiddler Book, which covers a ton of functionality of Fiddler and FiddlerCore. The book is a great reference to Fiddler's feature set as well as providing great insights into the HTTP protocol.
The second half of the book, which gets into the innards of HTTP, is an excellent read for anybody who wants to know more about some of the more arcane aspects and special behaviors of HTTP – it's well worth the read. While the book has tons of information in a very readable format, it's unfortunately not a great reference: it's hard to find things in the book, and because it's not available online you can't electronically search for the great content in it. But it's hard to complain about any of this given the obvious effort and love that's gone into this awesome product for all of these years. A mighty big thanks to Eric Lawrence for having created this useful tool that so many of us use all the time, and also to Telerik for picking up Fiddler/FiddlerCore and providing Eric the resources to support and improve this wonderful tool full time while keeping it free for all. Kudos!

Resources

- FiddlerCore Download
- FiddlerCore NuGet
- Fiddler Capture Sample Form
- Fiddler Capture Form in West Wind WebSurge (GitHub)
- Eric Lawrence's Fiddler Book

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in .NET and HTTP.


  • Issue 15: Oracle Exadata Marketing Campaigns

    - by rituchhibber
     PARTNER FOCUS: Oracle Exadata Marketing Campaign

    Steve McNickle, VP Europe, cVidya

    Steve McNickle is VP Europe for cVidya, an innovative provider of revenue intelligence solutions for telecom, media and entertainment service providers including AT&T, BT, Deutsche Telekom and Vodafone. The company's product portfolio helps operators and service providers maximise margins, improve customer experience and optimise ecosystem relationships through revenue assurance, fraud and security management, sales performance management, pricing analytics, and inter-carrier services. cVidya has partnered with Oracle for more than a decade.

    Please could you tell us a little about cVidya's partnering history with Oracle, and expand on your Oracle Exastack accreditations?

    "cVidya was established just over ten years ago and we've had a strong relationship with Oracle almost since the very beginning. Through our revenue intelligence work with some of the world's largest service providers we collect tremendous amounts of information, amounting to billions of records per day. We help our clients to collect, store and analyse that data to ensure that their end customers are getting the best levels of service, are billed correctly, and are happy that they are on the correct price plan. We have been an Oracle Gold level partner for seven years, and crucially just two months ago we were also accredited as Oracle Exastack Optimized for MoneyMap, our core revenue assurance solution. Very soon we also expect DRMap, our data retention solution, to be Oracle Exastack Optimized."

    What unique capabilities and customer benefits does Oracle Exastack add to your applications?

    "Oracle Exastack enables us to deliver radical benefits to our customers. A typical mobile operator in the UK might handle between 500 million and two billion call data record details daily. Each transaction needs to be validated, billed correctly and fraud checked. Because of the enormous volumes involved, our clients demand scalable infrastructure that allows them to efficiently acquire, store and process all that data within controlled cost, space and environmental constraints. We have proved that the Oracle Exadata system can process data up to seven times faster and load it as much as 20 times faster than other standard best-of-breed server approaches. With the Oracle Exadata Database Machine they can reduce their datacentre equipment from, say, the six or seven cabinets that they needed in the past down to just one. This dramatic simplification delivers incredible value to the customer by cutting down enormously on all of their significant cost, space, energy, cooling and maintenance overheads."

    "The Oracle Exastack Program has given our clients the ability to switch their focus from reactive to proactive. Traditionally they may have spent 80 percent of their day processing, and just 20 percent enabling end customers to see advanced analytics and avoiding issues before they occur. With our solutions and Oracle Exadata they can now switch that balance around entirely, resulting not only in reduced revenue leakage, but a far higher focus on proactive leakage prevention."

    How has the Oracle Exastack Program transformed your customer business?
"We can already see the impact. Oracle solutions allow our delivery teams to achieve successful deployments, happy customers and self-satisfaction, and the power of Oracle's Exa solutions is easy to measure in terms of their transformational ability. We gained our first sale into a major European telco by demonstrating the major performance gains that would transform their business. Clients can measure the ease of organisational change, the early prevention of business issues, the reduction in manpower required to provide protection and coverage across all their products and services, plus of course end customer satisfaction. If customers know that that service is provided accurately and that their bills are calculated correctly, then over time this satisfaction can be attributed to revenue intelligence and the underlying systems which provide it. Combine this with the further integration we have with the other layers of the Oracle stack, including the telecommunications offerings such as NCC, OCDM and BRM, and the result is even greater customer value—not to mention the increased speed to market and the reduced project risk." What does the Oracle Exastack community bring to cVidya, both in terms of general benefits, and also tangible new opportunities and partnerships? "A great deal. We have participated in the Oracle Exastack community heavily over the past year, and have had lots of meetings with Oracle and our peers around the globe. It brings us into contact with like-minded, innovative partners, who like us are not happy to just stand still and want to take fresh technology to their customer base in order to gain enhanced value. We identified three new partnerships in each of two recent meetings, and hope these will open up new opportunities, not only in areas that exactly match where we operate today, but also in some new associative areas that will expand our reach into new business sectors. Notably, thanks to the Exastack community we were invited on stage at last year's Oracle OpenWorld conference. Appearing so publically with Oracle senior VP Judson Althoff elevated awareness and visibility of cVidya and has enabled us to participate in a number of other events with Oracle over the past eight months. We've been involved in speaking opportunities, forums and exhibitions, providing us with invaluable opportunities that we wouldn't otherwise have got close to." How has Exastack differentiated cVidya as an ISV, and helped you to evolve your business to the next level? "When we are selling to our core customer base of Tier 1 telecommunications providers, we know that they want more than just software. They want an enduring partnership that will last many years, they want innovation, and a forward thinking partner who knows how to guide them on where they need to be to meet market demand three, five or seven years down the line. Membership of respected global bodies, such as the Telemanagement Forum enables us to lead standard adherence in our area of business, giving us a lot of credibility, but Oracle is also involved in this forum with its own telecommunications portfolio, strengthening our position still further. When we approach CEOs, CTOs and CIOs at the very largest Tier 1 operators, not only can we easily show them that our technology is fantastic, we can also talk about our strong partnership with Oracle, and our joint embracing of today's standards and tomorrow's innovation." Where would you like cVidya to be in one year's time? 
"We want to get all of our relevant products Oracle Exastack Optimized. Our MoneyMap Revenue Assurance solution is already Exastack Optimised, our DRMAP Data Retention Solution should be Exastack Optimised within the next month, and our FraudView Fraud Management solution within the next two to three months. We'd then like to extend our Oracle accreditation out to include other members of the Oracle Engineered Systems family. We are moving into the 'Big Data' space, and so we're obviously very keen to work closely with Oracle to conduct pilots, map new technologies onto Oracle Big Data platforms, and embrace and measure the benefits of other Oracle systems, namely Oracle Exalogic Elastic Cloud, the Oracle Exalytics In-Memory Machine and the Oracle SPARC SuperCluster. We would also like to examine how the Oracle Database Appliance might benefit our Tier 2 service provider customers. Finally, we'd also like to continue working with the Oracle Communications Global Business Unit (CGBU), furthering our integration with Oracle billing products so that we are able to quickly deploy fraud solutions into Oracle's Engineered System stack, give operational benefits to our clients that are pre-integrated, more cost-effective, and can be rapidly deployed rapidly and producing benefits in three months, not nine months." Chris Baker ,Senior Vice President, Oracle Worldwide ISV-OEM-Java Sales Chris Baker is the Global Head of ISV/OEM Sales responsible for working with ISV/OEM partners to maximise Oracle's business through those partners, whilst maximising those partners' business to their end users. Chris works with partners, customers, innovators, investors and employees to develop innovative business solutions using Oracle products, services and skills. Firstly, could you please explain Oracle's current strategy for ISV partners, globally and in EMEA? "Oracle customers use independent software vendor (ISV) applications to run their businesses. They use them to generate revenue and to fulfil obligations to their own customers. Our strategy is very straight-forward. We want all of our ISV partners and OEMs to concentrate on the things that they do the best – building applications to meet the unique industry and functional requirements of their customer. We want to ensure that we deliver a best in class application platform so the ISV is free to concentrate their effort on their application functionality and user experience We invest over four billion dollars in research and development every year, and we want our ISVs to benefit from all of that investment in operating systems, virtualisation, databases, middleware, engineered systems, and other hardware. By doing this, we help them to reduce their costs, gain more consistency and agility for quicker implementations, and also rapidly differentiate themselves from other application vendors. It's all about simplification because we believe that around 25 to 30 percent of the development costs incurred by many ISVs are caused by customising infrastructure and have nothing to do with their applications. Our strategy is to enable our ISV partners to standardise their application platform using engineered architecture, so they can write once to the Oracle stack and deploy seamlessly in the cloud, on-premise, or in hybrid deployments. 
    It's really important that the architecture is the same, in order to keep cost and time overheads to a minimum, so we provide standardisation and an environment that enables our ISVs to concentrate on the core business that makes them the most money and brings them success."

    How do you believe this strategy is helping the ISVs to work hand-in-hand with Oracle to ensure that end customers get the industry-leading solutions that they need?

    "We work with our ISVs not just to help them be successful, but also to help them market themselves. We have something called the 'Oracle Exastack Ready Program', which enables ISVs to publicise themselves as 'Ready' to run the core software platforms that run on Oracle's engineered systems, including Exadata and Exalogic. So, for example, they can become 'Database Ready', which means that they use the latest version of Oracle Database and therefore can run their application without modification on Exadata or the Oracle Database Appliance. Alternatively, they can become WebLogic Ready, Oracle Linux Ready or Oracle Solaris Ready, which means they run on the latest release and therefore can run their application, with no new porting work, on Oracle Exalogic. Those 'Ready' logos are important in helping ISVs advertise to their customers that they are using the latest technologies which have been fully tested.

    We now also have Exadata Ready and Exalogic Ready programmes which allow ISVs to promote the certification of their applications on these platforms. This highlights these partners to Oracle customers as having solutions that run fluently on the Oracle Exadata Database Machine, the Oracle Exalogic Elastic Cloud or one of our other engineered systems. This makes it easy for customers to identify solutions and provides ISVs with an avenue to connect with Oracle customers who are rapidly adopting engineered systems.

    We have also taken this programme to the next level in the shape of 'Oracle Exastack Optimized' for partners whose applications run best on the Oracle stack and who have invested the time to fully optimise application performance. We ensure that Exastack Optimized partner status is promoted and supported by press releases, and we help our ISVs go to market and differentiate themselves through the use of our technology and the standardisation it delivers. To date we have had several hundred organisations successfully work through our Exastack Optimized programme."

    How does Oracle's strategy of offering pre-integrated open platform software and hardware allow ISVs to bring their products to market more quickly?

    "One of the problems for many ISVs is that they have to think very carefully about the technology on which their solutions will be deployed, particularly in cloud or hosted environments. They have to think hard about how they secure these environments, whether the concern is, for example, middleware, identity management, or securing personal data. If they don't use the technology that we build into our products to help them fulfil these roles, they then have to build it themselves. This takes time, requires testing, and must be maintained. By taking advantage of our technology, partners will now know that they have a standard platform. They will know that they can confidently talk about implementation being the same every time they do it. Very large ISV applications could once take a year or two to be implemented in an on-premise environment.
    But it wasn't just the configuration of the application that took the time, it was actually the infrastructure – the different hardware configurations, operating systems, and configurations of databases and middleware. Now we strongly believe that it's all about standardisation and repeatability. It's about making sure that our partners can do it once and are then able to roll it out many different times using standard componentry."

    What actions would you recommend for existing ISV partners that are looking to do more business with Oracle and its customer base, not only to maximise benefits, but also to maximise partner relationships?

    "My team, around the world and in the EMEA region, is available and ready to talk to any of our ISVs and to explore the possibilities together. We run programmes like 'Excite' and 'Insight' to help us to understand how we can help ISVs with architecture and widen their environments. But we also want to work with, and look at, new opportunities – for example, the Machine-to-Machine (M2M) market or 'The Internet of Things'. Over the next few years, many millions, indeed billions of devices will be collecting massive amounts of data and communicating it back to the central systems where ISVs will be running their applications. The only way that our partners will be able to provide a single-vendor 'end-to-end' solution is to use Oracle integrated systems at the back end and Java on the 'smart' devices collecting the data – a complete solution from device to data centre. So there are huge opportunities to work closely with our ISVs, using Oracle's complete M2M platform, to provide the infrastructure that enables them to extract maximum value from the data collected. If any partners don't know where to start or who to contact, then they can contact me directly at [email protected] or indeed any of our teams across the EMEA region. We want to work with ISVs to help them to be as successful as they possibly can through simplification and speed to market, and we also want all of the top ISVs in the world to be based on Oracle."

    What opportunities are immediately opened to new ISV partners joining the OPN?

    "As you know, OPN is very, very important. New members will discover a huge amount of content that instantly becomes accessible to them. They can access a wealth of no-cost training and enablement materials to build their expertise in Oracle technology. They can download Oracle software and use it for development projects. They can help themselves become more competent by becoming part of a true community and uncovering new opportunities by working with Oracle and their peers in the Oracle Partner Network. As well as publishing massive amounts of information on OPN, we also hold our global Oracle OpenWorld event, at which partners play a huge role. This takes place at the end of September and the beginning of October in San Francisco. Attending ISV partners have an unrivalled opportunity to contribute to elements such as the OpenWorld / OPN Exchange, at which they can talk to other partners and really begin thinking about how they can move their businesses on and play key roles in a very large ecosystem which revolves around technology and standardisation."

    Finally, are there any other messages that you would like to share with the Oracle ISV community?

    "The crucial message that I always like to reinforce is architecture, architecture and architecture!
    The key opportunities that ISVs have today revolve around standardising their architectures so that they can confidently think: 'Will I be able to do exactly the same thing whenever a customer is looking to deploy on-premise, hosted or in the cloud?' The right architecture is critical to being competitive and to really start changing the game. We want to help our ISV partners to do just that; to establish standard architecture and to seize the opportunities it opens up for them. New market opportunities like M2M are enormous – just look at how many devices are all around you right now. We can help our partners to interface with these devices more effectively while thinking about their entire ecosystem, rather than just the piece that they have traditionally focused upon. With standardised architecture, we can help people dramatically improve their speed, reach, agility and delivery of enhanced customer satisfaction and value, all the way from the Java side to their centralised systems. All Oracle ISV partners must take advantage of these opportunities, which is why Oracle will continue to invest in and support them."

    Gergely Strbik is Oracle Hardware and Software Product Manager for Avnet in Hungary. Avnet Technology Solutions is an Oracle Value Added Distributor focused on the development of the existing Oracle channel. This includes the recruitment and enablement of Oracle partners as well as driving deeper adoption of Oracle's technology and application products within the IT channel.

    "The main business benefits of ODA for our customers and partners are scalability, flexibility, a great price point for the high performance delivered, and the easily configurable embedded Linux operating system. People welcome a lower point of entry and the ability to grow capacity on demand as their business expands."

    "Marketing and selling the ODA requires another way of thinking because it is an appliance. We have to transform the ways in which our partners and customers think, from buying hardware and software independently to buying complete solutions. Successful early adopters and satisfied customer reactions will certainly help us to sell the ODA. We will have more experience with the product after the first deliveries and installations – end users need to see the power and benefits for themselves."

    "Our typical ODA customers will be those looking for complete solutions from a single reseller partner who is also able to manage the appliance. They will have enjoyed using Oracle Database but now want a new product that is able to unlock new levels of performance. A higher proportion of potential customers will come from our existing Oracle base, with around 30% from new business, but we intend to evangelise the ODA on the market to see how we can change this balance as all our customers adjust to the concept of 'Hardware and Software, Engineered to Work Together'."


  • DVR_PLAYER.exe reports Remote200.ocx is missing?

    - by Kalamane
    I have a program called DVR_PLAYER that is downloaded from a home security camera web interface. The web interface saves surveillance footage in the form of proprietary .drv files. I am unable to use the program to view the files on the CD on any machine other than the original machine I downloaded the files from. Every time I try to open it, it says, "Remote200.ocx not installed or it couldn't be installed. Please check user privilege." I need other machines to be able to open and view the footage I've downloaded using this program so that I can hand it in to the local police. Any ideas?


  • Why is Adobe Flash Player downloaded as a ".dmg.mdlp" file?

    - by dpddt
    When I download the current Adobe Flash Player installer from the Adobe website using Safari 6.0.1 under OS X 10.8.2, I end up with a file named 'install_flash_player_osx.dmg.mdlp' in my downloads folder. I am curious as to why the .mdlp extension is being added to the disk image containing the Flash Player installer, which has always ended with the .dmg extension in the past. The only program which uses the .mdlp extension that I am aware of is MATLAB; MATLAB is installed on this machine and it is the program the OS would like to use to open the file. I have not seen OS X, or any component thereof, replace or append file extensions in the past, and I am able to download .dmg files from other websites without this phenomenon occurring. Note that I am not interested in suggestions regarding the opening of the file, but rather an explanation as to why the .mdlp extension is being applied in the first place, whether it be by the local machine or Adobe.

