Search Results

Search found 46749 results on 1870 pages for 'system preferences'.


  • Problems with scrolling a Java JTextArea

    - by Jonathan
    All, I am running into an issue using JTextArea and JScrollPane. For some reason the scroll pane appears not to recognize the last line in the document, and will only scroll down to the line before it. The scroll bar does not even become draggable until the document holds two more lines than the text area can display (it should happen as soon as it holds one more). Has anyone run into this before? What would be a good solution? I want to avoid having to add an extra 'blank' line to the end of the document, which I would have to remove every time I add a new line. Here is how I instantiate the TextArea and ScrollPane:

        JFrame frame = new JFrame("Java Chat Program");
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        Container pane = frame.getContentPane();
        if (!(pane.getLayout() instanceof BorderLayout)) {
            System.err.println("Error: UI Container does not implement BorderLayout.");
            System.exit(-1);
        }
        textArea = new JTextArea();
        textArea.setPreferredSize(new Dimension(500, 100));
        textArea.setEditable(false);
        textArea.setLineWrap(true);
        textArea.setWrapStyleWord(true);
        JScrollPane scroller = new JScrollPane(textArea);
        scroller.setVerticalScrollBarPolicy(ScrollPaneConstants.VERTICAL_SCROLLBAR_ALWAYS);
        pane.add(scroller, BorderLayout.CENTER);

    Here is the method I use to add a new line to textArea:

        public void println(String a) {
            textArea.append(" " + a + "\n");
            textArea.setCaretPosition(textArea.getDocument().getLength());
        }

    Thanks for your help, Jonathan

    EDIT: Also, as a side note, with the current code I have to scroll down manually. I assumed that setCaretPosition(doc.getLength()) in the println(line) method would automatically scroll the view to the bottom after a line is appended... Should that be the case, or do I need to do something differently?

    Read the article

  • SubSonic 3, MySQL: won't update record

    - by Warspawn
        [WebMethod]
        public string GetAuthToken(string username, string password)
        {
            var db = new LogicDB();
            //var results = from u in db.Users
            //              where u.Username == username && u.Password == password
            //              select u;
            User u = db.Select
                       .From<User>()
                       .Where(UsersTable.UsernameColumn).IsEqualTo(username)
                       .And(UsersTable.PasswordColumn).IsEqualTo(password)
                       .ExecuteSingle<User>();

            if (u == null)
            {
                return "{'success': false, 'reason': 'Invalid username and/or password.'}";
            }
            else
            {
                // really there should only be one match...
                Guid code = Guid.NewGuid();
                u.Securitycode = code.ToString();
                u.Securityexp = System.DateTime.Now.AddHours(24);
                //u.Save(db.Provider);
                return "{'id':'" + u.Id.ToString() + "', 'code':'" + code.ToString() + "', 'exp':'" + u.Securityexp.ToString() + "'}"
                       + "\n\n<br/><br/>" + u.GetDirtyColumns().ToArray().ToString();
            }
        }

    When I run that with u.Save(db.Provider) uncommented, I keep getting:

        System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary.

    It happens even with just u.Save(), and also when I fetch the record with the commented-out LINQ query above instead.
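
    Not an answer to the Save() failure, but two things worth noting while debugging it: GetDirtyColumns().ToArray().ToString() only prints the array's type name, never its contents, and the hand-concatenated JSON is fragile. A hedged sketch of building the success response with JavaScriptSerializer from System.Web.Extensions instead, assuming the same User fields as above:

        using System.Web.Script.Serialization;

        // Serializes an anonymous object to JSON with correct quoting and escaping.
        var serializer = new JavaScriptSerializer();
        string json = serializer.Serialize(new
        {
            id = u.Id,
            code = code.ToString(),
            exp = u.Securityexp
        });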

    Read the article

  • Understanding ItemsSource and DataContext in a DataGrid

    - by Ben McCormack
    I'm trying to understand how the ItemsSource and DataContext properties work in a Silverlight Toolkit DataGrid. I'm currently working with dummy data and trying to get the data in the DataGrid to update when the value of a combo box changes. My MainPage.xaml.vb file currently looks like this:

        Partial Public Class MainPage
            Inherits UserControl

            Private IssueSummaryList As List(Of IssueSummary)

            Public Sub New()
                GetDummyIssueSummary("Day")
                InitializeComponent()
                dgIssueSummary.ItemsSource = IssueSummaryList
                'dgIssueSummary.DataContext = IssueSummaryList
            End Sub

            Private Sub GetDummyIssueSummary(ByVal timeInterval As String)
                Dim lst As New List(Of IssueSummary)()
                'Generate dummy data for lst
                IssueSummaryList = lst
            End Sub

            Private Sub ComboBox_SelectionChanged(ByVal sender As System.Object, ByVal e As System.Windows.Controls.SelectionChangedEventArgs)
                Dim cboBox As ComboBox = CType(sender, ComboBox)
                Dim cboBoxItem As ComboBoxItem = CType(cboBox.SelectedValue, ComboBoxItem)
                GetDummyIssueSummary(cboBoxItem.Content.ToString())
            End Sub
        End Class

    My XAML for the DataGrid currently looks like this:

        <sdk:DataGrid x:Name="dgIssueSummary" AutoGenerateColumns="False" >
            <sdk:DataGrid.Columns>
                <sdk:DataGridTextColumn Binding="{Binding ProblemType}" Header="Problem Type"/>
                <sdk:DataGridTextColumn Binding="{Binding Count}" Header="Count"/>
            </sdk:DataGrid.Columns>
        </sdk:DataGrid>

    The problem is that if I set the DataGrid's ItemsSource property to IssueSummaryList, it displays the data when it loads, but it won't update when the underlying IssueSummaryList collection changes. If I set the grid's DataContext to IssueSummaryList, no data is displayed at all. I must be misunderstanding how ItemsSource and DataContext are supposed to work, because I expected one of those properties to "just work" when I assigned a List object to it. What do I need to understand and change in my code so that the grid updates as the data in the List changes?
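
    For reference, a hedged sketch of the usual pattern, shown in C# for brevity: a plain List(Of T) raises no change notifications, which matches the behavior described, while an ObservableCollection bound once and then mutated in place lets the grid pick up adds and removes automatically. GetDummyData below is a hypothetical variant of GetDummyIssueSummary that returns the generated items instead of assigning a field:

        using System.Collections.ObjectModel;

        // Set once; the DataGrid subscribes to the collection's change events.
        var issueSummaryList = new ObservableCollection<IssueSummary>();
        dgIssueSummary.ItemsSource = issueSummaryList;

        // On combo-box change: refill the SAME collection instead of replacing it.
        issueSummaryList.Clear();
        foreach (IssueSummary item in GetDummyData(timeInterval))
            issueSummaryList.Add(item);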

    Read the article

  • Problem converting a byte array into a DataTable

    - by kranthi
    Hi,

    In my aspx page I have an HTML input of type file which allows the user to browse for a spreadsheet. Once the user chooses the file to upload, I want to read the content of the spreadsheet and store it in a MySQL database table. I am using the following code to read the content of the uploaded file and convert it into a DataTable in order to insert it into the database table:

        if (filMyFile.PostedFile != null)
        {
            // Get a reference to PostedFile object
            HttpPostedFile myFile = filMyFile.PostedFile;

            // Get size of uploaded file
            int nFileLen = myFile.ContentLength;

            // make sure the size of the file is > 0
            if (nFileLen > 0)
            {
                // Allocate a buffer for reading of the file
                byte[] myData = new byte[nFileLen];

                // Read uploaded file from the Stream
                myFile.InputStream.Read(myData, 0, nFileLen);

                DataTable dt = new DataTable();
                MemoryStream st = new MemoryStream(myData);
                st.Position = 0;
                System.Runtime.Serialization.IFormatter formatter =
                    new System.Runtime.Serialization.Formatters.Binary.BinaryFormatter();
                dt = (DataTable)formatter.Deserialize(st);
            }
        }

    But I get the following error when I try to deserialize the byte array into a DataTable:

        Binary stream '0' does not contain a valid BinaryHeader. Possible causes are invalid stream or object version change between serialization and deserialization.

    Could someone please tell me what I am doing wrong? I've also tried converting the byte array into a string, converting the string back to a byte array, and converting that into a DataTable; it throws the same error. Thanks.
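
    For context, BinaryFormatter can only read back streams that BinaryFormatter itself produced; raw spreadsheet bytes will never deserialize into a DataTable, so the file has to be parsed by something Excel-aware. A hedged sketch of the common OleDb route, assuming an .xls file, the Jet 4.0 provider, and a worksheet named Sheet1 (all assumptions), and noting that Jet needs a file on disk rather than a stream:

        using System.Data;
        using System.Data.OleDb;

        // Hypothetical temp path; Jet cannot read from the upload stream directly.
        string path = Server.MapPath("~/App_Data/upload.xls");
        filMyFile.PostedFile.SaveAs(path);

        string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + path +
                         ";Extended Properties=\"Excel 8.0;HDR=Yes\"";

        DataTable dt = new DataTable();
        using (var conn = new OleDbConnection(connStr))
        using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
        {
            adapter.Fill(dt);  // each spreadsheet row becomes a DataRow
        }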

    Read the article

  • EJB3 JNDI Lookup Failure in JEE application client

    - by Hank
    I'm trying to access an EJB3 from a JEE client-application, but keep getting nothing but lookup failures. My JEE Application 'CoreServer' is exposing a number of beans with remote interfaces. I have no problem accessing them from a Web Application deployed on the same Glassfish v3.0.1. Now I'm trying to access it from a client-application:

        public class Main {
            public static void main(String[] args) {
                CampaignControllerRemote bean = null;
                try {
                    InitialContext ctx = new InitialContext();
                    bean = (CampaignControllerRemote) ctx.lookup("java:global/CoreServer/CampaignController");
                } catch (Exception e) {
                    System.out.println(e.getMessage());
                }
                if (bean != null) {
                    Campaign campaign = bean.get(361);
                    if (campaign != null) {
                        System.out.println("Got " + campaign);
                    }
                }
            }
        }

    When I deploy it to Glassfish and run it from the appclient, I get this error:

        Lookup failed for 'java:global/CoreServer/CampaignController' in SerialContext targetHost=localhost,targetPort=3700,orb'sInitialHost=localhost,orb'sInitialPort=3700

    However, that's exactly the same JNDI-name I use when I lookup the bean from the WebApplication (via SessionContext, not InitialContext - does that matter?). Also, when I deploy 'CoreServer', Glassfish reports:

        Portable JNDI names for EJB CampaignController : [java:global/CoreServer/CampaignController!mvs.api.CampaignControllerRemote, java:global/CoreServer/CampaignController]
        Glassfish-specific (Non-portable) JNDI names for EJB CampaignController : [mvs.api.CampaignControllerRemote, mvs.api.CampaignControllerRemote#mvs.api.CampaignControllerRemote]

    I tried all four names, none worked. Is the appclient unable to access beans with (only) Remote interfaces?

    Read the article

  • C# CreatePipe() -> Protected memory error

    - by M. Dimitri
    Hi all,

    I'm trying to create a pipe using C#. The code is quite simple but I get an error saying "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Here is the COMPLETE code of my form:

        public partial class Form1 : Form
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            public static extern bool CreatePipe(out SafeFileHandle hReadPipe,
                                                 out SafeFileHandle hWritePipe,
                                                 SECURITY_ATTRIBUTES lpPipeAttributes,
                                                 int nSize);

            [StructLayout(LayoutKind.Sequential)]
            public struct SECURITY_ATTRIBUTES
            {
                public DWORD nLength;
                public IntPtr lpSecurityDescriptor;
                public bool bInheritHandle;
            }

            public Form1()
            {
                InitializeComponent();
            }

            private void btCreate_Click(object sender, EventArgs e)
            {
                SECURITY_ATTRIBUTES sa = new SECURITY_ATTRIBUTES();
                sa.nLength = (DWORD)System.Runtime.InteropServices.Marshal.SizeOf(sa);
                sa.lpSecurityDescriptor = IntPtr.Zero;
                sa.bInheritHandle = true;

                SafeFileHandle hWrite = null;
                SafeFileHandle hRead = null;

                if (CreatePipe(out hRead, out hWrite, sa, 4096))
                {
                    MessageBox.Show("Pipe created !");
                }
                else
                    MessageBox.Show("Error : Pipe not created !");
            }
        }

    At the top I declare:

        using DWORD = System.UInt32;

    Thank you very much if someone can help.
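
    A hedged guess at the cause, with a sketch of the usual fix: the native CreatePipe expects a pointer to SECURITY_ATTRIBUTES, but the declaration above passes the struct by value, so the API reads whatever memory the first field happens to describe, which is consistent with the protected-memory error. Declaring the parameter as ref (or declaring SECURITY_ATTRIBUTES as a class) marshals a proper pointer:

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CreatePipe(out SafeFileHandle hReadPipe,
                                      out SafeFileHandle hWritePipe,
                                      ref SECURITY_ATTRIBUTES lpPipeAttributes,
                                      uint nSize);

        // The call site then passes the struct by ref:
        if (CreatePipe(out hRead, out hWrite, ref sa, 4096))
            MessageBox.Show("Pipe created !");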

    Read the article

  • WCF Mono - BasicHttpBinding with SSL

    - by TheNextman
    I'm trying to port an existing WCF client application to run on Linux under Mono. Right now I'm testing everything out, figuring out what works on Mono and what doesn't. The client makes a super simple call over basicHttpBinding. It works great, until I enable SSL (that is, specify BasicHttpSecurityMode.Transport in the binding):

    - Running on .NET in Windows, it works great.
    - Running on Mono 2.6 on Ubuntu 9.10, I get the following error:

        Exception in async operation: System.Net.WebException: Error getting response stream (Write: The authentication or decryption has failed.): SendFailure ---> System.IO.IOException: The authentication or decryption has failed. ---> Mono.Security.Protocol.Tls.TlsException: Invalid certificate received from server. Error code: 0xffffffff800b010a

    I've read the Mono security FAQ at http://www.mono-project.com/FAQ:_Security; however, the SSL certificate on the server is a purchased certificate issued by the Equifax Secure Certificate Authority, a root CA. I ran the TlsTest tool on the Ubuntu install against the .svc URL and there are no problems/errors. Also, I can hit the service fine in Firefox (no security warnings). What am I missing?

    Thanks in advance,
    Richard
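
    A sketch of the two standard remedies, on the assumption that the failure is Mono's empty certificate store (out of the box Mono trusts no root CAs, purchased certificate or not): import the Mozilla root list on the client machine with `mozroots --import --sync`, or, strictly for testing, bypass validation in code:

        using System.Net;
        using System.Net.Security;
        using System.Security.Cryptography.X509Certificates;

        // TEST ONLY: accepts every server certificate, valid or not.
        ServicePointManager.ServerCertificateValidationCallback =
            (object sender, X509Certificate certificate, X509Chain chain,
             SslPolicyErrors errors) => true;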

    Read the article

  • Does the SmtpClient class represent a POP3 client, or…?

    - by SourceC
    I assume that web controls (such as the PasswordRecovery control) use SmtpClient to send email messages. If so, does SmtpClient represent a POP3 client, or does SmtpClient forward email messages to a POP3 client?

    Do the attributes specified inside the <smtp> element in web.config map to the SmtpClient class?

        <system.net>
            <mailSettings>
                <smtp deliveryMethod="Network" ...></smtp>
            </mailSettings>
        </system.net>

    One of the possible values for the deliveryMethod attribute is Network, which indicates that email should be sent through the network to an SMTP server. In other words, this value says to send email to an SMTP server using the SMTP protocol?!

    For the PasswordRecovery control to be able to send email messages, we need to set basic properties in the <MailDefinition> subelement of the PasswordRecovery control. Thus I assume MailDefinition is used by controls to create an email message?!
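
    For what it's worth, a minimal sketch of what SmtpClient actually does: it only sends mail, over SMTP (POP3 is a retrieval protocol, and the framework ships no POP3 client), and a parameterless SmtpClient picks up its server settings from the <system.net>/<mailSettings> section shown above. The addresses are placeholders:

        using System.Net.Mail;

        var message = new MailMessage("from@example.com", "to@example.com",
                                      "Password reset", "Your new password is ...");
        var client = new SmtpClient();  // host/port/deliveryMethod come from web.config
        client.Send(message);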

    Read the article

  • ASP.NET Setting Culture with InitializeCulture

    - by Helen
    I have a website with three domains: .com, .de and .it. Each domain needs to default to the local language/culture of the country. I have created a base page and added an InitializeCulture override:

        Protected Overrides Sub InitializeCulture()
            Dim url As System.Uri = Request.Url
            Dim hostname As String = url.Host.ToString()
            Dim SelectedLanguage As String

            If HttpContext.Current.Profile("PreferredCulture").ToString Is Nothing Then
                Select Case hostname
                    Case "www.domain.de"
                        SelectedLanguage = "de"
                        Thread.CurrentThread.CurrentUICulture = New CultureInfo(SelectedLanguage)
                        Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(SelectedLanguage)
                    Case "www.domain.it"
                        SelectedLanguage = "it"
                        Thread.CurrentThread.CurrentUICulture = New CultureInfo(SelectedLanguage)
                        Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(SelectedLanguage)
                    Case Else
                        SelectedLanguage = "en"
                        Thread.CurrentThread.CurrentUICulture = New CultureInfo(SelectedLanguage)
                        Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(SelectedLanguage)
                End Select
            End If
        End Sub

    This is fine. The problem now occurs because we also want three language selection buttons on the home page so that the user can override the domain language. So in Default.aspx.vb we have three button events like this:

        Protected Sub langEnglish_Click(ByVal sender As Object, ByVal e As System.Web.UI.ImageClickEventArgs) Handles langEnglish.Click
            Dim SelectedLanguage As String = "en"

            'Save selected user language in profile
            HttpContext.Current.Profile.SetPropertyValue("PreferredCulture", SelectedLanguage)

            'Force re-initialization of the page to fire InitializeCulture()
            Context.Server.Transfer(Context.Request.Path)
        End Sub

    But of course InitializeCulture then overrides whatever button selection has been made. Is there any way that InitializeCulture can check whether a button click has occurred and, if so, skip the routine? Any advice would be greatly appreciated, thanks.
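
    A hedged sketch of the usual shape of the fix, in C# for brevity: use the saved preference when one exists and fall back to the host-based default otherwise. Note that the guard above is suspect on its own: .ToString can never return Nothing (it throws if the profile value is Nothing and returns a string otherwise), so the If cannot distinguish "no preference saved" from "preference saved". LanguageForHost is a hypothetical helper holding the Select Case logic:

        // Requires System.Globalization, System.Threading and System.Web.
        protected override void InitializeCulture()
        {
            string lang = HttpContext.Current.Profile["PreferredCulture"] as string;
            if (string.IsNullOrEmpty(lang))
                lang = LanguageForHost(Request.Url.Host);  // "de", "it" or "en"

            Thread.CurrentThread.CurrentUICulture = new CultureInfo(lang);
            Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(lang);
        }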

    Read the article

  • Dynamics CRM Customer Portal Accelerator Installation

    - by saturdayplace
    (I've posted this question on the codeplex forums too, but have yet to get a response.)

    I've got an on-premise installation of CRM and I'm trying to hook the portal to it. My connection string in web.config:

        <connectionStrings>
            <add name="Xrm" connectionString="Authentication Type=AD; Server=http://myserver:myport/MyOrgName; User ID=mydomain\crmwebuser; Password=thepassword" />
        </connectionStrings>

    And my membership provider:

        <membership defaultProvider="CustomCRMProvider">
            <providers>
                <add connectionStringName="Xrm" applicationName="/"
                     enablePasswordRetrieval="false" enablePasswordReset="true"
                     requiresQuestionAndAnswer="false" requiresUniqueEmail="true"
                     passwordFormat="Hashed" minRequiredPasswordLength="1"
                     minRequiredNonalphanumericCharacters="0"
                     name="CustomCRMProvider"
                     type="System.Web.Security.SqlMembershipProvider" />
            </providers>
        </membership>

    Now, I'm super new to MS style web development, so please help me if I'm missing something. In Visual Studio 2010, when I go to Project > ASP.NET Configuration, it launches the Web Site Administration Tool. When I click the Security tab there, I get the following error:

        There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store.
        The following message may help in diagnosing the problem: An error occurred while attempting to initialize a System.Data.SqlClient.SqlConnection object. The value that was provided for the connection string may be wrong, or it may contain an invalid syntax. Parameter name: connectionString

    I can't see what I'm doing wrong here. Does the user mydomain\crmwebuser need certain permissions in the SQL database, or somewhere else?

    Edit: On the home page of the Web Site Administration Tool, I have the following:

        Application: /
        Current User Name: MACHINENAME\USERACCOUNT

    which is obviously a different set of credentials than mydomain\crmwebuser. Is this part of the problem?

    Read the article

  • ASP.NET 2.0 app runs on Win 2003 in IIS 5 isolation mode but not in (default) IIS 6 mode

    - by Tex
    The app uses DllImport to call a legacy unmanaged DLL. Let's call this DLL Unmanaged.dll for the sake of this question. Unmanaged.dll has dependencies on 5 other legacy DLLs. All of the legacy DLLs are placed in the WebApp/bin/ directory of my ASP.NET application.

    When IIS is running in 5.0 isolation mode, the app works fine: calls to the legacy DLL are processed without error. When IIS is running in the default 6.0 mode, the app is able to initialize Unmanaged.dll (InitMe()), but dies during a later call to it (ProcessString()). I'm pulling my hair out here. I've moved the unmanaged DLLs to various locations, tried all kinds of security settings and searched long and hard for a solution. Help!

    Sample code:

        [DllImport("Unmanaged.dll", EntryPoint="initME", CharSet=System.Runtime.InteropServices.CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
        internal static extern int InitME();
        // Calls to InitME work fine - Unmanaged.dll initializes and writes some entries in a dedicated log file

        [DllImport("Unmanaged.dll", EntryPoint="processString", CharSet=System.Runtime.InteropServices.CharSet.Ansi, CallingConvention=CallingConvention.Cdecl)]
        internal static extern int ProcessString(string inStream, int inLen, StringBuilder outStream, ref int outLen, int maxLen);
        // Calls to ProcessString cause the app to crash, without leaving much of a trace that I can find so far
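
    One hedged thing to rule out, whatever the isolation mode: with signatures like ProcessString, the unmanaged side writes into the StringBuilder's buffer, so the caller must pre-size it to maxLen. An under-sized builder lets the native code overwrite neighboring memory, and the symptoms of that can easily differ between hosting processes. A sketch of a defensive call, where MaxLen is an assumed limit:

        const int MaxLen = 4096;  // assumption: whatever limit the DLL documents
        string input = "some text to process";
        int outLen = 0;
        var outStream = new StringBuilder(MaxLen);  // capacity must cover maxLen

        int rc = ProcessString(input, input.Length, outStream, ref outLen, MaxLen);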

    Read the article

  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin.

    I now have, under vendor/plugins/my_plugin/test/my_plugin_test.rb, a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the Ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    That should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?

    Read the article

  • Enable DOM Access for Silverlight Web Part in SharePoint 2010

    - by Bhaskar
    I am hosting a Silverlight 3.0 control on my SharePoint 2010 page. I am using the built-in SilverlightWebPart web part, where I have provided the path to the .xap file. It displays properly, but when I try to access System.Windows.Browser, it throws an error. My code is:

        public static string GetQueryString(string key)
        {
            try
            {
                var documentQueryString = (Dictionary<string, string>)System.Windows.Browser.HtmlPage.Document.QueryString;
                if (documentQueryString.ContainsKey(key))
                {
                    return documentQueryString[key].ToString();
                }
            }
            catch (Exception ex)
            {
                return ex.Message;
            }
            return string.Empty;
        }

    The error I am getting is:

        The DOM/scripting bridge is disabled.

    How do I enable this? I know that if I host this in an ASP.NET page, I can add the param <param name="enablehtmlaccess" value="true"/>. I have tried putting this web part in a Content Editor Web Part where I embedded the object tag to call the .xap file, and it works fine that way. I need to make it work using the built-in Silverlight web part.
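
    As a stopgap while sorting out the hosting side, a hedged sketch: HtmlPage.IsEnabled reports whether the host granted DOM access (the same switch the enablehtmlaccess param controls), so the control can degrade gracefully instead of throwing:

        using System.Collections.Generic;
        using System.Windows.Browser;

        public static string GetQueryString(string key)
        {
            // Bridge disabled: there is no DOM, hence no query string to read.
            if (!HtmlPage.IsEnabled)
                return string.Empty;

            IDictionary<string, string> qs = HtmlPage.Document.QueryString;
            return qs.ContainsKey(key) ? qs[key] : string.Empty;
        }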

    Read the article

  • SubSonic Access to App.config Connection Strings from a Referenced DLL in a PowerShell Script

    - by J Wynia
    I've got a DLL that contains SubSonic-generated and augmented code to access a data model. Actually, it is a merged DLL of that original assembly, SubSonic itself and a few other referenced DLLs into a single assembly called "PowershellDataAccess.dll". However, it should be noted that I've also tried this referencing each assembly individually in the script, and that doesn't work either.

    I am then attempting to use the objects and methods in that assembly. In this case, I'm accessing a class that uses SubSonic to load a bunch of records and creates a Lucene index from those records. The problem I'm running into is that the call into the SubSonic method to retrieve data from the database says it can't find the connection string. I'm pointing the AppDomain at the appropriate config file, which does contain that connection string, by name. Here's the script:

        $ScriptDir = Get-Location
        [System.IO.Directory]::SetCurrentDirectory($ScriptDir)
        [Reflection.Assembly]::LoadFrom("PowershellDataAccess.dll")
        [System.AppDomain]::CurrentDomain.SetData("APP_CONFIG_FILE", "$ScriptDir\App.config")
        $indexer = New-Object LuceneIndexingEngine.LuceneIndexGenerator
        $indexer.GeneratePageTemplateIndex("PageTemplateIndex");

    I went digging into SubSonic itself, and the following line in SubSonic is what's looking for the connection string and throwing the exception:

        ConfigurationManager.ConnectionStrings[connectionStringName]

    So, out of curiosity, I created an assembly with a single class that has a single property that just runs that one line to retrieve the connection string name. I created a ps1 that called that assembly and hit that property. That prototype can find the connection string just fine. Anyone have any idea why SubSonic's portion can't seem to see the connection strings?

    Read the article

  • How can one detect if a server/script is accessing their site through cURL/file_get_contents()? (excluding user-agents and IP addresses)

    - by navnav
    I've come across a question where a user is having difficulty accessing an image through a script (using cURL/file_get_contents()): How to save an image from url using PHP?

    The image link seems to return a 403 error when requested with file_get_contents(). But in cURL, a more detailed error is returned:

        You were denied access to the system. Turn off the engine or Surf Proxy, Fake IP if you really want to access. Proxy or not accepted from any Web tools Intrusion Prevention System. Binh Minh Online Data Services @ 2008 - 2012

    I also failed to access the same image after fiddling around with a cURL request myself. I tried changing the user-agent to the exact user-agent of my browser, which can successfully access the image. I've also tried the script on my personal local server, which (obviously) uses the same IP address as my browser... So as far as I know, user-agents and IP addresses can be ruled out. How else can someone detect that a script is performing the request?

    BTW, this is not for anything crazy. I'm just curious xD

    Read the article

  • Access denied when trying to read information about SharePoint groups

    - by strongopinions
    I am trying to get the membership of a group in WSS 3.0. I am doing this in an elevated permissions block. Here is the code:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(SPContext.Current.Site.ID))
            {
                using (SPWeb rootWeb = site.RootWeb)
                {
                    SPGroup gAdmins = rootWeb.SiteGroups["Admins"];
                }
            }
        });

    I get taken to the "access denied" SharePoint screen when I run this code. The group exists. The identity of the application pool for the web application is in the dbo role in the content database. The code works on my development server, but not on another server, which leads me to believe there is something wrong with the permissions or configuration on this server, maybe something in dcomcnfg? Here are some lines from the SharePoint log that seem to be related:

        PermissionMask check failed. asking for 0x08000000, have 0x00000000
        Unknown SPRequest error occurred. More information: 0x80070005
        Access Denied for /Pages/UserAdmin.aspx. StackTrace: Microsoft.SharePoint.Utilities.SPUtility:Void HandleAccessDenied(System.Exception), Microsoft.SharePoint.SPGlobal:Void HandleUnauthorizedAccessException(System.UnauthorizedAccessException), ....

    [UserAdmin.aspx hosts my custom web part containing the code]
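
    One hedged thing to try: the documented-safe pattern for RunWithElevatedPrivileges is to avoid touching SPContext objects inside the elevated delegate, because they were created under the original user's token; whether that is the culprit here is an assumption, but it is cheap to rule out. A sketch:

        // Capture identifiers outside the elevated scope.
        Guid siteId = SPContext.Current.Site.ID;

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite site = new SPSite(siteId))
            {
                // RootWeb is disposed along with its SPSite; no separate using needed.
                SPWeb rootWeb = site.RootWeb;
                SPGroup gAdmins = rootWeb.SiteGroups["Admins"];
                // ... read gAdmins.Users here ...
            }
        });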

    Read the article

  • Implementing a 2-Legged OAuth Provider

    - by Rob Wilkerson
    I'm trying to find my way around the OAuth spec, its requirements and any implementations I can find and, so far, it really seems like more trouble than it's worth because I'm having trouble finding a single resource that pulls it all together. Or maybe it's just that I'm looking for something more specialized than most tutorials.

    I have a set of existing APIs - some in Java, some in PHP - that I now need to secure and, for a number of reasons, OAuth seems like the right way to go. Unfortunately, my inability to track down the right resources to help me get a provider up and running is challenging that theory. Since most of this will be system-to-system API usage, I'll need to implement a 2-legged provider. With that in mind:

    - Does anyone know of any good tutorials for implementing a 2-legged OAuth provider with PHP?
    - Given that I have securable APIs in 2 languages, do I need to implement a provider in both, or is there a way to create the provider as a "front controller" that I can funnel all requests through?
    - When securing PHP services, for example, do I have to secure each API individually by including the requisite provider resources in each?

    Thanks for your help.

    Read the article

  • C# and F# lambda expressions code generation

    - by ControlFlow
    Let's look at the code generated by F# for a simple function:

        let map_add valueToAdd xs =
            xs |> Seq.map (fun x -> x + valueToAdd)

    The generated code for the lambda expression (an instance of an F# function value) looks like this:

        [Serializable]
        internal class map_add@3 : FSharpFunc<int, int>
        {
            public int valueToAdd;

            internal map_add@3(int valueToAdd)
            {
                this.valueToAdd = valueToAdd;
            }

            public override int Invoke(int x)
            {
                return (x + this.valueToAdd);
            }
        }

    Now look at nearly the same C# code:

        using System.Collections.Generic;
        using System.Linq;

        static class Program
        {
            static IEnumerable<int> SelectAdd(IEnumerable<int> source, int valueToAdd)
            {
                return source.Select(x => x + valueToAdd);
            }
        }

    And the generated code for the C# lambda expression:

        [CompilerGenerated]
        private sealed class <>c__DisplayClass1
        {
            public int valueToAdd;

            public int <SelectAdd>b__0(int x)
            {
                return (x + this.valueToAdd);
            }
        }

    So I have some questions:

    - Why is the F#-generated class not marked as sealed?
    - Why does the F#-generated class contain public fields, given that F# doesn't allow mutable closures?
    - Why does the F#-generated class have a constructor? It could be initialized perfectly well through the public fields...
    - Why is the C#-generated class not marked as [Serializable]? The classes generated for F# sequence expressions are also [Serializable], while those for C# iterators are not.
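
    A quick way to check these observations against your own builds (a hedged sketch; the name filter is an assumption, since compiler-generated names are an implementation detail):

        using System;
        using System.Linq;

        // Prints the sealed/serializable flags of compiler-generated closure types.
        foreach (var t in typeof(Program).Assembly.GetTypes()
                     .Where(t => t.Name.Contains("DisplayClass") || t.Name.Contains("@")))
        {
            Console.WriteLine("{0}: sealed={1}, serializable={2}",
                              t.Name, t.IsSealed, t.IsSerializable);
        }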

    Read the article

  • C# Assembly not found at runtime

    - by Gustavo Cardoso
    A strange error began to happen with my XNA project on a new PC. I have two projects in the solution and a library that is used by both of them. One of the projects, an XNA Game project, runs perfectly. The other project is a mix of WindowsForms and XNA: the form launches an XNA class when a button is clicked.

    When I run the program, it works great until the moment I click the button that launches the XNA class. A FileNotFoundException is fired exactly at the moment the constructor is executed:

        System.IO.FileNotFoundException was unhandled
        Message="Could not load file or assembly 'Microsoft.Xna.Framework, Version=3.0.0.0, Culture=neutral, PublicKeyToken=6d5c3888ef60e27d' or one of its dependencies. The system cannot find the path specified."

    The reference is correct; there is no problem at compile time. We have already tried deleting the reference and adding it again, but it didn't work. Everything works correctly on my teammates' PCs. Does anyone have any idea what the problem is?

    Read the article

  • Page.User.Identity.Name is blank on pages of subdomains

    - by sparks
    I have multiple subdomains trying to use a single subdomain for authentication, using forms authentication, all running on Windows Server 2008 R2. All of the forms authentication pages are set up to use the same cookie name, and on the authentication page the cookie is added with the following snippet:

        FormsAuthentication.SetAuthCookie(txtUserName.Text, false);
        System.Web.HttpCookie MyCookie = System.Web.Security.FormsAuthentication.GetAuthCookie(User.Identity.Name.ToString(), false);
        MyCookie.Domain = ConfigurationManager.AppSettings["domainName"];
        Response.AppendCookie(MyCookie);

    When I am logged in to signon.mysite.com, the Page.User.Identity.IsAuthenticated and Page.User.Identity.Name properties both work fine. When I navigate to subdomain.mysite.com, Page.User.Identity.IsAuthenticated returns true, but the name is empty. I tried to retrieve it from the cookie using the following, but it was also blank:

        HttpCookie cookie = Request.Cookies[".ASPXAUTH"];
        FormsAuthenticationTicket fat = FormsAuthentication.Decrypt(cookie.Value);
        user2_lbl.Text = fat.Name;

    When googling the issue I found some people saying something must be added to global.asax and others saying it isn't necessary. The goal is to be able to log in on the authentication subdomain and have the user identity accessible from the root site and the other subdomains. The machine keys match in all web.config files, and AppSettings["domainName"] is currently set to "mysite.com". Does anyone know what is preventing me from accessing the user information?
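
    A hedged reading of the symptom: at the point GetAuthCookie is called, the login request's User.Identity.Name is still empty (it is only populated on the request after authentication), so the domain-wide cookie carries a ticket for an empty username, which would explain IsAuthenticated being true while Name is blank on the subdomains. A sketch of issuing a single domain-wide cookie for the name that was actually typed in (the leading dot is the conventional way to cover all subdomains):

        // Build one ticket for the user who just authenticated...
        HttpCookie authCookie = FormsAuthentication.GetAuthCookie(txtUserName.Text, false);
        // ...scope it to every *.mysite.com host...
        authCookie.Domain = "." + ConfigurationManager.AppSettings["domainName"];
        // ...and replace the cookie SetAuthCookie queued, rather than appending a second one.
        Response.Cookies.Set(authCookie);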

    Read the article

  • C# average function without overflow exception

    - by Ron Klein
    .NET Framework 3.5. I'm trying to calculate the average of some pretty large numbers. For instance:

        using System;
        using System.Linq;

        class Program
        {
            static void Main(string[] args)
            {
                var items = new long[] { long.MaxValue - 100, long.MaxValue - 200, long.MaxValue - 300 };
                try
                {
                    var avg = items.Average();
                    Console.WriteLine(avg);
                }
                catch (OverflowException ex)
                {
                    Console.WriteLine("can't calculate that!");
                }
                Console.ReadLine();
            }
        }

    Obviously, the mathematical result is 9223372036854775607 (long.MaxValue - 200), but I get an exception there. This is because the implementation (on my machine) of the Average extension method, as inspected with .NET Reflector, is:

        public static double Average(this IEnumerable<long> source)
        {
            if (source == null)
            {
                throw Error.ArgumentNull("source");
            }
            long num = 0L;
            long num2 = 0L;
            foreach (long num3 in source)
            {
                num += num3;
                num2 += 1L;
            }
            if (num2 <= 0L)
            {
                throw Error.NoElements();
            }
            return (((double) num) / ((double) num2));
        }

    I know I can use a BigInt library (yes, I know that it is included in .NET Framework 4.0, but I'm tied to 3.5). But I still wonder if there's a pretty straightforward way to calculate the average of integers without an external library. Do you happen to know of such an implementation? Thanks!!

    UPDATE: The previous example, of three large integers, was just an example to illustrate the overflow issue. The question is about calculating the average of any set of numbers whose sum might exceed the type's max value. Sorry about the confusion. I also changed the question's title to avoid additional confusion. Thanks all!!
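
    A minimal sketch of one library-free approach that works on 3.5: accumulate in decimal, whose range (about 7.9e28) dwarfs long.MaxValue (about 9.2e18), so the running sum cannot overflow until the count reaches billions of max-value elements:

        using System;
        using System.Collections.Generic;

        static class AverageExtensions
        {
            public static double AverageNoOverflow(this IEnumerable<long> source)
            {
                if (source == null) throw new ArgumentNullException("source");

                decimal sum = 0;   // 128-bit; exact for integer sums of this size
                long count = 0;
                foreach (long value in source)
                {
                    sum += value;
                    count++;
                }

                if (count == 0) throw new InvalidOperationException("Sequence contains no elements");
                return (double)(sum / count);
            }
        }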

    Read the article

  • Ruby through RVM fails

    - by TheLQ
    In constant battle to install Ruby 1.9.2 on an RPM system (OS is based off of CentOS), I'm trying again with RVM. So once I install it, I then try to use it:

        [root@quackwall ~]# rvm use 1.9.2
        Using /usr/local/rvm/gems/ruby-1.9.2-p136
        [root@quackwall ~]# ruby
        bash: ruby: command not found
        [root@quackwall ~]# which ruby
        /usr/bin/which: no ruby in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)

    Now that's interesting; rvm info says something completely different:

        [root@quackwall bin]# rvm info

        ruby-1.9.2-p136:

          system:
            uname:        "Linux quackwall.highwow.lan 2.6.18-194.8.1.v5 #1 SMP Thu Jul 15 01:14:04 EDT 2010 i686 i686 i386 GNU/Linux"
            bash:         "/bin/bash => GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)"
            zsh:          " => not installed"

          rvm:
            version:      "rvm 1.2.2 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]"

          ruby:
            interpreter:  "ruby"
            version:      "1.9.2p136"
            date:         "2010-12-25"
            platform:     "i686-linux"
            patchlevel:   "2010-12-25 revision 30365"
            full_version: "ruby 1.9.2p136 (2010-12-25 revision 30365) [i686-linux]"

          homes:
            gem:          "/usr/local/rvm/gems/ruby-1.9.2-p136"
            ruby:         "/usr/local/rvm/rubies/ruby-1.9.2-p136"

          binaries:
            ruby:         "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/ruby"
            irb:          "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/irb"
            gem:          "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/gem"
            rake:         "/usr/local/rvm/gems/ruby-1.9.2-p136/bin/rake"

          environment:
            PATH:         "/usr/local/rvm/gems/ruby-1.9.2-p136/bin:/usr/local/rvm/gems/ruby-1.9.2-p136@global/bin:/usr/local/rvm/rubies/ruby-1.9.2-p136/bin:bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/rvm/bin"
            GEM_HOME:     "/usr/local/rvm/gems/ruby-1.9.2-p136"
            GEM_PATH:     "/usr/local/rvm/gems/ruby-1.9.2-p136:/usr/local/rvm/gems/ruby-1.9.2-p136@global"
            MY_RUBY_HOME: "/usr/local/rvm/rubies/ruby-1.9.2-p136"
            IRBRC:        "/usr/local/rvm/rubies/ruby-1.9.2-p136/.irbrc"
            RUBYOPT:      ""
            gemset:       ""

    So I have RVM that says one thing and bash which says another. Any suggestions on how to get this working?

    Read the article

  • Distinguishing between .NET exception types

    - by Swingline Rage
    For the love of all things holy, how do you distinguish between different "exception flavors" within the predefined .NET exception classes? For example, a piece of code might throw an XmlException under any of the following conditions:

    - The root element of the document is NULL
    - Invalid chars are in the document
    - The document is too long

    All of these are thrown as XmlException objects, and all of the internal "tell me more about this exception" fields (such as Exception.HResult, Exception.Data, etc.) are usually empty or null. That leaves Exception.Message as the only thing that lets you distinguish among these exception types, and you can't really depend on it because, you guessed it, the Exception.Message string is globalized, and can change when the culture changes. At least that's my read on the documentation.

    Exception.HResult and Exception.Data are widely ignored across the .NET libraries. They are the red-headed stepchildren of the world's .NET error-handling code. And even assuming they weren't, the HRESULT type is still the worst, downright nastiest error code in the history of error codes. Why we are still looking at HRESULTs in 2010 is beyond me. I mean, if you're doing Interop or P/Invoke that's one thing, but... HRESULTs have no place in System.Exception. HRESULTs are a wart on the proboscis of System.Exception.

    But seriously, it means I have to set up a lot of detailed, specific error-handling code in order to figure out the same information that should have been passed as part of the exception data. Exceptions are useless if they force you to work like this. What am I doing wrong?

    Read the article

  • How to force two processes to run on the same CPU?

    - by kovan
    Context: I'm programming a software system that consists of multiple processes. It is written in C++ under Linux, and the processes communicate among themselves using Linux shared memory.

    Usually, in software development, performance optimization is done in the final stage, and here I ran into a big problem. The software has high performance requirements, but on machines with 4 or 8 CPU cores (usually with more than one physical CPU), it was only able to use 3 cores, thus wasting 25% of the CPU power on the former and more than 60% on the latter. After much research, and having discarded mutex and lock contention, I found out that the time was being wasted on shmdt/shmat calls (detach and attach to shared memory segments). After some more research, I found out that these CPUs, which are usually AMD Opterons and Intel Xeons, use a memory architecture called NUMA, which basically means that each processor has its own fast "local memory", and accessing memory belonging to other CPUs is expensive.

    After doing some tests, the problem seems to be that the software is designed so that, basically, any process can pass shared memory segments to any other process, and to any thread in them. This seems to kill performance, as processes are constantly accessing memory belonging to other processes.

    Question: Now, the question is, is there any way to force pairs of processes to execute on the same CPU? I don't mean forcing them to always execute on the same processor, as I don't care which one they run on, although that would do the job. Ideally, there would be a way to tell the kernel: if you schedule this process on one processor, you must also schedule its "brother" process (the process it communicates with through shared memory) on that same processor, so that performance is not penalized.

    Read the article

  • What is the target color profile in Image.FromFile?

    - by Jan Zich
    I am curious what the useEmbeddedColorManagement parameter in System.Drawing.Image.FromFile actually does. This parameter corresponds directly to a GDI+ parameter in the same method of the same class, so debugging the .NET source does not lead anywhere.

    If my understanding of color profiles is correct, a color profile is basically a mapping which describes how particular RGB triples (or CMYK or something else) map into the so-called Profile Connection Space (CIELAB or CIEXYZ). Now, if I open an image with an embedded color profile in .NET, setting useEmbeddedColorManagement to true, my experience is that I get an image whose RGB values are not exactly the same as the original values in the file, i.e. they have been transformed. Since the original image was RGB and the new one is also RGB, there must have been a transformation from the embedded color profile to a Profile Connection Space and then back to RGB. The thing I don't understand is: what is the target color system? Is it some default Windows color profile? Is it the current monitor profile? Is it sRGB?

    Read the article
