Search Results

Search found 5211 results on 209 pages for 'named'.

Page 3 of 209

  • Create named criteria in EJB Data control

    - by shantala.sankeshwar
    This article gives the detailed steps for creating a named criteria in an EJB Data Control. Note that this feature is available in JDeveloper version 11.1.2.0.0. Use case description: suppose we have defined an EJB entity object and we would like to filter the entity object based on some criteria; this filtering can be achieved by creating a named criteria in the EJB Data Control. Implementation steps: let us suppose that we have created a Java EE Web Application with entities from the Emp table. Create a session bean and generate a data control for it. Edit empFindAll in the DataControls.dcx file and create a simple named criteria (deptno >= 20) by clicking the '+' icon. Refresh the Data Controls and create a new jspx page. Drop EmpCriteria as an ADF Query Panel with Table. Run the page, click the search button, and we will see that the Emp table shows the filtered records.

    Read the article

  • EJB Named Criteria - Apply bind variable in Backingbean

    - by Deepak Siddappa
    EJB named criteria are predefined and reusable where-clause definitions that are dynamically applied to a ViewObject query. They are often used to filter the ViewObject SQL query based on where-clause conditions. Take a scenario where we need to filter the SQL query based on where-clause conditions: instead of playing with SQL statements, use the EJB named criteria, which are supported by default in ADF, and set the bind variable parameter at run time. You can download the sample workspace from here [runs with Oracle JDeveloper 11.1.2.0.0 (11g R2) + HR schema].

    Implementation steps: Create a Java EE Web Application with an entity based on the Employees table, then create a session bean and a data control for the session bean. Open the DataControls.dcx file and create a sparse XML for it. In the sparse XML, navigate to the Named Criteria tab -> Bind Variables section and create the bind variable deptId. Now create a named criteria and map the query attributes to the bind variable. In the ViewController, create an index.jspx page; from the data control palette, drop employeesFindAll->Named Criteria->EmployeesCriteria->Table as an ADF Read-Only Filtered Table and create the backing bean as "IndexBean". Open the index.jspx page and remove the "filterModel" binding from the table, add an <af:inputText /> and a command button, and bind them to the backing bean. For the command button, create the actionListener as "applyEmpCriteria" and add the code below to the backing bean.

        public void applyEmpCriteria(ActionEvent actionEvent) {
            DCIteratorBinding dc = (DCIteratorBinding)evaluteEL("#{bindings.employeesFindAllIterator}");
            ViewObject vo = dc.getViewObject();
            vo.applyViewCriteria(vo.getViewCriteriaManager().getViewCriteria("EmployeesCriteria"));
            vo.ensureVariableManager().setVariableValue("deptId", this.getDeptId().getValue());
            vo.executeQuery();
        }

        /**
         * Programmatic evaluation of EL
         *
         * @param el EL to evaluate
         * @return Result of the evaluation
         */
        public Object evaluteEL(String el) {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elContext = fctx.getELContext();
            Application app = fctx.getApplication();
            ExpressionFactory expFactory = app.getExpressionFactory();
            ValueExpression valExp = expFactory.createValueExpression(elContext, el, Object.class);
            return valExp.getValue(elContext);
        }

    Run the index.jspx page, enter the departmentId value as 90, and click the ApplyEmpCriteria button. Now the bind variable for the named criteria will be applied at runtime in the backing bean, and it will re-execute the ViewObject query to filter based on the where-clause condition.

    Read the article

  • C# 4.0 Optional/Named Parameters Beginner’s Tutorial

    - by mbcrump
    One of the interesting features of C# 4.0 is support for both named and optional arguments. They are often very useful together, but are actually two quite different things. Optional arguments give us the ability to omit arguments to method invocations. Named arguments allow us to specify arguments by name instead of by position. Code using named parameters is often more readable than code relying on argument position. These features were long overdue, especially in regard to COM interop. Below, I have included some examples to help you understand them more in depth. Please remember to target the .NET 4 Framework when trying these samples. Code snippet:

        using System;

        namespace ConsoleApplication3
        {
            class Program
            {
                static void Main(string[] args)
                {
                    //C# 4.0 Optional/Named Parameters Tutorial
                    Foo();                              //Prints to the console | Return Nothing 0
                    Foo("Print Something");             //Prints to the console | Print Something 0
                    Foo("Print Something", 1);          //Prints to the console | Print Something 1
                    Foo(x: "Print Something", i: 5);    //Prints to the console | Print Something 5
                    Foo(i: 5, x: "Print Something");    //Prints to the console | Print Something 5
                    Foo("Print Something", i: 5);       //Prints to the console | Print Something 5
                    Foo2(i3: 77);                       //Prints to the console | 77
                //  Foo(x:"Print Something", 5);        //Positional parameters must come before named arguments. This will error out.
                    Console.Read();
                }

                static void Foo(string x = "Return Nothing", int i = 0)
                {
                    Console.WriteLine(x + " " + i + Environment.NewLine);
                }

                static void Foo2(int i = 1, int i2 = 2, int i3 = 3, int i4 = 4)
                {
                    Console.WriteLine(i3);
                }
            }
        }

    Read the article

  • Using HtmlAnchor for an anchor tag that navigates to an in-page named anchor

    - by Frank Schwieterman
    I am trying to render a simple hyperlink that links to a named anchor within the page, for example: <a href="#namedAnchor">scroll to down</a> <a name="namedAnchor">down</a> The problem is that when I use an ASP.NET control like asp:HyperLink or HtmlAnchor, the href="#namedAnchor" is rendered as href="controls/#namedAnchor" (where controls is the subdirectory where the user control containing the anchor is). Here is the code for the control, using two types of anchor controls, which both have the same problem: <%@ Control Language="C#" AutoEventWireup="true" CodeBehind="Test.ascx.cs" Inherits="TestWebApplication1.controls.Test" %> <a href="#namedAnchor" runat="server">HtmlAnchor</a> <asp:HyperLink NavigateUrl="#namedAnchor" runat="server">HyperLink</asp:HyperLink> The generated source looks like: <a href="controls/#namedAnchor">HtmlAnchor</a> <a href="controls/#namedAnchor">HyperLink</a> I really just want: <a href="#namedAnchor">HtmlAnchor</a> <a href="#namedAnchor">HyperLink</a> I am using the HtmlAnchor or HyperLink class because I want to make changes to other attributes in the code behind. I do not want to introduce a custom web control for this requirement, as the requirement I'm pursuing is not that important and it would be hard to talk the team into abandoning the traditional ASP.NET link controls. It seems like I should be able to use the ASP.NET link controls to generate the desired link.

    Read the article

  • Firefox reloading parent page in iframe when clicking named anchor

    - by masty
    My site has an iframe which is dynamically populated with html content. The html often contains named anchors, which work fine in IE/Chrome but in Firefox it reopens the entire page within the iframe. Here's an example: load the page in firefox, scroll to the bottom of the iframe, click the "back to top" link, and you will see what I am talking about. <html><head></head><body onload="setFrameContent();"><script> var htmlBody = '<html> <head></head> <body>' + '<a name="top"><h1>top</h1></a>' + '<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>' + '<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>' + '<a href="#top">back to top</a></body> </html> '; function setFrameContent(){ if (frames.length > 0) { var d = frames[0].document; d.open(); d.write(htmlBody); d.close(); } } </script> <h1>Here's an iframe:</h1> <iframe id="htmlIframe" style="height: 400px; width: 100%"><p>Your browser does not support iframes.</p></iframe> </body></html> Any ideas?

    Read the article

  • MySQL to PostgreSQL and Named Scope

    - by Lowgain
    I've got a named scope for one of my models that works fine. The code is: named_scope :inbox_threads, lambda { |user| { :include => [:deletion_flags, :recipiences], :conditions => ["recipiences.user_id = ? AND deletion_flags.user_id IS NULL", user.id], :group => "msg_threads.id" }} This works fine on my local copy of the app with a MySQL database, but when I push my app to Heroku (which only uses PostgreSQL), I get the following error: ActiveRecord::StatementInvalid (PGError: ERROR: column "msg_threads.subject" must appear in the GROUP BY clause or be used in an aggregate function: SELECT "msg_threads"."id" AS t0_r0, "msg_threads"."subject" AS t0_r1, "msg_threads"."originator_id" AS t0_r2, "msg_threads"."created_at" AS t0_r3, "msg_threads"."updated_at" AS t0_r4, "msg_threads"."url_key" AS t0_r5, "deletion_flags"."id" AS t1_r0, "deletion_flags"."user_id" AS t1_r1, "deletion_flags"."msg_thread_id" AS t1_r2, "deletion_flags"."confirmed" AS t1_r3, "deletion_flags"."created_at" AS t1_r4, "deletion_flags"."updated_at" AS t1_r5, "recipiences"."id" AS t2_r0, "recipiences"."user_id" AS t2_r1, "recipiences"."msg_thread_id" AS t2_r2, "recipiences"."created_at" AS t2_r3, "recipiences"."updated_at" AS t2_r4 FROM "msg_threads" LEFT OUTER JOIN "deletion_flags" ON deletion_flags.msg_thread_id = msg_threads.id LEFT OUTER JOIN "recipiences" ON recipiences.msg_thread_id = msg_threads.id WHERE (recipiences.user_id = 1 AND deletion_flags.user_id IS NULL) GROUP BY msg_threads.id) I'm not as familiar with the workings of Postgres, so what would I need to add here to get this working? Thanks!

    Read the article

  • How can I determine which named property (or properties) are filling my Exchange 2007 information store?

    - by Mikey B
    Hi Guys, I'm experiencing the following error on an Exchange 2007 server: Event ID: 9667 Type: Error Category: General Source: msgidNamedPropsQuotaError Description: Failed to create a new named property for database "" because the number of named properties reached the quota limit (). User attempting to create the named property: . Named property GUID: . Named property name/id: . I understand that this can occur if the exchange information store is filling up with named properties... but I don't know how to determine which specific named property is at fault here. Is there a way to examine the DB for this type of info to see if there's a specific recurring named property that is consuming resources? -M

    Read the article

  • named responding recursively to norecurse queries

    - by Keks
    I have a server on which named is running. Its traffic is intercepted by another named server, which it is not aware of. Querying the first named server results in timeouts. The server tries to resolve the query recursively; during that, the firewall redirects the DNS request from the first named server to the second one (the query from the first one is addressed to, e.g., a root server and has its "Recursion desired" bit set to 0). Despite that, the second named responds to this request with an entirely resolved response, or at least one resolved one level further than the first named server expects. So it ends up with a timeout even though it got a correct name server or even the full IP for the queried domain. In the first case, the first name server tries to follow the authority domain, ignoring the corresponding glue record, and ends up in a loop it aborts: queried: google.com -> got from named#2: ns1.google.com -> ignore glue record and query: ns1.google.com -> got authority from named#2: google.com. In the second case, it ignores the answer section with the correct IP and instead tries to follow the name servers from the authority section, which ends up in the same dead end as case 1. So how can it be that the second named responds with recursive results even though the bit was explicitly set to 0 in the request from the first named?

    Read the article

  • C++: Implementing Named Pipes using the Win32 API

    - by Mike Pateras
    I'm trying to implement named pipes in C++, but either my reader isn't reading anything, or my writer isn't writing anything (or both). Here's my reader: int main() { HANDLE pipe = CreateFile(GetPipeName(), GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); char data[1024]; DWORD numRead = 1; while (numRead >= 0) { ReadFile(pipe, data, 1024, &numRead, NULL); if (numRead > 0) cout << data; } return 0; } LPCWSTR GetPipeName() { return L"\\\\.\\pipe\\LogPipe"; } And here's my writer: int main() { HANDLE pipe = CreateFile(GetPipeName(), GENERIC_WRITE, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); string message = "Hi"; WriteFile(pipe, message.c_str(), message.length() + 1, NULL, NULL); return 0; } LPCWSTR GetPipeName() { return L"\\\\.\\pipe\\LogPipe"; } Does that look right? numRead in the reader is always 0, for some reason, and it reads nothing but 1024 -54's (some weird I character). Solution: Reader (Server): while (true) { HANDLE pipe = CreateNamedPipe(GetPipeName(), PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND , PIPE_WAIT, 1, 1024, 1024, 120 * 1000, NULL); if (pipe == INVALID_HANDLE_VALUE) { cout << "Error: " << GetLastError(); } char data[1024]; DWORD numRead; ConnectNamedPipe(pipe, NULL); ReadFile(pipe, data, 1024, &numRead, NULL); if (numRead > 0) cout << data << endl; CloseHandle(pipe); } Writer (client): HANDLE pipe = CreateFile(GetPipeName(), GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL); if (pipe == INVALID_HANDLE_VALUE) { cout << "Error: " << GetLastError(); } string message = "Hi"; cout << message.length(); DWORD numWritten; WriteFile(pipe, message.c_str(), message.length(), &numWritten, NULL); return 0; The server blocks until it gets a connected client, reads what the client writes, and then sets itself up for a new connection, ad infinitum. Thanks for the help, all!

    Read the article

  • Templates, interfaces (multiple inheritance) and static functions (named constructors)

    - by fledgling Cxx user
    Setup I have a graph library where I am trying to decompose things as much as possible, and the cleanest way to describe it that I found is the following: there is a vanilla type node implementing only a list of edges: class node { public: int* edges; int edge_count; }; Then, I would like to be able to add interfaces to this whole mix, like so: template <class T> class node_weight { public: T weight; }; template <class T> class node_position { public: T x; T y; }; and so on. Then, the actual graph class comes in, which is templated on the actual type of node: template <class node_T> class graph { protected: node_T* nodes; public: static graph cartesian(int n, int m) { graph r; r.nodes = new node_T[n * m]; return r; } }; The twist is that it has named constructors which construct some special graphs, like a Cartesian lattice. In this case, I would like to be able to add some extra information into the graph, depending on what interfaces are implemented by node_T. What would be the best way to accomplish this? Possible solution I thought of the following humble solution, through dynamic_cast<>: template <class node_T, class weight_T, class position_T> class graph { protected: node_T* nodes; public: static graph cartesian(int n, int m) { graph r; r.nodes = new node_T[n * m]; if (dynamic_cast<node_weight<weight_T>>(r.nodes[0]) != nullptr) { // do stuff knowing you can add weights } if (dynamic_cast<node_position<positionT>>(r.nodes[0]) != nullptr) { // do stuff knowing you can set position } return r; } }; which would operate on node_T being the following: template <class weight_T, class position_T> class node_weight_position : public node, public node_weight<weight_T>, public node_position<position_T> { // ... }; Questions Is this -- philosophically -- the right way to go? I know people don't look nicely at multiple inheritance, though with "interfaces" like these it should all be fine. There are unfortunately problems with this. From what I know at least, dynamic_cast<> involves quite a bit of run-time overhead. Hence, I run into a problem with what I had solved earlier: writing graph algorithms that require weights independently of whether the actual node_T class has weights or not. The solution with this 'interface' approach would be to write a function: template <class node_T, class weight_T> inline weight_T get_weight(node_T const & n) { if (dynamic_cast<node_weight<weight_T>>(n) != nullptr) { return dynamic_cast<node_weight<weight_T>>(n).weight; } return T(1); } but the issue with it is that it works using run-time information (dynamic_cast), yet in principle I would like to decide it at compile-time and thus make the code more efficient. If there is a different solution that would solve both problems, especially a cleaner and better one than what I have, I would love to hear about it!

    Read the article

  • ruby on rails named scope implementation

    - by Engwan
    From the book Agile Web Development With Rails: class Order < ActiveRecord::Base named_scope :last_n_days, lambda { |days| {:conditions => ['updated < ?' , days] } } named_scope :checks, :conditions => {:pay_type => :check} end The statement orders = Orders.checks.last_n_days(7) will result in only one query to the database. How does Rails implement this? I'm new to Ruby and I'm wondering if there's a special construct that allows this to happen. To be able to chain methods like that, the functions generated by named_scope must be returning themselves or an object that can be scoped further. But how does Ruby know that it is the last function call and that it should query the database now? I ask this because the statement above actually queries the database and does not just return an SQL statement that results from chaining.
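    A rough Python sketch of the lazy, chainable-scope idea the question describes (hypothetical classes for illustration, not the actual Rails internals): each scope call returns a new object that merely accumulates conditions, and the SQL only runs when the records are finally needed.

        class LazyScope:
            """Accumulates conditions; the query only runs when records are needed."""

            def __init__(self, conditions=None):
                self.conditions = conditions or []

            def where(self, condition):
                # Each call returns a new scope, so calls can be chained indefinitely.
                return LazyScope(self.conditions + [condition])

            def to_sql(self):
                return "SELECT * FROM orders WHERE " + " AND ".join(self.conditions)

            def __iter__(self):
                # The "kicker": iterating (or an explicit .all) finally executes the SQL.
                print("executing:", self.to_sql())
                return iter([])  # stand-in for rows fetched from the database

        checks = LazyScope().where("pay_type = 'check'")
        recent_checks = checks.where("updated < now() - interval '7 days'")
        list(recent_checks)  # only now does a single combined query run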

    Read the article

  • Registering a Named Function as a Listener with Jquery

    - by Marcus
    I'm new to javascript/jquery and I've done some poking around the web, but I can't figure out why the following is invalid: var toggleSection = function(sectionName) { // Do some Jquery work to toggle stuff based on sectionName string // (concatenate sectionName with other text to form selectors) }; $('#togglecont1').click(toggleSection("container1")); Is there something obvious I'm missing? Thanks in advance.

    Read the article

  • ASP Classic Named Parameter in Parameterized Query: Must declare the scalar variable

    - by My Alter Ego
    I'm trying to write a parameterized query in ASP Classic, and it's starting to feel like i'm beating my head against a wall. I'm getting the following error: Must declare the scalar variable "@something". I would swear that is what the hello line does, but maybe i'm missing something... <% OPTION EXPLICIT %> <!-- #include file="../common/adovbs.inc" --> <% Response.Buffer=false dim conn,connectionString,cmd,sql,rs,parm connectionString = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Data Source=.\sqlexpress;Initial Catalog=stuff" set conn = server.CreateObject("adodb.connection") conn.Open(connectionString) set cmd = server.CreateObject("adodb.command") set cmd.ActiveConnection = conn cmd.CommandType = adCmdText cmd.CommandText = "select @something" cmd.NamedParameters = true cmd.Prepared = true set parm = cmd.CreateParameter("@something",advarchar,adParamInput,255,"Hello") call cmd.Parameters.append(parm) set rs = cmd.Execute if not rs.eof then Response.Write rs(0) end if %>

    Read the article

  • Rails Named Scope and overlapping conditions

    - by Tumtu
    Hi everyone, I have a question about Rails SQL generation: class Organization < ActiveRecord::Base has_many :people named_scope :active, :conditions => { :active => 'Yes' } end class Person < ActiveRecord::Base belongs_to :organization end Rails SQL for all active people in the first organization: Organization.first.people.active.all Organization Load (0.0ms) SELECT TOP 1 * FROM [organizations] Person Load (0.0ms) SELECT * FROM [people] WHERE ((([people].[active] = 'Yes') AND ([people].organization_id = 1)) AND ([people].organization_id = 1)) Why does Rails generate the "[people].organization_id = 1" condition twice? Does someone know how to make it DRY? e.g. SELECT * FROM [people] WHERE (([people].[active] = 'Yes') AND ([people].organization_id = 1))

    Read the article

  • Disambiguating Named Entities in Java

    - by Alterscape
    I have a list of strings (company names, in this case), and a Java program that extracts a list of things that look like company names out of mostly-unstructured text. I need to match each element of extracted text to a string in the list. Caveat: the unstructured text has typos, things like "Blah, Inc." referred to as "Blah," etc. I've tried Levenshtein Edit Distance, but that fails for predictable reasons. Are there known best-practices ways of tackling this problem? Or am I back to manual data-entry?
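    One simple tactic (a rough Python sketch with a made-up suffix list and sample names, not an established best-practice library) is to normalize both strings before measuring edit distance, so legal suffixes and punctuation stop dominating the score:

        import re

        SUFFIXES = {"inc", "incorporated", "llc", "ltd", "corp", "co"}

        def normalize(name):
            # Lowercase, strip punctuation, drop common legal suffixes.
            tokens = re.findall(r"[a-z0-9]+", name.lower())
            return " ".join(t for t in tokens if t not in SUFFIXES)

        def levenshtein(a, b):
            # Classic dynamic-programming edit distance.
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                cur = [i]
                for j, cb in enumerate(b, 1):
                    cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
                prev = cur
            return prev[-1]

        def best_match(extracted, known_names):
            # Pick the known company whose normalized form is closest.
            return min(known_names, key=lambda k: levenshtein(normalize(extracted), normalize(k)))

        print(best_match("Blah", ["Blah, Inc.", "Foo Corp."]))  # -> "Blah, Inc."

    A cutoff on the distance, relative to the length of the normalized strings, gives a natural point at which to fall back to manual review instead of accepting a bad match.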

    Read the article

  • Named keywords in decorators?

    - by wheaties
    I've been playing around in depth with attempting to write my own version of a memoizing decorator before I go looking at other people's code. It's more of an exercise in fun, honestly. However, in the course of playing around I've found I can't do something I want with decorators.

        def addValue( func, val ):
            def add( x ):
                return func( x ) + val
            return add

        @addValue( val=4 )
        def computeSomething( x ):
            #function gets defined

    If I want to do that I have to do this:

        def addTwo( func ):
            return addValue( func, 2 )

        @addTwo
        def computeSomething( x ):
            #function gets defined

    Why can't I use keyword arguments with decorators in this manner? What am I doing wrong and can you show me how I should be doing it?
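    For what it's worth, a minimal sketch of the standard decorator-factory pattern that makes @addValue( val=4 ) work: the outer call takes the keyword argument and returns the real decorator, which in turn wraps the function.

        def addValue(val):
            # Called first, with the keyword argument; returns the real decorator.
            def decorator(func):
                def add(x):
                    return func(x) + val
                return add
            return decorator

        @addValue(val=4)
        def computeSomething(x):
            return x * 2

        print(computeSomething(3))  # (3 * 2) + 4 = 10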

    Read the article

  • Using a named pipe to simulate a serial port on a VMware virtual machine (linux host and client)

    - by Dave M
    Trying to write a python program to create a simulated data stream and feed it, through a named pipe, to a VMware virtual machine. The host is running Ubuntu 11.10 and VMware player 5.0.0. The VM is running Ubuntu netbook 10.04. I am able to get the pipe working on the local machine but I am not able to get the pipe to pass data through the virtual serial port to the programs running on the virtual machine.

        #!/usr/bin/python
        import os
        #
        # Create a named pipe that will be used as the serial port on a VMware virtual machine
        SerialPipe = '/tmp/gpsd2NMEA'
        try:
            os.unlink(SerialPipe)
        except:
            pass
        os.mkfifo(SerialPipe)
        #
        # Open the named pipe
        NMEApipe = os.open(SerialPipe, os.O_RDWR|os.O_NONBLOCK)
        #
        # Write a string to the named pipe
        NMEAtime = "235959"
        os.write(NMEApipe, str( '%s\n' % NMEAtime ))

    Test to see if the python program is working on the host machine (displays 235959 if data is passing through the pipe):

        $ cat /tmp/gpsd2NMEA
        235959

    Serial port as defined in the VMware .vmx file:

        serial0.present = "TRUE"
        serial0.startConnected = "TRUE"
        serial0.fileType = "pipe"
        serial0.fileName = "/tmp/gpsd2NMEA"
        serial0.pipe.endPoint = "client"
        serial0.autodetect = "FALSE"
        serial0.tryNoRxLoss = "TRUE"
        serial0.yieldOnMsrRead = "TRUE"

    Test to see if the serial port in the VM is receiving data:

        $ cat /dev/ttyS0
        or
        $ minicom -D /dev/ttyS0
        or
        $ stty -F /dev/ttyS0 cs8 -parenb -cstopb 115200
        $ echo < /dev/ttyS0

    None of these display any data from the python program.
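    Since the goal is a continuous simulated stream rather than a single write, here is a hedged sketch of a looping writer (same FIFO path as the script above; the one-second interval and the $GPZDA-style sentence are made up for illustration):

        #!/usr/bin/python
        import datetime
        import os
        import time

        SerialPipe = '/tmp/gpsd2NMEA'

        # Opening write-only blocks until something (e.g. the VM's serial port)
        # opens the read end of the FIFO.
        NMEApipe = os.open(SerialPipe, os.O_WRONLY)

        while True:
            # Emit a fake NMEA-style timestamp line once per second.
            now = datetime.datetime.utcnow().strftime("%H%M%S")
            os.write(NMEApipe, ("$GPZDA,%s\n" % now).encode())
            time.sleep(1)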

    Read the article

  • MySQL on Windows - how do I set the wait_timeout for connections using named pipes?

    - by gustafc
    I use a MySQL database running on a Windows box, and for performance reasons I'm connecting to it using named pipes. The (Java) application using the database (through Hibernate) can let the connection lie idle for quite a long time, which causes the connection to fail with the following message: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 33 558 297 milliseconds ago. The last packet sent successfully to the server was 33 558 297 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. autoReconnect unfortunately has no effect (and neither does autoReconnectForPools), but the wait_timeout docs state that wait_timeout only applies "to TCP/IP and Unix socket file connections, not to connections made via named pipes, or shared memory". How can I change the wait_timeout for named pipes?

    Read the article

  • C# 4.0: Named And Optional Arguments

    - by Paulo Morgado
    As part of the co-evolution effort of C# and Visual Basic, C# 4.0 introduces Named and Optional Arguments. First of all, let’s clarify what arguments and parameters are: Method definition parameters are the input variables of the method. Method call arguments are the values provided to the method parameters. In fact, the C# Language Specification states the following on §7.5: The argument list (§7.5.1) of a function member invocation provides actual values or variable references for the parameters of the function member. Given the above definitions, we can state that: Parameters have always been named and still are. Parameters have never been optional and still aren’t. Named Arguments: Until now, the way the C# compiler matched method call arguments with method parameters was by position. The first argument provides the value for the first parameter, the second argument provides the value for the second parameter, and so on, regardless of the names of the parameters. If a parameter was missing a corresponding argument to provide its value, the compiler would emit a compilation error. For this call: Greeting("Mr.", "Morgado", 42); this method: public void Greeting(string title, string name, int age) will receive as parameters: title: “Mr.” name: “Morgado” age: 42 What this new feature allows is to use the names of the parameters to identify the corresponding arguments in the form: name:value Not all arguments in the argument list must be named. However, all named arguments must be at the end of the argument list. The matching between arguments (and the evaluation of their values) and parameters will be done first by name for the named arguments and then by position for the unnamed arguments. This means that, for this method definition: public static void Method(int first, int second, int third) this call declaration: int i = 0; Method(i, third: i++, second: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = i++; int CS$0$0001 = ++i; Method(i, CS$0$0001, CS$0$0000); which will give the method the following parameter values: first: 2 second: 2 third: 0 Notice the variable names. Although invalid as C# identifiers, they are valid .NET identifiers, thus avoiding collisions between user-written and compiler-generated code. Besides allowing re-ordering of the argument list, this feature is very useful for auto-documenting the code, for example, when the argument list is very long or it is not clear from the call site what the arguments are. Optional Arguments: Parameters can now have default values: public static void Method(int first, int second = 2, int third = 3) Parameters with default values must be the last in the parameter list, and the default value is used as the value of the parameter if the corresponding argument is missing from the method call declaration. For this call declaration: int i = 0; Method(i, third: ++i); will have this code generated by the compiler: int i = 0; int CS$0$0000 = ++i; Method(i, 2, CS$0$0000); which will give the method the following parameter values: first: 1 second: 2 third: 1 Because, when method parameters have default values, arguments can be omitted from the call declaration, this might seem like method overloading or a good replacement for it, but it isn’t.
    Although methods like this: public static StreamReader OpenTextFile( string path, Encoding encoding = null, bool detectEncoding = true, int bufferSize = 1024) allow their calls to be written like this: OpenTextFile("foo.txt", Encoding.UTF8); OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4096); OpenTextFile( bufferSize: 4096, path: "foo.txt", detectEncoding: false); the compiler handles default values like constant fields, taking the value and using it instead of a reference to the value. So, like with constant fields, methods with parameters with default values are exposed publicly (and remember that internal members might be publicly accessible – InternalsVisibleToAttribute). If such methods are publicly accessible and used by another assembly, those values will be hard-coded in the calling code and, if the called assembly has its default values changed, they won’t be picked up by already compiled code. At first glance, I thought that using optional arguments for “badly” written code was great, but the ability to write code like that was just pure evil. But then I realized that, since I use private constant fields, it’s OK to use default parameter values on privately accessed methods.

    Read the article

  • Initialized variables vs named constants

    - by Mike
    I'm working on a fundamental programming class in college and our textbook is "Programming Logic and Design" by Joyce Farrell (spelling?). Anyhow, I'm struggling conceptually when it comes to initialized variables and named constants. Our class is focusing on pseudo-code for the time being and not one particular language, so let me illustrate what I'm talking about. Let's say I am declaring a variable named "myVar" and the data type is numeric: num myVar Now I want to initialize it (I don't understand this concept) starting with the number 5: num myVar = 5 How is that any different from creating a named constant?
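    A quick sketch of the distinction in Python (which has no enforced constants, so the "constant" below is only a naming convention):

        # Initialized variable: starts out as 5, but the program is free to change it.
        my_var = 5
        my_var = my_var + 1          # legal; my_var is now 6

        # Named constant: also given its value up front, but by intent (and, in many
        # languages, enforced by the compiler) it is never reassigned afterwards.
        TAX_RATE = 0.07              # SCREAMING_CASE signals "do not change me"

        price = 100
        total = price * (1 + TAX_RATE)
        print(my_var, total)         # the variable changed; the constant did not

    The initialization syntax looks the same in both cases; the difference is whether the program is allowed (or expected) to assign a new value later.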

    Read the article

  • Linux C: "Interactive session" with separate read and write named pipes?

    - by ~sd-imi
    Hi all, I am trying to work with "Introduction to Interprocess Communication Using Named Pipes - Full-Duplex Communication Using Named Pipes", http://developers.sun.com/solaris/articles/named_pipes.html#5 ; in particular fd_server.c (included below for reference) Here is my info and compile line: :~$ cat /etc/issue Ubuntu 10.04 LTS \n \l :~$ gcc --version gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3 :~$ gcc fd_server.c -o fd_server fd_server.c creates two named pipes, one for reading and one for writing. What one can do, is: in one terminal, run the server and read (through cat) its write pipe: :~$ ./fd_server & 2>/dev/null [1] 11354 :~$ cat /tmp/np2 and in another, write (using echo) to the server's read pipe: :~$ echo "heeellloooo" > /tmp/np1 going back to the first terminal, one can see: :~$ cat /tmp/np2 HEEELLLOOOO 0[1]+ Exit 13 ./fd_server 2>/dev/null What I would like to do, is make sort of an "interactive" (or "shell"-like) session; that is, the server is run as usual, but instead of running "cat" and "echo", I'd like to use something akin to screen. What I mean by that, is that screen can be called like screen /dev/ttyS0 38400, and then it makes a sort of interactive session, where what is typed in the terminal is passed to /dev/ttyS0, and its response is written to the terminal. Now, of course, I cannot use screen, because in my case the program has two separate nodes, and as far as I can tell, screen can refer to only one. How would one go about achieving this sort of "interactive" session in this context (with two separate read/write pipes)? Thanks, Cheers! Code below: #include <stdio.h> #include <errno.h> #include <ctype.h> #include <sys/types.h> #include <sys/stat.h> #include <fcntl.h> //#include <fullduplex.h> /* For name of the named-pipe */ #define NP1 "/tmp/np1" #define NP2 "/tmp/np2" #define MAX_BUF_SIZE 255 #include <stdlib.h> //exit #include <string.h> //strlen int main(int argc, char *argv[]) { int rdfd, wrfd, ret_val, count, numread; char buf[MAX_BUF_SIZE]; /* Create the first named - pipe */ ret_val = mkfifo(NP1, 0666); if ((ret_val == -1) && (errno != EEXIST)) { perror("Error creating the named pipe"); exit (1); } ret_val = mkfifo(NP2, 0666); if ((ret_val == -1) && (errno != EEXIST)) { perror("Error creating the named pipe"); exit (1); } /* Open the first named pipe for reading */ rdfd = open(NP1, O_RDONLY); /* Open the second named pipe for writing */ wrfd = open(NP2, O_WRONLY); /* Read from the first pipe */ numread = read(rdfd, buf, MAX_BUF_SIZE); buf[numread] = '\0'; fprintf(stderr, "Full Duplex Server : Read From the pipe : %s\n", buf); /* Convert the string to upper case */ count = 0; while (count < numread) { buf[count] = toupper(buf[count]); count++; } /* * Write the converted string back to the second * pipe */ write(wrfd, buf, strlen(buf)); } Edit: Right, just to clarify - it seems I found a document discussing something very similar, it is http://en.wikibooks.org/wiki/Serial_Programming/Serial_Linux#Configuration_with_stty - a modification of the script there ("For example, the following script configures the device and starts a background process for copying all received data from the serial device to standard output...") for the above program is below: # stty raw # ( ./fd_server 2>/dev/null; )& bgPidS=$! ( cat < /tmp/np2 ; )& bgPid=$! # Read commands from user, send them to device echo $(kill -0 $bgPidS 2>/dev/null ; echo $?)
    while [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] && read cmd; do # redirect debug msgs to stderr, as here we're redirected to /tmp/np1 echo "$? - $bgPidS - $bgPid" >&2 echo "$cmd" echo -e "\nproc: $(kill -0 $bgPidS 2>/dev/null ; echo $?)" >&2 done >/tmp/np1 echo OUT # Terminate background read process - if they still exist if [ "$(kill -0 $bgPid 2>/dev/null ; echo $?)" -eq "0" ] ; then kill $bgPid fi if [ "$(kill -0 $bgPidS 2>/dev/null ; echo $?)" -eq "0" ] ; then kill $bgPidS fi # stty cooked So, saving the script as, say, starter.sh and calling it results in the following session: $ ./starter.sh 0 i'm typing here and pressing [enter] at end 0 - 13496 - 13497 I'M TYPING HERE AND PRESSING [ENTER] AT END 0~?.N=?(?~? ?????}????@??????~? [garble] proc: 0 OUT which is what I'd call an "interactive session" (ignoring the debug statements) - the server waits for me to enter a command; it gives its output after it receives a command (and since in this case it exits after the first command, so does the starter script). Except that I'd like to not have buffered input, but have it sent character by character (meaning the above session should exit after the first key press, and print out a single letter only - which is what I expected stty raw would help with, but it doesn't: it just kills reaction to both Enter and Ctrl-C :) ) I was just wondering if there already is an existing command (akin to screen with respect to serial devices, I guess) that would accept two such named pipes as arguments, and establish a "terminal" or "shell" like session through them; or would I have to use scripts as above and/or program my own 'client' that will behave as a terminal..

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS, on a dell mini 10. Install stable until about a week ago. Updated about 1x a week, sometimes more often. Several days ago, I booted up and the system was no longer working correctly. All these symptoms occurred simultaneously: Cannot run (exit on opening, every time): Update manager, software center, ubuntuOne, libreOffice. Vinagre autostarts on boot, no explanation, not set to startup with Ubuntu. Using apt-get to fix install results in the following: maura@pandora:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following package was automatically installed and is no longer required: libtelepathy-farstream2 Use 'apt-get autoremove' to remove them. The following extra packages will be installed: gwibber gwibber-service kde-runtime-data software-center Suggested packages: gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm gwibber-service-qaiku unity-lens-gwibber The following packages will be upgraded: gwibber gwibber-service kde-runtime-data software-center 4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded. 20 not fully installed or removed. Need to get 0 B/5,682 kB of archives. After this operation, 177 kB of additional disk space will be used. Do you want to continue [Y/n]? debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9. BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9. Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122. Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10. Compilation failed in require at (eval 1) line 4. BEGIN failed--compilation aborted at (eval 1) line 4. ) -- aborting (Reading database ... 242672 files and directories currently installed.) Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gwibber-service 3.4.1-0ubuntu1 (using .../gwibber-service_3.4.2-0ubuntu1_all.deb) ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ... Unpacking replacement kde-runtime-data ... dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack): trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2 dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe) dpkg-deb: error: subprocess <decompress> returned error exit status 2 Preparing to replace python-crypto 2.4.1-1 (using .../python-crypto_2.4.1-1_i386.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2.2 (using .../software-center_5.2.4_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/software-center_5.2.4_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace xdiagnose 2.5 (using .../archives/xdiagnose_2.5_all.deb) ... 
Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pyclean", line 25, in <module> import logging ImportError: No module named logging dpkg: error processing /var/cache/apt/archives/xdiagnose_2.5_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Could not find platform dependent libraries <exec_prefix> Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging Error in sys.excepthook: Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook from apport.fileutils import likely_packaged, get_recent_crashes File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module> from apport.report import Report File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module> from xml.parsers.expat import ExpatError File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module> from pyexpat import * ImportError: No module named pyexpat Original exception was: Traceback (most recent call last): File "/usr/bin/pycompile", line 27, in <module> import logging ImportError: No module named logging dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb 
/var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb /var/cache/apt/archives/software-center_5.2.4_all.deb /var/cache/apt/archives/xdiagnose_2.5_all.deb E: Sub-process /usr/bin/dpkg returned an error code (1) maura@pandora:~$ ^C maura@pandora:~$

    Read the article

  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers, and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5, with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM.

    So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end. Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve.

    Things I have tried:
    - Going through standard procedures to set up DNS in WHM
    - Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through WHM procedures for setting up DNS
    - Clearing all DNS information, and setting up BIND via shell (completely outside of cPanel) by using my config and zone files from the LEMP server as a template

    named runs just fine, but nothing is resolving. When I "dig any example.com" I get a SERVFAIL message. Nslookups return no information. Here are my config and zone files.

    named.conf

        controls {
                inet 127.0.0.1 allow { localhost; } keys { coretext-key; };
        };
        options {
                listen-on port 53 { any; };
                listen-on-v6 port 53 { ::1; };
                directory "/var/named";
                dump-file "/var/named/data/cache_dump.db";
                statistics-file "/var/named/data/named_stats.txt";
                memstatistics-file "/var/named/data/named_mem_stats.txt";
                // Those options should be used carefully because they disable port
                // randomization
                // query-source port 53;
                // query-source-v6 port 53;
                allow-query { any; };
                allow-query-cache { any; };
        };
        logging {
                channel default_debug {
                        file "data/named.run";
                        severity dynamic;
                };
        };
        view "localhost_resolver" {
                match-clients { 127.0.0.0/24; };
                match-destinations { localhost; };
                recursion yes;
                //zone "." IN {
                //        type hint;
                //        file "/var/named/named.ca";
                //};
                include "/etc/named.rfc1912.zones";
        };
        view "internal" {
                /* This view will contain zones you want to serve only to "internal" clients
                   that connect via your directly attached LAN interfaces - "localnets". */
                match-clients { localnets; };
                match-destinations { localnets; };
                recursion yes;
                zone "." IN {
                        type hint;
                        file "/var/named/named.ca";
                };
                // include "/var/named/named.rfc1912.zones";
                // you should not serve your rfc1912 names to non-localhost clients.
                // These are your "authoritative" internal zones, and would probably
                // also be included in the "localhost_resolver" view above:
                zone "example.com" {
                        type master;
                        file "data/db.example.com";
                };
                zone "3.2.1.in-addr.arpa" {
                        type master;
                        file "data/db.1.2.3";
                };
        };
        view "external" {
                /* This view will contain zones you want to serve only to "external" clients
                 * that have addresses that are not on your directly attached LAN interface subnets: */
                match-clients { any; };
                match-destinations { any; };
                recursion no;
                // you'd probably want to deny recursion to external clients, so you don't
                // end up providing free DNS service to all takers
                allow-query-cache { none; };
                // Disable lookups for any cached data and root hints
                // all views must contain the root hints zone:
                //include "/etc/named.rfc1912.zones";
                zone "." IN {
                        type hint;
                        file "/var/named/named.ca";
                };
                zone "example.com" {
                        type master;
                        file "data/db.example.com";
                };
                zone "3.2.1.in-addr.arpa" {
                        type master;
                        file "data/db.1.2.3";
                };
        };
        include "/etc/rndc.key";

    db.example.com

        $TTL 1D
        ;
        ; Zone file for example.com
        ;
        ; Mandatory minimum for a working domain
        ;
        @               IN SOA  ns1.example.com. contact.example.com. (
                                2011042905      ; serial
                                8H              ; refresh
                                2H              ; retry
                                4W              ; expire
                                1D              ; default_ttl
        )
                        NS      ns1.example.com.
                        NS      ns2.example.com.
        ns1             A       1.2.3.4
        ns2             A       1.2.3.5
        example.com.    A       1.2.3.4
        localhost       A       127.0.0.1
        www             CNAME   example.com.
        mail            CNAME   example.com.
        ;

    db.1.2.3

        $TTL 1D
        $ORIGIN 3.2.1.in-addr.arpa.
        @               IN SOA  ns1.example.com contact.example.com. (
                                2011042908 ;
                                8H ;
                                2H ;
                                4W ;
                                1D ;
        )
                        NS      ns1.example.com.
                        NS      ns2.example.com.
        4               PTR     hostname.example.com.
        5               PTR     hostname.example.com.
        ;

    Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong, then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spends his whole shift asking all the same questions the last guy asked.

    So, in summary:
    * Nameservers, with IPs, are correctly registered with the domain registrar
    * named is configured and running
    * ...and must not be configured correctly, because nothing resolves.

    Any help would be great. I changed domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks!

    UPDATE: I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with my two public IPs that I have named listening on.

    resolv.conf

        search www.example.com example.com
        nameserver 127.0.0.1
        nameserver 7.8.9.10   ;Was in here by default, authoritative nameserver of hosting company
        nameserver 1.2.3.4    ;Public IP #1
        nameserver 1.2.3.5    ;Public IP #2

    Now when I dig example.com from the host, it resolves. If I try to dig from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
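
    Since dig works on the host itself but times out or returns SERVFAIL from anywhere else, one way to narrow this down is to probe port 53 on the public IP directly, bypassing dig and the local resolver entirely. Below is a minimal sketch of such a probe (assuming Python 3 is available; 1.2.3.4 and example.com are the same sanitized placeholders used in the question, not real values): it sends a single non-recursive A query over UDP and reports whether the server answers at all and, if so, with what RCODE and whether the answer is authoritative.

        #!/usr/bin/env python3
        # Minimal raw-DNS probe (a troubleshooting sketch, not from the original question).
        # It distinguishes "no answer at all" from "answered, but SERVFAIL/NXDOMAIN".
        # 1.2.3.4 / example.com are the sanitized placeholders from the question.
        import socket
        import struct

        def build_query(name, qtype=1, qclass=1):
            # Header: ID, flags (RD=0 -> query the authoritative server directly),
            # QDCOUNT=1, ANCOUNT/NSCOUNT/ARCOUNT=0
            header = struct.pack(">HHHHHH", 0x1234, 0x0000, 1, 0, 0, 0)
            # Question: length-prefixed labels, root byte, then QTYPE and QCLASS
            labels = b"".join(
                bytes([len(part)]) + part.encode("ascii")
                for part in name.rstrip(".").split(".")
            )
            return header + labels + b"\x00" + struct.pack(">HH", qtype, qclass)

        def probe(server_ip, name, timeout=3.0):
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.settimeout(timeout)
                sock.sendto(build_query(name), (server_ip, 53))
                try:
                    data, _ = sock.recvfrom(512)
                except socket.timeout:
                    return "no response: port 53 filtered, or named not listening on this IP"
            flags = struct.unpack(">H", data[2:4])[0]
            rcode = flags & 0x000F                 # 0=NOERROR, 2=SERVFAIL, 3=NXDOMAIN
            authoritative = bool(flags & 0x0400)   # AA bit
            return "rcode=%d, authoritative answer=%s" % (rcode, authoritative)

        if __name__ == "__main__":
            print(probe("1.2.3.4", "example.com"))

    If this probe times out from an outside machine while the same probe on the server itself returns an authoritative NOERROR, the zone files are probably fine and the more likely culprits are iptables, the datacenter firewall, or named not actually listening on the public interface; if it answers with rcode=2 from everywhere, the "external" view and its zone files are the place to look.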

    Read the article

  • Unit testing internal methods in a strongly named assembly/project

    - by Rohit Gupta
    If you need to create unit tests for internal methods within an assembly in Visual Studio 2005 or greater, you need to add an entry to the AssemblyInfo.cs file of the assembly you are creating the unit tests for. For example, if you need to create tests for an assembly named FincadFunctions.dll, and this assembly contains internal/friend methods that you need to write unit tests for, then add an entry to FincadFunctions.dll's AssemblyInfo.cs file like so:

        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")]

    where FincadFunctionsTests is the name of the unit test project that contains the unit tests. However, if FincadFunctions.dll is a strongly named assembly, then you will get the following error when compiling the FincadFunctions.dll assembly:

        Friend assembly reference "FincadFunctionsTests" is invalid. Strong-name assemblies must specify a public key in their InternalsVisibleTo declarations.

    To add the public key to the InternalsVisibleTo declaration, do the following. You need the .snk file that was used to strong-name the FincadFunctions.dll assembly. You can extract the public key from this .snk with the sn.exe tool from the .NET SDK. First we extract just the public key from the key pair (.snk) file into another .snk file:

        sn -p test.snk test.pub

    Then we ask for the value of that public key (note we need the long hex key, not the short public key token):

        sn -tp test.pub

    We end up getting a super long string of hex, but that's just what we want: the public key value of this key pair. We add it to the strongly named project "FincadFunctions.dll" that we want to expose our internals from. What before looked like:

        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests")]

    now looks like:

        [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("FincadFunctionsTests, PublicKey=002400000480000094000000060200000024000052534131000400000100010011fdf2e48bb")]

    And we're done. Hope this helps.
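
    To illustrate what the attribute actually buys you, here is a rough sketch (not from the original post) of what a test inside the FincadFunctionsTests project can look like once the InternalsVisibleTo declaration is in place; the internal class DiscountCurveHelper, its InterpolateRate method, and the expected value are all invented purely for illustration.

        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using FincadFunctions; // hypothetical namespace of the assembly under test

        namespace FincadFunctionsTests
        {
            [TestClass]
            public class DiscountCurveHelperTests
            {
                // DiscountCurveHelper is assumed to be an *internal* class in
                // FincadFunctions.dll; this test project can reference it only
                // because of the InternalsVisibleTo declaration shown above.
                [TestMethod]
                public void InterpolateRate_ReturnsKnownValue_AtKnownTenor()
                {
                    var helper = new DiscountCurveHelper();    // hypothetical internal type
                    double rate = helper.InterpolateRate(1.0); // hypothetical internal method
                    Assert.AreEqual(0.05, rate, 1e-9);         // hypothetical expected value
                }
            }
        }

    Note that when the granting assembly is strongly named, the friend (test) assembly must itself be signed with a key whose public key matches the one embedded in the InternalsVisibleTo string, otherwise the friend access will be rejected.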

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >