Search Results

Search found 2875 results on 115 pages for 'anonymous human'.


  • Reference properties declared in a protocol and implemented in the anonymous category?

    - by Heath Borders
    I have the following protocol:

```objc
@protocol MyProtocol
@property (nonatomic, retain) NSObject *myProtocolProperty;
- (void) myProtocolMethod;
@end
```

    and I have the following class:

```objc
@interface MyClass : NSObject {
}
@end
```

    In my class extension, I have to redeclare my protocol properties or else I can't implement them with the rest of my class:

```objc
@interface MyClass () <MyProtocol>
@property (nonatomic, retain) NSObject *myExtensionProperty;
/*
 * This redeclaration is required or my @synthesize myProtocolProperty fails
 */
@property (nonatomic, retain) NSObject *myProtocolProperty;
- (void) myExtensionMethod;
@end

@implementation MyClass
@synthesize myProtocolProperty = _myProtocolProperty;
@synthesize myExtensionProperty = _myExtensionProperty;
- (void) myProtocolMethod { }
- (void) myExtensionMethod { }
@end
```

    In a consumer method, I can call my protocol methods and properties just fine. Calling my extension methods and properties produces a warning and an error, respectively:

```objc
- (void) consumeMyClassWithMyProtocol: (MyClass<MyProtocol> *) myClassWithMyProtocol {
    myClassWithMyProtocol.myProtocolProperty;    // works, yay!
    [myClassWithMyProtocol myProtocolMethod];    // works, yay!
    myClassWithMyProtocol.myExtensionProperty;   // compiler error, yay!
    [myClassWithMyProtocol myExtensionMethod];   // compiler warning, yay!
}
```

    Is there any way I can avoid redeclaring the properties from MyProtocol within my class extension in order to implement MyProtocol privately?


  • SQL SERVER – Why Do We Need Data Quality Services – Importance and Significance of Data Quality Services (DQS)

    - by pinaldave
    Databases are awesome.  I'm sure my readers know my opinion about this – I have made SQL Server my life's work, after all!  I love technology and all things computer-related.  Of course, even with my love for technology, I have to admit that it has its limits.  For example, it takes a human brain to notice that data has been input incorrectly.  Computer "brains" might be faster than humans, but human brains are still better at pattern recognition.  For example, a human brain will notice that "300" is a ridiculous age for a human to be, but to a computer it is just a number.  A human will also notice similarities between "P. Dave" and "Pinal Dave," but this would stump most computers.

    In a database, these sorts of anomalies are incredibly important.  Databases are often used by multiple people who rely on the data being true and accurate, so data quality is key.  That is why SQL Server's improved Master Data Management story includes Data Quality Services (DQS).  This service has the ability to recognize and flag anomalies like out-of-range numbers and similarities between data.  This allows a human brain, with its pattern-recognition abilities, to double-check and ensure that P. Dave is the same as Pinal Dave.

    A nice feature of Data Quality Services is that once you set the rules for the program to follow, it will not only keep your data organized in the future, but also go back and "fix up" any data that has already been entered.  It also allows you to combine data from multiple places and will apply these rules across the board, so that you don't have any weird issues that crop up when trying to fit a round peg into a square hole.

    There are two parts of Data Quality Services that help you accomplish all these neat things.  The first part is the DQS Server, which you can think of as the engine of the system.  It is installed alongside SQL Server (it must be installed separately, after SQL Server is installed) and runs quietly in the background, performing all its cleanup services.  The DQS Client is the user interface that you interact with to set the rules and check over your data.  There are three main aspects of the Client: knowledge base management, data quality projects, and administration.  Knowledge base management is the part of the system that allows you to set the rules, or program the "knowledge base," so that your database is clean and consistent.  Data quality projects are what run in the background and clean up the data that is already present.  Administration allows you to check on what the DQS Client is doing, change rules, and generally oversee the entire process.  The whole process is user-friendly and a pleasure to use.  I highly recommend implementing Data Quality Services in your database.
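    DQS itself is configured through the DQS Client rather than through code, but the kind of check it automates (out-of-range values and near-duplicate strings, as in the age-300 and "P. Dave"/"Pinal Dave" examples above) can be sketched in a few lines of Python with the standard library's difflib. This is an illustration of the concept only, not of DQS; the field names and the 0.7 similarity threshold are arbitrary assumptions:

```python
from difflib import SequenceMatcher

records = [
    {"name": "Pinal Dave", "age": 32},
    {"name": "P. Dave", "age": 300},  # out-of-range age; probably the same person
]

def flag_anomalies(records, max_age=120, similarity_threshold=0.7):
    """Flag out-of-range values and near-duplicate names for human review."""
    flags = []
    for rec in records:
        if not 0 <= rec["age"] <= max_age:
            flags.append(f"suspicious age {rec['age']} for {rec['name']!r}")
    for i, a in enumerate(records):
        for b in records[i + 1:]:
            ratio = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
            if ratio >= similarity_threshold:
                flags.append(f"possible duplicate: {a['name']!r} ~ {b['name']!r}")
    return flags

print(flag_anomalies(records))  # flags both the age and the near-duplicate pair
```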
    Here are a few of my blog posts related to Data Quality Services, and I encourage you to try it out:

    • SQL SERVER – Installing Data Quality Services (DQS) on SQL Server 2012
    • SQL SERVER – Step by Step Guide to Beginning Data Quality Services in SQL Server 2012 – Introduction to DQS
    • SQL SERVER – DQS Error – Cannot connect to server – A .NET Framework error occurred during execution of user-defined routine or aggregate "SetDataQualitySessions" – SetDataQualitySessionPhaseTwo
    • SQL SERVER – Configuring Interactive Cleansing Suggestion Min Score for Suggestions in Data Quality Services (DQS) – Sensitivity of Suggestion
    • SQL SERVER – Unable to DELETE Project in Data Quality Projects (DQS)

    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • What common interface would be appropriate for these game object classes?

    - by Jefffrey
    Question: A component-based system's goal is to solve the problems that derive from inheritance: for example, the fact that some parts of the code (the components) are reused by very different classes that, hypothetically, would lie in very different branches of the inheritance tree. That's a very nice concept, but I've found out that CBS is often hard to accomplish without using ugly hacks. Implementations of this system are often far from clean. But I don't want to discuss this any further. My question is: how can I solve the same problems a CBS tries to solve with a very clean interface? (Possibly with examples; there is already a lot of abstract talk about the "perfect" design.)

    Context: Here's an example I was going for before realizing I was just reinventing inheritance again:

```cpp
class Human {
public:
    Position position;
    Movement movement;
    Sprite sprite;
    // other human specific components
};

class Zombie {
    Position position;
    Movement movement;
    Sprite sprite;
    // other zombie specific components
};
```

    After writing that I realized I needed an interface, otherwise I would have needed N containers for N different types of objects (or to use boost::variant to gather them all together). So I thought of polymorphism (moving what systems do in a CBS design into class-specific functions):

```cpp
class Entity {
public:
    virtual void on_event(Event) {} // not pure virtual on purpose
    virtual void on_update(World) {}
    virtual void on_draw(Window) {}
};

class Human : public Entity {
private:
    Position position;
    Movement movement;
    Sprite sprite;
public:
    virtual void on_event(Event) { ... }
    virtual void on_update(World) { ... }
    virtual void on_draw(Window) { ... }
};

class Zombie : public Entity {
private:
    Position position;
    Movement movement;
    Sprite sprite;
public:
    virtual void on_event(Event) { ... }
    virtual void on_update(World) { ... }
    virtual void on_draw(Window) { ... }
};
```

    Which was nice, except for the fact that now the outside world cannot even know where a Human is positioned (it does not have access to its position member). That would be useful for tracking the player position for collision detection, or if on_update the Zombie wanted to track down its nearest human to move towards him. So I added `const Position& get_position() const;` to both the Zombie and Human classes. And then I realized that this functionality was shared, so it should have gone into the common base class: Entity. Do you notice anything? Yes, with that methodology I would end up with a god Entity class full of common functionality (which is the thing I was trying to avoid in the first place).

    Meaning of "hacks" in the implementations I'm referring to: I'm talking about the implementations that define entities as simple IDs to which components are dynamically attached. Their implementation can vary from C-style:

```cpp
int last_id;
Position* positions[MAX_ENTITIES];
Movement* movements[MAX_ENTITIES];
```

    where positions[i], movements[i], components[i], ... make up the entity, to more C++-style:

```cpp
int last_id;
std::map<int, Position> positions;
std::map<int, Movement> movements;
```

    from which systems can detect whether an entity/id has attached components.
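    A minimal sketch of the map-per-component approach described above, written in Python for brevity (all names here are illustrative, not from any framework). The point is that a system acts on whichever entity IDs have the relevant components attached, so no common base class or god object is needed:

```python
from dataclasses import dataclass

@dataclass
class Position:
    x: float = 0.0
    y: float = 0.0

@dataclass
class Movement:
    dx: float = 0.0
    dy: float = 0.0

# Entities are plain integer IDs; each component type lives in its own map.
positions = {}
movements = {}
_next_id = 0

def create_entity():
    global _next_id
    _next_id += 1
    return _next_id

def move_system(dt):
    # Acts on every entity that has BOTH a Movement and a Position.
    for eid, mov in movements.items():
        pos = positions.get(eid)
        if pos is not None:
            pos.x += mov.dx * dt
            pos.y += mov.dy * dt

human = create_entity()
positions[human] = Position()
movements[human] = Movement(dx=1.0)
move_system(dt=0.5)
print(positions[human])  # Position(x=0.5, y=0.0)
```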


  • WCF service with Factory attribute on .svc is not working on web server (IIS6), but works locally (IIS7)

    - by Jessica
    I am working on implementing a non-web.config approach to WCF services, using the Factory attribute on the .svc file per Rick Strahl's blog post:

```
Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory"
```

    Locally, I am running IIS7 in Visual Studio 2008 and have no problem, but when I deploy to my web server (currently running IIS6), I am getting an authentication error in the event log:

```
Exception: System.ServiceModel.ServiceActivationException: The service '/Services/ResourcesService.svc' cannot be activated due to an exception during compilation. The exception message is: IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used.
---> System.InvalidOperationException: IIS specified authentication schemes 'IntegratedWindowsAuthentication, Anonymous', but the binding only supports specification of exactly one authentication scheme. Valid authentication schemes are Digest, Negotiate, NTLM, Basic, or Anonymous. Change the IIS settings so that only a single authentication scheme is used.
   at System.ServiceModel.Web.WebServiceHost.SetBindingCredentialBasedOnHostedEnvironment(ServiceEndpoint serviceEndpoint, AuthenticationSchemes supportedSchemes)
   at System.ServiceModel.Web.WebServiceHost.AddAutomaticWebHttpBindingEndpoints(ServiceHost host, IDictionary`2 implementedContracts, String multipleContractsErrorMessage)
   at System.ServiceModel.WebScriptServiceHost.OnOpening()
   at System.ServiceModel.Channels.CommunicationObject.Open(TimeSpan timeout)
   at System.ServiceModel.Channels.CommunicationObject.Open()
   at System.ServiceModel.ServiceHostingEnvironment.HostingManager.ActivateService(String normalizedVirtualPath)
   at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
```

    After doing some Googling, I changed the authentication settings on the .svc folder within my project (on the server) to anonymous authentication only, but it did not work; the web service calls still fail. IIS7 by default had only anonymous. I do not have any entries in my web.config for the services (I stripped them out per this pattern). I use a NAnt script to deploy the website to the server, and I also used it locally to verify the script was not causing the issue. Is this a known issue? Is IIS 6 not able to handle this?


  • PHP: Overriding abstract method goes wrong

    - by Lu4
    Hi! I think there is a problem in PHP's OOP implementation.

    EDIT: Consider this more illustrative example:

```php
abstract class Animal {
    public $name;
    // public function Communicate(Animal $partner) {} // Works
    public abstract function Communicate(Animal $partner); // Gives error
}

class Panda extends Animal {
    public function Communicate(Panda $partner) {
        echo "Hi {$partner->name} I'm a Panda";
    }
}

class Human extends Animal {
    public function Communicate(Human $partner) {
        echo "Hi {$partner->name} I'm a Human";
    }
}

$john = new Human();
$john->name = 'John';
$mary = new Human();
$mary->name = 'Mary';
$john->Communicate($mary); // should be ok

$zuzi = new Panda();
$zuzi->name = 'Zuzi';
$zuzi->Communicate($john); // should give error
```

    The problem is that when Animal::Communicate is an abstract method, PHP says that the following methods are illegal:

```php
public function Communicate(Panda $partner)
public function Communicate(Human $partner)
```

    but when Animal::Communicate is non-abstract with an empty implementation, PHP thinks these methods are legal. In my opinion that's not right, because we are overriding in both cases and both cases are equal, so it looks like a bug...

    Older part of the post. Please consider the following code:

    Framework.php:

```php
namespace A {
    class Component { ... }

    abstract class Decorator {
        public abstract function Decorate(\A\Component $component);
    }
}
```

    Implementation.php:

```php
namespace B {
    class MyComponent extends \A\Component { ... }
}
```

    MyDecorator.php:

```php
namespace A {
    class MyDecorator extends Decorator {
        public function Decorate(\B\MyComponent $component) { ... }
    }
}
```

    This code gives the following error in MyDecorator.php:

```
Fatal error: Declaration of MyDecorator::Decorate() must be compatible with that of A\Decorator::Decorate() in MyDecorator.php on line ...
```

    But when I change the Decorator class in Framework.php to the following implementation, the problem disappears:

```php
abstract class Decorator {
    public function Decorate(\A\Component $component) {}
}
```


  • Why does this MySQL function return null?

    - by Shore
    Description: the query actually run returns 4 rows, as can be seen below. All I do is concatenate the items and return the result, but unexpectedly it's NULL. I think the code is self-explanatory:

```sql
DELIMITER |

DROP FUNCTION IF EXISTS get_idiscussion_ask|

CREATE FUNCTION get_idiscussion_ask(iask_id INT UNSIGNED)
    RETURNS TEXT
    DETERMINISTIC
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE body varchar(600);
    DECLARE created DATETIME;
    DECLARE anonymous TINYINT(1);
    DECLARE screen_name varchar(64);
    DECLARE result TEXT;
    DECLARE cur1 CURSOR FOR
        SELECT body, created, anonymous, screen_name
        FROM idiscussion
        LEFT JOIN users ON idiscussion.uid = users.id
        WHERE idiscussion.iask_id = iask_id;
    DECLARE CONTINUE HANDLER FOR SQLSTATE '02000' SET done = 1;

    SET result = '';

    OPEN cur1;
    REPEAT
        FETCH cur1 INTO body, created, anonymous, screen_name;
        SET result = CONCAT(result, '<comment><body><![CDATA[', body, ']]></body>',
            '<replier>', if(screen_name is not null and !anonymous, screen_name, ''), '</replier>',
            '<created>', created, '</created></comment>');
    UNTIL done END REPEAT;
    CLOSE cur1;

    RETURN result;
END |

DELIMITER ;
```

```
mysql> select get_idiscussion_ask(1);
+------------------------+
| get_idiscussion_ask(1) |
+------------------------+
| NULL                   |
+------------------------+
1 row in set (0.01 sec)

mysql> SELECT body,created,anonymous,screen_name from idiscussion left join users on idiscussion.uid=users.id where idiscussion.iask_id=1;
+------+---------------------+-----------+-------------+
| body | created             | anonymous | screen_name |
+------+---------------------+-----------+-------------+
| haha | 2009-05-27 04:57:51 |         0 | NULL        |
| haha | 2009-05-27 04:57:52 |         0 | NULL        |
| haha | 2009-05-27 04:57:52 |         0 | NULL        |
| haha | 2009-05-27 04:57:53 |         0 | NULL        |
+------+---------------------+-----------+-------------+
4 rows in set (0.00 sec)
```

    For those who don't think the code is self-explanatory: why does the function return NULL?


  • ApplicationPoolIdentity IIS 7.5 to SQL Server 2008 R2 not working.

    - by Jack
    I have a small ASP.NET test script that opens a connection to a SQL Server database on another machine in the domain. It isn't working in all cases.

    Setup: IIS 7.5 under Windows Server 2008 R2, trying to connect to a remote SQL Server 2008 R2 instance. All machines are in the same domain. Using the ApplicationPoolIdentity for the web site, it fails to connect to the SQL Server with the following:

```
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.Data.SqlClient.SqlException: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
```

    However, if I switch the Process Model Identity to NETWORK SERVICE or my domain account, the database connection is successful. I've granted the machine account (DOMAIN\MACHINENAME$) access in SQL Server. I am not doing any sort of authentication on the web site; it is just a simple script to open a connection to a database to make sure it works. I have Anonymous Authentication enabled and set to use the application pool identity.

    How do I make this work? Why is the ApplicationPoolIdentity trying to use ANONYMOUS LOGON? Better yet, how do I make it stop using the anonymous logon?


  • Is “Application Programming Interface” a bad name?

    - by Taylor Hawkes
    Application programming interface seems like a bad name for what it is. Is there a reason it was named that? I understand that people used to call them Advanced Programming Interfaces and then renamed them to Application Programming Interface. Is that why it is poorly named? Why is it not named Application (to) Programmer Interface? I guess I'm just confused about the meaning behind that name. I write more about my confusion around the name here:

    BREAKING DOWN THE PHRASE "APPLICATION PROGRAMMING INTERFACE": This is a very confusing phrase. We mostly understand what the word "interface" means, but what even is "application programming"? Honestly, I'm confused. Are those supposed to be two separate words, "application" and "programming", with "interface" meaning something between the two? By that logic, would a "computer human interface" be an interface between a "computer" and a "human" (monitor, keyboard, mouse), or is a "computer human" a real thing, perhaps the Terminator? So a CHI would be our boy Kyle Reese, the only way we are able to work with the computer human. I think it is more likely that "Application Programming Interface" was simply poorly named and doesn't really make sense. It was originally called an "Advanced Programming Interface", but, perhaps being a bit too ostentatious, merged into the now widely accepted "Application Programming Interface". So now, not wanting to change an acronym has confused the living heck out of everyone...

    Any thoughts or clarification would be great. I'm giving a lecture on this topic in a month, so I would prefer not to BS my way through it.


  • How customers view and interact with a company

    The Harvard Business Review article written by Rayport and Jaworski is aptly titled "Best Face Forward" because it sheds light on how customers view and interact with a company. In the past, most business interaction with customers was performed in a face-to-face meeting, where one party would present an item for sale and the other would decide whether to purchase it. In addition, if there was a problem with a purchased item, the buyer would bring the item back to the person who sold it for resolution. One of my earliest examples of witnessing this was when I was around 6 or 7 years old and was allowed to spend the summer in Tennessee with my grandparents. My grandfather had just written a book about the local history of his town and was selling copies to his friends and local bookstores. I still remember he offered to pay me a small commission for every book I helped him sell, because I was carrying the books around for him. Every sale he made was face to face with his customers, which allowed him to share his excitement for the book with everyone.

    In today's modern world there is less and less human interaction, as the use of computers and other technologies allows us to communicate within seconds even though both parties may be across the globe or just next door. That being said, customers view a company through multiple access points, called faces, that represent the ability to interact without actually seeing a human face. As a software engineer, this is both a good and a bad thing, because direct human interaction and technology-based interaction each have good and bad attributes depending on the customer.

    How organizations coordinate business and IT functions to provide quality service varies based on each individual business and the goals and directives put in place by its management. According to Rayport and Jaworski, the type of interaction used through a particular access point may lend itself to being people-dominant, machine-dominant, or a combination of both. The method by which a company communicates information through an access point is a strategic choice that relates costs to customer outcomes. To simplify, the choice is based on what gives the customer the best experience interacting with the company when the cost of the interaction is also a factor. I personally see examples of this every day at work. The company website is machine-dominant, with people updating and maintaining information; our group's department is people-dominant, because most of the customer interaction is done at the customer's location and is backed up by machine-based data sources; and our sales/member service department is a hybrid, because employees work in tandem with machines in order to assist customers with signing up or any other issue they may have.

    The positive and negative aspects of human and machine interfaces are a key consideration in deciding which interface, or combination of the two, to use when allowing customers to access a company. Rayport and Jaworski also cite MIT professor Erik Brynjolfsson's preliminary catalog of human and machine strengths. He stated that humans outperform machines in judgment, pattern recognition, exception processing, insight, and creativity. I have found this to be true, based on how the sales and member service reps at my company handle a multitude of questions and various situations with many unknown variables. A machine interface could never effectively handle these scenarios, because there are too many variables to consider and it would not have the built-in logic to process each customer's claims and needs. In addition, he also stated that machines outperform humans in collecting, storing, transmitting, and routine processing. An example of this is my employer's website. Customers can simply go online and purchase a product without even talking to a sales or member services representative. The information is then stored in a database so that the customer can always go back, review their order, and access their selected services. A human, no matter how smart, would never be able to keep track of hundreds of thousands of customers, let alone know what they purchased or how much they paid.

    In today's technology-driven economy, every company must offer its customers multiple methods of access in order to survive. The more opportunities a company has to create a positive experience for its customers, in my opinion, the more likely the customer will return to that company. I have noticed this in my own shopping habits and experiences.

    References: Rayport, J., & Jaworski, B. (2004). Best Face Forward. Harvard Business Review, 82(12), 47-58. Retrieved from the Business Source Complete database.


  • Potential issues with multiple home pages

    - by Maxim Zaslavsky
    I have a site where I want two different home pages: a general description page for anonymous users, and a dashboard page for logged-in users. I am debating between two implementations:

    1. Both pages live at /.
    2. The page for anonymous users is located at / and the dashboard is at /dashboard, with automatic redirection between them based on whether a given user is logged in (e.g., if you're logged in and navigate to /, you are redirected to /dashboard).

    Is it cleaner to have both pages use the same URL or separate URLs? Also, I imagine the choice will affect the following:

    • Caching: the anonymous page would be completely cached, while the logged-in page would not be cached at all (except for static resources). This could lead to issues with server caching, request speed, and UX (such as if one version of the page is cached in a user's browser when the other version should be displayed instead).
    • SEO: how would search engines react to such canonical URLs?
    • Load time (due to redirects, or to the server having to always reevaluate which page to display).
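    A minimal sketch of the second option (separate URLs plus redirects); Flask is used purely for illustration, and is_logged_in() is a stand-in for whatever session check the site actually uses:

```python
from flask import Flask, redirect, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # placeholder

def is_logged_in():
    return bool(session.get("user_id"))

@app.route("/")
def home():
    if is_logged_in():
        return redirect("/dashboard")
    # The anonymous page: identical for everyone, so it can be cached aggressively.
    return "General description page"

@app.route("/dashboard")
def dashboard():
    if not is_logged_in():
        return redirect("/")
    # The per-user page: served with no caching.
    return "Dashboard for user %s" % session["user_id"]
```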


  • What triggered the popularity of lambda functions in modern programming languages?

    - by Giorgio
    In the last few years, anonymous functions (AKA lambda functions) have become a very popular language construct, and almost every major/mainstream programming language has introduced them or plans to introduce them in an upcoming revision of its standard. Yet anonymous functions are a very old and very well-known concept in mathematics and computer science (invented by the mathematician Alonzo Church around 1936, and used in the Lisp programming language since 1958; see e.g. here). So why didn't today's mainstream programming languages (many of which originated 15 to 20 years ago) support lambda functions from the very beginning, only introducing them later? And what triggered the massive adoption of anonymous functions in the last few years? Is there some specific event, new requirement, or programming technique that started this phenomenon?


  • Passing class names or objects?

    - by nischayn22
    I have a switch statement:

```php
switch ( $id ) {
    case 'abc':
        return 'Animal';
    case 'xyz':
        return 'Human';
    // many more
}
```

    I am returning class names and using them to call some of the classes' static functions via call_user_func(). Instead, I could also create an object of that class, return it, and then call the static function on that object as $object::method($param):

```php
switch ( $id ) {
    case 'abc':
        return new Animal;
    case 'xyz':
        return new Human;
    // many more
}
```

    Which way is more efficient?

    To make this question broader: right now I have classes consisting mostly of static methods; putting them into classes is a grouping idea here (for example, the DB table structure of Animal is given by the Animal class, and likewise for the Human class). I need to access many functions from these classes, so the switch needs to give me access to the class.
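    For illustration only, here is the same idea in Python, where classes are first-class objects: a lookup table replaces the switch, and the static methods are called on the class itself without ever constructing an instance (all names here are made up). The analogous move in PHP is returning the class name string and calling ClassName::method() or call_user_func() on it, which avoids the cost of new entirely:

```python
class Animal:
    @staticmethod
    def table_structure():
        return ["id", "species", "legs"]

class Human:
    @staticmethod
    def table_structure():
        return ["id", "name", "email"]

# Replaces the switch statement: IDs map straight to class objects.
CLASS_FOR_ID = {"abc": Animal, "xyz": Human}

def resolve(id_):
    return CLASS_FOR_ID[id_]

print(resolve("xyz").table_structure())  # ['id', 'name', 'email']
```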


  • Resource Monitor (resmon) in Windows Server 2008 R2

    - by Clever Human
    In Windows Server 2008 R2's Resource Monitor, is there a way to set the scale of the various graphs to constant values instead of varying them based on the data? It seems to me that the utility of a graph is to get a quick overview glance at the values those graphs are showing. So if I look at the CPU graph and the line is up near the top, I can know immediately that something is using all my CPU and go investigate what. I don't really care if the CPU is jumping between .01% and 2%. Or if the network usage monitor is up near the top, I will know that all my bandwidth is being used up, and go figure out what. But the way things are now, the graphs are meaningless because the scales constantly shift. If you look at the network usage graph one second it might have a scale topping out at 100 Kbps, and the next second a scale based on 1 Mbps! So... is there a registry key or something that will peg the scale of these graphs to logical maximums?


  • ProFTPd server on Ubuntu getting access denied message when successfully authenticated?

    - by exxoid
    I have an Ubuntu box with a ProFTPD 1.3.4a server. When I try to log in via my FTP client I cannot do anything, as it does not allow me to list directories. I have tried logging in as root and as a regular user, and tried accessing different paths within the FTP server. The error I get in my FTP client is:

```
Status:   Retrieving directory listing...
Command:  CDUP
Response: 250 CDUP command successful
Command:  PWD
Response: 257 "/var" is the current directory
Command:  PASV
Response: 227 Entering Passive Mode (172,16,4,22,237,205).
Command:  MLSD
Response: 550 Access is denied.
Error:    Failed to retrieve directory listing
```

    Any idea? Here is the config of my proftpd:

```
#
# /etc/proftpd/proftpd.conf -- This is a basic ProFTPD configuration file.
# To really apply changes, reload proftpd after modifications, if
# it runs in daemon mode. It is not required in inetd/xinetd mode.
#

# Includes DSO modules
Include /etc/proftpd/modules.conf

# Set off to disable IPv6 support which is annoying on IPv4 only boxes.
UseIPv6 off
# If set on you can experience a longer connection delay in many cases.
IdentLookups off

ServerName "Drupal Intranet"
ServerType standalone
ServerIdent on "FTP Server ready"
DeferWelcome on

# Set the user and group that the server runs as
User nobody
Group nogroup

MultilineRFC2228 on
DefaultServer on
ShowSymlinks on

TimeoutNoTransfer 600
TimeoutStalled 600
TimeoutIdle 1200

DisplayLogin welcome.msg
DisplayChdir .message true
ListOptions "-l"

DenyFilter \*.*/

# Use this to jail all users in their homes
# DefaultRoot ~

# Users require a valid shell listed in /etc/shells to login.
# Use this directive to release that constrain.
# RequireValidShell off

# Port 21 is the standard FTP port.
Port 21

# In some cases you have to specify passive ports range to by-pass
# firewall limitations. Ephemeral ports can be used for that, but
# feel free to use a more narrow range.
# PassivePorts 49152 65534

# If your host was NATted, this option is useful in order to
# allow passive tranfers to work. You have to use your public
# address and opening the passive ports used on your firewall as well.
# MasqueradeAddress 1.2.3.4

# This is useful for masquerading address with dynamic IPs:
# refresh any configured MasqueradeAddress directives every 8 hours
<IfModule mod_dynmasq.c>
# DynMasqRefresh 28800
</IfModule>

# To prevent DoS attacks, set the maximum number of child processes
# to 30. If you need to allow more than 30 concurrent connections
# at once, simply increase this value. Note that this ONLY works
# in standalone mode, in inetd mode you should use an inetd server
# that allows you to limit maximum number of processes per service
# (such as xinetd)
MaxInstances 30

# Set the user and group that the server normally runs at.
# Umask 022 is a good standard umask to prevent new files and dirs
# (second parm) from being group and world writable.
Umask 022 022

# Normally, we want files to be overwriteable.
AllowOverwrite on

# Uncomment this if you are using NIS or LDAP via NSS to retrieve passwords:
# PersistentPasswd off

# This is required to use both PAM-based authentication and local passwords
AuthPAMConfig proftpd
AuthOrder mod_auth_pam.c* mod_auth_unix.c

# Be warned: use of this directive impacts CPU average load!
# Uncomment this if you like to see progress and transfer rate with ftpwho
# in downloads. That is not needed for uploads rates.
# UseSendFile off

TransferLog /var/log/proftpd/xferlog
SystemLog /var/log/proftpd/proftpd.log

# Logging onto /var/log/lastlog is enabled but set to off by default
#UseLastlog on

# In order to keep log file dates consistent after chroot, use timezone info
# from /etc/localtime. If this is not set, and proftpd is configured to
# chroot (e.g. DefaultRoot or <Anonymous>), it will use the non-daylight
# savings timezone regardless of whether DST is in effect.
#SetEnv TZ :/etc/localtime

<IfModule mod_quotatab.c>
QuotaEngine off
</IfModule>

<IfModule mod_ratio.c>
Ratios off
</IfModule>

# Delay engine reduces impact of the so-called Timing Attack described in
# http://www.securityfocus.com/bid/11430/discuss
# It is on by default.
<IfModule mod_delay.c>
DelayEngine on
</IfModule>

<IfModule mod_ctrls.c>
ControlsEngine off
ControlsMaxClients 2
ControlsLog /var/log/proftpd/controls.log
ControlsInterval 5
ControlsSocket /var/run/proftpd/proftpd.sock
</IfModule>

<IfModule mod_ctrls_admin.c>
AdminControlsEngine off
</IfModule>

#
# Alternative authentication frameworks
#
#Include /etc/proftpd/ldap.conf
#Include /etc/proftpd/sql.conf

#
# This is used for FTPS connections
#
#Include /etc/proftpd/tls.conf

#
# Useful to keep VirtualHost/VirtualRoot directives separated
#
#Include /etc/proftpd/virtuals.con

# A basic anonymous configuration, no upload directories.
# <Anonymous ~ftp>
#   User ftp
#   Group nogroup
#   # We want clients to be able to login with "anonymous" as well as "ftp"
#   UserAlias anonymous ftp
#   # Cosmetic changes, all files belongs to ftp user
#   DirFakeUser on ftp
#   DirFakeGroup on ftp
#
#   RequireValidShell off
#
#   # Limit the maximum number of anonymous logins
#   MaxClients 10
#
#   # We want 'welcome.msg' displayed at login, and '.message' displayed
#   # in each newly chdired directory.
#   DisplayLogin welcome.msg
#   DisplayChdir .message
#
#   # Limit WRITE everywhere in the anonymous chroot
#   <Directory *>
#     <Limit WRITE>
#       DenyAll
#     </Limit>
#   </Directory>
#
#   # Uncomment this if you're brave.
#   # <Directory incoming>
#   #   # Umask 022 is a good standard umask to prevent new files and dirs
#   #   # (second parm) from being group and world writable.
#   #   Umask 022 022
#   #   <Limit READ WRITE>
#   #     DenyAll
#   #   </Limit>
#   #   <Limit STOR>
#   #     AllowAll
#   #   </Limit>
#   # </Directory>
# </Anonymous>

# Include other custom configuration files
Include /etc/proftpd/conf.d/

UseReverseDNS off

<Global>
RootLogin on
UseFtpUsers on
ServerIdent on
DefaultChdir /var/www
DeleteAbortedStores on
LoginPasswordPrompt on
AccessGrantMsg "You have been authenticated successfully."
</Global>
```

    Any idea what could be wrong? Thanks for your help!



  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know to get Windows 7 SP1 (64-bit) to install. It fails every time. Below is what looks like the relevant content of the CBS.log file. If further details would help, or there is more information I can gather, I will get it.

```
2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard
```

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run every troubleshooter I could find. Nothing has worked so far. Any advice would be appreciated.


  • A "never seen before" javascript error (Access to property denied" code: "1010)

    - by davykiash
    I was testing my jqGrid, and when attempting to add details through jqGrid's modal window I came across the following error in my Firebug console (I am using Mozilla Firefox):

```
Access to property denied" code: "1010
checkValues("2010-04-20", 5, table#list.ui-jqgrid-btable)  grid.common.js (line 441)
postIt()                                                   grid.formedit.js (line 794)
anonymous(Object originalEvent=Event click type=click)     grid.formedit.js (line 463)
anonymous(Object originalEvent=Event click type=click)     jquery-1....2.min.js (line 19)
anonymous()                                                jquery-1....2.min.js (line 19)
```

    Where do I start in solving this issue?


  • How can I extract paragraphs and selected lines with Perl?

    - by neversaint
    I have a text file where I need:

    1. to extract the whole paragraph under the section "AceView summary" up to the line that starts with "Please quote" (not included);
    2. to extract the line that starts with "The closest human gene";
    3. to store them in an array with two elements.

    The text looks like this (also on pastebin):

```
AceView: gene:1700049G17Rik, a comprehensive annotation of human, mouse and worm genes with mRNAs or ESTsAceView.
<META NAME="title" CONTENT=" AceView: gene:1700049G17Rik a comprehensive annotation of human, mouse and worm genes with mRNAs or EST">
<META NAME="keywords" CONTENT=" AceView, genes, Acembly, AceDB, Homo sapiens, Human, nematode, Worm, Caenorhabditis elegans , WormGenes, WormBase, mouse, mammal, Arabidopsis, gene, alternative splicing variant, structure, sequence, DNA, EST, mRNA, cDNA clone, transcript, transcription, genome, transcriptome, proteome, peptide, GenBank accession, dbest, RefSeq, LocusLink, non-coding, coding, exon, intron, boundary, exon-intron junction, donor, acceptor, 3'UTR, 5'UTR, uORF, poly A, poly-A site, molecular function, protein annotation, isoform, gene family, Pfam, motif ,Blast, Psort, GO, taxonomy, homolog, cellular compartment, disease, illness, phenotype, RNA interference, RNAi, knock out mutant expression, regulation, protein interaction, genetic, map, antisense, trans-splicing, operon, chromosome, domain, selenocysteine, Start, Met, Stop, U12, RNA editing, bibliography">
<META NAME="Description" CONTENT=" AceView offers a comprehensive annotation of human, mouse and nematode genes reconstructed by co-alignment and clustering of all publicly available mRNAs and ESTs on the genome sequence. Our goals are to offer a reliable up-to-date resource on the genes, their functions, alternative variants, expression, regulation and interactions, in the hope to stimulate further validating experiments at the bench ">
<meta name="author" content="Danielle Thierry-Mieg and Jean Thierry-Mieg, NCBI/NLM/NIH, [email protected]">
<!--
var myurl="av.cgi?db=mouse" ;
var db="mouse" ;
var doSwf="s" ;
var classe="gene" ;
//-->
```

    However, I am stuck with the following script logic. What's the right way to achieve this?

```perl
#!/usr/bin/perl -w
my $INFILE_file_name = $file; # input file name

open ( INFILE, '<', $INFILE_file_name )
    or croak "$0 : failed to open input file $INFILE_file_name : $!\n";

my @allsum;
while ( <INFILE> ) {
    chomp;
    my $line = $_;
    my @temp1 = ();

    if ( $line =~ /^ AceView summary/ ) {
        print "$line\n";
        push @temp1, $line;
    }
    elsif ( $line =~ /Please quote/ ) {
        push @allsum, [@temp1];
        @temp1 = ();
    }
    elsif ( $line =~ /The closest human gene/ ) {
        push @allsum, $line;
    }
}
close ( INFILE ); # close input file

# Do something with @allsum
```

    There are many files like this that I need to process.
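    For comparison, here is the same extraction sketched in Python. The markers come from the description above; details such as skipping the section header line itself are assumptions:

```python
def extract_sections(path):
    summary_lines = []
    closest_gene = ""
    in_summary = False
    with open(path) as fh:
        for raw in fh:
            line = raw.rstrip("\n")
            if line.lstrip().startswith("AceView summary"):
                in_summary = True          # start capturing after this line
            elif line.startswith("Please quote"):
                in_summary = False         # stop; "Please quote" is excluded
            elif in_summary:
                summary_lines.append(line)
            if line.startswith("The closest human gene"):
                closest_gene = line
    # Two-element result: the whole summary paragraph and the single line.
    return ["\n".join(summary_lines), closest_gene]
```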


  • Interpretation of empty User-agent

    - by Amit Agrawal
    How should I interpret an empty User-Agent? I have some custom analytics code, and that code has to analyze only human traffic. I have a working list of user agents denoting human traffic and one denoting bot traffic, but the empty User-Agent is proving problematic, and I am getting a lot of traffic with an empty user agent: 10%. Additionally, I crafted the human-versus-bot user agent lists by analyzing my current logs, so I might be missing a lot of entries. Is there a well-maintained list of user agents denoting bot traffic, or, inversely, a list of user agents denoting human traffic?
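    There is no single authoritative answer, but one common policy is to treat an empty User-Agent as non-human, since essentially every mainstream browser sends one. A minimal Python sketch of that policy (the marker list is an assumption for illustration, not a maintained database):

```python
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "curl", "wget")

def is_probably_human(user_agent):
    if user_agent is None or not user_agent.strip():
        return False  # empty UA: assume automated traffic
    ua = user_agent.lower()
    return not any(marker in ua for marker in KNOWN_BOT_MARKERS)

print(is_probably_human(""))                       # False
print(is_probably_human("Mozilla/5.0 (X11) ..."))  # True
```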


  • A public struct inside a class

    - by Koning Baard
    I am new to C++. Let's say I have two classes, Creature and Human:

```cpp
/* creature.h */
class Creature {
private:
public:
    struct emotion { /* All emotions are percentages */
        char joy;
        char trust;
        char fear;
        char surprise;
        char sadness;
        char disgust;
        char anger;
        char anticipation;
        char love;
    };
};

/* human.h */
class Human : Creature {
};
```

    And I have this in my main function in main.cpp:

```cpp
Human foo;
```

    My question is: how can I set foo's emotions? I tried this:

```cpp
foo->emotion.fear = 5;
```

    But GCC gives me this compile error:

```
error: base operand of '->' has non-pointer type 'Human'
```

    This:

```cpp
foo.emotion.fear = 5;
```

    gives:

```
error: 'struct Creature::emotion' is inaccessible
error: within this context
error: invalid use of 'struct Creature::emotion'
```

    Can anyone help me? Thanks. P.S. No, I did not forget the #includes.


  • JavaScript (using jQuery) in a large project: organization, passing data, private methods, etc.

    - by gaoshan88
    I'm working on a large project that is organized like so: multiple JavaScript files are included as needed, and most code is wrapped in anonymous functions...

```javascript
// wutang.js
// Included files go here

// Public stuff
var MethodMan;

// Private stuff
(function() {
    var someVar1;
    MethodMan = function(){...};
    var APrivateMethod = function(){...};

    $(function(){
        // jQuery page load stuff here
        $('#meh').click(APrivateMethod);
    });
})();
```

    I'm wondering about a few things here. Assuming that a number of these files are included on a page, what has access to what, and what is the best way to pass data between the files? For example, I assume that MethodMan() can be accessed by anything on any included page. It is public, yes? var someVar1 is only accessible to the methods inside that particular anonymous function and nowhere else, yes? Also true of var APrivateMethod(), yes?

    What if I want APrivateMethod() to make something available to some other method in a different anonymous wrapped function in a different included file? What's the best way to pass data between these private, anonymous functions in different included files? Do I simply have to make whatever I want to share between them public? What about if I want to minimize global variables? Consider the following:

```javascript
var PootyTang = function(){
    someVar1 = $('#someid').text();
    // some stuff
};
```

    and in another included file used by the same page I have:

```javascript
var TangyPoot = function(){
    someVar1 = $('#someid').text();
    // some completely different stuff
};
```

    What's the best way to share the value of someVar1 across these anonymous (they are wrapped as in the first example) functions?


  • Display username to logged in user only?

    - by RodeoRamsey
    I have a WordPress blog set up to display comments as "Anonymous User" by hard-coding it into the comments.php file. I would like it to show the user's username next to their comment and ONLY display that username to THEM. In other words, if they're a guest they'll see "Anonymous User"; if they're a DIFFERENT registered/logged-in user they'll still see "Anonymous User"; but if it's THEIR comment it'll say "Your Comment" or their own username. Any clue on a snippet of code? Here's what I have so far:

```php
Anonymous User:
<div class="post-txt" id="<?php comment_ID() ?>"><?php comment_text() ?></div>
```

    Thanks!


  • Optimising ruby regexp -- lots of match groups

    - by Farcaller
    I'm working on a Ruby-based lexer. To improve performance, I joined all the tokens' regexps into one big regexp with named match groups. The resulting regexp looks like:

```
/\A(?<__anonymous_-1038694222803470993>(?-mix:\n+))|\A(?<__anonymous_-1394418499721420065>(?-mix:\/\/[\A\n]*))|\A(?<__anonymous_3077187815313752157>(?-mix:include\s+"[\A"]+"))|\A(?<LET>(?-mix:let\s))|\A(?<IN>(?-mix:in\s))|\A(?<CLASS>(?-mix:class\s))|\A(?<DEF>(?-mix:def\s))|\A(?<DEFM>(?-mix:defm\s))|\A(?<MULTICLASS>(?-mix:multiclass\s))|\A(?<FUNCNAME>(?-mix:![a-zA-Z_][a-zA-Z0-9_]*))|\A(?<ID>(?-mix:[a-zA-Z_][a-zA-Z0-9_]*))|\A(?<STRING>(?-mix:"[\A"]*"))|\A(?<NUMBER>(?-mix:[0-9]+))/
```

    I'm matching it against my string, producing a MatchData where exactly one token is parsed:

```ruby
bigregex =~ "\n   ... garbage"
puts $~.inspect
```

    which outputs:

```
#<MatchData "\n" __anonymous_-1038694222803470993:"\n" __anonymous_-1394418499721420065:nil __anonymous_3077187815313752157:nil LET:nil IN:nil CLASS:nil DEF:nil DEFM:nil MULTICLASS:nil FUNCNAME:nil ID:nil STRING:nil NUMBER:nil>
```

    So the regex actually matched the "\n" part. Now I need to figure out the match group where it belongs (it's clearly visible from the #inspect output that it's __anonymous_-1038694222803470993, but I need to get it programmatically). I could not find any option other than iterating over #names:

```ruby
m.names.each do |n|
  if m[n]
    type = n.to_sym
    resolved_type = (n.start_with?('__anonymous_') ? nil : type)
    val = m[n]
    break
  end
end
```

    which verifies that the match group did have a match. The problem here is that it's slow (I spend about 10% of the time in this loop, and another 8% grabbing @input[@pos..-1] to make sure that \A matches at the start of the string; I do not discard input, just shift @pos within it).

    You can check the full code at the GH repo. Any ideas on how to make this at least a bit faster? Is there any option to find the "successful" match group more easily?
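    As far as I know, Ruby's MatchData has no direct "which named group matched?" accessor. For comparison only, Python's re module answers exactly this question via match.lastgroup; the three-token lexer below is a made-up example, not the grammar above:

```python
import re

# Alternation of named groups, one per token type.
lexer = re.compile(r"(?P<NEWLINE>\n+)|(?P<NUMBER>[0-9]+)|(?P<ID>[a-zA-Z_]\w*)")

m = lexer.match("\n   foo")
print(m.lastgroup, repr(m.group()))  # NEWLINE '\n'
```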

