Search Results

Search found 4220 results on 169 pages for 'generating passwords'.

  • Creating Visual Studio projects that only contain static files

    - by Eilon
    Have you ever wanted to create a Visual Studio project that only contained static files and didn’t contain any code? While working on ASP.NET MVC we had a need for exactly this type of project. Most of the projects in the ASP.NET MVC solution contain code, such as managed code (C#), unit test libraries (C#), and Script# code for generating our JavaScript code. However, one of the projects, MvcFuturesFiles, contains no code at all. It only contains static files that get copied to the build output folder.
    As you may well know, adding static files to an existing Visual Studio project is easy. Just add the file to the project and in the property grid set its Build Action to “Content” and the Copy to Output Directory to “Copy if newer.” This works great if you have just a few static files that go along with other code that gets compiled into an executable (EXE, DLL, etc.). But this solution does not work well if the project only contains static files and has no compiled code. If you create a new project in Visual Studio and add static files to it you’ll still get an EXE or DLL copied to the output folder, despite not having any actual code. We wanted to avoid having a teeny little DLL generated in the output folder.
    In ASP.NET MVC 2 we came up with a simple solution to this problem. We started out with a regular C# Class Library project but then edited the project file to alter how it gets built. The critical part to get this to work is to define the MSBuild targets for Build, Clean, and Rebuild to perform custom tasks instead of running the compiler. The Build, Clean, and Rebuild targets are the three main targets that Visual Studio requires in every project so that the normal UI functions properly. If they are not defined then running certain commands in Visual Studio’s Build menu will cause errors.
    Once you create the class library project there are a few easy steps to change it into a static file project.
    The first step in editing the csproj file is to remove the reference to the Microsoft.CSharp.targets file, because the project doesn’t contain any C# code: <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
    The second step is to define the new Build, Clean, and Rebuild targets to delete and then copy the content files: <Target Name="Build"> <Copy SourceFiles="@(Content)" DestinationFiles="@(Content->'$(OutputPath)%(RelativeDir)%(Filename)%(Extension)')" /> </Target> <Target Name="Clean"> <Exec Command="rd /s /q $(OutputPath)" Condition="Exists($(OutputPath))" /> </Target> <Target Name="Rebuild" DependsOnTargets="Clean;Build"> </Target>
    The third and last step is to add all the files to the project as normal Content files (as you would do in any project type).
    To see how we did this in the ASP.NET MVC 2 project you can download the source code and inspect the MvcFuturesFiles.csproj project file. If you’re working on a project that contains many static files I hope this solution helps you out!

  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice.
    CQRS Benefits
    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.
    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.
    Of course CQRS provides some great scalability benefits, and not only scalability: I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain – which is still a great benefit to be able to realize.
    Questions / Concerns
    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner, with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this? I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not? I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it.
    Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having Indian/Russian/whatever outsourced developers have to do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well that’s probably going to be just as much work to document as it would be to just implement it.
    Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD+Event Sourcing implementation than in a typical DDD implementation.
    Like I mentioned in my last post I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides.  My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that.  I want to elaborate more on this though with some example situations in an upcoming post.
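    To make the “pass-through” idea above concrete, here is a minimal C# sketch of a command handler that validates a command and appends an event straight to an event store, without ever loading a domain aggregate. All of the type names (RenameCustomerCommand, CustomerRenamed, IEventStore, InMemoryEventStore) are hypothetical and only illustrate the shape of the idea; they aren’t taken from any particular framework.

    ```csharp
    using System;
    using System.Collections.Generic;

    // Command and event are plain DTOs.
    public sealed class RenameCustomerCommand
    {
        public Guid CustomerId { get; set; }
        public string NewName { get; set; }
    }

    public sealed class CustomerRenamed
    {
        public Guid CustomerId { get; set; }
        public string NewName { get; set; }
        public DateTime OccurredOn { get; set; }
    }

    public interface IEventStore
    {
        void Append(Guid streamId, object @event);
    }

    // Toy in-memory store, just enough to run the example.
    public sealed class InMemoryEventStore : IEventStore
    {
        private readonly List<KeyValuePair<Guid, object>> _events = new List<KeyValuePair<Guid, object>>();

        public void Append(Guid streamId, object @event)
        {
            _events.Add(new KeyValuePair<Guid, object>(streamId, @event));
            Console.WriteLine("Appended {0} to stream {1}", @event.GetType().Name, streamId);
        }
    }

    // The "pass-through" handler: validate, emit an event, never touch an aggregate.
    public sealed class RenameCustomerHandler
    {
        private readonly IEventStore _events;

        public RenameCustomerHandler(IEventStore events)
        {
            _events = events;
        }

        public void Handle(RenameCustomerCommand command)
        {
            if (string.IsNullOrWhiteSpace(command.NewName))
                throw new ArgumentException("A customer name is required.");

            // No aggregate is loaded or modified; the query side listens for
            // CustomerRenamed and updates its own de-normalized tables.
            _events.Append(command.CustomerId, new CustomerRenamed
            {
                CustomerId = command.CustomerId,
                NewName = command.NewName,
                OccurredOn = DateTime.UtcNow
            });
        }
    }

    public static class Program
    {
        public static void Main()
        {
            var handler = new RenameCustomerHandler(new InMemoryEventStore());
            handler.Handle(new RenameCustomerCommand { CustomerId = Guid.NewGuid(), NewName = "Acme Ltd" });
        }
    }
    ```

    Whether it’s wise for such handlers to bypass the domain model entirely is exactly the open question raised above; the sketch just shows that nothing in the mechanics prevents it.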

  • SharePoint AD imported users are becoming sporadically corrupted, causing us to have to create a new account

    - by TrevJen
    SharePoint 2007 MOSS with AD imported users. All servers are 2008.
    ***UPDATE: More details from testing. This SharePoint is in an AD child domain (clients.mycompany.local), which is subordinate to the root of the AD tree (mycompany.local). The user is in the parent tree (as are half of the other functional users). I have elevated the user's rights to Domain. In looking at the logs, it seems that the SharePoint server is trying to authenticate them by querying the DC for the clients domain (which is the way it normally works and still works for all existing identically configured users). I think if I could force it to authenticate up to the top domain DC then it would be OK??
    I have around 50 users. Over the past 2 months, I have had a handful of the users suddenly unable to log in to SharePoint. When they log in, they either get a blank screen or they are re-prompted. These users are using accounts that have been used for many months; sometimes the problem originates with a password change. In all cases, the user's account works on every other Active Directory authenticated resource (domain, Exchange, LDAP).
    In the most recent case, last night I was forced to delete a user ("John Smith") because of corruption. The original account name was jsmith. I deleted him from Active Directory, then deleted him from the profile list in SharePoint Shared Services. I could not find a way to delete him from the SharePoint user list, but I reran the import after recreating his account (renamed it too, just to be sure, to "smithj"). At first this did not work; the user could still access all other resources but SharePoint. Then, some 30 minutes later, it inexplicably started working. This morning the user changed passwords, which immediately broke the login on SharePoint again.
    Logs (by request from matt b):
    Source: Office SharePoint Server  Date: 4/13/2010 2:00:00 PM  Event ID: 7888  Task Category: Office Server General  Level: Error  Keywords: Classic  User: N/A  Computer: nb-portal-01.clients.netboundary.local
    Description: A runtime exception was detected. Details follow. Message: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
    Technical Details: System.UnauthorizedAccessException: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) at Microsoft.SharePoint.SPGlobal.HandleUnauthorizedAccessException(UnauthorizedAccessException ex) at Microsoft.SharePoint.Library.SPRequest.UpdateField(String bstrUrl, String bstrListName, String bstrXML) at Microsoft.SharePoint.SPField.UpdateCore(Boolean bToggleSealed) at Microsoft.SharePoint.SPField.Update() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.PushSchemaToList(Boolean& bAddedColumn) at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.UserSynchronizer.SynchFull() at Microsoft.Office.Server.UserProfiles.SiteSynchronizer.Synch() at Microsoft.Office.Server.Diagnostics.FirstChanceHandler.ExceptionFilter(Boolean fRethrowException, TryBlock tryBlock, FilterBlock filter, CatchBlock catchBlock, FinallyBlock finallyBlock)
    Log Name: Application  Source: Office SharePoint Server  Date: 4/13/2010 2:00:00 PM  Event ID: 5553  Task Category: User Profiles  Level: Error  Keywords: Classic  User: N/A  Computer: nb-portal-01.clients.netboundary.local
    Description: failure trying to synch site 6fea15e2-0899-4c19-9016-44d77834c018 for ContentDB b2002b0b-3d4c-411a-8c4f-3d047ca9322c WebApp 3aff7051-455d-4a70-a377-5b1c36df618e. Exception message was Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

  • JavaOne Afterglow by Simon Ritter

    - by JuergenKress
    Last week was the eighteenth JavaOne conference and I thought it would be a good idea to write up my thoughts about how things went. Firstly, thanks to Yoshio Terada for the photos; I didn't bother bringing a camera with me, so it's good to have some pictures to add to the words. Things kicked off full-throttle on Sunday.  We had the Java Champions and JUG leaders breakfast, which was a great way to meet up with a lot of familiar faces and start talking all things Java.  At midday the show really started with the Strategy and Technical Keynotes.  This was always going to be a tougher job than some years because there was no big shiny ball to reveal to the audience.  With the Java EE 7 spec being finalised a few months ago and Java SE 8, Java ME 8 and JDK8 not due until the start of next year, there was not going to be any big announcement.  I thought both keynotes worked really well, each focusing on the things most important to Java developers:
    Strategy
    One of the things that is becoming more and more prominent in many companies' marketing is the Internet of Things (IoT).  We've moved from the conventional desktop/laptop environment to much more mobile connected computing with smart phones and tablets.  The next wave of the internet is not just billions of people connected, but tens or hundreds of billions of devices connected to the network, all generating data and providing much more precise control of almost any process you can imagine.  This ties into the ideas of Big Data and Cloud Computing, but implementation is certainly not without its challenges.  As Peter Utzschneider explained, it's about three Vs: Volume, Velocity and Value.  All these devices will create huge volumes of data at very high speed; to avoid being overloaded these devices will need some sort of processing capabilities that can filter the useful data from the redundant.  The raw data then needs to be turned into useful information that has value.  To make this happen will require applications on devices, at gateways and on the back-end servers, all very tightly integrated.  This is where Java plays a pivotal role: write once, run everywhere becomes essential, and having nine million developers fluent in the language makes it the de facto lingua franca of IoT.  There will be lots more information on how this will become a reality, so watch this space.
    Technical
    How do we make the IoT a reality, technically?  Using the game of chess, Mark Reinhold, with the help of people like John Ceccarelli, Jasper Potts and Richard Bair, showed what you could do.  Using Java EE on the back end, Java SE and JavaFX on the desktop, and Java ME Embedded and JavaFX on devices, they showed a complete end-to-end demo. This was really impressive, using 3D features from JavaFX 8 (that's included with JDK8) to make a 3D animated Duke chess board.  Jasper also unveiled the "DukePad", a home-made tablet using a Raspberry Pi, touch screen and accelerometer. Although the Raspberry Pi doesn't have earth-shattering CPU performance (about the same level as a mid-1990s Pentium), it does have really quite good GPU performance, so the GUI works really well.  The plans are all open sourced and available here.  One small but very significant announcement was that Java SE will now be included with the NOOBS and Raspbian Linux distros provided by the Raspberry Pi Foundation (these can be found here).  No more hassle having to download and install the JDK after you've flashed your SD card OS image.  The finale was the Raspberry Pi-powered chess-playing robot.
    Really very, very cool.  I talked to Jasper about this and he told me each of the chess pieces had been 3D printed and then he had to use acetone to give them a glossy finish (not sure what his wife thought of him spending hours in the kitchen in a gas mask!).  The way the robot arm worked was very impressive, as it did not have any positioning data (like a potentiometer connected to each motor) but relied purely on carefully calibrated timings to get the arm to the right place.  Having done things like this myself in the past, I know how easily a small error gets magnified into very big mistakes.
    Here are some pictures from the keynote: the "DukePad" architecture; the nice clear perspex case so you can see the innards; and the very nice 3D chess set (Maya's obviously a great tool). Read the full article here.
    WebLogic Partner Community: For regular information become a member in the WebLogic Partner Community, please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: Simon Ritter, Java One, OOW, Oracle OpenWorld, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

  • ERROR 2003 (HY000): Can't connect to MySQL server on (111)

    - by JohnMerlino
    I am unable to connect from my Ubuntu installation to a remote TCP/IP host which contains a MySQL installation:
    viggy@ubuntu:~$ mysql -u user.name -p -h xxx.xxx.xxx.xxx -P 3306
    Enter password:
    ERROR 2003 (HY000): Can't connect to MySQL server on 'xxx.xxx.xxx.xxx' (111)
    I commented out the bind-address line below using vim in /etc/mysql/my.cnf:
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    #bind-address = 127.0.0.1
    Then I restarted the server: sudo service mysql restart
    But still I get the same error. This is the content of my.cnf:
    #
    # The MySQL database server configuration file.
    #
    # You can copy this to one of:
    # - "/etc/mysql/my.cnf" to set global options,
    # - "~/.my.cnf" to set user-specific options.
    #
    # One can use all long options that the program supports.
    # Run program with --help to get a list of available options and with
    # --print-defaults to see which it would actually understand and use.
    #
    # For explanations see
    # http://dev.mysql.com/doc/mysql/en/server-system-variables.html

    # This will be passed to all mysql clients
    # It has been reported that passwords should be enclosed with ticks/quotes
    # escpecially if they contain "#" chars...
    # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
    [client]
    port = 3306
    socket = /var/run/mysqld/mysqld.sock

    # Here is entries for some specific programs
    # The following values assume you have at least 32M ram

    # This was formally known as [safe_mysqld]. Both versions are currently parsed.
    [mysqld_safe]
    socket = /var/run/mysqld/mysqld.sock
    nice = 0

    [mysqld]
    #
    # * Basic Settings
    #
    user = mysql
    pid-file = /var/run/mysqld/mysqld.pid
    socket = /var/run/mysqld/mysqld.sock
    port = 3306
    basedir = /usr
    datadir = /var/lib/mysql
    tmpdir = /tmp
    lc-messages-dir = /usr/share/mysql
    skip-external-locking
    #
    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    #bind-address = 127.0.0.1
    #
    # * Fine Tuning
    #
    key_buffer = 16M
    max_allowed_packet = 16M
    thread_stack = 192K
    thread_cache_size = 8
    # This replaces the startup script and checks MyISAM tables if needed
    # the first time they are touched
    myisam-recover = BACKUP
    #max_connections = 100
    #table_cache = 64
    #thread_concurrency = 10
    #
    # * Query Cache Configuration
    #
    query_cache_limit = 1M
    query_cache_size = 16M
    #
    # * Logging and Replication
    #
    # Both location gets rotated by the cronjob.
    # Be aware that this log type is a performance killer.
    # As of 5.1 you can enable the log at runtime!
    #general_log_file = /var/log/mysql/mysql.log
    #general_log = 1
    #
    # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
    #
    # Here you can see queries with especially long duration
    #log_slow_queries = /var/log/mysql/mysql-slow.log
    #long_query_time = 2
    #log-queries-not-using-indexes
    #
    # The following can be used as easy to replay backup logs or for replication.
    # note: if you are setting up a replication slave, see README.Debian about
    # other settings you may need to change.
    #server-id = 1
    #log_bin = /var/log/mysql/mysql-bin.log
    expire_logs_days = 10
    max_binlog_size = 100M
    #binlog_do_db = include_database_name
    #binlog_ignore_db = include_database_name
    #
    # * InnoDB
    #
    # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
    # Read the manual for more InnoDB related options. There are many!
    #
    # * Security Features
    #
    # Read the manual, too, if you want chroot!
    # chroot = /var/lib/mysql/
    #
    # For generating SSL certificates I recommend the OpenSSL GUI "tinyca".
    #
    # ssl-ca=/etc/mysql/cacert.pem
    # ssl-cert=/etc/mysql/server-cert.pem
    # ssl-key=/etc/mysql/server-key.pem

    [mysqldump]
    quick
    quote-names
    max_allowed_packet = 16M

    [mysql]
    #no-auto-rehash # faster start of mysql but no tab completition

    [isamchk]
    key_buffer = 16M

    #
    # * IMPORTANT: Additional settings that can override those from this file!
    #   The files must end with '.cnf', otherwise they'll be ignored.
    #
    !includedir /etc/mysql/conf.d/
    (Note that I can log into my local mysql install just fine by running mysql, and it will log me in as root; also note that I can get into mysql on the remote server by logging in via ssh and then invoking mysql.) But I am unable to connect to the remote server from my terminal using the host, and I need to do it that way so that I can then use MySQL Workbench.
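    If it helps to rule out the client tools, a quick programmatic check of TCP connectivity can be run from the workstation. The sketch below is only an illustration: it assumes the MySQL Connector/NET (MySql.Data) package is available, and the host, user and password are placeholders to replace with your own values.

    ```csharp
    // Minimal remote-connectivity check, assuming the MySql.Data (Connector/NET) package.
    using System;
    using MySql.Data.MySqlClient;

    class MySqlConnectivityCheck
    {
        static void Main()
        {
            // Placeholder host and credentials - substitute your own.
            var connectionString =
                "Server=xxx.xxx.xxx.xxx;Port=3306;Database=mysql;Uid=user.name;Pwd=secret;";

            try
            {
                using (var connection = new MySqlConnection(connectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected. Server version: " + connection.ServerVersion);
                }
            }
            catch (MySqlException ex)
            {
                // A "can't connect" style failure here points at networking
                // (bind-address, firewall) rather than credentials; bad credentials
                // surface as an "Access denied" message instead.
                Console.WriteLine("Connection failed: " + ex.Message);
            }
        }
    }
    ```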

  • SQL Server Issue: Could not allocate space for object ... primary filegroup is full

    - by Luke
    Trying to figure out a problem at an office that has SQL Server 2005 installed on Windows SBS Server 2008. Here's the setup: It's an office, and the person who set this all up is nowhere to be found. I'm the best hope they have... One of the programs they use on a workstation gives them an error of "Could not allocate space for object 'Billing' in database 'MyDatabase' because primary filegroup is full" when trying to save an entry in their software.
    I searched around for hours, looking for possible solutions. One was to check for available disk space, and another was to defrag. I checked the hard drives on the server, and there is plenty of space free. I also defragged, which may have helped the problem somewhat. It's hard to say, because it seems like with the nature of the error, if you try over and over you might get it to actually save. My next step was to try to see if autogrowth was enabled on the database. This would seem to be a likely / possible solution, but I can't access the database! If I run SQL Management Studio, I can log in as my Windows user and view the list of databases. However, if I try to do anything (actually view the database, view the properties, add or edit users), I get errors that I don't have permission. For what it's worth, I also tried running Management Studio as Administrator, in case that would help. No difference, though.
    Now, what I'm guessing is going on -- from my limited knowledge of SQL and from reading online -- is that though I'm logged in as a Windows administrator, that account does NOT have SQL access. I do see a list of SQL users, including SA, but I again don't have permission to add one or to change the password on an existing one. And nobody at the office has any idea what the SQL passwords could be. So... here's my thinking thus far:
    1 - The "Could not allocate" error likely points to a database that needs to be allowed to autogrow. Especially since I verified there is plenty of free space and the HD has been defragmented.
    2 - Enabling autogrow would be very easy to do if I had the proper access within SQL Management Studio.
    That leads me to this link: http://blogs.technet.com/b/sqlman/archive/2011/06/14/tips-amp-tricks-you-have-lost-access-to-sql-server-now-what.aspx It sounds like it's a step-by-step guide for giving me the access I need to SQL. I'm guessing that if I followed this guide, I would be able to then log in to the SQL server via Management Studio with the proper permissions, and would be able to enable autogrow (or simply view the status of the existing database), and hopefully solve the "Could not allocate space" problem! So I guess I have a few questions:
    1 - Would you guys agree with my "diagnosis"? Think I'm barking up the right tree?
    2 - Is there any risk at all in hurting / disabling / wrecking the current SQL database or setup with me going through the guide to regain SQL access? I understand that per the guide, I would have to temporarily shut down SQL, so obviously it wouldn't be accessible during that time. But it wouldn't be worth the risk if there's a chance I could mess anything up... Like I said, the workstations ARE currently accessing the database somehow, but nobody knows with what login info or anything. Basically, it's set up, it works (usually), but if they had to reload the software, nobody would know how.
    Any feedback would be appreciated!! The problem is such that it's not an emergency for them, but an annoyance. If I could fix it, it would be wonderful. But if not, I think they'll manage, especially as they are going to eventually stop using this software. Thank you so much for your time! Luke
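    For what it's worth, once SQL access is regained (for example by following the guide linked above), checking and enabling autogrowth is a small amount of T-SQL. The C# sketch below is only an illustration of that step: the database name comes from the error message, but the logical file name ('MyDatabase_Data'), the growth increment and the connection string are assumptions that should be checked against sys.master_files first.

    ```csharp
    // Sketch: inspect file growth settings, then enable a fixed autogrowth increment.
    // Requires sysadmin (or equivalent) access to the instance.
    using System;
    using System.Data.SqlClient;

    class EnableAutogrow
    {
        static void Main()
        {
            // Placeholder connection string - adjust server name and authentication.
            var connectionString = "Server=localhost;Database=master;Integrated Security=true;";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // 1. Inspect the current size and growth settings of each file.
                var inspect = new SqlCommand(
                    @"SELECT name, size, max_size, growth, is_percent_growth
                      FROM sys.master_files
                      WHERE database_id = DB_ID('MyDatabase');", connection);
                using (var reader = inspect.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: size={1} pages, growth={2}, percent={3}",
                            reader["name"], reader["size"], reader["growth"],
                            reader["is_percent_growth"]);
                    }
                }

                // 2. Turn on a fixed autogrowth increment for the data file.
                //    The logical file name below is an assumption - use the name
                //    reported by the query above.
                var enableGrowth = new SqlCommand(
                    @"ALTER DATABASE [MyDatabase]
                      MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 256MB);", connection);
                enableGrowth.ExecuteNonQuery();
            }
        }
    }
    ```

    The same two statements can of course be run directly in Management Studio; the point is only that the fix itself is a one-line ALTER DATABASE once access is sorted out.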

  • How I do VCS

    - by Wes McClure
    After years of dabbling with different version control systems and techniques, I wanted to share some of what I like and dislike in a few blog posts.  To start this out, I want to talk about how I use VCS in a team environment.  These come as a series of tips or best practices that I try to follow.  Note: This list is subject to change in the future.
    Always use some form of version control for all aspects of software development.
    Development is an evolution.  Looking back at where we were is an invaluable asset in that process.  This includes data schemas and documentation. Reverting / reapplying changes is absolutely critical for efficient development. The tools I use: Code: Hg (preferred), SVN. Database: TSqlMigrations. Documents: sometimes in the code repository, also SharePoint with versioning.
    Always tag a commit (changeset) with comments.
    This is a quick way to describe to someone else (or your future self) what the changeset entails. Be brief but courteous. One or two sentences about the task, not the actual changes. Use precommit hooks or set up the central repository to reject changes without comments.
    Link changesets to documentation.
    If your project management system integrates with version control, or has a way to externally reference stories, tasks etc, then leave a reference in the commit.  This helps locate more information about the commit and/or related changesets. It’s best to have a precommit hook or system that requires this information, otherwise it’s easy to forget.
    Ability to work offline is required, including commits and history.
    Yes, this requires a DVCS locally but doesn’t require the central repository to be a DVCS.  I prefer to use either Git or Hg, but if it isn’t possible to migrate the central repository, it’s still possible for a developer to push / pull changes to that repository from a local Hg or Git repository.
    Never lock resources (files) in a central repository… Rude!
    We have merge tools for a reason; merging sucked a long time ago, it doesn’t anymore… stop locking files! This is unproductive, rude and annoying to other team members.
    Always review everything in your commit.
    Never ever commit a set of files without reviewing the changes in each. Never add a file without asking yourself, deep down inside, does this belong? If you leave to make changes during a review, start the review over when you come back.  Never assume you didn’t touch a file, double check. This is another reason why you want to avoid large, infrequent commits.
    Requirements for tools:
    Quickly show pending changes for the entire repository. Default action for a resource with pending changes is a diff. Pluggable diff & merge tool. Produce a unified diff or a diff of all changes.  This is helpful to bulk review changes instead of opening each file.
    The central repository is not your own personal dump yard.
    Breaking this rule is a sure fire way to get the F bomb dropped in front of your name, multiple times. If you turn on Visual Studio’s commit on closing studio option, I will personally break your fingers. By the way, the person(s) in charge of this feature should be fired and never be allowed near programming, ever again.
    Commit (integrate) to the central repository / branch frequently.
    I try to do this before leaving each day, especially without a DVCS.  One never knows when they might need to work from remote the following day.
    Never commit commented-out code.
    If it isn’t needed anymore, delete it! If you aren’t sure if it might be useful in the future, delete it! This is why we have history. If you don’t know why it’s commented out, figure it out and then either uncomment it or delete it.
    Don’t commit build artifacts, user preferences and temporary files.
    Build artifacts do not belong in VCS; everything in them is present in the code (ie: bin\*, obj\*, *.dll, *.exe). User preferences are your settings, stop overriding my preferences files! (ie: *.suo and *.user files) Most tools allow you to ignore certain files, and Hg/Git allow you to version this as an ignore file.  Set this up as a first step when creating a new repository!
    Be polite when merging unresolved conflicts.
    Count to 10, cuss, grab a stress ball and realize it’s not a big deal.  Actually, it’s an opportunity to let you know that someone else is working in the same area and you might want to communicate with them. Following the other rules, especially committing frequently, will reduce the likelihood of this. Suck it up, we all have to deal with this unintended consequence at times.  Just be careful and GET FAMILIAR with your merge tool.  It’s really not as scary as you think.  I personally prefer KDiff3 as its merging capabilities rock. Don’t blindly merge and then blindly commit your changes; this is rude and unprofessional.  Make sure you understand why the conflict occurred and which parts of the code you want to keep.  Apply scrutiny when you commit a manual merge: review the diff! Make sure you test the changes (build and run automated tests).
    Become intimate with your version control system and the tools you use with it.
    Avoid trial and error as much as is possible; sit down and test the tool out, read some tutorials etc.  Create test repositories and walk through common scenarios. Find the most efficient way to do your work.  These tools will be used repetitively, so inefficiencies will add up. Sometimes this involves a mix of tools, both GUI and CLI. I like a combination of both Tortoise Hg and the hg cli to get the job done efficiently.
    Always tag releases.
    Create a way to find a given release, whether this be in comments or an explicit tag / branch.  This should be readily discoverable. Create release branches to patch bugs and then merge the changes back to other development branch(es).
    If using feature branches, strive for periodic integrations.
    Feature branches often cause forked code that becomes irreconcilable.  Strive to re-integrate somewhat frequently with the branch this code will ultimately be merged into.  This will avoid merge conflicts in the future. Feature branches are best when they are mutually exclusive of active development in other branches.
    Use and abuse local commits, at least one per task in a story.
    This builds a trail of changes in your local repository that can be pushed to a central repository when the story is complete.
    Never commit a broken build or failing tests to the central repository.
    It’s ok for a local commit to break the build and/or tests.  In fact, I encourage this if it helps group the changes more logically.  This is one of the main reasons I got excited about DVCS, when I wanted more than one changeset for a set of pending changes but some files could be grouped into both changesets (like solution file / project file changes). If you have more than a dozen outstanding changed resources, there should probably be more than one commit involved. Exceptions apply when maintaining code bases that require shotgun surgery; in this case, it’s a design smell :)
    Don’t version sensitive information.
    Especially usernames / passwords.
    There is one area I haven’t found a solution I like yet: versioning 3rd party libraries and/or code.  I really dislike keeping any assemblies in the repository, but it seems to be a common practice for external libraries.  Please feel free to share your ideas about this below.    -Wes

  • HTML5 and Visual Studio 2010

    - by Harish Ranganathan
    All of us work with Visual Studio (or the free Visual Web Developer Express Edition) for developing web applications targeting ASP.NET / ASP.NET MVC or Silverlight etc.  Over the years, Visual Studio has grown to a great extent.  From being a simple, limited-functionality tool in VS.NET 2002 to the multi-faceted, MEF-driven Visual Studio 2010, it has come a long way.  And as much as Visual Studio supports rapid web development by generating HTML markup, it has also added IntelliSense for some of the HTML specifications that one would otherwise have to monotonously type every time.  For example, in Visual Studio 2010, one can just type the angle bracket “<” and then the first letter “h” or “x” for html or xhtml respectively, press tab twice, and it will render the entire markup required for XHTML or HTML 1.0/1.1 strict/transitional and the fully qualified W3C URL. The same holds good for specifying the HTML type declaration.  Now, the difference between HTML and XHTML has been discussed in detail already; if you are interested, you can read about it at http://www.w3schools.com/xhtml/xhtml_html.asp
    But the industry trend, and the buzz all around, is HTML5.  With browsers like IE9 Beta, Google Chrome, Firefox 4 etc. supporting HTML5 standards big time, everyone wants to start developing HTML5-based websites. VS developers (like me) often get asked when VS will start supporting HTML5.  VS 2010 was released last year and HTML5 is still a specification under development.  Clearly, with the timelines we started developing Visual Studio (way back in 2008), the HTML5 specs were almost non-existent.  Even today, the HTML5 body recommends not fully depending on the entire markup set, as the specs are still under development and might change in the future.
    However, with Visual Studio 2010 SP1 beta, there is quite a bit of support for HTML5-based web development.  In fact, one of my colleagues pointed out that SP1 beta’s major enhancement is its ability to support HTML5 tags and even add server mode to them. Let’s look at the existing validation schema available in Visual Studio (Tools – Options – Validation). This is before installing Visual Studio 2010 SP1 Beta.  Clearly, the validation options are restricted to HTML 4.01 and XHTML 1.1 transitional and below.
    Also, let’s consider using some of the new HTML5 input type elements (I found this out just today from my friend, also an ASP.NET team member).  <input type=”email”> is one of the new input type elements according to the HTML5 specification.  Now, this works well if you type it as is in Visual Studio and the page renders without any issue (since the default behaviour is, if there is an “undefined” type specified for the input tag, it falls back on the default mode, which is text).  The moment you add <input type=”email” runat=”server”>, you get an error. Naturally, you don’t get IntelliSense support for these new tags either.
    Once you install Visual Studio 2010 Service Pack 1 Beta from here (it takes a while so you need to be patient for the installation to complete), you will start getting additional Validation templates for HTML5, as below.  Once you set this, you can start using HTML5 elements in your web page without getting errors/warnings.
    Look at the screenshot below for the new “video” tag showing up in IntelliSense (video is a part of the new HTML5 specifications).  Note that you still need to hook up the <!DOCTYPE html> at the top manually, as it doesn’t change automatically (from the default XHTML 1.0 strict) when you create a new page. The new input type tags in HTML5 are also supported.
    One can also use <asp:TextBox type=”email”>, which would in turn generate the <input type=”email”> markup when the page is rendered.  In fact, as of SP1 beta, this is the only way to put the new input type tags with the runat=”server” attribute (otherwise you will get the parser error mentioned above; this issue should be fixed by the final release of SP1).  Going further, there may be more support for having server tags for some of the common HTML5 elements, but this is work in progress currently.
    So, other than not having runat=”server” support for the new HTML5 input tags, you can pretty much build and target HTML5 websites with Visual Studio 2010 SP1 Beta, today.  For those who are running Visual Studio 2008, you also have the “HTML5 intellisense for Visual Studio 2010 and 2008” available for download, from http://visualstudiogallery.msdn.microsoft.com/d771cbc8-d60a-40b0-a1d8-f19fc393127d/ Note that, if you are running Visual Studio 2010, the recommended approach is to install the SP1 beta, which is the way forward for HTML5 support in Visual Studio.
    Of course, you need to test these on a browser supporting HTML5 such as IE9 Beta, Chrome or Firefox 4.  You can download IE9 Beta from here. You can also follow the Visual Web Developer Team Blog for more updates on the stuff they are building. Cheers !!!

  • Installing Catalyst 11.6 for an ATI HD 6970

    - by David Oliver
    Ubuntu Maverick 10.10 is displaying the desktop okay (though limited to 1600x1200) after my having installed my new HD 6970 card, so I'm now trying to install the proprietary driver (I understand the open source one requires a more recent kernel than that in Maverick). The proprietary driver under 'Additional Drivers' resulted in a black screen on boot, so I deactivated and am trying to follow the manual install instructions at the cchtml Ubuntu Maverick Installation Guide. When I try to create the .deb packages with: sh ati-driver-installer-11-6-x86.x86_64.run --buildpkg Ubuntu/maverick I get: david@skipper:~/catalyst11.6$ sh ati-driver-installer-11-6-x86.x86_64.run --buildpkg Ubuntu/maverick Created directory fglrx-install.oLN3ux Verifying archive integrity... All good. Uncompressing ATI Catalyst(TM) Proprietary Driver-8.861......................... ===================================================================== ATI Technologies Catalyst(TM) Proprietary Driver Installer/Packager ===================================================================== Generating package: Ubuntu/maverick Package build failed! Package build utility output: ./packages/Ubuntu/ati-packager.sh: 396: debclean: not found dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.64Vzxk dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Cleaning in directory . /usr/bin/fakeroot: line 176: debian/rules: Permission denied debuild: fatal error at line 1319: couldn't exec fakeroot debian/rules: dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.QEmIld dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Cleaning in directory . Can't exec "debian/rules": Permission denied at /usr/bin/debuild line 1314. 
debuild: fatal error at line 1313: couldn't exec debian/rules: Permission denied dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.xtY6vC dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Cleaning in directory . /usr/bin/fakeroot: line 176: debian/rules: Permission denied debuild: fatal error at line 1319: couldn't exec fakeroot debian/rules: dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor): dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2 dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions dpkg-buildpackage: source package fglrx-installer dpkg-buildpackage: source version 2:8.861-0ubuntu1 dpkg-buildpackage: source changed by ATI Technologies Inc. <http://ati.amd.com/support/driver.html> dpkg-source --before-build fglrx.oYWICI dpkg-buildpackage: host architecture amd64 debian/rules build Can't exec "debian/rules": Permission denied at /usr/bin/dpkg-buildpackage line 507. dpkg-buildpackage: error: debian/rules build failed with unknown exit code -1 Removing temporary directory: fglrx-install.oLN3ux I've installed devscripts which has debclean in it. I've tried running the command with and without sudo. I'm not experienced with installing from downloads/source, but it seems like the file debian/source isn't being set to be executable when it needs to be. If I extract only, without using the package builder command, debian/rules is 744. As to what to do next, I'm stumped. Many thanks.

  • GoldenGate 12c Trail Encryption and Credentials with Oracle Wallet

    - by hamsun
    I have been asked more than once whether the Oracle Wallet supports GoldenGate trail encryption. Although GoldenGate has supported encryption with the ENCKEYS file for years, Oracle GoldenGate 12c now also supports encryption using the Oracle Wallet. This helps improve security and makes it easier to administer. Two types of wallets can be configured in Oracle GoldenGate 12c: The wallet that holds the master keys, used with trail or TCP/IP encryption and decryption, stored in the new 12c dirwlt/cwallet.sso file.   The wallet that holds the Oracle Database user IDs and passwords stored in the ‘credential store’ stored in the new 12c dircrd/cwallet.sso file.   A wallet can be created using a ‘create wallet’  command.  Adding a master key to an existing wallet is easy using ‘open wallet’ and ‘add masterkey’ commands.   GGSCI (EDLVC3R27P0) 42> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 43> add masterkey Master key 'OGG_DEFAULT_MASTERKEY' added to wallet at location 'dirwlt'.   Existing GUI Wallet utilities that come with other products such as the Oracle Database “Oracle Wallet Manager” do not work on this version of the wallet. The default Oracle Wallet can be changed.   GGSCI (EDLVC3R27P0) 44> sh ls -ltr ./dirwlt/* -rw-r----- 1 oracle oinstall 685 May 30 05:24 ./dirwlt/cwallet.sso GGSCI (EDLVC3R27P0) 45> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   The second wallet file is used for the credential used to connect to a database, without exposing the user id or password. Once it is configured, this file can be copied so that credentials are available to connect to the source or target database.   GGSCI (EDLVC3R27P0) 48> sh cp ./dircrd/cwallet.sso $GG_EURO_HOME/dircrd GGSCI (EDLVC3R27P0) 49> sh ls -ltr ./dircrd/* -rw-r----- 1 oracle oinstall 709 May 28 05:39 ./dircrd/cwallet.sso   The encryption wallet file can also be copied to the target machine so the replicat has access to the master key to decrypt records that are encrypted in the trail. Similar to the old ENCKEYS file, the master keys wallet created on the source host must either be stored in a centrally available disk or copied to all GoldenGate target hosts. The wallet is in a platform-independent format, although it is not certified for the iSeries, z/OS, and NonStop platforms.   GGSCI (EDLVC3R27P0) 50> sh cp ./dirwlt/cwallet.sso $GG_EURO_HOME/dirwlt   The new 12c UserIdAlias parameter is used to locate the credential in the wallet so the source user id and password does not need to be stored as a parameter as long as it is in the wallet.   GGSCI (EDLVC3R27P0) 52> view param extwest extract extwest exttrail ./dirdat/ew useridalias gguamer table west.*; The EncryptTrail parameter is used to encrypt the trail using the Advanced Encryption Standard and can be used with a primary extract or pump extract. GGSCI (EDLVC3R27P0) 54> view param pwest extract pwest encrypttrail AES256 rmthost easthost, mgrport 15001 rmttrail ./dirdat/pe passthru table west.*;   Once the extracts are running, records can be encrypted using the wallet.   
GGSCI (EDLVC3R27P0) 60> info extract *west EXTRACT    EXTWEST   Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       00:00:17 (updated 00:00:01 ago) Process ID           24982 Log Read Checkpoint  Oracle Integrated Redo Logs                      2014-05-30 05:25:53                      SCN 0.0 (0) EXTRACT    PWEST     Last Started 2014-05-30 05:26   Status RUNNING Checkpoint Lag       24:02:32 (updated 00:00:05 ago) Process ID           24983 Log Read Checkpoint  File ./dirdat/ew000004                      2014-05-29 05:23:34.748949  RBA 1483   The ‘info masterkey’ command is used to confirm the wallet contains the key after copying it to the target machine. The key is needed to decrypt the data in the trail before the replicat applies the changes to the target database.   GGSCI (EDLVC3R27P0) 41> open wallet Opened wallet at location 'dirwlt'. GGSCI (EDLVC3R27P0) 42> info masterkey Masterkey Name:                 OGG_DEFAULT_MASTERKEY Creation Date:                  Fri May 30 05:24:04 2014 Version:        Creation Date:                  Status: 1               Fri May 30 05:24:04 2014        Current   Once the replicat is running, records can be decrypted using the wallet.   GGSCI (EDLVC3R27P0) 44> info reast REPLICAT   REAST     Last Started 2014-05-30 05:28   Status RUNNING INTEGRATED Checkpoint Lag       00:00:00 (updated 00:00:02 ago) Process ID           25057 Log Read Checkpoint  File ./dirdat/pe000004                      2014-05-30 05:28:16.000000  RBA 1546   There is no need for the DecryptTrail parameter when using the Oracle Wallet, unlike when using the ENCKEYS file.   GGSCI (EDLVC3R27P0) 45> view params reast replicat reast assumetargetdefs discardfile ./dirrpt/reast.dsc, purge useridalias ggueuro map west.*, target east.*;   Once a record is inserted into the source table and committed, the encryption can be verified using logdump and then querying the target table.   AMER_SQL>insert into west.branch values (50, 80071); 1 row created.   AMER_SQL>commit; Commit complete.   The following encrypted record can be found using logdump. Logdump 40 >n 2014/05/30 05:28:30.001.154 Insert               Len    28 RBA 1546 Name: WEST.BRANCH After  Image:                                             Partition 4   G  s    0a3e 1ba3 d924 5c02 eade db3f 61a9 164d 8b53 4331 | .>...$\....?a..M.SC1   554f e65a 5185 0257                               | UO.ZQ..W  Bad compressed block, found length of  7075 (x1ba3), RBA 1546   GGS tokens: TokenID x52 'R' ORAROWID         Info x00  Length   20  4141 4157 7649 4141 4741 4141 4144 7541 4170 0001 | AAAWvIAAGAAAADuAAp..  TokenID x4c 'L' LOGCSN           Info x00  Length    7  3231 3632 3934 33                                 | 2162943  TokenID x36 '6' TRANID           Info x00  Length   10  3130 2e31 372e 3135 3031                          | 10.17.1501  The replicat automatically decrypted this record from the trail and then inserted the row to the target table using the wallet. This select verifies the row was inserted into the target database and the data is not encrypted. 
EURO_SQL>select * from branch where branch_number=50; BRANCH_NUMBER                  BRANCH_ZIP -------------                                   ----------    50                                              80071   Book a seat in an upcoming Oracle GoldenGate 12c: Fundamentals for Oracle course now to learn more about GoldenGate 12c new features including how to use GoldenGate with the Oracle wallet, credentials, integrated extracts, integrated replicats, the Oracle Universal Installer, and other new features. Looking for another course? View all Oracle GoldenGate training.   Randy Richeson joined Oracle University as a Senior Principal Instructor in March 2005. He is an Oracle Certified Professional (10g-12c) and a GoldenGate Certified Implementation Specialist (10-11g). He has taught GoldenGate since 2010 and also has experience teaching other technical curriculums including GoldenGate Monitor, Veridata, JD Edwards, PeopleSoft, and the Oracle Application Server.

  • nikto probe warning messages

    - by julio
    Hi-- I have a pretty standard VPS running Ubuntu 8.1, Apache 2.2, PHP 5 etc. -- standard Lamp stack. I am using suhosin and have tried my best to plug the obvious stuff, since I'm the only user-- there's no SSH access except via pubkey on a non-standard port, there's no root access by SSH, no FTP server running, iptables is set to discard anything outside of basically port 80 or my SSH port (there's no mail server or anything else). However, I've still been compromised (not badly as far as I can tell) probably by a SQL injection. I've locked down the SQL user (there's only one outside of root, and he's got limited priv, no file etc.) So I ran nikto to see what I'm doing wrong, and there's a list of things I've never seen, and can't find using "find" or any other method I'm aware of. See below: + /autologon.html?10514: Remotely Anywhere 5.10.415 is vulnerable to XSS attacks that can lead to cookie theft or privilege escalation. This is typically found on port 2000. + /servlet/webacc?User.html=noexist: Netware web access may reveal full path of the web server. Apply vendor patch or upgrade. + OSVDB-35878: /modules.php?name=Members_List&letter='%20OR%20pass%20LIKE%20'a%25'/*: PHP Nuke module allows user names and passwords to be viewed. + OSVDB-3092: /sitemap.xml: This gives a nice listing of the site content. + OSVDB-12184: /index.php?=PHPB8B5F2A0-3C92-11d3-A3A9-4C7B08C10000: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F36-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F34-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-12184: /some.php?=PHPE9568F35-D428-11d2-A769-00AA001ACF42: PHP reveals potentially sensitive information via certain HTTP requests which contain specific QUERY strings. + OSVDB-3092: /administrator/: This might be interesting... + OSVDB-3092: /Agent/: This might be interesting... + OSVDB-3092: /includes/: This might be interesting... + OSVDB-3092: /logs/: This might be interesting... + OSVDB-3092: /tmp/: This might be interesting... + ERROR: /servlet/Counter returned an error: error reading HTTP response + OSVDB-3268: /icons/: Directory indexing is enabled: /icons + OSVDB-3268: /images/: Directory indexing is enabled: /images + OSVDB-3299: /forumscalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /forumzcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /htforumcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /vbcalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. See link + OSVDB-3299: /vbulletincalendar.php?calbirthdays=1&action=getday&day=2001-8-15&comma=%22;echo%20'';%20echo%20%60id%20%60;die();echo%22: Vbulletin allows remote command execution. 
See link + OSVDB-6659: /kCKAowoWuZkKCUPH7Mr675ILd9hFg1lnyc1tWUuEbkYkFCpCdEnCKkkd9L0bY34tIf9l6t2owkUp9nI5PIDmQzMokDbp71QFTZGxdnZhTUIzxVrQhVgwmPYsMK7g34DURzeiy3nyd4ezX5NtUozTGqMkxDrLheQmx4dDYlRx0vKaX41JX40GEMf21TKWxHAZSUxjgXUnIlKav58GZQ5LNAwSAn13l0w<font%20size=50>DEFACED<!--//--: MyWebServer 1.0.2 is vulnerable to HTML injection. Upgrade to a later version. I understand about the trace and index, but what about the vbulletin and autologin? I've searched, and I can't find any files like that on the server. I have no idea about the "MyWebServer" stuff, the PHP Nuke, or the Netware/servlet stuff-- there's nothing really on the server except a pretty standard Joomla site (updated to the latest version). Any help with these messages and/or what I'm doing wrong is very much appreciated.

  • Beginner Geek: How to Use Bookmarklets on Any Device

    - by Chris Hoffman
Web browser bookmarklets allow you to perform actions on the current page with just a click or tap. They’re a lightweight alternative to browser extensions. They even work on mobile browsers that don’t support traditional extensions. To use bookmarklets, all you need is a web browser that supports bookmarks — that’s it! Bookmarklets Explained Web pages you view in your browser use JavaScript code. That’s why web pages aren’t just static documents anymore — they’re dynamic. A bookmarklet is a normal bookmark with a piece of JavaScript code instead of a web address. When you click or tap the bookmarklet, it executes that JavaScript code on the current page instead of loading a different page, as most bookmarks do. Bookmarklets can be used to do something to a web page with a single click. For example, you’ll find bookmarklets associated with web services like Twitter, Facebook, Google+, LinkedIn, Pocket, and LastPass. When you click the bookmarklet, it runs code that lets you easily share the current page with that service. Bookmarklets don’t have to be associated with web services, though. A bookmarklet you click could modify the appearance of the page, stripping away most of the junk and giving you a clean “reading mode.” It could alter fonts, remove images, or insert other content. It can access anything the web page itself could access. For example, you could use a bookmarklet to reveal a password that just appears as ******* on the page. Unlike browser extensions, bookmarklets don’t run in the background and bog down your browser. They don’t do anything at all until you click them. Because they just use the standard bookmark system, they can also be used in mobile browsers where you couldn’t run extensions. For example, you could install the Pocket bookmarklet in Safari on an iPad and get an “Add to Pocket” option in Safari. Safari doesn’t offer browser extensions and Apple’s iOS doesn’t offer a “Share” feature like Android and Windows 8 do, so this is the only way to get this direct integration. You could even use the LastPass bookmarklets in Safari on an iPad to integrate LastPass with the Safari web browser. Where to Find Bookmarklets If you’re looking for a bookmarklet for a particular service, you’ll generally find the bookmarklet on that service’s site. Websites like Twitter, Facebook, and Pocket host pages where they provide bookmarklets along with browser extensions. Bookmarklets aren’t like programs. They’re really just a piece of text that you put into a bookmark, so you don’t have to download them from a specific site. You can get them from practically anywhere — installing them just involves copying a bit of text off a web page. For example, you can just search the web for “reveal password bookmarklet” if you want a bookmarklet that reveals passwords. We’ve covered many of the must-have bookmarklets — and our readers have chimed in too — so take a look at our lists for more examples. How to Install a Bookmarklet Bookmarklets are simple to install. When you hover over a bookmarklet on a web page, you’ll see that its address begins with “javascript:”. If you have your web browser’s bookmark or favorites toolbar visible, the easiest way to install a bookmarklet is with drag-and-drop: press Ctrl+Shift+B to show your bookmarks toolbar if you’re using Chrome or Internet Explorer (in Firefox, right-click the toolbar and click Bookmarks Toolbar), then drag the bookmarklet link onto the toolbar and it is installed. You can also install bookmarklets manually.
Select the bookmarklet’s code and copy it to your clipboard. If the bookmarklet is a link, right-click or long-press the link and copy its address to your clipboard. Open your browser’s bookmarks manager, add a bookmark, and paste the JavaScript code directly into the address box. Give your bookmarklet a name and save it. How to Use a Bookmarklet Bookmarklets are easiest to use if you have your browser’s bookmarks toolbar enabled. Just click the bookmarklet and your browser will run it on the current page. If you don’t have a bookmarks toolbar — such as on Safari on an iPad or another mobile browser — just open your browser’s bookmarks pane and tap or click the bookmark. In mobile Chrome, you’ll need to launch the bookmarklet from the location bar. Open the web page you want to run the bookmarklet on, tap your location bar, and start searching for the name of the bookmarklet. Tap the bookmarklet’s name to run it on the current page. Note that the bookmarklet only appears here because we have it saved as a bookmark in Chrome. You’ll need to add the bookmarklet to your browser’s bookmarks before you can use it in this way. The location bar approach may also be necessary in other browsers. The trick is loading the bookmark so that it will be associated with your current tab. You can’t just open your bookmarks in a separate browser tab and run the bookmarklet from there — it will run on that other browser tab. Bookmarklets are powerful and flexible. While they’re not as flashy as browser extensions, they’re much more lightweight and allow you to get extension-like features in more limited mobile browsers.
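For example (a hypothetical bookmarklet, not one from a particular service): saving the following line as a bookmark's address gives you a one-click or one-tap alert showing the current page's title. Any bookmarklet you find in the wild is just a longer version of the same javascript: pattern.

    javascript:(function(){alert(document.title);})();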

    Read the article

  • Towards an F# .NET Reflector add-in

    - by CliveT
    When I had the opportunity to spent some time during Red Gate's recent "down tools" week on a project of my choice, the obvious project was an F# add-in for Reflector . To be honest, this was a bit of a misnomer as the amount of time in the designated week for coding was really less than three days, so it was always unlikely that very much progress would be made in such a small amount of time (and that certainly proved to be the case), but I did learn some things from the experiment. Like lots of problems, one useful technique is to take examples, get them to work, and then generalise to get something that works across the board. Unfortunately, I didn't have enough time to do the last stage. The obvious first step is to take a few function definitions, starting with the obvious hello world, moving on to a non-recursive function and finishing with the ubiquitous recursive Fibonacci function. let rec printMessage message  =     printfn  message let foo x  =    (x + 1) let rec fib x  =     if (x >= 2) then (fib (x - 1) + fib (x - 2)) else 1 The major problem in decompiling these simple functions is that Reflector has an in-memory object model that is designed to support object-oriented languages. In particular it has a return statement that allows function bodies to finish early. I used some of the in-built functionality to take the IL and produce an in-memory object model for the language, but then needed to write a transformer to push the return statements to the top of the tree to make it easy to render the code into a functional language. This tree transform works in some scenarios, but not in others where we simply regenerate code that looks more like CPS style. The next thing to get working was library level bindings of values where these values are calculated at runtime. let x = [1 ; 2 ; 3 ; 4] let y = List.map  (fun x -> foo x) x The way that this is translated into a set of classes for the underlying platform means that the code needs to follow references around, from the property exposing the calculated value to the class in which the code for generating the value is embedded. One of the strongest selling points of functional languages is the algebraic datatypes, which allow definitions via standard mathematical-style inductive definitions across the union cases. type Foo =     | Something of int     | Nothing type 'a Foo2 =     | Something2 of 'a     | Nothing2 Such a definition is compiled into a number of classes for the cases of the union, which all inherit from a class representing the type itself. It wasn't too hard to get such a de-compilation happening in the cases I tried. What did I learn from this? Firstly, that there are various bits of functionality inside Reflector that it would be useful for us to allow add-in writers to access. In particular, there are various implementations of the Visitor pattern which implement algorithms such as calculating the number of references for particular variables, and which perform various substitutions which could be more generally useful to add-in writers. I hope to do something about this at some point in the future. Secondly, when you transform a functional language into something that runs on top of an object-based platform, you lose some fidelity in the representation. The F# compiler leaves attributes in place so that tools can tell which classes represent classes from the source program and which are there for purposes of the implementation, allowing the decompiler to regenerate these constructs again. 
However, decompilation technology is a long way from being able to take unannotated IL and transform it into a program in a different language. For a simple function definition, like Fibonacci, I could write a simple static function and have it come out in F# as the same function, but it would be practically impossible to take a mass of class definitions and have a decompiler translate it automatically into an F# algebraic data type. What have we got out of this? Some data on the feasibility of implementing an F# decompiler inside Reflector, though it's hard at the moment to say how long this would take to do. The work we did is included in the 6.5 EAP for Reflector that you can get from the EAP forum. All things considered though, it was a useful way to gain more familiarity with the process of writing an add-in and to understand difficulties other add-in authors might experience. If you'd like to check out a video of Down Tools Week, click here.
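As a rough illustration of the "classes for the cases of the union" point, the two-case Foo type above boils down to something like the following C# shape. This is a simplified sketch only; the real F# compiler output also emits tag fields, CompilationMapping attributes, and factory members.

    // Simplified illustration of what 'type Foo = Something of int | Nothing' compiles to.
    public abstract class Foo
    {
        public sealed class Something : Foo
        {
            public int Item { get; private set; }
            public Something(int item) { Item = item; }
        }

        public sealed class Nothing : Foo
        {
            public static readonly Nothing Value = new Nothing();
            private Nothing() { }
        }
    }

Recovering the original union declaration from that class hierarchy is exactly the kind of inference a decompiler would have to make, which is why the attributes the F# compiler leaves behind matter so much.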

    Read the article

  • List of Commonly Used Value Types in XNA Games

    - by Michael B. McLaughlin
    Most XNA programmers are concerned about generating garbage. More specifically about allocating GC-managed memory (GC stands for “garbage collector” and is both the name of the class that provides access to the garbage collector and an acronym for the garbage collector (as a concept) itself). Two of the major target platforms for XNA (Windows Phone 7 and Xbox 360) use variants of the .NET Compact Framework. On both variants, the GC runs under various circumstances (Windows Phone 7 and Xbox 360). Of concern to XNA programmers is the fact that it runs automatically after a fixed amount of GC-managed memory has been allocated (currently 1MB on both systems). Many beginning XNA programmers are unaware of what constitutes GC-managed memory, though. So here’s a quick overview. In .NET, there are two different “types” of types: value types and reference types. Only reference types are managed by the garbage collector. Value types are not managed by the garbage collector and are instead managed in other ways that are implementation dependent. For purposes of XNA programming, the important point is that they are not managed by the GC and thus do not, by themselves, increment that internal 1 MB allocation counter. (n.b. Structs are value types. If you have a struct that has a reference type as a member, then that reference type, when instantiated, will still be allocated in the GC-managed memory and will thus count against the 1 MB allocation counter. Putting it in a struct doesn’t change the fact that it gets allocated on the GC heap, but the struct itself is created outside of the GC’s purview). Both value types and reference types use the keyword ‘new’ to allocate a new instance of them. Sometimes this keyword is hidden by a method which creates new instances for you, e.g. XmlReader.Create. But the important thing to determine is whether or not you are dealing with a value types or a reference type. If it’s a value type, you can use the ‘new’ keyword to allocate new instances of that type without incrementing the GC allocation counter (except as above where it’s a struct with a reference type in it that is allocated by the constructor, but there are no .NET Framework or XNA Framework value types that do this so it would have to be a struct you created or that was in some third-party library you were using for that to even become an issue). The following is a list of most all of value types you are likely to use in a generic XNA game: AudioCategory (used with XACT; not available on WP7) AvatarExpression (Xbox 360 only, but exposed on Windows to ease Xbox development) bool BoundingBox BoundingSphere byte char Color DateTime decimal double any enum (System.Enum itself is a class, but all enums are value types such that there are no GC allocations for enums) float GamePadButtons GamePadCapabilities GamePadDPad GamePadState GamePadThumbSticks GamePadTriggers GestureSample int IntPtr (rarely but occasionally used in XNA) KeyboardState long Matrix MouseState nullable structs (anytime you see, e.g. int? something, that ‘?’ denotes a nullable struct, also called a nullable type) Plane Point Quaternion Ray Rectangle RenderTargetBinding sbyte (though I’ve never seen it used since most people would just use a short) short TimeSpan TouchCollection TouchLocation TouchPanelCapabilities uint ulong ushort Vector2 Vector3 Vector4 VertexBufferBinding VertexElement VertexPositionColor VertexPositionColorTexture VertexPositionNormalTexture VertexPositionTexture Viewport So there you have it. 
That’s not quite a complete list, mind you. For example: There are various structs in the .NET framework you might make use of. I left out everything from the Microsoft.Xna.Framework.Graphics.PackedVector namespace, since everything in there ventures into the realm of advanced XNA programming anyway (n.b. every single instantiable thing in that namespace is a struct and thus a value type; there are also two interfaces but interfaces cannot be instantiated at all and thus don’t figure in to this discussion). There are so many enums you’re likely to use (PlayerIndex, SpriteSortMode, SpriteEffects, SurfaceFormat, etc.) that including them would’ve flooded the list and reduced its utility. So I went with “any enum” and trust that you can figure out what the enums are (and it’s rare to use ‘new’ with an enum anyway). That list also doesn’t include any of the pre-defined static instances of some of the classes (e.g. BlendState.AlphaBlend, BlendState.Opaque, etc.) which are already allocated such that using them doesn’t cause any new allocations and therefore doesn’t increase that 1 MB counter. That list also has a few misleading things. VertexElement, VertexPositionColor, and all the other vertex types are structs. But you’re only likely to ever use them as an array (for use with VertexBuffer or DynamicVertexBuffer), and all arrays are reference types (even arrays of value types such as VertexPositionColor[ ] or int[ ]). * So that’s it for now. The note below may be a bit confusing (it deals with how the GC works and how arrays are managed in .NET). If so, you can probably safely ignore it for now but feel free to ask any questions regardless. * Arrays of value types (where the value type doesn’t contain any reference type members) are much faster for the GC to examine than arrays of reference types, so there is a definite benefit to using arrays of value types where it makes sense. But creating arrays of value types does cause the GC’s allocation counter to increase. Indeed, allocating a large array of a value type is one of the quickest ways to increment the allocation counter since a .NET array is a sequential block of memory. An array of reference types is just a sequential block of references (typically 4 bytes each) while an array of value types is a sequential block of instances of that type. So for an array of Vector3s it would be 12 bytes each since each float is 4 bytes and there are 3 in a Vector3; for an array of VertexPositionNormalTexture structs it would typically be 32 bytes each since it has two Vector3s and a Vector2. (Note that there are a few additional bytes taken up in the creation of an array, typically 12 but sometimes 16 or possibly even more, which depend on the implementation details of the array type on the particular platform the code is running on).
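To put the closing arithmetic in perspective, here is a quick back-of-the-envelope calculation (illustrative only; it ignores the roughly 12-16 bytes of per-array overhead mentioned in the note) showing how few value-type array elements it takes to reach the 1 MB collection trigger:

    // Back-of-the-envelope check of the element sizes quoted above.
    using System;

    class AllocationEstimate
    {
        static void Main()
        {
            const int gcTriggerBytes = 1024 * 1024;        // ~1 MB allocation trigger on WP7/Xbox 360 CF

            int vector3Size = 3 * sizeof(float);           // 12 bytes
            int vpntSize = (3 + 3 + 2) * sizeof(float);    // two Vector3s + one Vector2 = 32 bytes

            Console.WriteLine("Vector3[] elements to reach 1 MB: {0}", gcTriggerBytes / vector3Size);
            Console.WriteLine("VertexPositionNormalTexture[] elements to reach 1 MB: {0}", gcTriggerBytes / vpntSize);
        }
    }

That works out to roughly 87,000 Vector3 elements or about 33,000 VertexPositionNormalTexture elements, so allocating even a couple of large vertex arrays per frame would trip collections very quickly.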

    Read the article

  • Taking the training wheels off: Accelerating the Business with Oracle IAM by Brian Mozinski (Accenture)

    - by Greg Jensen
Today, technical requirements for IAM are evolving rapidly, and the bar is continuously raised for high-performance IAM solutions as organizations look to roll out high-volume use cases on the back of legacy systems. Existing solutions were often designed and architected to support offline transactions and manual processes, and business owners today demand globally scalable infrastructure to support the growth their business cases are expected to deliver. To help IAM practitioners address these challenges and make their organizations and themselves more successful, in this series we will outline: • Taking the training wheels off: Accelerating the Business with Oracle IAM - the explosive growth in expectations for IAM infrastructure, and the business cases they support to gain investment in new security programs. • "Necessity is the mother of invention": Technical solutions developed in the field - well-proven tricks of the trade, used by IAM gurus to maximize your solution while addressing the requirements of global organizations. • The Art & Science of Performance Tuning of Oracle IAM 11gR2 - real-world examples of performance tuning with Oracle IAM. • Nowhere to go but up: Extending the benefits of accelerated IAM - anything is possible; compelling new solutions organizations are unlocking with accelerated Oracle IAM. Let’s get started by talking about the changing dynamics driving these discussions. Big companies are getting bigger every day, and increasingly organizations operate across state lines, multiple time zones, and in many countries or continents at the same time. No longer is midnight to 6am a safe time to take down the system for upgrades, to run recons and import or update user accounts and attributes. Further, IT organizations are operating as shared services, with SLAs at the levels their “clients” expect from a telephone carrier. Workers are moved in and out of roles on a weekly, daily, or even hourly basis, and IAM is expected to support those rapid changes. End users registering for services during business hours in Singapore expect their access to be green-lighted in custom apps hosted in Portugal within the hour. The old assumptions of asynchronous systems and batched updates are no longer adequate, and the number and types of users keep growing. When organizations acted more like independent teams at functional or geographic levels, it was manageable to have processes that relied on a handful of people who knew how to make things work and how to get you access to the key systems to get your job done. Today everyone is expected to do more with less: the finance administrator previously supporting their local Atlanta sales office might now be asked to help close the books for the Johannesburg team, and the access certification process once completed monthly by Joan on the 3rd floor is now handled by a shared pool of resources in Sao Paulo. Fragmented processes that rely on institutional knowledge to get access to systems and get work done quickly break down in these scenarios. Highly robust processes with automated workflows for connected or disconnected systems give organizations the dynamic flexibility to share work across these lines and cut costs or increase productivity. As IT industry computing paradigms continue to change with the passing of time, and as mature or proven approaches become clear, it is normal for organizations to adjust accordingly.
Businesses must manage identity in an increasingly hybrid world in which legacy on-premises IAM infrastructures are extended or replaced to support more and more interconnected and interdependent services for a wider range of users. The old legacy IAM implementation models we had relied on to manage identities no longer apply. End users expect to self-request access to services from their tablet, get supervisor approval over mobile devices and email, and launch the application even if it is hosted in the cloud, or run by a partner, vendor, or service provider. While user expectations are higher, they are also simpler: logging into custom desktop apps to request approvals, or going through email- or paper-based processes for certification, is unacceptable. Users expect security to operate within the paradigm of the application, i.e. to feel like the application they are using. Citizen- and customer-facing applications have evolved from everywhere: custom applications, 3rd-party tools, and systems merged in from acquired entities or 3rd-party OEM products resold to expand your portfolio of services. These all have their own user stores, authentication models, user lifecycles, session management, and so on. Often the designers and developers are no longer accessible and the documentation is limited. Bringing the underlying directories together to scale for growth and improve the user experience is critical for revenue, but also for operations. Job functions are more dynamic. Take the Olympics, for example: endless organizations, from corporations broadcasting, endorsing, or marketing through the event to non-profit athletic foundations and public/government entities for athletes and public safety, all operate simultaneously on the world stage. Each organization needs to spin up short-term teams, often dealing with proprietary information from hot ads to racing strategies or security plans. IAM is expected to let teams spin up, enable new applications, protect privacy, and secure critical infrastructure, and then it needs to be disabled just as quickly as users go back to their previous responsibilities. On a more technical level, businesses today need an optimized directory along with tuning guidelines and parameters. They need to make the right choices about virtual directories and the related design considerations, and the challenge is to assess and choose the correct architectural patterns (centralized, virtualized, and distributed) and deployment approaches (virtual, direct, or replicated) together with the tuning each requires. Today's business organizations are complex, heterogeneous enterprises that contain diverse and multifaceted information. With today's ever-changing global landscape, the strategic end goal for business in challenging times is business agility. The business of identity management requires enterprises to be more agile and more responsive than ever before. The continued proliferation of networked devices (PCs, tablets, PDAs, notebooks, etc.) has caused the number of devices, and of users granted access to them, to grow exponentially. Businesses need to deploy an IAM system that can keep up with the authentication and authorization demands of these devices. Increased innovation is forcing businesses and organizations to centralize their identity management services.
Access management needs to handle traditional web-based access as well as new innovations around mobile, and it must address insufficient governance processes that can lead to rogue identity accounts, which can then become a source of vulnerabilities within a business’s identity platform. Risk-based decisions present their own challenges: an adaptive risk model has to make proper access decisions via standard web single sign-on for internal and external customers. Organizations have to move beyond simple logins and passwords to address trusted-relationship questions such as: Is this a trusted customer, client, or citizen? Is this a trusted employee, vendor, or partner? Is this a trusted device? Without a solid technological foundation, organizational performance, collaboration, constituent services, and every other organizational process will languish. A single server location presents not only network concerns for a distributed user base, but also identity challenges. The network risks center on the latency of the long trip the traffic has to take; other risks concern performance and availability, and if the single identity server is lost, all access is lost. As you can see, there are many reasons why performance tuning IAM will have a substantial impact on the success of your organization. In our next installment in the series we roll up our sleeves and get into detailed tuning techniques used every day by thought leaders in the field implementing Oracle Identity & Access Management solutions.

    Read the article

  • How do I get my ubuntu server to listen for database connections?

    - by Bob Flemming
    I am having a problems connecting to my database outside of phpmyadmin. Im pretty sure this is because my server isn't listening on port 3306. When I type: sudo netstat -ntlp on my OTHER working server I can see the following line: tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 20445/mysqld However, this line does not appear on the server I am having difficulty with. How do I make my sever listen for mysql connections? Here my my.conf file: # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql #skip-networking=off #skip_networking=off #skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. #bind-address = 0.0.0.0 # # * Fine Tuning # key_buffer = 64M max_allowed_packet = 64M thread_stack = 650K thread_cache_size = 32 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 2M query_cache_size = 32M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 # # Error logging goes to syslog due to /etc/mysql/conf.d/mysqld_safe_syslog.cnf. # # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". 
# # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 32M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 32M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/
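For what it's worth, in the file shown above both bind-address and skip-networking are already commented out, so if mysqld still is not listening, the override usually lives in a file pulled in by the !includedir line (/etc/mysql/conf.d/), or the service simply was not restarted after the last change. A minimal sketch of the [mysqld] lines that matter (an assumption about the intended setup, not a drop-in file):

    [mysqld]
    port         = 3306
    bind-address = 0.0.0.0    # or a specific LAN IP; 127.0.0.1 hides 3306 from other hosts
    # skip-networking          # must stay commented out or absent, or port 3306 never opens
    # also grep /etc/mysql/conf.d/*.cnf for bind-address or skip-networking overrides,
    # then restart mysql and re-run: sudo netstat -ntlp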

    Read the article

  • Samba4 [homes] share

    - by SambaDrivesMeCrazy
    I am having issues with the [homes] share. OS is Ubuntu 12.04. I've installed samba 4.0.3, bind9 dlz, ntp, winbind, everything but pam modules, and did all the tests from https://wiki.samba.org/index.php/Samba_AD_DC_HOWTO. Running getent passwd and getent user work just fine. Creating a simple share works just fine too. I can manage the users, GPOs, and DNS from the windows mmc snap-ins. I can join winxp,7,8 to the domain and log on perfectly. I can change my passwords from windows, etc..etc.. I could say that everything is fine and be happy :) buuuut, no, home directories do not work. Searching in here, and on our good friend google I gathered that a simple [homes] read only = no path = /storage-server/users/ and mapping the user's home folder in dsa.msc to \\server-001\username or \\server-001\homes should get me a home share I could map for my user homedir. But the snap-in give me an error saying that it cannot create the home folder because the network name has not been found (rough translation from portuguese). also, running root@server-001:/storage-server/users# smbclient //server-001/test -Utest%'12345678' -c 'ls' Domain=[MYDOMAIN] OS=[Unix] Server=[Samba 4.0.3] tree connect failed: NT_STATUS_BAD_NETWORK_NAME Server name is alright, if I go for a simple share on the same server it opens just fine. If I map the user homedir to this simple share it works. What I want is that I dont have to go and manually make a new folder on linux everytime I create a new user on windows. It looks like permissions but I cant find any documentation on this (yes I've tried the manpages, but its hard to tell with so many options on man smb.conf alone). My smb.conf right now looks like this (pretty simple I know) # Global parameters [global] workgroup = MYDOMAIN realm = MYDOMAIN.LAN netbios name = SERVER-001 server role = active directory domain controller server services = s3fs, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate [netlogon] path = /usr/local/samba/var/locks/sysvol/mydomain.lan/scripts read only = No [sysvol] path = /usr/local/samba/var/locks/sysvol read only = No [homes] read only = no path = /storage-server/users Folder permissions /storage-server drwxr-xr-x 6 root root 4096 Fev 15 15:17 storage-server /storage-server/users drwxrwxrwx 6 root root 4096 Fev 18 17:05 users/ Yes, I was desperate enough to set 777 on the users folder... not proud of it. Any pointers in the right direction would be very welcome. Edited to include: root@server-001:/# wbinfo --user-info=test MYDOMAIN\test:*:3000045:100:test:/home/MYDOMAIN/test:/bin/false root@server-001:/# wbinfo -n test S-1-5-21-1957592451-3401938807-633234758-1128 SID_USER (1) root@server-001:/# id test uid=3000045(MYDOMAIN\test) gid=100(users) grupos=100(users) root@server-001:/# wbinfo -U 3000045 S-1-5-21-1957592451-3401938807-633234758-1128 root@server-001:/# Edit 2: getent passwd | grep test MYDOMAIN\test:*:3000045:100:test:/home/MYDOMAIN/test:/bin/false I have no idea how to change that home folder to /storage-server/users/test so I just went and ln -s /storage-server/users /home/MYDOMAIN just in case. still, no changes, same errors. Edit 3 On log.smbd I get the following error when trying to set the test user home folder to \server-001\test [2013/02/20 14:22:08.446658, 2] ../source3/smbd/service.c:418(create_connection_session_info) user 'MYDOMAIN\Administrator' (from session setup) not permitted to access this share (test)
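One commonly used pattern for this (an assumption on my part, not something tested against this exact AD DC setup) is to point [homes] at a per-user path and let a preexec hook create the directory on first connect, so creating a user in dsa.msc never needs a manual mkdir on the Linux side:

    [homes]
        read only    = no
        browseable   = no
        path         = /storage-server/users/%S
        valid users  = %S
        root preexec = /bin/mkdir -p /storage-server/users/%S

%S expands to the requested service name, which for the [homes] section is the connecting user's name, so each user lands in /storage-server/users/<username>. With the directory created and owned per user, the 777 on /storage-server/users can also be tightened back up.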

    Read the article

  • ssh authentication nfs

    - by user40135
    Hi all I would like to do ssh from machine "ub0" to another machine "ub1" without using passwords. I setup using nfs on "ub0" but still I am asked to insert a password. Here is my scenario: * machine ub0 and ub1 have the same user "mpiu", with same pwd, same userid, and same group id * the 2 servers are sharing a folder that is the HOME directory for "mpiu" * I did a chmod 700 on the .ssh * I created a key using ssh-keygene -t dsa * I did "cat id_dsa.pub authorized_keys". On this last file I tried also chmod 600 and chmod 640 * off course I can guarantee that on machine ub1 the user "shared_user" can see the same fodler that wes mounted with no problem. Below the content of my .ssh folder Code: authorized_keys id_dsa id_dsa.pub known_hosts After all of this calling wathever function "ssh ub1 hostname" I am requested my password. Do you know what I can try? I also UNcommented in the ssh_config file for both machines this line IdentityFile ~/.ssh/id_dsa I also tried ssh -i $HOME/.ssh/id_dsa mpiu@ub1 Below the ssh -vv Code: OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 OpenSSH_5.1p1 Debian-3ubuntu1, OpenSSL 0.9.8g 19 Oct 2007 debug1: Reading configuration data /etc/ssh/ssh_config debug1: Applying options for * debug2: ssh_connect: needpriv 0 debug1: Connecting to ub1 [192.168.2.9] port 22. debug1: Connection established. debug2: key_type_from_name: unknown key type '-----BEGIN' debug2: key_type_from_name: unknown key type '-----END' debug1: identity file /mirror/mpiu/.ssh/id_dsa type 2 debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024 debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024 debug1: Remote protocol version 2.0, remote software version lshd-2.0.4 lsh - a GNU ssh debug1: no match: lshd-2.0.4 lsh - a GNU ssh debug1: Enabling compatibility mode for protocol 2.0 debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-3ubuntu1 debug2: fd 3 setting O_NONBLOCK debug1: SSH2_MSG_KEXINIT sent debug1: SSH2_MSG_KEXINIT received debug2: kex_parse_kexinit: diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,ssh-dss debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128,arcfour256,arcfour,aes192-cbc,aes256-cbc,[email protected],aes128-ctr,aes192-ctr,aes256-ctr debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96 debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: none,[email protected],zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: kex_parse_kexinit: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 debug2: kex_parse_kexinit: ssh-rsa,spki-sign-rsa debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: aes256-cbc,3des-cbc,blowfish-cbc,arcfour debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: hmac-sha1,hmac-md5 debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: none,zlib debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: debug2: kex_parse_kexinit: 
first_kex_follows 0 debug2: kex_parse_kexinit: reserved 0 debug2: mac_setup: found hmac-md5 debug1: kex: server-client 3des-cbc hmac-md5 none debug2: mac_setup: found hmac-md5 debug1: kex: client-server 3des-cbc hmac-md5 none debug2: dh_gen_key: priv key bits set: 183/384 debug2: bits set: 1028/2048 debug1: sending SSH2_MSG_KEXDH_INIT debug1: expecting SSH2_MSG_KEXDH_REPLY debug1: Host 'ub1' is known and matches the RSA host key. debug1: Found key in /mirror/mpiu/.ssh/known_hosts:1 debug2: bits set: 1039/2048 debug1: ssh_rsa_verify: signature correct debug2: kex_derive_keys debug2: set_newkeys: mode 1 debug1: SSH2_MSG_NEWKEYS sent debug1: expecting SSH2_MSG_NEWKEYS debug2: set_newkeys: mode 0 debug1: SSH2_MSG_NEWKEYS received debug1: SSH2_MSG_SERVICE_REQUEST sent debug2: service_accept: ssh-userauth debug1: SSH2_MSG_SERVICE_ACCEPT received debug2: key: /mirror/mpiu/.ssh/id_dsa (0xb874b098) debug1: Authentications that can continue: password,publickey debug1: Next authentication method: publickey debug1: Offering public key: /mirror/mpiu/.ssh/id_dsa debug2: we sent a publickey packet, wait for reply debug1: Authentications that can continue: password,publickey debug2: we did not send a packet, disable method debug1: Next authentication method: password mpiu@ub1's password: I hangs here!

    Read the article

  • A Semantic Model For Html: TagBuilder and HtmlTags

    - by Ryan Ohs
    In this post I look into the code smell that is HTML literals and show how we can refactor these pesky strings into a friendlier and more maintainable model.   The Problem When I started writing MVC applications, I quickly realized that I built a lot of my HTML inside HtmlHelpers. As I did this, I ended up moving quite a bit of HTML into string literals inside my helper classes. As I wanted to add more attributes (such as classes) to my tags, I needed to keep adding overloads to my helpers. A good example of this end result is the default html helpers that come with the MVC framework. Too many overloads make me crazy! The problem with all these overloads is that they quickly muck up the API and nobody can remember exactly what order the parameters go in. I've seen many presenters (including members of the ASP.NET MVC team!) stumble before realizing that their view wasn't compiling because they needed one more null parameter in the call to Html.ActionLink(). What if instead of writing Html.ActionLink("Edit", "Edit", null, new { @class = "navigation" }) we could do Html.LinkToAction("Edit").Text("Edit").AddClass("navigation") ? Wouldn't that be much easier to remember and understand?  We can do this if we introduce a semantic model for building our HTML.   What is a Semantic Model? According to Martin Folwer, "a semantic model is an in-memory representation, usually an object model, of the same subject that the domain specific language describes." In our case, the model would be a set of classes that know how to render HTML. By using a semantic model we can free ourselves from dealing with strings and instead output the HTML (typically via ToString()) once we've added all the elements and attributes we desire to the model. There are two primary semantic models available in ASP.NET MVC: MVC 2.0's TagBuilder and FubuMVC's HtmlTags.   TagBuilder TagBuilder is the html builder that is available in ASP.NET MVC 2.0. I'm not a huge fan but it gets the job done -- for simple jobs.  Here's an overview of how to use TagBuilder. See my Tips section below for a few comments on that example. The disadvantage of TagBuilder is that unless you wrap it up with our own classes, you still have to write the actual tag name over and over in your code. eg. new TagBuilder("div") instead of new DivTag(). I also think it's method names are a little too long. Why not have AddClass() instead of AddCssClass() or Text() instead of SetInnerText()? What those methods are doing should be pretty obvious even in the short form. I also don't like that it wants to generate an id attribute from your input instead of letting you set it yourself using external conventions. (See GenerateId() and IdAttributeDotReplacement)). Obviously these come from Microsoft's default approach to MVC but may not be optimal for all programmers.   HtmlTags HtmlTags is in my opinion the much better option for generating html in ASP.NET MVC. It was actually written as a part of FubuMVC but is available as a stand alone library. HtmlTags provides a much cleaner syntax for writing HTML. There are classes for most of the major tags and it's trivial to create additional ones by inheriting from HtmlTag. There are also methods on each tag for the common attributes. For instance, FormTag has an Action() method. The SelectTag class allows you to set the default option or first option independent from adding other options. With TagBuilder there isn't even an abstraction for building selects! The project is open source and always improving. 
I'll hopefully find time to submit some of my own enhancements soon.   Tips 1) It's best not to have insanely overloaded html helpers. Use fluent builders. 2) In html helpers, return the TagBuilder/tag itself (not a string!) so that you can continue to add attributes outside the helper; see my first sample above. 3) Create a static entry point into your builders. I created a static Tags class that gives me access to all the HtmlTag classes I need. This way I don't clutter my code with "new" keywords. e.g. Tags.Div returns a new DivTag instance. 4) If you find yourself doing something a lot, create an extension method for it. I created a Nest() extension method that reads much more fluently than the AddChildren() method. It also accepts a params array of tags so I can very easily nest many children.   I hope you have found this post helpful. Join me in my war against HTML literals! I’ll have some more samples of how I use HtmlTags in future posts.
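For tips 3 and 4, the idea looks roughly like this. It is a guess at the shape of the code rather than the author's actual implementation; HtmlTag and DivTag come from the HtmlTags library, the rest is illustrative:

    // Rough sketch of a static entry point and a Nest() extension method.
    using HtmlTags;

    public static class Tags
    {
        public static DivTag Div { get { return new DivTag(); } }
        public static HtmlTag Span { get { return new HtmlTag("span"); } }
    }

    public static class HtmlTagExtensions
    {
        // Appends each child in order and returns the parent so the fluent chain keeps going.
        public static HtmlTag Nest(this HtmlTag parent, params HtmlTag[] children)
        {
            foreach (var child in children)
                parent.Append(child);
            return parent;
        }
    }

    // usage: Tags.Div.AddClass("navigation").Nest(Tags.Span.Text("Edit"), Tags.Span.Text("Delete"))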

    Read the article

  • Windows can see Ubuntu Server printer, but can't print to it

    - by Mike
    I have an old desktop that I'm trying to set up as a home backup/print server. Backup was trivial, but am having issues getting the printing to work. The printer is connected to the server running Ubuntu Server 9.10 (no gui). If I access the printer via http://hostname:631/printers/, I am able to print a test page, so I know the printer is working; however, I am having no luck from Windows. Windows can see the printer when browsed via \hostname\, but I am unable to connect. Windows says "Windows cannot connect to the printer" without indicating why. Any suggestions? From /etc/samba/smb.conf: [global] workgroup = WORKGROUP dns proxy = no security = user username map = /etc/samba/smbusers encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes map to guest = bad user load printers = yes printing = cups printcap name = cups [printers] comment = All Printers browseable = no path = /var/spool/samba writable = no printable = yes guest ok = yes read only = yes create mask = 0700 [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes read only = yes guest ok = yes From /etc/cups/cupsd.conf: LogLevel warn SystemGroup lpadmin Port 631 Listen /var/run/cups/cups.sock Browsing On BrowseOrder allow,deny BrowseAllow all BrowseRemoteProtocols CUPS BrowseAddress @LOCAL BrowseLocalProtocols CUPS dnssd DefaultAuthType Basic <Location /> Order allow,deny Allow all </Location> <Location /admin> Order allow,deny Allow all </Location> <Location /admin/conf> AuthType Default Require user @SYSTEM Order allow,deny Allow all </Location> <Policy default> <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit CUPS-Authenticate-Job> Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order deny,allow </Limit> </Policy> <Policy authenticated> <Limit Create-Job Print-Job Print-URI> AuthType Default Order deny,allow </Limit> <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job CUPS-Get-Document> AuthType Default Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs 
Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs> AuthType Default Require user @SYSTEM Order deny,allow </Limit> <Limit Cancel-Job CUPS-Authenticate-Job> AuthType Default Require user @OWNER @SYSTEM Order deny,allow </Limit> <Limit All> Order deny,allow </Limit> </Policy>
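When Windows refuses the Samba share without giving a reason, it is often because it is trying to fetch a driver from the (empty) [print$] share. A common workaround, offered here as a suggestion rather than a fix for the smb.conf above, is to skip Samba entirely and add the printer in Windows as a network printer pointing straight at CUPS over IPP, using the same URL that already prints the test page:

    http://hostname:631/printers/PrinterName

where PrinterName is the queue name shown on the CUPS printers page; the driver is then installed locally on the Windows side and CUPS just does the spooling.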

    Read the article

  • Postfix sasl login failing no mechanism found

    - by Nat45928
    following the link here: http://flurdy.com/docs/postfix/ with posfix, courier, MySql, and sasl gave me a web server that has imap functionality working fine but when i go to log into the server to send a message using the same user id and password for connecting the the imap server it rejects my login to the smtp server. If i do not specify a login for the outgoing mail server then it will send the message just fine. the error in postfix's log is: Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: connect from unknown[10.0.0.50] Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: SASL authentication failure: unable to canonify user and get auxprops Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL DIGEST-MD5 authentication failed: no mechanism available Jul 6 17:26:10 Sj-Linux postfix/smtpd[19139]: warning: unknown[10.0.0.50]: SASL LOGIN authentication failed: no mechanism available Ive checked all usernames and passwords for mysql. what could be going wrong? edit: here is some other information: installed libraires for postfix, courier and sasl: aptitude install postfix postfix-mysql aptitude install libsasl2-modules libsasl2-modules-sql libgsasl7 libauthen-sasl-cyrus-perl sasl2-bin libpam-mysql aptitude install courier-base courier-authdaemon courier-authlib-mysql courier-imap courier-imap-ssl courier-ssl and here is my /etc/postfix/main.cf myorigin = domain.com smtpd_banner = $myhostname ESMTP $mail_name biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. #myhostname = my hostname alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname local_recipient_maps = mydestination = relayhost = mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all mynetworks_style = host # how long if undelivered before sending warning update to sender delay_warning_time = 4h # will it be a permanent error or temporary unknown_local_recipient_reject_code = 450 # how long to keep message on queue before return as failed. # some have 3 days, I have 16 days as I am backup server for some people # whom go on holiday with their server switched off. maximal_queue_lifetime = 7d # max and min time in seconds between retries if connection failed minimal_backoff_time = 1000s maximal_backoff_time = 8000s # how long to wait when servers connect before receiving rest of data smtp_helo_timeout = 60s # how many address can be used in one message. # effective stopper to mass spammers, accidental copy in whole address list # but may restrict intentional mail shots. # but may restrict intentional mail shots. smtpd_recipient_limit = 16 # how many error before back off. smtpd_soft_error_limit = 3 # how many max errors before blocking it. 
smtpd_hard_error_limit = 12 # Requirements for the HELO statement smtpd_helo_restrictions = permit_mynetworks, permit # Requirements for the sender details smtpd_sender_restrictions = permit_sasl_authenticated, permit_mynetworks, warn_if_reject reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_pipelining, permit # Requirements for the connecting server smtpd_client_restrictions = reject_rbl_client sbl.spamhaus.org, reject_rbl_client blackholes.easynet.nl, reject_rbl_client dnsbl.njabl.org # Requirement for the recipient address smtpd_recipient_restrictions = reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_unauth_destination, permit smtpd_data_restrictions = reject_unauth_pipelining # require proper helo at connections smtpd_helo_required = yes # waste spammers time before rejecting them smtpd_delay_reject = yes disable_vrfy_command = yes # not sure of the difference of the next two # but they are needed for local aliasing alias_maps = hash:/etc/postfix/aliases alias_database = hash:/etc/postfix/aliases # this specifies where the virtual mailbox folders will be located virtual_mailbox_base = /var/spool/mail/virtual # this is for the mailbox location for each user virtual_mailbox_maps = mysql:/etc/postfix/mysql_mailbox.cf # and this is for aliases virtual_alias_maps = mysql:/etc/postfix/mysql_alias.cf # and this is for domain lookups virtual_mailbox_domains = mysql:/etc/postfix/mysql_domains.cf # this is how to connect to the domains (all virtual, but the option is there) # not used yet # transport_maps = mysql:/etc/postfix/mysql_transport.cf virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 # SASL smtpd_sasl_auth_enable = yes # If your potential clients use Outlook Express or other older clients # this needs to be set to yes broken_sasl_auth_clients = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain =
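One thing worth checking with this flurdy-style setup: smtpd takes its mechanism list from a separate Cyrus SASL file rather than from main.cf (commonly /etc/postfix/sasl/smtpd.conf on Ubuntu), and "no mechanism available" together with "unable to canonify user and get auxprops" usually points at that file being missing or not wired to the sql auxprop plugin. A sketch of what it typically looks like; the path, credentials, and query are placeholders to adapt, not values taken from this question:

    # /etc/postfix/sasl/smtpd.conf  (assumed location; values are placeholders)
    pwcheck_method: auxprop
    auxprop_plugin: sql
    mech_list: plain login
    sql_engine: mysql
    sql_hostnames: 127.0.0.1
    sql_user: mail
    sql_passwd: mail_db_password
    sql_database: maildb
    sql_select: select crypt from users where id = '%u@%r' and enabled = 1

mech_list is deliberately limited to plain and login: DIGEST-MD5 and CRAM-MD5 only work when passwords are stored in plain text, which this kind of schema normally does not do, and advertising them is one way to end up with exactly the DIGEST-MD5 failure shown in the log.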

    Read the article

  • How to recover data files from xampp-windows to xampp-linux after crash?

    - by David Buehler
    My Windows box died after I developed a database in xampp on it; fortunately I have a backup of the entire F:/TestWeb/Xampp partition. Unfortunately, I did not do an Export (nor dump) of the "Lws2" database before the crash. I have replaced the defunct machine with one running Mint7 (based on Ubuntu 9.04 "Jaunty Jackalope") and installed xampp-linux into the /opt partition, so the new xampp now runs fine in /opt/lampp, and says all the elements are secured by passwords (which I just assigned during this installation.) I assumed that Xamp-Windows installed in November would migrate easily to xampp-linux installed iin February -- a bad assumption. It apparently would have been simple if I had known enough to do an Export or a Dump before the crash, but.... The backup was done to a Network Attached Storage drive, which is formatted as "vfat" so the backup does not carry with it any valid ownership permissions from MySql on NTFS. I now see from my backup that the old data resided in \TestWeb\Xampp\Mysql\Data\Lws2\ and consists of 7 ".frm" files which define my tables. The actual data -- I suppose a ".sql" file or files -- has disappeared, and I am resigning myself to two days of retyping it. But I do not wish to do the table layouts all over again. So I copied Data tree to /opt/lampp/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/Lws2 -- PhpMyAdmin does not see it. So I copied Data tree to /opt/lampp/var/mysql/Data -- PhpMyAdmin does not see it. So I copied Lws2 tree to /opt/lampp/var/mysql/Lws2 -- PhpMyAdmin does not see it. So I adjusted all the permissions to stop saying owner "nobody" to owner "root" and gave full permissions to all groups and to all others, with permissions percolating down, in all 4 trees. You guessed it -- PhpMyAdmin does not see any database named Lws2, only its 4 default ones. I double-checked the permissions and rebooted Linux and repeated the tests. At some point in that process I did see PhpMyAdmin showing "lws2(7)" but when I clicked on it I saw a "no table found" message. I have not been able to recreate that experience. Apparently there are some setup files for MySql and for PhpMyAdmin which need to be set up by running a wizard or two or by editing the files directly. I grepped the TestWeb tree and found an old "ldir = "C:TestWeb\Xampp\MySql\" and a "DataDir = C:TestWeb\Xampp\MySql\" in a .php file and in a .bat file, but I cannot find the corresponding config file names on the /opt partition/ -- so it looks as if these wizards have not been run to create them. What config files files does Linux use to setup MySql config files for PhpMyAdmin? What wizards do I need to run to point the MySql engine and the PhpMyAdmin at the folder /opt/lampp/data/ with its lws2 folder inside it? Or which files do I need to edit, with a sample of what it normally says under Linux? Incidentally, I remember I converted from MyISAM with its .MYD and .MYI files to InnoDB after entering only a small amount of the data -- and I do not know what file types to look for -- perhaps my data is still there but under another guise or in another place? Is it something as simple as linux needing to see "/data/" instead of /Data? I will check that out while waiting for a response. If anyone can point me to documentation that discusses this level of detail -- I will read it avidly! In any case, thanks for any clarification you can give on this thorny problem. wizdum
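One detail that may explain the "disappeared" data: after the conversion to InnoDB, the row data no longer lives under the Lws2 folder at all. It sits in the shared tablespace files (ibdata1 plus the ib_logfile* files) at the top of the old Data directory, while Lws2 only holds the .frm definitions, so the backup most likely still contains everything. The ibdata1 and log files then have to be restored alongside the .frm files into the data directory that xampp-linux actually reads. A sketch of how the pieces usually line up; the paths are the stock xampp-linux defaults and should be verified on your install:

    # /opt/lampp/etc/my.cnf  (assumed default location for xampp-linux)
    [mysqld]
    datadir = /opt/lampp/var/mysql

    # the restored tree should then look roughly like:
    #   /opt/lampp/var/mysql/ibdata1        <- InnoDB row data from the old Data folder
    #   /opt/lampp/var/mysql/ib_logfile0    <- matching log files from the same backup
    #   /opt/lampp/var/mysql/ib_logfile1
    #   /opt/lampp/var/mysql/Lws2/*.frm     <- table definitions (directory name must match the
    #                                          database name exactly; case matters on Linux)
    # everything owned by the user mysqld runs as on this install, then restart lampp.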

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have setup your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with a flexibility to define new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made); the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to actual question as to what is the impact in having a user-defined customer event.   System defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are few programs which have routines to first validate the completion of disconnection field activities, which were raised as a result of customer event CNP - Cut for Non-payment in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from other Severance Event, specifically CNP - Cut for Non-Payment. 
    The post cancel algorithm provided by the product, SEV POST CAN, does the following (below is the algorithm's description):

    This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.

    Note the reference to a completed 'disconnect' event: this algorithm implicitly checks for Field Activities in Completed status that were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.

    If we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but it was not able to find any completed CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had instead been generated and completed for the customer event CNPW - Cut for Non-payment for Water Services.

    To conclude: if you introduce new customer events that extend or simulate the base customer events included in the product, ensure that there is no other impact, direct or indirect, on the other business functions the application offers.

    Read the article

  • Samba with remote LDAP authentication doesn't see users properly

    - by LucasBr
    I'm trying to set up a Samba server authenticated against a remote LDAP server, and I'm having some problems that I can't figure out how to solve. I was able to run getent passwd on the Samba server and could see all the users from the LDAP server, but when I tried to access \\SAMBASERVER from my Windows box I got this in /var/log/samba/log.mywindowsbox:

    <...snip...>
    [2012/10/19 13:05:22.449684, 2] smbd/sesssetup.c:1413(setup_new_vc_session) setup_new_vc_session: New VC == 0, if NT4.x compatible we would close all old resources.
    [2012/10/19 13:05:22.449692, 3] smbd/sesssetup.c:1212(reply_sesssetup_and_X_spnego) Doing spnego session setup
    [2012/10/19 13:05:22.449701, 3] smbd/sesssetup.c:1254(reply_sesssetup_and_X_spnego) NativeOS=[] NativeLanMan=[] PrimaryDomain=[]
    [2012/10/19 13:05:22.449717, 3] libsmb/ntlmssp.c:747(ntlmssp_server_auth) Got user=[lucas] domain=[BUSINESS] workstation=[MYWINDOWSBOX] len1=24 len2=24
    [2012/10/19 13:05:22.449747, 3] auth/auth.c:216(check_ntlm_password) check_ntlm_password: Checking password for unmapped user [BUSINESS]\[lucas]@[MYWINDOWSBOX] with the new password interface
    [2012/10/19 13:05:22.449759, 3] auth/auth.c:219(check_ntlm_password) check_ntlm_password: mapped user is: [SAMBASERVER]\[lucas]@[MYWINDOWSBOX]
    [2012/10/19 13:05:22.449773, 3] smbd/sec_ctx.c:210(push_sec_ctx) push_sec_ctx(0, 0) : sec_ctx_stack_ndx = 1
    [2012/10/19 13:05:22.449783, 3] smbd/uid.c:429(push_conn_ctx) push_conn_ctx(0) : conn_ctx_stack_ndx = 0
    [2012/10/19 13:05:22.449791, 3] smbd/sec_ctx.c:310(set_sec_ctx) setting sec ctx (0, 0) - sec_ctx_stack_ndx = 1
    [2012/10/19 13:05:22.449922, 2] lib/smbldap.c:950(smbldap_open_connection) smbldap_open_connection: connection opened
    [2012/10/19 13:05:23.001517, 3] lib/smbldap.c:1166(smbldap_connect_system) ldap_connect_system: successful connection to the LDAP server
    [2012/10/19 13:05:23.007713, 3] smbd/sec_ctx.c:418(pop_sec_ctx) pop_sec_ctx (0, 0) - sec_ctx_stack_ndx = 0
    [2012/10/19 13:05:23.007733, 3] auth/auth_sam.c:399(check_sam_security) check_sam_security: Couldn't find user 'lucas' in passdb.
    [2012/10/19 13:05:23.007743, 2] auth/auth.c:314(check_ntlm_password) check_ntlm_password: Authentication for user [lucas] -> [lucas] FAILED with error NT_STATUS_NO_SUCH_USER
    [2012/10/19 13:05:23.007760, 3] smbd/error.c:80(error_packet_set) error packet at smbd/sesssetup.c(111) cmd=115 (SMBsesssetupX) NT_STATUS_LOGON_FAILURE
    [2012/10/19 13:05:23.010469, 3] smbd/process.c:1489(process_smb) Transaction 3 of length 142 (0 toread)
    <...snip...>

    The /etc/samba/smb.conf file follows:

    [global]
    dos charset = 850
    unix charset = LOCALE
    workgroup = BUSINESS
    netbios name = SAMBASERVER
    bind interfaces only = true
    interfaces = lo eth0 eth1
    smb ports = 139
    hosts deny = All
    hosts allow = 192.168.78. 192.168.255. 127.0.0.1 10.149.122. 192.168.0.
    name resolve order = wins bcast hosts
    log level = 3
    syslog = 0
    log file = /var/log/samba/log.%m
    max log size = 100000
    domain logons = No
    wins support = Yes
    wins proxy = No
    client ntlmv2 auth = Yes
    lanman auth = Yes
    ntlm auth = Yes
    dns proxy = Yes
    time server = Yes
    security = user
    encrypt passwords = Yes
    obey pam restrictions = Yes
    ldap password sync = Yes
    unix password sync = Yes
    passdb backend = ldapsam:"ldap://192.168.78.206"
    ldap ssl = off
    ldap admin dn = uid=root,ou=Users,dc=business,dc=intranet
    ldap suffix =
    ldap group suffix = ou=Groups
    ldap user suffix = ou=Users
    ldap machine suffix = ou=Computers
    ldap idmap suffix = ou=Idmap
    ldap delete dn = Yes
    add user script = /usr/sbin/smbldap-useradd -m "%u"
    delete user script = /usr/sbin/smbldap-userdel "%u"
    add group script = /usr/sbin/smbldap-groupadd -p "%g"
    delete group script = /usr/sbin/smbldap-groupdel "%g"
    add user to group script = /usr/sbin/smbldap-groupmod -m "%u" "%g"
    delete user from group script = /usr/sbin/smbldap-groupmod -x "%u" "%g"
    set primary group script = /usr/sbin/smbldap-usermod -g "%g" "%u"
    add machine script = /usr/sbin/smbldap-useradd -W -t5 "%u"
    idmap backend = ldap:"ldap://192.168.78.206"
    idmap uid = 16777216-33554431
    idmap gid = 16777216-33554431
    load printers = No
    printcap name = /dev/null
    map acl inherit = Yes
    map untrusted to domain = Yes
    enable privileges = Yes
    veto files = /lost+found/ /publicftp/

    So, \\SAMBASERVER says it couldn't find my user, but I can see that user with getent passwd. What can I do so that SAMBASERVER sees and authenticates my user? Thanks in advance!
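
    For readers debugging the same failure: the "check_sam_security: Couldn't find user 'lucas' in passdb" message usually means the LDAP entry for the user is visible to NSS (hence getent passwd works) but carries no Samba account attributes (the sambaSamAccount objectClass), so the ldapsam backend cannot find it. The commands below are only a hedged starting point, assuming the server, suffixes and admin DN from the smb.conf above; adjust them to your directory before running anything.

    # Check whether the user's LDAP entry actually has Samba attributes
    ldapsearch -x -H ldap://192.168.78.206 -b "ou=Users,dc=business,dc=intranet" "(uid=lucas)" objectClass sambaSID

    # Make sure Samba knows the ldap admin dn password (stored in secrets.tdb),
    # then create/refresh the Samba account data for the user in LDAP
    smbpasswd -w 'ldap-admin-password'
    smbpasswd -a lucas

    Note also that "ldap suffix" appears to be empty in the configuration shown above; with the ldapsam backend it normally needs to be set to the base DN (here presumably dc=business,dc=intranet) so that the relative group/user/machine suffixes resolve.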

    Read the article

  • Simple Project Templates

    - by Geertjan
    The NetBeans sources include a module named "simple.project.templates": In the module sources, Tim Boudreau turns out to be the author of the code, so I asked him what it was all about, and if he could provide some usage code. His response, from approximately this time last year because it's been sitting in my inbox for a while, is below.

    Sure - though I think the javadoc in it is fairly complete. I wrote it because I needed to create a bunch of project templates for Javacard, and all of the ways that is usually done were grotesque and complicated. I figured we already have the ability to create files from templates, and we already have the ability to do substitutions in templates, so why not have a single file that defines the project as a list of file templates to create (with substitutions in the names) and some definitions of what should be in project properties. You can also add files to the project programmatically if you want.

    Basically, a template for an entire project is a .properties file. Any line which doesn't have the prefix 'pp.' or 'pvp.' is treated as the definition of one file which should be created in the new project. Any such line where the key ends in * means that file should be opened once the new project is created. So, for example, in the nodejs module, the definition looks like:

    {{projectName}}.js*=Templates/javascript/HelloWorld.js
    .npmignore=node_hidden_templates/npmignore

    So, the first line means:
    - Create a file with the same name as the project, using the HelloWorld template
      - I.e. the left side of the line is the relative path of the file to create, and the right side is the path in the system filesystem for the template to use
      - If the template is not one you normally want users to see, just register it in the system filesystem somewhere other than Templates/ (but remember to set the attribute that marks it as a template)
    - Include that file in the set of files which should be opened in the editor once the new project is created.

    To actually create a project, first you just create a new ProjectCreator:

    ProjectCreator gen = new ProjectCreator( parentFolderOfNewProject );

    Now, if you want to programmatically generate any files, in addition to those defined in the template, you can:

    gen.add (new FileCreator("nbproject", "project.xml", false) {
        public DataObject create (FileObject project, Map<String,String> substitutions) throws IOException {
             ...
        }
    });

    Then pass the FileObject for the project template (the properties file) to the ProjectCreator's createProject method (hmm, maybe it should be the string path to the project template instead, to save the caller trouble looking up the FileObject for the template). That method looks like this:

    public final GeneratedProject createProject(final ProgressHandle handle, final String name, final FileObject template, final Map<String, String> substitutions) throws IOException

    The name parameter should be the directory name for the new project; the map is the strings you gathered in the wizard which should be used for substitutions. createProject should be called on a background thread (i.e. use a ProgressInstantiatingIterator for the wizard iterator and just pass in the ProgressHandle you are given). The return value is a GeneratedProject object, which is just a holder for the created project directory and the set of DataObjects which should be opened when the wizard finishes.
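
    To make the flow above a bit more concrete, here is a rough sketch of how those calls might be strung together inside a wizard's instantiate step. It is only an illustration: the imports for ProjectCreator, FileCreator and GeneratedProject are omitted because their package depends on where the simple.project.templates module lives in your checkout, and the template path used here is a made-up placeholder.

    import java.io.IOException;
    import java.util.Map;
    import org.netbeans.api.progress.ProgressHandle;
    import org.openide.filesystems.FileObject;
    import org.openide.filesystems.FileUtil;
    import org.openide.loaders.DataObject;
    // imports for ProjectCreator, FileCreator and GeneratedProject omitted -- they are
    // internal to the simple.project.templates module, so the package depends on your checkout

    public class ProjectFromTemplateSketch {

        GeneratedProject createSkeleton(ProgressHandle handle, FileObject parentFolder,
                String projectDirName, Map<String, String> substitutions) throws IOException {
            // The creator is pointed at the folder under which the new project
            // directory will be created.
            ProjectCreator gen = new ProjectCreator(parentFolder);

            // Optionally add files that are generated in code rather than listed
            // in the template .properties file.
            gen.add(new FileCreator("nbproject", "project.xml", false) {
                public DataObject create(FileObject project, Map<String, String> subs) throws IOException {
                    FileObject fo = FileUtil.createData(project, "nbproject/project.xml");
                    // ... write the file's contents here, applying substitutions as needed ...
                    return DataObject.find(fo);
                }
            });

            // The template is the project .properties file registered in the system filesystem;
            // the path below is a hypothetical example, not something shipped with NetBeans.
            FileObject template = FileUtil.getConfigFile("my-module/templates/MyProject.properties");

            // Call this from a background thread, e.g. from a ProgressInstantiatingIterator,
            // passing along the ProgressHandle you were given.
            return gen.createProject(handle, projectDirName, template, substitutions);
        }
    }
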
    I'd love to see simple.project.templates moved out of the javacard cluster, as it is really useful and much simpler than any of the stuff currently done for generating projects. It would also be possible to do much richer tools for creating projects in apisupport - i.e. choose (or create in the wizard) the templates you want to use, generate a skeleton wizard with a UI for all the properties you'd like to substitute, etc.

    Here is a partial project template from Javacard - for example usage, see org.netbeans.modules.javacard.wizard.ProjectWizardIterator in javacard.project (or the much simpler one in contrib/nodejs).

    #This properties file describes what to create when a project template is
    #instantiated.  The keys are paths on disk relative to the project root.
    #The values are paths to the templates to use for those files in the system
    #filesystem.  Any string inside {{ and }}'s will be substituted using properties
    #gathered in the template wizard.
    #Special key prefixes are
    #  pp. - indicates an entry for nbproject/project.properties
    #  pvp. - indicates an entry for nbproject/private/private.properties

    #File templates, in format [path-in-project=path-to-template]
    META-INF/javacard.xml=org-netbeans-modules-javacard/templates/javacard.xml
    META-INF/MANIFEST.MF=org-netbeans-modules-javacard/templates/EAP_MANIFEST.MF
    APPLET-INF/applet.xml=org-netbeans-modules-javacard/templates/applet.xml
    scripts/{{classnamelowercase}}.scr=org-netbeans-modules-javacard/templates/test.scr
    src/{{packagepath}}/{{classname}}.java*=Templates/javacard/ExtendedApplet.java
    nbproject/deployment.xml=org-netbeans-modules-javacard/templates/deployment.xml

    #project.properties contents
    pp.display.name={{projectname}}
    pp.platform.active={{activeplatform}}
    pp.active.device={{activedevice}}
    pp.includes=**
    pp.excludes=

    I will be using the above info in an upcoming blog entry and provide step by step instructions showing how to use them. However, anyone else out there should have enough info from the above to get started yourself!

    Read the article
