Search Results

Search found 3474 results on 139 pages for 'prepared statements'.

  • Are Python list comprehensions always a good programming practice?

    - by dln385
    To make the question clear, I'll use a specific example. I have a list of college courses, and each course has a few fields (all of which are strings). The user gives me a string of search terms, and I return a list of courses that match all of the search terms. This can be done in a single list comprehension or a few nested for loops. Here's the implementation. First, the Course class: class Course: def __init__(self, date, title, instructor, ID, description, instructorDescription, *args): self.date = date self.title = title self.instructor = instructor self.ID = ID self.description = description self.instructorDescription = instructorDescription self.misc = args Every field is a string, except misc, which is a list of strings. Here's the search as a single list comprehension. courses is the list of courses, and query is the string of search terms, for example "history project". def searchCourses(courses, query): terms = query.lower().strip().split() return tuple(course for course in courses if all( term in course.date.lower() or term in course.title.lower() or term in course.instructor.lower() or term in course.ID.lower() or term in course.description.lower() or term in course.instructorDescription.lower() or any(term in item.lower() for item in course.misc) for term in terms)) You'll notice that a complex list comprehension is difficult to read. I implemented the same logic as nested for loops, and created this alternative: def searchCourses2(courses, query): terms = query.lower().strip().split() results = [] for course in courses: for term in terms: if (term in course.date.lower() or term in course.title.lower() or term in course.instructor.lower() or term in course.ID.lower() or term in course.description.lower() or term in course.instructorDescription.lower()): break for item in course.misc: if term in item.lower(): break else: continue break else: continue results.append(course) return tuple(results) That logic can be hard to follow too. I have verified that both methods return the correct results. Both methods are nearly equivalent in speed, except in some cases. I ran some tests with timeit, and found that the former is three times faster when the user searches for multiple uncommon terms, while the latter is three times faster when the user searches for multiple common terms. Still, this is not a big enough difference to make me worry. So my question is this: which is better? Are list comprehensions always the way to go, or should complicated statements be handled with nested for loops? Or is there a better solution altogether?
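    One middle ground, sketched below rather than taken from the post, is to keep the comprehension but move the per-term test into a small helper; the matches() name and the combined field list are my own additions.

        def matches(course, term):
            """Return True if a single search term appears in any field of the course."""
            fields = [course.date, course.title, course.instructor, course.ID,
                      course.description, course.instructorDescription] + list(course.misc)
            return any(term in field.lower() for field in fields)

        def searchCourses3(courses, query):
            terms = query.lower().strip().split()
            return tuple(course for course in courses
                         if all(matches(course, term) for term in terms))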

    Read the article

  • How can I create macro definitions for the commented lines in the code?

    - by yaprak
    #include <stdio.h> //Here use a macro definition that assigns a value to SIZE (for example 5) int main() { int i; int array[SIZE]; int sum=0; for(i=0; i<SIZE; i++) { //Here use a macro definition named as CALCSUM to make the //following addition operation for the array printf("Enter a[%d] = ",i); scanf("%d", &array[i]); sum+=array[i]; //Here use a macro definition named as VERBOSE to print //what program does to the screen printf("The user entered %d\n", array[i]); // // //If the macro definition CALCSUM is not used, the program //should assign 0 to the i-th element of the array array[i]=0; //Here, again use VERBOSE to print what program does to the screen printf("a[%d] is assigned to zero\n", i); // // } //If CALCSUM is defined, print the summation of the array elements to the screen printf("Summation of the array is %d\n",sum); // //If CALCSUM is not defined, but VERBOSE mode is used, print the following printf("All the elements in the array are assigned to zero\n"); // printf("Program terminated\n"); return 0; } When CALCSUM is defined, the program will sum up the values of each element in the given array. If CALCSUM is not defined, each array element will be assigned to zero. Besides, when VERBOSE mode is defined, the program will make print statements pointed out active. [root@linux55]# gcc code.c [root@linux55]# ./a.out Program terminated [root@linux55]# gcc code.c -D CALCSUM [root@linux55]# ./a.out Enter a[0] = 3 Enter a[1] = 0 Enter a[2] = 2 Enter a[3] = 5 Enter a[4] = 9 Summation of the array is 19 Program terminated [root@linux55]# gcc code.c -D CALCSUM -D VERBOSE [root@linux55]# ./a.out Enter a[0] = 2 The user entered 2 Enter a[1] = 10 The user entered 10 Enter a[2] = 3 The user entered 3 Enter a[3] = 8 The user entered 8 Enter a[4] = 1 The user entered 1 Summation of the array is 24 Program terminated [root@linux55]# gcc code.c -D VERBOSE [root@linux55]# ./a.out a[0] is assigned to 0 a[1] is assigned to 0 a[2] is assigned to 0 a[3] is assigned to 0 a[4] is assigned to 0 All the elements in the array is assigned to zero Program terminated
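    One way the commented placeholders could be filled in is sketched below; the VPRINT helper macro is my own invention rather than part of the assignment, and the same effect can be had by wrapping each printf directly in #ifdef VERBOSE ... #endif.

        #include <stdio.h>

        #define SIZE 5

        /* Print only when VERBOSE is defined on the compile line (gcc -D VERBOSE). */
        #ifdef VERBOSE
        #define VPRINT(...) printf(__VA_ARGS__)
        #else
        #define VPRINT(...)
        #endif

        int main(void)
        {
            int i, sum = 0;
            int array[SIZE];

            for (i = 0; i < SIZE; i++) {
        #ifdef CALCSUM
                printf("Enter a[%d] = ", i);
                scanf("%d", &array[i]);
                sum += array[i];
                VPRINT("The user entered %d\n", array[i]);
        #else
                array[i] = 0;
                VPRINT("a[%d] is assigned to zero\n", i);
        #endif
            }

        #ifdef CALCSUM
            printf("Summation of the array is %d\n", sum);
        #elif defined(VERBOSE)
            printf("All the elements in the array are assigned to zero\n");
        #endif

            printf("Program terminated\n");
            return 0;
        }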

    Read the article

  • Improving performance for WRITE operation on Oracle DB in Java

    - by Lucky
    I have a typical scenario and need to understand the best possible way to handle it, so here it goes. I'm developing a solution that retrieves data from a remote SOAP-based web service and then pushes this data to an Oracle database on the network. This will be a scheduled task that executes every 15 minutes. I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations done since the last retrieval, and once I retrieve the events for the last 15 minutes, it again adds events for the next retrieval. Since it is just pushing data to Oracle, all my interactions are INSERT and UPDATE statements. There are around 60 tables in Oracle, some of them having 100+ columns. Moreover, for every 15-minute cycle there would be around 60-70 inserts, 100+ updates and 10-20 deletes. This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle. So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole. Current test code (on every cycle): Connects to the remote service to get events. Creates a connection with the DB (a single connection object). Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it is done. Then calls the respective method based on the type of operation and table. Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service and assigns it to the statement parameters. Commits the statement and returns to the get-event class to process the next event. The above is repeated until all the retrieved events are processed, after which the program closes and then starts on the next cycle, and everything repeats again. Thanks for the help!
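    For comparison, the usual first step here is JDBC batching: one transaction per cycle and addBatch()/executeBatch() per table instead of a commit per row. The sketch below is illustrative only; the table and column names are hypothetical, not from the question.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.List;

        public class BatchWriter {
            /** Writes one 15-minute window of rows for a single (hypothetical) table. */
            public static void insertEvents(Connection con, List<String[]> rows) throws SQLException {
                con.setAutoCommit(false);                       // one commit per cycle, not per row
                String sql = "INSERT INTO EVENT_LOG (EVENT_ID, PAYLOAD) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (String[] row : rows) {
                        ps.setString(1, row[0]);
                        ps.setString(2, row[1]);
                        ps.addBatch();                          // queue locally instead of one round trip each
                    }
                    ps.executeBatch();                          // send the whole batch at once
                    con.commit();
                } catch (SQLException e) {
                    con.rollback();
                    throw e;
                }
            }
        }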

    Read the article

  • The big last_insert_id() problem, again.

    - by wretrOvian
    Note - this follows my question here: http://stackoverflow.com/questions/2983685/jdbc-does-the-connection-break-if-i-lose-reference-to-the-connection-object Now I have created a class so I can deal with JDBC easily for the rest of my code - public class Functions { private String DB_SERVER = ""; private String DB_NAME = "test"; private String DB_USERNAME = "root"; private String DB_PASSWORD = "password"; public Connection con; public PreparedStatement ps; public ResultSet rs; public ResultSetMetaData rsmd; public void connect() throws java.io.FileNotFoundException, java.io.IOException, SQLException, Exception { String[] dbParms = Parameters.load(); DB_SERVER = dbParms[0]; DB_NAME = dbParms[1]; DB_USERNAME = dbParms[2]; DB_PASSWORD = dbParms[3]; // Connect. Class.forName("com.mysql.jdbc.Driver").newInstance(); con = DriverManager.getConnection("jdbc:mysql://" + DB_SERVER + "/" + DB_NAME, DB_USERNAME, DB_PASSWORD); } public void disconnect() throws SQLException { // Close. con.close(); } } As seen, Parameters.load() refreshes the connection parameters from a file every time, so that any changes to the same may be applied on the next immediate connection. An example of this class in action - public static void add(String NAME) throws java.io.FileNotFoundException, java.io.IOException, SQLException, Exception { Functions dbf = new Functions(); dbf.connect(); String query = "INSERT INTO " + TABLE_NAME + "(" + "NAME" + ") VALUES(?)"; PreparedStatement ps = dbf.con.prepareStatement(query); ps.setString(1, NAME); ps.executeUpdate(); dbf.disconnect(); } Now here is the problem - for adding a record to the table above, the add() method will open a connection, add the record, and then call disconnect(). What if I want to get the ID of the inserted record after I call add(), like this: Department.add("new dept"); int ID = getlastID(); Isn't it possible that another add() was called between those two statements?
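    One common way to make this safe, sketched here against the question's own add() method rather than taken from it, is to read the key from the same statement that performed the INSERT via Statement.RETURN_GENERATED_KEYS / getGeneratedKeys(), so no other add() can slip in between; the changed return type is my own choice.

        // Sketch: requires java.sql.Statement and java.sql.ResultSet imports.
        public static int add(String NAME) throws java.io.FileNotFoundException,
                java.io.IOException, SQLException, Exception {
            Functions dbf = new Functions();
            dbf.connect();
            String query = "INSERT INTO " + TABLE_NAME + "(NAME) VALUES(?)";
            // Ask the driver to hand back the AUTO_INCREMENT value for this exact INSERT.
            PreparedStatement ps = dbf.con.prepareStatement(query, Statement.RETURN_GENERATED_KEYS);
            ps.setString(1, NAME);
            ps.executeUpdate();
            int id = -1;
            ResultSet keys = ps.getGeneratedKeys();
            if (keys.next()) {
                id = keys.getInt(1);   // key generated by this statement, not a global "last id"
            }
            keys.close();
            dbf.disconnect();
            return id;
        }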

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist a reboot on Mac OS 10.6.5. I've tried all of the methods prescribed in Google search results, and previous posts on this site. I've tried manually creating a launchd daemon, and used RouteSplit's launchd daemon to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. This workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script but this doesn't fly. Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. *** Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50) Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed Dec 15 19:30:47 localhost configd[15]: network configuration changed. Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0 Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2 Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged. Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed. 
Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local" Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000 Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed. Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local" Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20 Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available! Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000 Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log ``You can see the following line in /var/log/kernel.log that shows the en0 interface coming up: Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
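    One workaround worth trying is to have the launchd job run a wrapper that blocks until DHCP has installed a default route before adding the static route. This is a sketch, not a tested recipe: the script path is hypothetical and the addresses are taken from the log above.

        #!/bin/sh
        # /usr/local/sbin/add-static-route.sh (hypothetical path), invoked from the launchd job.
        # Wait until a default route exists, then add the static route.
        while ! /sbin/route -n get default >/dev/null 2>&1; do
            sleep 2
        done
        /sbin/route -n add -net 192.168.0.0/23 192.168.2.2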

    Read the article

  • How do I create a Linked Server in SQL Server 2005 to a password protected Access 95 database?

    - by Brad Knowles
    I need to create a linked server with SQL Server Management Studio 2005 to an Access 95 database, which happens to be password protected at the database level. User level security has not been implemented. I cannot convert the Access database to a newer version. It is being used by a 3rd party application; so modifying it, in any way, is not allowed. I've tried using the Jet 4.0 OLE DB Provider and the ODBC OLE DB Provider. The 3rd party application creates a System DSN (with the proper database password), but I've not had any luck in using either method. If I were using a standard connection string, I think it would look something like this: Provider=Microsoft.Jet.OLEDB.4.0;Data Source='C:\Test.mdb';Jet OLEDB:Database Password=####; I'm fairly certain I need to somehow incorporate Jet OLEDB:Database Password into the linked server setup, but haven't figured out how. I've posted the scripts I'm using along with the associated error messages below. Any help is greatly appreciated. I'll provide more details if needed, just ask. Thanks! Method #1 - Using the Jet 4.0 Provider When I try to run these statements to create the linked server: sp_dropserver 'Test', 'droplogins'; EXEC sp_addlinkedserver @server = N'Test', @provider = N'Microsoft.Jet.OLEDB.4.0', @srvproduct = N'Access DB', @datasrc = N'C:\Test.mdb' GO EXEC sp_addlinkedsrvlogin @rmtsrvname=N'Test', @useself=N'False',@locallogin=NULL, @rmtuser=N'Admin', @rmtpassword='####' GO I get this error when testing the connection: TITLE: Microsoft SQL Server Management Studio ------------------------------ "The test connection to the linked server failed." ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) ------------------------------ The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" reported an error. Authentication failed. Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test". OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "Test" returned message "Cannot start your application. The workgroup information file is missing or opened exclusively by another user.". (Microsoft SQL Server, Error: 7399) ------------------------------ Method #2 - Using the ODBC Provider... sp_dropserver 'Test', 'droplogins'; EXEC sp_addlinkedserver @server = N'Test', @provider = N'MSDASQL', @srvproduct = N'ODBC', @datasrc = N'Test:DSN' GO EXEC sp_addlinkedsrvlogin @rmtsrvname=N'Test', @useself=N'False',@locallogin=NULL, @rmtuser=N'Admin', @rmtpassword='####' GO I get this error: TITLE: Microsoft SQL Server Management Studio ------------------------------ "The test connection to the linked server failed." ------------------------------ ADDITIONAL INFORMATION: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) ------------------------------ Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "Test". OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed". OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed". OLE DB provider "MSDASQL" for linked server "Test" returned message "[Microsoft][ODBC Microsoft Access Driver] Cannot open database '(unknown)'. 
It may not be a database that your application recognizes, or the file may be corrupt.". (Microsoft SQL Server, Error: 7303)
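    One thing worth trying, shown below as a sketch (untested against an Access 95 file), is to pass the database password through the provider string via sp_addlinkedserver's @provstr parameter instead of relying only on the login mapping:

        EXEC sp_dropserver 'Test', 'droplogins';
        EXEC sp_addlinkedserver
            @server     = N'Test',
            @provider   = N'Microsoft.Jet.OLEDB.4.0',
            @srvproduct = N'Access DB',
            @datasrc    = N'C:\Test.mdb',
            @provstr    = N'Jet OLEDB:Database Password=####;';
        GO
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'Test',
            @useself     = N'False',
            @locallogin  = NULL,
            @rmtuser     = N'Admin',
            @rmtpassword = '####';
        GO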

    Read the article

  • What are the pitfalls of hardlinked files on my desktop PC?

    - by MountainX
    All the identical-content files on my PC are now hardlinked. (My data is completely de-duplicated. It is a consequence of the way I copied my data from my old computer.) What pitfalls do I need to be aware of now that certain actions on one file could silently affect a number of other files? I know that deleting the file I'm working on is not a problem (assuming I deleted it on purpose). It doesn't affect any of the other hardlinked files and I don't see that the delete action would lead to unexpected side effects. Moving or renaming the file is not a problem. I don't see any unexpected consequences. I don't think copying hardlinked files is a problem, but I'm not as confident about any unexpected consequences in this regard. What I have seen is that making a copy (to the same disk) of a hardlinked file with cp keeps the copy hardlinked (i.e., inode number doesn't change in the copy). Copying to another filesystem obviously breaks the hardlink. (I guess one pitfall is forgetting this fact, given that my PC has 3 hard disks.) Changing permissions does affect all linked files. So far this has proven handy. (I made a large number of the hardlinked files read-only.) None of the operations above seem to produce any major unexpected consequences. However, as was pointed out to me by Daniel Beck in a comment, editing or modifying a file can sometimes be a problem. It depends on the tool and maybe the type of edit. (For example, editing small text files using sed seems to always break the link while using nano doesn't.) This introduces the chance that editing one file could affect all the hardlinked files (i.e., alter the original inode). My proposed solution to this is to make all hardlinked files read-only (and that is already mostly the case). If I can't do that for some files, I will unlink those particular files. Is there any problem with this read-only approach? I'm assuming that if I go to edit a file and find it to be read-only, I'll remember to unlink that filename while making it writable. So one pitfall might be forgetting this rule. In that case, I'll have to rely on my backups. Am I correct in the above statements? And what else do I need to know? BTW, I'm running Kubuntu 12.04. I'm also using btrfs. (I have 2 SSD's and 1 HDD in the PC. I will also be adding an external USB HDD. I'm also connected to a network and I mount some NFS shares. I don't assume any of these last bits are relevant to the question, but I'm adding them just in case.) BTW, since I have more than one drive (with separate file systems), to unlink any file all I have to do is copy it to another drive, then move it back. However, using sed also works (in my testing). Here's my script: sed -i 's/\(.\)/\1/' file1 Surprisingly, this even unlinks zero byte files. In my testing it also appears to work on non-text files without any special options. (But I understand that the --binary option might be needed on Windows, MS-DOS and Cygwin.) However, copying to another disk and moving back may be the best way to unlink. For my use-case, unlink command doesn't really "unlink", rather it "removes".
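    As a sketch of the copy-and-move idea confined to a single filesystem (the helper name is my own), the following breaks the link for one name without touching the shared inode and without relying on sed's behaviour:

        #!/bin/sh
        # breaklink: replace the named directory entry with an independent copy of the data.
        breaklink() {
            f="$1"
            cp -p -- "$f" "$f.unlink.$$" && mv -- "$f.unlink.$$" "$f"
        }

        breaklink ./somefile    # hypothetical usage; the other hardlinked names keep the old inode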

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements: The network of machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant amount of machines. We may have to permanently or temporarily alter the amount of machines running. This is the the reason why I'm looking into centralized authentication/storage. The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security. I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trending away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references of them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has a authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Keberos? NFS?) with respect to security.

    Read the article

  • How to deploy a custom MBean to Tomcat?

    - by Christian
    Hi, I'm trying to deploy a custom mbean to a tomcat. This mbean is not part of a webapp. It should be instantiated when tomcat starts. My problem is, I can't find any complete documentation about how to deploy such a mbean. I'm getting different exceptions, depending on my configuration. Has anyone hints, a complete documentation or has implemented a mbean by himself and can post an example? I configured tomcat to read a configuration from his conf directory: <Engine name="Catalina" defaultHost="localhost" mbeansFile="${catalina.base}/conf/mbeans-descriptors.xml"> The content is as follows: <?xml version="1.0"?> <!-- <!DOCTYPE mbeans-descriptors PUBLIC "-//Apache Software Foundation//DTD Model MBeans Configuration File" "http://jakarta.apache.org/commons/dtds/mbeans-descriptors.dtd"> --> <!-- Descriptions of JMX MBeans --> <mbeans-descriptors> <mbean name="Performance" description="Caculate JVM throughput" type="Performance"> <attribute name="throughput" description="calculated throughput (ratio between gc times and uptime of JVM)" type="double" writeable="false"/> </mbean> </mbeans-descriptors> When name in the xml file and class name match, I get this excption: SEVERE: Error creating mbean Performance javax.management.MalformedObjectNameException: Key properties cannot be empty at javax.management.ObjectName.construct(ObjectName.java:467) at javax.management.ObjectName.<init>(ObjectName.java:1403) at org.apache.tomcat.util.modeler.modules.MbeansSource.execute(MbeansSource.java:202) at org.apache.tomcat.util.modeler.modules.MbeansSource.load(MbeansSource.java:137) at org.apache.catalina.core.StandardEngine.readEngineMbeans(StandardEngine.java:517) at org.apache.catalina.core.StandardEngine.init(StandardEngine.java:321) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:411) at org.apache.catalina.core.StandardService.start(StandardService.java:519) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:581) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177) When changing the name attribute in the xml file to test.example:type=Performance, I get this exception: SEVERE: Error creating mbean test.example:type=Performance javax.management.NotCompliantMBeanException: MBean class must have public constructor at com.sun.jmx.mbeanserver.Introspector.testCreation(Introspector.java:127) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:284) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.createMBean(DefaultMBeanServerInterceptor.java:199) at com.sun.jmx.mbeanserver.JmxMBeanServer.createMBean(JmxMBeanServer.java:393) at org.apache.tomcat.util.modeler.modules.MbeansSource.execute(MbeansSource.java:207) at org.apache.tomcat.util.modeler.modules.MbeansSource.load(MbeansSource.java:137) at 
org.apache.catalina.core.StandardEngine.readEngineMbeans(StandardEngine.java:517) at org.apache.catalina.core.StandardEngine.init(StandardEngine.java:321) at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:411) at org.apache.catalina.core.StandardService.start(StandardService.java:519) at org.apache.catalina.core.StandardServer.start(StandardServer.java:710) at org.apache.catalina.startup.Catalina.start(Catalina.java:581) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177) The documentation from apache is not really helpful, as it just explains a small part. I'm aware of this question but it doesn't help me. The answer I gave worked just for a short time, after that I got some other exceptions. For additional info, the java interface public interface PerformanceMBean { public double getThroughput(); } and implementing class /* some import statements */ public class Performance implements PerformanceMBean { public double getThroughput() { ... } }
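    For what it's worth, a standard MBean that a JMX agent creates by name has to be a public class with a public no-argument constructor, packaged where Tomcat itself can load it (for example a jar on the server classpath), and registered under an ObjectName of the form domain:key=value. A minimal sketch of the implementing class, with the throughput calculation filled in from the standard java.lang.management APIs, is below; it is an illustration, not the poster's code.

        // Performance.java -- pairs with the PerformanceMBean interface shown in the question.
        public class Performance implements PerformanceMBean {

            public Performance() {
                // explicit public no-arg constructor; createMBean() requires one
            }

            public double getThroughput() {
                long uptime = java.lang.management.ManagementFactory.getRuntimeMXBean().getUptime();
                long gcTime = 0;
                for (java.lang.management.GarbageCollectorMXBean gc :
                        java.lang.management.ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcTime += gc.getCollectionTime();
                }
                // fraction of wall-clock time not spent in garbage collection
                return uptime == 0 ? 1.0 : 1.0 - (double) gcTime / uptime;
            }
        }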

    Read the article

  • How to export computers from Active Directory to XML using Powershell?

    - by CoDeRs
    I am trying to create a powershell scripts for Remote Desktop Connection Manager using the active directory module. My first thought was get a list of computers in AD and parse them out into XML format similar to the OU structure that is in AD. I have no problem with that, the below code will work just but not how I wanted. EG # here is a the array $OUs Americas/Canada/Canada Computers/Desktops Americas/Canada/Canada Computers/Laptops Americas/Canada/Canada Computers/Virtual Computers Americas/USA/USA Computers/Laptops Computers Disabled Accounts Domain Controllers EMEA/UK/UK Computers/Desktops EMEA/UK/UK Computers/Laptops Outside Sales and Service/Laptops Servers I wanted to have the basic XML structured like this Americas Canada Canada Computers Desktops Laptops Virtual Computers USA USA Computers Laptops Computers Disabled Accounts Domain Controllers EMEA UK UK Computers Desktops Laptops Outside Sales and Service Laptops Servers However if you run the below it does not nest the next string in the array it only restarts the from the beginning and duplicating Americas Canada Canada Computers Desktops Americas Canada Canada Computers Laptops Americas Canada Canada Computers Virtual Computers Americas USA USA Computers Laptops RDCMGenerator.ps1 #Importing Microsoft`s PowerShell-module for administering ActiveDirectory Import-Module ActiveDirectory #Initial variables $OUs = @() $RDCMVer = "2.2" $userName = "domain\username" $password = "Hashed Password+" $Path = "$env:temp\test.xml" $allComputers = Get-ADComputer -LDAPFilter "(OperatingSystem=*)" -Properties Name,Description,CanonicalName | Sort-Object CanonicalName | select Name,Description,CanonicalName $allOUObjects = $allComputers | Foreach {"$($_.CanonicalName)"} Function Initialize-XML{ ##<RDCMan schemaVersion="1"> $xmlWriter.WriteStartElement('RDCMan') $XmlWriter.WriteAttributeString('schemaVersion', '1') $xmlWriter.WriteElementString('version',$RDCMVer) $xmlWriter.WriteStartElement('file') $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$env:userdomain) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('userName',$userName) $xmlWriter.WriteElementString('domain',$env:userdomain) $xmlWriter.WriteStartElement('password') $XmlWriter.WriteAttributeString('storeAsClearText', 'false') $XmlWriter.WriteRaw($password) $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'None') $xmlWriter.WriteElementString('size','1024 x 768') $xmlWriter.WriteElementString('sameSizeAsClientArea','True') $xmlWriter.WriteElementString('fullScreen','False') $xmlWriter.WriteElementString('colorDepth','32') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 
'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Group ($groupName){ #Start Group $xmlWriter.WriteStartElement('properties') $xmlWriter.WriteElementString('name',$groupName) $xmlWriter.WriteElementString('expanded','true') $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() } Function Create-Server ($computerName, $computerDescription) { #Start Server $xmlWriter.WriteStartElement('server') $xmlWriter.WriteElementString('name',$computerName) $xmlWriter.WriteElementString('displayName',$computerDescription) $xmlWriter.WriteElementString('comment','') $xmlWriter.WriteStartElement('logonCredentials') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('connectionSettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('gatewaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('remoteDesktop') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('localResources') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('securitySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteStartElement('displaySettings') $XmlWriter.WriteAttributeString('inherit', 'FromParent') $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() #Stop Server } Function Close-XML { $xmlWriter.WriteEndElement() $xmlWriter.WriteEndElement() # finalize the document: $xmlWriter.Flush() $xmlWriter.Close() notepad $path } #Strip out Domain and Computer Name from CanonicalName foreach($OU in $allOUObjects){ $newSplit = $OU.split("/") $rebildOU = "" for($i=1; $i -le ($newSplit.count - 2); $i++){ $rebildOU += $newSplit[$i] + "/" } $OUs += $rebildOU.substring(0,($rebildOU.length - 1)) } #Remove Duplicate OU's $OUs = $OUs | select -uniq #$OUs # get an XMLTextWriter to create the XML $XmlWriter = New-Object System.XMl.XmlTextWriter($Path,$UTF8) # choose a pretty formatting: $xmlWriter.Formatting = 'Indented' $xmlWriter.Indentation = 1 $XmlWriter.IndentChar = "`t" # write the header $xmlWriter.WriteStartDocument() # # 'encoding', 'utf-8' How? 
# # set XSL statements #Initialize Pre-Defined XML Initialize-XML ######################################################### # Start Loop for each OU-Path that has a computer in it ######################################################### foreach ($OU in $OUs){ $totalGroupName = "" #Create / Reset Total OU-Path Completed $OU.split("/") | foreach { #Split the OU-Path into individual OU's $groupName = "$_" #Current OU $totalGroupName += $groupName + "/" #Total OU-Path Completed $xmlWriter.WriteStartElement('group') #Start new XML Group Create-Group $groupName #Call function to create XML Group ################################################ # Start Loop for each Computer in $allComputers ################################################ foreach($computer in $allComputers){ $computerOU = $computer.CanonicalName #Set the computers OU-Path $OUSplit = $computerOU.split("/") #Create the Split for the OU-Path $rebiltOU = "" #Create / Reset the stripped OU-Path for($i=1; $i -le ($OUSplit.count - 2); $i++){ #Start Loop for OU-Path to strip out the Domain and Computer Name $rebiltOU += $OUSplit[$i] + "/" #Rebuild the stripped OU-Path } if ($rebiltOU -eq $totalGroupName){ #Compare the Current OU-Path with the computers stripped OU-Path $computerName = $computer.Name #Set the computer name $computerDescription = $computerName + " - " + $computer.Description #Set the computer Description Create-Server $computerName $computerDescription #Call function to create XML Server } } } ################################################### # Start Loop to close out XML Groups created above ################################################### $totalGroupName.split("/") | foreach { #Split the if ($_ -ne "" ){ $xmlWriter.WriteEndElement() #End Group } } } Close-XML
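    The duplication happens because the outer loop re-opens every level of every OU path from scratch. A sketch of an alternative (not a drop-in replacement for the script above): compare each OU path with the previous one, close only the levels that changed, and open only the new ones.

        $previous = @()
        foreach ($OU in $OUs) {
            $current = @($OU -split '/')

            # length of the common prefix shared with the previous path
            $common = 0
            while ($common -lt $previous.Count -and $common -lt $current.Count -and
                   $previous[$common] -eq $current[$common]) { $common++ }

            # close the groups we are leaving, deepest first
            for ($i = $previous.Count; $i -gt $common; $i--) { $xmlWriter.WriteEndElement() }

            # open the groups that are new on this path
            for ($i = $common; $i -lt $current.Count; $i++) {
                $xmlWriter.WriteStartElement('group')
                Create-Group $current[$i]
            }

            # ...emit the server elements whose stripped OU-path equals $OU here...

            $previous = $current
        }
        # close any groups still open after the last OU path
        for ($i = $previous.Count; $i -gt 0; $i--) { $xmlWriter.WriteEndElement() }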

    Read the article

  • FTP Upload works from local command line / remote GUI client but not from PHP script

    - by MrOodles
    I originally posted this question at StackOverflow, but I'm beginning to think it's more of a server question. I have installed ProFTPd on an EC2 instance running Ubuntu 10.10. I have managed my proftpd.conf file as well as my server permissions to be able to connect and upload/move files using FTP both remotely using Filezilla, and on the server itself when connecting to 127.0.0.1. The problem I'm running into is when I try to upload/install a file using Joomla's interface. I give Joomla the same login information that I give to Filezilla, and the connection is made in the same fashion. The ftp.log file actually shows that Joomla is able to login to the server: localhost UNKNOWN nobody [17/Jan/2011:14:09:17 +0000] "USER ftpuser" 331 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASS (hidden)" 230 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "PASV" 227 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "TYPE I" 200 - localhost UNKNOWN ftpuser [17/Jan/2011:14:09:17 +0000] "STOR /directory/store/location/file.zip" 550 - But it fails when attempting the STOR command. I have traced the problem in the Joomla code to the PHP FTP module. The code (with my trace statements added): if (@ftp_put($this->_conn, $remote, $local, $mode) === false) { echo "\n FTP PUT failed."; echo "\n Remote: $remote ; Local: $local ; Mode: $mode - Either ASCII: ".FTP_ASCII." or Binary: ".FTP_BINARY; echo "\n The user: ".exec("whoami"); JError::raiseWarning('35', 'JFTP::store: Bad response' ); return false; } Trace ouputs: FTP PUT failed. Remote: /directory/store/location/file.zip ; Local: /tmp/phpwuccp4 ; Mode: 2 - Either ASCII: 1 or Binary: 2 The user: www-data And in case you were curious, here is an example of the FTP log when using Filezilla: my_client_ip UNKNOWN nobody [17/Jan/2011:16:45:55 +0000] "USER ftpuser" 331 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASS (hidden)" 230 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "OPTS UTF8 ON" - - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PWD" 257 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "TYPE I" 200 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "PASV" 227 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:45:55 +0000] "MLSD" 226 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "CWD location" 250 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PWD" 257 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:06 +0000] "PASV" 227 3405 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:07 +0000] "MLSD" 226 3757 my_client_ip UNKNOWN nobody [17/Jan/2011:16:46:37 +0000] "USER ftpuser" 331 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASS (hidden)" 230 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "OPTS UTF8 ON" - - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "CWD /location" 250 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PWD" 257 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "TYPE I" 200 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:37 +0000] "PASV" 227 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "STOR file.zip" 226 125317 my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "PASV" 227 - my_client_ip UNKNOWN ftpuser [17/Jan/2011:16:46:39 +0000] "MLSD" 226 497
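    One way to narrow this down, sketched below with hypothetical file paths and the password redacted, is to replay Joomla's exact sequence from a tiny standalone script run as the same user (sudo -u www-data php ftptest.php). If the absolute-path STOR fails there too while a chdir-then-relative STOR succeeds, the 550 is a path/permission issue in ProFTPd rather than a Joomla problem.

        <?php
        // ftptest.php -- minimal reproduction of Joomla's FTP layer (sketch).
        $conn = ftp_connect('127.0.0.1');
        ftp_login($conn, 'ftpuser', 'REDACTED');
        ftp_pasv($conn, true);

        // Variant 1: absolute path, exactly what Joomla logs as failing.
        $ok1 = ftp_put($conn, '/directory/store/location/file.zip', '/tmp/file.zip', FTP_BINARY);

        // Variant 2: change directory first, then store with a relative name (what Filezilla does).
        ftp_chdir($conn, '/directory/store/location');
        $ok2 = ftp_put($conn, 'file.zip', '/tmp/file.zip', FTP_BINARY);

        var_dump($ok1, $ok2);
        ftp_close($conn);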

    Read the article

  • Can't access nginx server from IP

    - by EquinoX
    So 2 days ago I can see that page where it saya "Welcome to nginx", however as of now when I tried to access it, it says 404 page not found... Why is this? Inside my sites-enabled folder I have a file named default and it has the following: # You may add here your # server { # ... # } # statements for each of your virtual hosts server { listen 80; server_name 127.0.0.1; access_log /var/log/nginx/localhost.access.log; location / { root /var/www/nginx-default; index index.html index.htm; } location /doc { root /usr/share; autoindex on; allow 127.0.0.1; deny all; } location /images { root /usr/share; autoindex on; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /var/www/nginx-default; #} # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { #proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/nginx-default$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { deny all; } } # another virtual host using mix of IP-, name-, and port-based configuration # #server { #listen 8000; #listen somename:8080; #server_name somename alias another.alias; #location / { #root html; #index index.html index.htm; #} #} # HTTPS server # #server { #listen 443; #server_name localhost; #ssl on; #ssl_certificate cert.pem; #ssl_certificate_key cert.key; #ssl_session_timeout 5m; #ssl_protocols SSLv2 SSLv3 TLSv1; #ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; #ssl_prefer_server_ciphers on; #location / { #root html; #index index.html index.htm; #} #} Here's my nginx.conf file: user www-data; worker_processes 4; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 1024; # multi_accept on; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; tcp_nodelay on; gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } # mail { # # See sample authentication script at: # # http://wiki.nginx.org/NginxImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } # } What am I doing wrong here? I have other virtual host setup in the sites-enabled as well... UPDATE: The server_name directives are: -admin.api.frapi -api.frapi -default -example.com -php.example.com
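    A likely explanation, given the other server_name-based vhosts listed at the end, is that a request made by bare IP no longer matches server_name 127.0.0.1 and instead falls through to whichever vhost nginx picks as the default, producing that vhost's 404. A sketch of a catch-all block (default_server needs nginx 0.8+; older versions spell the flag "default"):

        server {
            listen 80 default_server;   # answer requests that match no other server_name
            server_name _;
            access_log /var/log/nginx/default.access.log;

            location / {
                root /var/www/nginx-default;
                index index.html index.htm;
            }
        }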

    Read the article

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the below question: All assets are local on the device -- no network streaming is taking place. The videos contain audio tracks. I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know what specific video clip is next until we actually need to start it up. Specifically: When one video clip is playing, we will know what the next set of (roughly) 10 video clips are, but we don't know which one exactly, until it comes time to 'immediately' play the next clip. What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus-far, I have found that using the combination of AVAsset loading, and then creating an AVPlayerItem from that once it's ready, and then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay to play. Roughly same performance. One thing I'm observing is that the first AVPlayer item load is slower than the rest. Seems one idea is to pre-flight the AVPlayer with a short / empty asset before trying to play the first video might be of good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played] I'd love to get the video start times down as much as possible, and have some ideas of things to experiment with, but would like some guidance from anyone that might be able to help. Update: idea 7, below, as-implemented yields switching times of around 500 ms. This is an improvement, but it it'd be nice to get this even faster. Idea 1: Use N AVPlayers (won't work) Using ~ 10 AVPPlayer objects and start-and-pause all ~ 10 clips, and once we know which one we really need, switch to, and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayer's in iOS. There was someone asking about this on StackOverflow here, and found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation Idea 2: Use AVQueuePlayer (won't work) I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we do want to play back, until it's time to start that one. ios-avplayer-video-preloading Idea 3: Load, Play, and retain AVPlayerItems in background (not 100% sure yet -- but not looking good) I'm looking at if there is any benefit to load and play the first second of each video clip in the background (suppress video and audio output), and keep a reference to each AVPlayerItem, and when we know which item needs to be played for real, swap that one in, and swap the background AVPlayer with the active one. Rinse and Repeat. The theory would be that recently played AVPlayer/AVPlayerItem's may still hold some prepared resources which would make subsequent playback faster. 
So far, I have not seen benefits from this, but I might not have the AVPlayerLayer set up correctly for the background. I doubt this will really improve things from what I've seen. Idea 4: Use a different file format -- maybe one that is faster to load? I'm currently using .m4v's (video-MPEG4) H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possibly still using video-MPEG4 but with a different codec, or maybe QuickTime? Maybe a lossless video format where decoding / setup is faster? Idea 5: Combination of lossless video format + AVQueuePlayer If there is a video format that is fast to load, but maybe where the file size is insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is bloated but faster to load, but back that up with an asset that is encoded in H.264. Use an AVQueuePlayer, and add the first 10 seconds in the uncompressed file format, and follow that up with one that is in H.264, which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also benefits from a more compact format. Idea 6: Use a non-standard AVPlayer / write my own / use someone else's Given my needs, maybe I can't use AVPlayer, but have to resort to AVAssetReader, and decode the first few seconds (possibly write a raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way, it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~ 30 fps for the video, I'd end up with a ~67 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native OpenGL ES compression formats via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use? Idea 7: Combine everything into a single movie asset, and seekToTime One idea that might be easier than some of the above is to combine everything into a single movie, and use seekToTime. The thing is that we'd be jumping all around the place. Essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5 Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service and contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform specific information. Contents Introduction Create IRM WebLogic Domain Starting the Admin Server and initial configuration Introduction In the previous article the database was prepared, the WebLogic Application Server installed and the files required for an IRM server installed. But we don't actually have a configured system yet. We need to now create a WebLogic Domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test it all works. This article doesn't cover the configuration of SSL communication from client to server. This is quite a big topic and a separate article has been dedicated for this area. In these articles I also use the hostname, irm.company.internal to reference the IRM server and later on use the hostname irm.company.com in reference to the public facing service. Create IRM WebLogic Domain First step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below and we will run the domain configuration wizard. [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin [oracle@irm bin]$ ./config.sh First thing the wizard will ask is if you wish to create a new or extend an existing domain. This guide is creating a standalone system so you should select to create a new domain. Next step is to choose what technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF" as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others with their defaults. Now the domain will have a single user initially and by default this user is called "weblogic". I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev reasons, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers and where the deployments reside. In this guide I am leaving all of these to their defaults so do not check any of the boxes. However I will on this blog be detailing later how you can go back and set up things such as automated startup of an IRM server which require changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install. 
Note that from this dialog you can scroll the left window and see there are going to be two servers created from the defaults. The AdminServer which is where you modify settings for the WebLogic Server and also hosts the Oracle Enterprise Manager for IRM which allows to monitor the IRM service performance and also make service related settings (which we shortly do below) and the IRM_server1 which hosts the actual IRM services themselves. So go right ahead and hit create, the process is pretty quick and usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down and the URL is usually; http://irm.company.internal:7001 Starting the Admin Server and initial configuration First thing to do is to start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows, in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following. cd /oracle/middleware/user_projects/domains/irm_domain/ ./startWebLogic.sh Wait for the server to start, you are looking for the following line to be reported in the console window. <BEA-00360><Server started in RUNNING mode> First step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running you can point a browser at http://irm.company.internal:7001/em. Login with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server wide configuration. However finding where to access the pages with these settings can be a bit of a challenge. After logging in on the left you'll see a tree containing elements of the Enterprise Manager farm Farm_irm_domain. Open up Content Management, then Information Rights Management and finally select the IRM node. On the right then select the IRM menu item, navigate to the Administration section and now we have four options, for now, we are just going to look at General Settings. The image on the right proves that a picture is worth a thousand words (or 113 in this case). The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths or you need to comply to a regulation or government mandate, AES192 is a good start. You can change this later on without worry. The most important setting here we need to make is the Server URL. In this blog article I go over why this URL is so important, basically every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, then nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, then use http as the protocol instead of https. But I would recommend using SSL and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content onto 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation nor even production environments and my experience is that 3 is a better number. Next step is to create the keystore for the IRM server. 
When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server either in a Java Key Store or a HSM. These keys need to be generated and the following shows my commands and the resulting output. I have greyed out the responses from the commands so you can see the input a little easier. [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/ [oracle@irmsrv bin]$ ./setWLSEnv.sh CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar: PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin Your environment has been set. [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/ [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks Enter keystore password: Re-enter new password: What is your first and last name? [Unknown]: Simon Thorpe What is the name of your organizational unit? [Unknown]: Oracle What is the name of your organization? [Unknown]: Oracle What is the name of your City or Locality? [Unknown]: San Francisco What is the name of your State or Province? [Unknown]: CA What is the two-letter country code for this unit? [Unknown]: US Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct? [no]: yes Enter key password for (RETURN if same as keystore password): At this point we now have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is this folder would be backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the keys will be wrapped with these keys, lose these keys, and you can't get access to any secured content, pretty important. Now we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Going back to the General Settings page in Enterprise Manager scroll down to Keystore Settings. Leave the type as JKS but change the location to; /oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks and hit Apply. The final step with regards to the key store is we need to tell the server what the password is for the Java Key Store so that it can be opened and the keys accessed. Once more fire up a console window and run these commands (again i've greyed out the clutter to see the commands easier). You will see dummy passed into the commands, this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used. 
[oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/ [oracle@irmsrv bin]$ ./wlst.sh ... lots of settings fly by... Welcome to WebLogic Server Administration Scripting Shell Type help() for help on available commands wls:/offline>connect('weblogic','password','t3://irmsrv.us.oracle.com:7001') Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ... Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'. Warning: An insecure protocol was used to connect to the server. To ensure on-the-wire security, the SSL port or Admin port should be used instead. wls:/irm_domain/serverConfig>createCred("IRM","keystore:irm.jks","dummy","password") Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root. For more help, use help(domainRuntime)wls:/irm_domain/serverConfig>createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password") Already in Domain Runtime Tree wls:/irm_domain/serverConfig> At last we are now ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1 and we need to start this, use the following commands in a new console window. cd /oracle/middleware/user_projects/domains/irm_domain/bin/ ./startManagedWebLogic.sh IRM_server1 This will start up the server in the console, unlike the Admin server, you need to provide the username and password for the service to start. Enter in your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file, read more about this in the WebLogic Server documentation. Once running, wait until you see the line; <Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode> At this point we can now login to the Oracle IRM Management Website at the URL. http://irm.company.internal:1600/irm_rights/ The server is just configured for HTTP at the moment, no SSL involved. Just want to ensure we can get a working system up and running. You should now see a login like the image on the right and you can now login using your weblogic username and password. The next article in this guide goes over adding SSL and now testing your server by actually adding a few users, sealing some content and opening this content as a user.

    Read the article

  • Parallelism in .NET – Part 15, Making Tasks Run: The TaskScheduler

    - by Reed
    In my introduction to the Task class, I specifically made mention that the Task class does not directly provide its own execution. In addition, I made a strong point that the Task class itself is not directly related to threads or multithreading. Rather, the Task class is used to implement our decomposition of tasks. Once we’ve implemented our tasks, we need to execute them. In the Task Parallel Library, the execution of Tasks is handled via an instance of the TaskScheduler class. The TaskScheduler class is an abstract class which provides a single function: it schedules the tasks and executes them within an appropriate context. This class is the class which actually runs individual Task instances. The .NET Framework provides two (internal) implementations of the TaskScheduler class. Since a Task, based on our decomposition, should be a self-contained piece of code, parallel execution makes sense when executing tasks. The default implementation of the TaskScheduler class, and the one most often used, is based on the ThreadPool. This can be retrieved via the TaskScheduler.Default property, and is, by default, what is used when we just start a Task instance with Task.Start(). Normally, when a Task is started by the default TaskScheduler, the task will be treated as a single work item, and run on a ThreadPool thread. This pools tasks, and provides Task instances all of the advantages of the ThreadPool, including thread pooling for reduced resource usage, and an upper cap on the number of work items. In addition, .NET 4 brings us a much improved thread pool, providing work stealing and reduced locking within the thread pool queues. By using the default TaskScheduler, our Tasks are run asynchronously on the ThreadPool. There is one notable exception to my above statements when using the default TaskScheduler. If a Task is created with the TaskCreationOptions set to TaskCreationOptions.LongRunning, the default TaskScheduler will generate a new thread for that Task, at least in the current implementation. This is useful for Tasks which will persist for most of the lifetime of your application, since it prevents your Task from starving the ThreadPool of one of its work threads. The Task Parallel Library provides one other implementation of the TaskScheduler class. In addition to providing a way to schedule tasks on the ThreadPool, the framework allows you to create a TaskScheduler which works within a specified SynchronizationContext. This scheduler can be retrieved within a thread that provides a valid SynchronizationContext by calling the TaskScheduler.FromCurrentSynchronizationContext() method. This implementation of TaskScheduler is intended for use with user interface development. Windows Forms and Windows Presentation Foundation both require any access to user interface controls to occur on the same thread that created the control. For example, if you want to set the text within a Windows Forms TextBox, and you’re working on a background thread, that UI call must be marshaled back onto the UI thread. The most common way this is handled depends on the framework being used. In Windows Forms, Control.Invoke or Control.BeginInvoke is most often used. In WPF, the equivalent calls are Dispatcher.Invoke or Dispatcher.BeginInvoke. As an example, say we’re working on a background thread, and we want to update a TextBlock in our user interface with a status label. The code would typically look something like: // Within background thread work... 
string status = GetUpdatedStatus(); Dispatcher.BeginInvoke(DispatcherPriority.Normal, new Action( () => { statusLabel.Text = status; })); // Continue on in background method This works fine, but forces your method to take a dependency on WPF or Windows Forms. There is an alternative option, however. Both Windows Forms and WPF, when initialized, setup a SynchronizationContext in their thread, which is available on the UI thread via the SynchronizationContext.Current property. This context is used by classes such as BackgroundWorker to marshal calls back onto the UI thread in a framework-agnostic manner. The Task Parallel Library provides the same functionality via the TaskScheduler.FromCurrentSynchronizationContext() method. When setting up our Tasks, as long as we’re working on the UI thread, we can construct a TaskScheduler via: TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext(); We then can use this scheduler on any thread to marshal data back onto the UI thread. For example, our code above can then be rewritten as: string status = GetUpdatedStatus(); (new Task(() => { statusLabel.Text = status; })) .Start(uiScheduler); // Continue on in background method This is nice since it allows us to write code that isn’t tied to Windows Forms or WPF, but is still fully functional with those technologies. I’ll discuss even more uses for the SynchronizationContext based TaskScheduler when I demonstrate task continuations, but even without continuations, this is a very useful construct. In addition to the two implementations provided by the Task Parallel Library, it is possible to implement your own TaskScheduler. The ParallelExtensionsExtras project within the Samples for Parallel Programming provides nine sample TaskScheduler implementations. These include schedulers which restrict the maximum number of concurrent tasks, run tasks on a single threaded apartment thread, use a new thread per task, and more.
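To tie the above together, below is a minimal, self-contained sketch (mine, not from the original post) that combines the pieces discussed: a LongRunning task scheduled on the default ThreadPool-based scheduler, and the UI update marshaled back through a scheduler obtained from TaskScheduler.FromCurrentSynchronizationContext(). It assumes a Windows Forms form with a Label named statusLabel and a GetUpdatedStatus() helper, mirroring the names used in the post's snippets.

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Windows.Forms;

public class StatusForm : Form
{
    private readonly Label statusLabel = new Label { Dock = DockStyle.Top, Text = "Starting..." };

    public StatusForm()
    {
        Controls.Add(statusLabel);

        // Capture a scheduler bound to the UI thread's SynchronizationContext.
        // This call must happen on the UI thread (here, in the form's constructor).
        TaskScheduler uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();

        // The LongRunning hint asks the default scheduler to use a dedicated thread
        // rather than tying up a ThreadPool worker for the task's lifetime.
        Task.Factory.StartNew(() =>
        {
            string status = GetUpdatedStatus();   // runs off the UI thread

            // Same pattern as the post: start a Task on the UI scheduler
            // so the control is touched on the thread that created it.
            new Task(() => statusLabel.Text = status).Start(uiScheduler);
        },
        CancellationToken.None,
        TaskCreationOptions.LongRunning,
        TaskScheduler.Default);
    }

    private static string GetUpdatedStatus()
    {
        Thread.Sleep(1000); // simulate slow background work
        return "Updated at " + DateTime.Now.ToLongTimeString();
    }

    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.Run(new StatusForm());
    }
}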

    Read the article

  • The Execute SQL Task

    In this article we are going to take you through the Execute SQL Task in SQL Server Integration Services for SQL Server 2005 (although it applies just as well to SQL Server 2008).  We will be covering all the essentials that you will need to know to effectively use this task and make it as flexible as possible. The things we will be looking at are as follows: A tour of the Task. The properties of the Task. After looking at these introductory topics we will then get into some examples. The examples will show different types of usage for the task: Returning a single value from a SQL query with two input parameters. Returning a rowset from a SQL query. Executing a stored procedure and retrieving a rowset, a return value, an output parameter value and passing in an input parameter. Passing in the SQL Statement from a variable. Passing in the SQL Statement from a file. Tour Of The Task Before we can start to use the Execute SQL Task in our packages we are going to need to locate it in the toolbox. Let's do that now. Whilst in the Control Flow section of the package expand your toolbox and locate the Execute SQL Task. Below is how we found ours. Now drag the task onto the designer. As you can see from the following image we have a validation error appear telling us that no connection manager has been assigned to the task. This can be easily remedied by creating a connection manager. There are certain types of connection manager that are compatible with this task so we cannot just create any connection manager and these are detailed in a few graphics time. Double click on the task itself to take a look at the custom user interface provided to us for this task. The task will open on the general tab as shown below. Take a bit of time to have a look around here as throughout this article we will be revisiting this page many times. Whilst on the general tab, drop down the combobox next to the ConnectionType property. In here you will see the types of connection manager which this task will accept. As with SQL Server 2000 DTS, SSIS allows you to output values from this task in a number of formats. Have a look at the combobox next to the Resultset property. The major difference here is the ability to output into XML. If you drop down the combobox next to the SQLSourceType property you will see the ways in which you can pass a SQL Statement into the task itself. We will have examples of each of these later on but certainly when we saw these for the first time we were very excited. Next to the SQLStatement property if you click in the empty box next to it you will see ellipses appear. Click on them and you will see the very basic query editor that becomes available to you. Alternatively after you have specified a connection manager for the task you can click on the Build Query button to bring up a completely different query editor. This is slightly inconsistent. Once you've finished looking around the general tab, move on to the next tab which is the parameter mapping tab. We shall, again, be visiting this tab throughout the article but to give you an initial heads up this is where you define the input, output and return values from your task. Note this is not where you specify the resultset. If however you now move on to the ResultSet tab this is where you define what variable will receive the output from your SQL Statement in whatever form that is. 
Property Expressions are one of the most amazing things to happen in SSIS and they will not be covered here as they deserve a whole article to themselves. Watch out for this as their usefulness will astound you. For a more detailed discussion of what should be the parameter markers in the SQL Statements on the General tab and how to map them to variables on the Parameter Mapping tab see Working with Parameters and Return Codes in the Execute SQL Task. Task Properties There are two places where you can specify the properties for your task. One is in the task UI itself and the other is in the property pane which will appear if you right click on your task and select Properties from the context menu. We will be doing plenty of property setting in the UI later so let's take a moment to have a look at the property pane. Below is a graphic showing our properties pane. Now we shall take you through all the properties and tell you exactly what they mean. A lot of these properties you will see across all tasks as well as the package because of everything's base structure, The Container. BypassPrepare Should the statement be prepared before sending to the connection manager destination (True/False) Connection This is simply the name of the connection manager that the task will use. We can get this from the connection manager tray at the bottom of the package. DelayValidation Really interesting property and it tells the task to not validate until it actually executes. A usage for this may be that you are operating on a table yet to be created but at runtime you know the table will be there. Description Very simply the description of your Task. Disable Should the task be enabled or not? You can also set this through a context menu by right clicking on the task itself. DisableEventHandlers As a result of events that happen in the task, should the event handlers for the container fire? ExecValueVariable The variable assigned here will get or set the execution value of the task. Expressions Expressions as we mentioned earlier are a really powerful tool in SSIS and this graphic below shows us a small peek of what you can do. We select a property on the left and assign an expression to the value of that property on the right causing the value to be dynamically changed at runtime. One of the most obvious uses of this is that the property value can be built dynamically from within the package allowing you a great deal of flexibility. FailPackageOnFailure If this task fails does the package? FailParentOnFailure If this task fails does the parent container? A task can be hosted inside another container i.e. the For Each Loop Container and this would then be the parent. ForcedExecutionValue This property allows you to hard code an execution value for the task. ForcedExecutionValueType What is the datatype of the ForcedExecutionValue? ForceExecutionResult Force the task to return a certain execution result. This could then be used by the workflow constraints. Possible values are None, Success, Failure and Completion. ForceExecutionValue Should we force the execution result? IsolationLevel This is the transaction isolation level of the task. IsStoredProcedure Certain optimisations are made by the task if it knows that the query is a Stored Procedure invocation. The docs say this will always be false unless the connection is an ADO connection. LocaleID Gets or sets the LocaleID of the container. LoggingMode Should we log for this container and what settings should we use? 
The value choices are UseParentSetting, Enabled and Disabled. MaximumErrorCount How many times can the task fail before we call it a day? Name Very simply the name of the task. ResultSetType How do you want the results of your query returned? The choices are ResultSetType_None, ResultSetType_SingleRow, ResultSetType_Rowset and ResultSetType_XML. SqlStatementSource Your Query/SQL Statement. SqlStatementSourceType The method of specifying the query. Your choices here are DirectInput, FileConnection and Variables. TimeOut How long should the task wait to receive results? TransactionOption How should the task handle being asked to join a transaction? Usage Examples As we move through the examples we will only cover in them what we think you must know and what we think you should see. This means that some of the more elementary steps like setting up variables will be covered in the early examples but skipped and simply referred to in later ones. All these examples used the AdventureWorks database that comes with SQL Server 2005. Returning a Single Value, Passing in Two Input Parameters So the first thing we are going to do is add some variables to our package. The graphic below shows us those variables having been defined. Here the CountOfEmployees variable will be used as the output from the query and EndDate and StartDate will be used as input parameters. As you can see all these variables have been scoped to the package. Scoping allows us to have domains for variables. Each container has a scope and remember a package is a container as well. Variable values of the parent container can be seen in child containers but cannot be passed back up to the parent from a child. Our following graphic has had a number of changes made. The first of those changes is that we have created and assigned an OLEDB connection manager to this Task ExecuteSQL Task Connection. The next thing is we have made sure that the SQLSourceType property is set to Direct Input as we will be writing in our statement ourselves. We have also specified that only a single row will be returned from this query. The expression we typed in was: SELECT COUNT(*) AS CountOfEmployees FROM HumanResources.Employee WHERE (HireDate BETWEEN ? AND ?) Moving on now to the Parameter Mapping tab this is where we are going to tell the task about our input parameters. We Add them to the window specifying their direction and datatype. A quick word here about the structure of the variable name. As you can see SSIS has preceded the variable with the word user. This is a default namespace for variables but you can create your own. When defining your variables if you look at the variables window title bar you will see some icons. If you hover over the last one on the right you will see it says "Choose Variable Columns". If you click the button you will see a list of checkbox options and one of them is namespace. After checking this you will see now where you can define your own namespace. The next tab, result set, is where we need to get back the value(s) returned from our statement and assign to a variable which in our case is CountOfEmployees so we can use it later perhaps. Because we are only returning a single value then if you remember from earlier we are allowed to assign a name to the resultset but it must be the name of the column (or alias) from the query. A really cool feature of Business Intelligence Studio being hosted by Visual Studio is that we get breakpoint support for free. 
In our package we set a Breakpoint so we can break the package and have a look in a watch window at the variable values as they appear to our task and what the variable value of our resultset is after the task has done the assignment. Here's that window now. As you can see the count of employees that matched the date range was 2. Returning a Rowset In this example we are going to return a resultset back to a variable after the task has executed not just a single row single value. There are no input parameters required so the variables window is nice and straightforward. One variable of type object. Here is the statement that will form the source for our Resultset. select p.ProductNumber, p.name, pc.Name as ProductCategoryName FROM Production.ProductCategory pc JOIN Production.ProductSubCategory psc ON pc.ProductCategoryID = psc.ProductCategoryID JOIN Production.Product p ON psc.ProductSubCategoryID = p.ProductSubCategoryID We need to make sure that we have selected Full result set as the ResultSet as shown below on the task's General tab. Because there are no input parameters we can skip the parameter mapping tab and move straight to the Result Set tab. Here we need to Add our variable defined earlier and map it to the result name of 0 (remember we covered this earlier). Once we run the task we can again set a breakpoint and have a look at the values coming back from the task. In the following graphic you can see the result set returned to us as a COM object. We can do some pretty interesting things with this COM object and in later articles that is exactly what we shall be doing. Return Values, Input/Output Parameters and Returning a Rowset from a Stored Procedure This example is pretty much going to give us a taste of everything. We have already covered in the previous example how to specify the ResultSet to be a Full result set so we will not cover it again here. For this example we are going to need 4 variables. One for the return value, one for the input parameter, one for the output parameter and one for the result set. Here is the statement we want to execute. Note how much cleaner it is than if you wanted to do it using the current version of DTS. In the Parameter Mapping tab we are going to Add our variables and specify their direction and datatypes. In the Result Set tab we can now map our final variable to the rowset returned from the stored procedure. It really is as simple as that and we were amazed at how much easier it is than in DTS 2000. Passing in the SQL Statement from a Variable SSIS as we have mentioned is hugely more flexible than its predecessor and one of the things you will notice when moving around the tasks and the adapters is that a lot of them accept a variable as an input for something they need. The ExecuteSQL task is no different. It will allow us to pass in a string variable as the SQL Statement. This variable value could have been set earlier on from inside the package or it could have been populated from outside using a configuration. The ResultSet property is set to single row and we'll show you why in a second when we look at the variables. Note also the SQLSourceType property. Here's the General Tab again. Looking at the variable we have in this package you can see we have only two. One for the return value from the statement and one which is obviously for the statement itself. Again we need to map the Result name to our variable and this can be a named Result Name (The column name or alias returned by the query) and not 0. 
The expected result into our variable should be the amount of rows in the Person.Contact table and if we look in the watch window we see that it is.   Passing in the SQL Statement from a File The final example we are going to show is a really interesting one. We are going to pass in the SQL statement to the task by using a file connection manager. The file itself contains the statement to run. The first thing we are going to need to do is create our file connection mananger to point to our file. Click in the connections tray at the bottom of the designer, right click and choose "New File Connection" As you can see in the graphic below we have chosen to use an existing file and have passed in the name as well. Have a look around at the other "Usage Type" values available whilst you are here. Having set that up we can now see in the connection manager tray our file connection manager sitting alongside our OLE-DB connection we have been using for the rest of these examples. Now we can go back to the familiar General Tab to set up how the task will accept our file connection as the source. All the other properties in this task are set up exactly as we have been doing for other examples depending on the options chosen so we will not cover them again here.   We hope you will agree that the Execute SQL Task has changed considerably in this release from its DTS predecessor. It has a lot of options available but once you have configured it a few times you get to learn what needs to go where. We hope you have found this article useful.

    Read the article

  • ASP.NET MVC 3: Implicit and Explicit code nuggets with Razor

    - by ScottGu
    This is another in a series of posts I’m doing that cover some of the new ASP.NET MVC 3 features: New @model keyword in Razor (Oct 19th) Layouts with Razor (Oct 22nd) Server-Side Comments with Razor (Nov 12th) Razor’s @: and <text> syntax (Dec 15th) Implicit and Explicit code nuggets with Razor (today) In today’s post I’m going to discuss how Razor enables you to both implicitly and explicitly define code nuggets within your view templates, and walkthrough some code examples of each of them.  Fluid Coding with Razor ASP.NET MVC 3 ships with a new view-engine option called “Razor” (in addition to the existing .aspx view engine).  You can learn more about Razor, why we are introducing it, and the syntax it supports from my Introducing Razor blog post. Razor minimizes the number of characters and keystrokes required when writing a view template, and enables a fast, fluid coding workflow. Unlike most template syntaxes, you do not need to interrupt your coding to explicitly denote the start and end of server blocks within your HTML. The Razor parser is smart enough to infer this from your code. This enables a compact and expressive syntax which is clean, fast and fun to type. For example, the Razor snippet below can be used to iterate a collection of products and output a <ul> list of product names that link to their corresponding product pages: When run, the above code generates output like below: Notice above how we were able to embed two code nuggets within the content of the foreach loop.  One of them outputs the name of the Product, and the other embeds the ProductID within a hyperlink.  Notice that we didn’t have to explicitly wrap these code-nuggets - Razor was instead smart enough to implicitly identify where the code began and ended in both of these situations.  How Razor Enables Implicit Code Nuggets Razor does not define its own language.  Instead, the code you write within Razor code nuggets is standard C# or VB.  This allows you to re-use your existing language skills, and avoid having to learn a customized language grammar. The Razor parser has smarts built into it so that whenever possible you do not need to explicitly mark the end of C#/VB code nuggets you write.  This makes coding more fluid and productive, and enables a nice, clean, concise template syntax.  Below are a few scenarios that Razor supports where you can avoid having to explicitly mark the beginning/end of a code nugget, and instead have Razor implicitly identify the code nugget scope for you: Property Access Razor allows you to output a variable value, or a sub-property on a variable that is referenced via “dot” notation: You can also use “dot” notation to access sub-properties multiple levels deep: Array/Collection Indexing: Razor allows you to index into collections or arrays: Calling Methods: Razor also allows you to invoke methods: Notice how for all of the scenarios above how we did not have to explicitly end the code nugget.  Razor was able to implicitly identify the end of the code block for us. 
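The original post illustrates these scenarios with screenshots that are not reproduced in this text, so here is a small Razor (C#) sketch of the kind of implicit code nuggets being described. The Model.Products collection, the Name and ProductID properties, and the URL pattern are assumptions made purely for illustration.

<ul>
    @foreach (var product in Model.Products) {
        <li>
            @* Implicit code nuggets: each one ends where the identifier/dot/bracket chain ends *@
            <a href="/Products/Details/@product.ProductID">@product.Name</a>
        </li>
    }
</ul>
@* "Dot" notation, collection indexing and method calls are also picked up implicitly: *@
<p>First product: @Model.Products[0].Name.ToUpper()</p>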
Razor’s Parsing Algorithm for Code Nuggets The below algorithm captures the core parsing logic we use to support “@” expressions within Razor, and to enable the implicit code nugget scenarios above: Parse an identifier - As soon as we see a character that isn't valid in a C# or VB identifier, we stop and move to step 2 Check for brackets - If we see "(" or "[", go to step 2.1., otherwise, go to step 3  Parse until the matching ")" or "]" (we track nested "()" and "[]" pairs and ignore "()[]" we see in strings or comments) Go back to step 2 Check for a "." - If we see one, go to step 3.1, otherwise, DO NOT ACCEPT THE "." as code, and go to step 4 If the character AFTER the "." is a valid identifier, accept the "." and go back to step 1, otherwise, go to step 4 Done! Differentiating between code and content Step 3.1 is a particularly interesting part of the above algorithm, and enables Razor to differentiate between scenarios where an identifier is being used as part of the code statement, and when it should instead be treated as static content: Notice how in the snippet above we have ? and ! characters at the end of our code nuggets.  These are both legal C# identifiers – but Razor is able to implicitly identify that they should be treated as static string content as opposed to being part of the code expression because there is whitespace after them.  This is pretty cool and saves us keystrokes. Explicit Code Nuggets in Razor Razor is smart enough to implicitly identify a lot of code nugget scenarios.  But there are still times when you want/need to be more explicit in how you scope the code nugget expression.  The @(expression) syntax allows you to do this: You can write any C#/VB code statement you want within the @() syntax.  Razor will treat the wrapping () characters as the explicit scope of the code nugget statement.  Below are a few scenarios where we could use the explicit code nugget feature: Perform Arithmetic Calculation/Modification: You can perform arithmetic calculations within an explicit code nugget: Appending Text to a Code Expression Result: You can use the explicit expression syntax to append static text at the end of a code nugget without having to worry about it being incorrectly parsed as code: Above we have embedded a code nugget within an <img> element’s src attribute.  It allows us to link to images with URLs like “/Images/Beverages.jpg”.  Without the explicit parenthesis, Razor would have looked for a “.jpg” property on the CategoryName (and raised an error).  By being explicit we can clearly denote where the code ends and the text begins. Using Generics and Lambdas Explicit expressions also allow us to use generic types and generic methods within code expressions – and enable us to avoid the <> characters in generics from being ambiguous with tag elements. One More Thing….Intellisense within Attributes We have used code nuggets within HTML attributes in several of the examples above.  One nice feature supported by the Razor code editor within Visual Studio is the ability to still get VB/C# intellisense when doing this. Below is an example of C# code intellisense when using an implicit code nugget within an <a> href=”” attribute: Below is an example of C# code intellisense when using an explicit code nugget embedded in the middle of a <img> src=”” attribute: Notice how we are getting full code intellisense for both scenarios – despite the fact that the code expression is embedded within an HTML attribute (something the existing .aspx code editor doesn’t support).  
This makes writing code even easier, and ensures that you can take advantage of intellisense everywhere. Summary Razor enables a clean and concise templating syntax that enables a very fluid coding workflow.  Razor’s ability to implicitly scope code nuggets reduces the amount of typing you need to perform, and leaves you with really clean code. When necessary, you can also explicitly scope code expressions using a @(expression) syntax to provide greater clarity around your intent, as well as to disambiguate code statements from static markup. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
    A few days ago I read post of Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it’s not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from pure fact that people asking questions didn’t bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes very well the category of posts about Books Online – RTFM365. I take liberty of tagging this post with the same acronym. I often come across questions of type – ‘Hey, i am trying to create a table, but I am getting an error’. The error often says that the syntax is invalid. 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); 5 The answer is usually(1), ‘Ok, let me check it out.. Ah yes – you have to put name of the DEFAULT constraint before the type of constraint: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); Why many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is, that correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all intricacies of the statement. However, I don’t know better way of defining syntax of the statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notatiion, called BNF for short sometimes. This is a notation that was invented around 50 years ago, and some say that even earlier, around 400 BC – would you believe? Originally it was used to define syntax of, rather ancient now, ALGOL programming language (in 1950’s, not in ancient India). If you look closer at the definition of the BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points: italic_text is a placeholder for your identifier <italic_text_in_angle_brackets> is a definition which is described further. [everything in square brackets] is optional {everything in curly brackets} is obligatory everything | separated | by | operator is an alternative ::= “assigns” definition to an identifier Yes, it looks like these six simple points give you the key to understand even the most complicated syntax definitions in Books Online. Books Online contain an article about syntax conventions – have you ever read it? Let’s have a look at fragment of the CREATE TABLE statement: 1 CREATE TABLE 2 [ database_name . [ schema_name ] . | schema_name . ] table_name 3 ( { <column_definition> | <computed_column_definition> 4 | <column_set_definition> } 5 [ <table_constraint> ] [ ,...n ] ) 6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup 7 | "default" } ] 8 [ { TEXTIMAGE_ON { filegroup | "default" } ] 9 [ FILESTREAM_ON { partition_scheme_name | filegroup 10 | "default" } ] 11 [ WITH ( <table_option> [ ,...n ] ) ] 12 [ ; ] Let’s look at line 2 of the above snippet: This line uses rules 3 and 5 from the list. So you know that you can create table which has specified one of the following. 
just name – table will be created in default user schema schema name and table name – table will be created in specified schema database name, schema name and table name – table will be created in specified database, in specified schema database name, .., table name – table will be created in specified database, in default schema of the user. Note that this single line of the notation describes each of the naming schemes in deterministic way. The ‘optionality’ of the schema_name element is nested within database_name.. section. You can use either database_name and optional schema name, or just schema name – this is specified by the pipe character ‘|’. The error that user gets with execution of the first script fragment in this post is as follows: Msg 156, Level 15, State 1, Line 2 Incorrect syntax near the keyword 'DEFAULT'. Ok, let’s have a look how to find out the correct syntax. Line number 3 of the BNF fragment above contains reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to notion described further in the code. And indeed, the very next fragment of BNF contains syntax of the column definition. 1 <column_definition> ::= 2 column_name <data_type> 3 [ FILESTREAM ] 4 [ COLLATE collation_name ] 5 [ NULL | NOT NULL ] 6 [ 7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ] 8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ] 9 ] 10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ] 11 [ SPARSE ] Look at line 7 in the above fragment. It says, that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you to make the effort of coming up with some meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this: 1 CREATE TABLE dbo.Employees 2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL, 3 Employee_Name varchar(60) 4 CONSTRAINT Guid_PK PRIMARY KEY (guid) ); That is practically everything you should know about BNF. I encourage you to study the syntax definitions for various statements and commands in Books Online, you can find really interesting things hidden there. Technorati Tags: SQL Server,t-sql,BNF,syntax   (1) No, my answer usually is a question – ‘What error message? What does it say?’. You’d be surprised to know how many people think I can go through time and space and look at their screen at the moment they received the error.

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios, where you can still successfully recover data. If you have only a full database backup This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period. This method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday, and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then, restore transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup. Now, the table will be in the state before the records were deleted. You have to identify the deleted records, script them and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups. 
To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are not shown as existing anymore, but ApexSQL Recover can read them and create undo script for them. How long will deleted records stay in the MDF file? It depends on many factors, as time passes it’s less likely that the records will not be overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup. First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database.   I can delete these records in SQL Server Management Studio, or execute a script such as DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80 Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab.   In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon right to the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list.   In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option.   The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically.   Click Add if to add transaction log backups. It would be best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range.   Selecting a small time range for the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script the records deleted on purpose, and you don’t want that. If needed, you can check the script generated and manually remove such records. After that, for all data sources options, the next step is to select the tables. Be careful here, if you deleted some data from other tables on purpose, and don’t want to recover them, don’t select all tables, as ApexSQL Recover will create the INSERT script for them too.   The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one.   The recovery process is completed and 11 records are found and scripted, as expected.   
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT statements look like:

INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate)
VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' );

To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute

SELECT * FROM Person.EmailAddress
WHERE BusinessEntityID BETWEEN 70 AND 80

As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without a database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records lost to a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE TABLE statement, and it can also repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover SQL database lost data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered.

Reference: Pinal Dave (http://blog.sqlauthority.com)

Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T-SQL

    Read the article

  • Dynamic Types and DynamicObject References in C#

    - by Rick Strahl
    I've been working a bit with C# custom dynamic types for several customers recently and I've seen some confusion in understanding how dynamic types are referenced. This discussion specifically centers around types that implement IDynamicMetaObjectProvider or subclass DynamicObject, as opposed to arbitrary type casts of standard .NET types. IDynamicMetaObjectProvider types are treated specially when they are cast to the dynamic type. Assume for a second that I've created my own implementation of a custom dynamic type called DynamicFoo, which is about as simple a dynamic class as I can think of:

public class DynamicFoo : DynamicObject
{
    Dictionary<string, object> properties = new Dictionary<string, object>();

    public string Bar { get; set; }
    public DateTime Entered { get; set; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = null;
        if (!properties.ContainsKey(binder.Name))
            return false;
        result = properties[binder.Name];
        return true;
    }

    public override bool TrySetMember(SetMemberBinder binder, object value)
    {
        properties[binder.Name] = value;
        return true;
    }
}

This class has an internal dictionary member and I'm exposing this dictionary member through a dynamic by implementing DynamicObject. This implementation exposes the properties dictionary so the dictionary keys can be referenced like properties (foo.NewProperty = "Cool!"). I override TryGetMember() and TrySetMember(), which are fired at runtime every time you access a 'property' on a dynamic instance of this DynamicFoo type.

Strong Typing and Dynamic Casting

I now can instantiate and use DynamicFoo in a couple of different ways:

Strong Typing

DynamicFoo fooExplicit = new DynamicFoo();
var fooVar = new DynamicFoo();

These two commands are essentially identical and use strong typing. The compiler generates identical code for both of them. The var statement is merely a compiler directive to infer the type of fooVar at compile time, so the type of fooVar is DynamicFoo, just like fooExplicit. This is very static - nothing dynamic about it - and it completely ignores the IDynamicMetaObjectProvider implementation of my class above, as it's never used. Using either of these I can access the native properties:

DynamicFoo fooExplicit = new DynamicFoo();

// static typing assignments
fooVar.Bar = "Barred!";
fooExplicit.Entered = DateTime.Now;

// echo back static values
Console.WriteLine(fooVar.Bar);
Console.WriteLine(fooExplicit.Entered);

but I have no access whatsoever to the properties dictionary. Basically this creates a strongly typed instance of the type with access only to the strongly typed interface. You get no dynamic behavior at all. The IDynamicMetaObjectProvider features don't kick in until you cast the type to dynamic. If I try to access a non-existing property on fooExplicit I get a compilation error that tells me that the property doesn't exist. Again, it's clearly and utterly non-dynamic.

Dynamic

dynamic fooDynamic = new DynamicFoo();

fooDynamic, on the other hand, is created as a dynamic type and it's a completely different beast. I can also create a dynamic by simply casting any type to dynamic like this:

DynamicFoo fooExplicit = new DynamicFoo();
dynamic fooDynamic = fooExplicit;

Note that dynamic typically doesn't require an explicit cast as the compiler automatically performs the cast, so there's no need to use "as dynamic". Dynamic functionality works at runtime and allows for the dynamic wrapper to look up and call members dynamically.
A dynamic type will look for members to access or call in two places:

Using the strongly typed members of the object

Using the IDynamicMetaObjectProvider interface methods to access members

So rather than statically linking and calling a method or retrieving a property, the dynamic type looks up - at runtime - where the value actually comes from. It's essentially late binding, which allows runtime determination of what action to take when a member is accessed *if* the member you are accessing does not exist on the object. Class members are checked first, before the IDynamicMetaObjectProvider interface methods kick in. All of the following works with the dynamic type:

dynamic fooDynamic = new DynamicFoo();

// dynamic typing assignments
fooDynamic.NewProperty = "Something new!";
fooDynamic.LastAccess = DateTime.Now;

// dynamically assigning static properties
fooDynamic.Bar = "dynamic barred";
fooDynamic.Entered = DateTime.Now;

// echo back dynamic values
Console.WriteLine(fooDynamic.NewProperty);
Console.WriteLine(fooDynamic.LastAccess);
Console.WriteLine(fooDynamic.Bar);
Console.WriteLine(fooDynamic.Entered);

The dynamic type can access the native class properties (Bar and Entered) and create and read new ones (NewProperty, LastAccess), all using a single type instance, which is pretty cool. As you can see, it's pretty easy to create an extensible type this way that can add members at runtime.

The Alter Ego of IDynamicObject

The key point here is that all three statements - explicit, var and dynamic - declare a new DynamicFoo(), but the dynamic declaration results in completely different behavior than the first two, simply because the type has been cast to dynamic. Dynamic binding means that the type loses its typical strong typing, compile-time features. You can see this easily in the Visual Studio code editor: as soon as you assign a value to a dynamic you lose IntelliSense, which means there's no IntelliSense and no compiler type checking on any members you apply to this instance.

If you're new to the dynamic type it might seem really confusing that a single type can behave differently depending on how it is cast, but that's exactly what happens when you use a type that implements IDynamicMetaObjectProvider. Declare the type as its strong type name and you only get to access the native instance members of the type. Declare or cast it to dynamic and you get dynamic behavior which accesses native members, plus it uses the IDynamicMetaObjectProvider implementation to handle any missing member definitions by running custom code. You can easily cast objects back and forth between dynamic and the original type:

dynamic fooDynamic = new DynamicFoo();
fooDynamic.NewProperty = "New Property Value";

DynamicFoo foo = fooDynamic;
foo.Bar = "Barred";

Here the code starts out with a dynamic cast and a dynamic assignment. The code then casts the value back to DynamicFoo. Notice that when casting from dynamic to DynamicFoo and back we typically do not have to specify the cast explicitly - the compiler can infer the type, so I don't need to specify "as dynamic" or "as DynamicFoo".

Moral of the Story

This easy interchange between dynamic and the underlying type is actually super useful, because it allows you to create extensible objects that can expose non-member data stores and expose them as an object interface.
You can create an object that hosts a number of strongly typed properties and then cast the object to dynamic and add additional dynamic properties to the same type at runtime. You can easily switch back and forth between the strongly typed instance to access the well-known strongly typed properties and to dynamic for the dynamic properties added at runtime.

Keep in mind that dynamic object access has quite a bit of overhead and is definitely slower than strongly typed binding, so if you're accessing the strongly typed parts of your objects you definitely want to use a strongly typed reference. Reserve dynamic for the dynamic members to optimize your code.

The real beauty of dynamic is that with very little effort you can build expandable objects or objects that expose different data stores through an object interface. I'll have more on this in my next post when I create a customized and extensible Expando object based on DynamicObject.

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in CSharp, .NET

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 proje

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack’s excellent T4CSS template, which was originally built for Visual Studio 2008.

The Problem

Error Compiling transformation: Metadata file 'dotless.Core' could not be found

In “T4 speak”, this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS Template, this was a showstopper for making it work in Visual Studio 2010.

On a side note: The T4CSS template is a sweet little wrapper that lets you use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCSS yet, go check it out now! In short, it is a tool that allows you to templatize and program your CSS files so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.

Back to our regularly scheduled program… Anyhow, this post isn't about DotLessCss, it's about T4 Templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010, there were quite a few changes to the T4 Template Engine; most were excellent changes, but this one bit me with T4CSS: “Project assemblies are no longer used to resolve template assembly directives.”

In VS2008, if you wanted to reference a custom assembly in your T4 Template (.tt file) you would simply right-click on your project, choose Add Reference, and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references:

<#@ assembly name="dotless.Core.dll" #>

This told the engine to look in the “usual place” for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 Engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

Who broke the build? Oh, Microsoft did!

In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 – thus it's a breaking change. So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 Templates:

1. GAC your assemblies and use a Namespace Reference or Fully Qualified Type Name
2. Use a hard-coded Fully Qualified UNC path
3. Copy the assembly to the Visual Studio "Public Assemblies Folder" and use a Namespace Reference or Fully Qualified Type Name
4. Use or define a Windows Environment Variable to build a Fully Qualified UNC path
5. Use a Visual Studio Macro to build a Fully Qualified UNC path

Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, then you will have to adopt one of those approaches.

Yakkety Yak, use the GAC!

Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:

<#@ assembly name="dotless.Core" #>

Hard coding ain't that hard!
The other option, using hard-coded paths in Option #2, is pretty impractical in most situations since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

<#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

Let's go Public!

Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at:

C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

Once you have copied your assembly there, you use the type name or namespace syntax again:

<#@ assembly name="dotless.Core" #>

Save the Environment!

Option #4, using a Windows Environment Variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

<#@ assembly name="%mypath%\dotless.Core.dll" #>

“mypath” is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an msi installer for deployment, or where you have a pre-existing environment variable you can re-use.

OMG Macros!

Finally, Option #5 is a very nice option if you want to keep your T4 template’s assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:

<#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

In this example, I’m using the “SolutionDir” VS macro so I can reference an assembly in a “/lib” folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use that dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 Template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I’m not sure if T4 Templates support any form of compiler switches like “#if (VS2010)” statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.
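To tie the options together, here is a minimal sketch of what a .tt file using the macro-based directive (Option #5) might look like. The lib\dotless.Core.dll path, the site.less input file, and the Less.Parse call are illustrative assumptions, not part of the original T4CSS template:

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>
<#@ import namespace="dotless.Core" #>
<#@ output extension=".css" #>
<#
    // hostspecific="true" lets us resolve paths relative to the template file
    // site.less is a hypothetical input; Less.Parse is assumed from dotless's public API
    var lessPath = this.Host.ResolvePath("site.less");
    var lessSource = System.IO.File.ReadAllText(lessPath);
    Write(Less.Parse(lessSource));
#>

The key detail is the $(SolutionDir) macro in the assembly directive; everything below it is ordinary T4 code that runs once the engine can actually load the referenced assembly.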
Conclusion

As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our newfound power. Hopefully this all made sense and was helpful to you. If nothing else, I’ll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.

Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind of SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is “it depends”. Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I’ve seen, and let you come to your own conclusions.

What are the possible SharePoint roles?

I guess the first thing you need to understand are the different roles that exist in SharePoint (and there are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp.

SharePoint Administrator

The absolutely positively only role that you should not be without, no matter the size of your organization or SharePoint deployment, is a SharePoint administrator. These guys are essential to keeping things running and figuring out what’s wrong when things aren’t running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don’t even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I’m talking about:

We have a rockstar admin… and I’m sure she’s sick of my throwing her name around so she’ll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our Server teams came to us and said: Hi Lori, I’m finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team, does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem, right? Sounded fair to me… but no…. not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches and then, when some problem occurred in SharePoint, we’d have had to go through the fun task of tracking down exactly what caused the issue and resolving it. How much time would that have taken? If you have a junior SharePoint admin or an admin who’s not out there staying on top of what’s going on, you could have spent days tracking down something as simple as a patch you should not have applied. I will even go as far as to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren’t way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome Admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin.
SharePoint Developer

Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here and there are many flavors of “developers”. Are you writing custom code? Using SharePoint Designer? What about SharePoint branding? Are all of these considered developers? I would say yes. Are they interchangeable? I’d say no. Development in SharePoint is such a large beast in itself. I would say that it’s not so large that you can’t know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you had better make sure they are well taken care of, because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker.

SharePoint Power User/No-Code Solutions Developer

These SharePoint Swiss Army Knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time and they will create views, dashboards, and KPIs that will blow your mind away and give your execs the “wow” they are looking for. Not only can they deliver that wow factor, but they can mash up, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint custom code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller.

SharePoint Developer – Custom Code

If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code. Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD workflows is the big one that comes to mind). If you truly want to knock down all the walls then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are:

- Custom Workflows
- Custom Web Parts
- Web Service functionality
- Import of data from legacy systems
- Export of data to legacy systems
- Custom Actions
- Event Receivers
- Service Applications (2010)

These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett.

SharePoint Branding

“But it LOOKS like SharePoint!” Somebody call the WAAAAAAAAAAAAHMbulance… Themes, Master Pages, Page Layouts, Zones, and over 2000 styles in CSS… these guys not only have to be comfortable with all of SharePoint’s quirks and pain points when branding, but they have to know it TWICE for publishing and non-publishing sites.
Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heater Waterman, Cathy Dew, and Marcy Kellar.

SharePoint Architect

SharePoint Architects are generally SharePoint Admins or Developers who have moved into more of a BA role? Is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It’s always a good idea to bring in a rockstar SharePoint Architect to do a sanity check and make sure you aren’t doing anything stupid. Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, the upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out-of-the-box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap.

Other Roles / Specialties

On top of all these other roles you also get people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc… etc… etc… Again, most organizations will not have one of these gurus on staff; they’ll just pay out the nose for them when they need them. :)

SharePoint End User

Everyone else in your organization that touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it. :) We love you guys… really!!!

Okay, all that’s fine and dandy, but what should MY SharePoint team look like?

It depends! Okay… Are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great Admin team and a great No-Code Solution Development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can’t answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I’ll leave you with what my ideal SharePoint team would look like for a particular scenario:

Farm / Organization Structure

- Dev, QA, and 2 Production Farms
- 5000 – 10000 Users
- Custom development and integration with legacy systems
- Team Sites, My Sites, Intranet, Document libraries, and overall company collaboration

Team

- Rockstar SharePoint Administrator
- 2-3 junior SharePoint Administrators
- SharePoint Architect / Lead Developer
- 2 Power User / No-Code Solution Developers
- 2-3 Custom Code developers
- Branding expert

With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the name of a couple if you are interested). Again, though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don’t bring in a team that looks like this and then yell at me because they are sitting around with nothing to do or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for when it comes to employees, contractors, and software. SharePoint can become absolutely critical to your business, and if you skimp on hiring a developer he may create a web part that brings down the farm because he doesn’t know what he’s doing, or you may hire an admin who thinks it’s fine to stick everything in the same Content Database and then can’t figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.

    Read the article

  • A Guided Tour of Complexity

    - by JoshReuben
    I just re-read Complexity – A Guided Tour by Melanie Mitchell, protégée of Douglas Hofstadter (author of “Gödel, Escher, Bach”) http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1 – here are some notes and links:

The field evolved from Cybernetics, General Systems Theory, and Synergetics. Some interesting transdisciplinary fields to investigate:

- Chaos Theory - http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible.
- System Dynamics / Cybernetics - http://en.wikipedia.org/wiki/System_Dynamics – the study of how feedback changes system behavior.
- Network Theory - http://en.wikipedia.org/wiki/Network_theory – leverages Graph Theory to analyze symmetric / asymmetric relations between discrete objects.
- Algebraic Topology - http://en.wikipedia.org/wiki/Algebraic_topology – leverages abstract algebra to analyze topological spaces.

There are limits to deterministic systems and to computation. Chaos Theory definitely applies to training an ANN (artificial neural network) – different weights will emerge depending upon the random selection of the training set. In recursive non-linear systems (http://en.wikipedia.org/wiki/Nonlinear_system) the output is not directly inferable from the input, e.g. the logistic map: x(t+1) = R * x(t) * (1 - x(t)). Different types of bifurcations, attractor states, and oscillations may occur – e.g. a Lorenz Attractor (http://en.wikipedia.org/wiki/Lorenz_system). The Feigenbaum constants (http://en.wikipedia.org/wiki/Feigenbaum_constants) express ratios in the bifurcation diagram of a non-linear map – the ratio of successive intervals between period-doubling bifurcations converges to 4.6692016.
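To make the logistic-map note concrete, here is a small C# sketch (my own illustration, not from the book) that iterates x(t+1) = R * x(t) * (1 - x(t)) for a few values of R, showing the progression from a stable fixed point to periodic oscillation to chaos:

using System;

class LogisticMap
{
    static void Main()
    {
        // R values chosen to show a fixed point, a period-2 cycle, and chaotic behavior
        double[] rates = { 2.8, 3.2, 3.9 };

        foreach (double r in rates)
        {
            double x = 0.4;                  // arbitrary initial condition in (0, 1)
            for (int t = 0; t < 1000; t++)   // let transients die out
                x = r * x * (1 - x);

            Console.Write($"R = {r}: ");
            for (int t = 0; t < 8; t++)      // print a few iterates of the long-term behavior
            {
                x = r * x * (1 - x);
                Console.Write($"{x:F4} ");
            }
            Console.WriteLine();
        }
    }
}

For R = 2.8 the iterates settle to a single value, for R = 3.2 they alternate between two values, and for R = 3.9 they wander without repeating - the same qualitative changes the Feigenbaum constant quantifies.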
Maxwell’s Demon (http://en.wikipedia.org/wiki/Maxwell%27s_demon) - the Second Law of Thermodynamics has only a statistical certainty – the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory the act of erasing memory is permanent and increases entropy. Life and thought are counter-examples to the universe’s tendency towards entropy. Leo Szilard and later Claude Shannon came up with the information theory of entropy (http://en.wikipedia.org/wiki/Entropy_(information_theory)), whereby Shannon entropy quantifies the expected value of a message’s information in bits in order to determine channel capacity and leverage Coding Theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics (http://en.wikipedia.org/wiki/Statistical_mechanics), whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory (http://en.wikipedia.org/wiki/Quantum_information) and the physics of information (http://en.wikipedia.org/wiki/Physical_information).

Hilbert’s Problems (http://en.wikipedia.org/wiki/Hilbert's_problems) pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true?). Gödel’s Incompleteness Theorems (http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems) proved that mathematics cannot be both complete and consistent (e.g. “This statement is not provable”).

Turing, through the use of Turing Machines (http://en.wikipedia.org/wiki/Turing_machine - symbol processors that can prove mathematical statements) and Universal Turing Machines (http://en.wikipedia.org/wiki/Universal_Turing_machine - Turing Machines that can emulate any other Turing Machine by accepting programs as well as data as input symbols), showed that computation is limited by demonstrating the Halting Problem (http://en.wikipedia.org/wiki/Halting_problem - it is not possible in general to know whether a program will complete – you cannot build an infinite loop detector).

You may be used to thinking of 1 / 2 / 3 dimensional systems, but fractal systems (http://en.wikipedia.org/wiki/Fractal) are defined by self-similarity and have non-integer Hausdorff dimensions!!! (http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension) – the fractal dimension quantifies the number of copies of a self-similar object at each level of detail – e.g. the Koch Snowflake (http://en.wikipedia.org/wiki/Koch_snowflake).

Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory - the size of the shortest program that can generate a description of an object), logical depth (the amount of information processed), thermodynamic depth (the resources required). Complexity is statistical and fractal.

John Von Neumann’s other machine was the Self-Reproducing Automaton (http://en.wikipedia.org/wiki/Self-replicating_machine). Cellular Automata (http://en.wikipedia.org/wiki/Cellular_automaton) are an alternative form of Universal Turing Machine to traditional Von Neumann machines, where grid cells are locally synchronized with their neighbors according to a rule. Conway’s Game of Life (http://en.wikipedia.org/wiki/Conway's_Game_of_Life) demonstrates various emergent constructs such as “Glider Guns” and “Spaceships”. Cellular automata are not practical because logical ops require a large number of cells – wasteful and inefficient. There are no compilers or general programming languages available for cellular automata (as far as I am aware). Random Boolean Networks (http://en.wikipedia.org/wiki/Boolean_network) are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule – they demonstrate the emergence of complex and self-organized behavior.

Stephen Wolfram’s (creator of Mathematica, so give him the benefit of the doubt) New Kind of Science (http://en.wikipedia.org/wiki/A_New_Kind_of_Science) proposes that the universe may be a discrete finite state automaton (http://en.wikipedia.org/wiki/Finite-state_machine) whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum discrete at the Planck scale and that it computes itself – Digital Physics: http://en.wikipedia.org/wiki/Digital_physics – a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules and falls into 4 patterns: uniform, nested / cyclical, random (Rule 30 - http://en.wikipedia.org/wiki/Rule_30), and mixed (Rule 110 - http://en.wikipedia.org/wiki/Rule_110 - localized structures – it is this that is interesting); interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence (http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html) - all processes that are not obviously simple can be viewed as computations of equivalent sophistication.
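As a quick illustration of the elementary cellular automata mentioned above, here is a small C# sketch (my own, not from the book) that runs any of Wolfram's 256 rules - try 30 for the random pattern or 110 for the mixed, localized structures - on a one-dimensional row of cells:

using System;

class ElementaryCA
{
    static void Main()
    {
        const int ruleNumber = 30;   // Wolfram rule to apply (0-255), e.g. 30 or 110
        const int width = 64;
        const int generations = 32;

        bool[] cells = new bool[width];
        cells[width / 2] = true;     // start from a single live cell in the middle

        for (int gen = 0; gen < generations; gen++)
        {
            Console.WriteLine(new string(Array.ConvertAll(cells, c => c ? '#' : '.')));

            bool[] next = new bool[width];
            for (int i = 0; i < width; i++)
            {
                // Read the left, center, and right neighbors (wrapping at the edges)
                int left = cells[(i - 1 + width) % width] ? 1 : 0;
                int center = cells[i] ? 1 : 0;
                int right = cells[(i + 1) % width] ? 1 : 0;

                // The 3-cell neighborhood forms a number 0-7; that bit of the rule
                // number decides whether the cell is alive in the next generation
                int neighborhood = (left << 2) | (center << 1) | right;
                next[i] = ((ruleNumber >> neighborhood) & 1) == 1;
            }
            cells = next;
        }
    }
}

Rule 30 produces the pseudo-random triangle Wolfram describes, while Rule 110 produces the interacting localized structures that make it computationally universal.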
Meaning in information may emerge from analogy & conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf Scale Free Networks http://en.wikipedia.org/wiki/Scale-free_network have a distribution governed by a Power Law (http://en.wikipedia.org/wiki/Power_law - much more common than Normal Distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self similarity, & small world structure. They grow via preferential attachment http://en.wikipedia.org/wiki/Preferential_attachment – tipping points triggered by positive feedback loops. 2 theories of cascading system failures in complex systems are Self-Organized Criticality http://en.wikipedia.org/wiki/Self-organized_criticality and Highly Optimized Tolerance http://en.wikipedia.org/wiki/Highly_optimized_tolerance. Computational Mechanics http://en.wikipedia.org/wiki/Computational_mechanics – use of computational methods to study phenomena governed by the principles of mechanics. This book is a great intuition pump, but does not cover the more mathematical subject of Computational Complexity Theory – http://en.wikipedia.org/wiki/Computational_complexity_theory I am currently reading this book on this subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1   stay tuned for that review!

    Read the article

< Previous Page | 125 126 127 128 129 130 131 132 133 134 135 136  | Next Page >