Search Results

Search found 15803 results on 633 pages for 'self join'.


  • How can I introspect properties and model fields in Django?

    - by shreddd
    I am trying to get a list of all existing model fields and properties for a given object. Is there a clean way to introspect an object so that I can get a dict of fields and properties?

        class MyModel(Model):
            url = models.TextField()

            def _get_location(self):
                return "%s/jobs/%d" % (self.url, self.id)
            location = property(_get_location)

    What I want is something that returns a dict that looks like this:

        {'id': 1, 'url': 'http://foo', 'location': 'http://foo/jobs/1'}

    I can use model._meta.fields to get the model fields, but this doesn't give me things that are properties but not real DB fields.
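
    A minimal sketch of one possible approach, assuming a Django model instance: combine _meta.fields with a scan of the class for property descriptors (the helper name to_dict is my own, not part of Django).

        def to_dict(instance):
            """Collect concrete model fields plus Python properties into one dict."""
            # Concrete DB-backed fields come from the model metadata.
            data = dict((f.name, getattr(instance, f.name)) for f in instance._meta.fields)
            # Properties live on the class, not the instance, so scan the type.
            for name in dir(type(instance)):
                attr = getattr(type(instance), name, None)
                if isinstance(attr, property):
                    data[name] = getattr(instance, name)
            return data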

  • How to add a multiline title bar in UINavigationController

    - by Cocoa Matters
    I have tried to add a two-line title to a UINavigationController, and I want the font size to adjust automatically according to the string length. My string can be up to 60 characters. I tried to implement it with the following code:

        UILabel *bigLabel = [[UILabel alloc] init];
        bigLabel.text = @"1234567890 1234567890 1234567890 1234567890 1234567890 123456";
        bigLabel.backgroundColor = [UIColor clearColor];
        bigLabel.textColor = [UIColor whiteColor];
        bigLabel.font = [UIFont boldSystemFontOfSize:20];
        bigLabel.adjustsFontSizeToFitWidth = YES;
        bigLabel.clipsToBounds = NO;
        bigLabel.numberOfLines = 2;
        bigLabel.textAlignment = ([self.title length] < 10 ? NSTextAlignmentCenter : NSTextAlignmentLeft);
        [bigLabel sizeToFit];
        self.navigationItem.titleView = bigLabel;

    It didn't work for me. Can you help me, please? I have to make this work on both iPhone and iPad screens.
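
    One possible fix, sketched under the assumption that adjustsFontSizeToFitWidth only takes effect on single-line labels: give the label an explicit frame and derive the point size from the string length yourself (the frame size and length threshold below are illustrative, not values from the question).

        UILabel *bigLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 200, 44)];
        bigLabel.numberOfLines = 2;                       // allow wrapping onto a second line
        bigLabel.backgroundColor = [UIColor clearColor];
        bigLabel.textColor = [UIColor whiteColor];
        // Pick a smaller font for longer titles instead of relying on auto-shrink.
        bigLabel.font = [UIFont boldSystemFontOfSize:(self.title.length > 30 ? 14.0f : 20.0f)];
        bigLabel.textAlignment = NSTextAlignmentCenter;
        bigLabel.text = self.title;
        self.navigationItem.titleView = bigLabel;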

  • Error reading file with accented vowels

    - by Daniel Dcs
    The following statement, which fills a list from a file:

        action = []
        with open(os.getcwd() + "/files/" + "actions.txt") as temp:
            action = list(temp)

    gives me the following error:

        (result, consumed) = self._buffer_decode(data, self.errors, end)
        UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position 67: invalid continuation byte

    If I add errors='ignore':

        action = []
        with open(os.getcwd() + "/files/" + "actions.txt", errors='ignore') as temp:
            action = list(temp)

    the file is read, but the ñ and the accented vowels á-é-í-ó-ú are dropped. As I understand it, Python 3 defaults to 'utf-8'. I have been looking for a solution for two or more days and I am only getting more confused. Thank you very much in advance for any suggestions.
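
    A sketch of the likely fix: byte 0xF1 is "ñ" in Latin-1/Windows-1252, which suggests the file was saved in one of those encodings rather than UTF-8, so pass an explicit encoding instead of discarding bytes (the choice of cp1252 is an assumption about how the file was produced).

        import os

        # 0xF1 is 'ñ' in Latin-1/cp1252, so decode explicitly rather than
        # silently dropping the accented characters with errors='ignore'.
        path = os.path.join(os.getcwd(), "files", "actions.txt")
        with open(path, encoding="cp1252") as temp:
            action = list(temp)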

  • iOS Display Different Image on Click

    - by user1506841
    Using Xcode, I am trying to figure out how to display a different image when someone clicks or presses down on one of my buttons, before being taken to a second screen. For example, I have a contact icon on my home screen. When a user taps the icon, it should change to a darker version before going to the contact screen. Any help is appreciated.

        -(IBAction) ButtonPressed:(id)sender {
            UIButton *tempButton = (UIButton *) sender;
            int tag = tempButton.tag;
            NSString *viewName;
            switch (tag) {
                case 1:
                    [FlurryAnalytics logEvent:@"Contact-Screen"];
                    viewName = @"ContactScreen";
                    if (self.appDelegate.sound) [Click play];
                    [self.appDelegate moveToView:viewName];
                    break;
            }
        }
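
    A sketch of the standard approach: UIButton keeps a separate image per control state, so registering a darker image for UIControlStateHighlighted makes the swap happen automatically while the finger is down; no code in the action handler is needed. The asset names here are hypothetical.

        // Show a darker version of the icon while the button is pressed.
        [button setImage:[UIImage imageNamed:@"contact"] forState:UIControlStateNormal];
        [button setImage:[UIImage imageNamed:@"contact-dark"] forState:UIControlStateHighlighted];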

  • Way to kill python thread from inside thread?

    - by user859434
    I have some Python code that currently performs an expensive computation in parallel across many threads. For a given time period, many threads are created and started on the fly; they share the same code, which is stated explicitly within the run method of the thread. My question is: how do I stop/kill a thread at the end of its run method? (run is only called once.) I need to do this in order to create more threads for the next batch of computation.

        # Example
        class someThread(threading.Thread):
            def __init__(self):
                # some init code

            def run(self):
                # Explicitly stated code without constant loops
                # Something performed to stop/kill this thread
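
    A minimal sketch of the usual answer: a Python thread terminates on its own as soon as run() returns, so there is nothing to kill; just join() the finished threads before starting the next batch (the batch structure and work function below are illustrative).

        import threading

        def do_expensive_computation():
            pass  # placeholder for the real work

        class SomeThread(threading.Thread):
            def run(self):
                do_expensive_computation()
                # Nothing else needed: the thread ends when run() returns.

        for _ in range(3):  # three illustrative batches
            batch = [SomeThread() for _ in range(8)]
            for t in batch:
                t.start()
            for t in batch:
                t.join()  # wait for the batch before creating new threads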

  • Module.new with class_eval

    - by dorelal
    This is a large commit, but I want you to concentrate on this change block. http://github.com/rails/rails/commit/d916c62cfc7c59ab6411407a05b946d3dd7535e9#L2L1304 Even without understanding the full context of the code, I am not able to think of a scenario where I would use

        include Module.new {
          class_eval <<-RUBY
            def foo
              puts 'foo'
            end
          RUBY
        }

    The end result is that in the root context (the self just before include Module.new), a method called foo has been added. If I take out the Module.new code and leave only the class_eval, I will also end up with a method called foo on self. What am I missing?
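
    A hedged sketch of the usual rationale: defining foo inside an anonymous included module, rather than directly on the class, places the definition one step up the ancestor chain, so the class can later override foo and still reach the generated version through super.

        class Record
          include Module.new {
            class_eval <<-RUBY
              def foo
                'generated foo'
              end
            RUBY
          }

          # Because the generated foo lives in an included module, this
          # override can still delegate to it via super. A plain class_eval
          # definition on the class itself would simply have been replaced.
          def foo
            "wrapped #{super}"
          end
        end

        puts Record.new.foo  # => "wrapped generated foo"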

  • iPhone Application

    - by user553627
    Hello, I'm working on an iPhone project using Xcode, and I actually have not programmed in Objective-C before. My problem, mainly, is that my app crashes whenever I hit the button that is supposed to show a view of the world map. I think the problem is within the last two lines of the code, but I still can't figure out why, because whenever I comment out the line "[self presentM...]" the program doesn't crash. Would appreciate your help!

        -(IBAction) pushedGo:(id)sender {
            CLLocationCoordinate2D coord = {37.331689, -122.030731};
            MapViewController *mapView = [[MapViewController alloc] initWithCoordinates:coord andTitle:@"Apple" andSubTitle:@"111"];
            [self presentModalViewController:mapView animated:YES]
            [mapView release];
        }
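
    Two things worth checking, sketched with assumptions: the snippet as posted is missing a semicolon after presentModalViewController:animated: (probably a transcription artifact, since it would not compile otherwise), and a crash here often originates inside a custom initializer, so it is worth verifying that initWithCoordinates:andTitle:andSubTitle: calls [super init] and returns a fully initialized controller.

        -(IBAction) pushedGo:(id)sender {
            CLLocationCoordinate2D coord = {37.331689, -122.030731};
            MapViewController *mapView = [[MapViewController alloc]
                initWithCoordinates:coord andTitle:@"Apple" andSubTitle:@"111"];
            [self presentModalViewController:mapView animated:YES]; // note the ';'
            [mapView release];
        }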

  • An NSMutableArray that doesn't retain?

    - by synic
    A few UIViewControllers in my app need to register with a "provider" class in their viewDidLoad methods. I've just been adding them to an NSMutableArray contained in the provider class. However, I don't want this NSMutableArray to keep them from being dealloc'ed, and I also want to have them remove themselves from the NSMutableArray in their dealloc methods. I tried just issuing a [self release] after adding them to the array, and this works, but in order to avoid a crash when they get dealloc'ed, I have to issue a [self retain] right before I remove them. It seems like I'm doing something horribly wrong by retaining an object in its own dealloc method. Is there a better way to store these values?
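
    One standard workaround, as a sketch: wrap each controller in an NSValue created with valueWithNonretainedObject:, which stores the pointer without retaining it, so the array never extends the controllers' lifetimes (a CFMutableArray with NULL retain/release callbacks is the other common option). The provider API named below is hypothetical.

        // Register without retaining: the NSValue holds an unretained pointer.
        [providerArray addObject:[NSValue valueWithNonretainedObject:viewController]];

        // In the controller's dealloc, remove the wrapper before the pointer dies:
        - (void)dealloc {
            [[Provider sharedProvider] unregister:self]; // hypothetical provider API
            [super dealloc];
        }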

  • How to use `wx.ProgressDialog` with my own method?

    - by user1401950
    How can I use wx.ProgressDialog to time my method called imgSearch? The imgSearch method finds image files on the user's PC. How can I make the wx.ProgressDialog run while imgSearch is still running, and display how long imgSearch is taking? Here's my code:

        def onFind(self, event):  # triggered by a button click
            max = 80
            dlg = wx.ProgressDialog("Progress dialog example",
                                    "An informative message",
                                    parent=self,
                                    style=wx.PD_CAN_ABORT
                                        | wx.PD_APP_MODAL
                                        | wx.PD_ELAPSED_TIME
                                        | wx.PD_REMAINING_TIME)
            keepGoing = True
            count = 0
            imageExtentions = ['*.jpg', '*.jpeg', '*.png', '*.tif', '*.tiff']
            selectedDir = 'C:\\'
            imgSearch.findImages(imageExtentions, selectedDir)  # my method
            while keepGoing and count < max:
                count += 1
                wx.MilliSleep(250)
                if count >= max / 2:
                    (keepGoing, skip) = dlg.Update(count, "Half-time!")
                else:
                    (keepGoing, skip) = dlg.Update(count)
            dlg.Destroy()
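
    A hedged sketch of the usual pattern: run imgSearch on a worker thread so the GUI can keep updating, and Pulse() the dialog (indeterminate mode, with elapsed time still shown) until the thread finishes. imgSearch.findImages and its arguments are taken from the question; everything else is illustrative.

        import threading
        import wx

        def onFind(self, event):
            dlg = wx.ProgressDialog("Searching for images",
                                    "This may take a while...",
                                    parent=self,
                                    style=wx.PD_APP_MODAL | wx.PD_ELAPSED_TIME)
            imageExtentions = ['*.jpg', '*.jpeg', '*.png', '*.tif', '*.tiff']
            # Run the slow search off the GUI thread so the dialog stays live.
            worker = threading.Thread(target=imgSearch.findImages,
                                      args=(imageExtentions, 'C:\\'))
            worker.start()
            while worker.is_alive():
                wx.MilliSleep(250)
                dlg.Pulse()  # indeterminate progress; elapsed time keeps ticking
            dlg.Destroy()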

  • app-engine-rest-server to raise KeyError("name %s already used" % model_name)

    - by fx
    I'm playing with the appengine-rest-server project to create REST web services for all the existing models. I get a strange error. The first time I query http://localhost:8080/rest/metadata/user in the browser, it gives me the result:

        <xs:schema>
          <xs:element name="user">
            <xs:complexType>
              <xs:sequence>
                <xs:element maxOccurs="1" minOccurs="0" name="key" type="xs:normalizedString"/>
                <xs:element maxOccurs="1" minOccurs="0" name="surname" type="xs:string"/>
                <xs:element maxOccurs="1" minOccurs="0" name="firstname" type="xs:string"/>
                <xs:element maxOccurs="1" minOccurs="0" name="ages" type="xs:long"/>
                <xs:element maxOccurs="1" minOccurs="0" name="sex" type="xs:boolean"/>
                <xs:element maxOccurs="1" minOccurs="0" name="updatedDate" type="xs:dateTime"/>
                <xs:element maxOccurs="1" minOccurs="0" name="createdDate" type="xs:dateTime"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    But refreshing the page gives me this error:

        Traceback (most recent call last):
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3185, in _HandleRequest
            self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 3128, in _Dispatch
            base_env_dict=env_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 515, in Dispatch
            base_env_dict=base_env_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2387, in Dispatch
            self._module_dict)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2297, in ExecuteCGI
            reset_modules = exec_script(handler_path, cgi_path, hook)
          File "/Users/foo/Documents/AppEngine/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/tools/dev_appserver.py", line 2195, in ExecuteOrImportScript
            script_module.main()
          File "/Users/foo/Documents/AppEngine/helloworld/main.py", line 48, in main
            rest.Dispatcher.add_models({"user": UserModel})
          File "/Users/foo/Documents/AppEngine/helloworld/rest/__init__.py", line 845, in add_models
            cls.add_model(model_name, model_type)
          File "/Users/foo/Documents/AppEngine/helloworld/rest/__init__.py", line 863, in add_model
            raise KeyError("name %s already used" % model_name)
        KeyError: 'name user already used'

    Can someone explain why this happens? If I restart the server and run it in the browser again, I get the XML result, but refreshing causes the error. Is it a bug in the appengine-rest-server application, or is it in my code? My helloworld application is available for download here.
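
    A hedged guess at the cause, with a sketch: the development server can re-execute main() for the same loaded module on subsequent requests, so add_models() runs a second time while the dispatcher's registry still contains "user" from the first run. Tolerating the duplicate registration avoids the crash (the shape of main() below is assumed).

        def main():
            try:
                rest.Dispatcher.add_models({"user": UserModel})
            except KeyError:
                # "user" was already registered by an earlier execution of
                # this module within the same dev-server process.
                pass
            run_wsgi_app(application)  # hypothetical remainder of main()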

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an azure virtual machine. Now I would like to talk about another cool feature – Windows Azure Connect.

    What's Windows Azure Connect

    I would like to quote the definition of Windows Azure Connect in MSDN: "With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services." There's an image available in MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database, and access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric

    It may seem that Windows Azure Connect duplicates Windows Azure AppFabric. Both aim to solve the problem of communication between resources in the cloud and inside the local network. The table below lists the differences as I understand them.

        Category     | Windows Azure Connect                                        | Windows Azure AppFabric
        Purpose      | An IP-sec connection between local machines and azure roles | An application service running on the cloud
        Connectivity | IP-sec, domain-join                                          | Net.TCP, HTTP, HTTPS
        Components   | Windows Azure Connect driver                                 | Service Bus, Access Control, Caching
        Usage        | Azure roles connect to a local database server; azure roles use local shared files, folders, printers, etc.; azure roles join the local AD | Expose a local service to the Internet; move the authorization process to the cloud; integrate existing identities such as Live ID, Google ID, etc. with existing local services; utilize the distributed cache

    And also some scenarios showing which of them should be used:

        Scenario                                                                                        | Connect | AppFabric
        I have a service deployed in the intranet and I want people to be able to use it from the Internet. |     | Y
        I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet. | Y |
        I have a service deployed in the intranet that uses AD authorization, and a website deployed on Azure that needs to use this service. | Y |
        I have a service deployed in the intranet that some people on the Internet can use, but they need to be authorized and authenticated. |   | Y
        I have a service in the intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization for more functionality. | Y | Y

    How to Enable Windows Azure Connect

    OK, we've covered a lot about Windows Azure Connect and its differences from Windows Azure AppFabric. Now let's see how to enable and use it. First of all, since this feature is in the CTP stage, we should apply before using it. On the Windows Azure Portal we can see our CTP feature status under the Home, Beta Program page. You can send an application to join the Beta Program to Microsoft on this page. After a few days Microsoft will send an email to you (to the email address of your Live ID) when it's available. In my case, Windows Azure Connect had been activated by Microsoft, so we can click the Connect button on top, or click the Virtual Network item in the left navigation bar. The first thing we need to do, if it's our first time on the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.

    Add a Local Machine to Azure Connect

    As explained above, Windows Azure Connect can make an IP-sec connection between local machines and azure role instances, so first we add a local machine to our Azure Connect. To do this, click the Install Local Endpoint button on top; the portal will give us a URL. Copy this URL to the machine we want to add and it will download the software. This software is installed on the local machines that we want to join to the Connect. After installation a tray icon will appear to indicate that this machine has joined our Connect. The local agent refreshes its status with the Windows Azure Platform every 5 minutes, but we can click the Refresh button to retrieve the latest status at once. Currently my local machine is ready for Connect, and we can see my machine in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.

    Add a Windows Azure Role to Azure Connect

    Let's create a very simple azure project with a basic ASP.NET web role inside. To make it available to Windows Azure Connect, open the azure project properties of this role from the solution explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token. Click this button and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab, and then deploy the application to azure. After the deployment completes, we can see the role instance listed in the Windows Azure Portal Virtual Network section.

    Establish the Connect Group

    The final task is to create a connect group, which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them; they can be used in one or more groups. In the Virtual Network section, click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and azure role instances for this group. After the Azure Fabric updates the group settings, we can see the groups and the endpoints on the page. And if we switch back to the local machine, we can see that the tray icon has changed and the status has turned to connected. Windows Azure Connect updates the group information every 5 minutes. If you find the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connected.

    Test the Azure Connect between the Local Machine and the Azure Role Instance

    Now our local machine and azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our azure role can connect to it using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when selecting an endpoint. I don't want to give a full example of how to use the Connect, but would like to show two very simple tests. The first one is PING.

    When a local machine and role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the Firewall settings. To do this we need to run a command before testing. Open a command window on the local machine and the role instance, and execute the following:

        netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

    Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting. For the full discussion we had, please visit here. You can use the Remote Desktop Access feature to log on to the azure role instance; please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address.

    Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and then, logged on to the role instance, we can see the folder contents in the file explorer window.

    Summary

    In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our azure application can use local resources such as database servers, printers, etc. without exposing them to the Internet.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • Understanding LINQ to SQL (11) Performance

    - by Dixin
    [LINQ via C# series]

    LINQ to SQL has a lot of great features, like strong typing, query compilation, deferred execution, the declarative paradigm, etc., which are very productive. Of course, these cannot be free, and one price is performance.

    O/R mapping overhead

    Because LINQ to SQL is based on O/R mapping, one obvious overhead is that data changing usually requires data retrieving:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                Product product = database.Products.Single(item => item.ProductID == id); // SELECT...
                product.UnitPrice = unitPrice; // UPDATE...
                database.SubmitChanges();
            }
        }

    Before updating an entity, that entity has to be retrieved by an extra SELECT query. This is slower than a direct data update via ADO.NET:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (SqlConnection connection = new SqlConnection(
                "Data Source=localhost;Initial Catalog=Northwind;Integrated Security=True"))
            using (SqlCommand command = new SqlCommand(
                @"UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID",
                connection))
            {
                command.Parameters.Add("@ProductID", SqlDbType.Int).Value = id;
                command.Parameters.Add("@UnitPrice", SqlDbType.Money).Value = unitPrice;
                connection.Open();
                command.Transaction = connection.BeginTransaction();
                command.ExecuteNonQuery(); // UPDATE...
                command.Transaction.Commit();
            }
        }

    The above imperative code specifies the "how to do" details, with better performance. For the same reason, some articles on the Internet insist that, when updating data via LINQ to SQL, the above declarative code should be replaced by:

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.ExecuteCommand(
                    "UPDATE [dbo].[Products] SET [UnitPrice] = {0} WHERE [ProductID] = {1}",
                    unitPrice, id);
            }
        }

    Or just create a stored procedure:

        CREATE PROCEDURE [dbo].[UpdateProductUnitPrice]
        (
            @ProductID INT,
            @UnitPrice MONEY
        )
        AS
        BEGIN
            BEGIN TRANSACTION
            UPDATE [dbo].[Products] SET [UnitPrice] = @UnitPrice WHERE [ProductID] = @ProductID
            COMMIT TRANSACTION
        END

    and map it as a method of NorthwindDataContext (explained in this post):

        private static void UpdateProductUnitPrice(int id, decimal unitPrice)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.UpdateProductUnitPrice(id, unitPrice);
            }
        }

    As a normal trade-off for O/R mapping, a decision has to be made between performance overhead and programming productivity according to the case. From a developer's perspective, if O/R mapping is chosen, I consistently choose the declarative LINQ code, unless this kind of overhead is unacceptable.

    Data retrieving overhead

    After talking about the O/R-mapping-specific issue, now look into the LINQ to SQL specific issues, for example, performance in the data retrieving process. The previous post explained that the SQL translating and executing is complex. Actually, the LINQ to SQL pipeline is similar to a compiler pipeline.
    It consists of about 15 steps to translate a C# expression tree to a SQL statement, which can be categorized as:

        Convert: invoke SqlProvider.BuildQuery() to convert the tree of Expression nodes into a tree of SqlNode nodes;
        Bind: use the visitor pattern to figure out the meanings of names according to the mapping info, like a property for a column, etc.;
        Flatten: figure out the hierarchy of the query;
        Rewrite: for SQL Server 2000, if needed;
        Reduce: remove the unnecessary information from the tree;
        Format: generate the SQL statement string;
        Parameterize: figure out the parameters, for example, a reference to a local variable should be a parameter in SQL;
        Materialize: execute the reader and convert the result back into typed objects.

    So for each data retrieving, even for data retrieving which looks simple:

        private static Product[] RetrieveProducts(int productId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                return database.Products.Where(product => product.ProductID == productId)
                                        .ToArray();
            }
        }

    LINQ to SQL goes through the above steps to translate and execute the query. Fortunately, there is a built-in way to cache the translated query.

    Compiled query

    When such a LINQ to SQL query is executed repeatedly, CompiledQuery can be used to translate the query one time and execute it multiple times:

        internal static class CompiledQueries
        {
            private static readonly Func<NorthwindDataContext, int, Product[]> _retrieveProducts =
                CompiledQuery.Compile((NorthwindDataContext database, int productId) =>
                    database.Products.Where(product => product.ProductID == productId).ToArray());

            internal static Product[] RetrieveProducts(
                this NorthwindDataContext database, int productId)
            {
                return _retrieveProducts(database, productId);
            }
        }

    The new version of RetrieveProducts() gets better performance, because only when _retrieveProducts is invoked for the first time does it internally invoke SqlProvider.Compile() to translate the query expression. It also uses a lock to make sure the translation happens once in multi-threading scenarios. (A short usage sketch follows after the overhead discussion below.)

    Static SQL / stored procedures without translating

    Another way to avoid the translating overhead is to use static SQL or stored procedures, just as in the above examples. Because this is a functional programming series, this article will not dive into that. For the details, Scott Guthrie already has some excellent articles: LINQ to SQL (Part 6: Retrieving Data Using Stored Procedures), LINQ to SQL (Part 7: Updating our Database using Stored Procedures), and LINQ to SQL (Part 8: Executing Custom SQL Expressions).

    Data changing overhead

    Looking into the data updating process, it also needs a lot of work:

        Begin a transaction;
        Process the changes (ChangeProcessor):
            walk through the objects to identify the changes,
            determine the order of the changes,
            execute the changes (LINQ queries may be needed to execute the changes; like the first example in this article, an object needs to be retrieved before being changed, so the whole data retrieving process above is gone through);
        If there is user customization, it is executed (for example, a table's INSERT / UPDATE / DELETE can be customized in the O/R designer).

    It is important to keep this overhead in mind.
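
    As a usage note for the compiled query above, a minimal sketch (reusing the article's CompiledQueries extension method; the product ID is arbitrary):

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            // Every call after the first reuses the cached translation.
            Product[] products = database.RetrieveProducts(1);
        }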
    Bulk deleting / updating

    Another thing to be aware of is bulk deleting:

        private static void DeleteProducts(int categoryId)
        {
            using (NorthwindDataContext database = new NorthwindDataContext())
            {
                database.Products.DeleteAllOnSubmit(
                    database.Products.Where(product => product.CategoryID == categoryId));
                database.SubmitChanges();
            }
        }

    The expected SQL would be something like:

        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9
        COMMIT TRANSACTION

    However, as mentioned before, the actual SQL retrieves the entities, and then deletes them one by one:

        -- Retrieves the entities to be deleted:
        exec sp_executesql N'SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued] FROM [dbo].[Products] AS [t0] WHERE [t0].[CategoryID] = @p0',N'@p0 int',@p0=9

        -- Deletes the retrieved entities one by one:
        BEGIN TRANSACTION
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=78,@p1=N'Optimus Prime',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        exec sp_executesql N'DELETE FROM [dbo].[Products] WHERE ([ProductID] = @p0) AND ([ProductName] = @p1) AND ([SupplierID] IS NULL) AND ([CategoryID] = @p2) AND ([QuantityPerUnit] IS NULL) AND ([UnitPrice] = @p3) AND ([UnitsInStock] = @p4) AND ([UnitsOnOrder] = @p5) AND ([ReorderLevel] = @p6) AND (NOT ([Discontinued] = 1))',N'@p0 int,@p1 nvarchar(4000),@p2 int,@p3 money,@p4 smallint,@p5 smallint,@p6 smallint',@p0=79,@p1=N'Bumble Bee',@p2=9,@p3=$0.0000,@p4=0,@p5=0,@p6=0
        -- ...
        COMMIT TRANSACTION

    And the same goes for bulk updating. This is really not effective, and you need to be aware of it. There are already some solutions on the Internet, like this one. The idea is to wrap the above SELECT statement in an INNER JOIN:

        exec sp_executesql N'DELETE [dbo].[Products] FROM [dbo].[Products] AS [j0]
        INNER JOIN (
            SELECT [t0].[ProductID], [t0].[ProductName], [t0].[SupplierID], [t0].[CategoryID], [t0].[QuantityPerUnit], [t0].[UnitPrice], [t0].[UnitsInStock], [t0].[UnitsOnOrder], [t0].[ReorderLevel], [t0].[Discontinued]
            FROM [dbo].[Products] AS [t0]
            WHERE [t0].[CategoryID] = @p0) AS [j1]
        ON ([j0].[ProductID] = [j1].[ProductID])', -- The primary key
        N'@p0 int',@p0=9

    Query plan overhead

    The last thing is about the SQL Server query plan. Before .NET 4.0, LINQ to SQL had an issue (not sure if it is a bug): LINQ to SQL internally uses ADO.NET, but it did not set SqlParameter.Size for a variable-length argument, like an argument of NVARCHAR type, etc. So for two queries with the same SQL but different argument lengths:

        using (NorthwindDataContext database = new NorthwindDataContext())
        {
            database.Products.Where(product => product.ProductName == "A")
                             .Select(product => product.ProductID).ToArray();

            // The same SQL and argument type, different argument length.
            database.Products.Where(product => product.ProductName == "AA")
                             .Select(product => product.ProductID).ToArray();
        }

    Pay attention to the argument length in the translated SQL:

        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(1)',@p0=N'A'
        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(2)',@p0=N'AA'

    Here is the overhead: the first query's query plan cache is not reused by the second one:

        SELECT sys.syscacheobjects.cacheobjtype, sys.dm_exec_cached_plans.usecounts, sys.syscacheobjects.[sql]
        FROM sys.syscacheobjects
        INNER JOIN sys.dm_exec_cached_plans
        ON sys.syscacheobjects.bucketid = sys.dm_exec_cached_plans.bucketid;

    They actually use different query plans. Again, pay attention to the argument length in the [sql] column (@p0 nvarchar(2) / @p0 nvarchar(1)). Fortunately, in .NET 4.0 this is fixed:

        internal static class SqlTypeSystem
        {
            private abstract class ProviderBase : TypeSystemProvider
            {
                protected int? GetLargestDeclarableSize(SqlType declaredType)
                {
                    SqlDbType sqlDbType = declaredType.SqlDbType;
                    if (sqlDbType <= SqlDbType.Image)
                    {
                        switch (sqlDbType)
                        {
                            case SqlDbType.Binary:
                            case SqlDbType.Image:
                                return 8000;
                        }
                        return null;
                    }
                    if (sqlDbType == SqlDbType.NVarChar)
                    {
                        return 4000; // Max length for NVARCHAR.
                    }
                    if (sqlDbType != SqlDbType.VarChar)
                    {
                        return null;
                    }
                    return 8000;
                }
            }
        }

    In the above example, the translated SQL becomes:

        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'A'
        exec sp_executesql N'SELECT [t0].[ProductID] FROM [dbo].[Products] AS [t0] WHERE [t0].[ProductName] = @p0',N'@p0 nvarchar(4000)',@p0=N'AA'

    So they reuse the same query plan cache: now the [usecounts] column is 2.

  • Your Day-by-Day Guide to Agile PLM at Oracle OpenWorld 2012

    - by Kerrie Foy
    This year's Oracle OpenWorld conference is nearly here, and we're all excited about what we have planned! With five days of activities and customer presenters from market leaders and top innovators like The Coca-Cola Company, Starbucks, JDSU, Facebook, GlobalFoundries, and more, this is an event you don't want to miss. I've compiled this day-by-day guide to help anyone keep track of all the "Product Lifecycle Management and Product Value Chain" sessions and activities at OpenWorld 2012, September 30 – October 4 in San Francisco, California.

    Monday, October 1

    There are great networking activities on Sunday, September 30, but PLM-specific sessions start after the general conference keynotes on Monday, October 1 at 10:45 a.m. at the InterContinental Hotel in room Telegraph Hill. In fact, most of our sessions this year will be held in this room, which is still close to the conference keynotes in Moscone, but just far enough away to allow some focused networking and discussions.

    This first session, 10:45 – 11:45 a.m., is a joint session with the Agile and AutoVue teams, entitled "Streamline PLM Design-to-Manufacturing Processes with AutoVue Visualization Solutions", featuring presenters from Oracle as well as joint AutoVue and Agile PLM customer GlobalFoundries.

    In the following 12:15 – 1:15 p.m. slot, there are two sessions to choose from, so if you have a team of representatives attending OpenWorld, you may consider splitting up to catch both of these: a) Our General Session will be held in the InterContinental Hotel Ballroom C, covering our complete enterprise PLM strategy, product updates, and roadmaps. It's our pleasure to feature a customer keynote presentation from Chris Bedi, CIO, and Rajeev Sethi, Director IT Business Engagement, of JDSU. b) A focused session on integrating PLM with Engineering and Supply Chain Systems will be held on the second floor of Moscone West (next to the InterContinental) in room 2022. Join to discover how these types of integrations help companies manage common and integrated design information across all MCAD, ECAD, and software components.

    After a lunch break, and perhaps a visit to the Demogrounds in Moscone West, select from two product roadmap sessions in the next time slot (3:15 – 4:15 p.m.): an Agile 9.3.x session located in the InterContinental's Ballroom C, and an Agile PLM for Process session located back in the InterContinental's Telegraph Room. Both sessions will have strong content around each product line's latest releases, vision, and customer examples. We are very pleased to feature Daniel Soosai of Facebook in the A9 session and Vinnie D'Agostino of The Coca-Cola Company in the PLM for Process session.

    Afterwards, hang in there for one last session of the day, from 4:45 – 5:45 p.m.; it's an insightful discussion on leveraging Agile PLM as the Foundation for Enterprise Quality Management, and it's sure to be one of the best. In the Telegraph Room, this session will feature Oracle experts, partner co-presenter David Bartlett from CPG Solutions, and customer co-presenter Thomas Crowe, CIO of PL Developments. Hear their experience around implementing collaborative, integrated solutions to ensure effective knowledge transfer throughout an organization, and how to perform analysis in real time to resolve product quality issues swiftly and efficiently.

    On Monday evening there will be plenty of industry, product, and partner dinners, so take advantage of all the networking opportunities and catch some great tunes at the five-day Oracle OpenWorld Music Festival!

    Tuesday, October 2

    Tuesday starts early with a special PLM Networking Brunch, sponsored by several partners, from 8:30 a.m. – 10:30 a.m. at the B Restaurant that sits atop Yerba Buena Gardens. You'll have the unique opportunity to meet with like-minded industry peers and a PLM partner to discuss a topic of your choosing while enjoying a delicious meal. Registration is required, so to inquire about attending this brunch, please email Terri.Hiskey-AT-oracle.com.

    After wrapping up your conversations over brunch, head over to the Marriott Marquis in the Nob Hill CD room for a chance to experience the Oracle Product Lifecycle Analytics solution in a Hands-On Lab, open from 10:15 a.m. – 12:45 p.m. Experts will be there to answer your questions.

    Back in the InterContinental Hotel's Telegraph Room, the session on "Ideation and Requirements Management: Capturing the Voice of the Customer" runs from 11:45 a.m. – 12:45 p.m. This may be the session for you if you're struggling with challenges like too many repositories of customer needs, requests, and ideas; limited visibility into which ideas are being advanced by customers and field resources; or an inability to leverage internal expertise to expose effort and potential risks. This session will discuss how Agile PLM can help you overcome ideation challenges to deliver the right products to their targeted markets and fulfill customer desires.

    Next, from 1:15 – 2:15 p.m., join us for a session on Managing Profitable Innovation with Oracle Product Lifecycle Analytics. If you missed the Hands-On Lab, have more questions, or simply want to be inspired by the product's forward-thinking vision and capabilities, this is a great opportunity to meet the progressive-minded executives behind the application.

    After this session, it may be a good opportunity to swing by the Demogrounds in Moscone West and visit the Agile PLM demos at exhibit booths #81 for Agile PLM for Discrete Manufacturing, #70 for Agile PLM for Process, and #82 for AutoVue and Agile PLM Enterprise Visualization. Check out the related Supply Chain Management booths close by if you're interested - here's the map. There's always lots to see and do around the exhibit area.

    But don't forget the last session of the day, from 5:00 p.m. – 6:00 p.m. in Telegraph Hill, on Managing Product Innovation and Compliance in Life Science Companies, a "must-see" if you're in this industry. Launching innovative products quickly is already a high-stakes challenge, but companies in the life sciences industry face uniquely severe consequences when new products don't perform or comply as required. In recent years, more and more regulations have become mandatory, and new ones, such as REACH, are currently going into effect for several companies. Customer presenters from pharmaceutical leader Eli Lilly will share how they've leveraged Agile PLM to deliver high-quality, innovative products in a fast-paced, heavily regulated market environment.

    Tuesday evening, unwind at the Supply Chain Management Reception from 6:00 – 8:00 p.m. at the premier boutique Roe Nightclub and Lounge, which is located about three blocks down on Howard Street (on the other side of Moscone from the InterContinental Hotel). Registration is required. Click here for the details.

    Wednesday, October 3

    We have another full line-up on Wednesday, so be ready for an action-packed day. We start with a session at 10:15 – 11:15 a.m. in the Telegraph Room: "PLM for Consumer Products: Building an Engine for Quality and Innovation", with featured presenters from Starbucks and partner Kalypso. This is a rare opportunity to learn directly from Starbucks how they instill quality and innovation throughout their organization, products, and processes, leveraging PLM disciplines with strong support from their partner.

    If you're not in the consumer products industry, we recommend attending another session at 10:15 – 11:15 a.m. in Moscone West room 3005: "Eco-Enterprise Innovation Awards and the Business Case for Sustainability", featuring Jeff Henley, Oracle's Chairman of the Board, and Jon Chorley, Chief Sustainability Officer. Oracle will honor select customers with Oracle's Eco-Enterprise Innovation award, which recognizes customers and their respective partners who rely on Oracle products to support their green business practices, reducing their environmental impact while improving business efficiencies and reducing costs. The awards presentation is followed by a panel discussion with customers and Oracle executives, who describe how these award-winning organizations are embracing environmental initiatives as a central part of their business strategy and how information technology plays a pivotal role.

    Next, at 11:45 a.m. – 12:45 p.m. in Telegraph Hill, attend our session devoted to exploring Product Lifecycle Management's role in Software Lifecycle Management. This is a thought-leadership session with Oracle experts in the field on the importance of change management, and we'll discuss how Oracle has for years leveraged Agile PLM to develop Agile PLM. If software lifecycle management doesn't apply to your business, or you'd rather engage in some lively one-on-one discussions, we also have a "Supply Chain Meet the Experts" session in Moscone West Room 2001A. Product experts, thought leaders, and executives will be on hand to discuss your questions and topics, so come prepared. This session tends to fill up fast, so try to get in early.

    At 1:15 – 2:15 p.m., join us back in Telegraph Hill for a session focused on leveraging the Agile Product Portfolio Management application as the product development master schedule, to improve efficiencies, optimize resources, and gain visibility across projects enterprise-wide to improve portfolio profitability. Customer presenters from Broadcom will explain how they've leveraged the product to enable a master schedule with enterprise-level, phase-gate program and project collaboration and resource optimization.

    Again in Telegraph Hill, from 3:30 – 4:30 p.m., we have an interesting session with leading semiconductor customer LSI and partner Kalypso on how LSI leveraged Agile PLM to advance from homegrown applications to complete Product Value Chain Management. That type of transition can be challenging, and LSI details how they were able to achieve their goals and the value they gained along the journey - a fascinating account for any company interested in leveraging best practices to innovate their business processes and even end products.

    Lastly, we'll wrap up in Telegraph Hill from 5:00 – 6:00 p.m. with a session on "Ensuring New Product Success by Achieving Excellence in New Product Introduction". This is a cross-industry session, guaranteed to deliver insight into the often elusive practice of creating winning products, and one we're very excited about. According to IDC Manufacturing Insights analyst Joe Barkai, "Product failures are not necessarily a result of bad ideas…they are a result of suboptimal decisions." We'll show you how to wire your business processes to enhance decision-making and maximize product potential.

    Now, quickly hit your hotel room to freshen up and then catch one of the many complimentary shuttles to the much-anticipated Oracle Customer Appreciation Event on Treasure Island. We have a very exciting show planned - check out what's in store here.

    Thursday, October 4

    PLM has a light schedule on Thursday this year, with just one session, but it is again one of our best sessions on managing the Product Value Chain: at 11:15 a.m. – 12:15 p.m. in Telegraph Hill, a customer- and partner-driven session with Sonoco Products and Deloitte telling their story about how to achieve integrated change control by interfacing Agile PLM with Oracle E-Business Suite. Sonoco Products, a global manufacturer of consumer and industrial packaging materials, with its systems integrator, Deloitte, is doing this by implementing prebuilt integration (Oracle Design-to-Release Integration Pack for Agile Product Lifecycle Management for Process and Oracle Process) to integrate Agile with Oracle Product Hub/Oracle Product Information Management and Oracle E-Business Suite. This session presents a case study of how Sonoco is leveraging this solution to improve data quality and build a framework for stronger master data governance.

    Even though that ends our PLM line-up at OpenWorld, there will still be many sessions and activities at the conference, so visit the Oracle OpenWorld website to review agendas and build your schedule. And of course, download and bring this guide and the latest version of the Agile PLM Focus-On Document (available soon!). San Francisco is a wonderful city to explore, and we're glad you're considering joining the Agile PLM team at Oracle OpenWorld! I hope to see you there!

    Follow me before the conference and on site for real-time updates about #OOW12 on Twitter @Kerrie_Foy or @AgilePLM.

  • Monitoring SQL Server Agent job run times

    - by okeofs
    Introduction

    A few months back, I was asked how long a particular nightly process took to run. It was a super question, and the one thing that struck me was that there is a plethora of factors affecting the processing time. This said, I developed a query to ascertain process run times and the average nightly run times, and applied some KPIs to the end query, the goal being to enable me to quickly detect anomalies and processes that are running beyond their normal times. As many of you are aware, most of the necessary data for this type of query lies within the MSDB database. The core portion of the query is shown below.

        select sj.name, sh.run_date, sh.run_duration,
               case when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                    when len(sh.run_duration) = 5 then '0'     + convert(varchar(8), sh.run_duration)
                    when len(sh.run_duration) = 4 then '00'    + convert(varchar(8), sh.run_duration)
                    when len(sh.run_duration) = 3 then '000'   + convert(varchar(8), sh.run_duration)
                    when len(sh.run_duration) = 2 then '0000'  + convert(varchar(8), sh.run_duration)
                    when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
               end as tt
        from dbo.sysjobs sj with (nolock)
        inner join dbo.sysjobHistory sh with (nolock)
            on sj.job_id = sh.job_id
        where sj.name = 'My Agent Job'
          and sh.[Message] like '%The job%'

    run_date and run_duration are obvious fields. The field 'name' is the name of the job that we wish to follow. The only major challenge was that the format of the run duration was not as 'user friendly' as I would have liked. As an example, a run duration of 1 hour, 10 minutes and 3 seconds would be displayed as 11003, whereas I wanted it displayed in a more user-friendly manner as 01:10:03. To achieve this effect, we need to add leading zeros to the run duration based upon the case logic shown above. At this point we need to add colons: one between the hours and minutes, and one between the minutes and seconds. To achieve this I nested the query shown above within a 'super' query; the run time ([run_time]) is constructed by concatenating a series of substrings.

        select run_date,
               substring(convert(varchar(20), tt), 1, 2) + ':' +
               substring(convert(varchar(20), tt), 3, 2) + ':' +
               substring(convert(varchar(20), tt), 5, 2) as [run_time]
        from (select sj.name, sh.run_date, sh.run_duration,
                     case when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                          when len(sh.run_duration) = 5 then '0'     + convert(varchar(8), sh.run_duration)
                          when len(sh.run_duration) = 4 then '00'    + convert(varchar(8), sh.run_duration)
                          when len(sh.run_duration) = 3 then '000'   + convert(varchar(8), sh.run_duration)
                          when len(sh.run_duration) = 2 then '0000'  + convert(varchar(8), sh.run_duration)
                          when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
                     end as tt
              from dbo.sysjobs sj with (nolock)
              inner join dbo.sysjobHistory sh with (nolock)
                  on sj.job_id = sh.job_id
              where sj.name = 'My Agent Job'
                and sh.[Message] like '%The job%') a

    Now that I had each nightly run time in hours, minutes and seconds (01:10:03), I decided that it would be very productive to calculate a rolling run-time average. To do this, I decided to do the calculations in base units of seconds. This said, I encapsulated the query shown above within a further 'super' query. The astute reader will note that I used implied casting from integer to string, which is not the best method to use; however, it works. This said, if I were constructing the query again, I would definitely do an explicit convert. To recap: I now have a key field of '1', each and every applicable run date, and the total number of SECONDS that the process ran on each run date, all of this data within the #rawdata1 temporary table.

        select 1 as keyy,
               run_date,
               (substring(b.run_time,1,2)*3600) + (substring(b.run_time,4,2)*60) + (substring(b.run_time,7,2)) as run_time_in_Seconds,
               run_time
        into #rawdata1
        from (
            select run_date,
                   substring(convert(varchar(20), tt), 1, 2) + ':' +
                   substring(convert(varchar(20), tt), 3, 2) + ':' +
                   substring(convert(varchar(20), tt), 5, 2) as [run_time]
            from (select sj.name, sh.run_date, sh.run_duration,
                         case when len(sh.run_duration) = 6 then convert(varchar(8), sh.run_duration)
                              when len(sh.run_duration) = 5 then '0'     + convert(varchar(8), sh.run_duration)
                              when len(sh.run_duration) = 4 then '00'    + convert(varchar(8), sh.run_duration)
                              when len(sh.run_duration) = 3 then '000'   + convert(varchar(8), sh.run_duration)
                              when len(sh.run_duration) = 2 then '0000'  + convert(varchar(8), sh.run_duration)
                              when len(sh.run_duration) = 1 then '00000' + convert(varchar(8), sh.run_duration)
                         end as tt
                  from dbo.sysjobs sj with (nolock)
                  inner join dbo.sysjobHistory sh with (nolock)
                      on sj.job_id = sh.job_id
                  where sj.name = 'My Agent Job'
                    and sh.[Message] like '%The job%') a
        ) b

    Calculating the average run time

    We now select each run time in seconds from #rawdata1 and place the values into another temporary table called #rawdata2. Once again we create a 'key', a hardwired '1'. The purpose of doing so is to make the average time AVG() available to the query immediately, without having to do adverse grouping.

        select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

    Applying KPI logic

    At this point, we apply some logic to determine whether processing times are within the norms. We do this by applying colour names; obviously, this example is a super one for SSRS and traffic-light icons. The final case expressions also calculate the average run time in hours, minutes and seconds, and close out the query.

        select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
               Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
               case when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
                    when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
                    else 'Red'
               end as [color],
               case when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds)/(3600))) = 1
                    then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds)/(3600))
                    else convert(varchar(2), Avg(rd2.run_time_in_Seconds)/(3600))
               end + ':' +
               case when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1
                    then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds)%(3600)/60)
                    else convert(varchar(2), Avg(rd2.run_time_in_Seconds)%(3600)/60)
               end + ':' +
               case when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds)%60)) = 1
                    then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds)%60)
                    else convert(varchar(2), Avg(rd2.run_time_in_Seconds)%60)
               end as [Average Run Time HH:MM:SS]
        from #rawdata2 rd2
        inner join #rawdata1 rd1
            on rd1.keyy = rd2.keyy
        group by run_date, rd1.run_time, rd1.run_time_in_Seconds
        order by run_date desc

    The complete code example

        use msdb
        go
        /*
        drop table #rawdata1
        drop table #rawdata2
        go
        */
        select 1 as keyy,
               run_date,
               (substring(b.run_time,1,2)*3600) + (substring(b.run_time,4,2)*60) + (substring(b.run_time,7,2)) as run_time_in_Seconds,
               run_time
        into #rawdata1
        from (
            select run_date,
                   substring(convert(varchar(20), tt), 1, 2) + ':' +
                   substring(convert(varchar(20), tt), 3, 2) + ':' +
                   substring(convert(varchar(20), tt), 5, 2) as [run_time]
            from (select name, run_date, run_duration,
                         case when len(run_duration) = 6 then convert(varchar(8), run_duration)
                              when len(run_duration) = 5 then '0'     + convert(varchar(8), run_duration)
                              when len(run_duration) = 4 then '00'    + convert(varchar(8), run_duration)
                              when len(run_duration) = 3 then '000'   + convert(varchar(8), run_duration)
                              when len(run_duration) = 2 then '0000'  + convert(varchar(8), run_duration)
                              when len(run_duration) = 1 then '00000' + convert(varchar(8), run_duration)
                         end as tt
                  from dbo.sysjobs sj with (nolock)
                  inner join dbo.sysjobHistory sh with (nolock)
                      on sj.job_id = sh.job_id
                  where name = 'My Agent Job'
                    and [Message] like '%The job%') a
        ) b

        select 1 as Keyy, run_time_in_Seconds into #rawdata2 from #rawdata1

        select rd1.run_date, rd1.run_time, rd1.run_time_in_Seconds,
               Avg(rd2.run_time_in_Seconds) as Average_run_time_in_seconds,
               case when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) <= 1.2 then 'Green'
                    when Convert(decimal(10,1), rd1.run_time_in_Seconds) / Avg(rd2.run_time_in_Seconds) < 1.4 then 'Yellow'
                    else 'Red'
               end as [color],
               case when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds)/(3600))) = 1
                    then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds)/(3600))
                    else convert(varchar(2), Avg(rd2.run_time_in_Seconds)/(3600))
               end + ':' +
               case when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds)%(3600)/60)) = 1
                    then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds)%(3600)/60)
                    else convert(varchar(2), Avg(rd2.run_time_in_Seconds)%(3600)/60)
               end + ':' +
               case when len(convert(varchar(2), Avg(rd2.run_time_in_Seconds)%60)) = 1
                    then '0' + convert(varchar(2), Avg(rd2.run_time_in_Seconds)%60)
                    else convert(varchar(2), Avg(rd2.run_time_in_Seconds)%60)
               end as [Average Run Time HH:MM:SS]
        from #rawdata2 rd2
        inner join #rawdata1 rd1
            on rd1.keyy = rd2.keyy
        group by run_date, rd1.run_time, rd1.run_time_in_Seconds
        order by run_date desc

  • TLS (STARTTLS) Failure After 10.6 Upgrade to Open Directory Master

    - by Thomas Kishel
    Hello, Environment: Mac OS X 10.6.3 install/import of a MacOS X 10.5.8 Open Directory Master server. After that upgrade, LDAP+TLS fails on our MacOS X 10.5, 10.6, CentOS, Debian, and FreeBSD clients (Apache2 and PAM). Testing using ldapsearch: ldapsearch -ZZ -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid ... fails with: ldap_start_tls: Protocol error (2) Testing adding "-d 9" fails with: res_errno: 2, res_error: <unsupported extended operation>, res_matched: <> Testing without requiring STARTTLS or with LDAPS: ldapsearch -H ldap://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid ldapsearch -H ldaps://gnome.darkhorse.com -v -x -b "dc=darkhorse,dc=com" '(uid=donaldr)' uid ... succeeds with: # donaldr, users, darkhorse.com dn: uid=donaldr,cn=users,dc=darkhorse,dc=com uid: donaldr # search result search: 2 result: 0 Success # numResponses: 2 # numEntries: 1 result: 0 Success (We are specifying "TLS_REQCERT never" in /etc/openldap/ldap.conf) Testing with openssl: openssl s_client -connect gnome.darkhorse.com:636 -showcerts -state ... succeeds: CONNECTED(00000003) SSL_connect:before/connect initialization SSL_connect:SSLv2/v3 write client hello A SSL_connect:SSLv3 read server hello A depth=1 /C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department verify error:num=19:self signed certificate in certificate chain verify return:0 SSL_connect:SSLv3 read server certificate A SSL_connect:SSLv3 read server done A SSL_connect:SSLv3 write client key exchange A SSL_connect:SSLv3 write change cipher spec A SSL_connect:SSLv3 write finished A SSL_connect:SSLv3 flush data SSL_connect:SSLv3 read finished A --- Certificate chain 0 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department 1 s:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department i:/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department --- Server certificate -----BEGIN CERTIFICATE----- <deleted for brevity> -----END CERTIFICATE----- subject=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=MIS/CN=gnome.darkhorse.com issuer=/C=US/ST=Oregon/L=Milwaukie/O=Dark Horse Comics, Inc./OU=Dark Horse Network/CN=DHC MIS Department --- No client certificate CA names sent --- SSL handshake has read 2640 bytes and written 325 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 1024 bit Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: D3F9536D3C64BAAB9424193F81F09D5C53B7D8E7CB5A9000C58E43285D983851 Session-ID-ctx: Master-Key: E224CC065924DDA6FABB89DBCC3E6BF89BEF6C0BD6E5D0B3C79E7DE927D6E97BF12219053BA2BB5B96EA2F6A44E934D3 Key-Arg : None Start Time: 1271202435 Timeout : 300 (sec) Verify return code: 0 (ok) So we believe that the slapd daemon is reading our certificate and writing it to LDAP clients. Apple Server Admin adds ProgramArguments ("-h ldaps:///") to /System/Library/LaunchDaemons/org.openldap.slapd.plist and TLSCertificateFile, TLSCertificateKeyFile, TLSCACertificateFile, and TLSCertificatePassphraseTool to /etc/openldap/slapd_macosxserver.conf when enabling SSL in the LDAP section of the Open Directory service. While that appears enough for LDAPS, it appears that this is not enough for TLS. 
    Comparing our 10.6 and 10.5 slapd.conf and slapd_macosxserver.conf configuration files yields no clues. Replacing our certificate (generated with a self-signed CA) with a self-signed certificate generated by Apple Server Admin results in no change in the ldapsearch results. Setting -d to 256 in /System/Library/LaunchDaemons/org.openldap.slapd.plist logs:
    4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 EXT oid=1.3.6.1.4.1.1466.20037
    4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 do_extended: unsupported operation "1.3.6.1.4.1.1466.20037"
    4/13/10 5:23:35 PM org.openldap.slapd[82162] conn=384 op=0 RESULT tag=120 err=2 text=unsupported extended operation
    Any debugging advice is much appreciated. -- Tom Kishel
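    For reference, a minimal sketch of the TLS directives slapd needs to load for STARTTLS to be offered; every path below is a placeholder, not the actual Dark Horse value:

    # /etc/openldap/slapd_macosxserver.conf (sketch; all paths are placeholders)
    TLSCertificateFile           /etc/certificates/gnome.darkhorse.com.crt
    TLSCertificateKeyFile        /etc/certificates/gnome.darkhorse.com.key
    TLSCACertificateFile         /etc/certificates/dhc-mis-ca.crt
    # Apple-specific directive named in the question; the tool path is a placeholder
    TLSCertificatePassphraseTool /path/to/passphrase-tool

    As a rule of thumb, an err=2 "unsupported extended operation" reply to OID 1.3.6.1.4.1.1466.20037 (StartTLS) means slapd never registered the StartTLS operation, which typically happens when it fails to initialize its TLS context at startup (for example, an unreadable key or a failing passphrase tool); slapd's own startup log at a higher debug level is the place to look.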

    Read the article

  • SQL SERVER – Introduction to Extended Events – Finding Long Running Queries

    - by pinaldave
    The job of an SQL consultant is always interesting. Last month I was busy doing query optimization and performance tuning projects for our clients, and this month I am busy delivering my Microsoft SQL Server 2005/2008 Query Optimization & Performance Tuning course. I recently read a white paper about Extended Events by SQL Server MVP Jonathan Kehayias. You can read the white paper here: Using SQL Server 2008 Extended Events. I also read another appealing chapter by Jonathan in the book Professional SQL Server 2008 Internals and Troubleshooting (reviewed earlier on SQLAuthority). After reading these excellent notes by Jonathan, I decided to upgrade my course and include Extended Events as one of the modules. This week I delivered the Extended Events session twice, and attendees really liked it; they consider Extended Events one of the most powerful tools available. Extended Events can do many things, and I suggest that you read the white paper I mentioned to learn more about this tool. Instead of writing long theory, I am going to present a very quick script for Extended Events. This event session captures every long-running query from the moment the session is started. One of the many advantages of Extended Events is that it can be configured very easily, and it is a robust method for collecting the information needed for troubleshooting. There are many targets where you can store the information, including the XML file target, which I really like. In the following event session, we write the details of the event to two locations: 1) the ring buffer; and 2) an XML file. It is not necessary to write to both places; either of the two will do.

    -- Extended Event for finding long running queries
    IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'LongRunningQuery')
    DROP EVENT SESSION LongRunningQuery ON SERVER
    GO
    -- Create the event session
    CREATE EVENT SESSION LongRunningQuery ON SERVER
    -- Add the event to capture
    ADD EVENT sqlserver.sql_statement_completed
    (
        -- Add actions - event properties
        ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
        -- Predicate - duration over 1000 milliseconds
        WHERE sqlserver.sql_statement_completed.duration > 1000
    )
    -- Add a target for capturing the data - XML file
    ADD TARGET package0.asynchronous_file_target
        (SET filename = 'c:\LongRunningQuery.xet', metadatafile = 'c:\LongRunningQuery.xem'),
    -- Add a target for capturing the data - ring buffer
    ADD TARGET package0.ring_buffer (SET max_memory = 4096)
    WITH (max_dispatch_latency = 1 seconds)
    GO
    -- Start the event session
    ALTER EVENT SESSION LongRunningQuery ON SERVER STATE = START
    GO
    -- Run a long query (longer than 1000 ms)
    SELECT * FROM AdventureWorks.Sales.SalesOrderDetail
    ORDER BY UnitPriceDiscount DESC
    GO
    -- Stop the event session
    ALTER EVENT SESSION LongRunningQuery ON SERVER STATE = STOP
    GO
    -- Read the data from the ring buffer
    SELECT CAST(dt.target_data AS XML) AS xmlLockData
    FROM sys.dm_xe_session_targets dt
    JOIN sys.dm_xe_sessions ds ON ds.address = dt.event_session_address
    JOIN sys.server_event_sessions ss ON ds.name = ss.name
    WHERE dt.target_name = 'ring_buffer'
    AND ds.name = 'LongRunningQuery'
    GO
    -- Read the data from the XML file
    SELECT
        event_data_XML.value('(event/data[1])[1]','VARCHAR(100)') AS Database_ID,
        event_data_XML.value('(event/data[2])[1]','INT') AS OBJECT_ID,
        event_data_XML.value('(event/data[3])[1]','INT') AS object_type,
        event_data_XML.value('(event/data[4])[1]','INT') AS cpu,
        event_data_XML.value('(event/data[5])[1]','INT') AS duration,
        event_data_XML.value('(event/data[6])[1]','INT') AS reads,
        event_data_XML.value('(event/data[7])[1]','INT') AS writes,
        event_data_XML.value('(event/action[1])[1]','VARCHAR(512)') AS sql_text,
        event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS tsql_stack,
        CAST(event_data_XML.value('(event/action[2])[1]','VARCHAR(512)') AS XML).value('(frame/@handle)[1]','VARCHAR(50)') AS handle
    FROM
        (SELECT CAST(event_data AS XML) event_data_XML, *
         FROM sys.fn_xe_file_target_read_file
             ('c:\LongRunningQuery*.xet', 'c:\LongRunningQuery*.xem', NULL, NULL)) T
    GO
    -- Clean up: drop the event session
    DROP EVENT SESSION LongRunningQuery ON SERVER
    GO

    Just run the script above; afterwards you will find the following result set, which contains the queries that ran for longer than 1000 ms. In our example I used the XML file target, which does not reset when the SQL Server service or the computer restarts (if you use the DMV, it resets when the SQL Server service restarts). This event session can be very helpful for troubleshooting. Let me know if you want me to write more about Extended Events. I am totally fascinated by this feature, so I'm planning to acquire more knowledge about it so I can determine its other usages. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Training, SQLServer, T SQL, Technology Tagged: SQL Extended Events
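    Before firing the test query, you can confirm the session is actually running with a quick check (a minimal sketch; the session name matches the script above, and a running session appears in sys.dm_xe_sessions while a stopped one does not):

    SELECT name, create_time
    FROM sys.dm_xe_sessions
    WHERE name = 'LongRunningQuery';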

    Read the article

  • SQL SERVER – Update Statistics are Sampled By Default

    - by pinaldave
    After reading my earlier post, SQL SERVER – Create Primary Key with Specific Name when Creating Table, I received another question on statistics from a blog reader. The question is as follows: Question: Are the statistics sampled by default? Answer: Yes. The sampling rate can be specified by the user, and it can be anywhere between a very low value and 100%. Let us do a small experiment to verify this: with auto update statistics left on, let us examine whether the statistics created by default on a very large table are sampled or not.

    USE [AdventureWorks]
    GO
    -- Create table
    CREATE TABLE [dbo].[StatsTest](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [FirstName] [varchar](100) NULL,
        [LastName] [varchar](100) NULL,
        [City] [varchar](100) NULL,
        CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
    ) ON [PRIMARY]
    GO
    -- Insert 1 million rows
    INSERT INTO [dbo].[StatsTest] (FirstName, LastName, City)
    SELECT TOP 1000000
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Update the statistics
    UPDATE STATISTICS [dbo].[StatsTest]
    GO
    -- Show the statistics
    DBCC SHOW_STATISTICS ('dbo.StatsTest', PK_StatsTest)
    GO
    -- Clean up
    DROP TABLE [dbo].[StatsTest]
    GO

    Now let us observe the result of DBCC SHOW_STATISTICS. The result shows that, for a large dataset, the statistics are indeed sampled. The percentage of sampling is based on the data distribution as well as the kind of data in the table. Before dropping the table, let us first check its size: the size of the table is 35 MB. Now, let us run the above code with a smaller number of rows.

    USE [AdventureWorks]
    GO
    -- Create table
    CREATE TABLE [dbo].[StatsTest](
        [ID] [int] IDENTITY(1,1) NOT NULL,
        [FirstName] [varchar](100) NULL,
        [LastName] [varchar](100) NULL,
        [City] [varchar](100) NULL,
        CONSTRAINT [PK_StatsTest] PRIMARY KEY CLUSTERED ([ID] ASC)
    ) ON [PRIMARY]
    GO
    -- Insert 1 hundred thousand rows
    INSERT INTO [dbo].[StatsTest] (FirstName, LastName, City)
    SELECT TOP 100000
        'Bob',
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
        CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
             WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
             ELSE 'Houston' END
    FROM sys.all_objects a
    CROSS JOIN sys.all_objects b
    GO
    -- Update the statistics
    UPDATE STATISTICS [dbo].[StatsTest]
    GO
    -- Show the statistics
    DBCC SHOW_STATISTICS ('dbo.StatsTest', PK_StatsTest)
    GO
    -- Clean up
    DROP TABLE [dbo].[StatsTest]
    GO

    You can see that Rows Sampled is exactly the same as the Rows of the table; in this case the sample rate is 100%. Before dropping the table, let us also check its size: the size of the table is less than 4 MB. Let us compare the result sets for a valid reference.
    Test 1: Total Rows: 1000000, Rows Sampled: 255420, Size of the Table: 35.516 MB
    Test 2: Total Rows: 100000, Rows Sampled: 100000, Size of the Table: 3.555 MB
    The reason behind the sampling in Test 1 is that the data space is larger than 8 MB and therefore uses more than 1024 data pages. If the data space is smaller than 8 MB and uses fewer than 1024 data pages, no sampling happens. Sampling helps reduce excessive data scanning; however, it sometimes reduces the accuracy of the statistics as well. Please note that this is just a sample test and in no way can be claimed a benchmark; the results can differ on different machines. There is a lot of other information that could be included on this subject, and I will write a detailed post covering it very soon. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
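    As an aside for readers on later builds (sys.dm_db_stats_properties ships with SQL Server 2008 R2 SP2 and SQL Server 2012 SP1; this sketch assumes one of those), the sampling rate can be read directly instead of parsing DBCC output:

    -- report rows vs. rows sampled for every statistic on the table
    SELECT s.name AS stats_name, sp.rows, sp.rows_sampled,
           CAST(100.0 * sp.rows_sampled / sp.rows AS DECIMAL(5, 2)) AS sampled_pct
    FROM sys.stats s
    CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) sp
    WHERE s.[object_id] = OBJECT_ID('dbo.StatsTest');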

    Read the article

  • How to represent a Rubik's Cube in a data structure

    - by Mel
    I am just curious: how would you create a data structure for a Rubik's cube with X number of sides? Things to consider:
    - the cube can be of any size
    - it is a Rubik's cube! so layers can be rotated (about all three axes)
    And a bonus question: using the data structure, how can we know whether a cube in a certain state is solvable? I have been struggling with this question myself and haven't quite found the answer yet.
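    One common starting point (a sketch of my own, with assumed face names and orientation, not an authoritative answer): keep the six faces as N x N grids of sticker labels; a layer turn is then a 90-degree rotation of the touched face plus a cyclic swap of one row/column on the four adjacent faces. A minimal Python sketch for the horizontal layers:

    class Cube:
        def __init__(self, n):
            self.n = n
            # six faces, each an n x n grid of sticker labels
            self.faces = {f: [[f] * n for _ in range(n)] for f in "UDFBLR"}

        def _rotate_face_cw(self, f):
            # 90-degree clockwise rotation of one face grid
            self.faces[f] = [list(row) for row in zip(*self.faces[f][::-1])]

        def turn_u(self, layer=0):
            # Rotate horizontal layer `layer` (0 = top) clockwise, seen from above.
            # B is assumed stored as seen after spinning the cube about its
            # vertical axis, so same-height rows cycle with no index reversals.
            if layer == 0:
                self._rotate_face_cw("U")
            elif layer == self.n - 1:
                pass  # turning the bottom layer also spins D; omitted in this sketch
            f, l, b, r = (self.faces[x] for x in "FLBR")
            # rows at this height cycle F -> L -> B -> R -> F
            f[layer], l[layer], b[layer], r[layer] = r[layer], f[layer], l[layer], b[layer]

    cube = Cube(3)
    cube.turn_u()
    print(cube.faces["F"][0])  # ['R', 'R', 'R'] - the old right-face row moved to F

    Turns about the other two axes work the same way but swap columns, with index reversals that depend on the storage convention. For the solvability bonus: on the classic 3x3x3 this is usually decided by parity arguments (corner/edge permutation parity plus the corner-twist and edge-flip sums), rather than read straight off the data structure.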

    Read the article

  • How do you find local fellow programmers?

    - by Pepijn
    I'm a self-taught programmer living in a small town. Except for occasional meetups at the other end of the country, I rarely talk face-to-face with other programmers. I'm well aware of the merits of pair programming, feedback, discussion with other programmers and all... What do you do to get in contact with other local programmers? p.s. If you live near Loenen (Gld), Netherlands, I'd like to get in touch ;)

    Read the article

  • XAML2CPP 1.0.2.0

    - by Valter Minute
    A new, updated release of everybody's favourite XAML-to-CPP conversion tool (at least because it's the only one available!).
    New features:
    - support for resource dictionaries (app.xaml, if you use Blend to generate your XAML)
    Bugfixes:
    - the parameters for the MouseLeftButtonDown and MouseLeftButtonUp events were incorrect
    As usual, you can download the new release here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/XAML2CPP.zip
    Technorati Tags: XAML, Silverlight for Windows Embedded

    Read the article

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
    I've been thinking for a while about defining several namespaces in one PHP file, and so having several classes inside that one file. Suppose I want to implement something like Doctrine\ORM\Query\Expr:

    Expr.php
    Expr
    |-- Andx.php
    |-- Base.php
    |-- Comparison.php
    |-- Composite.php
    |-- From.php
    |-- Func.php
    |-- GroupBy.php
    |-- Join.php
    |-- Literal.php
    |-- Math.php
    |-- OrderBy.php
    |-- Orx.php
    `-- Select.php

    It would be nice if I had all of this in one file - Expr.php:

    namespace Doctrine\ORM\Query;
    class Expr
    {
        // code
    }

    namespace Doctrine\ORM\Query\Expr;
    class Func
    {
        // code
    }
    // etc...

    What I'm thinking of is a directory naming convention and, unlike PSR-0, having several classes and namespaces in one file. It's best explained by the code:

    ls Doctrine/orm/query
    Expr.php

    That's it - only Expr.php. Since Expr.php is somewhat of a "meta-namespace" for Expr\Func, it makes sense to place all the classes inside Expr.php (as shown above). So the vendor name still starts with an uppercased letter (Doctrine), while the other parts of the namespace start with a lowercased letter. We can write an autoloader that respects this notion:

    function load_class($class)
    {
        // pass false so class_exists() does not re-trigger the autoloader
        if (class_exists($class, false)) {
            return true;
        }
        // 'Doctrine\orm\query\Expr\Func' -> array('Doctrine', 'orm', 'query', 'Expr', 'Func')
        $tokenized_path = explode(DIRECTORY_SEPARATOR,
            str_replace(array('_', '\\'), DIRECTORY_SEPARATOR, $class));
        // First, we look for the first uppercased namespace part that is not the
        // last one (i.e. not the class name itself) and use it as the filename,
        // wiping away the rest to compose the path to the file we need to include.
        // find_meta_class(): returns the index of that part, or FALSE (defined elsewhere)
        if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) {
            // keep everything up to and including the meta-class ('Expr' above)
            $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index + 1);
            $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path);
        } else {
            // no meta-class found
            $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path);
        }
        if (file_exists($path_to_class . '.php')) {
            require_once $path_to_class . '.php';
            return true;
        }
        return false;
    }

    Another reason to do this is to reduce the number of PHP files scattered among directories. Usually you check file existence before you require a file, to fail gracefully:

    file_exists($path_to_class . '.php');

    If you take a look at the actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner" classes, so you actually do:

    file_exists("/path/to/Doctrine/ORM/Query/Expr.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php");
    file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php");

    in your autoloader, which causes quite a few I/O reads. Isn't that too much to check on each user's hit? I'm just putting this up for discussion, and I want to hear from other PHP programmers what they think of it. And, of course, if you have a silver bullet addressing the problems I've described here, please share.
    I have also been wondering whether my vague question fits here; according to the FAQ, it seems to address a "software architecture" problem slash proposal. I'm sorry if my scribble seems a bit clunky :) Thanks.
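    For what it's worth, wiring the sketch above into PHP's autoload chain is a one-liner (assuming load_class is defined as shown):

    spl_autoload_register('load_class');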

    Read the article

  • An XEvent a Day (3 of 31) – Managing Event Sessions

    - by Jonathan Kehayias
    Yesterday's post, Querying the Extended Events Metadata, showed how to discover the objects available for use in Extended Events. In today's post, we'll take a look at the DDL commands that are used to create and manage event sessions based on the objects available in the system. Like other objects inside of SQL Server, there are three DDL commands that are used with Extended Events: CREATE EVENT SESSION, ALTER EVENT SESSION, and DROP EVENT SESSION. The command names are self...(read more)
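    For a taste of the three commands before reading on, a minimal sketch (the session name and event are illustrative only, not from the post):

    -- define a bare-bones session with one event and the ring buffer target
    CREATE EVENT SESSION [DemoSession] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
    ADD TARGET package0.ring_buffer;
    GO
    -- start it, and later stop it
    ALTER EVENT SESSION [DemoSession] ON SERVER STATE = START;
    GO
    ALTER EVENT SESSION [DemoSession] ON SERVER STATE = STOP;
    GO
    -- remove the session definition
    DROP EVENT SESSION [DemoSession] ON SERVER;
    GO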

    Read the article

  • Challenge 19 – An Explanation of a Query

    - by Dave Ballantyne
    I have received a number of requests for an explanation of my winning query in TSQL Challenge 19. The challenge involved traversing a hierarchy of employees and rolling a count of orders from subordinates up to their superiors.

    The first concept I shall address is the hierarchyId, which is constructed within the CTE called cteTree. cteTree is a recursive CTE that expands the parent-child hierarchy of the personnel in the table @emp. One useful feature of a recursive CTE is that data can be 'passed' from parent rows to child rows. The hierarchyId column is similar to the hierarchyId data type that was introduced in SQL Server 2008, and it represents the position of the person within the organisation.

    Let us start with a simplistic example. Albert manages Bob and Eddie; Bob manages Carl and Dave. The hierarchyId will represent each person's position in this relationship in a single field. In this simple example we could append the IDs together into a varchar field, as detailed below. This enables us to select a branch of the tree by filtering with WHERE hierarchyId LIKE '1,2%' to select Bob and all his subordinates. Naturally, that is not comprehensive enough to provide a full solution, but instead of concatenating the IDs into a varchar-typed column, we can apply the same theory to a varbinary, CASTing each ID into a varbinary(4) (4 bytes being what is used to store an integer) and building a hierarchyId from those. The important point to bear in mind for later in the query is that the binary data generated is 'byte order comparable', i.e. we can ORDER a dataset by it and the resulting data will be in the order required.

    Now would probably be a good time to download the example file and, after the CTE 'cteTree', uncomment the line 'select * from cteTree'. Mark this and all prior code and execute. This will show you how the theory directly relates to the actual challenge data. The only deviation from the above is that instead of using the ID of an employee, I have used the row_number() ranking function to order each level by LastName, FirstName. This enables me to order by the hierarchyId in the final result set so that the result set is in the required order. Your output should be something like the below. Notice also the 'Level' column, which contains the depth of the employee within the tree. I would encourage you to 'play' with the query; change the order in the row_number() or the length of the CAST in the hierarchyId to see how that affects the outcome.

    The next CTE, 'cteTreeWithOrderCount', is a join between cteTree and the @ord table, and it COUNTs the number of orders per employee. A LEFT JOIN is employed here to account for the occasion where an employee has made no sales. Executing 'Select * from cteTreeWithOrderCount' will return the result set below. The order here is unimportant, as this is only a staging point of the data; only the final result set in a CTE chain needs an ORDER BY clause, unless TOP is utilised.

    cteExplode joins the above result set to the tally table (Nums) on the Level value, so if Level is 2 then 2 rows are required. This is done to expand the dataset and create a new column (PathInc), which contains the successive 4-byte prefixes of the hierarchyId. For example, with the data for Robert King as given above, the three rows below will be returned.
    From this you can see that the PathInc column now contains the values for Andrew Fuller and Steven Buchanan, who are Robert King's superiors within the tree. Finally, cteSumUp sums the orders for each person and their subordinates using the PathInc generated above, and the final SELECT does the remaining simple mathematics and filters to restrict the result set to only the 'original' row per employee.
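    To make the hierarchyId construction concrete, here is a stripped-down sketch of the technique (the table and column names are illustrative, not the actual challenge schema):

    -- build a byte-order-comparable path for a simple parent/child table
    DECLARE @emp TABLE (EmpID INT, MgrID INT, Name VARCHAR(20));
    INSERT INTO @emp VALUES (1, NULL, 'Albert'), (2, 1, 'Bob'),
                            (3, 2, 'Carl'), (4, 2, 'Dave'), (5, 1, 'Eddie');
    WITH cteTree AS (
        SELECT EmpID, Name, 1 AS [Level],
               CAST(CAST(EmpID AS BINARY(4)) AS VARBINARY(MAX)) AS hierarchyId
        FROM @emp
        WHERE MgrID IS NULL
        UNION ALL
        SELECT e.EmpID, e.Name, t.[Level] + 1,
               CAST(t.hierarchyId + CAST(e.EmpID AS BINARY(4)) AS VARBINARY(MAX))
        FROM @emp e
        INNER JOIN cteTree t ON e.MgrID = t.EmpID
    )
    SELECT EmpID, Name, [Level], hierarchyId
    FROM cteTree
    ORDER BY hierarchyId; -- depth-first tree order falls out of the binary path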

    Read the article

  • Spending the summer at camp… Web Camp, that is

    - by Jon Galloway
    Microsoft is sponsoring a series of Web Camps this summer: free, two-day events being held worldwide, and I'm really excited to be taking part. The camp is targeted at a broad range of developer backgrounds and experience. Content builds from 101-level introductory material to 200-300 level coverage, but we hit some advanced bits (e.g. MVC 2 features, jQuery templating, IIS 7 features, etc.) that advanced developers may not have seen yet. We start with a lap around ASP.NET & Web Forms, then move on to building an application with ASP.NET MVC 2, jQuery, and Entity Framework 4, and finally deploy to IIS. I got to spend some time working with James before the first Web Camp refining the content, and I think he's packed about as much goodness into the time available as is scientifically possible. The content is really code-focused: we start with File/New Project and spend the day building a real, working application. The second day of the Web Camp gives attendees an opportunity to get hands-on. There are two options:
    - Join a team and build an application of your choice
    - Work on a lab or tutorial
    James Senior and I kicked off the fun with the first Web Camp in Toronto a few weeks ago. It was sold out, lots of fun, and by all accounts a great way to spend two days. I'm really enthusiastic about the format. Rather than just listening to speakers and then forgetting everything in a few days, attendees actually build something of their choice. They get an opportunity to pitch projects they're interested in, form teams, and build them, getting experience with "real world" problems, with all the help they need from experienced developers. James got help on the second-day practical part from the good folks who run Startup Weekend. Startup Weekend is a fantastic program that gathers developers together to build cool apps in a weekend, so their input on how to organize successful teams for weekend projects was invaluable. In addition to the Toronto camp, I'll be at the Mountain View, London, Munich, and New York camps over the next month. London is sold out, but the rest still have space available, so come join us! Here's the full list, with the ones I'll be at bolded because - you know - it's my blog. The whole speaker list is great, including Scott Guthrie, Scott Hanselman, James Senior, Rachel Appel, Dan Wahlin, and Christian Wenz.
    Toronto May 7-8 (James Senior and I were thrown out on our collective ears)
    Moscow May 19
    Beijing May 21-22
    Shanghai May 24-25
    Mountain View May 27-28 (I'm speaking with Rachel Appel)
    Sydney May 28-29
    Singapore June 04-05
    London June 04-05 (I'm speaking with Christian Wenz – SOLD OUT)
    Munich June 07-08 (I'm speaking with Christian Wenz)
    Chicago June 11-12
    Redmond, WA June 18-19
    New York June 25-26 (I'm speaking with Dan Wahlin)
    Come say hi!

    Read the article

  • Online Judge System

    - by Deni Mf
    I'm planning to host a programming competition within my company; if the event is successful and there is interest, we plan to do this a couple of times a year. I've found the following self-hosted platforms:
    http://www.domjudge.org/development
    http://sankhs.com/codejudge/
    http://sharifjudge.ir/news/sharif-judge-12-released (does not support C#)
    And this free online service:
    http://www.codechef.com/hostyourcontest
    Can you share your experience in hosting such an event, and which platforms did you use?

    Read the article

< Previous Page | 288 289 290 291 292 293 294 295 296 297 298 299  | Next Page >