Search Results

  • Killing Stuck Child JVMs

    - by ACShorten
    Note: This facility only applies to Oracle Utilities Application Framework products using COBOL.

    In some situations the Child JVMs may spin. This causes multiple startup/shutdown Child JVM messages to be displayed and recursive Child JVMs to be initiated and shunned, along with the following message:

        Unable to establish connection on port .... after waiting .. seconds.

    The issue can be caused intermittently by CPU spins in connection with the creation of new processes, specifically Child JVMs. Recursive (or double) invocation of the System.exit call in the remote JVM may be caused by a Process.destroy call that the parent JVM always issues when shunning a JVM. The issue may happen when the thread in the parent JVM that is responsible for the recycling gets stuck, and it affects all Child JVMs. If this issue occurs at your site, there are a number of options to address it:

    - Configure an Operating System level kill command to force the Child JVM to be shunned when it becomes stuck.
    - Configure a Process.destroy command to be used if the kill command is not configured or desired.
    - Specify a time tolerance to detect stuck threads before issuing the Process.destroy or kill commands.

    Note: This facility is also used when the Parent JVM is shut down, to ensure no zombie Child JVMs remain.

    The following additional settings must be added to the spl.properties for the Business Application Server to use this facility:

    - spl.runtime.cobol.remote.kill.command – Specifies the command used to kill the Child JVM process. This can be a command, or a script to execute that provides additional information. The kill.command property can accept two arguments, {pid} and {jvmNumber}, in the specified string. The arguments must be enclosed in curly braces as shown here. Note: The PID will be appended to the kill command string unless the {pid} and {jvmNumber} arguments are specified. The jvmNumber can be useful if passed to a script for logging purposes. Note: If a script is used, it must be in the path and be executable by the OS user running the system.
    - spl.runtime.cobol.remote.destroy.enabled – Specifies whether to use the Process.destroy command instead of the kill command. Specify true or false; the default value is false. Note: Unless otherwise required, it is recommended to use the kill command option if shunning JVMs is an issue, in which case this value can remain at its default of false.
    - spl.runtime.cobol.remote.kill.delaysecs – Specifies the number of seconds to wait for the Child JVM to terminate naturally before issuing the Process.destroy or kill commands. The default is 10 seconds.

    For example:

        spl.runtime.cobol.remote.kill.command=kill -9 {pid} {jvmNumber}
        spl.runtime.cobol.remote.destroy.enabled=false
        spl.runtime.cobol.remote.kill.delaysecs=10

    When a Child JVM is to be recycled, these properties are inspected and the spl.runtime.cobol.remote.kill.command is executed if provided. This is done after waiting spl.runtime.cobol.remote.kill.delaysecs seconds to give the JVM time to shut itself down. The spl.runtime.cobol.remote.destroy.enabled property must be set to true AND the spl.runtime.cobol.remote.kill.command omitted for the original Process.destroy command to be used on the process. Note: By default spl.runtime.cobol.remote.destroy.enabled is set to false and is therefore disabled.

    If neither spl.runtime.cobol.remote.kill.command nor spl.runtime.cobol.remote.destroy.enabled is specified, Child JVMs will not be forcibly killed. They will be left to shut themselves down (which may lead to orphan JVMs). If both are specified, the spl.runtime.cobol.remote.kill.command is preferred and spl.runtime.cobol.remote.destroy.enabled is defaulted to false.

    It is recommended to invoke a script that issues the kill command rather than using kill -9 directly. For example, the following sample script ensures that the process Id is an active cobjrun process before issuing the kill command:

        #!/bin/sh
        # forcequit.sh
        THETIME=`date +"%Y-%m-%d %H:%M:%S"`
        if [ "$1" = "" ]
        then
          echo "$THETIME: Process Id is required" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi
        javaexec=cobjrun
        # grep's exit status tells us whether the pid is an active cobjrun process
        ps e $1 | grep -c $javaexec >/dev/null
        if [ $? = 0 ]
        then
          echo "$THETIME: Process $1 is an active $javaexec process -- issuing kill -9 $1" >>$SPLSYSTEMLOGS/forcequit.log
          kill -9 $1
          exit 0
        else
          echo "$THETIME: Process id $1 is not a $javaexec process or not active -- kill will not be issued" >>$SPLSYSTEMLOGS/forcequit.log
          exit 1
        fi

    This script's name would then be specified as the value for the spl.runtime.cobol.remote.kill.command property, for example:

        spl.runtime.cobol.remote.kill.command=forcequit.sh

    The forcequit script does not have any explicit parameters, but the pid is passed automatically. To use the jvmNumber parameter it must be explicitly specified in the command. For example, to call forcequit.sh and pass it the pid and the Child JVM number, specify it as follows:

        spl.runtime.cobol.remote.kill.command=forcequit.sh {pid} {jvmNumber}

    The script can then use the JVM number for logging purposes, or to further ensure that the correct pid is being killed. If the arguments are omitted, the pid is automatically appended to the spl.runtime.cobol.remote.kill.command string.

    To use this facility the following patches must be installed:

    - Patch 13719584 for Oracle Utilities Application Framework V2.1
    - Patches 13684595 and 13634933 for Oracle Utilities Application Framework V2.2
    - Group Fix 4 (as Patch 13640668) for Oracle Utilities Application Framework V4.1

  • WPF: TreeViewItem bound to an ICommand

    - by Richard
    Hi All, I am busy creating my first MVVM application in WPF. Basically the problem I am having is that I have a TreeView (System.Windows.Controls.TreeView) which I have placed on my WPF window, and I have decided to bind it to a ReadOnlyCollection of CommandViewModel items; these items consist of a DisplayString, a Tag and a RelayCommand. In the XAML I have my TreeView, and I have successfully bound my ReadOnlyCollection to it. I can view this and everything looks fine in the UI. The issue now is that I need to bind the RelayCommand to the Command of the TreeViewItem; however, from what I can see the TreeViewItem doesn't have a Command property. Does this force me to do it in the IsSelected property, or even in the code-behind TreeView_SelectedItemChanged method, or is there a way to do this magically in WPF? This is the code I have:

        <TreeView BorderBrush="{x:Null}" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
            <TreeView.Items>
                <TreeViewItem Header="New Commands"
                              ItemsSource="{Binding Commands}"
                              DisplayMemberPath="DisplayName"
                              IsExpanded="True">
                </TreeViewItem>
            </TreeView.Items>
        </TreeView>

    and ideally I would love to just go:

        <TreeView BorderBrush="{x:Null}" HorizontalAlignment="Stretch" VerticalAlignment="Stretch">
            <TreeView.Items>
                <TreeViewItem Header="New Trade"
                              ItemsSource="{Binding Commands}"
                              DisplayMemberPath="DisplayName"
                              IsExpanded="True"
                              Command="{Binding Path=Command}">
                </TreeViewItem>
            </TreeView.Items>
        </TreeView>

    Does someone have a solution that allows me to use the RelayCommand infrastructure I have? Thanks guys, much appreciated! Richard
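
    One minimal workaround that stays with the existing RelayCommand plumbing is to relay the TreeView's SelectedItemChanged event in code-behind (a sketch only; it assumes the CommandViewModel exposes its RelayCommand through a Command property):

        private void TreeView_SelectedItemChanged(object sender,
            RoutedPropertyChangedEventArgs<object> e)
        {
            // e.NewValue is the data item behind the newly selected TreeViewItem
            var vm = e.NewValue as CommandViewModel;
            if (vm != null && vm.Command.CanExecute(null))
                vm.Command.Execute(null);
        }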

  • How to handle one View with multiple ViewModels and fire different Commands

    - by Naresh
    Hi All, I have a scenario in which one View is bound to multiple ViewModels. E.g. one View displaying Phone details, with ViewModels as follows: phone basic features - PhoneViewModel; phone price detail - PhoneSubscriptionViewModel; phone accessories - PhoneAccessoryViewModel; and for general properties - PhoneDetailViewModel. I have placed the View's general properties in PhoneViewModel. Now the scenario is like this: by default the View displays the phone's basic features, which are bound to an ObservableCollection on PhoneViewModel. My View has a button - 'View Accessories'; on click of this button a popup screen opens - in my design it is a Grid that I show/hide, bound to an ObservableCollection on PhoneAccessoryViewModel. Now the problem begins: the accessory list also has a button, 'View Detail'; on click I have to open another popup screen - here too I have placed a Grid that I show/hide. I have bound a 'ViewAccessoryDetailCommand' command to the 'View Detail' button, and on command execution a function fires and sets the property which makes the popup visible. With this setup the command fires and the function is called, but the property change is not raised, so my View never displays the popup. Summary: one View -- ViewModel1's Grid binds the view; ViewModel2's Grid has a button, and on click it should display a new Grid bound to ViewModel3 - the command fires but the property change is not raised. I think there is some problem in my methodology. Please give your suggestions.
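
    When a command handler runs but the bound UI never updates, the usual culprit is a property that is set without raising PropertyChanged on the ViewModel instance the Grid is actually bound to. A minimal sketch of the pattern, using a hypothetical IsDetailPopupVisible property:

        private bool isDetailPopupVisible;
        public bool IsDetailPopupVisible
        {
            get { return isDetailPopupVisible; }
            set
            {
                isDetailPopupVisible = value;
                // assumes the usual INotifyPropertyChanged helper on the ViewModel base class
                OnPropertyChanged("IsDetailPopupVisible");
            }
        }

    It is also worth confirming that the Grid's DataContext really is the ViewModel instance whose property the command sets, not a second instance of the same ViewModel class.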

  • How can I add the "--watch" flag to this TextMate snippet?

    - by Jannis
    I love TextMate as my editor for all things web, and so I'd like to use a snippet with style.less files to take advantage of the .less way of compiling .css files on the fly, using the native

        $ lessc {filepath} --watch

    as suggested in the less documentation (link). My current TextMate command (thanks to whoever wrote the LESS TM bundle!) works well for writing the currently opened .less file to the .css file, but I'd like to take advantage of the --watch parameter so that every change to the .less file gets automatically compiled into the .css file. This works well on the Terminal command line, so I am sure it must be possible in an adapted version of the current LESS command for TextMate, since that only invokes the command to compile the file. So how do I add the --watch flag to this command?

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        system("lessc \"#{file}\"")

    I assume it should be something like:

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        system("lessc \"#{file}\" --watch")

    But doing so only crashes TextMate.app. Any ideas would be much appreciated. Thanks for reading. Jannis
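
    A plausible explanation (an assumption, not verified against the original bundle): lessc --watch never exits, so the blocking system() call hangs the command runner and, with it, TextMate. Detaching the watcher avoids the hang; a minimal sketch:

        #!/usr/bin/env ruby
        file = STDIN.read[/lessc: ([^*]+\.less)/, 1] || ENV["TM_FILEPATH"]
        # --watch runs forever, so launch it in the background instead of blocking the editor
        system("nohup lessc \"#{file}\" --watch >/dev/null 2>&1 &")

    The trade-off is that the watcher keeps running until killed by hand (e.g. with pkill -f 'lessc.*--watch').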

  • Passing arguments to commandline with directories having spaces

    - by superstar
    Hi guys, I am making a system call from Perl to ContentCheck.pl and passing parameters with directories (having spaces), so I pass them in quotes, but they are not being picked up in the ContentCheck.pl file.

    Random.pm:

        my $call = "$perlExe $contentcheck -t $target_path -b $base_path -o $output_path -s $size_threshold";
        print "\ncall: ".$call."\n";
        system($call);

    Contentcheck.pl:

        use vars qw($opt_t $opt_b $opt_o $opt_n $opt_s $opt_h); # initialize
        getopts('t:b:o:n:s:h') or do {
            print "*** Error: Invalid command line option. Use option -h for help.\a\n";
            exit 1};
        if ($opt_h) { print $UsagePage; exit; }
        my $tar;
        if ($opt_t) { $tar=$opt_t; print "\ntarget ".$tar."\n"; }
        else {
            print " in target";
            print "*** Error: Invalid command line option. Use option -h for help.\a\n";
            exit 1; }
        my $base;
        if ($opt_b) { $base=$opt_b; }
        else {
            print "\nin base\n";
            print "*** Error: Invalid command line option. Use option -h for help.\a\n";
            exit 1; }

    This is the output on the command line:

        call: D:\tools\PacketCreationTool/bin/perl/winx64/bin/perl.exe D:/tools/PacketCreationTool/scripts/ContentCheck.pl -t "C:/Documents and Settings/pkkonath/Desktop/saved/myMockName.TGZ" -b "input file/myMockName.TGZ" -o myMockName.validate -s 10

        target C:/Documents

        in base
        *** Error: Invalid command line option. Use option -h for help.

    Any suggestions are welcome! Thanks.
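
    One way to sidestep the quoting problem entirely is the list form of system, which bypasses the shell so embedded spaces survive intact (a sketch against the variables above):

        my @call = ($perlExe, $contentcheck,
                    '-t', $target_path,
                    '-b', $base_path,
                    '-o', $output_path,
                    '-s', $size_threshold);
        system(@call) == 0
            or die "ContentCheck.pl failed: $?";

    With the single-string form, each argument would need to be wrapped in escaped quotes inside the interpolated string before the shell ever sees it.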

  • What are good design practices when working with Entity Framework

    - by AD
    This applies mostly to an ASP.NET application where the data is not accessed via SOA, meaning you get access to the objects loaded from the framework, not Transfer Objects, although some recommendations still apply. This is a community post, so please add to it as you see fit. Applies to: Entity Framework 1.0, shipped with Visual Studio 2008 SP1.

    Why pick EF in the first place?

    Considering it is a young technology with plenty of problems (see below), it may be a hard sell to get on the EF bandwagon for your project. However, it is the technology Microsoft is pushing (at the expense of Linq2Sql, which is a subset of EF). In addition, you may not be satisfied with NHibernate or other solutions out there. Whatever the reasons, there are people out there (including me) working with EF, and life is not bad.

    EF and inheritance

    The first big subject is inheritance. EF does support mapping for inherited classes that are persisted in two ways: table per class and table per hierarchy. The modeling is easy and there are no programming issues with that part. (The following applies to the table-per-class model, as I don't have experience with table per hierarchy, which is, anyway, limited.) The real problem comes when you are trying to run queries that include one or more objects that are part of an inheritance tree: the generated SQL is incredibly awful, takes a long time to get parsed by EF, and takes a long time to execute as well. This is a real show stopper -- enough that EF should probably not be used with inheritance, or as little as possible.

    Here is an example of how bad it was. My EF model had ~30 classes, ~10 of which were part of an inheritance tree. On running a query to get one item from the base class, something as simple as Base.Get(id), the generated SQL was over 50,000 characters. Then when you try to return some associations, it degenerates even more, going as far as throwing SQL exceptions about not being able to query more than 256 tables at once.

    Ok, this is bad. The EF concept is to allow you to create your object structure with as little consideration as possible for the actual database implementation of your tables. It completely fails at this.

    So, recommendations? Avoid inheritance if you can; the performance will be so much better. Use it sparingly where you have to. In my opinion, this makes EF a glorified SQL-generation tool for querying, but there are still advantages to using it -- and ways to implement mechanisms that are similar to inheritance.

    Bypassing inheritance with interfaces

    The first thing to know when trying to get some kind of inheritance going with EF is that you cannot give an EF-modeled class a non-EF-modeled base class. Don't even try it; it will get overwritten by the modeler. So what to do? You can use interfaces to enforce that classes implement some functionality. For example, here is an IEntity interface that allows you to define associations between EF entities where you don't know at design time what the type of the entity will be.

        public enum EntityTypes { Unknown = -1, Dog = 0, Cat }

        public interface IEntity
        {
            int EntityID { get; }
            string Name { get; }
            EntityTypes EntityType { get; }
        }

        public partial class Dog : IEntity
        {
            // implement EntityID and Name, which could actually be fields
            // from your EF model
            public EntityTypes EntityType { get { return EntityTypes.Dog; } }
        }

    Using this IEntity, you can then work with undefined associations in other classes:

        // let's take a class that you defined in your model.
        // that class has a mapping to the columns: PetID, PetType
        public partial class Person
        {
            public IEntity GetPet()
            {
                return IEntityController.Get(PetID, PetType);
            }
        }

    which makes use of a static helper:

        public class IEntityController
        {
            static public IEntity Get(int id, EntityTypes type)
            {
                switch (type)
                {
                    case EntityTypes.Dog: return Dog.Get(id);
                    case EntityTypes.Cat: return Cat.Get(id);
                    default: throw new Exception("Invalid EntityType");
                }
            }
        }

    Not as neat as plain inheritance, particularly considering you have to store the PetType in an extra database field, but considering the performance gains, I would not look back. It also cannot model one-to-many or many-to-many relationships, but with creative use of 'Union' it could be made to work. Finally, it has the side effect of loading data in a property/function of the object, which you need to be careful about. Using a clear naming convention like GetXYZ() helps in that regard.

    Compiled queries

    Entity Framework performance is not as good as direct database access with ADO (obviously) or Linq2Sql. There are ways to improve it, however, one of which is compiling your queries. The performance of a compiled query is similar to Linq2Sql.

    What is a compiled query? It is simply a query for which you tell the framework to keep the parsed tree in memory so it doesn't need to be regenerated the next time you run it. So on the next run, you save the time it takes to parse the tree. Do not discount that, as it is a very costly operation that gets even worse with more complex queries.

    There are two ways to compile a query: creating an ObjectQuery with EntitySQL, and using the CompiledQuery.Compile() function. (Note that by using an EntityDataSource in your page, you will in fact be using ObjectQuery with EntitySQL, so that gets compiled and cached.)

    An aside here, in case you don't know what EntitySQL is: it is a string-based way of writing queries against the EF. Here is an example: "select value dog from Entities.DogSet as dog where dog.ID = @ID". The syntax is pretty similar to SQL syntax. You can also do pretty complex object manipulation, which is well explained [here][1].

    Ok, so here is how to do it using ObjectQuery<>:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, EntityContext.Instance);
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    The first time you run this query, the framework will generate the expression tree and keep it in memory. So the next time it gets executed, you will save on that costly step. In this example EnablePlanCaching = true, which is unnecessary since that is the default option.

    The other way to compile a query for later use is the CompiledQuery.Compile method. This uses a delegate:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                ctx.DogSet.FirstOrDefault(it => it.ID == id));

    or, using Linq:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet where dog.ID == id select dog).FirstOrDefault());

    To call the query:

        query_GetDog.Invoke(YourContext, id);

    The advantage of CompiledQuery is that the syntax of your query is checked at compile time, whereas EntitySQL is not. However, there are other considerations...

    Includes

    Let's say you want the data for the dog's owner to be returned by the query, to avoid making two calls to the database. Easy to do, right?

    EntitySQL:

        string query = "select value dog " +
                       "from Entities.DogSet as dog " +
                       "where dog.ID = @ID";
        ObjectQuery<Dog> oQuery =
            new ObjectQuery<Dog>(query, EntityContext.Instance).Include("Owner");
        oQuery.Parameters.Add(new ObjectParameter("ID", id));
        oQuery.EnablePlanCaching = true;
        return oQuery.FirstOrDefault();

    CompiledQuery:

        static readonly Func<Entities, int, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, Dog>((ctx, id) =>
                (from dog in ctx.DogSet.Include("Owner")
                 where dog.ID == id
                 select dog).FirstOrDefault());

    Now, what if you want to have the Include parametrized? What I mean is that you want a single Get() function that is called from different pages that care about different relationships for the dog. One cares about the Owner, another about his FavoriteFood, another about his FavoriteToy, and so on. Basically, you want to tell the query which associations to load. It is easy to do with EntitySQL:

        public Dog Get(int id, string include)
        {
            string query = "select value dog " +
                           "from Entities.DogSet as dog " +
                           "where dog.ID = @ID";
            ObjectQuery<Dog> oQuery =
                new ObjectQuery<Dog>(query, EntityContext.Instance).IncludeMany(include);
            oQuery.Parameters.Add(new ObjectParameter("ID", id));
            oQuery.EnablePlanCaching = true;
            return oQuery.FirstOrDefault();
        }

    The Include simply uses the passed string. Easy enough. Note that it is possible to improve on the Include(string) function (which accepts only a single path) with an IncludeMany(string) that lets you pass a string of comma-separated associations to load (a sketch of such an extension appears further below).

    If we try to do it with CompiledQuery, however, we run into numerous problems. The obvious

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.Include(include)
                 where dog.ID == id
                 select dog).FirstOrDefault());

    will choke when called with:

        query_GetDog.Invoke(YourContext, id, "Owner,FavoriteFood");

    because, as mentioned above, Include() only wants to see a single path in the string, and here we are giving it two: "Owner" and "FavoriteFood" (which is not to be confused with "Owner.FavoriteFood"!).

    Then let's use IncludeMany(), which is an extension function:

        static readonly Func<Entities, int, string, Dog> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, Dog>((ctx, id, include) =>
                (from dog in ctx.DogSet.IncludeMany(include)
                 where dog.ID == id
                 select dog).FirstOrDefault());

    Wrong again, this time because EF cannot parse IncludeMany: it is not among the functions it recognizes; it is an extension.

    Ok, so you want to pass an arbitrary number of paths to your function and Include() only takes a single one. What to do? You could decide that you will never ever need more than, say, 20 includes and pass each separate string in a struct to CompiledQuery. But now the query looks like this:

        from dog in ctx.DogSet.Include(include1).Include(include2).Include(include3)
                              .Include(include4).Include(include5).Include(include6)
                              .[...].Include(include19).Include(include20)
        where dog.ID == id
        select dog

    which is awful as well. Ok, then, but wait a minute: can't we return an ObjectQuery<> with CompiledQuery, then set the includes on that? Well, that's what I would have thought as well:

        static readonly Func<Entities, int, ObjectQuery<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, ObjectQuery<Dog>>((ctx, id) =>
                (ObjectQuery<Dog>)(from dog in ctx.DogSet where dog.ID == id select dog));

        public Dog GetDog(int id, string include)
        {
            ObjectQuery<Dog> oQuery = query_GetDog(YourContext, id);
            oQuery = oQuery.IncludeMany(include);
            return oQuery.FirstOrDefault();
        }

    That should have worked, except that when you call IncludeMany (or Include, Where, OrderBy...) you invalidate the cached compiled query, because it is an entirely new one now! So the expression tree needs to be re-parsed and you get that performance hit again.

    So what is the solution? You simply cannot use CompiledQuery with parametrized Includes. Use EntitySQL instead. This doesn't mean that there aren't uses for CompiledQuery. It is great for localized queries that will always be called in the same context. Ideally CompiledQuery should always be used, because the syntax is checked at compile time, but due to this limitation that's not possible. An example of use: you may want a page that queries which two dogs have the same favorite food, which is a bit narrow for a BusinessLayer function, so you put it in your page and know exactly what type of includes are required.

    Passing more than 3 parameters to a CompiledQuery

    Func is limited to 5 parameters, of which the last one is the return type and the first one is your Entities object from the model. So that leaves you with 3 parameters. A pittance, but it can be improved on very easily:

        public struct MyParams
        {
            public string param1;
            public int param2;
            public DateTime param3;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                where dog.Age == myParams.param2
                      && dog.Name == myParams.param1
                      && dog.BirthDate > myParams.param3
                select dog);

        public List<Dog> GetSomeDogs(int age, string name, DateTime birthDate)
        {
            MyParams myParams = new MyParams();
            myParams.param1 = name;
            myParams.param2 = age;
            myParams.param3 = birthDate;
            return query_GetDog(YourContext, myParams).ToList();
        }

    Return types

    (This does not apply to EntitySQL queries, as they aren't compiled ahead of execution the way the CompiledQuery method is.)

    Working with Linq, you usually don't force the execution of the query until the very last moment, in case some other function downstream wants to change the query in some way:

        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet
                where dog.Age == age && dog.Name == name
                select dog);

        public IEnumerable<Dog> GetSomeDogs(int age, string name)
        {
            return query_GetDog(YourContext, age, name);
        }

        public void DataBindStuff()
        {
            IEnumerable<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy(it => it.BirthDate);
        }

    What is going to happen here? By still playing with the original ObjectQuery (that is the actual return type of the Linq statement, which implements IEnumerable), you invalidate the compiled query and force a re-parse. So the rule of thumb is to return a List<> of objects instead:

        static readonly Func<Entities, int, string, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, int, string, IEnumerable<Dog>>((ctx, age, name) =>
                from dog in ctx.DogSet
                where dog.Age == age && dog.Name == name
                select dog);

        public List<Dog> GetSomeDogs(int age, string name)
        {
            return query_GetDog(YourContext, age, name).ToList(); // <== change here
        }

        public void DataBindStuff()
        {
            List<Dog> dogs = GetSomeDogs(4, "Bud");
            // but I want the dogs ordered by BirthDate
            gridView.DataSource = dogs.OrderBy(it => it.BirthDate);
        }

    When you call ToList(), the query gets executed as per the compiled query, and then, later, the OrderBy is executed against the objects in memory. It may be a little bit slower, but I'm not even sure. One sure thing is that you have no worries about mishandling the ObjectQuery and invalidating the compiled query plan.

    Once again, that is not a blanket statement. ToList() is a defensive programming trick, but if you have a valid reason not to use ToList(), go ahead. There are many cases in which you would want to refine the query before executing it.

    Performance

    What is the performance impact of compiling a query? It can actually be fairly large. A rule of thumb is that compiling and caching the query for reuse takes at least double the time of simply executing it without caching. For complex queries (read: inheritance), I have seen upwards of 10 seconds. So the first time a pre-compiled query gets called, you take a performance hit. After that first hit, performance is noticeably better than the same non-pre-compiled query -- practically the same as Linq2Sql.

    When you load a page with pre-compiled queries the first time, you will get a hit. It will load in maybe 5-15 seconds (obviously more than one pre-compiled query will end up being called), while subsequent loads will take less than 300ms. A dramatic difference, and it is up to you to decide whether it is OK for your first user to take a hit, or whether you want a script to call your pages to force a compilation of the queries.

    Can this query be cached?

        Dog dog = (from dog in YourContext.DogSet
                   where dog.ID == id
                   select dog).FirstOrDefault();

    No. Ad-hoc Linq queries are not cached, and you will incur the cost of generating the tree every single time you call it.

    Parametrized queries

    Most search capabilities involve heavily parametrized queries. There are even libraries available that will let you build a parametrized query out of lambda expressions. The problem is that you cannot use pre-compiled queries with those. One way around that is to map out all the possible criteria in the query and flag which ones you want to use:

        public struct MyParams
        {
            public string name;
            public bool checkName;
            public int age;
            public bool checkAge;
        }

        static readonly Func<Entities, MyParams, IEnumerable<Dog>> query_GetDog =
            CompiledQuery.Compile<Entities, MyParams, IEnumerable<Dog>>((ctx, myParams) =>
                from dog in ctx.DogSet
                // each criterion only filters when its flag is set
                where (myParams.checkAge == false || dog.Age == myParams.age)
                      && (myParams.checkName == false || dog.Name == myParams.name)
                select dog);

        protected List<Dog> GetSomeDogs()
        {
            MyParams myParams = new MyParams();
            myParams.name = "Bud";
            myParams.checkName = true;
            myParams.age = 0;
            myParams.checkAge = false;
            return query_GetDog(YourContext, myParams).ToList();
        }

    The advantage here is that you get all the benefits of a pre-compiled query. The disadvantages are that you will most likely end up with a where clause that is pretty difficult to maintain, that you will incur a bigger penalty for pre-compiling the query, and that each query you run is not as efficient as it could be (particularly with joins thrown in).

    Another way is to build an EntitySQL query piece by piece, like we all did with SQL:

        protected List<Dog> GetSomeDogs(string name, int age)
        {
            string query = "select value dog from Entities.DogSet where 1 = 1 ";
            if (!String.IsNullOrEmpty(name))
                query = query + " and dog.Name = @Name ";
            if (age > 0)
                query = query + " and dog.Age = @Age ";

            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext);
            if (!String.IsNullOrEmpty(name))
                oQuery.Parameters.Add(new ObjectParameter("Name", name));
            if (age > 0)
                oQuery.Parameters.Add(new ObjectParameter("Age", age));

            return oQuery.ToList();
        }

    Here the problems are:
    - there is no syntax checking during compilation
    - each different combination of parameters generates a different query, which will need to be pre-compiled when it is first run. In this case there are only 4 different possible queries (no params, age-only, name-only and both params), but you can see that there can be way more with a normal search form
    - no one likes to concatenate strings!

    Another option is to query a large subset of the data and then narrow it down in memory. This is particularly useful if you are working with a definite subset of the data, like all the dogs in a city. You know there are a lot, but you also know there aren't that many... so your CityDog search page can load all the dogs for the city in memory (a single pre-compiled query) and then refine the results:

        protected List<Dog> GetSomeDogs(string name, int age, string city)
        {
            string query = "select value dog from Entities.DogSet where dog.Owner.Address.City = @City ";
            ObjectQuery<Dog> oQuery = new ObjectQuery<Dog>(query, YourContext);
            oQuery.Parameters.Add(new ObjectParameter("City", city));

            List<Dog> dogs = oQuery.ToList();
            if (!String.IsNullOrEmpty(name))
                dogs = dogs.Where(it => it.Name == name).ToList();
            if (age > 0)
                dogs = dogs.Where(it => it.Age == age).ToList();
            return dogs;
        }

    It is particularly useful when you start by displaying all the data, then allow for filtering. Problems:
    - it could lead to serious data transfer if you are not careful about your subset
    - you can only filter on the data that you returned. It means that if you don't return the Dog.Owner association, you will not be able to filter on Dog.Owner.Name

    So what is the best solution? There isn't any. You need to pick the solution that works best for you and your problem:
    - use lambda-based query building when you don't care about pre-compiling your queries
    - use a fully-defined pre-compiled Linq query when your object structure is not too complex
    - use EntitySQL/string concatenation when the structure could be complex and the number of different possible resulting queries is small (which means fewer pre-compilation hits)
    - use in-memory filtering when you are working with a smallish subset of the data, or when you had to fetch all of the data at first anyway (if the performance is fine with all the data, then filtering in memory will not cause any time to be spent in the db)
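
    The IncludeMany(string) extension leaned on throughout the Includes discussion above is not shown in this excerpt; a minimal sketch of what it could look like, assuming EF 1.0's ObjectQuery<T>.Include (which returns a new ObjectQuery<T> per path):

        public static class ObjectQueryExtensions
        {
            // splits "Owner,FavoriteFood" into one Include call per path
            public static ObjectQuery<T> IncludeMany<T>(this ObjectQuery<T> query, string paths)
            {
                if (String.IsNullOrEmpty(paths))
                    return query;
                foreach (string path in paths.Split(','))
                    query = query.Include(path.Trim());
                return query;
            }
        }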
    Singleton access

    The best way to deal with your context and entities across all your pages is to use the singleton pattern:

        public sealed class YourContext
        {
            private const string instanceKey = "On3GoModelKey";

            YourContext() {}

            public static YourEntities Instance
            {
                get
                {
                    HttpContext context = HttpContext.Current;
                    if (context == null)
                        return Nested.instance;

                    if (context.Items[instanceKey] == null)
                    {
                        YourEntities entity = new YourEntities();
                        context.Items[instanceKey] = entity;
                    }
                    return (YourEntities)context.Items[instanceKey];
                }
            }

            class Nested
            {
                // Explicit static constructor to tell the C# compiler
                // not to mark the type as beforefieldinit
                static Nested() {}
                internal static readonly YourEntities instance = new YourEntities();
            }
        }

    NoTracking: is it worth it?

    When executing a query, you can tell the framework to track the objects it will return or not. What does it mean? With tracking enabled (the default option), the framework will track what is going on with the object (has it been modified? created? deleted?) and will also link objects together when further queries are made from the database, which is what is of interest here.

    For example, let's assume that Dog with ID == 2 has an owner with ID == 10:

        Dog dog = (from dog in YourContext.DogSet
                   where dog.ID == 2
                   select dog).FirstOrDefault();
        // dog.OwnerReference.IsLoaded == false;

        Person owner = (from o in YourContext.PersonSet
                        where o.ID == 10
                        select o).FirstOrDefault();
        // dog.OwnerReference.IsLoaded == true;

    If we were to do the same with no tracking, the result would be different:

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
            (from dog in YourContext.DogSet
             where dog.ID == 2
             select dog);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        Dog dog = oDogQuery.FirstOrDefault();
        // dog.OwnerReference.IsLoaded == false;

        ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)
            (from o in YourContext.PersonSet
             where o.ID == 10
             select o);
        oPersonQuery.MergeOption = MergeOption.NoTracking;
        Person owner = oPersonQuery.FirstOrDefault();
        // dog.OwnerReference.IsLoaded == false;

    Tracking is very useful, and in a perfect world without performance issues it would always be on. But in this world there is a price for it, in terms of performance. So, should you use NoTracking to speed things up? It depends on what you are planning to use the data for. Is there any chance that the data you query with NoTracking can be used to make updates/inserts/deletes in the database? If so, don't use NoTracking, because associations are not tracked and exceptions will be thrown. In a page where there are absolutely no updates to the database, you can use NoTracking.

    Mixing tracking and NoTracking is possible, but it requires you to be extra careful with updates/inserts/deletes. The problem is that if you mix them, you risk having the framework try to Attach() a NoTracking object to the context when another copy of the same object exists with tracking on. Basically, what I am saying is that

        Dog dog1 = (from dog in YourContext.DogSet
                    where dog.ID == 2
                    select dog).FirstOrDefault();

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)
            (from dog in YourContext.DogSet
             where dog.ID == 2
             select dog);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        Dog dog2 = oDogQuery.FirstOrDefault();

    dog1 and dog2 are two different objects, one tracked and one not. Using the detached object in an update/insert will force an Attach() that will say "Wait a minute, I already have an object here with the same database key. Fail." And when you Attach() one object, all of its hierarchy gets attached as well, causing problems everywhere. Be extra careful.

    How much faster is it with NoTracking?

    It depends on the queries. Some are much more susceptible to tracking than others. I don't have a fast and easy rule for it, but it helps.

    So should I use NoTracking everywhere then?

    Not exactly. There are some advantages to tracked objects. The first one is that the object is cached, so subsequent calls for that object will not hit the database. That cache is only valid for the lifetime of the YourEntities object, which, if you use the singleton code above, is the same as the page lifetime. One page request == one YourEntities object. So for multiple calls for the same object, it will load only once per page request. (Other caching mechanisms could extend that.) What happens when you are using NoTracking and try to load the same object multiple times? The database will be queried each time, so there is an impact there. How often do/should you call for the same object during a single page request? As little as possible, of course, but it does happen.

    Also remember the piece above about having the associations connected automatically for you? You don't have that with NoTracking, so if you load your data in multiple batches, you will not have a link between them:

        ObjectQuery<Dog> oDogQuery = (ObjectQuery<Dog>)(from dog in YourContext.DogSet select dog);
        oDogQuery.MergeOption = MergeOption.NoTracking;
        List<Dog> dogs = oDogQuery.ToList();

        ObjectQuery<Person> oPersonQuery = (ObjectQuery<Person>)(from o in YourContext.PersonSet select o);
        oPersonQuery.MergeOption = MergeOption.NoTracking;
        List<Person> owners = oPersonQuery.ToList();

    In this case, no dog will have its .Owner property set. Something to keep in mind when you are trying to optimize the performance.

    No lazy loading -- what am I to do?

    This can be seen as a blessing in disguise. Of course it is annoying to load everything manually. However, it decreases the number of calls to the db and forces you to think about when you should load data. The more you can load in one database call, the better. That was always true, but it is enforced now with this 'feature' of EF. Of course, you can call

        if (!ObjectReference.IsLoaded) ObjectReference.Load();

    if you want to, but a better practice is to force the framework to load the objects you know you will need in one shot. This is where the discussion about parametrized Includes begins to make sense. Let's say you have your Dog object:

        public class Dog
        {
            public Dog Get(int id)
            {
                return YourContext.DogSet.FirstOrDefault(it => it.ID == id);
            }
        }

    This is the type of function you work with all the time. It gets called from all over the place, and once you have that Dog object, you will do very different things to it in different functions. First, it should be pre-compiled, because you will call it very often. Second, each different page will want access to a different subset of the Dog data. Some will want the Owner, some the FavoriteToy, etc.

    Of course, you could call Load() for each reference you need, any time you need one. But that will generate a call to the database each time. Bad idea. So instead, each page will ask for the data it wants to see when it first requests the Dog object:

        static public Dog Get(int id) { return GetDog(entity, ""); }

        static public Dog Get(int id, string includePath)
        {
            string query = "select value o " +
                           " from YourEntities.DogSet as o " +

  • PHP PSR-0 + several namespaces in one file and autoload

    - by Nemoden
    I've been thinking for a while about defining several namespaces in one PHP file, and so having several classes inside this file. Suppose I want to implement something like Doctrine\ORM\Query\Expr:

        Expr.php
        Expr
        |-- Andx.php
        |-- Base.php
        |-- Comparison.php
        |-- Composite.php
        |-- From.php
        |-- Func.php
        |-- GroupBy.php
        |-- Join.php
        |-- Literal.php
        |-- Math.php
        |-- OrderBy.php
        |-- Orx.php
        `-- Select.php

    It would be nice if I had all of this in one file - Expr.php:

        namespace Doctrine\ORM\Query;
        class Expr {
            // code
        }

        namespace Doctrine\ORM\Query\Expr;
        class Func {
            // code
        }
        // etc...

    What I'm thinking of is a directory naming convention and, unlike PSR-0, having several classes and namespaces in one file. It's best explained by the code:

        ls Doctrine/orm/query
        Expr.php

    that's it - only Expr.php. Since Expr.php is what I call a "meta-namespace" for Expr\Func, it makes sense to place all the classes inside Expr.php (as shown above). So the vendor name still starts with an uppercased letter (Doctrine) and the other parts of the namespace start with a lowercased letter. We can write an autoloader that respects this notion:

        function load_class($class) {
            if (class_exists($class, false)) { // false: don't re-trigger the autoloader
                return true;
            }
            // normalise "_" and "\" to the directory separator, then split
            $tokenized_path = explode(DIRECTORY_SEPARATOR,
                str_replace(array('_', '\\'), DIRECTORY_SEPARATOR, $class));
            // array('Doctrine', 'orm', 'query', 'Expr', 'Func');
            //                                   ^^^^
            // first, we are looking for the first uppercased namespace part,
            // and if it's not last (not the class name), we use it as the filename
            // and wipe away the rest to compose a path to the file we need to include
            if (FALSE !== ($meta_class_index = find_meta_class($tokenized_path))) {
                $new_tokenized_path = array_slice($tokenized_path, 0, $meta_class_index + 1);
                $path_to_class = implode(DIRECTORY_SEPARATOR, $new_tokenized_path);
            } else {
                // no meta class found
                $path_to_class = implode(DIRECTORY_SEPARATOR, $tokenized_path);
            }
            if (file_exists($path_to_class.'.php')) {
                require_once $path_to_class.'.php';
                return true;
            }
            return false;
        }

    (find_meta_class() is left out here: per the comment above, it would return the index of the first uppercased, non-final path part.)

    Another reason to do so is to reduce the number of PHP files scattered among directories. Usually you check file existence before you require a file, to fail gracefully:

        file_exists($path_to_class.'.php');

    If you take a look at the actual Doctrine\ORM\Query\Expr code, you'll see they use all of the "inner classes", so you actually do:

        file_exists("/path/to/Doctrine/ORM/Query/Expr.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/AndX.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Base.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Comparison.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Composite.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/From.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Func.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/GroupBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Join.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Literal.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Math.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/OrderBy.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Orx.php");
        file_exists("/path/to/Doctrine/ORM/Query/Expr/Select.php");

    in your autoloader, which causes quite a few I/O reads. Isn't that too much to check on each user's hit? I'm just putting this up for discussion. I want to hear from other PHP programmers what they think of it. And, of course, if you have a silver bullet addressing the problems I've described here, please share.

    I have also been wondering whether my vague question fits here; according to the FAQ, it seems this question addresses a "software architecture" problem slash proposal. I'm sorry if my scribble may seem a bit clunky :) Thanks.
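
    For completeness, such a loader would be hooked into PHP's SPL autoload queue; a one-line sketch:

        spl_autoload_register('load_class');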

  • 12.10 update breaks NFS mount

    - by TarekEldeeb
    I've just upgraded to the latest 12.10 beta and rebooted twice. The problem is that my NFS folders no longer mount. Here's a verbose log:

        # mount -v myserver:/nfs_shared/tools /tools/
        mount: no type was given - I'll assume nfs because of the colon
        mount.nfs: timeout set for Mon Oct  1 11:42:28 2012
        mount.nfs: trying text-based options 'vers=4,addr=192.168.0.22,clientaddr=192.168.0.109'
        mount.nfs: mount(2): No such file or directory
        mount.nfs: trying text-based options 'addr=192.168.0.22'
        mount.nfs: prog 100003, trying vers=3, prot=6
        mount.nfs: portmap query retrying: RPC: Timed out
        mount.nfs: prog 100003, trying vers=3, prot=17
        mount.nfs: portmap query failed: RPC: Timed out
        [the same seven-line retry cycle repeats several more times]
        mount.nfs: Connection timed out

    My IP is .109 and the server is .22. Just before this last upgrade I was able to mount normally, and other PCs on my network are still able to mount, so it's not a server issue. How can I analyse the problem? Are there particular log files to check? Any help?
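
    The repeated portmap timeouts say the client never gets an answer from the server's rpcbind/portmapper, so a reasonable first check is whether RPC services are reachable at all from the upgraded machine (both are standard NFS client tools; the address is the server above):

        rpcinfo -p 192.168.0.22
        showmount -e 192.168.0.22

    If those hang too, the suspect is the 12.10 network or firewall configuration rather than NFS itself.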

  • Installing a php extension (xdiff) from pecl on Linux

    - by wilsonsilva
    I was getting this error in my PHP script:

        Fatal error: Call to undefined function xdiff_file_diff()

    I realised that I didn't have the xdiff extension installed. When I tried to install it using the pecl install xdiff command, I got these errors (among other lines):

        configure: WARNING: You will need re2c 0.13.4 or later if you want to regenerate PHP
        configure: error: Please reinstall the libxdiff distribution

    Then I installed re2c and libxdiff:

        wget http://www.compdigitec.com/labs/files/re2c_0135_redhat.rpm
        wget ftp://ftp.task.gda.pl/vol/vol1/ftp.pld-linux.org/dists/2.0/PLD/i386/PLD/RPMS/libxdiff-0.7-1.i386.rpm
        rpm -ivh re2c_0135_redhat.rpm
        rpm -ivh libxdiff-0.7-1.i386.rpm

    But after that I still get the same errors.

    PS: I've Googled a LOT and I found a couple of people with this problem, but they didn't get an answer either :(
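
    A hedged observation: "Please reinstall the libxdiff distribution" from a PECL configure run usually means the build can't find the libxdiff headers, and a runtime-only .i386 RPM ships no headers. Installing a matching -devel package, or building libxdiff from source, and then rerunning the install is the typical path (package locations are an assumption here):

        # from source, so the headers land under /usr/local
        ./configure && make && make install
        pecl install xdiff

    Once it builds, the extension still has to be enabled and the web server restarted:

        echo "extension=xdiff.so" >> /etc/php.ini   # php.ini location varies by distro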

  • tar a directory and only include certain file types

    - by Susan
    Following the instructions here: http://linuxdevcenter.com/pub/a/linux/lpt/20_08.html I'm aiming to tar up a directory but only want to include .php files from that directory. Given the aforementioned instructions, I've come up with this command. It creates a file called IncludeTheseFiles which lists all the .php files, then the tar is supposed to do its job using only the files listed in IncludeTheseFiles:

        find myProjectDirectory -type f -print | \
        egrep '(\.[php]|[Mm]akefile)$' > IncludeTheseFiles
        tar cvf myProjectTarName -I IncludeTheseFiles

    However, when I run this it doesn't like the -I include option:

        tar: invalid option -- I
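
    Two hedged notes on the command as posted: GNU tar (the one on most Linux systems) spells "read the file list from a file" as -T/--files-from, not -I (the linked article appears to target a different tar), and [php] in the egrep pattern is a character class matching a single p or h, not the literal extension. A sketch with both fixed:

        find myProjectDirectory -type f -name '*.php' > IncludeTheseFiles
        tar cvf myProjectTar.tar -T IncludeTheseFiles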

  • Ubuntu mount hard drive confusing

    - by Fresheyeball
    I'm new to Linux and have a home server set up running Ubuntu. In the UI it's very easy to mount my additional internal hard drives: I just double-click on them. Since I have made this server headless, I now need to mount via the command line. How can I replicate the very simple double-click GUI behaviour? So far all the information I've found is very complex. Ubuntu auto-generated folders for each HD under /media, and I can see the hard drives under /dev, but I have no idea which is which, as the hardware is identical between them. I also don't know how they are formatted. Thanks in advance for any advice you have.
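
    A minimal sketch of the command-line equivalent (the device names here are hypothetical; blkid reports the real ones along with each partition's filesystem type, which answers the "which is which / how are they formatted" question):

        sudo blkid                          # list partitions with filesystem type, label and UUID
        sudo mount /dev/sdb1 /media/disk1   # mount one of them on an existing folder
        sudo umount /media/disk1            # undo it

    The desktop double-click is essentially doing the same mount with an auto-detected filesystem type.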

  • How to allow wget to overwrite files

    - by Gnanam
    Using the wget command, how do I allow/instruct it to overwrite my local file every time, irrespective of how many times I invoke it? Let's say I want to download a file from the location http://server/folder/file1.html. Whenever I run wget http://server/folder/file1.html, I want this file1.html to be overwritten in my local system, irrespective of whether it has changed, has already been downloaded, etc. My intention/use case here is that when I call wget, I'm very sure that I want to replace/overwrite the existing file. I've tried out the following options, but each option is intended/meant for some other purpose:

        -nc = --no-clobber
        -N  = turn on time-stamping
        -r  = turn on recursive retrieving
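
    For reference, wget's -O option writes (and truncates) the named local file on every invocation, which gives exactly the overwrite-always behaviour:

        wget -O file1.html http://server/folder/file1.html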

  • Copy a file to a new directory path in DOS

    - by nodmonkey
    How can I copy a file using DOS commands into a directory structure that may not yet exist? I need to be able to force the creation of the directory path to the target file location if that location doesn't already exist. For example, there is already a file.txt in this location:

        C:\file.txt

    and I want to copy it to

        C:\example\new\path\to\copy\of\file\file.txt

    but at this time C:\example\ and all the subdirectories may or may not yet exist. Basically, I am looking for a "copy and create the target path if necessary" command. What would you recommend as the best way to achieve this?
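
    One sketch that stays within stock cmd tools: xcopy creates missing intermediate directories itself and only needs to be told the target is a file, which the piped F answers:

        echo F | xcopy C:\file.txt C:\example\new\path\to\copy\of\file\file.txt

    The more verbose but explicit alternative is a mkdir (which also creates the whole intermediate path) followed by a plain copy:

        mkdir C:\example\new\path\to\copy\of\file
        copy C:\file.txt C:\example\new\path\to\copy\of\file\file.txt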

  • When using grep from VIM, how to jump to results?

    - by Marplesoft
    When using the grep plugin to Vim, I can search the current directory for all occurrences of a string within a set of files, like this:

        :grep Ryan *.txt

    This outputs something like this:

        file1.txt:3:Ryan was here
        file2.txt:10:Ryan likes VIM
        file3.txt:5:superuser.com is a fav of Ryan
        (1 of 3): Ryan was here
        Press ENTER or type command to continue

    If I press Enter, it just takes me back to my editor. What I really want to do is be able to open up one of those files and jump to the place where the string was found. Is there a way to do this? The "1 of 3" part makes me think there's a way to tab through the results, but I don't know what commands are available to me. Can anybody shed some light on this?
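
    For reference, :grep loads its matches into Vim's quickfix list, which is navigable with the built-in quickfix commands:

        :copen    " open the quickfix window; <Enter> on a line jumps to that match
        :cnext    " jump to the next match
        :cprev    " jump to the previous match
        :cc 2     " jump to match number 2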

  • Different color prompts for different machines when using terminal/ssh?

    - by bcrawl
    I have 5 machines I constantly SSH into to do work. It's getting increasingly frustrating when I issue wrong commands on wrong boxes. Luckily I haven't done anything bad yet. I wanted to know if there is any hack I can hardcode which will display my prompt in different colors based on the machine I am SSHed into? Such as blue for desktop1, purple for laptop, red for server, etc. Is this possible? Currently I am using this command, taken from http://www.cyberciti.biz/faq/bash-shell-change-the-color-of-my-shell-prompt-under-linux-or-unix/:

        export PS1="\e[0;31m[\u@\h \W]\$ \e[m "

    but it obviously doesn't work across SSH. Also, if you have any other cool bash tips for helping me ease my sight, that would be wonderful. I got this tip, which colors the man pages: http://linuxtidbits.wordpress.com/2009/03/23/less-colors-for-man-pages/
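
    Since each machine runs its own shell, the usual approach is to put one case statement in a ~/.bashrc shared across the boxes and branch on the hostname; a sketch (the hostnames and colors are placeholders):

        case "$(hostname -s)" in
          desktop1) color='0;34' ;;   # blue
          laptop)   color='0;35' ;;   # purple
          server)   color='0;31' ;;   # red
          *)        color='0'    ;;   # default
        esac
        PS1="\[\e[${color}m\][\u@\h \W]\$ \[\e[m\] "

    Note the \[ and \] around the escape sequences; without them (as in the PS1 above), bash miscounts the prompt width and long command lines wrap oddly.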

  • How to provide multiple input to ffmpeg?

    - by tomm89
    I'm using ffmpeg to create time-lapses and it's working great. As input I have images named 001.jpg, 002.jpg, etc., and then with the command

        ffmpeg -i %3d.jpg -sameq -s 1440x1080 video.mp4

    I create the video. But my problem comes when I want to create a video using multiple sets as input. For example, I'm in a directory containing two folders, set1 and set2, each with photos in the format explained previously. I tried

        ffmpeg -i ./set1/%3d.jpg -i ./set2/%3d.jpg -sameq -s 1440x1080 video.mp4

    but it ends up making a video using only the first set. Is there an easy way to do this? Thanks!
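
    One workaround that sidesteps the frame numbering entirely is to stream both sets of JPEGs over stdin with the image2pipe demuxer (a sketch; it assumes the shell globs sort the frames in the intended order):

        cat set1/*.jpg set2/*.jpg | \
        ffmpeg -f image2pipe -vcodec mjpeg -i - -sameq -s 1440x1080 video.mp4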

  • Search files for text matching format of a Unix directory

    - by BrandonKowalski
    I am attempting to search through all the files in a directory for text matching the pattern of any arbitrary directory. The output of this I hope to use to make a list of all directories referenced in the files (this part I think I can figure out on my own). I have looked at various regex resources and made my own expression that seems to work in a browser-based tool, but not with grep on the command line:

        /\w+[(/\w+)]+

    My understanding so far is that the above expression will look for the beginning / of a directory, then look for an indeterminate number of characters, before looking for a repeating block of the same thing. Any guidance would be greatly appreciated.
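
    One hedged pointer: inside [...] every character (including ( and /) is a literal class member, so the bracketed part of the pattern above never repeats as a group. Grouping needs parentheses plus grep's -E mode; a sketch that prints each path-like match once:

        grep -rEoh '(/[[:alnum:]._-]+)+' targetdir/ | sort -u

    (-r recurses, -E enables extended regexes, -o prints only the matched text, -h drops the filename prefix.)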

  • Windows 7 Recovery Console File Access Denied

    - by Ty Rozak
    Recently my computer crashed and was stuck in a boot loop, so I created a Windows Recovery CD and booted from that. When I use the command prompt in the recovery console, I cannot see any of my personal files or folders (such as my Users folder with My Documents). Is there a way to access these files? The only reason I need to fix the computer properly is to get these files off of it and onto a hard drive. Any other fix suggestions would be greatly appreciated. I have tried both system repair and system restore from the Recovery Console, but neither seems to work. Thanks.
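
    A common gotcha (an assumption here, not confirmed by the post): the recovery environment often assigns different drive letters than the installed system, so C: inside WinRE may not be the Windows volume at all. A quick sketch for locating the files with built-in tools:

        rem list every volume and its letter (run "list volume" inside diskpart, then "exit")
        diskpart

        rem probe the candidate letters until the Users folder shows up
        dir C:\Users
        dir D:\Users

        rem copy out to an attached USB drive (its letter also comes from diskpart)
        xcopy D:\Users\YourName E:\Backup /E /H /C

    YourName and the drive letters are placeholders, and xcopy availability inside a given recovery image can vary.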

  • trouble backing up large mysql database

    - by Patrick
    I have a WordPress MU database with something like 10,000+ tables for the various users' blogs. I need to upgrade WordPress MU to the newest version, but want to back up the DB beforehand. phpMyAdmin fails to even load the page when I click export. I've tried going onto the server (Windows) and using the DOS command line:

        mysqldump -u USERNAME -p PASSWORD > BACKUP.sql

    but it hangs for a minute and gives me the error:

        error 23: out of resources when opening file '.\USERNAME\wp_1037_links.MYD' (Errcode: 24) when using LOCK TABLES

    What am I doing wrong, or what should I be doing? Is phpMyAdmin right for something this size? Is there a better way of doing this than the two methods I tried? (Note that this is not my site, so any suggestions as to the setup of the DB I'll have to run by the owner. I'm just here for WP-related crap; this is kind of out of scope for what I was brought on to do.)
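
    Two hedged observations on the command as posted: with a space after -p, mysqldump treats PASSWORD as the database name (the password must either be attached as -pPASSWORD or prompted for), and errcode 24 is the OS's "too many open files" limit being hit while LOCK TABLES touches thousands of tables at once. A sketch that avoids the mass lock:

        mysqldump -u USERNAME -p --single-transaction --skip-lock-tables DBNAME > BACKUP.sql

    (DBNAME is a placeholder for the actual WordPress MU database; raising the server's open_files_limit is the alternative fix.)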

  • Prevent zsh from trying to expand everything

    - by Attila O.
    Having recently switched from bash, I noticed that zsh will try to expand every command or argument that looks like it has wildcards in it. So the following lines no longer work:

        git diff master{,^^}
        zsh: no matches found: master^^

        scp remote:~/*.txt .
        zsh: no matches found: remote:~/*.txt

    The only way to make the above commands work is to quote the arguments, which is quite annoying. Q: How do I configure zsh to still try to expand wildcards, but if there are no matches, just pass on the argument as-is? EDIT: Possibly related: scp with zsh: no matches found
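
    For reference, this behaviour is governed by zsh's nomatch option; turning it off makes zsh hand unmatched patterns through to the command untouched, as bash does by default:

        # in ~/.zshrc
        setopt no_nomatch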

  • Running Webapp on Mac in UTC (either changing MacBook timezone or tomcat timezone)

    - by Andy A
    To run my web app, I need to set the timezone to UTC on my MacBook. I can do this temporarily by opening a terminal and entering:

        sudo ln -sf /usr/share/zoneinfo/UTC /etc/localtime

    However, my timezone returns to normal when I restart my machine! Any advice?

    Edit: The response to this question by 'Celada' implies that I can just make my server UTC. I am using Apache Tomcat 7. Adding to Celada's response, how can I make it UTC?

    Update - 3rd April: Following Celada's response, I have tried adding SetEnv TZ UTC at the top of startup.sh. This didn't seem to make a difference. After some research, I tried adding export JAVA_OPTS="-Duser.timezone=UTC" to startup.sh, but this too had no effect. Am I adding the correct command to the correct file?
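
    A few hedged notes: SetEnv is Apache httpd configuration syntax, so it is inert inside a shell script, and Tomcat's documented hook for JVM options is bin/setenv.sh, which catalina.sh sources on every start (startup.sh itself just delegates to catalina.sh). A minimal sketch:

        # bin/setenv.sh -- create it next to catalina.sh if it doesn't exist
        CATALINA_OPTS="$CATALINA_OPTS -Duser.timezone=UTC"

    After a restart, the JVM (and thus the web app) runs in UTC regardless of the Mac's own timezone setting.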

  • NET VIEW occasionally does not show all computers

    - by Neil
    I am having an issue on my workstation where periodically the NET VIEW command does not seem to work. Most times, NET VIEW will return a list of all the machines connected to our domain. Occasionally it will list only 4 or 5 (instead of the usual 200+). The only way I've found to remedy it is by rebooting my machine. I am running Windows XP Pro; the Domain Controller is running Server 2003. What could be causing this, and how can I remedy it? Thanks!
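
    Context that may explain the symptom (general Windows networking behaviour, not specific to this machine): NET VIEW reads the browse list maintained by the Computer Browser service, an eventually-consistent list elected among peers rather than an authoritative query of the domain controller, so a stale or re-electing master browser can briefly shrink the output. Two hedged checks before resorting to a reboot:

        net view /domain:YOURDOMAIN
        net start "Computer Browser"

    (YOURDOMAIN is a placeholder for the actual domain name.)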

  • Configuring Notepad++ to run knit2pdf

    - by Stat-R
    Sorry for such a basic question, but since I do not have a degree in programming... I am a user of R, knitr and Notepad++. I was trying to configure Notepad++ to run pdflatex and knitr (knit2pdf). By googling I found how to do it for pdflatex, but I did not find anything for Sweave/knitr. The following are good resources for pdflatex:

        http://www2.sofi.su.se/~mbe/docs/npp_r_latex.pdf
        http://www.tlhiv.org/ma497/software/

    I inserted

        cmd /c cd /d "$(CURRENT_DIRECTORY)" && pdflatex.exe -shell-escape "$(FILE_NAME)"

    into the Run menu in Notepad++. I do not fully understand the command, though. I will appreciate any help on this. Please direct me, if you can, to any resource for learning these commands. I will be grateful for any help in configuring Notepad++ for running knitr.
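
    Reading the existing line: cmd /c runs one command line and exits, cd /d changes directory (switching drive if needed), and && runs pdflatex only if the cd succeeded. The same shape works for knitr through Rscript (a sketch; it assumes Rscript is on the PATH and the open file is an .Rnw document):

        cmd /c cd /d "$(CURRENT_DIRECTORY)" && Rscript -e "library(knitr); knit2pdf('$(FILE_NAME)')"

    knit2pdf() knits the file and then runs the LaTeX toolchain on the result, so this one Run entry replaces the separate knit and pdflatex steps.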

  • When your field-terminating char appears within field values

    - by Jonathan Sampson
    I've had a very colorful morning learning the inner workings of Linux's sort command, and have come across yet another issue that I can't seem to find an answer for in the documentation. I'm currently using -t, to indicate that my fields are split by the comma character, but I'm finding that in some of my files the comma is used (between double quotes) within values:

        Jonathan Sampson,,[email protected],0987654321
        "Foobar CEO,","CEO,",[email protected],,

    How can I use a comma to terminate my fields, but ignore the occurrences of it within values? Is this fairly simple, or do I need to re-export all of my data using a more exotic field terminator?
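
    sort itself has no notion of CSV quoting, so the usual outs are a quote-aware tool or a different delimiter. A small sketch using Python's standard csv module, which does honor quoted commas (the data.csv name and first-column key are assumptions):

        import csv, sys

        with open("data.csv", newline="") as f:
            rows = list(csv.reader(f))          # quoted commas stay inside their field

        rows.sort(key=lambda r: r[0])           # sort on the first column

        csv.writer(sys.stdout).writerows(rows)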

  • Text Terminal Hardware (for Linux)

    - by DLH
    Is there any way to obtain a hardware text terminal (preferably small in size) with a screen and a keypad to connect to a Linux machine (preferably via USB)? I'd like to be able to log into a command line and do some work there while simultaneously running a graphical environment on the main display. It seems like there should be some kind of LCD-screen-and-QWERTY-keypad device designed for this purpose. Does this exist, and how do I get one?

    Edit: I'd be happy with a small networked device as well, as long as I could get a remote terminal into my computer.
