Search Results

Search found 13469 results on 539 pages for 'avoid trouble'.


  • Grails - Recursive computation throws "StackOverflowError" and doesn't update on show

    - by Kedare
    Hello, I have a small problem. I have made a Category domain in Grails, but I want to "cache" the string representation in the database by generating it whenever the representation field is empty or NULL (to avoid recursive queries at each toString). Here is the code:

        package devkedare

        class Category {
            String title;
            String description;
            String representation;
            Date dateCreated;
            Date lastUpdate;
            Category parent;

            static constraints = {
                title(blank:false, nullable:false, size:2..100);
                description(blank:true, nullable:false);
                dateCreated(nullable:true);
                lastUpdate(nullable:true);
                autoTimestamp: true;
            }

            static hasMany = [childs:Category]

            @Override
            String toString() {
                if((!this.representation) || (this.representation == "")) {
                    this.computeRepresentation(true)
                }
                return this.representation;
            }

            String computeRepresentation(boolean updateChilds) {
                if(updateChilds) {
                    for(child in this.childs) {
                        child.representation = child.computeRepresentation(true);
                    }
                }
                if(this.parent) {
                    this.representation = "${this.parent.computeRepresentation(false)}/${this.title}";
                } else {
                    this.representation = this.title
                }
                return this.representation;
            }

            void beforeUpdate() {
                this.computeRepresentation(true);
            }
        }

    There are two problems:

    1. It doesn't update the representation when the representation is NULL or empty and toString is called; it only updates it on save. How do I fix that?
    2. On some categories, updating throws a StackOverflowError. I uploaded an example of my current category table as CSV here, since pasting CSV looks buggy: http://kedare.free.fr/Dist/01.csv - updating the representation works everywhere except on "IT", which throws a StackOverflowError (this is the only category that has many childs).

    Any idea how I can fix those two problems? Thank you!
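
    The StackOverflowError on "IT" is consistent with a cycle somewhere in the parent/child links (a category that is, directly or indirectly, its own ancestor), which makes the mutual recursion between the parent walk and the child walk never terminate. That is only a guess from the symptoms; as an illustration of one defensive fix, here is a minimal Java sketch (Node is a hypothetical stand-in for the Category domain class, not Grails code) that guards both recursion directions with visited sets:

        import java.util.ArrayList;
        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        class Node {
            String title;
            Node parent;
            List<Node> children = new ArrayList<Node>();
            String representation;

            Node(String title) { this.title = title; }

            // Walk up the parent chain, bailing out if a node repeats (a cycle).
            String buildPath(Set<Node> onPath) {
                if (!onPath.add(this)) {
                    return "[cycle:" + title + "]"; // refuse to recurse forever
                }
                return (parent == null) ? title : parent.buildPath(onPath) + "/" + title;
            }

            // Walk down the children, refreshing each node's cached representation
            // exactly once, so cyclic child links cannot cause infinite descent.
            void refreshAll(Set<Node> seen) {
                if (!seen.add(this)) {
                    return;
                }
                representation = buildPath(new HashSet<Node>());
                for (Node child : children) {
                    child.refreshAll(seen);
                }
            }

            public static void main(String[] args) {
                Node it = new Node("IT");
                Node web = new Node("Web");
                web.parent = it;
                it.children.add(web);
                it.refreshAll(new HashSet<Node>());
                System.out.println(web.representation); // prints IT/Web
            }
        }

    The same visited-set idea carries over to the Groovy computeRepresentation; it also makes a good place to log which category closes the cycle, so the bad row can be fixed in the data.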

  • How to log correct context with Threadpool threads using log4net?

    - by myotherme
    I am trying to find a way to log useful context from a bunch of threads. The problem is that a lot of code is dealt with on events that arrive via threadpool threads (as far as I can tell), so their names bear no relation to any context. The problem can be demonstrated with the following code:

        class Program
        {
            private static readonly log4net.ILog log = log4net.LogManager.GetLogger(
                System.Reflection.MethodBase.GetCurrentMethod().DeclaringType);

            static void Main(string[] args)
            {
                new Thread(TestThis).Start("ThreadA");
                new Thread(TestThis).Start("ThreadB");
                Console.ReadLine();
            }

            private static void TestThis(object name)
            {
                var nameStr = (string)name;
                Thread.CurrentThread.Name = nameStr;
                log4net.ThreadContext.Properties["ThreadContext"] = nameStr;
                log4net.LogicalThreadContext.Properties["LogicalThreadContext"] = nameStr;
                log.Debug("From Thread itself");
                ThreadPool.QueueUserWorkItem(x => log.Debug("From threadpool Thread: " + nameStr));
            }
        }

    The conversion pattern is:

        %date [%thread] %-5level %logger [%property] - %message%newline

    The output is like so:

        2010-05-21 15:08:02,357 [ThreadA] DEBUG LogicalContextTest.Program [{LogicalThreadContext=ThreadA, log4net:HostName=xxx, ThreadContext=ThreadA}] - From Thread itself
        2010-05-21 15:08:02,357 [ThreadB] DEBUG LogicalContextTest.Program [{LogicalThreadContext=ThreadB, log4net:HostName=xxx, ThreadContext=ThreadB}] - From Thread itself
        2010-05-21 15:08:02,404 [7] DEBUG LogicalContextTest.Program [{log4net:HostName=xxx}] - From threadpool Thread: ThreadA
        2010-05-21 15:08:02,420 [16] DEBUG LogicalContextTest.Program [{log4net:HostName=xxx}] - From threadpool Thread: ThreadB

    As you can see, the last two rows have no names or other useful information to distinguish the two threads, other than manually adding the name to the message (which I want to avoid). How can I get the name/context into the log for the threadpool threads without adding it to the message at every call?
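
    Setting the log4net specifics aside, the general pattern that addresses this is to capture the context on the submitting thread and re-establish it inside the queued work item, so whichever pool thread runs the item logs with the right context. Here is a hedged, language-neutral sketch of that idea in Java, with a plain ThreadLocal standing in for log4net's per-thread property bag (all names here are illustrative, not a real logging API):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        public class ContextCapture {
            // Stand-in for a logging framework's per-thread context properties.
            static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

            // Wrap a task so the submitter's context travels with it
            // and is installed on whichever pool thread runs it.
            static Runnable withContext(Runnable task) {
                String captured = CONTEXT.get();        // read on the submitting thread
                return () -> {
                    String previous = CONTEXT.get();
                    CONTEXT.set(captured);              // install on the pool thread
                    try {
                        task.run();
                    } finally {
                        CONTEXT.set(previous);          // don't leak into the next task
                    }
                };
            }

            public static void main(String[] args) {
                ExecutorService pool = Executors.newFixedThreadPool(2);
                CONTEXT.set("ThreadA");
                pool.execute(withContext(() ->
                        System.out.println(CONTEXT.get() + ": from pool thread")));
                pool.shutdown();
            }
        }

    In the C#/log4net world the equivalent move would be to do the capture in a small wrapper around ThreadPool.QueueUserWorkItem and re-set the context properties inside the callback; the sketch above only shows the shape of that wrapper.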

  • php scandir --> search for files/directories

    - by Peter
    Hi! I searched before asking, without luck. I am looking for a simple script for myself with which I can search for files/folders. I found this code snippet in the PHP manual (I think I need this), but it does not work for me. "Was looking for a simple way to search for a file/directory using a mask. Here is such a function. By default, this function will keep the scandir() result in memory, to avoid scanning the same directory multiple times."

        <?php
        function sdir( $path='.', $mask='*', $nocache=0 ){
            static $dir = array(); // cache result in memory
            if ( !isset($dir[$path]) || $nocache) {
                $dir[$path] = scandir($path);
            }
            foreach ($dir[$path] as $i=>$entry) {
                if ($entry!='.' && $entry!='..' && fnmatch($mask, $entry) ) {
                    $sdir[] = $entry;
                }
            }
            return ($sdir);
        }
        ?>

    Thank you for any help, Peter

  • How to deal with multiple sub-types of one super-type in Django admin

    - by Henri
    What would be the best solution for adding/editing multiple sub-types? E.g. a super-type class Contact with a sub-type class Client and a sub-type class Supplier. The way shown here works, but when you edit a Contact you get both inlines, i.e. sub-type Client AND sub-type Supplier. So even if you only want to add a Client, you also get the fields for Supplier, or vice versa. If you add a third sub-type, you get three sub-type field groups, while you actually only want one sub-type group, in the mentioned example: Client. E.g.:

        class Contact(models.Model):
            contact_name = models.CharField(max_length=128)

        class Client(models.Model):
            contact = models.OneToOneField(Contact, primary_key=True)
            user_name = models.CharField(max_length=128)

        class Supplier(models.Model):
            contact = models.OneToOneField(Contact, primary_key=True)
            company_name = models.CharField(max_length=128)

    and in admin.py:

        class ClientInline(admin.StackedInline):
            model = Client

        class SupplierInline(admin.StackedInline):
            model = Supplier

        class ContactAdmin(admin.ModelAdmin):
            inlines = (ClientInline, SupplierInline,)

        class ClientAdmin(admin.ModelAdmin):
            ...

        class SupplierAdmin(admin.ModelAdmin):
            ...

    Now when I want to add a Client, i.e. only a Client, I edit Contact and I get the inlines for both Client and Supplier. And of course the same for Supplier. Is there a way to avoid this, so that when I add/edit a Client I only see the inline for Client, and when I add/edit a Supplier I only see the inline for Supplier, when adding/editing a Contact? Or perhaps there is a different approach. Any help or suggestion will be greatly appreciated.

  • ASP.NET MVC 2: How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed. Some background: The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes. Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table valued function" where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated. Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 Linq Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option. (from p in db.Products group p by p.ProductSize into g select new Category { PropertyType = g.Key, Count = g.Count() }).Distinct(); Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts.

  • Does a JAX-WS client distinguish between an empty collection and a null collection as a return value?

    - by snowflake
    Since JAX-WS relies on JAXB, and since I observed the code that unpacks the XML bean in the JAXB Reference Implementation, I guess the difference is not made, and that a JAX-WS client always returns an empty collection, even when the web service result was a null element:

        public T startPacking(BeanT bean, Accessor<BeanT, T> acc) throws AccessorException {
            T collection = acc.get(bean);
            if (collection == null) {
                collection = ClassFactory.create(implClass);
                if (!acc.isAdapted())
                    acc.set(bean, collection);
            }
            collection.clear();
            return collection;
        }

    I agree that for best interoperability, service contracts should be unambiguous and avoid such differences, but it seems that the JAX-WS service I'm invoking (hosted on a JBoss server with the JBossWS implementation) is returning as expected either null or an empty collection (tested with SoapUI). For my test I used code generated by wsimport. The return element is defined as:

        @XmlElement(name = "return", nillable = true)
        protected List<String> _return;

    I even tested changing the Response class getReturn method from:

        public List<String> getReturn() {
            if (_return == null) {
                _return = new ArrayList<String>();
            }
            return this._return;
        }

    to:

        public List<String> getReturn() {
            return this._return;
        }

    but without success. Any helpful information/comment regarding this problem is welcome!

  • Android: Playing a bigger-size WAV audio file produces a crash

    - by user187532
    Hi Android experts, I am trying to play a bigger audio WAV file (about 20 MB) using the following code (AudioTrack) on my Android 1.6 HTC device, which has little memory. The device crashes as soon as it executes the read, write and play. The same code works fine and plays smaller audio WAV files (10 KB, 20 KB files etc.) very well.

    P.S.: I have to play a PCM (.wav) buffer sound, which is the reason I use AudioTrack here. Since my device has little memory, how would I read the bigger audio file byte by byte and play the sound, to avoid crashing due to memory constraints?

        private void AudioTrackPlayPCM() throws IOException {
            String filePath = "/sdcard/myWav.wav"; // 8 kb file
            byte[] byteData = null;
            File file = null;
            file = new File(filePath);
            byteData = new byte[(int) file.length()];
            FileInputStream in = null;
            try {
                in = new FileInputStream( file );
                in.read( byteData );
                in.close();
            } catch (FileNotFoundException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }

            int intSize = android.media.AudioTrack.getMinBufferSize(8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_8BIT);

            AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_8BIT,
                    intSize, AudioTrack.MODE_STREAM);

            at.play();
            at.write(byteData, 0, byteData.length);
            at.stop();
            at.release();
        }

    Could someone please guide me on how to adapt the AudioTrack code for bigger WAV files?
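
    Since the track is already created in MODE_STREAM, one way to avoid loading the whole file is to read and write it in small chunks, so only one buffer's worth of audio is ever in memory. Below is a hedged sketch of that loop in the same shape as the question's method, reusing its parameters; the buffer size choice and the 44-byte skip (which assumes a canonical WAV header) are assumptions for illustration:

        private void playWavStreaming(String filePath) throws IOException {
            int bufferSize = android.media.AudioTrack.getMinBufferSize(8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_8BIT);

            AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_8BIT,
                    bufferSize, AudioTrack.MODE_STREAM);

            FileInputStream in = new FileInputStream(filePath);
            try {
                in.skip(44);                  // assumption: skip a canonical WAV header
                byte[] chunk = new byte[bufferSize];
                int read;
                at.play();
                while ((read = in.read(chunk)) > 0) {
                    at.write(chunk, 0, read); // blocks until the track drains enough
                }
            } finally {
                in.close();
                at.stop();
                at.release();
            }
        }

    Because AudioTrack.write blocks in stream mode, the loop naturally paces itself to playback instead of allocating a 20 MB array up front.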

  • "do it all" page structure and things to watch out for?

    - by Andrew Heath
    I'm still getting my feet wet in PHP (my 1st language) and I've reached the competency level where I can code one page that handles all sorts of different related requests. They generally have a structure like this (pseudo-code):

        <?php
        include 'include/functions.php';

        IF authorized
            IF submit            (add data)
            ELSE IF update       (update data)
            ELSE IF list         (show special data)
            ELSE IF tab switch   (show new area)
            ELSE display vanilla (show default)
        ELSE
            "must be registered/logged-in"
        ?>
        <HTML>
        // snip
        <?php echo $output; ?>
        // snip
        </HTML>

    and it all works nicely, and quite quickly, which is cool. But I'm still sort of feeling my way in the dark... and would like some input from the pros regarding this type of page design:

    - is it a good long-term structure? (it seems easily expanded...)
    - are there security risks particular to this design?
    - are there corners I should avoid painting myself into?

    Just curious about what lies ahead, really...

  • Avoiding shutdown hook

    - by meryl
    Through the following code I can play and cut an audio file. Is there any way to avoid using a shutdown hook? The problem is that whenever I push the cut button, the file doesn't get saved until I close the application. Thanks.

        void play_cut() {
            try {
                // First, we get the format of the input file
                final AudioFileFormat.Type fileType = AudioSystem.getAudioFileFormat(inputAudio).getType();
                // Then, we get a clip for playing the audio.
                c = AudioSystem.getClip();
                // We get a stream for playing the input file.
                AudioInputStream ais = AudioSystem.getAudioInputStream(inputAudio);
                // We use the clip to open (but not start) the input stream
                c.open(ais);
                // We get the format of the audio codec (not the file format we got above)
                final AudioFormat audioFormat = ais.getFormat();
                // We add a shutdown hook, an anonymous inner class.
                Runtime.getRuntime().addShutdownHook(new Thread() {
                    public void run() {
                        // We're now in the hook, which means the program is shutting down.
                        // You would need to use better exception handling in a production application.
                        try {
                            // Stop the audio clip.
                            c.stop();
                            // Create a new input stream, with the duration set to the frame count
                            // we reached. Note that we use the previously determined audio format.
                            AudioInputStream startStream = new AudioInputStream(
                                    new FileInputStream(inputAudio), audioFormat, c.getLongFramePosition());
                            // Write it out to the output file, using the same file type.
                            AudioSystem.write(startStream, fileType, outputAudio);
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                });
                // After setting up the hook, we start the clip.
                c.start();
            } catch (UnsupportedAudioFileException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } catch (LineUnavailableException e) {
                e.printStackTrace();
            }
        } // end play_cut
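
    Nothing in the write itself depends on the JVM shutting down; it only needs the clip's current frame position. So the same trimming code can run directly in the cut button's handler, saving the file the moment the user cuts. A hedged sketch, where inputAudio, outputAudio and the clip c are assumed to be the same fields as in the question, and onCutPressed is a hypothetical handler name:

        // Minimal sketch: do the trimming write immediately when "cut" is pressed,
        // rather than deferring it to JVM shutdown.
        void onCutPressed() {
            try {
                AudioFileFormat.Type fileType = AudioSystem.getAudioFileFormat(inputAudio).getType();
                AudioFormat audioFormat = AudioSystem.getAudioInputStream(inputAudio).getFormat();

                c.stop(); // stop playback at the cut point

                // Re-read the source, limited to the frames played so far, and save it now.
                AudioInputStream trimmed = new AudioInputStream(
                        new FileInputStream(inputAudio), audioFormat, c.getLongFramePosition());
                AudioSystem.write(trimmed, fileType, outputAudio);
            } catch (UnsupportedAudioFileException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    A shutdown hook would then only be needed as a safety net for the case where the user exits without ever cutting, if that case matters at all.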

  • Get directory path by fd

    - by tylerl
    I've run into the need to be able to refer to a directory by path given its file descriptor in Linux. The path doesn't have to be canonical, it just has to be functional so that I can pass it to other functions. So, taking the same parameters as passed to a function like fstatat(), I need to be able to call a function like getxattr() which doesn't have an f-XYZ-at() variant.

    So far I've come up with these solutions, though none are particularly elegant. The simplest solution is to avoid the problem by calling openat() and then using a function like fgetxattr(). This works, but not in every situation, so another method is needed to fill the gaps. The next solution involves looking up the information in proc:

        if (!access("/proc/self/fd",X_OK)) {
            sprintf(path,"/proc/self/fd/%i/",fd);
        }

    This, of course, totally breaks on systems without proc, including some chroot environments. The last option, a more portable but potentially race-condition-prone solution, looks like this:

        DIR* save = opendir(".");
        fchdir(fd);
        getcwd(path,PATH_MAX);
        fchdir(dirfd(save));
        closedir(save);

    The obvious problem here is that in a multithreaded app, changing the working directory around could have side effects. However, the fact that it works is compelling: if I can get the path of a directory by calling fchdir() followed by getcwd(), why shouldn't I be able to just get the information directly: fgetcwd() or something. Clearly the kernel is tracking the necessary information. So how do I get to it?

  • what's an effective way to build a csproj file in C#?

    - by jcollum
    I'd like to avoid a command line for this. I've been using the MSBuild API (Microsoft.Build.Framework and Microsoft.Build.BuildEngine) with code that looks like this:

        this.buildEngine = new Engine();
        BuildPropertyGroup props = new BuildPropertyGroup();
        props.SetProperty("Configuration", "Debug");
        this.buildEngine.RegisterLogger(this.logger);
        Project proj = new Project(this.buildEngine);
        proj.LoadXml(this.projectFileAndPath, ProjectLoadSettings.None);
        this.buildEngine.BuildProject(proj, "Build");

    However, I've run into enough problems that I can't find answers for that I'm really wondering whether I'm doing this right. First, I can't find the output (there's no bin directory in any of the places where I figured the DLLs would end up). Second, I tried building a project that I had made in VS2008, and the proj.LoadXml( line fails with an invalid XML encoding error. But of course the XML file is valid, since VS2008 can build it (I checked). At this point I'm beginning to wonder if I've picked up some code that's way out of date, or a methodology that's been superseded by something else. Opinions?

  • Capturing Set Behavior with Mutating Elements

    - by Carl
    Using the Guava library, I have the following situation:

        SetMultimap<ImmutableFoo, Set<Foo>> setMM = HashMultimap.create();
        Set<Foo> mask = Sets.newHashSet();
        // ... some iteration construct {
            setMM.put(ImmutableFoo1, Sets.difference(SomeSetFoo1, mask));
            setMM.put(ImmutableFoo1, Sets.difference(SomeSetFoo2, mask));
            mask.add(someFoo);
        }

    That is, the same iteration used to populate setMM is also used to build the mask - this can of course change hashCode()s and create duplicates within the SetMultimap backing. Ideally, I'd like the duplicates to drop without me having to make it happen, and to avoid repeating the iteration to construct the multimap and the mask separately. Are there any easy libraries/Set implementations to make that happen? Alternatively, can you identify a better way to drop the duplicates than:

        for (ImmutableFoo f : setMM.keySet())
            setMM.putAll(f, setMM.removeAll(f));

    Revisiting the elements is probably not a performance problem, since I could combine it with a separate filter operation that needs to visit all the elements anyway.
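
    One hedged way out: Sets.difference returns a live view, so later mutations of mask keep changing the stored values (and their hash codes) after insertion. Snapshotting the view at put time decouples each stored set from the mask, and the SetMultimap then drops duplicates on its own. A minimal sketch, with String elements standing in for the Foo types:

        import com.google.common.collect.HashMultimap;
        import com.google.common.collect.ImmutableSet;
        import com.google.common.collect.SetMultimap;
        import com.google.common.collect.Sets;

        import java.util.Set;

        public class SnapshotExample {
            public static void main(String[] args) {
                SetMultimap<String, Set<String>> setMM = HashMultimap.create();
                Set<String> mask = Sets.newHashSet();
                Set<String> someSet = Sets.newHashSet("a", "b");

                // ImmutableSet.copyOf materializes the live difference view, so the
                // stored value can no longer be affected by changes to 'mask'.
                setMM.put("key", ImmutableSet.copyOf(Sets.difference(someSet, mask)));
                mask.add("b");
                setMM.put("key", ImmutableSet.copyOf(Sets.difference(someSet, mask)));

                // Two distinct snapshots survive: [a, b] and [a]; identical
                // snapshots would have been collapsed by the multimap itself.
                System.out.println(setMM);
            }
        }

    The copy also restores the usual contract that a set's hash code must not change while it is used as a stored value, which is what the mutation was silently violating.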

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup:

    - The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs.
    - Each page hit executes at most one statement and executes it only once.
    - I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally.

    Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it?

    EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!

  • Using Qt signals/slots instead of a worker thread

    - by Rob
    I am using Qt and wish to write a class that will perform some network-type operations, similar to FTP/HTTP. The class needs to connect to lots of machines, one after the other, but I need the application's UI to stay (relatively) responsive during this process, so the user can cancel the operation, exit the application, etc. My first thought was to use a separate thread for the network stuff, but the built-in Qt FTP/HTTP (and other) classes apparently avoid using threads and instead rely on signals and slots. So, I'd like to do something similar and was hoping I could do something like this:

        class Foo : public QObject
        {
            Q_OBJECT

        public:
            void start();

        signals:
            void next();

        private slots:
            void nextJob();
        };

        void Foo::start()
        {
            ...
            connect(this, SIGNAL(next()), this, SLOT(nextJob()));
            emit next();
        }

        void Foo::nextJob()
        {
            // Process next 'chunk'
            if (workLeftToDo) {
                emit next();
            }
        }

        void Bar::StartOperation()
        {
            Foo* foo = new Foo;
            foo->start();
        }

    However, this doesn't work and the UI freezes until all operations have completed. I was hoping that emitting signals wouldn't actually call the slots immediately but would somehow be queued up by Qt, allowing the main UI to still operate. So what do I need to do in order to make this work? How does Qt achieve this with the multitude of built-in classes that appear to perform lengthy tasks on a single thread?
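
    What the question hopes for (a queued slot call rather than a direct one) is exactly how event-loop frameworks keep a single thread responsive: each chunk of work re-posts the next chunk to the back of the event queue instead of calling it directly, so pending UI events get processed in between. This is not Qt code, but as a hedged illustration of the pattern, here is the same chunked self-reposting loop on Java's Swing event thread (the chunk count is made up):

        import javax.swing.SwingUtilities;

        public class ChunkedWork {
            private int remaining = 5; // pretend there are 5 chunks of work

            void nextJob() {
                // ... process one chunk here ...
                System.out.println("chunk done, " + --remaining + " left");
                if (remaining > 0) {
                    // Re-post ourselves to the back of the event queue instead of
                    // recursing directly, so the UI stays responsive between chunks.
                    SwingUtilities.invokeLater(this::nextJob);
                }
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new ChunkedWork()::nextJob);
            }
        }

    The direct-call behavior observed in the question is the analogue of calling nextJob() recursively here: nothing else on the thread runs until the recursion unwinds.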

  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters like this:

        public static T GetServiceClient<T>(string serviceURL)
        {
            BasicHttpBinding binding = new BasicHttpBinding(
                Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                    ? BasicHttpSecurityMode.Transport
                    : BasicHttpSecurityMode.None);
            binding.MaxReceivedMessageSize = int.MaxValue;
            binding.MaxBufferSize = int.MaxValue;
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL) });
        }

    The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which time HTTP 401.1 errors were received. The service's httpBinding defines a closeTimeout, openTimeout, receiveTimeout and sendTimeout of 10 minutes. If I limit the size of the result set, the call succeeds.

    Additional observations from Fiddler: when Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls:

        Service1/Method1 -- result:401
        Service1/Method1 -- result:401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result:200
        Service1/Method2 -- result:200 (1.25 seconds)

    When Method2 is configured to return the larger result set, we get:

        Service1/Method1 -- result:401
        Service1/Method1 -- result:401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result:200
        Service1/Method2 -- result:401.1 (7.5 seconds)
        Service1/Method2 -- result:401.1 (15ms)
        Service1/Method2 -- result:401.1 (7.5 seconds)

  • Inserting a ContentControl after another ContentControl

    - by Markus Roth
    In our VSTO Word 2010 add-in, we are trying to insert a RichTextControl after a given other ContentControl. We have tried this:

        public ContentControl AddContentControl(WdContentControlType type, int position)
        {
            Paragraph paragraphBefore = null;
            if (position == 0)
            {
                if (WordDocument.Paragraphs.Count == 0)
                {
                    WordDocument.Paragraphs.Add();
                }
                paragraphBefore = WordDocument.Paragraphs.First;
            }
            else
            {
                paragraphBefore = Controls.ElementAt(position - 1).Range.Paragraphs.Last;
            }
            object start = paragraphBefore.Range.End;
            object end = paragraphBefore.Range.End + 1;
            paragraphBefore.Range.InsertParagraphAfter();
            Range rangeToUse = WordDocument.Range(ref start, ref end);
            ContentControl newControl = _ContentControl = _WordDocument.ContentControls.Add(type, rangeToInsert);
            Controls.Insert(position, newControl);
            OnNewContentControl(newControl, position);
            return newControl.ContentControl;
        }

    which works fine, unless the control before the one we want to insert has an empty paragraph at the end. In that case, the new ContentControl is inserted within the last control. How can we avoid this?

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does the dual-core processor handle opcode memory collision? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision? I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.

  • recursively implementing 'minimum number of coins' in python

    - by user5198
    This problem is the same as the one asked here. Given a list of coins, their values (c1, c2, c3, ... cj, ...), and the total sum i, find the minimum number of coins whose sum is i (we can use as many coins of one type as we want), or report that it's not possible to select coins in such a way that they sum up to S. I was introduced to dynamic programming only yesterday, and I tried to write a code for it:

        # Optimal substructure: C[i] = 1 + min_j(C[i-cj])
        cdict = {}
        def C(i, coins):
            if i <= 0:
                return 0
            if i in cdict:
                return cdict[i]
            else:
                answer = 1 + min([C(i - cj, coins) for cj in coins])
                cdict[i] = answer
                return answer

    Here, C[i] is the optimal solution for an amount of money 'i', and the available coins are {c1, c2, ... , cj, ...}. For this program, I've increased the recursion limit to avoid the "maximum recursion depth exceeded" error. But this program gives the right answer only sometimes, and when a solution is not possible, it doesn't indicate that. What is wrong with my code and how can I correct it?
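
    For reference: one reason the memoized version returns wrong answers is that the base case i <= 0: return 0 treats overshooting the target (i < 0) as a free, valid solution, and there is no marker for amounts that are unreachable. A hedged sketch of the same recurrence, written bottom-up in Java here, that fixes both issues by walking amounts upward and using a sentinel for "not possible":

        import java.util.Arrays;

        public class CoinChange {
            // Bottom-up DP: best[s] = minimum number of coins summing to s,
            // or Integer.MAX_VALUE if s is not reachable with these coins.
            static int minCoins(int[] coins, int target) {
                int[] best = new int[target + 1];
                Arrays.fill(best, Integer.MAX_VALUE);
                best[0] = 0; // zero coins make zero
                for (int s = 1; s <= target; s++) {
                    for (int c : coins) {
                        // only extend amounts that are themselves reachable
                        if (c <= s && best[s - c] != Integer.MAX_VALUE) {
                            best[s] = Math.min(best[s], best[s - c] + 1);
                        }
                    }
                }
                return best[target]; // MAX_VALUE signals "not possible"
            }

            public static void main(String[] args) {
                System.out.println(minCoins(new int[]{1, 3, 4}, 6)); // prints 2 (3 + 3)
                System.out.println(minCoins(new int[]{5}, 7));       // prints 2147483647: impossible
            }
        }

    The same fix applies directly to the Python version: return a sentinel (e.g. float('inf')) for i < 0, return 0 only for i == 0, and also avoid sharing cdict across calls with different coin sets.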

  • Building an extension framework for a Rails app

    - by obvio171
    I'm starting research on what I'd need in order to build a user-level plugin system (like Wordpress plugins) for a Rails app, so I'd appreciate some general pointers/advice. By user-level plugin I mean a package a user can extract into a folder and have it show up on an admin interface, allowing them to add some extra configuration and then activate it.

    What is the best way to go about doing this? Is there any other opensource project that does this already? What does Rails itself already offer for programmer-level plugins that could be leveraged? Any Rails plugins that could help me with this?

    A plugin would have to be able to:

    - run its own migrations (with this? it's undocumented)
    - have access to my models (plugins already do)
    - have entry points for adding content to views (can be done with content_for and yield)
    - replace entire views or partials (how?)
    - provide its own admin and user-facing views (how?)
    - create its own routes (or maybe just announce its presence and let me create the routes for it, to avoid plugins stepping on each other's toes)

    Anything else I'm missing? Also, is there a way to limit which tables/actions the plugin has access to concerning migrations and models, and also limit their access to routes (maybe letting them include, but not remove, routes)?

    P.S.: I'll try to keep this updated, compiling stuff I figure out and relevant answers so as to have a sort of guide for others.

  • Does the query plan optimizer work well with joined/filtered table-valued functions?

    - by smoothdeveloper
    In SQL Server 2005, I'm using table-valued functions as a convenient way to perform arbitrary aggregation on subset data from a large table (passing a date range or similar parameters). I'm using these inside larger queries as joined computations, and I'm wondering whether the query plan optimizer works well with them in every condition, or whether I'm better off unnesting such computations into my larger queries.

    - Does the query plan optimizer unnest table-valued functions if it makes sense?
    - If it doesn't, what do you recommend to avoid the code duplication that would occur by manually unnesting them?
    - If it does, how do you identify that from the execution plan?

    Code sample:

        create table dbo.customers
        (
            [key] uniqueidentifier
            , constraint pk_dbo_customers primary key ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.point_of_sales
        (
            [key] uniqueidentifier
            , customer_key uniqueidentifier
            , constraint pk_dbo_point_of_sales primary key ([key])
        )
        go

        create table dbo.product_ranges
        (
            [key] uniqueidentifier
            , constraint pk_dbo_product_ranges primary key ([key])
        )
        go

        create table dbo.products
        (
            [key] uniqueidentifier
            , product_range_key uniqueidentifier
            , release_date datetime
            , constraint pk_dbo_products primary key ([key])
            , constraint fk_dbo_products_product_range_key foreign key (product_range_key) references dbo.product_ranges ([key])
        )
        go

        /* assume large amount of data */
        create table dbo.sales_history
        (
            [key] uniqueidentifier
            , product_key uniqueidentifier
            , point_of_sale_key uniqueidentifier
            , accounting_date datetime
            , amount money
            , quantity int
            , constraint pk_dbo_sales_history primary key ([key])
            , constraint fk_dbo_sales_history_product_key foreign key (product_key) references dbo.products ([key])
            , constraint fk_dbo_sales_history_point_of_sale_key foreign key (point_of_sale_key) references dbo.point_of_sales ([key])
        )
        go

        create function dbo.f_sales_history_..snip.._date_range
        (
            @accountingdatelowerbound datetime,
            @accountingdateupperbound datetime
        )
        returns table
        as
        return
        (
            select pos.customer_key
                 , sh.product_key
                 , sum(sh.amount) amount
                 , sum(sh.quantity) quantity
            from dbo.point_of_sales pos
            inner join dbo.sales_history sh
                on sh.point_of_sale_key = pos.[key]
            where sh.accounting_date between @accountingdatelowerbound and @accountingdateupperbound
            group by pos.customer_key
                   , sh.product_key
        )
        go

        -- TODO: insert some data

        -- this is a table containing a selection of product ranges
        declare @selectedproductranges table([key] uniqueidentifier)

        -- this is a table containing a selection of customers
        declare @selectedcustomers table([key] uniqueidentifier)

        declare @low datetime
              , @up datetime

        -- TODO: set top query parameters

        select saleshistory.customer_key
             , saleshistory.product_key
             , saleshistory.amount
             , saleshistory.quantity
        from dbo.products p
        inner join @selectedproductranges productrangeselection
            on p.product_range_key = productrangeselection.[key]
        inner join @selectedcustomers customerselection
            on 1 = 1
        inner join dbo.f_sales_history_..snip.._date_range(@low, @up) saleshistory
            on saleshistory.product_key = p.[key]
            and saleshistory.customer_key = customerselection.[key]

    I hope the sample makes sense. Many thanks for your help!

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same?

    I've looked at FastCGI, but I think this starts X processes, of which any one can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI or FastCGI bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster.

    The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now, growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for PHP anyway), Apache has real problems when 50/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).
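
    To make the in-process idea concrete, here is a hedged sketch of one possible shape for such a server, using Java's built-in com.sun.net.httpserver: one long-lived process, the lookup tables as plain heap maps shared by all handler threads, no cache round-trip. The table contents, port and pool size are made up for illustration:

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.Executors;

        public class InMemoryLookupServer {
            // The lookup table lives on the heap, shared by all handler threads:
            // no socket hop to a cache process, no per-request (de)serialization.
            static final Map<String, String> LOOKUP = new ConcurrentHashMap<>();

            public static void main(String[] args) throws Exception {
                LOOKUP.put("foo", "bar"); // illustrative data

                HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
                server.setExecutor(Executors.newFixedThreadPool(16)); // tune to load
                server.createContext("/", exchange -> {
                    String key = exchange.getRequestURI().getPath().substring(1);
                    String value = LOOKUP.getOrDefault(key, "not found");
                    byte[] body = value.getBytes("UTF-8");
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream out = exchange.getResponseBody()) {
                        out.write(body);
                    }
                });
                server.start();
            }
        }

    Whether this beats a tuned IIS/ASP.NET setup is an open question; the point of the sketch is only the design shape the question is circling: keep the data in the serving process itself.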

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the subversion repo quite glaring, and that creates problems for me.

    There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately, the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts at some point to the subversion repository). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes back in, I simply don't stage or commit them, but having unstaged local changes means I can't rebase without first cleaning them up.

    What I would like to know is whether there is any way to ignore future changes to a set of tracked files. Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?

  • How to make this special many2many field validation using the Django ORM?

    - by e-satis
    I have the following model:

        class Step(models.Model):
            order = models.IntegerField()
            latitude = models.FloatField()
            longitude = models.FloatField()
            date = DateField(blank=True, null=True)

        class Journey(models.Model):
            boat = models.ForeignKey(Boat)
            route = models.ManyToManyField(Step)
            departure = models.ForeignKey(Step, related_name="departure_of", null=True)
            arrival = models.ForeignKey(Step, related_name="arrival_of", null=True)

    I would like to implement the following check:

        # If there are fewer than two steps, raise ValidationError.
        routes = tuple(self.route.order_by("date"))
        if len(routes) <= 1:
            raise ValidationError("There must be at least two steps in the route")

        # Save the first and the last step as departure and arrival.
        self.departure = routes[0]
        self.arrival = routes[-1]

        # Departure and arrival must at least have a date.
        if not (self.departure.date or self.arrival.date):
            raise ValidationError("There must be a departure and an arrival date. "
                                  "Please set the date field for the first and last Step of the Journey")

        # Departure must occur before arrival.
        if not (self.departure.date > self.arrival.date):
            raise ValidationError("Departure must take place the same day or any date before arrival. "
                                  "Please set accordingly the date field for the first and last Step of the Journey")

    I tried to do that by overloading save(). Unfortunately, Journey.route is empty in save(). What's more, Journey.id doesn't exist yet. I didn't try django.db.models.signals.post_save, but I suppose it will fail because Journey.route is empty as well (when does this get filled, anyway?). I see a solution in django.db.models.signals.m2m_changed, but there are a lot of steps (thousands), and I want to avoid performing an operation for every single one of them.

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test one thing: does ZTable use fetch techniques or not? In the future we may migrate our smaller system to PGSQL; it currently uses "Table" components (like the BDE, but it has an SQL-like server). These tables use real cursors, a "window" of N records, so lookup is very fast, because the Locate/Lookup is started on the server and only these N records are refreshed, no matter how many records are in the lookup table.

    PGSQL uses fetch techniques as far as I know, and I tested it with a table (id int, name varchar(100)) and 1 million records (I also tried this with MySQL). The adapter is Zeos.

        ID, seconds to find, allocated memory in bytes on client

        MySQL
        500000     2,761    113 196 344
        1000000    3,214    225 471 232
        313800     0,437    225 471 232
        328066     0,468    225 471 232
        276374     0,390    225 471 232
        905984     1,264    225 471 232
        260253     0,359    225 471 232

        PGSQL
        500000     3,042    113 188 184
        1000000    3,744    225 463 064
        313800     0,436    225 463 064
        328066     0,452    225 463 064
        276374     0,375    225 463 064
        905984     1,295    225 463 064
        260253     0,359    225 463 064
        142023     0,203    225 463 064

    As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow, depending on where the record we must find sits. I want to ask a few more things:

    a.) Does PGDAC have some technique we can use for lookups without paying for the fetch in memory and seconds?
    b.) Or can the PG ODBC driver help with this problem through ADO? (As far as I know, ADO can use server-side cursors.)
    c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (Including client memory usage.)
    d.) If there is no chance of avoiding fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup field changes without a real lookup?

    Thanks for your help: dd

  • strange syntax error in python, version 2.6 and 3.1

    - by flow
    This may not be an earth-shattering deficiency of Python, but I still wonder about the rationale behind the following behavior. When I run:

        source = """
        print( 'helo' )

        if __name__ == '__main__':
          print( 'yeah!' )
        #"""

        print( compile( source, '<whatever>', 'exec' ) )

    I get:

        File "<whatever>", line 6
          #
          ^
        SyntaxError: invalid syntax

    I can avoid this exception by (1) deleting the trailing #; (2) deleting or commenting out the if __name__ == '__main__': print( 'yeah!' ) lines; or (3) adding a newline to the very end of the source. Moreover, if I have the source end without a trailing newline right behind the print( 'yeah!' ), the source will also compile without error. I could also reproduce this behavior with Python 2.6, so it's not new to the 3k series.

    I find this error highly irritating, all the more since when I put the above source inside a file and execute it directly or have it imported, no error will occur—which is the expected behavior. A # (hash) outside a string literal should always represent the start of a (possibly empty) comment in a Python source; moreover, the presence or absence of an if __name__ == '__main__' clause should not change the interpretation of a source on a syntactical level. Can anyone reproduce the above problem, and/or comment on the phenomenon? Cheers
