Search Results

Search found 17501 results on 701 pages for 'stored functions'.

Page 569 of 701

  • usort - more parameters

    - by pgfonline
    Hello, how can I pass more parameters to usort? I have several functions that are very similar in structure, and I want to replace them with just one function:

        <?php
        $arr = array(
            array('number' => 100, 'string' => 'aaa'),
            array('number' => 50,  'string' => 'bdef'),
            array('number' => 150, 'string' => 'cbba')
        );

        usort($arr, 'sortNumberDesc');
        // How can I use just a single function?
        // How can I pass further parameters to usort?

        function sortNumberDesc($a, $b) {
            $a = $a['number'];
            $b = $b['number'];
            if ($a == $b) return 0;
            return ($a > $b) ? -1 : +1;
        }

        function sortNumberAsc($a, $b) {
            $a = $a['number'];
            $b = $b['number'];
            if ($a == $b) return 0;
            return ($a < $b) ? -1 : +1;
        }

        // I want to do the same with just one function:
        // $sortId is the search index, $reverse toggles DESC or ASC.
        // Named sortByField because PHP does not allow redeclaring the built-in sort().
        function sortByField($a, $b, $sortId = 'number', $reverse = 0) {
            $a = $a[$sortId];
            $b = $b[$sortId];
            if ($a == $b) return 0;
            if ($reverse == false)
                return ($a > $b) ? -1 : +1;
            else
                return ($a < $b) ? -1 : +1;
        }

        print_r($arr);
        ?>
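
    One way to collapse these comparators into one is to build the callback from a factory function. This is a minimal sketch, assuming PHP 5.3+ (closures); the helper name makeComparator is mine, not a library function:

        <?php
        // Returns a comparator for usort, closing over the field and direction.
        function makeComparator($sortId, $reverse = false) {
            return function ($a, $b) use ($sortId, $reverse) {
                if ($a[$sortId] == $b[$sortId]) return 0;
                $cmp = ($a[$sortId] < $b[$sortId]) ? -1 : 1;
                return $reverse ? -$cmp : $cmp;
            };
        }

        usort($arr, makeComparator('number'));        // ascending
        usort($arr, makeComparator('number', true));  // descending
        ?>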

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now:

    1. Using memcache for all major queries and leaving it alone to do what it does best.
    2. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to add nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!
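
    For reference, a two-tier lookup can stay quite small. This is a minimal sketch, assuming the PHP Memcached extension and a serialize()-based file cache; the function name and directory layout are illustrative:

        <?php
        // Try Memcached first, fall back to a file cache, and re-warm
        // Memcached on a miss so later requests stay in memory.
        function cacheGet(Memcached $mc, $key, $fileDir) {
            $value = $mc->get($key);
            if ($mc->getResultCode() === Memcached::RES_SUCCESS) {
                return $value;
            }
            $file = $fileDir . '/' . md5($key) . '.cache';
            if (is_file($file)) {
                $value = unserialize(file_get_contents($file));
                $mc->set($key, $value, 300); // re-warm with a 5-minute TTL
                return $value;
            }
            return false; // full miss: the caller hits the database
        }
        ?>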

  • Checking for valid email addresses

    - by Roland
    I'm running a website with more than 60,000 registered users. Every week notifications are sent to these users via email, and I've noticed some of the addresses no longer exist: the domain is still valid, but the local part (e.g. asdas@) is no longer valid because the person has left the company, etc. Right now I loop through the database, run a regular-expression check, and verify that the MX records exist, using the following two functions:

        // A-Za-z rather than A-z: the latter also matches the characters
        // between Z and a ([, \, ], ^, _ and the backtick).
        function verify_email($email) {
            if (!preg_match('/^[_A-Za-z0-9-]+((\.|\+)[_A-Za-z0-9-]+)*@[A-Za-z0-9-]+(\.[A-Za-z0-9-]+)*(\.[A-Za-z]{2,4})$/', $email)) {
                return false;
            } else {
                return true;
            }
        }

        // Our function to verify the MX records
        // (explode() rather than the deprecated split())
        function verify_email_dns($email) {
            list($name, $domain) = explode('@', $email);
            if (!checkdnsrr($domain, 'MX')) {
                return false;
            } else {
                return true;
            }
        }

    If the email address is in an invalid format or the domain does not exist, I delete the user's account. Are there any methods I could use to check whether the email address itself still exists when the domain name is valid and the address is in the correct format? For example, [email protected] does not exist anymore, but test.com is a valid domain name.
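
    The closest thing to a real existence check is an SMTP callout: connect to the domain's MX host and ask whether it would accept the mailbox. This is a hedged sketch, not a reliable test - many servers accept every recipient (catch-all) or block callouts entirely, so treat a positive answer as a hint only:

        <?php
        function mailboxLikelyExists($email) {
            list(, $domain) = explode('@', $email);
            if (!getmxrr($domain, $hosts) || !$hosts) return false;
            $fp = @fsockopen($hosts[0], 25, $errno, $errstr, 10);
            if (!$fp) return false;
            fgets($fp, 512); // read the server banner
            $ok = true;
            $cmds = array("HELO example.org",
                          "MAIL FROM:<check@example.org>",
                          "RCPT TO:<$email>");
            foreach ($cmds as $cmd) {
                fwrite($fp, $cmd . "\r\n");
                $resp = fgets($fp, 512);
                $ok = $ok && $resp !== false && $resp[0] === '2';
            }
            fwrite($fp, "QUIT\r\n");
            fclose($fp);
            return $ok;
        }
        ?>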

  • How do I design a Wizard-based system with SoC in mind?

    - by Erik Forbes
    I'm building a Windows Forms system (in C#, if it matters to anyone) that provides an application automation service. As this application is targeted at users who are not computer savvy, I've decided to break up the functions of the application into various tasks and provide these tasks via a wizard UI. I'd like to avoid coupling the views and view engine (from which the wizard will be built) to the automation engine. The problem I'm having is that the automation engine, which runs on a separate thread while it does its thing, needs to report status information back to the user, as well as listen for cancel or pause events from the user. Since I don't want the view engine or the automation engine to rely on each other, I'm having a hard time figuring out how to provide this information conduit. Any insights into this issue would be greatly appreciated. I've been racking my brain for a couple of weeks now on this point, and I really don't want to give up and just couple everything together. If anyone needs additional details to help come up with some sort of idea, please let me know and I'll be happy to provide them.
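
    One common shape for that conduit is a small shared interface that both assemblies reference, so neither references the other directly. A minimal sketch; all names here are illustrative, not from the question:

        using System;

        // Lives in a shared "contracts" assembly.
        public class StatusEventArgs : EventArgs
        {
            public string Message { get; private set; }
            public int PercentComplete { get; private set; }

            public StatusEventArgs(string message, int percentComplete)
            {
                Message = message;
                PercentComplete = percentComplete;
            }
        }

        public interface IAutomationMonitor
        {
            // Raised by the engine thread; the WinForms view subscribes and
            // marshals to the UI thread with Control.BeginInvoke.
            event EventHandler<StatusEventArgs> StatusChanged;

            // Polled by the engine between steps; set by the view.
            bool CancelRequested { get; }
            bool PauseRequested { get; }
        }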

  • Bilinear interpolation - DirectX vs. GDI+

    - by holtavolt
    I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code. A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.

    Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting?

    For this (magnification) case, the DX9 code is using this (MinFilter, MipFilter set similarly):

        Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);

    and the GDI+ path is using:

        g.InterpolationMode = InterpolationMode.Bilinear;

    I thought that "bilinear interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking").

    Followup question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not? Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!
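
    For what it's worth, a tolerance-based comparison makes the "1/256 in 5-10% of pixels" observation easy to quantify. A minimal sketch (GetPixel is slow, but fine for a test bed):

        using System;
        using System.Drawing;

        static class BitmapDiff
        {
            // Counts pixels whose per-channel difference exceeds a tolerance.
            public static int CountDifferingPixels(Bitmap a, Bitmap b, int tolerance)
            {
                int diffs = 0;
                for (int y = 0; y < a.Height; y++)
                    for (int x = 0; x < a.Width; x++)
                    {
                        Color ca = a.GetPixel(x, y), cb = b.GetPixel(x, y);
                        if (Math.Abs(ca.R - cb.R) > tolerance ||
                            Math.Abs(ca.G - cb.G) > tolerance ||
                            Math.Abs(ca.B - cb.B) > tolerance)
                            diffs++;
                    }
                return diffs;
            }
        }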

  • JSF how to temporary disable validators to save draft

    - by Swiety
    I have a pretty complex form with lots of inputs and validators. It takes users a pretty long time (even over an hour) to complete it, so they would like to be able to save the draft data, even if it violates rules like mandatory fields not being filled in. I believe this problem is common to many web applications, but I can't find any well recognised pattern for how it should be implemented. Can you please advise how to achieve that? For now I can see the following options:

    1. Use immediate=true on the "Save draft" button. This doesn't work, as the UI data would not be stored on the bean, so I wouldn't be able to access it. Technically I could find the data in the UI component tree, but traversing that doesn't seem to be a good idea.
    2. Remove all the field validation from the page and validate the data programmatically in the action listener defined for the form. Again, not a good idea: the form is really complex, there are plenty of fields, so validation implemented this way would be very messy.
    3. Implement my own validators, controlled by some request attribute, which would be set for standard form submission (with full validation expected) and unset for a "save as draft" submission (when validation should be skipped). Again, not a good solution, as I would need to provide my own wrappers for all the validators I am using.

    As you can see, none of these is really satisfactory. Is there really no simple solution to the problem?
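
    For the record, here is what option 3 could look like. A minimal sketch against the standard JSF Validator API; the button's client id "form:saveDraftButton" is illustrative (a non-AJAX submit includes the clicked button's id in the request parameter map):

        import javax.faces.application.FacesMessage;
        import javax.faces.component.UIComponent;
        import javax.faces.context.FacesContext;
        import javax.faces.validator.Validator;
        import javax.faces.validator.ValidatorException;

        // Skips its check when the request came from the "Save draft" button.
        public class DraftAwareRequiredValidator implements Validator {
            public void validate(FacesContext ctx, UIComponent comp, Object value) {
                boolean draft = ctx.getExternalContext().getRequestParameterMap()
                                   .containsKey("form:saveDraftButton");
                if (draft) {
                    return; // draft save: skip validation
                }
                if (value == null || value.toString().trim().length() == 0) {
                    throw new ValidatorException(new FacesMessage(
                        FacesMessage.SEVERITY_ERROR, "Value is required.", null));
                }
            }
        }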

  • Risking the exception anti-pattern.. with some modifications

    - by Sridhar Iyer
    Let's say that I have a library which runs 24x7 on certain machines. Even if the code is rock solid, a hardware fault can sooner or later trigger an exception. I would like to have some sort of failsafe in position for events like this. One approach would be to write wrapper functions that encapsulate each API:

        returnCode = DEFAULT;
        try
        {
            returnCode = libraryAPI1();
        }
        catch(...)
        {
            returnCode = BAD;
        }
        return returnCode;

    The caller of the library then restarts the whole thread and reinitializes the module if the return code is bad. Things CAN go horribly wrong. E.g. if the try block (or libraryAPI1()) had:

        func1();
        char *x = malloc(1000);
        func2();

    and func2() throws an exception, x will never be freed. In a similar vein, file corruption is a possible outcome. Could you please tell me what other things can possibly go wrong in this scenario?
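
    As an aside on the leak itself: RAII sidesteps it without any catch block. A minimal sketch, where func1/func2 stand in for the calls above:

        #include <vector>

        void func1();
        void func2(); // may throw

        void wrapped()
        {
            func1();
            std::vector<char> x(1000); // heap buffer owned by a stack object
            func2();                   // if this throws, x is still destroyed
        }                              // during stack unwinding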

  • How to make Web Storage persistent in Cordova using JS?

    - by ett
    I have a small quiz app, a cross-platform mobile app that I plan to run on Android, iOS, and WP8. I want to store a local highscore that keeps track of how many points the user had, and if he/she does better than the already stored highscore, update the current highscore. I want the highscore to be persistent: every time the user opens the app, the last highscore should be present and compared with the new quiz score. In other words, I don't want the highscore to be deleted each time the app is closed. I also want the highscore to be set to 0 the first time the app is run, so the first time the user finishes the quiz they obviously get a new highscore. I have this code so far:

    scoredb.js

        // Wait for device API libraries to load
        document.addEventListener("deviceready", initScoreDB, false);

        // Device APIs are available
        function initScoreDB() {
            window.localStorage.setItem("score", 0);
            highscore = window.localStorage.getItem("score");
        }

    main.js

        var correct = 0;
        var highscore;

        // In between I have some code that keeps incrementing
        // the correct variable for each correct answer.

        if (correct > highscore) {
            window.localStorage.setItem("score", correct);
            highscore = correct;
        }

    It does seem to work okay once the app is started. I did the quiz three times in the simulator, and it keeps the score as it should. However, each time I open the app, the highscore is reset to 0. I guess that's because I call initScoreDB when the device is ready, initialize the score to 0 there, and give that value to highscore. Can someone help me initialize the score to 0 only when the app is run for the first time, and on every other launch keep the latest highscore as the current highscore and compare it with the score achieved when the quiz is finished? If someone can help me, I would be glad.
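
    The diagnosis in the question is right: setItem("score", 0) runs on every launch. A guarded init is the usual fix - a minimal sketch using the same keys as above (getItem returns null for a key that was never written):

        function initScoreDB() {
            if (window.localStorage.getItem("score") === null) {
                // First run only: seed the stored highscore.
                window.localStorage.setItem("score", 0);
            }
            // localStorage stores strings, so parse before comparing.
            highscore = parseInt(window.localStorage.getItem("score"), 10);
        }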

  • Need some help with a MySQL subquery count

    - by Ferdy
    I'm running into the limits of my MySQL query skills, so I hope some SQL guru can help out on this one. The situation is as follows: I have images that can be tagged. As you might expect, this is stored in three tables:

    1. Image
    2. Tag
    3. Tag_map (maps images to tags)

    I have a SQL query that calculates the related tags based on a tag id. The query basically checks what other tags were used for images using that tag. Example:

    1. Image1 tagged as "Bear"
    2. Image2 tagged as "Bear" and "Canada"

    If I throw "Bear" (or its tag id) at the query, it will return "Canada". This works fine. Here's the query:

        SELECT tag.name, tag.id, COUNT(tag_map.id) as cnt
        FROM tag_map, tag
        WHERE tag_map.tag_id = tag.id
          AND tag.id != '185'
          AND tag_map.image_id IN
              (SELECT tag_map.image_id
               FROM tag_map
               INNER JOIN tag ON tag_map.tag_id = tag.id
               WHERE tag.id = '185')
        GROUP BY tag_map.id
        LIMIT 0,100

    The part I'm stuck with is the count. For each related tag returned, I want to know how many images have that tag. Currently it always returns 1, even if there are, for example, 3. I've tried counting different columns, all resulting in the same output, so I guess there is a flaw in my thinking.
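
    The likely flaw: GROUP BY tag_map.id groups by the mapping row's own key, so every group holds exactly one row and every count is 1. Grouping by the tag instead should give per-tag image counts - a sketch of the corrected query, assuming tag_map has one row per (image, tag) pair:

        SELECT tag.name, tag.id, COUNT(tag_map.image_id) AS cnt
        FROM tag_map
        INNER JOIN tag ON tag_map.tag_id = tag.id
        WHERE tag.id != 185
          AND tag_map.image_id IN
              (SELECT image_id FROM tag_map WHERE tag_id = 185)
        GROUP BY tag.id, tag.name
        ORDER BY cnt DESC
        LIMIT 0,100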

  • drupal: standard way for creating a slug from a string

    - by egarcia
    Hi there. A slug in this context is a string that is safe to use as an identifier, in URLs or CSS. For example, if you have this string:

        I'd like to eat at McRunchies!

    its slug would be:

        i-d-like-to-eat-at-mcrunchies-

    I want to know whether there's a standard way of building such strings in Drupal (or PHP functions available from Drupal) - more precisely, inside a Drupal theme. Context: I'm modifying a Drupal theme so the HTML of the nodes it generates includes their taxonomy terms as CSS classes on their containing div. Trouble is, some of those terms' names aren't valid CSS class names. I need to "slugify" them. I've read that some people simply do this:

        str_replace(" ", "-", $term->name)

    This isn't really enough for me. It doesn't replace uppercase letters with lowercase, and more importantly, it doesn't replace non-ASCII characters (like à or é) with their ASCII equivalents. Is there a function in Drupal 6 (or the PHP libs) that provides a way to slugify a string and can be used in a template.php file of a Drupal theme?
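
    If nothing built in turns up, a self-contained helper is short. A minimal sketch for template.php, assuming the iconv extension is available for transliteration; the function name and theme prefix are illustrative:

        <?php
        function mytheme_slugify($text) {
            // Transliterate accented characters to ASCII (à -> a, é -> e).
            $text = iconv('UTF-8', 'ASCII//TRANSLIT//IGNORE', $text);
            $text = strtolower($text);
            // Collapse every run of non-alphanumerics into a single hyphen.
            $text = preg_replace('/[^a-z0-9]+/', '-', $text);
            return trim($text, '-');
        }
        ?>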

  • How to determine the size of a project (lines of code, function points, other)

    - by sixtyfootersdude
    How would you evaluate project size? Part A: before you start a project. Part B: for a complete project. I am interested in comparing unrelated projects. Here are some options:

    1. Lines of code. I know that this is not a good metric of productivity, but is it a reasonable measure of project size? If I wanted to estimate how long it would take to recreate a project, would this be a reasonable way to do it? How many lines of code should I estimate per day?
    2. Function points. Function points are defined as the number of: inputs, outputs, inquiries, internal files, and external interfaces. Does anyone have a viewpoint on whether this is a good measure? Is there a way to actually do this?

    Does anyone have another solution? Hours taken seems like it could be a useful metric, but not solely. If I asked you which of two programs is the "bigger program", how would you approach the question? I have seen several discussions of this on Stack Overflow, but most discuss how to measure programmer productivity. I am more interested in project size.

  • Java SWT: wrapping syncExec and asyncExec to clean up code

    - by jonescb
    I have a Java application using SWT as the toolkit, and I'm getting tired of all the ugly boilerplate code it takes to update a GUI element. Just to set a disabled button to be enabled I have to go through something like this:

        shell.getDisplay().asyncExec(new Runnable() {
            public void run() {
                buttonOk.setEnabled(true);
            }
        });

    I prefer keeping my source code as flat as I possibly can, but I need a whopping 3 indentation levels just to do something simple. Is there some way I can wrap it? I would like a class like:

        public class UIUpdater {
            public static void updateUI(Shell shell, *function_ptr*) {
                shell.getDisplay().asyncExec(new Runnable() {
                    public void run() {
                        // Execute function_ptr
                    }
                });
            }
        }

    which could be used like so:

        UIUpdater.updateUI(shell, buttonOk.setEnabled(true));

    Something like this would be great for hiding the mess SWT seems to think is necessary to do anything. As I understand it, Java cannot do function pointers, but Java 7 is supposed to get something called closures, which should be what I want. In the meantime, is there anything at all I can do to pass a function pointer or callback to another function to be executed? As an aside, I'm starting to think it'd be worth the effort to redo this application in Swing, so I don't have to put up with this ugly crap and the non-cross-platformyness of SWT.
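
    Short of closures, Runnable itself is the "function pointer", so a wrapper can at least hide the Display plumbing. A minimal sketch:

        public final class UIUpdater {
            // Runs the update on the SWT UI thread; no-op if the shell is gone.
            public static void updateUI(Shell shell, Runnable update) {
                if (!shell.isDisposed()) {
                    shell.getDisplay().asyncExec(update);
                }
            }
        }

        // Usage: still an anonymous class, but only one nesting level,
        // and the threading detail lives in exactly one place.
        UIUpdater.updateUI(shell, new Runnable() {
            public void run() { buttonOk.setEnabled(true); }
        });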

  • mod_rewrite if file exists

    - by Mathieu Parent
    Hi everyone. I already have two rewrite rules that work correctly for now, but some more code has to be added for them to work perfectly. I have a website hosted at mydomain.com, and all subdom.mydomain.com requests are rewritten to mydomain.com/subs/subdom. My CMS has to handle the request if the file being reached does not exist; the rewrite is done like so:

        RewriteCond $1 !^subs/
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ subs/%1/index.php?page=$1 [L]

    My CMS handles the next part of the parsing as usual. The problem is that if a file really exists, I need to link to it without passing through my CMS. I managed to do it like this:

        RewriteCond $1 !^subs/
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$
        RewriteCond %{REQUEST_FILENAME} -f [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^(.*)$ subs/%1/$1 [L]

    So far it seems to work like a charm. Now I am being picky, and I need to have default files that are stored in subs/default/. If the file exists in the subdomain folder, we should grab that one, but if not, we need to get the file from the default subdomain. And if the file does not exist anywhere, we should be using the 404 page from the current subdomain, unless there is none. I hope that describes it well enough. Thank you for your time!
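
    One way to express the fallback, sketched under the same layout as above and untested - mod_rewrite lets a RewriteCond test string use both %N from a previous condition and $N from the rule pattern:

        # If the file is missing under the subdomain's folder but present
        # under subs/default/, serve the default copy.
        RewriteCond $1 !^subs/
        RewriteCond %{HTTP_HOST} ^([^.]+)\.mydomain\.com$
        RewriteCond %{DOCUMENT_ROOT}/subs/%1/$1 !-f
        RewriteCond %{DOCUMENT_ROOT}/subs/default/$1 -f
        RewriteRule ^(.*)$ subs/default/$1 [L]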

  • One big executable or many small DLL's?

    - by Patrick
    Over the years my application has grown from 1 MB to 25 MB, and I expect it to grow further to 40 or 50 MB. I don't use DLLs, but put everything in one big executable. Having one big executable has certain advantages:

    1. Installing my application at the customer is really just: copy and run.
    2. Upgrades can easily be zipped and sent to the customer.
    3. There is no risk of conflicting DLLs (where the customer has version X of the EXE, but version Y of the DLL).

    The big disadvantage of the big EXE is that linking times seem to grow exponentially. An additional problem is that a part of the code (let's say about 40%) is shared with another application. Again, the advantages are:

    1. There is no risk of having a mix of incorrect DLL versions.
    2. Every developer can make changes to the common code, which speeds up development.

    But again, this has a serious impact on compilation times (everyone compiles the common code again on their PC) and on linking times. The question http://stackoverflow.com/questions/2387908/grouping-dlls-for-use-in-executable mentions the possibility of mixing DLLs into one executable, but it looks like this still requires you to bind all functions manually in your application (using LoadLibrary, GetProcAddress, ...). What is your opinion on executable sizes, the use of DLLs, and the best balance between easy deployment and easy/fast development?
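
    For readers unfamiliar with the manual binding mentioned above, this is the kind of ceremony it implies per function - a minimal Win32 sketch, where the DLL name and export are hypothetical:

        #include <windows.h>

        typedef int (*AddFunc)(int, int);

        AddFunc LoadAdd()
        {
            // Load the DLL at run time and resolve one export by name.
            HMODULE mod = LoadLibraryA("common.dll");
            if (mod == NULL) return NULL;
            return (AddFunc)GetProcAddress(mod, "Add");
        }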

  • String operations

    - by NEO
    I am trying to find the most repeated word in a string. My code is as follows:

        public class Word {
            private String toWord;
            private int count;

            public Word(int count, String word) {
                toWord = word;
                this.count = count;
            }

            public static void main(String args[]) {
                String str = "my name is neo and my other name is also neo because I am neo";
                String[] str1 = str.split(" ");
                Word w1 = new Word(0, str1[0]);
                LinkedList<Word> list = new LinkedList<Word>();
                list.add(w1);
                ListIterator itr = list.listIterator();
                for (int i = 1; i < str1.length; i++) {
                    while (itr.hasNext()) {
                        if (str1[i].equals(????))   // how do I compare here?
                            ;
                        else
                            list.add(new Word(0, str1[i]));
                    }
                }
            }
        }

    How do I compare a string from the string array str1 to the string stored in the linked list, and how do I then increase the respective count? I will then print the string with the highest count, but I don't know how to do that either.
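
    A map keyed by the word is the usual shape for this. A minimal sketch of the whole task, using only java.util:

        import java.util.HashMap;
        import java.util.Map;

        public class MostRepeatedWord {
            public static void main(String[] args) {
                String str = "my name is neo and my other name is also neo because I am neo";

                // Count occurrences of each word.
                Map<String, Integer> counts = new HashMap<String, Integer>();
                for (String word : str.split(" ")) {
                    Integer c = counts.get(word);
                    counts.put(word, c == null ? 1 : c + 1);
                }

                // Find the entry with the highest count.
                String best = null;
                int bestCount = 0;
                for (Map.Entry<String, Integer> e : counts.entrySet()) {
                    if (e.getValue() > bestCount) {
                        best = e.getKey();
                        bestCount = e.getValue();
                    }
                }
                System.out.println(best + " appears " + bestCount + " times");
            }
        }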

  • Question about WeakHashMap

    - by michael
    Hi. The Javadoc of WeakHashMap (http://java.sun.com/j2se/1.4.2/docs/api/java/util/WeakHashMap.html) says: "Each key object in a WeakHashMap is stored indirectly as the referent of a weak reference. Therefore a key will automatically be removed only after the weak references to it, both inside and outside of the map, have been cleared by the garbage collector." And then: "Note that a value object may refer indirectly to its key via the WeakHashMap itself; that is, a value object may strongly refer to some other key object whose associated value object, in turn, strongly refers to the key of the first value object." But shouldn't both the key and the value be held by weak references in a WeakHashMap? That is, if memory is low, the GC would free the memory held by the value object (since the value object most likely takes up more memory than the key object in most cases), and once the GC frees the value object, the key object could be freed as well. Basically, I am looking for a HashMap that reduces memory usage when memory is low (the GC collects the value and key objects if necessary). Is that possible in Java? Thank you.
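
    WeakHashMap is built around weakly referenced keys, not memory-sensitive caching; for the latter, the usual workaround is to hold values behind SoftReferences, which the GC clears only under memory pressure. A minimal sketch (not a drop-in Map implementation):

        import java.lang.ref.SoftReference;
        import java.util.HashMap;
        import java.util.Map;

        public class SoftValueCache<K, V> {
            private final Map<K, SoftReference<V>> map =
                new HashMap<K, SoftReference<V>>();

            public void put(K key, V value) {
                map.put(key, new SoftReference<V>(value));
            }

            public V get(K key) {
                SoftReference<V> ref = map.get(key);
                if (ref == null) return null;
                V value = ref.get();
                if (value == null) map.remove(key); // value was collected
                return value;
            }
        }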

  • How can I alter an external variable from inside my AJAX?

    - by tmedge
    I keep running into the same two problems. I have been trying to use Remy Sharp's wonderful tagSuggest plugin, and it works great - until I try to use an AJAX call to get tags from my database. My setGlobalTags() works great with my predefined myTagList at the top of the function. What I want to do is set myTagList equal to the result from my AJAX call. My problem is that I can neither call setGlobalTags() from inside my success or error functions, nor actually alter the original myTagList. Also, I keep having this other issue: I keep this code in my master page, and my AJAX returns success on almost every page. I only (and always) get the error alert when I navigate to a page that actually contains something with id="ParentTags". I don't see why this is happening, because my $('#ParentTags').tagSuggest(); comes after my AJAX call. I realize that this is probably just some dumb conventions error, but I am new to this and I'm here to learn from you guys. Thanks in advance!

        $(function() {
            var myTagList = ['test', 'testMore', 'testALot'];
            $.ajax({
                type: "POST",
                url: 'Admin/GetTagList',
                dataType: 'json',
                success: function(resultTags) {
                    myTagList = resultTags;
                    alert(myTagList[0]);
                    setGlobalTags(myTagList);
                },
                error: function() {
                    alert('Error');
                    setGlobalTags(myTagList);
                }
            });
            setGlobalTags(myTagList);
            $('#ParentTags').tagSuggest();
        });
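
    The underlying issue is timing: $.ajax is asynchronous, so the last two lines run before either callback fires, and tagSuggest() is initialized with the fallback list. A sketch of the usual fix - move everything that depends on the response into the callbacks:

        $(function () {
            var fallbackTags = ['test', 'testMore', 'testALot'];
            $.ajax({
                type: 'POST',
                url: 'Admin/GetTagList',
                dataType: 'json',
                success: function (resultTags) {
                    setGlobalTags(resultTags);   // runs after the response
                    $('#ParentTags').tagSuggest();
                },
                error: function () {
                    setGlobalTags(fallbackTags); // fall back to the defaults
                    $('#ParentTags').tagSuggest();
                }
            });
        });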

  • What are the fastest-performing options for a read-only, unordered collection of unique strings?

    - by Dan Tao
    Disclaimer: I realize the totally obvious answer to this question is HashSet<string>. It is absurdly fast, it is unordered, and its values are unique.

    But I'm just wondering, because HashSet<T> is a mutable class, so it has Add, Remove, etc.; and so I am not sure if the underlying data structure that makes these operations possible makes certain performance sacrifices when it comes to read operations - in particular, I'm concerned with Contains. Basically, I'm wondering what are the absolute fastest-performing data structures in existence that can supply a Contains method for objects of type string. Within or outside of the .NET framework itself.

    I'm interested in all kinds of answers, regardless of their limitations. For example I can imagine that some structure might be restricted to strings of a certain length, or may be optimized depending on the problem domain (e.g., range of possible input values), etc. If it exists, I want to hear about it.

    One last thing: I'm not restricting this to read-only data structures. Obviously any read-write data structure could be embedded inside a read-only wrapper. The only reason I even mentioned the word "read-only" is that I don't have any requirement for a data structure to allow adding, removing, etc. If it has those functions, though, I won't complain.
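
    One structure worth benchmarking against HashSet<string> is a trie: Contains costs O(length) with no hash computation, though in practice HashSet's single hash pass is hard to beat. A minimal build-once sketch:

        using System.Collections.Generic;

        public sealed class StringTrie
        {
            private sealed class Node
            {
                public readonly Dictionary<char, Node> Children =
                    new Dictionary<char, Node>();
                public bool Terminal;
            }

            private readonly Node root = new Node();

            public StringTrie(IEnumerable<string> words)
            {
                foreach (var word in words)
                {
                    var node = root;
                    foreach (var c in word)
                    {
                        Node next;
                        if (!node.Children.TryGetValue(c, out next))
                            node.Children[c] = next = new Node();
                        node = next;
                    }
                    node.Terminal = true;
                }
            }

            public bool Contains(string word)
            {
                var node = root;
                foreach (var c in word)
                    if (!node.Children.TryGetValue(c, out node))
                        return false;
                return node.Terminal;
            }
        }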

  • Choose a XML node in SQL Server based on max value of a child element

    - by Jay
    I am trying to select, from the SQL Server 2005 XML datatype, some values based on the max date located in a child node. I have multiple rows with XML similar to the following stored in a field in SQL Server:

        <user>
          <name>Joe</name>
          <token>
            <id>ABC123</id>
            <endDate>2013-06-16 18:48:50.111</endDate>
          </token>
          <token>
            <id>XYZ456</id>
            <endDate>2014-01-01 18:48:50.111</endDate>
          </token>
        </user>

    I want to perform a select from this XML column that determines the max date within the token elements and returns a row similar to the result below for each record:

        Joe    XYZ456    2014-01-01 18:48:50.111

    I have tried to find a max function for XPath that would allow me to select the correct token element, but I couldn't find one that would work. I also tried to use the SQL MAX function, but I wasn't able to get it working with that method either. If I only have a single token it of course works fine, but when I have more than one I get NULL - most likely because the query doesn't know which date to pull. I was hoping there would be a way to specify a where clause [max(endDate)] on the token element, but I haven't found a way to do that. Here is an example of the one that works when I only have a single token:

        SELECT
            XMLCOL.query('user/name').value('.', 'NVARCHAR(20)') AS name,
            XMLCOL.query('user/token/id').value('.', 'NVARCHAR(20)') AS id,
            XMLCOL.query('user/token/endDate').value('.', 'DATETIME') AS endDate
        FROM MYTABLE
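
    One approach that works on SQL Server 2005 is to shred the tokens with nodes() and keep the latest one per row with ROW_NUMBER(). A sketch, assuming the table has a key column to partition by (PK below is a placeholder):

        SELECT name, id, endDate
        FROM (
            SELECT
                XMLCOL.value('(user/name)[1]', 'NVARCHAR(20)') AS name,
                t.c.value('(id)[1]',      'NVARCHAR(20)')      AS id,
                t.c.value('(endDate)[1]', 'DATETIME')          AS endDate,
                ROW_NUMBER() OVER (
                    PARTITION BY MYTABLE.PK
                    ORDER BY t.c.value('(endDate)[1]', 'DATETIME') DESC
                ) AS rn
            FROM MYTABLE
            CROSS APPLY XMLCOL.nodes('user/token') AS t(c)
        ) AS latest
        WHERE rn = 1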

  • Apply CSS style to anchor problem

    - by Jake
    Using jQuery I have a tab-clicking mechanism: nothing but anchor tags that return false but call JavaScript functions to run some events on the page. I am using jQuery to apply an opacity style to the active anchor, while the sibling anchors get a lower opacity. My code looks like this:

        $("#menutab li a").click(function(){
            $(this).animate({opacity: '1'}, 1000);
            $(this).siblings().animate({opacity: '.25'}, 1000);
        });

    I would think this code would act only on the clicked element, apply that CSS style to it, and apply the other style to all the anchor tags except the clicked one. It kind of does that, but it also leaves the earlier-clicked element at opacity = 1. So if I click an element, its opacity is set to 1, and then if I click another one, that one's opacity is set to 1 while the earlier-clicked one also stays at 1 instead of being set to .25 like the others.

    Edit: I changed the above code to:

        $("#menutab ul li").click(function(){
            $(this).children().animate({opacity: '1'}, 1000);
            $(this).siblings().children().animate({opacity: '.25'}, 1000);
        });

    and now I get the desired effect, except that when the first anchor in the list is clicked it doesn't follow the event rules. When the first one is clicked, it's as if the click event is not triggered, because no opacity style changes - which I don't understand.
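
    The first snippet's likely problem is that the anchors are not siblings of each other - each lives in its own li, so $(this).siblings() selects nothing and the previously active link is never dimmed. A sketch of a variant that sidesteps the structure question by selecting all tab links and excluding the clicked one:

        $('#menutab li a').click(function () {
            $(this).stop().animate({opacity: 1}, 1000);
            $('#menutab li a').not(this).stop().animate({opacity: 0.25}, 1000);
            return false;
        });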

  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a Windows Forms based .NET client app. Typically the service receives 50-100 inbound calls per minute to add or query jobs that are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to notify all clients of every state change of every job. This means 200+ callbacks to clients per second.

    Question: This whole implementation is done using WCF duplex bindings and works perfectly fine with a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!
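
    Not a framework recommendation, but one standard way to relieve the callback storm is to invert it: let clients poll for changes since a version stamp, so the service never pushes to 100 connections. A minimal WCF sketch; all names are illustrative:

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [ServiceContract]
        public interface IJobStatusService
        {
            // Returns every state change newer than the client's last version.
            [OperationContract]
            List<JobStateChange> GetChangesSince(long lastSeenVersion);
        }

        [DataContract]
        public class JobStateChange
        {
            [DataMember] public int JobId { get; set; }
            [DataMember] public string NewState { get; set; }
            [DataMember] public long Version { get; set; }
        }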

  • JQuery methods and DOM properties

    - by Bob Smith
    I am confused as to when I can use DOM properties and when I can use jQuery methods on a jQuery object. Say I use a selector:

        var $elemSel = $('#myDiv').find('[id *= \'select\']');

    At this point, $elemSel is a jQuery object, which I understand to be a wrapper around the array of DOM elements. I could get a reference to the DOM elements by iterating through the $elemSel object/array (correct?). My questions:

    1. Is there a way to convert this $elemSel into a regular, non-jQuery array of DOM elements?
    2. Can I combine DOM properties and jQuery methods at the same time, something like this (nodeName is DOM-related, children is jQuery-related)?

        $elemSel.children('td').nodeName

    Edit: What's wrong with this?

        $elemSel.get(0).is(':checked')

    Edit 2: Thanks for the responses. I understand now that I can use get(0) to get a DOM element. Additional questions:

    1. How would I convert a DOM element to a jQuery object?
    2. If I assign "this" to a variable, is that new variable DOM or jQuery? If it's jQuery, how can I convert it to a DOM element, since I can't use get(0)?

        var $elemTd = $(this);

    3. When I do an assignment like the one above, I have seen some code samples not include the $ sign in the variable name. Why?

    And as for my original question: can I combine DOM properties and jQuery functions at the same time on a jQuery object?

        $elemSel.children('td').nodeName
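
    A quick sketch covering the conversions asked about above:

        var $cells = $('#myDiv').find('[id*="select"]').children('td');

        var domArray = $cells.get();   // plain array of DOM elements
        var first    = $cells.get(0);  // a single raw DOM element
        alert(first.nodeName);         // DOM property: works on an element

        var $first = $(first);         // wrap a DOM element back into jQuery
        alert($first.is(':checked'));  // jQuery method: works on the wrapper

        // $elemSel.get(0).is(':checked') fails because get(0) returns a raw
        // DOM node, and DOM nodes have no .is() method; use $elemSel.eq(0)
        // to stay inside jQuery instead.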

  • MS Query returns data inside itself but does not export it to Excel

    - by kappa
    Hi, I'm having a strange problem with Excel and MS Query. I'm using MS Query to run a T-SQL query against Microsoft SQL Server 2000 and return the results to Excel. To do this, I open Excel, go to Data - Import external data - New database query, select my data source, paste the SQL script into MS Query, and click File - Return data to Microsoft Office Excel, leaving all the query options at their defaults. This works fine for many other Excel files, but this time, although MS Query shows the correct data when I paste the SQL script, after returning to Excel all I get is the query name in the upper left cell, with no data returned. I fear the cause could be the SQL script, as it contains some advanced constructs like UNION ALL, UDFs, and variables. Here's the script:

        declare @date smalldatetime
        set @date = dateadd(day, datediff(day, 0, getdate()), 0)

        select [date], sum([hours]) as [hours]
        from (
            select [date], [hours] from [server].[dbo].[udf] (84, '2010-01-01', @date)
            union all
            select [date], [hours] from [server].[dbo].[udf] (89, '2010-01-01', @date)
            union all
            select [date], [hours] from [server].[dbo].[udf] (93, '2010-01-01', @date)
        ) as [a]
        group by [date]
        order by [date] asc

    I can't get rid of the UDFs, as they do advanced grouping involving cursors and temporary tables, nor can I remove the variable, as the UDF won't accept dateadd(day, datediff(day, 0, getdate()), 0) as a parameter. Any ideas? Thanks in advance, Andrea.
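
    One thing worth trying - offered as a guess, not a confirmed fix: multi-statement batches emit "rows affected" messages that some ODBC consumers read as the first (empty) result, so suppressing them sometimes brings the data back:

        set nocount on
        declare @date smalldatetime
        set @date = dateadd(day, datediff(day, 0, getdate()), 0)
        -- ... rest of the script unchanged ...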

  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:

    1. A .NET 3.5 WinForms client app accessing SQL Server 2008
    2. Some queries returning relatively large amounts of data are used quite often by a form
    3. Users are using a local SQL Express instance and restarting their machines at least daily
    4. Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15 s on a fast machine to execute. Afterwards the same queries take only 3 s. Of course this comes from the fact that no data is cached and it must be loaded from disk first. My question: would it be possible to force the loading of the required data into the SQL Server cache in advance?

    Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast. However, I don't want to pull the result of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
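
    One sketch of a server-side warm-up that returns nothing to the client: aggregate over the same tables and indexes the slow queries touch, so the pages land in the buffer pool without producing a result set. Table and index names below are placeholders:

        create procedure dbo.WarmCache
        as
        begin
            set nocount on;
            declare @dummy bigint;
            -- Reads every page of the index the real query uses,
            -- but sends no rows back to the caller.
            select @dummy = count_big(*)
            from dbo.BigTable with (index(IX_BigTable_Covering));
        end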

  • How do you parse the XDG/gnome/kde menu/desktop item structure in c++??

    - by Joe Soul-bringer
    I would like to parse the menu structure for GNOME Panels (the standard GNOME desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine, obtained through fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format, and that the menu structure is stored in XML files such as:

        /home/user/.config/menus/applications.menu

    I've looked here: http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec - but all they offer is the standard and some shell files to insert item entries. I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification; I want to find the existing menu structure. I've also looked here for more notes on these structures: http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en - but none of this gives me a good idea of how to determine the menu structure from a C/C++ program. The actual GNOME menu files seem to be horrifically hairy things: they don't seem to show the menu structure, but rather give an XML-coded description of all the changes the menus have gone through since installation. I assume GNOME Panels parses these files, so there's a function buried somewhere to do this, but I've yet to find where that function is after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to live elsewhere or is buried well. Thanks in advance.
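
    The library that implements the XDG menu spec on the GNOME side is gnome-menus (libgnome-menu); its headers are the place to look for the "buried function". Short of that, raw libxml2 can at least enumerate the top-level <Menu>/<Name> entries of an applications.menu file. A minimal sketch that ignores merges and includes, so it shows the raw file rather than the fully resolved menu:

        #include <libxml/parser.h>
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            xmlDocPtr doc = xmlReadFile(
                "/etc/xdg/menus/applications.menu", NULL, 0);
            if (!doc) return 1;

            xmlNodePtr root = xmlDocGetRootElement(doc); /* the root <Menu> */
            for (xmlNodePtr menu = root->children; menu; menu = menu->next) {
                if (menu->type != XML_ELEMENT_NODE ||
                    strcmp((const char *)menu->name, "Menu") != 0)
                    continue;
                for (xmlNodePtr n = menu->children; n; n = n->next) {
                    if (n->type == XML_ELEMENT_NODE &&
                        strcmp((const char *)n->name, "Name") == 0) {
                        xmlChar *txt = xmlNodeGetContent(n);
                        printf("Category: %s\n", (const char *)txt);
                        xmlFree(txt);
                    }
                }
            }
            xmlFreeDoc(doc);
            return 0;
        }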
