Search Results

Search found 140 results on 6 pages for 'buried shopno'.


  • O(N log N) Complexity - Similar to linear?

    - by gav
    Hey all, I think I'm going to get buried for asking something so trivial, but I'm a little confused. I have implemented quicksort in Java and C and was doing some basic comparisons. The graph came out as two straight lines, with the C version 4 ms faster than its Java counterpart over 100,000 random integers. The code for my tests can be found here: android-benchmarks. I wasn't sure what an (n log n) line would look like, but I didn't think it would be straight. I just wanted to check that this is the expected result and that I shouldn't go looking for an error in my code. I stuck the formula into Excel, and for base 10 it seems to be a straight line with a kink at the start. Is this because the difference between log(n) and log(n+1) increases linearly? Thanks, Gav
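    A quick way to see why the plot looks straight: a minimal sketch (not from the original post) that prints n log n next to the straight chord through the endpoints of the tested range. Over 10,000 to 100,000 elements the two differ by only a few percent, which is well within benchmark noise.

        public class NLogNDemo {
            static double nLogN(int n) {
                return n * (Math.log(n) / Math.log(2)); // n log2 n
            }

            public static void main(String[] args) {
                int lo = 10_000, hi = 100_000;
                // slope of the straight line through the endpoints
                double slope = (nLogN(hi) - nLogN(lo)) / (hi - lo);
                for (int n = lo; n <= hi; n += 10_000) {
                    double curve = nLogN(n);
                    double chord = nLogN(lo) + slope * (n - lo);
                    System.out.printf("n=%7d  n log n=%12.0f  chord=%12.0f  diff=%6.2f%%%n",
                            n, curve, chord, 100 * (curve - chord) / curve);
                }
            }
        }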

    Read the article

  • Is there a standardized (Meta?) Tag for the Date of a Website?

    - by Michael Stum
    One thing that search engines really suck at is the date when a website was created. You know the problem: you search for some CSS or JavaScript problem, and Google returns a ton of results from 2002 explaining how to fix the problem in IE 5.5 and Netscape 4.6, while the helpful articles are buried on Page 3. There is only one use for Page 3, and meaningful search results are not it. Anyway, I just wonder whether there is a standardized, or at least generally accepted, tag or meta tag that I can put on my own pages to indicate the date they were created? Not that it helps filter the old crap out of search results (especially since the people at #1 with their 2002 articles have zero incentive to change), but I'd just like to do my part :P
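    For illustration only, a hedged sketch of the kinds of tags that tend to get suggested for this; the first is an ad-hoc convention, the second is Dublin Core metadata, and neither is reliably honored by search engines:

        <meta name="date" content="2010-04-12">     <!-- ad-hoc, not standardized -->
        <meta name="DC.date" content="2010-04-12">  <!-- Dublin Core -->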

    Read the article

  • Cleanest way to store lists of filter coefficients in a C header

    - by Nick T
    I have many (~100 or so) filter coefficients, calculated with the aid of some Matlab and Excel, that I want to dump into a C header file for general use, but I'm not sure of the best way to do this. I started out like so:

        #define BUTTER 1
        #define BESSEL 2
        #define CHEBY 3

        #if FILT_TYPE == BUTTER
        #if FILT_ROLLOFF == 0.010
        #define B0 256
        #define B1 512
        #define B2 256
        #define A1 467
        #define A2 -214
        #elif FILT_ROLLOFF == 0.015
        #define B0 256
        #define B1 512
        // and so on...

    However, if I do that and shove them all into a header, I need to set the conditionals (FILT_TYPE, FILT_ROLLOFF) in my source before including it, which seems kinda nasty. What's more, if I have two or more filters that want different roll-offs/filter types, it won't work. I could #undef my five coefficients (A1-A2, B0-B2) in that coefficients file, but it still seems wrong to have to insert an #include buried in code.
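    One commonly suggested alternative, sketched below (not from the question; names are illustrative): store each coefficient set in a const struct and pick one by index or pointer, which sidesteps the #define/#undef juggling. Note also that the C preprocessor only evaluates integer constant expressions, so #if FILT_ROLLOFF == 0.010 will not behave as hoped anyway.

        #include <stdint.h>

        /* One record per (filter type, roll-off) pair. */
        typedef struct {
            int16_t b0, b1, b2;  /* feed-forward coefficients */
            int16_t a1, a2;      /* feedback coefficients     */
        } filter_coeffs_t;

        static const filter_coeffs_t butter_coeffs[] = {
            { 256, 512, 256, 467, -214 },  /* roll-off 0.010 */
            /* ...one entry per roll-off... */
        };

        /* Usage: several filters with different tables can coexist. */
        static const filter_coeffs_t *my_filter = &butter_coeffs[0];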

    Read the article

  • How/When/Where to Extend Gem Classes (via class_eval and Modules) in Rails 3?

    - by viatropos
    What is the recommended way to extend class behavior, via class_eval and modules (not by inheritance), if I want to extend a class buried in a gem from a Rails 3 app? An example: I want to add the ability to create permalinks for tags and categories (through the ActsAsTaggableOn and ActsAsCategory gems). They have defined Tag and Category models. I basically want to do this:

        Category.class_eval do
          has_friendly_id :title
        end

        Tag.class_eval do
          has_friendly_id :title
        end

    Even if there are other ways of adding this functionality that might be specific to the gem, what is the recommended way to add behavior to classes in a Rails 3 application like this? I have a few other gems I've created that I want to do this to, such as a Configuration model and an Asset model. I would like to be able to add an app/models/configuration.rb model class to my app and have it act as if I had just called class_eval. Anyway, how is this supposed to work? I can't find anything that covers this in any of the current Rails 3 blogs/docs/gists.
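    For what it's worth, a minimal sketch of one common placement (not from the post; file name illustrative): an initializer, so the patch runs after the gem's class is loaded.

        # config/initializers/tag_extensions.rb
        Tag.class_eval do
          has_friendly_id :title
        end

    Caveat: with class reloading in development mode the patch can be lost on reload; wrapping it in a to_prepare-style hook is the usual remedy (the exact hook varies across Rails 3.x versions).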

    Read the article

  • Do people value information or the aesthetic value of websites? [closed]

    - by fwfwfw
    I'm wondering why the web has to be so colorful. Meaning: all the information is buried deep beneath layers of Flash, JavaScript, HTML and images. Sure, good positioning of these media files creates aesthetic value, but how important is that to the user? Aren't people looking for information, after all? Why can't the internet be a uniform-looking data warehouse? As it is, we've got to dig through all the aesthetic junk using shady web-scraping techniques, unless an RSS feed or API is provided. Why can't we settle for just a dull grey button and framesets for navigation? Why can't all sites have a navigation frame on the left and top? Why can't all sites put their damn data in a normalized table tag?

    Read the article

  • Choosing an alternative (to Visual Basic) high-level programming language

    - by user370244
    I used to be a Visual Basic 6 programmer, and I was pleased with it: a high-level language that gets stuff done fast, is easy to learn, and is easy to work in; you can drag and drop controls onto a form and write your code. It was simply amazing. However, Microsoft buried VB6 and pointed us to VB.NET, which is so different that it's not the old VB anymore. I didn't like what Microsoft did, and I would like to look somewhere away from Microsoft and from any other proprietary language. I'm after a similar language that is easy, cross-platform (Windows/Linux), object-oriented, offers visual design, is non-proprietary, and compiles (for some guarding against reverse engineering). I'm a freelancer, so the choice is entirely mine. I don't care about the performance of the programs; the time taken to develop a given program is much more important. Desktop/database/GUI/networking programming is what I'm looking for. So, is there any such language offered by the open source community? Thank you so much.

    Read the article

  • Similar code detector

    - by Let_Me_Be
    I'm searching for a tool that can compare source code for similarity. We have a very trivial system right now that produces a huge number of false positives, and the real positives can easily get buried in them. My requirements are:

    reasonably small number of false positives
    good detection rate (yes, these two work against each other)
    ideally, more complex output than just a single value
    usable for C (C99) and C++ (C++03, and optimally C++11)
    still maintained
    usable for comparing two source files against each other
    usable in non-interactive mode

    EDIT: To avoid confusion, the following two code snippets are equivalent and should be detected as such:

        for (int i = 0; i < 10; i++) { bla; }

        int i = 0;
        while (i < 10) { bla; i++; }

    The same here:

        int x = 10;
        y = x + 5;

        int a = 10;
        y = a + 5;

    Read the article

  • Access a property via string with array in PHP?

    - by sprugman
    (This is in Drupal, but I don't really think that matters.) I have a big list of properties that I need to map between two objects, and in one of them the value I need is buried inside an array. I'm hoping to avoid hard-coding the property names in the code. If I have a class like this:

        class Product {
            public $colors, $sizes;
        }

    I can access the properties like this:

        $props = array('colors', 'sizes');
        foreach ($props as $p) {
            $this->$p = $other_object->$p;
        }

    As far as I can tell, if each of the properties on the left is an array, I can't do this:

        foreach ($props as $p) {
            $this->$p[0]['value'] = $other_object->$p;
        }

    Is that correct, or am I missing some clever way around this?
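    For reference, a sketch of the usual workaround (not from the post): PHP 5 parses $this->$p[0] as $this->{$p[0]}, so braces are needed to make the array indices apply to the property rather than to $p itself:

        foreach ($props as $p) {
            $this->{$p}[0]['value'] = $other_object->$p;
        }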

    Read the article

  • NDepend 4.0 Released

    - by Anthony Trudeau
    Last week version 4.0 of NDepend was released. NDepend is a Visual Studio add-in designed for intense code analysis with the goal of high-quality code. A month ago I wrapped up my evaluation of the previous version of NDepend.

    The new version contains many minor changes, several bug fixes, and adds about 50 new code rules. It also adds support for Visual Studio 11, .NET Framework 4.5, and Silverlight 5.0. But the biggest change was the shift from CQL to CQLinq.

    Introducing CQLinq

    The latest version replaces the CQL rules language with CQLinq (CQL is still an option, although the editor is buried). As you might guess, CQLinq is a flavor of LINQ designed specifically for code rules. The best way to illustrate the differences is with an example. I used the following CQL example in Part 3 of my review:

        WARN IF Count > 0 IN SELECT TYPES WHERE IsInterface AND !NameLike "I"

    This same query looks like this when implemented in CQLinq:

        warnif count > 0
        from t in Types
        where t.IsInterface == true && !t.NameLike("I")
        select t

    I like the syntax and it is a natural fit, but I found writing the queries frustrating in the Queries and Rules Edit window, which replaces the CQL Query Edit window. The new editor has the same style of Intellisense as the previous editor, but it has a few annoyances. The error indicator is a red block, which has a tendency to obscure your cursor. Additionally, writing CQLinq queries is like writing plain old LINQ queries, so the fact that the editor uses Enter to select from Intellisense instead of Tab is jarring. These issues can be an obstacle to writing queries quickly.

    CQLinq makes it possible to write rules that weren't possible before. Additionally, a JustMyCode domain is now available, making it easy to eliminate generated code from the analysis.

    Should You Buy?

    I recommend NDepend overall. It has some rough points for me, which I have detailed in my earlier evaluation (starting here). But it's definitely worth the money. The bigger question is: should you pay for the upgrade to 4.0? At this point I'm on the fence, but I would go for it if you need support for Visual Studio 11, .NET Framework 4.5, or Silverlight 5.0, or if you need one of the many rules that weren't possible before CQLinq.

    Disclaimer: Patrick Smacchia contacted me about reviewing NDepend. I received a free license in return for sharing my experiences and talking about the capabilities of the add-in on this site. There is no expectation of a positive review elicited from the author of NDepend.

    Resources: NDepend Release Notes

    Read the article

  • Dark Sun Dispatch 001.5 (a review of City Under The Sand)

    - by Chris Williams
    City Under The Sand - a review

    I'm moderately familiar with the Dark Sun setting. I read the other Dark Sun novels ages ago, and I recently started running a D&D 4.0 campaign in the Dark Sun world, so I picked up this book to help re-familiarize myself with the setting. Overall, it did accomplish that, in a limited way.

    The book takes place in Nibenay and a neighboring expanse of desert that includes a formerly buried city, a small town, and a bandit outpost. The book does an interesting job of describing Nibenese politics and the court of the ruling Sorcerer King, his templars, and the expected jockeying for position that occurs between the Templar Wives. There is a fair amount of combat, which was interesting and fairly well detailed. The ensemble cast is introduced and eventually brought together over the first few chapters. There's not a lot of backstory on most of the characters, but you get a feel for them fairly quickly. The storyline was somewhat predictable after the first third of the book. Some of the reviews on Amazon complain about the 2-dimensional characterizations, and yes, there were some... but it's easy to ignore because there is a lot going on in the book: several interwoven plotlines that all eventually converge.

    Where the book falls short: First, it appears to have been edited by a 4th grader who knows how to use spellcheck but lacks the attention to detail to notice the frequent occurrence of incorrect words that don't make sense or change the context of the entire sentence. It happened just often enough to be distracting, and honestly, I expect better from WOTC. Second, there is a lot of buildup to the end of the story (the big fight, the confrontation between good and evil, etc.), which is handled in just a few pages, and then the story basically just ends. Kind of a letdown, honestly. There wasn't a big finish, and it wasn't a cliffhanger; it just wraps up neatly and ends. It felt pretty rushed.

    Overall, aside from the very end, I enjoyed it. I really liked the insight into that region of Athas, and it gave me some good ideas for fleshing out my own campaign. In that sense, the book served its purpose for me. If you're looking for a light read (got a 5-6 hour flight somewhere?) or you want to learn more about the Dark Sun setting, then I'd recommend this book.

    Read the article

  • PeopleSoft New Design Solves Navigation Problem

    - by Applications User Experience
    Anna Budovsky, User Experience Principal Designer, Applications User Experience

    In PeopleSoft we strive to improve the user experience on all levels. Simplifying navigation and streamlining access to the most important pages is always an important goal. No one likes wasting time waiting for pages to load and watching a spinning hourglass go on and on. Those performance-affecting server trips, page-load waits, and just-too-many clicks have been complained about for a long time. Something had to be done.

    A few new designs arrived in PeopleSoft 9.2 to help users access their everyday work areas more easily and quickly. For example, Dashboards and Work Centers aggregate the most-accessed information sections on a single page; Related Information allows users to complete transaction-related research without interrupting a transaction; and Secure Search takes users to a specific page directly. Today we'll talk about the Actions menu.

    Most PeopleSoft pages are shared between individual products and product lines. That means changing the content on a single page involves Oracle development and quality assurance time for making and testing the changes. In order to streamline navigation and cut down on accessing PeopleSoft pages one page at a time, we introduced a new menu design. The new menu allows access to shared pages without the Oracle development team making any local changes, and it works as an additional one-click path to specific high-traffic actionable pages.

    Let's look at how many steps it took to Change Salary for an employee in HCM 9.1 before:

    Figure 1. BEFORE: The 6 steps a user would take to Change Salary in PeopleSoft HCM 9.1

    In PeopleSoft 9.1 it took 5 steps + page loading time + additional verification time for making sure the correct employee is selected from the table. In PeopleSoft 9.2 it only takes 2 steps. To complete an Ad Hoc Change Salary action, the user can start from the HCM Manager's Dashboard, click the Actions menu within a table, choose a menu option, and access the correct employee's details page to take the action.

    Figure 2. AFTER: The 2 steps a user would take to Change Salary in PeopleSoft HCM 9.2

    The new menu is placed at the row level, which ensures the user accesses the correct employee's details page. The Actions menu separates menu options into hierarchical sections, which helps users scan and reach the correct option quickly. The new menu's small size and structure enable users to access high-traffic pages from any page and from any part of a page. No more spinning hourglass, no more multiple page loads. The flexible design fits anywhere on a page and provides a fast and reliable path to the correct destination within the product. Now users can:

    Access any target page, no matter how far it is buried from the starting point
    Reduce navigation and page-load time
    Improve productivity and reduce errors

    The new menu design is available and widely used in all PeopleSoft 9.2 product lines.

    Read the article

  • Is my JavaScript/jQuery methodology good? [migrated]

    - by absentx
    I am seeking critique on what has become my normal methodology of writing JavaScript code. I have become heavily reliant on the jQuery library, but I think it has also helped me learn the native language better. Anyway, please critique the following style of JavaScript coding. Buried in it are a lot of questions of scope; if you could point out the strengths and weaknesses of this style, I would appreciate it.

        var critique = {
            start: function() {
                globalness = 'GLOBAL-GLOBAL'; // Available to all of critique's methods
                var notglobalness = 'LOCAL-LOCAL'; // Only available to critique's start method
                // Am I using the "method" terminology properly here??

                $('#stuff').on('click', 'a.closer-target', function() {
                    $target = $(this);
                    if ($target.hasClass('active')) {
                        $target.removeClass('active');
                    } else {
                        $target.addClass('active');
                        critique.madness($target);
                    }
                });

                console.log(notglobalness + ': at least I am useful at home');
                console.log('note here that: ' + notglobalness + ' is no longer available after this point, lets continue on:');
                critique.madness(notglobalness);
            },

            madness: function($e) {
                // Do a bunch of awesomeness with $e,
                // but continue to keep it separate because you think it's best to keep things isolated.
                // Send to the next function when complete here
                console.log('Here is globalness, which is still available from the start method of critique!! ' + globalness);
                console.log('Let us see if the globalness carries on to a new var object!!');
                console.log('The locally isolated variable of NOTGLOBALNESS is available here, because it was passed to this method. Let us show it:' + $e);
                carryOn.start();
            }
        }; // end critique

        var carryOn = {
            start: function() {
                console.log('any chance critique.globalness will work here??? lets see: ' + globalness);
                console.log('it absolutely does');
            }
        };

        $(document).ready(critique.start);

    (I always struggle with which of the Stack Exchange sites is best for posting "questions of theory" like this, but I think Programmers is the best; if not, as usual, a mod will move it, etc...)

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens ('TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen'), each of which has its own draw and update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control: dropping in some artwork, playing some sound effects, generally prettifying things a little. The trouble is, I'm not sure of the best way to chain animations, sound effects, and other timed general events. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded, like so:

        Update() {
            Animation anim1 = new Animation(.....);
            Animation anim2 = new Animation(.....);
            anim1.Run();
            if (anim1.State == AnimationState.Complete)
                anim2.Run();
            if (anim2.State == AnimationState.Complete)
                // Load 'PlayScreen' screen
        }

    But this doesn't seem so great; surely there must be a better way? I then thought, 'Hey, an AnimationManager! That'd be awesome!' But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it, to signal commands like start/end/repeat? I'd still need to keep a reference to the object in my Screen so that I could communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad.

    I've also tried using events: call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to true, all others false. On completion, the first animation raises an event which sets animation 2's flag to true (and so it then runs). Once animation 2 is complete, another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is about as simple as it gets, I know I'm overthinking this... it's just that the paradigm shift from web to game development is making me break out in a serious case of the stupids.
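    One pattern worth sketching here (not from the post; names are illustrative): a queue of timed steps that the screen drains in order, so "drop in art, play sound, wait, hand over control" becomes data instead of nested flags and events.

        using System;
        using System.Collections.Generic;

        public class StepSequence
        {
            private class Step { public float Duration; public Action OnStart; }

            private readonly Queue<Step> steps = new Queue<Step>();
            private float remaining;
            private bool running;

            public bool Finished { get { return !running && steps.Count == 0; } }

            public void Add(float seconds, Action onStart)
            {
                steps.Enqueue(new Step { Duration = seconds, OnStart = onStart });
            }

            // Call once per frame with elapsed seconds (e.g. from Update()).
            public void Update(float dt)
            {
                if (!running)
                {
                    if (steps.Count == 0) return;
                    Step next = steps.Dequeue();
                    remaining = next.Duration;
                    if (next.OnStart != null) next.OnStart();
                    running = true;
                }
                remaining -= dt;
                if (remaining <= 0f) running = false;
            }
        }

    The 'PrePlayScreen' Update then reduces to seq.Update(dt) plus a Finished check that swaps in the PlayScreen, and the screen keeps its one reference to the sequence instead of to every animation.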

    Read the article

  • What are some of the best answer file settings for a WDS Deployment?

    - by drpcken
    I've had my head buried in answer files for days now and have gotten quite comfortable setting them up, testing, etc. I use a handful of components to help my migrations. For my unattend.xml I like:

    Windows-International-Core-WinPE: good for setting locales in the preboot environment (en-US for us English US speakers). Keeps me from having to set these on the initial image boot.
    Windows-Setup_neutral: I like the WindowsDeploymentServices -> ImageSelection setting, especially if I'm only pushing a single image. This keeps me from having to select it each time.

    My OOBE_Unattend.xml is really useful, and I barely have to touch anything during this part of the installation:

    Windows-Shell-Setup_neutral: lets me put in a ProductKey for my MAK volume license (very useful and time-saving). I can also set the TimeZone for the installation.
    Windows-UnattendedJoin_neutral: I couldn't live without this component. It joins the machine to my domain before logging in as a domain administrator. I would hate to lose this ability.
    Windows-International-Core: again, this component really speeds up the OOBE process. I configure my locales and time zone so I don't have to do it by hand when the machine enters OOBE.
    Windows-Shell-Setup: allows you to configure an autologon when the new machine is finished. I like to log on as a domain admin automatically for customizing and troubleshooting the new machine immediately after it is imaged. The OOBE component under here also lets me skip the EULA, hide wireless setup, and set my default NetworkLocation. All of this makes the entire OOBE totally automated.

    What are some other good components I am missing, as far as helping me get these images pushed and configured as quickly as possible?
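    For readers unfamiliar with the component the poster couldn't live without, a hedged sketch of roughly what a domain-join block looks like in an answer file (all values are placeholders, and attribute details vary by image architecture):

        <component name="Microsoft-Windows-UnattendedJoin" processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS">
          <Identification>
            <Credentials>
              <Domain>EXAMPLE</Domain>
              <Username>joinaccount</Username>
              <Password>********</Password>
            </Credentials>
            <JoinDomain>example.local</JoinDomain>
          </Identification>
        </component>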

    Read the article

  • DataContractSerializer: type is not serializable because it is not public?

    - by Michael B. McLaughlin
    I recently ran into an odd and annoying error when working with the DataContractSerializer class for a WP7 project. I thought I'd share it to save others who might encounter it the same annoyance I had. I had an instance of ObservableCollection<T> that I was trying to serialize (with T being a class I wrote for the project), and whenever it hit the code to save it, it would give me:

        The data contract type 'ProjectName.MyMagicItemsClass' is not serializable because it is not public. Making the type public will fix this error. Alternatively, you can make it internal, and use the InternalsVisibleToAttribute attribute on your assembly in order to enable serialization of internal members - see documentation for more details. Be aware that doing so has certain security implications.

    This, of course, was malarkey. I was trying to write an instance of MyAwesomeClass that looked like this:

        [DataContract]
        public class MyAwesomeClass
        {
            [DataMember]
            public ObservableCollection<MyMagicItemsClass> GreatItems { get; set; }

            [DataMember]
            public ObservableCollection<MyMagicItemsClass> SuperbItems { get; set; }

            public MyAwesomeClass()
            {
                GreatItems = new ObservableCollection<MyMagicItemsClass>();
                SuperbItems = new ObservableCollection<MyMagicItemsClass>();
            }
        }

    That's all well and fine. And MyMagicItemsClass was also public, with a parameterless public constructor. It too had the DataContractAttribute applied, and it had the DataMemberAttribute applied to all the properties and fields I wanted to serialize. Everything should be cool, but it wasn't, because I kept getting that "not public" exception. I could tell you about all the things I tried (generating a List<T> on the fly to make sure it wasn't ObservableCollection<T>, trying to serialize the collections directly, moving it all to a separate library project, etc.), but I want to keep this short.

    In the end, I remembered the "Debug -> Exceptions..." VS menu option that brings up the list of exception-related circumstances under which the Visual Studio debugger will break. I checked the "Thrown" checkbox for "Common Language Runtime Exceptions", started the project under the debugger, and voilà: the true problem revealed itself. Some of my properties had fairly elaborate setters whose logic I wanted to ignore. So for some of them, I applied an IgnoreDataMember attribute to the property and applied the DataMember attribute to the underlying field instead. All of those fields, in line with good programming practices, were private.

    Well, it just so happens that WP7 apps run in a "partial trust" environment, and outside of "full trust"-land, DataContractSerializer refuses to serialize or deserialize non-public members. Of course that exception was swallowed up internally by .NET, so all I ever saw was that bizarre message about things that I knew for certain were public being "not public". I changed all the private fields I was serializing to public, and everything worked just fine.

    In hindsight it all makes perfect sense. The serializer uses reflection to build up its graph of the object in order to write it out. In partial trust, you don't want people using reflection to get at non-public members of an object, since there are potential security problems with allowing that (you could break out of the sandbox pretty quickly by reflecting and calling the appropriate methods, or cause some havoc by reflecting and setting the appropriate fields in certain circumstances). The fact that you cannot reflect your own assembly seems a bit heavy-handed, but then again, I'm not a compiler writer or a framework designer, and I have no idea what sorts of difficulties would go into allowing that from a compilation standpoint or what sorts of security problems allowing it could present (if any).

    So, lesson learned. If you get an incomprehensible exception message, turn on break-on-all-thrown-exceptions and try running it again (it might take a couple of tries, depending) and see what pops out. Chances are you'll find the buried exception that actually explains what was going on. And if you're getting a weird exception when trying to use DataContractSerializer complaining about public types not being public, chances are you're trying to serialize a private or protected field/property.
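    A minimal illustration of the gotcha described above (a sketch, not from the original post):

        [DataContract]
        public class Example
        {
            [DataMember]
            public int Score;        // fine in partial trust: public member

            [DataMember]
            private int hiddenScore; // fails under partial trust: the serializer
                                     // cannot reflect non-public members there
        }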

    Read the article

  • Mobile Deals: the Consumer Wants You in Their Pocket

    - by Mike Stiles
    Mobile deals offer something we talk about a lot in social marketing: relevant content. If a consumer is already predisposed to liking your product and gets a timely deal for it that's easy and convenient to use, not only do you score on the marketing side, it clearly generates some of that precious ROI that's being demanded of social.

    First, a quick gut check on the public's adoption of mobile. Nielsen figures have 55.5% of US mobile owners using smartphones. If young people are indeed the future, you can count on the move to mobile exploding exponentially. Teens are the fastest-growing segment of smartphone users, and 58% of them have one. But the largest demographic of smartphone users is 25-34, at 74%. That tells you a focus on mobile will yield great results now, and even better results straight ahead.

    So we can tell, both from statistics and from all the faces around you that are buried in their smartphones, that this is where consumers are. But are they looking at you? Do you have a valid reason why they should? Everybody likes a good deal. BIA/Kelsey says US consumers will spend $3.6 billion this year on daily deals (the Groupons and LivingSocials of the world), up 87% from 2011. The report goes on to say over 26% of small businesses are either "very likely" or "extremely likely" to offer up a deal in the next 6 months. Retail Gazette reports 58% of consumers shop with coupons, a 40% increase in 4 years. When you consider that a deal can be the impetus for a real-world transaction, a first-time visit to a store, an online purchase, entry into a loyalty program, a social referral, a new fan or follower, etc., that 26% figure shows us there's a lot of opportunity being left on the table by brands.

    The existing and emerging technologies behind mobile devices make the benefits listed above possible. Take how mobile payment systems are being tied into deal delivery and loyalty programs. If it's really easy to use a coupon or deal, it'll get used. If it's complicated, it'll be passed over as "not worth it." When you can pay with your mobile via technologies that connect store and user, you get the deal, you get the loyalty credit, you pay, and your receipt is uploaded, all in one easy swipe. Nothing to keep track of, nothing to lose or forget about. And the store "knows" you, so future offers will be based on your tastes.

    Consider the endgame. A customer who's a fan of your belt buckle store's Facebook Page is in one of your physical retail locations. They pull up your app, because they've gotten used to a loyalty deal being offered when they go to your store. Voilà: a 10% discount, active for the next 30 minutes. Maybe the app also surfaces social references to your brand made by friends, so they can check out a buckle someone's raving about. If they aren't a fan of your Page or don't have your app, perhaps they've opted into location-based deal services, so you can still get them that 10% deal while they're in the store. Or maybe they've walked in with a pre-purchased Groupon or LivingSocial voucher. They pay with one swipe, and you've learned about their buying preferences, credited their loyalty account, and can encourage them to share a pic of their new buckle on social. Happy customer. Happy belt buckle company. All because the brand was willing to use the tech that's available to meet consumers where they are, incentivize them, and show them how much they're valued through rewards.

    Read the article

  • Using Recursive SQL and XML trick to PIVOT(OK, concat) a "Document Folder Structure Relationship" table, works like MySQL GROUP_CONCAT

    - by Kevin Shyr
    I'm in the process of building out a data warehouse and encountered this issue along the way. In the environment, there is a table that stores all the folders, one row per level. For example, if a document is created here: {App Path}\Level 1\Level 2\Level 3\{document}, then the DocumentFolder table would look like this:

        ID    ID_Parent    FolderName
        1     NULL         Level 1
        2     1            Level 2
        3     2            Level 3

    To my understanding, the table was built so that:

    Each proposal can have multiple documents stored at various locations.
    Different users working on the proposal will have different access levels to the folders; if one user is assigned access to a folder level, she/he can see all the subfolders and their content.

    Now, we understand from an application point of view why this table was built this way. But you can quickly see the pain it causes the report writer who has to show a document link on a report. I wasn't surprised to find that the report query had 5 self outer joins, which is at the mercy of nobody creating a document buried 6 levels deep, not to mention the degradation in performance.

    With the help of 2 posts (listed at the end of this post), I was able to come up with this solution:

    Use recursive SQL to build out the folder path.
    Use the SQL XML trick to concatenate the strings.

    Code (a reminder: I built this code in a stored procedure; if you copy the syntax into a simple query window and execute it, you'll get an incorrect syntax error):

        -- Get all folders and group them by the original DocumentFolderID in the PTSDocument table
        ;WITH DocFoldersByDocFolderID(PTSDocumentFolderID_Original, PTSDocumentFolderID_Parent, sDocumentFolder, nLevel)
        AS (
            -- first member
            SELECT 'PTSDocumentFolderID_Original' = d1.PTSDocumentFolderID
                 , PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = sName
                 , 'nLevel' = CONVERT(INT, 1000000)
            FROM (SELECT DISTINCT PTSDocumentFolderID
                  FROM dbo.PTSDocument_DY WITH(READPAST)) AS d1
                 INNER JOIN dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                     ON d1.PTSDocumentFolderID = df1.PTSDocumentFolderID
            UNION ALL
            -- recursive member
            SELECT ddf1.PTSDocumentFolderID_Original
                 , df1.PTSDocumentFolderID_Parent
                 , 'sDocumentFolder' = df1.sName
                 , 'nLevel' = ddf1.nLevel - 1
            FROM dbo.PTSDocumentFolder_DY AS df1 WITH(READPAST)
                 INNER JOIN DocFoldersByDocFolderID AS ddf1
                     ON df1.PTSDocumentFolderID = ddf1.PTSDocumentFolderID_Parent
        )
        -- Flatten out the folder path
        , DocFolderSingleByDocFolderID(PTSDocumentFolderID_Original, sDocumentFolder)
        AS (
            SELECT dfbdf.PTSDocumentFolderID_Original
                 , 'sDocumentFolder' = STUFF((SELECT '\' + sDocumentFolder
                                              FROM DocFoldersByDocFolderID
                                              WHERE (PTSDocumentFolderID_Original = dfbdf.PTSDocumentFolderID_Original)
                                              ORDER BY PTSDocumentFolderID_Original, nLevel
                                              FOR XML PATH ('')), 1, 1, '')
            FROM DocFoldersByDocFolderID AS dfbdf
            GROUP BY dfbdf.PTSDocumentFolderID_Original
        )

    And voilà: I use the second CTE to join back to my original query (which is now a CTE for the source, as we can now use MERGE to do INSERT and UPDATE at the same time).

    Each part of this solution would not solve the problem by itself, because:

    If I don't use recursion, I cannot build out the path properly. If I use the XML trick only, then I don't have the originating folder ID info that I need to link to the document.
    If I don't use the XML trick, then I don't have one row per document to show in the report. I could conceivably do this in a report function, but I'd rather not deal with the leading or trailing backslash and how to attach the document name.
    PIVOT doesn't do strings, and UNPIVOT runs into the same problem as the above.

    I'm excited that each version of SQL Server provides us new tools to solve old problems and/or enables us to solve problems in a more elegant way.

    The 2 posts that helped me along:
    Recursive Queries Using Common Table Expression
    How to use GROUP BY to concatenate strings in SQL server?

    Read the article

  • Getting types of the attributes in an ActiveRecord object

    - by Michael Neale
    I would like to know whether it is possible to get the types of an ActiveRecord object's attributes (as known by AR, e.g. in the migration script and database) programmatically (I know the data exists in there somewhere). For example, I can deal with all the attribute names:

        ar.attribute_names.each { |name| puts name }

    .attributes just returns a mapping of the names to their current values (e.g. no type info if the field isn't set). Some places I have seen it with the type information: in script/console, type the name of an AR entity:

        >> Driver
        => Driver(id: integer, name: string, created_at: datetime, updated_at: datetime)

    So clearly it knows the types. Also, there is .column_for_attribute, which takes an attribute name and returns a column object; the type is buried in the underlying database column object, but it doesn't appear to be a clean way to get it. I would also be interested in whether there is a way that is friendly to the new "ActiveModel" that is coming (Rails 3) and is decoupled from database specifics (but perhaps type info will not be part of it; I can't seem to find out whether it is). Thanks.
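    For reference, a sketch of the direct route (not from the post, but standard ActiveRecord API of this era): the column objects already expose the AR-level type as a symbol.

        # Each column object carries its AR type directly:
        Driver.columns.each { |c| puts "#{c.name}: #{c.type}" }
        # => id: integer
        #    name: string
        #    ...

        # Or for a single attribute:
        Driver.columns_hash['name'].type   # => :string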

    Read the article

  • Javascript/Greasemonkey: search for something then set result as a value

    - by thewinchester
    Ok, I'm a bit of a n00b when it comes to JS (I'm not the greatest programmer), so please be gentle, especially if my question has already been asked somewhere and I'm too stupid to find the right answer. Self-deprecation out of the way, let's get to the question.

    Problem: There is a site that my friends and I frequently use which doesn't display all the information we might like to know; in this case, an airline booking site and the class of travel. While the information is buried in the code of the page, it isn't displayed anywhere to the user. Using a Greasemonkey script, I'd like to liberate this piece of information and display it in a suitable format. Here's the pseudocode of what I'm looking to do:

        Search DOM for specified element
        Define variables
        Find a string of text
        If found
            Set result to a variable
            Write contents to page at a specific location (before a specified div)
        If not found
            Do nothing

    I think I've achieved most of it so far, except for the key bits of:

    Searching for the string: The page needs to search for the following piece of text in the page HEAD:

        mileageRequest += "&CLASSES=S,S-S,S-S";

    The content I need to extract and store is between the second equals sign and the last quote. The contents of this area can be any letter between A-Z. I'm not fussed about splitting it up into an array so I could use the elements individually at this stage.

    Writing the result to a location: taking that found piece of text and writing it to another location. Here is the code so far, with the missing bits noted:

        buttons = document.getElementById('buttons');
        // Search goes here
        var flightClasses = document.createElement("div");
        flightClasses.innerHTML = '<div id="flightClasses"> ' +
            '<h2>Travel classes</h2>' +
            'For the above segments, your flight classes are as follows:' +
            'write result here' +
            '</div>';
        main.parentNode.insertBefore(flightClasses, buttons);

    If anyone could help me, or point me in the right direction to finish this off, I'd appreciate it.
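    For the missing search step, a sketch of one way to do it (not from the post; the regex and variable names are illustrative):

        // Scan the page source for the CLASSES string and capture everything
        // between the second "=" and the closing quote, e.g. "S,S-S,S-S".
        var found = document.documentElement.innerHTML
            .match(/mileageRequest \+= "&CLASSES=([A-Z,-]+)";/);
        var classes = found ? found[1] : null;
        if (classes !== null) {
            // substitute classes for 'write result here' in the div below
        }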

    Read the article

  • HTML 4, HTML 5, XHTML, MIME types - the definitive resource

    - by deceze
    The topics of HTML vs. XHTML, and XHTML served as text/html vs. XHTML served as XHTML, are quite complex. Unfortunately, it's hard to get a complete picture, since information is spread mostly in bits and pieces around the web or is buried deep in W3C tech jargon. In addition, there's some misinformation being circulated. I propose to make this the definitive SO resource on the topic, describing the most important aspects of:

    HTML 4
    HTML 5
    XHTML 1.0/1.1 as text/html
    XHTML 1.0/1.1 as XHTML

    What are the practical implications of each? What are common pitfalls? What is the importance of proper MIME types for each? How do different browsers handle them?

    I'd like to see one answer per technology. I'm making this a community wiki, so rather than contributing redundant answers, please edit answers to complete the picture. Feel free to start with stubs. Also feel free to edit this question.
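    (For concreteness, an aside not part of the original question: the "as text/html" vs. "as XHTML" split above refers to the Content-Type header the server sends with the page.)

        Content-Type: text/html              (HTML, or XHTML served "as text/html")
        Content-Type: application/xhtml+xml  (XHTML served "as XHTML")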

    Read the article

  • xsd.exe - schema to class - for use with WCF

    - by NealWalters
    I have created a schema as an agreed-upon interface between our company and an external company. I am now creating a WCF C# web service to handle the interface. I ran the XSD utility and it created a C# class. The schema was built in BizTalk and references other schemas, so all in all there are over 15 classes being generated. I put the [DataContract] attribute in front of each of the classes. Do I have to put the [DataMember] attribute on every single property? When I generate a test client program, the proxy does not have any code for any of these 15 classes. We used to use this technique with .asmx services, but I'm not sure it works the same with WCF. If we change the schema, we would want to regenerate the WCF class, and then we would have to redecorate it each time with all the [DataMember] attributes? Is there a newer tool similar to XSD.exe that will work better with WCF? Thanks, Neal Walters

    SOLUTION (buried in one of Saunders' answers/comments): add the XmlSerializerFormat attribute to the interface definition:

        [OperationContract]
        [XmlSerializerFormat] // ADD THIS LINE
        Transaction SubmitTransaction(Transaction transactionIn);

    Two notes: 1) After I did this, I saw a lot more .xsds in my proxy (Service Reference) test client program, but I didn't see the new classes in my IntelliSense. 2) For some reason, until I did a build on the project, I didn't get all the classes in IntelliSense (not sure why).
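    (As for a newer XSD.exe-style tool: svcutil.exe can generate data contract types straight from a schema. A hedged example invocation, assuming the standard data-contract-only switch; check svcutil /? on your SDK version:)

        svcutil.exe /dataContractOnly YourSchema.xsd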

    Read the article

  • MySQL German accent-insensitive search in full-text searches

    - by lukaszsadowski
    Let's take an example hotels table:

        CREATE TABLE `hotels` (
          `HotelNo` varchar(4) character set latin1 NOT NULL default '0000',
          `Hotel` varchar(80) character set latin1 NOT NULL default '',
          `City` varchar(100) character set latin1 default NULL,
          `CityFR` varchar(100) character set latin1 default NULL,
          `Region` varchar(50) character set latin1 default NULL,
          `RegionFR` varchar(100) character set latin1 default NULL,
          `Country` varchar(50) character set latin1 default NULL,
          `CountryFR` varchar(50) character set latin1 default NULL,
          `HotelText` text character set latin1,
          `HotelTextFR` text character set latin1,
          `tagsforsearch` text character set latin1,
          `tagsforsearchFR` text character set latin1,
          PRIMARY KEY (`HotelNo`),
          FULLTEXT KEY `fulltextHotelSearch` (`HotelNo`,`Hotel`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_german1_ci;

    In this table, suppose we have only one hotel, with Region = 'Graubünden' (please note the umlaut ü character). Now I want to achieve the same search match for both phrases: 'graubunden' and 'graubünden'. This is simple with MySQL's built-in collations in regular searches, as follows:

        SELECT * FROM `hotels`
        WHERE `Region` LIKE CONVERT(_utf8 '%graubunden%' USING latin1) COLLATE latin1_german1_ci

    This works fine for both 'graubunden' and 'graubünden', and as a result I receive the proper row. The problem is with MySQL full-text search. What's wrong with this SQL statement?

        SELECT * FROM hotels
        WHERE MATCH (`HotelNo`,`Hotel`,`Address`,`City`,`CityFR`,`Region`,`RegionFR`,`Country`,`CountryFR`,`HotelText`,`HotelTextFR`,`tagsforsearch`,`tagsforsearchFR`)
        AGAINST( CONVERT('+graubunden' USING latin1) COLLATE latin1_german1_ci IN BOOLEAN MODE)
        ORDER BY Country ASC, Region ASC, City ASC

    This doesn't return any result. Any ideas where the dog is buried?

    Read the article

  • How to implement session timeout on the web server side?

    - by Morgan Cheng
    I came across a web framework that implements in-memory sessions this way: the session object is added to a cache with a timeout, and when the time is up, the session is removed from the cache automatically. To protect against race conditions, each request must acquire a lock on the given session object to proceed, and each request "touches" the session in the cache to refresh the timeout.

    Everything looks fine until this scenario is discovered. Say one operation takes a long time, longer than the timeout. Another request comes in and waits on the session lock, which is currently held by the long-running request. Finally, the long-running request finishes and releases the lock. But since it took longer than the timeout, the session object has already been removed from the cache; the only request holding the lock never had a chance to "touch" the session object and refresh it. The second request gets the lock but cannot retrieve the expired session object. Oops...

    To fix this issue, the second request has to re-create the session object. But this is like digging a buried body out of its tomb and trying to bring it back to life; it leads to buggy code. I'm wondering what the best way is to implement a session timeout that handles this scenario. I know current platforms must have good session mechanisms; I just want to know the under-the-hood how.
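    One way to close the hole described above, as a minimal sketch (not from the post): treat a held or contended lock as a keep-alive, so the expiry clock effectively only runs while no request is using the session.

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.locks.ReentrantLock;

        class SessionStore {
            static final long TIMEOUT_MS = 30 * 60 * 1000;

            static final class Entry {
                final ReentrantLock lock = new ReentrantLock();
                volatile long lastTouched = System.currentTimeMillis();
                final ConcurrentHashMap<String, Object> data = new ConcurrentHashMap<>();
            }

            private final ConcurrentHashMap<String, Entry> sessions = new ConcurrentHashMap<>();

            Entry acquire(String id) {
                Entry e = sessions.computeIfAbsent(id, k -> new Entry());
                // Note: a small race remains between computeIfAbsent and lock();
                // a production version would re-validate the entry after locking.
                e.lock.lock();
                e.lastTouched = System.currentTimeMillis(); // touch on entry
                return e;
            }

            void release(Entry e) {
                e.lastTouched = System.currentTimeMillis(); // touch on exit, after long work
                e.lock.unlock();
            }

            // Reaper calls this periodically; a held or contended lock counts as
            // "in use", so a long-running request cannot be expired out from
            // under its waiters.
            void evictExpired() {
                long now = System.currentTimeMillis();
                sessions.entrySet().removeIf(en -> {
                    Entry e = en.getValue();
                    return !e.lock.isLocked()
                        && !e.lock.hasQueuedThreads()
                        && now - e.lastTouched > TIMEOUT_MS;
                });
            }
        }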

    Read the article

  • PHP - Database schema: version control, branching, migrations.

    - by Billiam
    I'm trying to come up with (or find) a reusable system for database schema versioning in PHP projects. There are a number of Rails-style migration projects available for PHP; http://code.google.com/p/mysql-php-migrations/ is a good example. It uses timestamps for migration files, which helps with conflicts between branches.

    The general problem with this kind of system: when development branch A is checked out and you want to check out branch B instead, B may have new migration files. This is fine; migrating to newer content is straightforward. If branch A has newer migration files, you would need to migrate downwards to the nearest shared patch. If branches A and B have significantly different code bases, you may have to migrate down even further. This may mean: check out B, determine the shared patch number, check out A, migrate downwards to this patch. This must be done from A, since the actually applied patches are not available in B. Then check out branch B and migrate up to the newest B patch. Reverse the process again when going from B to A.

    Proposed system: when migrating upwards, instead of just storing the patch version, serialize the whole patch in the database for later use, though I'd probably only need the down() method. When changing branches, compare the patches that have been run against the patches available in the destination branch. Determine the nearest shared patch (or oldest difference, maybe) between the DB table of run patches and the patches in the destination branch, by ID or hash. This could also catch new or missing patches that are buried under a number of shared patches between the two branches. Automatically migrate down to the nearest shared patch using the down() methods stored in the DB, and then migrate up to the branch's latest patch.

    My question is: is this system too crazy and/or fraught with consequences to bother developing? My experience with database schema versioning is limited to PHP autopatch, which is an up()-only system requiring filenames with sequential IDs.
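    To make the branch-switch comparison concrete, a sketch of the diffing step (illustrative names, not an existing library):

        <?php
        // $applied:   patch IDs recorded as run in the DB, oldest first
        // $available: patch IDs present as files in the target branch
        // Returns the stored down() IDs to run (newest first) and the new
        // up() IDs to apply (oldest first).
        function plan_branch_switch(array $applied, array $available)
        {
            $availableSet = array_flip($available);
            $appliedSet   = array_flip($applied);

            $down = array();
            foreach (array_reverse($applied) as $id) {
                if (!isset($availableSet[$id])) {
                    $down[] = $id; // applied here, unknown to the target branch
                }
            }

            $up = array();
            foreach ($available as $id) {
                if (!isset($appliedSet[$id])) {
                    $up[] = $id; // present in the target branch, never run here
                }
            }

            return array('down' => $down, 'up' => $up);
        }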

    Read the article
