Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.


  • A design pattern for data binding an object (with subclasses) to asp.net user control

    - by Rohith Nair
    I have an abstract class called Address and three derived classes: HomeAddress, WorkAddress, and NextOfKinAddress. I want to bind any of these to a user control, and based on the type of Address it should bind properly to the ASP.NET user control. The user control doesn't know in advance which address it is going to present; based on the runtime type it should bind accordingly. How can I design such a setup so that the user control can take any type of address and bind accordingly? One method I know of is this: declare fields for all three types (Home, Work, NextOfKin), declare an enum to hold these types, and, based on the enum value passed to the user control, instantiate the appropriate object via setter injection. As part of my generic design I just created a class structure like this: I know I am missing a lot of pieces in the design. Can anybody give me an idea of how to approach this properly?
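
    A minimal sketch of one way to set this up, leaning on polymorphism over the abstract Address rather than an enum switch - every type, member and control name below (DisplayLabel, AddressControl, lblType, txtLine1, txtCity) is assumed for illustration and is not part of the original design:

      // Sketch only: the control is programmed against the abstract type,
      // so it does not need to know which concrete address it receives.
      public abstract class Address
      {
          public string Line1 { get; set; }
          public string City { get; set; }
          public abstract string DisplayLabel { get; }   // each subclass names itself
      }

      public class HomeAddress : Address
      {
          public override string DisplayLabel { get { return "Home"; } }
      }

      public class WorkAddress : Address
      {
          public override string DisplayLabel { get { return "Work"; } }
      }

      public class NextOfKinAddress : Address
      {
          public string KinName { get; set; }
          public override string DisplayLabel { get { return "Next of kin"; } }
      }

      // The page injects whichever concrete Address it has via the setter;
      // the control binds the same way regardless of the runtime type.
      public partial class AddressControl : System.Web.UI.UserControl
      {
          public Address Address { get; set; }   // setter injection

          protected void Page_Load(object sender, System.EventArgs e)
          {
              if (Address == null) return;
              lblType.Text = Address.DisplayLabel;   // lblType, txtLine1, txtCity are
              txtLine1.Text = Address.Line1;         // assumed to exist in the .ascx markup
              txtCity.Text = Address.City;
          }
      }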

    Read the article

  • Google Fonts API JSON Data in WordPress Options-Framework-Theme

    - by Rob
    I'm developing a child theme based on the new Twenty Twelve theme, using WordPress 3.4.2 and the development version of the Options Theme Framework by Devin Price. Devin's tutorial shows a way to add 15 Google Web Fonts to the Theme Options page, but not all of them (roughly 560). I know I can create a "manual list" like the one in the tutorial, stating each font with its fallbacks, but that is time-consuming and unproductive, since Google may add to, update, change or remove fonts from its list. A manual list will ultimately contain fonts that are no longer available yet still appear in the drop-down menu, and it won't include any new ones - making the list and some selections obsolete. The Google Developer API Web Fonts page talks briefly about retrieving a "dynamic list" using JSON/JavaScript. How would I be able to pull the Google Web Fonts API into my WordPress Theme Options page so that I'm not creating my own list or having to constantly release an update to solve this issue? Could someone please walk me through what I would need to add to my options.php, functions.php, /inc/options-framework.php file, etc. - or even a new file - to implement this? I've also looked at some screencasts, plugins and tutorials on how it works, but none of them are specific enough for people just starting out. Please keep in mind I'm not the best coder... Thank you.

    Read the article

  • Deploying SSIS to Integration Services Catalog (SSISDB) via SQL Server Data Tools

    - by Kevin Shyr
    There are quite a few good articles/blogs on this.  For a straightforward deployment, read this (http://www.bibits.co/post/2012/08/23/SSIS-SQL-Server-2012-Project-Deployment.aspx).  For a more dynamic and comprehensive understanding of all the different settings, read part 1 (http://www.mssqltips.com/sqlservertip/2450/ssis-package-deployment-model-in-sql-server-2012-part-1-of-2/) and part 2 (http://www.mssqltips.com/sqlservertip/2451/ssis-package-deployment-model-in-sql-server-2012-part-2-of-2/). Microsoft's official documentation is here: http://technet.microsoft.com/en-us/library/hh213373 The only thing I would add is the following.  After your first deployment, you'll notice that subsequent deployments skip the second step (they go directly to "Select Destination" and skip "Select Source").  That's because after your initial deployment, an .ispac file is created to track the deployment.  If you decide to go back to "Select Source" and select the SSIS catalog again, the deployment process will complete, but the packages will not be deployed.

    Read the article

  • xml file save/read error (making a highscore system for XNA game)

    - by Eddy
    i get an error after i write player name to the file for second or third time (An unhandled exception of type 'System.InvalidOperationException' occurred in System.Xml.dll Additional information: There is an error in XML document (18, 17).) (in highscores load method In data = (HighScoreData)serializer.Deserialize(stream); it stops) the problem is that some how it adds additional "" at the end of my .dat file could anyone tell me how to fix this? the file before save looks: <?xml version="1.0"?> <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <PlayerName> <string>neil</string> <string>shawn</string> <string>mark</string> <string>cindy</string> <string>sam</string> </PlayerName> <Score> <int>200</int> <int>180</int> <int>150</int> <int>100</int> <int>50</int> </Score> <Count>5</Count> </HighScoreData> the file after save looks: <?xml version="1.0"?> <HighScoreData xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <PlayerName> <string>Nick</string> <string>Nick</string> <string>neil</string> <string>shawn</string> <string>mark</string> </PlayerName> <Score> <int>210</int> <int>210</int> <int>200</int> <int>180</int> <int>150</int> </Score> <Count>5</Count> </HighScoreData>> the part of my code that does all of save load to xml is: DECLARATIONS PART [Serializable] public struct HighScoreData { public string[] PlayerName; public int[] Score; public int Count; public HighScoreData(int count) { PlayerName = new string[count]; Score = new int[count]; Count = count; } } IAsyncResult result = null; bool inputName; HighScoreData data; int Score = 0; public string NAME; public string HighScoresFilename = "highscores.dat"; Game1 constructor public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; Width = graphics.PreferredBackBufferWidth = 960; Height = graphics.PreferredBackBufferHeight =640; GamerServicesComponent GSC = new GamerServicesComponent(this); Components.Add(GSC); } Inicialize function (end of it) protected override void Initialize() { //other game code base.Initialize(); string fullpath =Path.Combine(HighScoresFilename); if (!File.Exists(fullpath)) { //If the file doesn't exist, make a fake one... 
// Create the data to save data = new HighScoreData(5); data.PlayerName[0] = "neil"; data.Score[0] = 200; data.PlayerName[1] = "shawn"; data.Score[1] = 180; data.PlayerName[2] = "mark"; data.Score[2] = 150; data.PlayerName[3] = "cindy"; data.Score[3] = 100; data.PlayerName[4] = "sam"; data.Score[4] = 50; SaveHighScores(data, HighScoresFilename); } } all methods for loading saving and output public static void SaveHighScores(HighScoreData data, string filename) { // Get the path of the save game string fullpath = Path.Combine("highscores.dat"); // Open the file, creating it if necessary FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate); try { // Convert the object to XML data and put it in the stream XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData)); serializer.Serialize(stream, data); } finally { // Close the file stream.Close(); } } /* Load highscores */ public static HighScoreData LoadHighScores(string filename) { HighScoreData data; // Get the path of the save game string fullpath = Path.Combine("highscores.dat"); // Open the file FileStream stream = File.Open(fullpath, FileMode.OpenOrCreate, FileAccess.Read); try { // Read the data from the file XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData)); data = (HighScoreData)serializer.Deserialize(stream);//this is the line // where program gives an error } finally { // Close the file stream.Close(); } return (data); } /* Save player highscore when game ends */ private void SaveHighScore() { // Create the data to saved HighScoreData data = LoadHighScores(HighScoresFilename); int scoreIndex = -1; for (int i = 0; i < data.Count ; i++) { if (Score > data.Score[i]) { scoreIndex = i; break; } } if (scoreIndex > -1) { //New high score found ... do swaps for (int i = data.Count - 1; i > scoreIndex; i--) { data.PlayerName[i] = data.PlayerName[i - 1]; data.Score[i] = data.Score[i - 1]; } data.PlayerName[scoreIndex] = NAME; //Retrieve User Name Here data.Score[scoreIndex] = Score; // Retrieve score here SaveHighScores(data, HighScoresFilename); } } /* Iterate through data if highscore is called and make the string to be saved*/ public string makeHighScoreString() { // Create the data to save HighScoreData data2 = LoadHighScores(HighScoresFilename); // Create scoreBoardString string scoreBoardString = "Highscores:\n\n"; for (int i = 0; i<5;i++) { scoreBoardString = scoreBoardString + data2.PlayerName[i] + "-" + data2.Score[i] + "\n"; } return scoreBoardString; } when ill make this work i will start this code when i call game over (now i start it when i press some buttons, so i could test it faster) public void InputYourName() { if (result == null && !Guide.IsVisible) { string title = "Name"; string description = "Write your name in order to save your Score"; string defaultText = "Nick"; PlayerIndex playerIndex = new PlayerIndex(); result= Guide.BeginShowKeyboardInput(playerIndex, title, description, defaultText, null, null); // NAME = result.ToString(); } if (result != null && result.IsCompleted) { NAME = Guide.EndShowKeyboardInput(result); result = null; inputName = false; SaveHighScore(); } } this where i call output to the screen (ill call this in highscores meniu section when i am done with debugging) spriteBatch.DrawString(Font1, "" + makeHighScoreString(),new Vector2(500,200), Color.White); }
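
    The stray ">" left at the end of the saved file is the classic symptom of overwriting an existing file opened with FileMode.OpenOrCreate: that mode does not truncate, so when the new XML is shorter than the old one, leftover bytes from the previous save remain in the file and the next Deserialize call fails with "There is an error in XML document". A hedged sketch of the usual fix (not tested against the code above) is to open the file for writing with FileMode.Create instead, which overwrites and truncates:

      // Sketch of the usual fix; assumes the same usings as the original code
      // (System.IO and System.Xml.Serialization).
      public static void SaveHighScores(HighScoreData data, string filename)
      {
          string fullpath = Path.Combine(filename);

          // FileMode.Create = create the file, or overwrite AND truncate it if it
          // already exists, so a shorter document cannot leave old bytes behind.
          using (FileStream stream = File.Open(fullpath, FileMode.Create, FileAccess.Write))
          {
              XmlSerializer serializer = new XmlSerializer(typeof(HighScoreData));
              serializer.Serialize(stream, data);
          }
      }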

    Read the article

  • dnssec zonesigner ignoring out-of-zone data

    - by jordi12100
    I am trying to configure DNSSEC with BIND9 on CentOS 6.4 running the DirectAdmin control panel, following this tutorial: https://www.dnssec-tools.org/wiki/index.php/Zonesigner But I can't get it to work. When I run this command:
      zonesigner --genkeys jordikroon.nl.db jordikroon.nl.db.signed
    I get this error:
      jordikroon.nl.db:17: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:18: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:22: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:29: ignoring out-of-zone data (jordikroon.nl)
      jordikroon.nl.db:33: ignoring out-of-zone data (jordikroon.nl)
      zone jordikroon.nl.db/IN: has no NS records
      zone jordikroon.nl.db/IN: not loaded due to errors.
    I can't find anything on the web about this error. This is my zone db file:
      $TTL 14400
      @ IN SOA ns1.ghservers.org. hostmaster.jordikroon.nl. ( 2013090703 14400 3600 1209600 86400 )
      jordikroon.nl. 14400 IN NS ns1.ghservers.org.
      jordikroon.nl. 14400 IN NS ns2.ghservers.org.
      cp 14400 IN A 85.17.32.228
      ftp 14400 IN A 85.17.32.228
      jordikroon.nl. 14400 IN A 85.17.32.228
      localhost 14400 IN A 127.0.0.1
      mail 14400 IN A 85.17.32.228
      pop 14400 IN A 85.17.32.228
      smtp 14400 IN A 85.17.32.228
      www 14400 IN A 85.17.32.228
      jordikroon.nl. 14400 IN MX 10 mail
      jordikroon.nl. 14400 IN TXT "v=spf1 a mx ip4:85.17.32.228 ~all"
      localhost 14400 IN AAAA ::1
    How do I fix this? All the IN records seem to be ignored. Any help is welcome :-)

    Read the article

  • SSRS Parameters and MDX Data Sets

    - by blakmk
    Having spent the past few weeks creating reports and dashboards in SSRS and SSAS 2008, I was frustrated by how difficult it is to use custom datasets to generate parameter drill-downs. Reporting Services can also be quite unforgiving when it comes to renaming things like datasets, so I want to share a couple of techniques that I found useful. One of the things I regularly do is add parameters to the queries. Doing this causes Reporting Services to generate a hidden dataset and parameter name for you. I like to tweak these hidden datasets, removing the 'ALL' level, which is a tip I picked up from Devin Knight in his blog. There are some rules I've developed for myself since working with SSRS and MDX; they may not be the best or only way, but they work for me.
    Rule 1 – Never trust the automatically generated hidden datasets. Or any automatically generated MDX queries, for that matter - I've previously blogged about this here. If you examine the MDX generated in the hidden dataset, you will see that it generates the MDX in the context of the originating query by building a subcube. This means it may NOT be appropriate to use it in a subsequent query which has a different context. Make sure you always understand what is going on. Often when I'm developing a dashboard or a report there are several parameter-oriented datasets that I like to create manually. I may have different datasets using the same dimension but in a different context. One example of this is that I often use a dataset for the last month and a dataset for the last 6 months; both use the same date hierarchy. However, Reporting Services seems not to be too smart when it comes to generating unique datasets when working with and renaming parameters and datasets. Very often I have come across this error when refactoring parameter names and default datasets: "an item with the same key has already been added". The only way I've found to reliably avoid this is to obey rule 2.
    Rule 2 – Follow this sequence when working with parameters and datasets:
    1. Create lookup and default datasets in advance.
    2. Create the parameters (set the datasets for available and default values).
    3. Go into the query and tick the parameter check box.
    4. On the dataset properties screen, select the parameter defined earlier as the parameter value.
    Rule 3 – Don't tear your hair out when you have just renamed objects and your report doesn't build. Just use XML Notepad on the original report file. I found I gained a good understanding of the structure of the underlying XML document just by using XML Notepad. From this you can search for and find references to the missing object. You can also just do a wholesale search and replace (after taking a backup copy of course ;-).
    So I hope the above helps to save the sanity of anyone who regularly works with SSRS and MDX.

    Read the article

  • Windows Azure Emulators On Your Desktop

    - by BuckWoody
    Many people feel they have to set up a full Azure subscription online to try out and develop on Windows Azure. But you don't have to do that right away. In fact, you can download the Windows Azure Compute Emulator – a "cloud development environment" – right on your desktop. No, it's not for production use, and no, you won't have other people using your system as a cloud provider, and yes, there are some differences from production Windows Azure, but you'll be able to code, run, test, diagnose, watch, change and configure code without having any connection to the Internet at all. The best thing about this approach is that when you are ready to deploy the code you've been testing, a few clicks deploy it to your subscription once you create one. So what deep magic does it take to run such a thing right on your laptop or even a Virtual PC? Well, it's actually not all that difficult. You simply download and install the Windows Azure SDK (you can even get a free version of Visual Studio for it to run on – you're welcome) from here: http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx This SDK will also install the Windows Azure Compute Emulator and the Windows Azure Storage Emulator – and then you're all set. Right-click the icon for Visual Studio and select "Run as Administrator". Now open a new "Cloud" type of project, and add the Web and Worker Roles that you want to code. When you're done with your design, press F5 to start the desktop version of Azure. Want to learn more about what's happening underneath? Right-click the tray icon with the Azure logo and select the two emulators to see what they are doing. In the configuration files, you'll see a "Use Development Storage" setting. You can call the BLOB, Table or Queue storage and it will all run on your desktop. When you're ready to deploy everything to Windows Azure, you simply change the configuration settings and add the storage keys and so on that you need. Want to learn more about all this?
    Overview of the Windows Azure Compute Emulator: http://msdn.microsoft.com/en-us/library/gg432968.aspx
    Overview of the Windows Azure Storage Emulator: http://msdn.microsoft.com/en-us/library/gg432983.aspx
    January 2011 Training Kit: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413E88F8-5966-4A83-B309-53B7B77EDF78&displaylang=en
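
    As a rough illustration of that "Use Development Storage" idea, code written against the storage client library only needs the development-storage connection string to target the local emulator instead of a real account - a minimal sketch, assuming the SDK 1.x era Microsoft.WindowsAzure.StorageClient assembly (method names shift slightly in later SDK versions):

      using Microsoft.WindowsAzure;
      using Microsoft.WindowsAzure.StorageClient;

      class EmulatorDemo
      {
          static void Main()
          {
              // "UseDevelopmentStorage=true" resolves to the local storage emulator,
              // not to a live subscription; swap in a real connection string later.
              CloudStorageAccount account =
                  CloudStorageAccount.Parse("UseDevelopmentStorage=true");

              CloudBlobClient blobClient = account.CreateCloudBlobClient();
              CloudBlobContainer container = blobClient.GetContainerReference("testcontainer");
              container.CreateIfNotExist();   // named CreateIfNotExists() in later SDKs

              CloudBlob blob = container.GetBlobReference("hello.txt");
              blob.UploadText("Hello from the local emulators");
          }
      }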

    Read the article

  • WCF Data Service Pipeline

    - by Daniel Cazzulino
    For documentation purposes, I just drew the following UML sequence diagrams for the "Astoria" pipeline, using Visual Studio 2010 Ultimate. For a single-entity (or non-batched) request, this is the sequence: For a batch request, this is the sequence instead: The DataService component is your own DataService<T>-derived class, and DataService.ProcessingPipeline refers to the pipeline events exposed by its ProcessingPipeline property.   /kzu
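
    For reference, a minimal sketch of where those pipeline events are wired up in code, assuming the standard DataService<T> pattern (NorthwindEntities is a placeholder for whatever context the service exposes):

      using System;
      using System.Data.Services;
      using System.Data.Services.Common;

      // Sketch: a DataService<T>-derived class subscribing to the ProcessingPipeline
      // events that the sequence diagrams above refer to.
      public class MyDataService : DataService<NorthwindEntities>   // placeholder context type
      {
          public MyDataService()
          {
              ProcessingPipeline.ProcessingRequest   += (s, e) => Trace("ProcessingRequest");
              ProcessingPipeline.ProcessedRequest    += (s, e) => Trace("ProcessedRequest");
              ProcessingPipeline.ProcessingChangeset += (s, e) => Trace("ProcessingChangeset");
              ProcessingPipeline.ProcessedChangeset  += (s, e) => Trace("ProcessedChangeset");
          }

          public static void InitializeService(DataServiceConfiguration config)
          {
              config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
              config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
          }

          private static void Trace(string stage)
          {
              System.Diagnostics.Trace.WriteLine("Pipeline: " + stage);
          }
      }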

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. That understanding is absolutely correct, but the implementation is not as easy as it seems. Similarly, most people think the lookup component simply looks up additional information, and so they do not pay much attention to it - and for that very reason they eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data-driven world. In this 28th episode of the Notes from the Field series, database expert Tim Mitchell (partner at Linchpin People) shares a very interesting conversation about how to use the lookup component well by choosing the right cache mode.
    In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion. The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values. Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode. This selection will determine whether and how the distinct lookup values are cached during package execution. It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results.
    Full Cache
    The full cache mode setting is the default cache mode selection in the SSIS lookup transformation. As the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location. As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason. The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well. Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode. First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data. If the table you use for the lookup is very large (either deep or wide, or perhaps both), there's going to be a performance cost associated with retrieving and caching all of that data. Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation. This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case-sensitive, depending on collation).
    There's a relatively easy workaround in which you can use the UPPER() or LOWER() function on both the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation. Again, neither of these gotchas is a reason to avoid full cache mode, but they should be weighed when deciding whether full cache mode is right for a given situation. Full cache mode is ideally useful when one or all of the following conditions exist:
    - The size of the reference data set is small to moderately sized
    - The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable
    - Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data
    Partial Cache
    When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow. Initially, each distinct value will be retrieved individually from the specified source, and then cached. To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios. Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set. If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)). Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Partial cache mode is ideally suited for the conditions below:
    - The size of the data in the pipeline (more specifically, the number of distinct key values) is relatively small
    - The size of the lookup data is too large to effectively store in cache
    - The lookup source is well indexed to allow for fast retrieval of row-by-row values
    No Cache
    As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS. As a result, every single row in the pipeline data set will require a query against the lookup source. Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused. In the real world, I don't see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it's critical to know your data before choosing this option. Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true:
    - The reference data set is too large to reasonably be loaded into SSIS memory
    - The pipeline data set is small and is not expected to grow
    - There are expected to be very few or no duplicates of the key value(s) in the pipeline data set (i.e., there would be no benefit from caching these values)
    Conclusion
    The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow. Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution. Know how this selection impacts your ETL loads, and you'll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL. Tagged: SSIS

    Read the article

  • Highlight Word add-in for Visual Studio 2010 [SSDT]

    - by jamiet
    I've just been alerted by my colleague Kyle Harvie to a Visual Studio 2010 add-in that should prove very useful if you are an SSDT user. It's simply called Highlight all occurrences of selected word and does exactly what it says on the tin: you highlight a word and it shows all other occurrences of that word in your script: There's a limitation for .sql files (which I have reported) where the highlighting doesn't work if the word is wrapped in square brackets, but what the heck, it's free and takes about ten seconds to install... install it already! @Jamiet

    Read the article

  • Microsoft Entourage/Exchange Server problem: all objects disappeared from server - still in some for

    - by splattne
    One of our employees works with Entourage on his MacBook Pro (OS X 10.6), accessing Exchange Server 2007. Last Friday morning, I think while working over a VPN, Entourage (I think it was Entourage) deleted all his objects (mail, calendar, contacts) on the server while creating a lot of strange folders (starting with underscores) on the client. The local data seems to be there, but not in a consistent form. Since the user's mailbox is rather big, I suspect that there was some kind of "move" operation which did not complete. I tried to export the data, but the export stops because of a corrupted object. Is there a tool or another way to export or retrieve the local data? Edit - FYI: we solved the problem by restoring his data from the previous night's backup.

    Read the article

  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.

    Read the article

  • Proper Data Structure for Commentable Comments

    - by Wesley
    Been struggling with this on an architectural level. I have an object which can be commented on, let's call it a Post. Every post has a unique ID. Now I want to comment on that Post, and I can use ID as a foreign key, and each PostComment has an ItemID field which correlates to the Post. Since each Post has a unique ID, it is very easy to assign "Top Level" comments. When I comment on a comment however, I feel like I now need a PostCommentComment, which attaches to the ID of the PostComment. Since ID's are assigned sequentially, I can no longer simply use ItemID to differentiate where in the tree the comment is assigned. I.E. both a Post and a Post Comment might have an ID of '5', so my foreign key relationship is invalid. This seems like it could go on infinitely, with PostCommentCommentComment's etc... What's the best way to solve this? Should I have a field in the comment called "IsPostComment" or something of the like to know which collection to attach the ID to? This strikes me as the best solution I've seen so far, but now I feel like I need to make recursive DataBase calls which start to get expensive. Meaning, I get a Post and get all PostComments where ItemID == Post.ID && where IsPostComment == true Then I take that as a collection, gather all the ID's of the PostComments, and do another search where ItemID == PostComment[all].ID && where IsPostComment == false, then repeat infinitely. This means I make a call for every layer, and if I'm calling 100 Posts, I might make 1000 DB calls to get 10 layers of comments each. What is the right way to do this?
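
    One common way to model this - offered as a sketch, not a prescription - is to keep a single comment type that references its own parent, so a top-level comment has a null parent and a reply points at another comment; the whole tree for a post can then be fetched in one query and assembled in memory instead of one database call per layer. All names below are illustrative:

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class Post
      {
          public int Id { get; set; }
          public string Body { get; set; }
      }

      // Adjacency-list comment: PostId ties every comment to its post,
      // ParentCommentId is null for comments made directly on the post.
      public class Comment
      {
          public int Id { get; set; }
          public int PostId { get; set; }
          public int? ParentCommentId { get; set; }
          public string Body { get; set; }
      }

      public static class CommentTree
      {
          // One flat query per post (e.g. WHERE PostId = @id), then group in memory.
          public static ILookup<int?, Comment> GroupByParent(IEnumerable<Comment> comments)
          {
              return comments.ToLookup(c => c.ParentCommentId);
          }

          // Walks the tree: children of 'null' are top-level, then recurse by Id.
          public static void Print(ILookup<int?, Comment> byParent, int? parentId = null, int depth = 0)
          {
              foreach (var c in byParent[parentId])
              {
                  Console.WriteLine(new string(' ', depth * 2) + c.Body);
                  Print(byParent, c.Id, depth + 1);
              }
          }
      }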

    Read the article

  • Removing Duplicate Data From SQL Query Output For Display On A Web Page [migrated]

    - by doubleJ
    I had asked a similar question on stackoverflow but didn't really get anywhere. This page shows the output that I'm currently getting from my MSSQL server. I have a table of venue information (name, address, etc...) that our events happen on. Separately, I have a table of the actual events that are scheduled (an event may happen multiple times in one day and/or over multiple days). I join those tables with this query: <?php try { $dbh = new PDO("sqlsrv:Server=localhost;Database=Sermons", "", ""); $dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $sql = "SELECT TOP (100) PERCENT dbo.TblSermon.Day, dbo.TblSermon.Date, dbo.TblSermon.Time, dbo.TblSermon.Speaker, dbo.TblSermon.Series, dbo.TblSermon.Sarasota, dbo.TblSermon.NonFlc, dbo.TblJoinSermonLocation.MeetingName, dbo.TblLocation.Location, dbo.TblLocation.Pastors, dbo.TblLocation.Address, dbo.TblLocation.City, dbo.TblLocation.State, dbo.TblLocation.Zip, dbo.TblLocation.Country, dbo.TblLocation.Phone, dbo.TblLocation.Email, dbo.TblLocation.WebAddress FROM dbo.TblLocation RIGHT OUTER JOIN dbo.TblJoinSermonLocation ON dbo.TblLocation.ID = dbo.TblJoinSermonLocation.Location RIGHT OUTER JOIN dbo.TblSermon ON dbo.TblJoinSermonLocation.Sermon = dbo.TblSermon.ID WHERE (dbo.TblSermon.Date >= { fn NOW() }) ORDER BY dbo.TblSermon.Date, dbo.TblSermon.Time"; $stmt = $dbh->prepare($sql); $stmt->execute(); $stmt->setFetchMode(PDO::FETCH_ASSOC); foreach ($stmt as $row) { echo "<pre>"; print_r($row); echo "</pre>"; } unset($row); $dbh = null; } catch(PDOException $e) { echo $e->getMessage(); } ?> So, as it loops through the query results, it creates an array for each record and ends up like this: Array ( [Day] => Tuesday [Date] => 2012-10-30 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => The Ark Church [Pastors] => Alan & Joy Clayton [Address] => 450 Humble Tank Rd. [City] => Conroe [State] => TX [Zip] => 77305.0 [Phone] => (936) 756-1988 [Email] => [email protected] [WebAddress] => http://www.thearkchurch.org ) Array ( [Day] => Wednesday [Date] => 2012-10-31 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => The Ark Church [Pastors] => Alan & Joy Clayton [Address] => 450 Humble Tank Rd. [City] => Conroe [State] => TX [Zip] => 77305.0 [Phone] => (936) 756-1988 [Email] => [email protected] [WebAddress] => http://www.thearkchurch.org ) Array ( [Day] => Tuesday [Date] => 2012-11-06 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => Fellowship Of Faith Christian Center [Pastors] => Michael & Joan Kalstrup [Address] => 18999 Hwy. 59 [City] => Oakland [State] => IA [Zip] => 51560.0 [Phone] => (712) 482-3455 [Email] => [email protected] [WebAddress] => http://www.fellowshipoffaith.cc ) Array ( [Day] => Wednesday [Date] => 2012-11-14 00:00:00.000 [Time] => 07:00 PM [Speaker] => Keith Moore [Location] => Faith Family Church [Pastors] => Michael & Barbara Cameneti [Address] => 8200 Freedom Ave NW [City] => Canton [State] => OH [Zip] => 44720.0 [Phone] => (330) 492-0925 [Email] => [WebAddress] => http://www.myfaithfamily.com ) As you can see, The Ark Church and its associated contact information is duplicated, so when I work with those arrays and output them to the page, I see a bunch of duplicate content. I'd like to remove the duplicate information so that I get results similar to this: The Ark Church Alan & Joy Clayton 450 Humble Tank Rd. 
Conroe, TX 77305 (936) 756-1988 [email protected] http://www.thearkchurch.org Meetings: Tuesday, 2012-10-30 07:00 PM Wednesday, 2012-10-31 07:00 PM Fellowship Of Faith Christian Center Michael & Joan Kalstrup 18999 Hwy. 59 Oakland, IA 51560 (712) 482-3455 [email protected] http://www.fellowshipoffaith.cc Meetings: Tuesday, 2012-11-06 07:00 PM Faith Family Church Michael & Barbara Cameneti 8200 Freedom Ave NW Canton, OH 44720 (330) 492-0925 http://www.myfaithfamily.com Meetings: Wednesday, 2012-11-14 07:00 PM It doesn't necessarily have to end up like that (I'm not looking for code specific for these results, but a concept of how to not show the duplicated information). I'm assuming that an additional foreach or while will do it, but I haven't figured out any logic that says <?php if ($location == $previouslocation) echo ""; ?>.

    Read the article

  • Revision Methodology for Developer Post as Entry Level

    - by Demla Pawan
    I have revised all the basic concepts of my computer science curriculum: Core Java (basics), SQL (basics), C++ (basics), XHTML, PHP (basics), and data structures (basics). I am not sure what I still need to do or how to do it, as there may be faults in my preparation methods for revision sessions. Can anybody suggest a methodology for revising technical things that you are not in touch with at present, but in which you can still write basic programs or which you used 1-2 years ago? And can you also suggest some quick revision links on the net for the various technologies mentioned above?

    Read the article

  • Which design is better: using a foreign key or a string to store a list of ids

    - by Kien Thanh
    I'm building an online examination system. I have designed two tables, Question and GeneralExam. The GeneralExam table contains info about the exam, like name, description, duration, etc. Now I would like to design the GeneralQuestion table, which will contain the ids of the questions that belong to a general exam. Currently, I have two ideas for designing the GeneralQuestion table:
    1. It will have two columns: general_exam_id and question_id.
    2. It will have two columns: general_exam_id and list_question_ids (string/text).
    I would like to know which design is better, or the pros and cons of each design. I'm using a PostgreSQL database.

    Read the article

  • Gathering all data in single iteration vs using functions for readable code

    - by user828584
    Say I have an array of runners from which I need to find the tallest runner, the fastest runner, and the lightest runner. It seems like the most readable solution would be:
      runners = getRunners();
      tallestRunner = getTallestRunner(runners);
      fastestRunner = getFastestRunner(runners);
      lightestRunner = getLightestRunner(runners);
    ...where each function iterates over the runners and keeps track of the largest height, greatest speed, and lowest weight. Iterating over the array three times, however, doesn't seem like a very good idea. It would instead be better to do:
      int greatestHeight, greatestSpeed, leastWeight;
      Runner tallestRunner, fastestRunner, lightestRunner;
      for(runner in runners){
          if(runner.height > greatestHeight) {
              greatestHeight = runner.height;
              tallestRunner = runner;
          }
          if(runner.speed > ...
      }
    While this isn't too unreadable, it can get messy when there is more logic for each piece of information being extracted in the iteration. What's the middle ground here? How can I use only a single iteration while still keeping the code divided into logical units?
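
    One possible middle ground - sketched here in C#, with the Runner type and its properties assumed - is to keep the single pass but push the "is this one better?" test into one small helper, so each concern still reads as a single line inside the loop:

      using System;
      using System.Collections.Generic;

      public class Runner
      {
          public string Name { get; set; }
          public double Height { get; set; }
          public double Speed { get; set; }
          public double Weight { get; set; }
      }

      public static class RunnerStats
      {
          // One iteration answers all three questions.
          public static void FindExtremes(IEnumerable<Runner> runners,
              out Runner tallest, out Runner fastest, out Runner lightest)
          {
              tallest = fastest = lightest = null;
              foreach (var r in runners)
              {
                  tallest  = Better(tallest,  r, x => x.Height, larger: true);
                  fastest  = Better(fastest,  r, x => x.Speed,  larger: true);
                  lightest = Better(lightest, r, x => x.Weight, larger: false);
              }
          }

          // Returns whichever runner wins on the given key; keeps the loop body tidy.
          private static Runner Better(Runner current, Runner candidate,
              Func<Runner, double> key, bool larger)
          {
              if (current == null) return candidate;
              bool wins = larger ? key(candidate) > key(current)
                                 : key(candidate) < key(current);
              return wins ? candidate : current;
          }
      }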

    Read the article

  • Depends libntfs10 error while installing testdisk

    - by user260223
    I want to install testdisk on Ubuntu 10.04 LTS but I'm getting an error. Any help? Here is the output:
      # sudo apt-get install testdisk
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Some packages could not be installed. This may mean that you have requested
      an impossible situation or if you are using the unstable distribution that
      some required packages have not yet been created or been moved out of Incoming.
      The following information may help to resolve the situation:
      The following packages have unmet dependencies:
        testdisk: Depends: libntfs10 (>= 2.0.0) but it is not installable
      E: Broken packages
    I also tried:
      wget http://launchpadlibrarian.net/40386584/libntfs10_2.0.0-1ubuntu4_i386.deb; sudo dpkg -i *.deb
    And I get this error:
      dpkg: error processing libntfs10_2.0.0-1ubuntu4_i386.deb (--install):
      package architecture (i386) does not match system (amd64)
      Errors were encountered while processing:
      libntfs10_2.0.0-1ubuntu4_i386.deb

    Read the article

  • different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect users based on the device they are using. Let's say a user is visiting from an iPad; I take him directly to the page of iPad wallpapers, the user selects the iPad version, and I take the user to the gallery of wallpapers, where the user can select and download any wallpaper. Every wallpaper is the required resolution; I have my reasons for doing this. Now the thing is, there are different-resolution versions of an image appearing in 5 different sections of my website, each having its own view page. There is only one record in the database table for the image, and based on my consistent naming convention for the images, I pick the required one. This means that when 5 different pages are generated in the 5 categorized sections of the website, due to the shared DB record the keywords, the titles and every single detail of the 5 pages are the same, apart from the resolution of the image and the section-specific details each page has. And yeah, the pages also have different paths, like:
      wallpapers.com\ipad-1\cars\Ferrari-dino.html
      wallpapers.com\ipad-2\cars\Ferrari-dino.html
      wallpapers.com\ipad-3\cars\Ferrari-dino.html
      wallpapers.com\ipad-4\cars\Ferrari-dino.html
      wallpapers.com\ipad-5\cars\Ferrari-dino.html
    Now this is my scenario. How do search engines see it and how do they rank it? Is it a good, normal or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • winusb formatting and installation error, lost data storage on 32gb flash drive

    - by Cary Felton
    I recently purchased a PNY-brand 32 GB USB drive to use for configuring a dual boot on my computer and for backing up HDD files. I also recently installed Zorin OS 6.1 as my main OS, then downloaded WinUSB for Linux and set it to install the ISO for Windows 8 Release Preview (the activation code is still good until Jan. 2013), and there was an error during the installation. I rebooted the computer, plugged in the USB drive, and went from 32 GB of usable space to somehow having a partitioned drive (mounted as two separate drives), one being 3.7 GB and the other being, I believe, 25 GB or so. I then formatted the drive with GParted back to FAT32, and it still read as a Windows USB with the installation files still on it. So then I took my USB drive to my library, which runs Windows 8, installed BootIce, and completely reformatted the drive, and somehow it is usable, but still not perfect: it reads that I only have 29.9 GB of usable space. I have already thought of the non-usable area, and that doesn't account for this error, because when I first bought the drive and plugged it in on Linux, it read the total drive space as 32.2 GB with exactly 32 GB usable. Somehow I am short by 2 GB of space, which is very critical for what I am doing. I love Linux; I only needed Windows for Blu-ray playback with DAPlayer, as it won't work well in Wine, but if I keep losing space on my USB drives I'm afraid I'll have to switch back. Any help would be appreciated, as I have already visited PNY's site and they have no support for this issue, and I'm not that fond of partitions, file systems, or formatting.

    Read the article
