Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

  • Is there a simple, flat, XML-based query-able data storage solution? [closed]

    - by alex gray
    I have been in long pursuit of an XML-based, query-able data store, and despite continued searches and evaluations I have yet to find a solution that meets my needs, which include:
    - Data is wholly contained within XML nodes, in flat text files.
    - There is a "native" - or at least unobtrusive - method with which to perform Create/Read/Update/Delete (CRUD) operations onto the "schema". I would consider access via HTTP, XHR, JavaScript, PHP, Bash, or Perl to be unobtrusive, depending on the complexity of the set of dependencies.
    - Server-side file-system reads and writes.
    - A client-side interface element, accessible in any browser without a plug-in.
    Some extra, preferred (but optional) requirements include:
    - Respond to simple SQL or similarly-syntaxed queries.
    - Serve the data on a bare-bones HTTPS server, with no "extra stuff", either via XMLHttpRequest, HTTP proper, or JSON.
    A few thoughts: What I'm looking for may be possible via some Java server implementations, but for the sake of this question please do not suggest that - unless it meets ALL the requirements. Java, especially on the client side, is not really an option, nor is it appealing from a development viewpoint. I know walking the filesystem is a stretch, and I've heard it's possible with XPath or XSLT, but as far as I know that's not ready for primetime, nor even yet a recommendation. However, the ability to recursively traverse the filesystem is needed for such a system to be of useful facility. At this point I have basically implemented what I described via, of all things, CGI and Bash, but there has to be an easier way. Thoughts?

    Read the article

  • An online version of ClearTrace

    - by Bill Graziano
    When I visit clients for the first time and conduct a performance review I introduce them to ClearTrace. It’s still the best way I know to identify exactly which queries are consuming the most resources.  The downside is that it needs to be downloaded, and it creates a database to store the results.  I finally decided it would be easier if I could just upload a trace immediately. You can find the online version of ClearTrace at TraceTune.com.  It provides a simple way to upload a trace file and see exactly which stored procedures or SQL statements consume the most CPU and disk.  This is still a work in progress as I try to determine exactly which features from ClearTrace are important.  I’ve also limited the file upload to 10MB in this beta release.  That might not sound like much, but I get over 20,000 events using this stored procedure to generate the trace. If you’re looking for something to do on a Friday, I’d suggest a little performance tuning.  Generating 10MB of trace data doesn’t take long at all, and in a short time you’ll see exactly which SQL statements you need to tune first.

    Read the article

  • What is a good way to refactor a large, terribly written code base by myself? [closed]

    - by AgentKC
    Possible Duplicate: Techniques to re-factor garbage and maintain sanity? I have a fairly large PHP code base that I have been writing for the past 3 years. The problem is, I wrote this code when I was a terrible programmer and now it's tens of thousands of lines of conditionals and random MySQL queries everywhere. As you can imagine, there are a ton of bugs and they are extremely hard to find and fix. So I would like a good method to refactor this code so that it is much more manageable. The source code is quite bad; I did not even use classes or functions when I originally wrote it. At this point, I am considering rewriting the whole thing. I am the only developer and my time is pretty limited. I would like to get this done as quickly as possible, so I can get back to writing new features. Since rewriting the code would take a long time, I am looking for some methods that I can use to clean up the code as quickly as possible without leaving more bad architecture that will come back to haunt me later. So this is the basic question: What is a good way for a single developer to take a fairly large code base that has no architecture and refactor it into something with reasonable architecture that is not a nightmare to maintain and expand?

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer. I'm trying to get more knowledge to become a better programmer. I've listed a number of technologies to learn. Besides what I've listed, what technologies would you suggest a Junior Java Developer learn as well? I realize there's a lot to study.
    Java:
    - how a garbage collector works
    - resource management
    - network programming: TCP/IP, HTTP
    - transactions, consistency
    - interfaces, classes, collections, hash codes, algorithms, computational complexity
    - concurrent programming: synchronizing, semaphores
    - stream management
    - immutability / thread-safety
    - byte code manipulation, reflection, Aspect-Oriented Programming as a base to understand frameworks such as Spring etc.
    Web stack: servlets, filters, socket programming
    Libraries: JDK, GWT, Apache Commons, Joda-Time
    Dependency Injection: Spring, Nano
    Tools:
    - IDE: very good knowledge
    - debugger
    - profiler
    - web analyzers: Wireshark, Firebug
    - unit testing
    SQL/Databases:
    - Basics: SELECTing columns from a table; Aggregates Part 1: COUNT, SUM, MAX/MIN; Aggregates Part 2: DISTINCT, GROUP BY, HAVING
    - Intermediate: JOINs, ANSI-89 and ANSI-92 syntax; UNION vs UNION ALL; NULL handling: COALESCE & native NULL handling; Subqueries: IN, EXISTS, and inline views; Subqueries: correlated; WITH syntax: Subquery Factoring/CTE; Views
    - Advanced Topics: Functions, Stored Procedures, Packages; Pivoting data: CASE & PIVOT syntax; Hierarchical Queries; Cursors: Implicit and Explicit; Triggers; Dynamic SQL; Materialized Views; Query Optimization: Indexes, Explain Plans, Profiling
    - Data Modelling: Normal Forms 1 through 3; Primary & Foreign Keys; Table Constraints; Link/Corollary Tables
    - Full Text Searching; XML; Isolation Levels; Entity Relationship Diagrams (ERDs), Logical and Physical; Transactions: COMMIT, ROLLBACK, Error Handling

    Read the article

  • Can too many 301 redirects cause a DNS error?

    - by Graham
    For a site http://imageocd.com that I just set up, I initially spelled the category "automobiles" as "autimobiles"... I know it's ridiculous. I then set up over 10,000 pages behind that category, e.g. http://imageocd.com/automobiles/hillman-minx-cabrio-pictures-and-wallpapers. So, I set up over 10,000 301 URL redirects to change the spelling of "automobiles". I just checked my Google Webmasters report and got an error saying: "http://www.imageocd.com/: Googlebot can't access your site. Sep 7, 2012. Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 66.7%." Could the overabundance of 301 redirects be causing this? I host 13 sites on this dedicated server and all sites are running fine. I also contacted GoDaddy and they said the server is running fine. Any ideas on what might be going on? Also, I have "canonical" set up for every URL. Could this be part of the error? Thanks.

    Read the article

  • What's a good approach to working with multiple databases?

    - by Riz
    I'm working on a project that has its own database, call it InternalDb, but it also queries two other databases, call them ExternalDb1 and ExternalDb2. Both ExternalDb1 and ExternalDb2 are actually required by a few other projects. I'm wondering what the best approach for dealing with this is? Currently, I've just created a project for each of these external databases and then generated an Edmx and entities using the Entity Framework approach. My thought was that I could then include these projects in any of my solutions that require access to these databases. Also, I don't have any separate business layers. I just have a solution like below:
    - Project.Domain
    - ExternalDb1Project.Domain
    - ExternalDb2Project.Domain
    - Project.Web
    So my Domain projects contain the data access as well as the POCOs generated by Entity Framework and any business logic. But I'm not sure if this is a good approach. For example, if I want to do validation in my Project.Domain on the entities in the InternalDb, it's fine. But if I want to do validation for entities from either of the ExternalDbs, then I wonder where it should go? To be more specific, I retrieve Employees from ExternalDb1Project.Domain; however, I want to make sure they are Active. Where should this validation go? How should a project like this be architected at a high level? Also, I want to make sure that I use IoC for my data contexts so I can create fakes when writing tests. I wonder where the interfaces for these various data contexts would reside?
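
    One way this is commonly split up - a minimal sketch only, with hypothetical names (Employee is the entity the question mentions, but the IsActive flag, the interface and the service below are assumptions, not the poster's code): the external-database project exposes an interface over its context, the consuming project depends only on that interface so a fake can be injected in tests, and the "employees must be Active" rule lives in a small domain service next to the interface.

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical POCO standing in for the Entity Framework entity in ExternalDb1Project.Domain.
        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public bool IsActive { get; set; }
        }

        // Consumers depend on this interface rather than the concrete context,
        // so tests can supply an in-memory fake.
        public interface IExternalDb1Context
        {
            IQueryable<Employee> Employees { get; }
        }

        // Domain service that owns the "only Active employees" rule, giving the rule
        // a single home no matter which solution pulls in ExternalDb1Project.Domain.
        public class EmployeeService
        {
            private readonly IExternalDb1Context _db;

            public EmployeeService(IExternalDb1Context db)
            {
                _db = db;
            }

            public IList<Employee> GetActiveEmployees()
            {
                return _db.Employees.Where(e => e.IsActive).ToList();
            }
        }

    With this layout the interface and the validation sit beside the entities they describe, and each consuming Web project only wires up the concrete context in its IoC container.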

    Read the article

  • How can I get non-programmer colleagues on board with bespoke software rather than Dynamics CRM + Sharepoint?

    - by Bendos
    I am working with a company which designs and builds one-off machines. They have been 'dabbling' with hosted Dynamics CRM and Sharepoint (on different servers!) in an attempt to centralise their data and help colleagues collaborate more effectively across projects. They haven't used either system to its potential. Now we are looking at the engineering department, who already use a form of version control software for the various CAD files (Autodesk Vault); however, it is becoming increasingly necessary to implement more of a generic file version control system, as they use many more files than can be managed in Vault (sometimes just photos or scans of paper documents), hence why they were looking at using Sharepoint. However... as the 'programmer' of the bunch, I can see several scenarios which don't seem to fit well with the Dynamics + Sharepoint approach: simple reports based on cross-table queries, exporting certain metrics as a spreadsheet, defining project hierarchies and many-to-many relationships, and as such I have been pushing for an in-house developed 'ECM' / 'ERP' software package (perhaps in .NET or PHP). Some colleagues seem to attach a greater value to the MS software (perhaps because it has a logo!) but don't see that it's just a framework, not a solution. Can anyone provide a good example of when custom software would actually be better than using Dynamics + Sharepoint, and how do I relate that to non-technical staff?

    Read the article

  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini black list (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT: I've never used the question to say thanks for the answers, and I hope this can be done. Thank you all for your insightful replies, which helped me to find my way out of this. I have followed everyone's suggestions and implemented the following:
    - a honeypot
    - a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs.
    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.

    Read the article

  • Tuning WebServer Response

    - by Vedran Wex Maricevic
    I have this same question on StackOverflow and I was advised to ask it here, hoping for more information. Here is the question: I am in a rather unfavorable situation. I have an AspDotNetStorefront e-commerce application and a search add-on called VibeTrib. I don't have source code for either of those. The store that runs on Storefront and VibeTrib has close to 250k products. Also we have lots of filters. I spoke to VibeTrib reps, and they want extra money so they could optimize the queries that they use. The money they require is not a big deal, but the problem is I don't trust them anymore. What we got is much different than what is being advertised. To cut the long story short: I am running the store on Amazon AWS now, and regardless of what DB server (MSSQL 2012) I set up (I tried 32GB RAM monster instances) it is slow. Ajax search uses full text search and it displays search keywords relatively fast, but once the search is performed (to display all results) it is still slow! Is there something that I could do to accelerate the speed on my own end? I do have full control over the EC2 instance (web server: Server 2012 and IIS 8). Can I set IIS to step in for the search and cache some of it? I was hoping to cache at least some of the most common words. My best bet is IIS 8 :) Is there any help in my case? Thanks

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems sort of strange, since our target audience for the site is UK based, and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector who don't experience this issue, so it seems kinda strange. Later on in the day we are getting traffic from the UK, as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including:
    - sitemap.xml
    - proper caching policies across the board
    - Google Merchant
    - Dublin Core microdata
    - HTML5
    - pretty URLs
    - meta and content are reviewed as an ongoing concern
    - we have decent sitelinks for direct queries through Google on the site name
    - a decent amount of inbound links
    - FB, Twitter, Google +1
    - Google Maps listing [verified]
    The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the mid-day dip in our figures... Any ideas at all would be useful. Cheers all!

    Read the article

  • What would be the fastest way of storing or calculating legal move sets for chess pieces?

    - by ioSamurai
    For example, if a move is attempted I could just loop through a list of legal moves and compare the x,y, but I have to write logic to calculate those at least every time the piece is moved. Or I can store them in an array [,] and then check: if x and y are not 0 then it is a legal move, and I save the data like this [0][1][0][0] etc. for each row, where 1 is a legal move - but I still have to populate it. I wonder what the fastest way is to store and read a legal move on a piece object for calculation. I could use matrix math as well, I suppose, but I don't know. Basically I want to persist the rules for a given piece so I can assign a piece object a little template of all its possible moves, considering it is starting from its current location, which should be just one table of data. But is it faster to loop, or write LINQ queries, or store arrays and matrices?

        public class Move
        {
            public int x;
            public int y;
        }

        public class ChessPiece : Move
        {
            private List<Move> possibleMoves { get; set; }

            public bool LegalMove(int x, int y)
            {
                foreach (var p in possibleMoves)
                {
                    if (p.x == x && p.y == y)
                    {
                        return true;
                    }
                }
                return false; // the original snippet was missing a return for the "no match" case
            }
        }

    Anyone know?
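
    One simple alternative to the List loop - a sketch only, not from the question, and it assumes C# 7 value tuples - is to keep the precomputed squares in a HashSet keyed on the coordinates, so the legality check becomes a constant-time lookup:

        using System.Collections.Generic;

        public class PrecomputedMoves
        {
            // Precomputed legal destination squares for a piece from its current location.
            private readonly HashSet<(int x, int y)> _squares;

            public PrecomputedMoves(IEnumerable<(int x, int y)> squares)
            {
                _squares = new HashSet<(int x, int y)>(squares);
            }

            // O(1) average-case check instead of scanning a List<Move>.
            public bool LegalMove(int x, int y)
            {
                return _squares.Contains((x, y));
            }
        }

    For a fixed 8x8 board, a bool[8,8] (or a 64-bit bitboard) per piece gives the same constant-time check with less overhead; the set above is just the most direct drop-in for the List<Move> loop.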

    Read the article

  • Solutions for "Maintenance Mode"

    - by Ka Lyse
    Given a web application running across 10+ servers, what techniques have you put in place for doing things like altering the state of your website so that you can implement certain features? For instance, you might want to:
    - Restrict logins / disable certain features
    - Turn the site to "read only"
    - Turn the site to a single "maintenance mode" page
    Doing any of the above is pretty trivial. You can throw a particular "flag" in an .ini file, or add a row/value to a site_options table in your database, and just read that value and do the appropriate thing. But these solutions have their problems.
    Disadvantages/Problems: For instance, if you use a file for your application and you want to switch off a certain feature temporarily, then you need to update this file on all servers. So then you might want to look at running something like ZooKeeper, but you are probably overcomplicating things. So then you might decide that you want to store these "feature" flags in a database. But then you are obviously adding unnecessary queries to each page request. So you think to yourself that you will throw memcached into the mix and just cache the query. Then you retrieve all of your "features" from memcached and add ~2ms of latency to your application on every page. So to get around this, you decide to use a two-tier cache system, whereby you use an in-memory cache on each machine (like APC/Redis etc). This would work, but then it gets complicated, because you would have to set the key/hash lifetime to perhaps 60 seconds, so that when you purge/invalidate the memcached object storing your "features" result, your on-machine cache picks up the new states promptly. What suggestions might you have? Keeping in mind that optimization/efficiency is the priority here.
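
    A minimal sketch of the two-tier idea described above - every name here is hypothetical, and the shared store (memcached, Redis, or the site_options table) is hidden behind a stub interface rather than tied to a specific client library. Each server keeps a local in-memory copy of the flags for a short TTL and only goes back to the shared store when that copy expires, so an ordinary page request costs no extra network hop:

        using System;
        using System.Collections.Generic;

        // Hypothetical abstraction over the shared store (memcached, Redis, or a site_options table).
        public interface ISharedFlagStore
        {
            IDictionary<string, bool> LoadAllFlags();
        }

        public class TwoTierFeatureFlags
        {
            private readonly ISharedFlagStore _shared;
            private readonly TimeSpan _localTtl;
            private readonly object _lock = new object();
            private IDictionary<string, bool> _localCopy;
            private DateTime _expiresAtUtc = DateTime.MinValue;

            public TwoTierFeatureFlags(ISharedFlagStore shared, TimeSpan localTtl)
            {
                _shared = shared;
                _localTtl = localTtl;
            }

            public bool IsEnabled(string flagName)
            {
                lock (_lock)
                {
                    if (DateTime.UtcNow >= _expiresAtUtc)
                    {
                        // Local copy is stale: refresh from the shared store. This is one
                        // network/database hit per server per TTL window, not per page request.
                        _localCopy = _shared.LoadAllFlags();
                        _expiresAtUtc = DateTime.UtcNow.Add(_localTtl);
                    }

                    bool enabled;
                    return _localCopy.TryGetValue(flagName, out enabled) && enabled;
                }
            }
        }

    With a 60-second local TTL, flipping a flag in the shared store reaches every server within about a minute, which is exactly the propagation trade-off described above.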

    Read the article

  • My sound is not working, so I'm going to reinstall FYI [closed]

    - by fer
    I've had trouble getting the sound to work in Ubuntu 12.04. I'm running an Acer Aspire 5739g laptop. This is using a clean install. This wasn't a problem when Ubuntu was first installed. Rather, it was when I ran the updates that it stopped working. I already tried the suggestions on the Ubuntu sites and other similar queries, and they haven't fixed it. Something in the updates is making my sound not work. Edit: It turns out that this might be a bug (the sound issue, first paragraph). After reinstalling, it happened again (it's not caused by updates at all, or any software, because I fixed it now without a reinstall). It seems like I replicated it as follows: I changed auto-hide in the behavior tab of Appearance settings by turning it on, and setting the sensitivity to below the recommended setting. Then instead of restarting, I just logged out and back in. The sound stopped working again. I set the behavior settings to default, restarted, and now it's back to normal. Not sure if it's due to only logging out (and not restarting) or because I set my sensitivity to a low setting. Not sure if this helps anyone, but thought I'd mention it.

    Read the article

  • Best CMS for review-type sites

    - by Pru
    Is there an ideal CMS for making a review site? By review site, I mean like a restaurant review site where you have each entry belonging to different major categories like Cuisine and City. Then users can browse and filter by each or by combination (Chinese Food in Los Angeles, with suggestions of other Chinese restaurants in LA, etc). Furthermore, I'd want it to support other fields like price, parking, kid-friendliness, etc. And to have users be able to filter by those criteria. I've been told that with a combination of custom taxonomies, plug-ins and many clever little queries, that Wordpress 3.x can handle this. But I'm having a heck of a time with it getting into the nitty gritty, and that's where I find the community support is lacking. The sort of stuff you'd think would work in WP, like making one parent category for Cuisine and one for City, don't really work once you get further in and start trying to pull it all together. Then you find these blog posts where people say, "This example shows that one could create a huge movie review site using custom taxonomies..." but when you go and try it you hit all sorts of challenges and oddities that point a big long finger at Wordpress being in fact a blogging platform. The best I came up with was one category for the cuisine and one tag for the city, then I created a couple of custom tag-like taxonomies for the other features. It's quite a mess to try to figure out how to assemble all of that into a natural, intuitive site. I expect a few versions down the road WP will be able to do these sorts of sites out of the box. So I thought I'd take a step back before I run back into the Wordpress fray and find out if maybe there is another platform better suited to this sort of relational content site. Directory scripts in some ways offer many of the features I'm looking for, but I need something more flexible and, hopefully, interactive (comments, reviews). I'm especially looking for feedback from people who've crafted sites like this. Thanks!

    Read the article

  • Simplifying data search using .NET

    - by Peter
    A tutorial on the asp.net site has an example of using LINQ to create a search feature on a music album site using MVC. The code looks like this:

        public ActionResult Index(string movieGenre, string searchString)
        {
            var GenreLst = new List<string>();
            var GenreQry = from d in db.Movies
                           orderby d.Genre
                           select d.Genre;
            GenreLst.AddRange(GenreQry.Distinct());
            ViewBag.movieGenre = new SelectList(GenreLst);

            var movies = from m in db.Movies select m;

            if (!String.IsNullOrEmpty(searchString))
            {
                movies = movies.Where(s => s.Title.Contains(searchString));
            }
            if (!string.IsNullOrEmpty(movieGenre))
            {
                movies = movies.Where(x => x.Genre == movieGenre);
            }
            return View(movies);
        }

    I have seen similar examples in other tutorials and I have tried them in a real-world business app that I develop/maintain. In practice this pattern doesn't seem to scale well, because as the search criteria expand I keep adding more and more conditions, which looks and feels unpleasant/repetitive. How can I refactor this pattern? One idea I have is to create a column in every table that is "searchable", which could be a computed column that concatenates all the data from the different columns (SQL Server 2008). So instead of having movie genre and title it would be something like:

        if (!String.IsNullOrEmpty(searchString))
        {
            movies = movies.Where(s => s.SearchColumn.Contains(searchString));
        }

    What are the performance/design/architecture implications of doing this? I have also tried using procedures that use dynamic queries, but then I have just moved the ugliness to the database. E.g.:

        CREATE PROCEDURE [dbo].[search_music]
            @title as varchar(50),
            @genre as varchar(50)
        AS
            -- set the variables to null if they are empty
            IF @title = '' SET @title = null
            IF @genre = '' SET @genre = null

            SELECT m.*
            FROM view_Music as m
            WHERE (title = @title OR @title IS NULL)
              AND (genre LIKE '%' + @genre + '%' OR @genre IS NULL)
            ORDER BY Id desc
            OPTION (RECOMPILE)

    Any suggestions? Tips?
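
    One common way to stop the if blocks from multiplying - a sketch only; WhereIf is a hypothetical helper name, not part of the framework - is to push the "only filter when a value was supplied" check into a small IQueryable extension, so each new criterion is one line:

        using System;
        using System.Linq;
        using System.Linq.Expressions;

        public static class QueryableExtensions
        {
            // Applies the predicate only when the condition is true, so optional search
            // criteria compose without a separate if (!String.IsNullOrEmpty(...)) block each.
            public static IQueryable<T> WhereIf<T>(
                this IQueryable<T> source,
                bool condition,
                Expression<Func<T, bool>> predicate)
            {
                return condition ? source.Where(predicate) : source;
            }
        }

    The body of the action then collapses to a single chain such as db.Movies.WhereIf(!String.IsNullOrEmpty(searchString), m => m.Title.Contains(searchString)).WhereIf(!String.IsNullOrEmpty(movieGenre), m => m.Genre == movieGenre), and because the predicates stay as expressions the filtering is still translated to SQL rather than done in memory.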

    Read the article

  • Aggressive Auto-Updating?

    - by MattiasK
    What do you guys think is best practice regarding auto-updating? Google Chrome, for instance, seems to auto-update itself as soon as it gets a chance without asking, and I'm fine with it. I think most "normal" users benefit from updates being a transparent process. Then again, some more technical users might be miffed if you update their app without permission. As I see it there are 3 options:
    1) Have a checkbox when installing that says "allow automatic updates"
    2) Just have a preference somewhere that allows you to "disable automatic updates" so that you have to "check for updates manually"
    I'm leaning towards 2), because 1) feels like it might alienate non-technical users and I'd rather avoid installation queries if possible. Also I'm thinking about making it easy to downgrade if an upgrade (heaven forbid) causes trouble; what are your thoughts? Another question: even if updates are applied automatically, perhaps they should be announced - if there are new features, for example, otherwise you might not realize and use them. One thing that kinda scares me though is the security implications: someone could theoretically hack my server and push out spyware/zombieware to all my customers. It seems that using digital signatures to prevent man-in-the-middle attacks is the least you could do, otherwise you might be hooked up to a network that spoofs the address of your update server.

    Read the article

  • Google Rolls Out Secured Search. It’s Slightly Different From Regular Search

    - by Gopinath
    Google rolled out a secured version of its search engine at https://google.com (did you notice https instead of http?). This search engine lets everyone use Google search in a secured way. How is it secured? When you use https://google.com, the data exchanged between your browser and Google servers is encrypted to make sure that no one can sniff it. Is my search history secured from Google? No. The search queries you submit to Google are stored in Google servers. There is no change to Google’s search history recording. Any differences between Regular Search and Secured Search results? Yes. Secured search is slightly different from regular search. When you are accessing Google Secured Search, image search options will not be available on the left sidebar. The site may also respond more slowly compared to the regular search site, as there is overhead in establishing a secure connection between your browser and the server. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • ADF Logging In Deployed Apps

    - by Duncan Mills
    Harking back to my series on using the ADF logger and the related ADF Insider Video, I've had a couple of queries this week about using the logger from Enterprise Manager (EM). I've alluded in those previous materials to how EM can be used, but it's evident that folks need a little help. So in this article I'll quickly look at how you can switch logging on from the EM console for an application and how you can view the output. Before we start, I'm assuming that you have EM up and running; in my case I have a small test install of Fusion Middleware Patchset 5 with an ADF application deployed to a managed server.
    Step 1 - Select your Application. In the EM navigator, select the app you're interested in. At this point you can actually bring up the context (right mouse click) menu to jump to the logging, but let's do it another way.
    Step 2 - Open the Application Deployment Menu. At the top of the screen, underneath the application name, you'll find a drop-down menu which will take you to the options to view log messages and configure logging.
    Step 3 - Set your Logging Levels. Just like the log configuration within JDeveloper, we can set up transient or permanent (not recommended!) loggers here. In this case I've filtered the class list down to just oracle.demo, and set the log level to config. You can now go away and do stuff in the app to generate log entries.
    Step 4 - View the Output. Again from the Application Deployment menu we can jump to the log viewer screen and, as I have here, start to filter down the logging output to the stuff you're interested in. In this case I've filtered by module name. You'll notice here that you can again look at related log messages. Importantly, you'll also see the name of the log file that holds this message, so if you'd rather analyse the log in more detail offline, through the ODL log analyser in JDeveloper, then you can see which log to download.

    Read the article

  • Best approach for tracking dependent state

    - by Pace
    Let's pretend I work on a project tracking application. The application is a database-backed, server-hosted web application. In this application there are Projects, which have many Activities, which have many Tasks. A Task has two date fields: an originalDueDate and a projectedDueDate. In addition, there are dynamic fields on the Activities and the Projects which indicate whether the Activity or Project is behind schedule, based on the projected due dates of the child tasks and various other variables such as remaining buffer time, etc. There are a number of things that can cause the projectedDueDate to change. For example, an employee working on the project may (via a server request) enter in a shipping delay. Alternatively, a site may (via a server request) enter in an unexpected closure. When any of these things occur I need to not only update the projectedDueDate of the Task but also trigger the corresponding Project and Activity to update as well. What is the best way to do this? I've thought of the observer pattern, but I don't keep a single copy of all these objects in memory. When a request comes in, I query the Task from the database; at that point there is no associated Activity in memory that would be a listener. I could remove the ability to query for Tasks and force the application to query first by Project, then by Activity (in context of Project), then by Task (in context of Activity), adding the observer relationships at each step, but I'm not sure if that is the best way. I could set up a database event listening system, so when a "Task modified" event is dispatched I have a handler which queries for the Activity at that point. I could simply set up a two-way relationship between Task and Activity so that the Task knows about the parent Activity, and when the Task updates its state it grabs its parent and updates the parent's state. Right now I'm stuck considering all the options and am wondering if any single approach (doesn't have to be a listed approach) is jumping out at others as the best approach.
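
    A minimal sketch of the last option above (the two-way Task/Activity relationship); the class and method names are hypothetical and the "behind schedule" rules are deliberately simplified. When a request changes a Task's projected date, the Task asks its parent Activity to recompute, and the Activity in turn asks its Project, so the dependent state is refreshed in one pass without keeping long-lived observers in memory:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class Project
        {
            public List<Activity> Activities { get; } = new List<Activity>();
            public bool BehindSchedule { get; private set; }

            public void Recalculate()
            {
                // Simplified rule: the project is behind if any child activity is behind.
                BehindSchedule = Activities.Any(a => a.BehindSchedule);
            }
        }

        public class Activity
        {
            public Project Parent { get; set; }
            public List<Task> Tasks { get; } = new List<Task>();
            public bool BehindSchedule { get; private set; }

            public void Recalculate()
            {
                // Simplified rule: behind if any task has slipped past its original due date.
                BehindSchedule = Tasks.Any(t => t.ProjectedDueDate > t.OriginalDueDate);
                Parent?.Recalculate();
            }
        }

        public class Task
        {
            public Activity Parent { get; set; }
            public DateTime OriginalDueDate { get; set; }
            public DateTime ProjectedDueDate { get; private set; }

            public void ApplyDelay(TimeSpan delay)
            {
                // e.g. a shipping delay entered via a server request.
                ProjectedDueDate = ProjectedDueDate.Add(delay);
                Parent?.Recalculate();
            }
        }

    The cost is that the Task's parent must be loaded (or lazily loadable) when the request comes in, which is the trade-off the question already identifies.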

    Read the article

  • You cannot do cross joins in SQL Azure but there is a way around that....

    - by SeanBarlow
    So I was asked today how to do cross joins in SQL Azure using LINQ. Well, the simple answer is you can't do it. It is not supported, but there are ways around that. The solution is actually very simple and easy to implement. So here is what I did and how I did it. I created two SQL Azure databases. The first database is called AccountDb and has a single table named Account, which has an ID, CompanyId and Name in it. The second database I called CompanyDb and it contains two tables. The first table I named Company and the second I named Address. The Company table has an Id and Name column. The Address table has an Id and CompanyId column. Since we cannot do cross joins in Azure, we have to have one of the models preloaded with data. I simply put the Accounts into a List of accounts and use that in my join.

        var accounts = new AccountsModelContainer().Accounts.ToList();
        var companies = new CompanyModelContainer().Companies;
        var query = from account in accounts
                    join company in
                        (
                            from c in companies
                            select c
                        ) on account.CompanyId equals company.Id
                    select new AccountView()
                    {
                        AccountName = account.Name,
                        CompanyName = company.Name,
                        Addresses = company.Addresses
                    };
        return query.ToList();

    So as long as you have your data loaded from one of the contexts, you can still execute your queries and get the data back that you want.

    Read the article

  • How Does Domain Know Where Your Web Host Is Located [closed]

    - by icu222much
    Possible Duplicate: How Does Domain Know Where Your Web Host Is Located? I just purchased a domain name from RapidNames, and a hosting plan at JustHost. I was told to enter JustHost's name server (ns1.justhost.com) in my domain name's name server field and wait for 24 hours for the process to be complete. I do not understand how RapidNames can find my account on JustHost's server as I am sure I am not JustHost's only customer. I have read the article How DNS Works that John Conde has posted, but I still do not understand the issue. After reading several other articles, I am beginning to understand how it works, but I would still like someone to confirm if I am correct or not. From my understanding, linking your domain name to your web host is a two step process. First, you need to tell your domain name who your web host is. This is done by providing the two DNS server addresses. Secondly, you need to tell your web host which domain names you own by entering your domain names into the domain name manager. As a result, when someone queries your domain name, they will be forwarded to your web host. The web host will look in their database to match the domain name the account's owner, and then serve the appropriate website. I want to confirm if my understanding of how a domain knows where your web host is located is accurate?

    Read the article

  • What is meant by "no password set" for root account (and others)?

    - by MMA
    Several years back, we were more accustomed to changing to the root account using the su command. First, we switched to the root account, and then executed those root commands. Now we are more accustomed to using the sudo command. But we know that the root account is there. We can readily find the home directory of user root:

        $ ls -ld /root/
        drwx------ 18 root root 4096 Oct 22 17:21 /root/

    Now my point is, it is stated that "the root password in Ubuntu is left unset". Please see the answers to this question. Most of the answers have something to this effect in the first paragraph. One or two answers further state that "the account is left disabled". Now my (primary) questions are: What is meant by an unset password? Is it blank? Is it null? Or something else more cryptic? How does the account become enabled once I set a password for it? (sudo passwd root) In order to get a better understanding, I checked the /etc/shadow file. Since I have already set a password for the root account, I can no longer see what was there (the encrypted password field). So, I created another account and left it disabled. The corresponding entry in the /etc/shadow file is:

        testpassword:!:16020:0:99999:7:::

    Now perhaps my above queries need to be changed to: what does an ! in the password field mean? Other encrypted passwords are those very long cryptic strings; how come this "encrypted" form is only one character long? And does an account become disabled if I put an ! in the (encrypted) password field?

    Read the article

  • What are the signs that a ten-day debugging session will not resolve an issue? [on hold]

    - by smonff
    Ten days ago, we fixed a bug in a large application, and the hot fix caused some data to disappear from the users' point of view (a side effect). The data is not deleted, but has been set to a hidden status. It could be possible to get the data back, but it seems hard: we've already spent 10 days understanding and reproducing the problem (mostly with SQL queries, but sometimes it is necessary to update the database to test the application logic). My questions are: Is 10 days a normal amount of time for this kind of problem? Should we keep on and retrieve the data, or should we give up this work (so the customer-relationship person will tell these users "sorry for the loss, but your data have disappeared", or maybe tell nothing at all)? What are the signs that show we should stop searching for how to solve this issue? Edit about the context: we are a small team (3), the users are not the customers, and the lost data is not about the users' money, bank or vital data. This is a question from a confused developer about development methodologies and business concerns, not about how we should deal with the customers.

    Read the article

  • OOP Structure for web application

    - by Query
    OK, so I have a website on which users complete tasks to earn points. When they earn enough points, they rise in rank. The site, from my understanding, is very basic and only executes one or two queries at most per page. There is a user table, a support ticket table, and an orders table. All of these contain a column relating each row to a username. Our class was familiarized with OOP back in high school with Java, but that was for video games, and I could grasp why you would need a Player class and an Enemy class. However, I don't understand its application to the web - at least not in my situation. I understand the user class might contain stuff like: getUsername, getPoints, getEmail, setEmail, addPoints (does this belong here, or should only things the user can manipulate be here?), etc. But I'm at a loss with everything else, such as user registration. Can you help give me a wire framework that I could wrap my head around? Pointing me to a good eBook would help greatly.
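
    As an illustration of one common split - a sketch only; the question doesn't name a language, so C# is used as elsewhere on this page, and every class name is hypothetical - the User class holds state plus the rules a user owns, a repository hides the database rows, and registration lives in its own service rather than inside User:

        public class User
        {
            public string Username { get; private set; }
            public string Email { get; set; }
            public int Points { get; private set; }

            public User(string username, string email)
            {
                Username = username;
                Email = email;
            }

            // Rank is derived from points, so it is computed rather than stored.
            public int Rank
            {
                get { return Points / 100; }   // hypothetical points-per-rank threshold
            }

            public void AddPoints(int amount)
            {
                Points += amount;
            }
        }

        // Hides the user table; nothing else in the code writes SQL directly.
        public interface IUserRepository
        {
            User FindByUsername(string username);
            void Save(User user);
        }

        // Registration is a use case, not a property of a single user, so it gets its own service.
        public class RegistrationService
        {
            private readonly IUserRepository _users;

            public RegistrationService(IUserRepository users)
            {
                _users = users;
            }

            public User Register(string username, string email)
            {
                if (_users.FindByUsername(username) != null)
                    throw new System.InvalidOperationException("Username already taken.");

                var user = new User(username, email);
                _users.Save(user);
                return user;
            }
        }

    The support ticket and order tables would get the same treatment: an entity class, a repository interface, and a service per use case that touches them.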

    Read the article
