Search Results

Search found 27143 results on 1086 pages for 'include path'.


  • Class instance clustering in object reference graph for multi-entries serialization

    - by Juh_
    My question is about the best way to cluster a graph of class instances (i.e. objects, the graph nodes) linked by object references (the directed edges of the graph) around specifically marked objects. To explain the question better, let me describe my motivation. I currently use a moderately complex system to serialize the data used in my projects:

    - "Marked" objects have a specific attribute which stores a "saving entry": the path to an associated file on disc (but it could be done for any storage type providing the suitable interface).
    - Those objects can then be serialized automatically (e.g. obj.save()).
    - The serialization of a marked object 'a' implicitly contains all objects 'b' that 'a' references, either directly (a.b = b) or indirectly (a.c.b = b for some object 'c').

    This is very simple and basically assigns specific storage entries to specific objects. I then have "container" type objects that:

    - can be serialized similarly (in fact they are, or can be, "marked");
    - do not serialize the directly referenced "marked" objects into their own storage entries: if a and a.b are both marked, a.save() calls b.save() and stores a.b = storage_entry(b).

    So, if I serialize 'a', it automatically serializes all objects that can be reached from 'a' through the object reference graph, possibly in multiple entries. That is what I want, and it usually provides the functionality I need. However, it is very ad hoc and there are some structural limitations to this approach: the multi-entry saving only works through direct connections in "container" objects, and there are situations with undefined behavior, such as when two "marked" objects 'a' and 'b' both have a reference to an unmarked object 'c'. In this case my system stores 'c' in both 'a' and 'b', making an implicit copy which not only doubles the storage size but also changes the object reference graph after re-loading.

    I am thinking of generalizing the process. Apart from the practical questions on implementation (I am coding in Python, and use pickle to serialize my objects), there is a general question on the way to attach (cluster) unmarked objects to marked ones. So, my questions are:

    - What are the important issues that should be considered? Basically, why not just use any graph traversal algorithm with an "attach to the last marked node" behavior?
    - Is there any work done on this problem, practical or theoretical, that I should be aware of?

    Note: I added the tag graph-database because I think the answer might come from that field, even if the question is not about databases.
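
    Since the question mentions Python and pickle, here is a minimal sketch of the "attach to a marked node" idea being asked about: a breadth-first traversal started from the marked objects that claims each reachable unmarked object for one owner and flags objects reachable from more than one marked object (the problematic 'c' case above). The function and variable names are illustrative only, not part of the system described in the question.

        # Minimal sketch (not the poster's actual code): cluster unmarked objects
        # under marked "owner" objects by a breadth-first traversal of the
        # reference graph, flagging objects reachable from several owners
        # instead of silently duplicating them.
        from collections import deque

        def cluster(marked, edges):
            """marked: iterable of marked node ids.
            edges: dict mapping node id -> list of referenced node ids."""
            owner = {m: m for m in marked}    # each marked node owns itself
            shared = set()                    # nodes reachable from 2+ owners
            queue = deque(marked)
            while queue:
                node = queue.popleft()
                for child in edges.get(node, []):
                    if child in owner:
                        # already claimed; note a conflict if it was claimed by a
                        # different owner and the child is not itself marked
                        if owner[child] != owner[node] and child not in marked:
                            shared.add(child)
                        continue
                    owner[child] = owner[node]   # attach to the nearest marked root
                    queue.append(child)
            clusters = {}
            for node, root in owner.items():
                clusters.setdefault(root, set()).add(node)
            return clusters, shared

        # 'a' and 'b' are marked and both reference the unmarked 'c':
        clusters, shared = cluster(marked={"a", "b"},
                                   edges={"a": ["c"], "b": ["c"], "c": []})
        print(clusters)   # 'c' ends up in exactly one cluster
        print(shared)     # {'c'} -- the implicit-copy case the question describes

    A real implementation would still have to decide what to do with the flagged shared objects (for example, give them their own storage entry, or store them once and reference them from both clusters), which is exactly the design question raised above.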

    Read the article

  • Restoring MSDB

    - by David-Betteridge
    We recently performed a disaster recovery exercise which included the restoration of the MSDB database onto our DR server. I did a quick Google search to see if there were any special considerations and found the following MS article: Considerations for Restoring the model and msdb Databases (http://msdn.microsoft.com/en-us/library/ms190749(v=sql.105).aspx). It said both the original and replacement servers must be on the same version. I double-checked, and in my case they are both SQL Server 2008 R2 SP1 (10.50.2500).

    So I went ahead and stopped SQL Server Agent, restored the database and restarted the agent. I checked the jobs and they were all there; everything looked great, and it was until the server was rebooted a few days later. Then the syspolicy_purge_history job started failing on the 3rd step with the error message "Unable to start execution of step 3 (reason: The PowerShell subsystem failed to load [see the SQLAGENT.OUT file for details]; The job has been suspended). The step failed."

    A bit more googling pointed me to the msdb.dbo.syssubsystems table:

        SELECT * FROM msdb.dbo.syssubsystems WHERE start_entry_point ='PowerShellStart'

    And in particular to the value of subsystem_dll. It still had the path to SQLPOWERSHELLSS.DLL, but on the old server. The DR instance has a different name to the live instance and so the paths are different. This was quickly fixed with the following SQL:

        Use msdb;
        GO
        sp_configure 'allow updates', 1 ;
        RECONFIGURE WITH OVERRIDE ;
        GO
        UPDATE msdb.dbo.syssubsystems
        SET subsystem_dll='C:\Program Files\Microsoft SQL Server\MSSQL10_50.DR\MSSQL\binn\SQLPOWERSHELLSS.DLL'
        WHERE start_entry_point ='PowerShellStart';
        GO
        sp_configure 'allow updates', 0;
        RECONFIGURE WITH OVERRIDE ;
        GO

    I stopped and started SQL Server Agent and now the job completes. I then wondered if anything else might be broken:

        SELECT subsystem_dll FROM msdb.dbo.syssubsystems

    This shows a further 10 wrong paths – fortunately they belong to parts of SQL Server (replication, SSIS etc.) that we aren't using!

    Lessons learnt:
    1. DR exercises are a good thing!
    2. Keep the Live and DR environments as similar as possible.

    Read the article

  • Impressions from VMworld - Clearing up Misconceptions

    - by Monica Kumar
    Gorgeous sunny weather…none of the usual summer fog…the Oracle Virtualization team has been busy at VMworld in San Francisco this week. From the time the exhibits opened on Sunday, our booth staff was fully engaged with visitors. It was great to meet with customers and prospects, and there were many…most with promises to meet again in October at Oracle OpenWorld 2012. Interests and questions ran the gamut - from implementation details, to consolidating applications, to how Oracle VM enables rapid application deployment, to Oracle support and licensing. All good stuff! Some inquiries are poignant and really help us get at the customer pain points. Some are just based on misconceptions. We'd like to address a couple of common misconceptions that we heard:

    1) Rapid deployment of enterprise applications is great, but I don't do this all the time. So why bother? While production applications don't get updated or upgraded as often, development and QA staging environments are much more dynamic. Also, in today's Cloud based computing environments, end users expect an entire solution, along with the virtual machine, to be provisioned instantly, on demand, as and when they need to scale. Whether it's adding a new feature to meet customer demands or updating applications to meet business/service compliance, these environments undergo change frequently. The ability to rapidly stand up an entire application stack with all the components such as database tier, mid-tier, OS, and applications tightly integrated can offer significant value. Hand patching, installation of the OS, application and configurations to ensure the entire stack works well together can take days and weeks. Oracle VM Templates provide a much faster path to standing up a development, QA or production stack in a matter of hours or minutes. I see lots of eyes light up as we get to this point of the conversation.

    2) Oracle software licensing on VMware vSphere. In the world of multi-vendor IT stacks, understanding license boundaries and terms and conditions for each product in the stack can be challenging. Oracle's licensing, though, is straightforward. Oracle software is licensed per physical processor in the server or cluster where the Oracle software is installed and/or running. The use of third party virtualization technologies such as VMware is not allowed as a means to change the way Oracle software is licensed. Exceptions are spelled out in the licensing document labeled "Hard Partitioning".

    Here are some fun pictures! Visitors to our booth told us they loved the Oracle SUV courtesy shuttles that are helping attendees get to/from hotels. Also spotted were several taxicabs sporting an Oracle banner! Stay tuned for more highlights across desktop and server virtualization as we wrap up our participation at VMworld.

    Read the article

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit

    ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit actually is a form submit that does not require a page refresh and only updates components in the view that are referenced from the command component's PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfContext.getCurrentInstance.addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of those fields, as the full form gets submitted.

    Autosubmit

    An input component that has its autosubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form but only the component that triggers the submit, plus the components that reference it in their PartialTriggers property. For example, consider a form that has three input fields inpA, inpB and inpC, with autosubmit=true set on inpA and required=true set on inpB and inpC.

    Use case 1: Running the view, entering data into inpA and then tabbing out of the field will submit the content for inpA but not for inpB and inpC. Furthermore, none of the required field settings on inpB and inpC causes an error.

    Use case 2: You change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields and thus show an error for the missing required value on inpC.

    Internally, using autosubmit=true on an input component sets the event root to just this field, which is good to have in case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on these components to point to the input field that has autosubmit=true defined.

    PartialSubmit vs. AutoSubmit

    Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI component is refreshed in response. Client side required field validation (as well as the server side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Auto submit is a functionality on the input components and also performs a partial form submit. However, in addition, an event root is defined that narrows the scope of the submitted data and thus the components that are validated on the request.

    To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF

    Read the article

  • Using a SQL Prompt snippet with template parameters

    - by SQLDev
    As part of my product management role I regularly attend trade shows and man the Red Gate booth in the vendor exhibition hall. Amongst other things this involves giving product demos to customers. Our latest demo involves SQL Source Control and SQL Test in a continuous integration environment. In order to demonstrate quite how easy it is to set up our tools from scratch, we start the demo by creating an entirely new database to link to source control, using an individual database name for each conference attendee. In SQL Server Management Studio this can be done either by selecting New Database from the Object Explorer or by executing "CREATE DATABASE DemoDB_John" in a query window.

    We recently extended the demo to include SQL Test. This uses an open source SQL Server unit testing framework called tSQLt (www.tsqlt.org), which has a CLR object that requires EXTERNAL_ACCESS to be set as follows:

        ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON

    This isn't hard to do, but if you're giving demo after demo, this two-step process soon becomes tedious. This is where SQL Prompt snippets come into their own. I can create a snippet named create_demo_db for the following:

        CREATE DATABASE DemoDB_John
        GO
        USE DemoDB_John
        GO
        ALTER DATABASE DemoDB_John SET TRUSTWORTHY ON

    Now I just have to type the first few characters of the snippet name, select the snippet from SQL Prompt's candidate list, and execute the code. Simple! The problem is that this can only work once, due to the hard-coded database name. Luckily I can leverage a nice feature in SQL Server Management Studio called Template Parameters. I can modify my snippet to be:

        CREATE DATABASE <DBName,, DemoDB_>
        GO
        USE <DBName,, DemoDB_>
        GO
        ALTER DATABASE <DBName,, DemoDB_> SET TRUSTWORTHY ON

    Once I've invoked the snippet, I can press Ctrl-Shift-M, which calls up the Specify Values for Template Parameters dialog, where I can type in my database name just once. Now I can click OK and run the query. Easy.

    Ideally I'd like SQL Prompt to auto-invoke the Template Parameter dialog for all snippets where it detects the angled bracket syntax, but typing in the keyboard shortcut is a small price to pay for the time savings.

    Read the article

  • Play or Lift: which one is more explicit?

    - by Andrea
    I am going to investigate web development with Scala, and the choice is between learning Lift or Play: probably I will not have enough time to try both, at least at first. Now, many comparisons between the two are available on the internet, but I would like to know how they compare with respect to being explicit and involving less magic.

    Let me explain what I mean by example. I have used, to various degrees, CakePHP, symfony2, Django and Grails. I feel a very clear distinction between Django and symfony2, which are very explicit about what you are doing, and Grails and CakePHP, which try to do their best to guess what you are trying to achieve and often feel "magical". Let me give some examples comparing Django and Grails.

    In Django, views are functions that take a request as input and return a response. You can explicitly instantiate an instance of HttpResponse and populate its body with a string, or you can use shortcut functions to leverage the template system. In any case the return value from your view always has the same type. In contrast, the render method from Grails is highly polymorphic. You can throw a context at it and it will try to render a template which is found by convention using that context. Or you can pass it a pair of a template path and a context and that will work too. Or a string. Or XML. Grails tries hard to make sense of whatever you return from your controller.

    In the Django ORM, each model class has a static attribute representing the manager for that class. That manager exposes a fluent interface to build querysets. In Grails, you can have similar functionality by composing detached criteria. Still, the most common way to query objects seems to be the use of runtime-generated methods like FindUserByEmailNotNull or FindPostByDateGreaterThan.

    I will not go further, but my point is that in Django-like frameworks you have control over the whole flow of the request/response process, while in Grails-like ones I feel I only have to fill in the blanks and the framework will manage the rest of the flow for me. This is not to criticize Grails or CakePHP; which type you prefer is mainly a matter of preference. In fact, I happen to like some aspects of Grails, but I feel more comfortable with a framework which does less for me.

    Back to the point of the question: which one among Play and Lift is more explicit about what you do, and which one tries to simplify more of what you have to do with a layer of "magic"?
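
    As a concrete illustration of the "explicit" Django style described in the question, here is a minimal Django-flavoured sketch (the view names and template path are made up for the example): each view is an ordinary function from a request to an HttpResponse, whether you build the response by hand or use the render shortcut.

        # Minimal sketch of Django's explicit request -> response flow.
        # Names (post_list_plain, "posts/list.html") are illustrative only.
        from django.http import HttpResponse
        from django.shortcuts import render

        def post_list_plain(request):
            # Build the response by hand: the return type is always HttpResponse.
            return HttpResponse("<h1>All posts</h1>", content_type="text/html")

        def post_list(request):
            # The render shortcut also returns an HttpResponse; the template and
            # context are named explicitly rather than guessed by convention.
            return render(request, "posts/list.html", {"posts": []})

    Nothing in either function is inferred from naming conventions, which is the property being compared against Grails' polymorphic render above.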

    Read the article

  • Reduce weight in healthy way - Day 2

    - by krnites
    My second day of reducing weight, and it seems most of the blogs are correct in saying that you can reduce weight if your calorie consumption is less than what you burn. In one day I have lost 1 lb without doing anything. My current weight is 177.4 lbs. Yesterday I ate a smaller portion of dinner than I used to eat, and that was around 7 PM. Normally I eat my dinner around 10 PM and within 2 hours of eating I go to sleep, but yesterday I ate around 7 PM and went to sleep only after 12.

    On my second day I have eaten noodles and 3 eggs for breakfast and sesame chicken (I love it) and fried rice for lunch. I still have not gone for a run but plan to go running and then swimming. I hope it will at least burn the calories that I have taken in. On some sites it was written that a normal man's body needs around 2000 calories a day. So if I am eating less than 2000 calories (noodles + 3 eggs = 400 + 200, rice + sesame chicken = 1300, total = 1900) and burning around 300 calories, my net calorie intake will be 1600, which is less than what my body needs. So most probably by tomorrow I should come under the 176 lb bracket.

    Apart from counting the calories that I am taking in every day and the approximate number of calories that I am burning every day, I have also started tracking my physical activities on my mobile. I have got a beautiful Samsung Focus S Windows 7.5 mobile, and after browsing through the market I have downloaded a couple of health apps:

    1. 6 Week Training - this has a set of exercises and lets you choose the number of sets you want to do for each exercise. It focuses on your core muscles.
    2. Fast Food Calories - this app has all the fast food chains listed and gives the calorie count of each of the food items available on their menus. For example, Burger King's French Fries Large (Salted) contains 500 calories.
    3. Gym Pocket Guide - contains instructions for different kinds of exercise and tells you the right way of doing them.
    4. RunSat - a kind of GPS based application. It marks the distance you have run, shows the path you have taken on a map, total calories burnt, laps completed. I love this app.
    5. Stop Watch

    I have also noticed that if I am running in the gym and have a television in front of me showing a movie or show that I like, I normally don't notice the time. Most of the time running on a treadmill is very boring, but if some music video is playing or some kind of sitcom is on, I can run for an hour or an hour and a half.

    So on day 2 I have lost 1 lb and have learnt that calorie intake should be less than calories burnt for a given day.
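
    For what it's worth, the day's arithmetic above can be written down as a tiny calculation (the numbers are the rough estimates from the post, not measured values):

        # Rough calorie balance for the day, using the post's own estimates.
        daily_need = 2000                      # approximate need quoted for an average man
        intake = {
            "noodles": 400,
            "eggs (3)": 200,
            "sesame chicken + fried rice": 1300,
        }
        burned_exercising = 300                # planned run/swim estimate

        eaten = sum(intake.values())
        net = eaten - burned_exercising
        print(f"Eaten: {eaten} kcal, net after exercise: {net} kcal")
        print(f"Deficit vs daily need: {daily_need - net} kcal")
        # Eaten: 1900 kcal, net after exercise: 1600 kcal
        # Deficit vs daily need: 400 kcal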

    Read the article

  • Games at Work Part 1: Introduction to Gamification and Applications

    - by ultan o'broin
    Games Are Everywhere

    How many of you (will admit to) remember playing Pong? OK then, do you play Angry Birds on your phone during work hours? Thought about why we keep playing online, video, and mobile games, and what this "gamification" business we're hearing about means for the enterprise applications user experience?

    In Reality Is Broken: Why Games Make Us Better and How They Can Change the World, Jane McGonigal says that playing computer and online games now provides more rewards for people than their real lives do. Games offer intrinsic rewards and happiness to the players as they pursue more satisfying work and the success, social connection, and meaning that goes with it. Yep, Gran Turismo, Dungeons & Dragons, Guitar Hero, Mario Kart, Wii Boxing, and the rest are all forms of work, it seems. Games are, in fact, work taken so seriously that governments now move to limit the impact of virtual gaming currencies on the real financial system. Anyone who spends hours harvesting crops on FarmVille realizes it's hard work too. Yet games evoke a positive emotion in players who voluntarily stay engaged with games for hours, day after day. Some 183 million active gamers in the United States play on average 13 hours per week. Weekly, 5 million of those gamers play for longer than a working week (45 hours). So why not harness the work put into games to solve real-world problems? Or, in the case of our applications users, real-world work problems?

    What's a Game?

    Jane explains that all games have four defining traits: a goal, rules, a feedback system, and voluntary participation. We need to look at which motivational ideas behind the dynamics of the game—what we call gamification—are appropriate for our users. Typically, these motivators are achievement, altruism, competition, reward, self-expression, and status. Common game techniques for leveraging these motivations include:

    - Badging and avatars
    - Points and awards
    - Leader boards
    - Progress charts
    - Virtual currencies or goods
    - Gifting and giving
    - Challenges and quests

    Some technology commentators argue for a game layer on top of everything, but this layer is already part of our daily lives in many instances. We see gamification working around us already: the badging and kudos offered on My Oracle Support or other Oracle community forums, becoming a Dragon Slayer implementor of Atlassian applications, being made duke of your favorite coffee shop on Yelp, sharing your workout details with Nike+, or donating to Japanese earthquake relief through FarmVille, for example. And what does all this mean for the applications that you use in your work? Read on in part two...

    Read the article

  • ArchBeat Facebook Friday: Top 10 Shared Links - May 23-29, 2014

    - by OTN ArchBeat
    Among the 5,144 fans of the OTN ArchBeat Facebook Page, the following Top 10 items were the most popular over the last seven days, May 23-29, 2014.

    1. GlassFish/Java EE Community Open Forum Today! | Reza Rahman
       Have questions about GlassFish? Java EE/GlassFish evangelist Reza Rahman has answers, and you can pick his brain tomorrow during an online forum organized by the London GlassFish User Group and C2B2. The event is free, but you must register in order to participate. Click the link for more information.
    2. Twitter Tuesday - Top 10 @ArchBeat Tweets - May 20-26, 2014
       The top 10 @OTNArchBeat tweets for the week of May 20-26, 2014. Topics covered include ADF, Cloud, GoldenGate, KScope14, OBIEE, ODI, WebLogic, WebCenter, and more.
    3. FrameworkFolders Support has come to Oracle WebCenter Portal | JayJay Zheng
       Interested in working with Framework Folders in Oracle WebCenter Portal? Oracle ACE JayJay Zheng reviews the essentials.
    4. Video: Programming Best Practices - ADF Business Components | Frank Nimphius
       Frank Nimphius discusses best practices and recommendations for ADF Business Components in the latest video from ADF Architecture TV.
    5. Video: Kscope 2014 Preview: Data Modeling and Moving Meditation with Kent Graziano
       For your mind and your body! Oracle ACE Director Kent Graziano previews his Kscope 2014 data modeling presentations and the early morning Chi Gung sessions he will once again lead for Kscope attendees.
    6. OAG and OES Integration for Web API Security: skin and guts | Andre Correa
       A-Team architect Andre Correa's post examines a strategy for web API security that uses OAG (Oracle API Gateway) and OES (Oracle Entitlements Server).
    7. Getting Started with Coherence*Web in WebLogic Server 12.1.2 | Tim Middleton
       Solution architect Tim Middleton shows you how to configure Coherence*Web in WebLogic Server 12.1.2 and deploy a basic web application.
    8. SOA and Business Processes: You are the Process!
       Part of the 13-part "Industrial SOA" article series, this article looks at best practices for modeling and managing effective business processes.
    9. Authentication in Oracle Identity Federation / IdP | Damien Carru
       Damien Carru discusses authentication when OIF acts as an IdP and how the server can be configured to use specific OAM Authentication Schemes to challenge the user.
    10. Caveats on Using WebLogic Server with JDK7 | JayJay Zheng
        Quick tech tips from Oracle ACE JayJay Zheng.

    Read the article

  • Improving the performance of JDeveloper11g (part 2) and JVMs in general

    - by asantaga
    Just received an email from one of our JVM developers who read my blog entry on Performance tuning JDeveloper11g, and he's confirmed that all of the above parameters are totally supported :-) He's also provided a description of the parameters so we can learn what magic is actually being applied.

    - -XX:+AggressiveOpts -- this enables the latest and greatest JVM optimizations. It will likely help most Java applications. It's fully supported. The downside of it is that because it has the latest and greatest optimizations, there is some small probability that it may not offer as good an experience. As the features enabled with this command line option have "matured", they are made the default in a future JDK release. So, you can think of this command line option as the place where the newest optimizations get introduced. Some time later they are moved out from under AggressiveOpts to become default behavior.
    - -XX:+OptimizeStringConcat -- only works with the -server JVM. It may be enabled by default in a future JDK 7 update release. This option delays the construction of a StringBuilder/StringBuffer and attempts to avoid re-sizing the underlying char[] by attempting to detect the size of the char[] to allocate based on what's being appended to the StringBuilder/StringBuffer.
    - -XX:+UseStringCache -- I would not suggest using this unless you knew that JDeveloper allocated the same string over and over again, and the string that's allocated over and over again is one of the first 100,000 allocated strings. In short, I'd recommend against using it. And, in fact, Java 7 (currently) does not include this feature.
    - -XX:+UseCompressedOops -- applicable to 64-bit JVMs. And, if you're using a 64-bit JVM, I'd suggest you use it. It's auto enabled in JDK 7 64-bit JVMs, and later JDK 6 64-bit JVMs enable it by default too.
    - -XX:+UseGCOverheadLimit -- by default this option is already enabled.

    One other command line option to consider is -XX:+TieredCompilation, for JDK 6 Update 25 or later, or JDK 7. This gives you the startup of a -client JVM and the peak performance of a -server JVM. Awesome-ness!

    Finally, Charlie also pointed out to me a "new" book he's just published where he goes into the details of JVM tuning, a must for all Fusion Middleware tuning exercises. Thanks Charlie!

    Read the article

  • NEW: Oracle Certification Exam Preparation Seminars

    - by Harold Green
    Hi Everyone, I am really excited about a new offering that we are announcing this week - Oracle Certification Exam Preparation Seminars. These are something that will make a big difference for many of you in your efforts to become certified and move your career forward. They are also something that has previously only been available (but very popular) to the limited number of customers who have attended our annual conferences in San Francisco (Oracle OpenWorld and JavaOne). These are the first in a series of offerings that we are releasing over the next few months. So for those of you either preparing for or considering Oracle certification - keep watching here on the blog, Facebook, Twitter and the Oracle Certification website for additional announcements related to our most popular certification areas. Details of the new Exam Preparation Seminars are found below.

    NEW: ORACLE CERTIFICATION EXAM PREPARATION SEMINARS

    Becoming Oracle certified is a great way to build your career, gain additional credibility and improve your earning power. We know that the decision to become certified is not trivial. Our surveys indicate that people consider their time investment a critical factor in their decision to become certified. Your time is important. In order to help candidates maximize the efficiency of their study time, we are releasing a new series of video-based seminars called Exam Preparation Seminars. These seminars are patterned after the extremely popular Exam Cram sessions that until now have only been available at our annual customer conferences (Oracle OpenWorld and JavaOne). Beginning today they are available to anyone, anywhere as a part of this Exam Prep Seminar series.

    Features:
    - Fast-paced, objective by objective review of the exam topics - led by top Oracle University instructors
    - 24/7 access through Oracle University's training on demand platform
    - Ability to re-watch all or part of the seminar
    - All the conveniences of video-based training: start, stop, fast-forward, skip, rewind, review
    - Tips that will help you better understand what you need to know to pass the exam

    The Exam Preparation Seminars are meant to help anyone with a working knowledge of the technology get that extra boost to help them finalize their preparation, and will help anyone who wants a better understanding of the depth and breadth of the exam topics and objectives.

    Benefits:
    - Save time by understanding what you should study.
    - Makes you efficient because you will understand the breadth and depth of each of the exam topics.
    - Helps you create a better, more efficient study plan.
    - Improves your confidence in your skills and ability to pass the certification exam.

    Exam Preparation Seminars are available individually, or in convenient Value Packages (which include the Exam Preparation Seminar and an exam voucher, which includes one free retake if you need it). Currently we are releasing two seminars - one for DBA SQL and one for DBA Administration I. Additional offerings are in process.

    Find out more:
    - General
      WEB: Oracle Certification Exam Preparation Seminars
      VIDEO: Exam Preparation Seminars Promo (1:27)
    - Oracle Database Administration I (11g, 10g)
      VIDEO: Instructor Introduction (1:08)
      VIDEO: Sample Video (2:16)
    - Oracle Database SQL
      VIDEO: Instructor Introduction (1:08)
      VIDEO: Sample Video (2:16)

    Read the article

  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand my partition. Currently my NTFS partition (450GB) has about 130GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into windows so I could run chkdsk but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work. Finally I gave up, and now I just want to know what is the best way for me to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data. I just don't want to wipe the entire NTFS partition because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize but that spews out the same error as GParted... Any ideas?

        Check and repair file system (ntfs) on /dev/sda2  00:00:09  ( ERROR )
        calibrate /dev/sda2  00:00:00  ( SUCCESS )
        path: /dev/sda2
        start: 36944325
        end: 976771119
        size: 939826795 (448.14 GiB)
        check file system on /dev/sda2 for errors and (if possible) fix them  00:00:09  ( ERROR )
        ntfsresize -P -i -f -v /dev/sda2
        ntfsresize v2.0.0 (libntfs 10:0:0)
        Device name        : /dev/sda2
        NTFS volume version: 3.1
        Cluster size       : 4096 bytes
        Current volume size: 481191318016 bytes (481192 MB)
        Current device size: 481191319040 bytes (481192 MB)
        Checking for bad sectors ...
        Checking filesystem consistency ...
        Cluster 63468 is referenced multiple times!
        Cluster 63469 is referenced multiple times!
        Cluster 63465 is referenced multiple times!
        Cluster 63466 is referenced multiple times!
        Cluster 63467 is referenced multiple times!
        Cluster 165621 is referenced multiple times!
        Cluster 165622 is referenced multiple times!
        Cluster 165623 is referenced multiple times!
        Cluster 165624 is referenced multiple times!
        ERROR: Filesystem check failed!
        ERROR: 9 clusters are referenced multiply times.
        NTFS is inconsistent. Run chkdsk /f on Windows then reboot it TWICE!
        The usage of the /f parameter is very IMPORTANT! No modification was
        and will be made to NTFS by this software until it gets repaired.

    Read the article

  • What should filenames and URLs of images contain for SEO benefit?

    - by Baumr
    We know that good site architecture usually looks like this:

        example-company.com/
        example-company.com/about/
        example-company.com/contact/
        example-company.com/products/
        example-company.com/products/category/
        example-company.com/products/category/productname/

    Now, when it comes to Google Image search, it is clear that the img alt tag, filename/URL, and surrounding text (captions, headings, paragraphs) have an effect on ranking. I want to ask about the filename of the images that we should use (e.g. product-photo.jpg) ...but first about the URL.

    Often web developers stick all images in a single folder in the root: example-company.com/img/ — and I have stopped doing that. (I don't want to get into it, but basically, it seems more semantic for images to sit with the content they are part of at each sub-directory.) However, when all images appear in one folder, I feel that their filenames need to reflect what they are a bit more than usual, for example:

        example-company.com/img/example-company-productname-category.jpg

    It's a longer filename than just product.png, but as long as it's relevant, I see no problem with regards to SEO (unless you're keyword stuffing), and it could even help rank for the keywords "example company", "productname" and "category". So no questions there.

    But what about when we have placed images in the site architecture we outlined at the beginning? In other words, what if image URL paths look like this:

        example-company.com/products/category/productname/productname.jpg

    My question is, should the URL be kept short like above and only have the "productname" (and some descriptive keywords) as part of its filename? Or should it also include the "example-company" and "category", like so:

        example-company.com/products/category/productname/example-company-category-productname.jpg

    That seems much longer, and redundant when we look at the URL, but here are a few considerations. Images are often downloaded onto computers, and, to the average user, they lose their original URL and thus it isn't clear where they came from. Also, some social networks, forums, and other platforms leave the filename intact when uploaded (many others rewrite it, for example Pinterest and Facebook). Another consideration: will this really help (even if ever so slightly) rank in Google Image Search, or at least inform Google that the product is something specific to the "example-company"? For example, what if this product can only be bought at this store and is the flagship product? In addition to an abundance of internal links to this product page, would having the "example company" name and "category" help it appear in "example company" searches? In other words, is less more?

    Read the article

  • Apache doesn't load .php files

    - by Haddex
    First, sorry for my English and for asking something that has been answered all over the web. I've read a lot of posts about this problem but I still can't find the solution. I'm a web developer who recently moved to Ubuntu from Windows 7. I had a website done (it's online and working) and I set up LAMP to keep working with it. I made a test.php file with:

        <?php phpinfo(); ?>

    and put it in the /var/www/html directory; it shows all the information about PHP and I was really happy: "OK, it's all done, tomorrow I will work hard." But I placed my whole web site into /var/www/html, not in a folder; the index.php is in /var/www/html, but guess what: it doesn't load any of my .php files, the browser just keeps thinking.

    What I did:
    - I rebooted Apache: /etc/init.d/apache2 restart
    - I tried again with the test.php file and it works fine.
    - I put a .html file in /var/www/html and it works fine.
    - I looked at /etc/apache2/sites-enabled/000-default.conf and it says: DocumentRoot /var/www/html
    - I looked at /etc/apache2/mods-enabled/dir.conf and it says: DirectoryIndex index.html index.cgi index.pl index.php ...

    Edit: I think it's something related to phpMyAdmin, as if I'm not able to connect with the database. But I get nothing on the screen when trying to load the page so... I'm not sure. I can access the URL localhost/phpmyadmin and I edited the connection.php file like this:

        <?php
        # FileName="Connection_php_mysql.htm"
        # Type="MYSQL"
        # HTTP="true"
        $hostname_rakstadconnection = "localhost";
        $database_rakstadconnection = "rakstadclandb";
        $username_rakstadconnection = "root";
        $password_rakstadconnection = "admin";
        $rakstadconnection = mysql_connect($hostname_rakstadconnection, $username_rakstadconnection, $password_rakstadconnection) or trigger_error(mysql_error(),E_USER_ERROR);
        mysql_query("SET NAMES 'utf8'");
        ?>

    The name of the database is correct, as are the user and password.
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112609_zpsc45ddb72.png
    http://i89.photobucket.com/albums/k220/Haddex/Capturadepantallade2014-06-09112120_zps0b9e15f7.png

    Edit 2: could this be because it's a website that I brought to Linux from Windows? I used Dreamweaver.

    Edit 3: I changed the # to /*/, nothing. The error.log file says:

        [Mon Jun 09 17:08:13.627881 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Warning:  require_once(/var/www/html/Connections/rakstadconnection.php): failed to open stream: Permission denied in /var/www/html/index.php on line 1
        [Mon Jun 09 17:08:13.627933 2014] [:error] [pid 1517] [client 127.0.0.1:46663] PHP Fatal error:  require_once(): Failed opening required 'Connections/rakstadconnection.php' (include_path='.:/usr/share/php:/usr/share/pear') in /var/www/html/index.php on line 1

    I'm reading the error log but... should I add a Linux path into my index.php file? I don't think so. Thanks.

    Read the article

  • Running C++ AMP kernels on the CPU

    - by Daniel Moth
    One of the FAQs we receive is whether C++ AMP can be used to target the CPU. For targeting multi-core we have a technology we released with VS2010 called PPL, which has had enhancements for VS 11 – that is what you should be using! FYI, it also has a Linux implementation via Intel's TBB which conforms to the same interface.

    When you choose to use C++ AMP, you choose to take advantage of massively parallel hardware, through accelerators like the GPU. Having said that, you can always use the accelerator class to check if you are running on a system where there is no hardware with a DirectX 11 driver, and decide what alternative code path you wish to follow. In fact, if you do nothing in code, if the runtime does not find DX11 hardware to run your code on, it will choose the WARP accelerator, which will run your code on the CPU, taking advantage of multi-core and SSE2 (depending on the CPU capabilities WARP also uses SSE3 and SSE 4.1 – it does not currently use AVX, and on such systems you hopefully have a DX 11 GPU anyway).

    A few things to know about WARP:
    - It is our fallback CPU solution, not intended as a primary target of C++ AMP.
    - WARP stands for Windows Advanced Rasterization Platform and you can read old info on this MSDN page on WARP. What is new in Windows 8 Developer Preview is that WARP now supports DirectCompute, which is what C++ AMP builds on.
    - It is not currently clear if we will have a CPU fallback solution for non-Windows 8 platforms when we ship.
    - When you create a WARP accelerator, its is_emulated property returns true.
    - WARP does not currently support double precision.

    BTW, when we refer to WARP, we refer to the accelerator described above. If we use lower case "warp", that refers to a bunch of threads that run concurrently in lock step and share the same instruction. In the VS 11 Developer Preview, the size of a warp in our Ref emulator is 4 – Ref is another emulator that runs on the CPU, but it is extremely slow and not intended for production, just for debugging.

    Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Turnkey with LightSwitch

    - by Laila
    Microsoft has long wanted to find a replacement for Microsoft Access. The best attempt yet, which is due out in, or before, September, is Visual Studio LightSwitch, with which it is said to be as 'easy as flipping a switch' to use Silverlight to create simple form-driven business applications.

    It is easy to get confused by the various initiatives from Microsoft. No, this isn't WebMatrix. There is no 'Razor', for this isn't meant for cute little ecommerce sites, but is designed to build simple database applications of the card-box type. It is more clearly a .NET-based solution to the problem that every business seems to suffer from: the plethora of Access-based and Excel-based 'private' and departmental database applications. These are a nightmare for any IT department since they are often 'stealth' applications built by the business in the teeth of opposition from the IT Department zealots. As they are undocumented, it is scarily easy to bring a whole department into disarray by decommissioning a PC tucked under a desk somewhere. With LightSwitch, it is easy to re-write such applications in a standard, maintainable way, using a SQL Server database, deployed somewhere reasonably safe such as Azure. Even SharePoint or Windows Communication Foundation can be used as data sources.

    Oracle's ApEx has taken off remarkably well, and has shaken the perception that, for the business user, Oracle must remain a mystic force accessible only to the priests and acolytes. Microsoft, by comparison, had only Access, which was first released in 1992, the year of the Madonna conical bustier. It looks just as dated. Microsoft badly needed an entirely new solution to the same business requirement that led to Access's and FoxPro's long-time popularity, but which had the same allure as ApEx. LightSwitch is sound in its ideas, and comfortingly conventional in its architecture. By giving easy access to SQL Server databases, and providing a 'thumb and blanket' migration path to Access-heads, LightSwitch seems likely to offer a simple way of pulling more Microsoft users into the .NET community. If Microsoft puts its weight behind it, then it will give some glimmer of hope to the many Silverlight developers that Microsoft is capable of seeing through its .NET revolution.

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly "-as-a-Service" models of computing - have a different cost model. This sounds obvious on the surface, but it's often forgotten during the design and coding phase of a project. In on-premises computing, we're used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or "sunk" cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you've already paid for, we don't often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better.

    In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don't buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things.

    It's not just that you're using things that now cost money - it's that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track.

    Here's a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don't monitor the path of the application, you may not know what you are really using. Since you're paying by the size of the instance, it's best to make the most of it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit.

    The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try and force the application design you have on premises into your new application structure.
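
    As a toy illustration of the kind of cost metric argued for above - tying the resource use of a user action back to a charge - here is a small sketch. All of the unit rates are placeholders invented for the example, not actual Windows Azure (or any other provider's) prices.

        # Hypothetical pay-as-you-go cost per user action.
        # The unit rates below are placeholders, NOT real cloud prices.
        RATES = {
            "compute_hour": 0.12,     # $ per instance-hour (placeholder)
            "storage_tx": 0.00001,    # $ per storage transaction (placeholder)
            "egress_gb": 0.15,        # $ per GB sent out (placeholder)
        }

        def cost_of_action(cpu_seconds, storage_calls, egress_mb):
            """Estimate the cost of serving one user action."""
            return (cpu_seconds / 3600.0 * RATES["compute_hour"]
                    + storage_calls * RATES["storage_tx"]
                    + egress_mb / 1024.0 * RATES["egress_gb"])

        # e.g. a click that uses 2 CPU-seconds, 15 storage calls and 1.5 MB egress
        cost = cost_of_action(2.0, 15, 1.5)
        price_charged = 0.01          # what the customer pays for that click
        print(f"cost per click: ${cost:.6f}, margin: ${price_charged - cost:.6f}")

    Instrumenting the application to record those few inputs per request is what makes the "cost tied to profit" tracking described above possible.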

    Read the article

  • Is Cygwin or Windows Command Prompt preferable for getting a consistent terminal experience for development?

    - by Paul Hazen
    The question: Which is better, installing Cygwin or one of its cousins on all my Windows machines to have a consistent terminal experience across all my development machines, or becoming well trained in the skill of mentally switching from Linux terminal to Windows command prompt?

    Systems I use:
    - OSX Lion on a Macbook Air
    - Windows 8 on a desktop
    - Windows 7 on the same desktop
    - Fedora 16 on the same desktop

    What I'm trying to accomplish: configure an entirely consistent (or consistent enough) terminal experience across all my machines. "Enough" in this context is clearly subjective. Please be clear in your answer why the configuration you suggest is consistent enough.

    One more thing to keep in mind: while I do write a lot of code intended to run on Windows (actually code that runs on Windows Phone, which necessitates a Windows machine), I also write a lot of Java code, and prefer to do so in vim. I test a local repo in Java on my Windows machine, and push to another test machine running Ubuntu later in the development stage. When I push to the Ubuntu machine, I'm exclusively in the terminal, since I'm accessing it via SSH.

    Summary, with a more accurate question: is there a good way to accomplish what I'm trying to do, or is it better to get accustomed to remembering different commands based on the system I'm on? Which (if either) is considered "best practice" by the development community? Alternatively, for a consistent development experience, would it be better to write all my code SSHed into another machine, and move things to Windows for compile/build only when I needed to? That seems like too much work... but could be a solution.

    Update: While there are insightful responses below, I have yet to hear an answer that talks about why any given solution is superior. Cygwin/GnuWin32 is certainly a way to accomplish a similar experience on all platforms, but since I'm just learning all things command line, I don't want to set myself up to do a lot of relearning/unlearning in the future. Cygwin/GnuWin32 has its peculiarities, I would imagine, and being aware of how that setup works on Windows is a learning curve. Additionally, using Cygwin/GnuWin32 robs me of learning the benefits of PowerShell. As a newcomer to working in a command line, which path should I choose to minimize having to relearn/unlearn things in the future? Or, as my first paragraph poses: is it better to use Cygwin, or to become well trained in the skill of mentally switching from Linux terminal to Windows command prompt?

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror*. The mirror does have maverick, but I get the following output when upgrading:

        boatzart@somecomputer: > sudo do-release-upgrade
        Checking for a new ubuntu release
        Done Upgrade tool signature
        Done Upgrade tool
        Done downloading
        extracting 'maverick.tar.gz'
        authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
        tar: Removing leading `/' from member names
        Reading cache
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Updating repository information
        WARNING: Failed to read mirror file
        No valid mirror found
        While scanning your repository information no mirror entry for the upgrade
        was found. This can happen if you run a internal mirror or if the mirror
        information is out of date.
        Do you want to rewrite your 'sources.list' file anyway? If you choose
        'Yes' here it will update all 'lucid' to 'maverick' entries.
        If you select 'No' the upgrade will cancel.
        Continue [yN] y
        WARNING: Failed to read mirror file
        96% [Working]
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the 'update-manager'
        package and include the files in /var/log/dist-upgrade/ in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done

    Here is the relevant section from /var/log/dist-upgrade/main.log:

        2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist
        2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.'
        2010-11-18 14:05:52,136 DEBUG abort called

    *I'm located inside of USC, and for some crazy reason any sustained downloads to anywhere outside of the University are throttled down to 5kbps inside of my lab. Because of this I need to use the following sources.list:

        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse

    I've tried adding four more entries to the sources.list with s/lucid/maverick/ but that didn't help. Does anyone know how to fix this? Thanks!

    Read the article

  • Finding a new programming language for web development?

    - by Xeoncross
    I'm wondering if there are any unbiased resources that give good, specific overviews of programming languages and their intended goals. I would like to learn a new language, but visiting the sites of each language isn't working. Each one talks about how great it is without much mention of its weaknesses or specific goals.

    "Ruby is a dynamic, open source programming language with a focus on simplicity and productivity."

    "Python is a programming language that lets you work more quickly and integrate your systems more effectively."

    Having been a PHP developer for years, Vic Cherubini sums up my plight well:

    "I knew PHP well, had my own framework, and could work quickly to get something up and running. I programmed like this throughout the MVC revolution. I got better and better jobs (read: better paying, better title) as a PHP developer, but all along the way realizing that the code I wrote on my own time was great, and the code I worked with at work was horrible. Like, worse than horrible. Atrocious. OS Commerce level bad. Having side projects kept me sane, because the code I worked with at work made me miserable. This is why I'm retiring from PHP for my side projects and new programming ventures. I'm spent with PHP. Exhausted, if you will. I've reached a level where I think I'm at the top with it as a language and if I don't move on to a new language soon, I'll be done completely with programming and I do not want that."

    Languages I've looked at include JavaScript (for node.js), Ruby, Python, & Erlang. I've even thought about Scala or C++. The problem is figuring out which ones are built to handle my needs the best. So where can I go to skip the hype and get real information about the maturity of a platform, the size of the community, and the strengths & weaknesses of that language? If I know these, then picking a language to continue my web development with should be easy.

    Read the article

  • design a model for a system of dependent variables

    - by dbaseman
    I'm dealing with a modeling system (financial) that has dozens of variables. Some of the variables are independent, and function as inputs to the system; most of them are calculated from other variables (independent and calculated) in the system.

    What I'm looking for is a clean, elegant way to:
    - define the function of each dependent variable in the system
    - trigger a re-calculation, whenever a variable changes, of the variables that depend on it

    A naive way to do this would be to write a single class that implements INotifyPropertyChanged, and uses a massive case statement that lists out all the variable names x1, x2, ... xn on which others depend, and, whenever a variable xi changes, triggers a recalculation of each of that variable's dependencies. I feel that this naive approach is flawed, and that there must be a cleaner way. I started down the path of defining a CalculationManager<TModel> class, which would be used (in a simple example) something like as follows:

        public class Model : INotifyPropertyChanged
        {
            private CalculationManager<Model> _calculationManager = new CalculationManager<Model>();

            // each setter triggers a "PropertyChanged" event
            public double? Height { get; set; }
            public double? Weight { get; set; }
            public double? BMI { get; set; }

            public Model()
            {
                _calculationManager.DefineDependency<double?>(
                    forProperty: model => model.BMI,
                    usingCalculation: (height, weight) => weight / Math.Pow(height, 2),
                    withInputs: model => model.Height, model.Weight);
            }

            // INotifyPropertyChanged implementation here
        }

    I won't reproduce CalculationManager<TModel> here, but the basic idea is that it sets up a dependency map, listens for PropertyChanged events, and updates dependent properties as needed. I still feel that I'm missing something major here, and that this isn't the right approach:
    - the (mis)use of INotifyPropertyChanged seems to me like a code smell
    - the withInputs parameter is defined as params Expression<Func<TModel, T>>[] args, which means that the argument list of usingCalculation is not checked at compile time
    - the argument list (weight, height) is redundantly defined in both usingCalculation and withInputs

    I am sure that this kind of system of dependent variables must be common in computational mathematics, physics, finance, and other fields. Does someone know of an established set of ideas that deal with what I'm grasping at here? Would this be a suitable application for a functional language like F#?

    Edit
    More context: The model currently exists in an Excel spreadsheet, and is being migrated to a C# application. It is run on-demand, and the variables can be modified by the user from the application's UI. Its purpose is to retrieve variables that the business is interested in, given current inputs from the markets, and model parameters set by the business.
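
    The "established set of ideas" being asked about is essentially a dependency graph with change propagation - the reactive/dataflow model a spreadsheet uses internally. As a language-agnostic illustration (in Python, with made-up names, not the poster's CalculationManager API), the core of such a system can be very small:

        # Minimal dependency-graph sketch: each derived value declares the inputs
        # it depends on; setting an input recomputes its dependents, cascading to
        # second-order dependents. No cycle detection - a real system needs it.
        class DependencyModel:
            def __init__(self):
                self.values = {}
                self.formulas = {}      # derived name -> (function, input names)
                self.dependents = {}    # input name -> set of derived names

            def define(self, name, func, inputs):
                self.formulas[name] = (func, inputs)
                for i in inputs:
                    self.dependents.setdefault(i, set()).add(name)

            def set(self, name, value):
                self.values[name] = value
                self._recalculate(name)

            def _recalculate(self, changed):
                for derived in self.dependents.get(changed, ()):
                    func, inputs = self.formulas[derived]
                    if all(i in self.values for i in inputs):
                        self.values[derived] = func(*(self.values[i] for i in inputs))
                        self._recalculate(derived)

        model = DependencyModel()
        model.define("bmi", lambda height, weight: weight / height ** 2,
                     ["height", "weight"])
        model.set("height", 1.80)
        model.set("weight", 75.0)
        print(round(model.values["bmi"], 1))   # 23.1

    Searching for "reactive programming", "dataflow programming" or "incremental computation" (and, on the .NET side, Reactive Extensions) turns up the established literature and libraries behind this pattern, which also speaks to the F# question: a functional-reactive style is a natural fit for it.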

    Read the article

  • Pragmas and exceptions

    - by Darryl Gove
    The compiler pragmas:

        #pragma no_side_effect(routinename)
        #pragma does_not_write_global_data(routinename)
        #pragma does_not_read_global_data(routinename)

    are used to tell the compiler more about the routine being called, and enable it to do a better job of optimising around the routine. If a routine does not read global data, then global data does not need to be stored to memory before the call to the routine. If the routine does not write global data, then global data does not need to be reloaded after the call. The no_side_effect directive indicates that the routine does no I/O, does not read or write global data, and the result only depends on the input.

    However, these pragmas should not be used on routines that throw exceptions. The following example indicates the problem:

        #include <iostream>

        extern "C"
        {
          int exceptional(int);
          #pragma no_side_effect(exceptional)
        }

        int exceptional(int a)
        {
          if (a==7)
          {
            throw 7;
          }
          else
          {
            return a+1;
          }
        }

        int a;
        int c=0;

        class myclass
        {
          public:
          int routine();
        };

        int myclass::routine()
        {
          for(a=0; a<1000; a++)
          {
            c=exceptional(c);
          }
          return 0;
        }

        int main()
        {
          myclass f;
          try
          {
            f.routine();
          }
          catch(...)
          {
            std::cout << "Something happened" << a << c << std::endl;
          }
        }

    The routine "exceptional" is declared as having no side effects; however, it can throw an exception. The no side effects directive enables the compiler to avoid storing global data back to memory, and retrieving it after the function call, so the loop containing the call to exceptional is quite tight:

        $ CC -O -S test.cpp
        ...
        .L77000061:
        /* 0x0014  38 */  call    exceptional  ! params = %o0  ! Result = %o0
        /* 0x0018  36 */  add     %i1,1,%i1
        /* 0x001c     */  cmp     %i1,999
        /* 0x0020     */  ble,pt  %icc,.L77000061
        /* 0x0024     */  nop

    However, when the program is run the result is incorrect:

        $ CC -O t.cpp
        $ ./a.out
        Something happened00

    If the code had worked correctly, the output would have been "Something happened77" - the exception occurs on the seventh iteration. Yet the current code produces a message that uses the original values for the variables 'a' and 'c'. The problem is that the exception handler reads global data, and due to the no side effects directive the compiler has not updated the global data before the function call. So these pragmas should not be used on routines that have the potential to throw exceptions.

    Read the article

  • Need help identifying what resources (e.g. in MIT OpenCourseWare) can help me prepare for a test [closed]

    - by jiewmeng
    I am entering uni soon. I can sit for a placement test to see if I am eligible for exemptions. The details are at http://www.comp.nus.edu.sg/undergraduates/TestScope11_12.html; the two modules in question are described below.

    CS2100 Computer Organisation: The objective of this module is to familiarise students with the fundamentals of computing devices. Through this module students will understand the basics of data representation, and how the various parts of a computer work, separately and with each other. This allows students to understand the issues in computing devices, and how these issues affect the implementation of solutions. Topics covered include data representation systems, combinational and sequential circuit design techniques, assembly language, processor execution cycles, pipelining, memory hierarchy and input/output systems. Recommended textbooks: Digital Design: Principles and Practices [DDPP] by John F. Wakerly, Prentice-Hall, ISBN 0-13-324500-4; Computer Organization and Design (The Hardware/Software Interface) by David A. Patterson and John L. Hennessy.

    CS2105 Introduction to Computer Networks: This course aims to provide a broad introduction to computer networks and some appreciation of network application programming. It covers a range of topics including basic data communication and computer network concepts, protocols, networked computing concepts and principles, network applications development and network security. The emphasis of teaching is on the working principles and application of computer networks. As an integral part of the course, tutorials and practical assignments reinforcing the learning will also be given. These assignments provide an early exposure to network application programming and should be achievable using personal computers and the school's network facilities. Topics include: an overview of computer networks and the Internet, basic data communications, application layer, transport layer, network layer and routing, link layer and local area networks. Recommended textbook: James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, 2001.

    I am wondering what resources, e.g. MIT OpenCourseWare or other universities' materials, are available to help me prepare for these particular modules. Does the networking module look like CCNA? And the computer organisation one - is it like electronics, assembly, etc.? I learnt some electronics in Poly, but looking at the sample papers, uni looks very different... I have about 1 month to prepare if I want any chance of being exempted from these modules :) Any help?

    Read the article

  • Design pattern to handle queries using multiple models

    - by coderkane
    I am presented with a dilemma while trying to re-design the class structure of my PHP/MySQL application to make it more elegant and conform to the SOLID principles. The problem goes like this:

    Let us assume there is an abstract class called person which has certain properties to define a generic person, such as name, age, date of birth, etc. Two classes, student and teacher, implement this abstract class and add their own unique properties to it. I have designed all three classes to include all the operational logic (the details of which are not relevant in the context of the question).

    Now I need to create views/reports/data grids which contain details from multiple classes - for example, a list of all students doing projects in Chemistry mentored by a teacher whose name is the parameter to the query. This is just one example of a view; there are many different views in the application which use data from 3-4 tables, and each of them has multiple input parameters. Considering this particular example, I have written the relevant query using JOIN and the results are as expected and proper. Now here is the dilemma: keeping in mind the single responsibility principle, where should I keep this query? It does not belong to the student class, the teacher class, or any other class currently present.

    a) Should I create a new class, say a dataView class, design it following the MVC pattern and keep the query there? What about the other views - how do they fit into this architecture?
    b) Should I not keep the query in code at all, and make it a DB view?
    c) Am I completely wrong in my approach? If so, what is the right approach?

    My considerations are as follows:
    - it should be easy to add new views later on if the requirement comes, without having to copy-paste-modify code
    - it should be as loosely coupled as possible, so that minor DB structure changes do not break it

    I did Google searches on report design and OOP report generators, but all the results seem to focus on the visual design of the report rather than on fetching the data. I have already taken care of the visual aspect of the report using MVC with HTML templates. I am sure this is a very fundamental problem with a known solution, but I am somehow not able to find it (maybe I am searching with the wrong keywords).

    Edit 1: Modified the title to make it more relevant.
    Edit 2: The accepted answer got me thinking in the right direction and helped me identify my design flaws, which eventually led me to find this question and its solution on Stack Overflow, which gave me the detailed answer that cleared up the confusion.
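
    To make option (a) concrete: the shape that usually fits is a dedicated read-model/query class per report, separate from the entity classes, so that each report has exactly one reason to change. The sketch below is only an illustration - it uses C# with ADO.NET rather than the poster's PHP/MySQL stack, and every class, table and column name in it is hypothetical - but the same structure maps directly onto a small PHP class wrapping a PDO prepared statement:

        using System.Collections.Generic;
        using System.Data;

        // Hypothetical read-model class: its single responsibility is answering
        // "which students are doing Chemistry projects mentored by this teacher?".
        // Neither the student nor the teacher entity needs to know this query exists.
        public class ChemistryProjectReport
        {
            private readonly IDbConnection _connection; // assumed to be open already

            public ChemistryProjectReport(IDbConnection connection)
            {
                _connection = connection;
            }

            // Returns one row per matching student as a plain DTO, not a full
            // student entity, because the report only needs these columns.
            public IList<ChemistryProjectRow> ForTeacher(string teacherName)
            {
                var rows = new List<ChemistryProjectRow>();
                using (var command = _connection.CreateCommand())
                {
                    // Hypothetical schema (students, projects, teachers); the @-prefixed
                    // parameter syntax is what MySQL's ADO.NET providers typically expect.
                    command.CommandText =
                        @"SELECT s.name, p.title
                          FROM students s
                          JOIN projects p ON p.student_id = s.id
                          JOIN teachers t ON t.id = p.mentor_id
                          WHERE p.subject = 'Chemistry' AND t.name = @teacherName";

                    var parameter = command.CreateParameter();
                    parameter.ParameterName = "@teacherName";
                    parameter.Value = teacherName;
                    command.Parameters.Add(parameter);

                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            rows.Add(new ChemistryProjectRow
                            {
                                StudentName = reader.GetString(0),
                                ProjectTitle = reader.GetString(1)
                            });
                        }
                    }
                }
                return rows;
            }
        }

        // Row DTO for the report; deliberately not an entity.
        public class ChemistryProjectRow
        {
            public string StudentName { get; set; }
            public string ProjectTitle { get; set; }
        }

    Each new view then gets its own small class of this kind (or its own method on a reporting gateway), so adding a report never touches the entities, and a schema change only touches the SQL that actually uses the changed tables. A database view (option b) achieves the same isolation on the database side and can be wrapped by the same kind of class.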

    Read the article

  • Exchange 2010, Exchange 2003 Mail Flow issue

    - by Ryan Roussel
    While performing the initial Exchange 2010 deployment for a customer migrating from Exchange 2003, I ran into an issue with mail flow between the two environments. The Exchange 2003 mailboxes could send to Exchange 2010, as well as to and from the internet. Exchange 2010 mailboxes could send and receive to the internet; however, they could not send to Exchange 2003 mailboxes.

    After scouring the internet for a solution, it seemed quite a few people were experiencing this issue with no resolution to be found, or at least not easily. After many attempts at manually deleting and recreating the routing group connectors, I finally lucked onto the answer in an obscure comment left to another blogger: if inheritable permissions are not allowed on the Exchange 2003 server object in Active Directory, Exchange server authentication cannot be achieved between the servers.

    It seems that when BlackBerry Enterprise Server gets added to 2003 environments, a lot of admins get tricky and add the BES Admin user explicitly to the server object to allow inheritance down from there to all mailboxes. The problem is that they also coincidentally turn off inheritance to the server object itself from its parent containers. You can, however, re-establish inheritance without overwriting the existing ACL, so that the BES Admin can remain in the server object's ACL. Once inheritance was re-established on the 2003 server object, mail flow between the servers was instantly restored.

    To re-establish inheritance:
    1. Open ADSI Edit by adding the snap-in to an MMC (it should be included on the 2008 server where Exchange 2010 is installed).
    2. Navigate to Configuration > Services > Microsoft Exchange > Exchange Organization > Administrative Groups > First Administrative Group > Servers.
    3. In the right pane, right-click on the CN=Server Name of your Exchange 2003 server and select Properties.
    4. Navigate to the Security tab and hit Advanced toward the bottom.
    5. Check the checkbox that reads "Include inheritable permissions" toward the bottom of the dialogue box.

    Read the article
