Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.


  • Trying to Organise a Software Craftsman Pilgrimage

    - by Liam McLennan
    As I have previously written, I am trying to organise a software craftsman pilgrimage. The idea is to donate some time working with quality developers so that we learn from each other. To be honest I am also trying to be the worst. “Always be the worst guy in every band you’re in.” Pat Metheny I ended up posting a message to both the software craftsmanship group and the Seattle Alt.NET group and I got a good response from both. I have had discussions with people based in: Seattle, New York, Long Island, Austin and Chicago. Over the next week I have to juggle my schedule and confirm the company(s) I will be spending time with, but the good news is it seems that I will not be left hanging.

    Read the article

  • Some Insight on the Field of Knowledge Representations

    - by picmate
    I started an MS in computer science after about two years of work for a software company, where I worked primarily in data warehousing and business intelligence related software development. There is a high chance that I will select a research topic in knowledge representation, ontologies and reasoning, as no research is available in the other fields that interest me, such as pattern recognition and navigation. I have developed an interest in knowledge representation from the courses I am currently taking, but I do not have a deep understanding of which areas such a field impacts in real-life scenarios, or of how it will help me when I am hunting for a job in the near future. Some thoughts about this would be greatly appreciated.

    Read the article

  • Lenovo e420s overheating

    - by Matthew
    I recently bought a Lenovo e420s and decided to install Ubuntu (first time). The interface is great, but the computer overheats horribly (the fan is on full all the time and the CPU temperature is much higher than in Windows). In other words, it's not a fan problem, but an OS problem. Does anybody have a fix that is implementable by a luddite? I'd like to use Linux, but it's not possible if it turns my computer into a radiator. Thanks.

    Read the article

  • Studies on code documentation productivity gains/losses

    - by J T
    Hi everyone, After much searching, I have failed to answer a basic question pertaining to an assumed known in the software development world: WHAT IS KNOWN: Enforcing a strict policy on adequate code documentation (be it Doxygen tags, Javadoc, or simply an abundance of comments) adds overhead to the time required to develop code. BUT: Having thorough documentation (or even an API reference) brings with it (one assumes) productivity gains for new and seasoned developers when they are adding features or fixing bugs down the road. THE QUESTION: Is the added development time required to guarantee such documentation offset by the gains in productivity down the road (in a strictly economic sense)? I am looking for case studies, or answers that bring with them objective evidence supporting the conclusions that are drawn. Thanks in advance!

    Read the article

  • Cheerp -- C++ for web: advance or regression?

    - by Henrique Barcelos
    Recently I've run into Cheerp, a C++ to JavaScript compiler, which uses a modified version of clang to generate JavaScript code from C++ sources. That makes me wonder: why in the seven kingdoms would someone in their right mind do this? I mean: why would you take a language that is not designed for the web at all, and is far more convoluted and bureaucratic, write your code in it, and then compile it into JavaScript? Can anybody see any advantages in doing so? We can surely discard performance as a reason, because in the end it generates pure JavaScript code. Is there anyone here that has real experience with this? P.S.: I'm not sure if this is an on-topic question, but this is the most general forum about programming that I could find in the StackExchange network. Edit: Although this seems like a subjective question, it is not. I am asking for reasons this tool could be useful. I got interested at first, but started wondering why someone would use it.

    Read the article

  • How to fix corrupted desktop icons and fonts?

    - by David Harvey
    I love Linux, but am a real novice. I installed Ubuntu 12.04 LTS 32-bit alongside Windows XP. The installation seems to work just fine except for the desktop: the icons and fonts become corrupted to the point that they look like Chinese. After surfing with Mozilla Firefox for a long time, the same problem begins to affect web pages. I want to be free of Windows, but must solve this problem first. Can you help? Thank you.

    Read the article

  • OM: Effective Troubleshooting Techniques to Debug Order Import

    - by ChristineS
    There is a new document available to help you debug Order Import in Order Management: Effective Troubleshooting Techniques to Debug Order Import Issues (Doc ID 1558196.1). The white paper addresses debugging from a technical perspective, to assist users in understanding the actual issue as well as to help them resolve any order import issue early. It walks you through several cases, with supporting debug logs / trace files taken for each case, educating you along the way about which debug logs / trace files should be gathered, since trace files are not always needed. It also walks you through the supporting documents so you will know what to look for in your case. Please refer to this note the next time you have an Order Import error. Or you could step through it now, so you are informed the next time you encounter an Order Import error. Happy debugging!

    Read the article

  • What is the benefit of writing to a temp location, And then copying it to the intended destination?

    - by Devdatta Tengshe
    I am writing an application that works with satellite images, and my boss asked me to look at some of the commercial applications and see how they behave. I found a strange behavior, and as I kept looking, I found it in other standard applications as well. These programs first write to the temp folder, and then copy the result to the intended destination. Example: 7zip first extracts to the temp folder, and then copies the extracted data to the location that you asked it to extract the data to. I see several problems with this approach:

    1. The temp folder might not have enough space, while the intended location might have that much space.

    2. If it is a large file, the copy operation can take a non-negligible amount of time.

    I thought about it a lot, but I couldn't see a single positive point to doing this. Am I missing something, or is there a real benefit to doing this?
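
    A benefit often cited for this pattern (not discussed in the excerpt, so treat it as an editor's assumption) is crash safety: if the program writes to a scratch file first and only moves it into place once it is complete, readers never observe a half-written file. A minimal C# sketch of that idea, with hypothetical file names:

        using System;
        using System.IO;

        class TempThenMoveDemo
        {
            // Write output to a scratch file first, then move it into place,
            // so consumers never see a partially written file.
            static void WriteAtomically(string destinationPath, string contents)
            {
                // Keep the scratch file on the same volume as the destination so
                // the final move is a cheap rename rather than a full copy.
                string tempPath = Path.Combine(
                    Path.GetDirectoryName(destinationPath) ?? ".",
                    Path.GetRandomFileName());

                File.WriteAllText(tempPath, contents);

                if (File.Exists(destinationPath))
                    File.Delete(destinationPath);

                // A same-volume move is a rename and is effectively atomic on
                // most file systems.
                File.Move(tempPath, destinationPath);
            }

            static void Main()
            {
                WriteAtomically("output.txt", "finished payload");
                Console.WriteLine("Wrote output.txt via a scratch file.");
            }
        }

    Note that this rationale argues for a scratch file next to the destination; writing to a system-wide temp folder on a possibly different volume, as described in the question, forfeits the cheap rename, which is part of what makes the observed behavior puzzling.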

    Read the article

  • What are QUICK interview questions for the Microsoft stack development jobs?

    - by Dubmun
    I'm looking for your best "quick answer" technical interview questions. We are a 100% Microsoft shop and do the majority of our development on the ASP.NET web stack in C# and have a custom SOA framework also written in C#. We use a combination of Web Forms, MVC, Web Services, WCF, Entity Framework, SQL Server, TSQL, jQuery, LINQ, and TFS in a SCRUM environment. We are currently on .NET 3.5 with a very near transition to .NET 4.0. Our interviewing process includes a 55 minute interview with two technical people (usually an architect and a senior developer). The two interviewers have to share the time for questions. That isn't enough time for very many true programming problems so I'm looking for more good questions that have quick, yet meaningful, answers. We are mainly interviewing for Senior Dev positions right now but may interview for some Juniors in the future. Please help?
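
    As one illustration (not from the post) of the quick-yet-meaningful format, a predict-the-output question over a short C# snippet probes LINQ deferred execution and can be discussed in well under a minute:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DeferredExecutionQuestion
        {
            static void Main()
            {
                var numbers = new List<int> { 1, 2, 3 };

                // The query is not evaluated here; it only captures the source.
                IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

                numbers.Add(4);

                // Candidate question: what does this print, and why?
                // Answer: "2 4" -- the filter runs when enumerated, so it sees
                // the element added after the query was defined.
                Console.WriteLine(string.Join(" ", evens));
            }
        }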

    Read the article

  • Public JCP EC Meeting on 12 November

    - by Heather VanCura
    The next JCP EC Meeting, and the last public EC Meeting of 2013, is scheduled for Tuesday, 12 November at 08:00 AM PST. Agenda includes a discussion on invigorating your community participation in the JCP program. We hope you will join us, but if you cannot attend, the recording and materials will also be public on the JCP.org multimedia page. Meeting details below.

    Meeting information
    Topic: Public EC Meeting
    Date: Tuesday, November 12, 2013
    Time: 8:00 am, Pacific Standard Time (San Francisco, GMT-08:00)
    Meeting Number: 809 853 126
    Meeting Password: 1234

    To start or join the online meeting, go to:
    https://jcp.webex.com/jcp/j.php?ED=239354237&UID=491098062&PW=NZjAyM2Q2YTVj&RT=MiM0

    Audio conference information
    +1 (866) 682-4770 (US), Conference code: 5731908, Security code: 1234
    For global access numbers: https://www.intercallonline.com/listNumbersByCode.action?confCode=5731908
    Or +1 (408) 774-4073

    Read the article

  • compiz hiding unity and all menus

    - by Lennart Guldbrandsson
    After updating to Ubuntu 11.10, I was severely disturbed by Dash and wanted to go back to Ubuntu Classic. So I tried to read up, and found the Compiz Settings Manager. In there, I clicked on "never hide menus". For some reason this made all my menus at the top of the screen (volume, network, my login identity, shutoff, etc.) disappear, as well as the quick-start menu to the left (Unity). I am not very technical, so I have a hard time finding any programs now, and I just got on the internet by clicking on a link in a document that I was fortunate to have on my desktop. Without it, I wouldn't be able to ask for help. What I wish for is a) to get back the menu at the top, b) to restore Ubuntu Classic without the irritating launcher and Dash, and c) for these two things not to disappear every time there is a new version of Ubuntu. Any help is greatly appreciated.

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on DVCS. That's what I've tried to implement but it's just not catching. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no real good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the way to do it?

    Read the article

  • How long can a user remember what they were working on? [migrated]

    - by GlenPeterson
    A web application lets the user browse its screens for future or past months. The time period the user is currently viewing follows the user through every screen of the system. But users can be logged in for a month or more. After a certain period of inactivity, we will prompt the user: You were viewing November 2008 when you last clicked. Want to view the current (default) time period instead? How long between user clicks should we wait to show this message? I'm guessing somewhere between 30 minutes and 3 hours most people will forget what they were doing, but I'd love to have some data, or someone's experience to base it on. Other suggestions related to this issue?

    Read the article

  • How-To limit user-names in LightDM login-screen when AccountsService is used

    - by David A. Cobb
    I have several "user" names in passwd that don't represent real people and that should not appear on the LightDM login screen. The lightdm-gtk-greeter configuration file clearly says that if AccountsService is installed, the program uses that and ignores its own configuration files. HOWEVER, there is less than nothing in the way of documentation about how to configure AccountsService! Please, can someone tell me how to configure the system so that only an explicitly specified group of users is shown on the greeter? I could uninstall AccountsService. I did that before, but it comes back (dependencies, I suppose). TIA
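
    For what it's worth, one AccountsService mechanism that greeters such as LightDM honor is marking an account as a system account in a per-user file under /var/lib/AccountsService/users/; treat the user name below as a placeholder, and verify the key against your release's AccountsService documentation:

        # /var/lib/AccountsService/users/buildbot  (hypothetical account to hide)
        [User]
        SystemAccount=true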

    Read the article

  • When starting or log off/on, add'l panels show and some panel applets are duplicated

    - by keepitsimpleengineer
    After upgrading to 12.04 Gnome Classic amd64, every time I log on a new additional blank panel appears on the top and bottom panels, which I delete every time. In addition, the default panel applets are duplicated in the original panels. Also, the panels on the second screen are empty (in addition to the wallpaper being hidden). Update: As suggested by fossfreedomm, I created a new user and the same behavior occurred. I increased the height of the panels (2436) and the added panels did not change. If I don't remove the added panels and log off/on, I get still another; they accumulate. If I switch user instead of logging off/on, the add'l panels do not appear.

    Read the article

  • SQLPASS DB Design Precon Preview

    - by drsql
    There are just a few months left before SQLPASS, and I am pushing to get my precon prepped for you. While it will be the second time I produce this during the year, I have listened to the feedback and positive comments from potential attendees, so I am making a couple of big changes to fit what people really liked. Lots more design time: we will do more designs in some form, as a group, in teams, and individually, depending on the room and the people in attendance. (Figure on a lot of databases centered...(read more)

    Read the article

  • How important are unit tests in software development?

    - by Lo Wai Lun
    We do software testing by testing a lot of I/O cases, so developers and system analysts can open reviews and test their committed code within a given time period (e.g. 1 week). But when it comes to extracting information from a database, how should we work out the test cases and the corresponding methodology to start with? This is probably more of a case-by-case matter, because unit testing depends on the project involved, which most of the time is too specific and particular. What is the general overview of the steps and precautions for unit testing?
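
    One common approach (a sketch under assumed names, not something prescribed by the post) is to put the database access behind an interface, unit test the surrounding logic against an in-memory fake, and leave the real SQL to a smaller set of integration tests:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical data-access seam: production code implements this with real SQL.
        public interface ICustomerRepository
        {
            IEnumerable<string> GetActiveCustomerNames();
        }

        // In-memory fake used only by unit tests.
        public class FakeCustomerRepository : ICustomerRepository
        {
            private readonly List<string> names;
            public FakeCustomerRepository(IEnumerable<string> seed) { names = seed.ToList(); }
            public IEnumerable<string> GetActiveCustomerNames() { return names; }
        }

        // The logic under test depends only on the interface, not on a database.
        public class WelcomeLetterService
        {
            private readonly ICustomerRepository repository;
            public WelcomeLetterService(ICustomerRepository repository) { this.repository = repository; }

            public int CountLettersToSend()
            {
                return repository.GetActiveCustomerNames().Count();
            }
        }

        public class WelcomeLetterServiceTests
        {
            // With NUnit or MSTest this method would carry a [Test] / [TestMethod] attribute.
            public static void CountsOneLetterPerActiveCustomer()
            {
                var service = new WelcomeLetterService(
                    new FakeCustomerRepository(new[] { "Ada", "Grace" }));

                if (service.CountLettersToSend() != 2)
                    throw new System.Exception("Expected one letter per active customer.");
            }
        }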

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me! Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of pop email addresses and even an exchange address to my client's server... times have changed. What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept, but yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond and then the junk email would stop, but that hasn't happened yet any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their cars... but that's another whole topic and I digress.

    So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money? Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter huh... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble and here's what I found:

    I installed it, and had to do a couple of things:

    1) SPAMfighter stuffed the SPAMfighter folder into my client's exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter Clients settings for Outlook, I told it to put all spam there.

    2) It didn't seem to be doing anything. There's a ribbon button where you can select "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.

    3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule from Outlook, and told Outlook not to use the junk folder, thinking it was interfering.

    4) Day 2 seemed to go about like Day 1... but I hung in there.

    5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes this is simply another way of using the delete key, but you know what? ... it feels good :) Here's a screenshot of the stats after just about 48 hours of being onboard. Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • Connectify Dispatch Links Multiple Network Nodes Into a Mega Connection

    - by Jason Fitzpatrick
    Connectify Dispatch wants to change the way you interact with the networks around you by making it dead simple to mesh all available Wi-Fi, Cellular, and Ethernet connections into a massive and stable pipeline. Dispatch makes it open-and-click easy to hook up multiple Wi-Fi nodes, your cellphone, and even Ethernet connections into a single blended connection. While the video above gives a great overview of the process, check out the video below to see it in real-world action. The project is currently in the last phase of KickStarter funding, so now is a great time to score Connectify Dispatch at a steep discount: pledging as little as $10 to fund the project, for example, scores you 50% off a 6-month Pro license. Hit up the link below to read more about the project, check the KickStarter status, and see all the neat features in the development pipeline. Dispatch: The Internet, Faster. [KickStarter]

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn't work well for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines. This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in its entirety before moving on, because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines.

    The Role of SinceID

    Specifying SinceID says to Twitter, "Don't return tweets earlier than this". What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries. The next section will explain what I mean by query set, but a quick explanation is that it's a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here's some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries:

        // last tweet processed on previous query set
        ulong sinceID = 210024053698867204;
        ulong maxID;
        const int Count = 10;
        var statusList = new List<Status>();

    Here, I've hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won't have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you're querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat might be that Twitter won't return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So, initializing SinceID at too low a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you're trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post. The other variables initialized above include the declaration for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you'll use the maxID variable to set the MaxID property in queries, which I'll discuss next.

    Initializing MaxID

    On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don't know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be.
    Here's the code for the initial query:

        var userStatusResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "JoeMayo" &&
                   tweet.SinceID == sinceID &&
                   tweet.Count == Count
             select tweet)
            .ToList();

        statusList.AddRange(userStatusResponse);

        // first tweet processed on current query
        maxID = userStatusResponse.Min(
            status => ulong.Parse(status.StatusID)) - 1;

    The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple of reasons why the number of tweets that are returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be because there aren't Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified Tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets received during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If the query above doesn't return any tweets, you'll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again. Besides querying initial tweets, what's important about this code is the final line that sets maxID. It retrieves the lowest numbered status ID in the results. Since the lowest numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set.

    Retrieving Remaining Tweets

    Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you'll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set:

        do
        {
            // now add sinceID and maxID
            userStatusResponse =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.User &&
                       tweet.ScreenName == "JoeMayo" &&
                       tweet.Count == Count &&
                       tweet.SinceID == sinceID &&
                       tweet.MaxID == maxID
                 select tweet)
                .ToList();

            if (userStatusResponse.Count > 0)
            {
                // first tweet processed on current query
                maxID = userStatusResponse.Min(
                    status => ulong.Parse(status.StatusID)) - 1;

                statusList.AddRange(userStatusResponse);
            }
        }
        while (userStatusResponse.Count != 0 && statusList.Count < 30);

    Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we've already read and Count specifies the largest number of tweets to return. Earlier, I mentioned how it was important to check how many tweets were returned because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets.
Reasons why there wouldn’t be results include: if the first query got all the new tweets there wouldn’t be more to get and there might not have been any new tweets between the SinceID and MaxID settings of the most recent query. The code for loading the returned tweets into statusList and getting the maxID are the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read or a max number of tweets have been read.  Logically, you want to stop reading when you’ve read all the tweets and that’s indicated by the fact that the most recent query did not return results. I put the check to stop after 30 tweets are reached to keep the demo from running too long – in the console the response scrolls past available buffer and I wanted you to be able to see the complete output. Yet, there’s another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits and making too many queries per minute will result in an error from twitter that LINQ to Twitter raises as an exception. To use the API properly, you’ll have to ensure you don’t exceed this threshold. Looking at the statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again. Summary Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.   @JoeMayo

    Read the article

  • Is it possible to use Google Analytics to track file downloads?

    - by Eric Falsken
    It's always bothered me that Google Analytics (and similar embedded web traffic monitoring services) can only see a reflection of the traffic going to my server, and can only see page visits, since it depends on the browser executing a JavaScript snippet. If I want to track real downloads of a software package (a ZIP file), there's no way Google Analytics can possibly tell me that, because its JavaScript can't be attached to a ZIP file. Is there a way I can upload my log files to Google so that the pointy-haired boss can see downloads of our ZIP/PDF/BIN files and not just visits to the download page?
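
    On the log-file side, the counting itself is straightforward even before anything reaches Google; a rough C# sketch that tallies ZIP/PDF/BIN downloads from a combined-format access log (the file name and the field positions are assumptions about your server's log format):

        using System;
        using System.IO;
        using System.Linq;

        class DownloadCounter
        {
            static void Main()
            {
                var extensions = new[] { ".zip", ".pdf", ".bin" };

                // Combined log format: host - - [date] "GET /path HTTP/1.1" 200 bytes ...
                var counts = File.ReadLines("access.log")
                    .Select(line => line.Split(' '))
                    .Where(parts => parts.Length > 8 && parts[8] == "200")
                    .Select(parts => parts[6])
                    .Where(path => extensions.Any(ext =>
                        path.EndsWith(ext, StringComparison.OrdinalIgnoreCase)))
                    .GroupBy(path => path)
                    .OrderByDescending(g => g.Count());

                foreach (var group in counts)
                    Console.WriteLine("{0,6}  {1}", group.Count(), group.Key);
            }
        }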

    Read the article

  • File system implementation in MongoDB with GridFS

    - by Ralph
    I am working on two projects that will both implement a WebDAV server backed by MongoDB GridFS. In each case, there is the potential for the system to store tens of millions of files spread across thousands of hierarchical directories. I can come up with two different ways of storing the directory structure:

    1. As a "true" hierarchical file system, with directories containing the IDs (_id) of subdirectories and regular files. The paths will be separated by slashes (/) as in a POSIX-compliant file system. The path /a/b/c will be represented as a directory a containing a directory b containing a file c.

    2. As a flat file system, where file names include the slashes. The path /a/b/c will be stored as a single file with the name /a/b/c.

    What are the advantages and disadvantages of each, with respect to a "real" folder-based file system?
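
    To make the two layouts concrete, here is a rough sketch of how the metadata documents might be modeled from C# for the path /a/b/c; the class and field names are illustrative, not a prescribed GridFS schema:

        using System.Collections.Generic;

        // Option 1: "true" hierarchy -- each directory document lists the _id values
        // of its children, so resolving /a/b/c means walking a -> b -> c.
        public class DirectoryEntry
        {
            public string Id { get; set; }             // _id of this directory
            public string Name { get; set; }           // e.g. "a" or "b"
            public List<string> ChildDirectoryIds { get; set; }
            public List<string> FileIds { get; set; }  // GridFS file _id values
        }

        // Option 2: flat layout -- the full slash-separated path is the file name,
        // so /a/b/c is a single entry and "directories" exist only as name prefixes.
        public class FlatFileEntry
        {
            public string Id { get; set; }             // GridFS file _id
            public string FullPath { get; set; }       // e.g. "/a/b/c"
        }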

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have the series Notes from the Field, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a DELETED database. Well, you may think I am just throwing words around, but in reality this kind of problem makes our DBAs' lives interesting, and in this blog post we have an amazing story from Brian Kelley about the same subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check – Schema Changes History Report:

    Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted!

    Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory. Otherwise, you may occasionally receive a file in use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database.

    Testing with a Deleted Database:

    Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't do a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report but where it's still available.

    Finding the Information:

    I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (loginame is captured). The key is to get the default trace files in a timely manner in order to extract the information.

    If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Direct Code Support?

    - by Josh Kahane
    A few times in the past I've hit a major wall and simply couldn't progress with a certain aspect of an app, as I'm a beginner and still learning the ropes (Objective-C specifically). I was curious whether anyone knows of any services that support programmers in real time, paid or free, and will, over video, audio or text chat, sit and work a problem out till it's fixed and look through your code? I understand Stack Overflow does a super job at this! However I'm in need of something a little more tailored, where someone can spend a little time to sit and look at what I'm dealing with and delve into my code if need be. Thanks.

    Read the article

  • OpenGL programming vs Blender Software, which is better for custom video creation?

    - by iammilind
    I am learning the OpenGL API bit by bit and also developing my own C++ framework library for using it effectively. Recently I came across Blender, software used for graphics creation which is in turn built on OpenGL itself. For my part-time hobby of learning graphics, I just want to create short movie or video segments, e.g. related to construction engineering, epic stories and so on. There may be very minimal to no mouse-keyboard interaction for those videos, unlike video games, which are highly interactive. I was wondering whether learning OpenGL from scratch is worth it for this, or whether I should invest my time in learning the Blender software instead. There are quite a few good example movies created using Blender shown on its website. Other such open-source, cross-platform alternatives that can serve my aforementioned purpose are also welcome.

    Read the article
