Search Results

Search found 54098 results on 2164 pages for 'something broken'.


  • TFS 2010 and the missing Area & Iterations (stale data) Issue

    - by andresv
    The symptom is this: you change some area or iteration in a TFS Project, but the change is not reflected (or updated) in VS or any other TFS client. Well, it happens that TFS now has some clever caching mechanisms that need to be updated when you make a change like this, and those changes are propagated by some scheduled jobs TFS is continuously running in the Application Tier. So, if you get this behavior, please check (and possibly restart) the "Visual Studio Team Foundation Background Job Agent" service. In my case, this service was logging a very odd "Object Reference Not Set" error into the Windows Event Log, and a simple restart fixed it. Hope this is fixed by RTM... (we are using the RC version). And by the way, if the job agent is broken, some other things stop working too, like email notifications. Best regards, Andrés G Vettori, CTO, VMBC
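    If you end up restarting the agent regularly, you don't have to open services.msc each time. The sketch below is one possible way to do it from C# using System.ServiceProcess (it needs a reference to that assembly and admin rights); the service name is the display name quoted above, and the 30-second timeout is just an arbitrary choice, not something TFS requires.

    using System;
    using System.ServiceProcess;

    class RestartJobAgent
    {
        static void Main()
        {
            // Display name of the TFS background job service mentioned above.
            var agent = new ServiceController("Visual Studio Team Foundation Background Job Agent");

            if (agent.Status == ServiceControllerStatus.Running)
            {
                agent.Stop();
                agent.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(30));
            }

            agent.Start();
            agent.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
            Console.WriteLine("Job agent is now: {0}", agent.Status);
        }
    }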

    Read the article

  • git tagging comments - best practices

    - by Evan
    I've adopted a tagging system of x.x.x.x, and this works fine. However, you also need to leave a comment with your git tag. I've been using descriptions such as "fixes bug Y" or "feature X", but is this the best sort of comment to be leaving? In particular, what if a tag encompasses several fixes? It seems not to make sense to have a very long tag comment. Does this mean that I should be creating a tag for every bug fix or feature, or should the tag comments reflect something else? I have a few ideas that may be good, but I'd love some advice from seasoned git tagging veterans :) For those who prefer specific examples:
    1.0.0.0 - initial release
    1.0.0.1 - bug fix for issue X
    1.0.0.2 - (what if this is a bug fix for multiple issues? the comment would be too long, no?)
    Another example: in this one, the comments are more or less the same as the tags, which seems redundant. Is there something else we could be describing? https://github.com/osCommerce/oscommerce2/tags

    Read the article

  • Find Nearest Object

    - by ultifinitus
    I have a fairly sizable game engine created, and I'm adding some needed features, such as this one: how do I find the nearest object from a list of points? In this case, I could simply use the Pythagorean theorem to find the distance to each object and compare the results. I know I can't simply add x and y, because that would be the distance to the object if you only took right-angle turns. However, I'm wondering if there's something else I could do? I also have a collision system, where essentially I turn objects into smaller objects on a smaller grid, kind of like a minimap, and only if objects exist in the same gridspace do I check for collisions. I could do the same thing, only make the gridspace larger to check for closeness (rather than checking every. single. object), however that would take additional setup in my base class and clutter up the already cluttered object. TL;DR Question: Is there something efficient and accurate that I can use to detect which object is closest, based on a list of points and sizes?
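    One common trick, shown here as a sketch that isn't tied to any particular engine, is to compare squared distances: the square root never changes which point is nearest, so you can skip it entirely. The Point2 type and the method names below are just assumptions for illustration.

    using System;
    using System.Collections.Generic;

    struct Point2
    {
        public float X;
        public float Y;
        public Point2(float x, float y) { X = x; Y = y; }
    }

    static class NearestFinder
    {
        // Returns the index of the point closest to 'origin', or -1 if the list is empty.
        // Squared distance preserves ordering, so Math.Sqrt is never needed.
        public static int FindNearest(Point2 origin, IList<Point2> points)
        {
            int nearest = -1;
            float bestDistSq = float.MaxValue;

            for (int i = 0; i < points.Count; i++)
            {
                float dx = points[i].X - origin.X;
                float dy = points[i].Y - origin.Y;
                float distSq = dx * dx + dy * dy;

                if (distSq < bestDistSq)
                {
                    bestDistSq = distSq;
                    nearest = i;
                }
            }
            return nearest;
        }
    }

    For large object counts, the same coarse-grid idea you already use for collisions (or a quadtree) can cut down how many candidates ever reach this loop.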

    Read the article

  • Popup Details for a Table Record

    - by shay.shmeltzer
    This one started as an OTN how-to question about something that seemed like it should work automatically - it turns out you need a couple of small tweaks to get it working. The idea is to have a table on a page showing multiple records; you can click any row in the table and get a pop-up window that shows more data about that row. At first I thought I'd just need to drag the same view twice onto the page - once as a table and then as a form in a pop-up. But then the form didn't reflect the new row that got selected in the table - you'd always see the first row you selected. Adding a Partial Page Rendering between the table and the pop-up didn't do the trick either. Then I realized that the content delivery attribute of the pop-up was set to lazy; when I switched it to immediate, everything worked. Here is a little demo showing the whole development process: Note that the content delivery attribute is also something you might want to check if you see your tables being refreshed too often when you scroll through records, for example.

    Read the article

  • Can't download web photo albums to Picasa

    - by Arcadie
    Someone has shared a Picasa web album (Limited, anyone with the link), but I can't download it to Picasa. The following alert appears: Firefox doesn't know how to open this address, because the protocol (picasa) isn't associated with any program. I have Picasa 3.0.0 installed on Ubuntu 11.04, I remember it saying something about registering the picasa protocol with Firefox during the installation. I have Firefox 6.0.2, and these settings are present in about:config:
    network.protocol-handler.app.picasa;/usr/bin/picasa
    network.protocol-handler.expose.picasa;true
    network.protocol-handler.external.picasa;true
    Picasa is located here:
    $ which picasa
    /usr/bin/picasa
    Is there something I can do to make this work? PS: I hope this is not off-topic here, and I can't find the "picasa" tag. Could someone please add it, if appropriate?

    Read the article

  • Issues installing synapse launcher

    - by George Morton
    I am trying to install the Synapse launcher on my desktop. I am using these two commands:
    sudo add-apt-repository ppa:synapse-core/ppa
    sudo apt-get update && sudo apt-get install synapse
    However, I am getting an error with the second command saying E: Some index files failed to download, they have been ignored, or old ones used instead. I presume this has something to do with my connection to the hosting servers. But what I don't understand is that Synaptic is working; it just seems to be something about that PPA. I don't know what I am doing wrong, as the commands are widely suggested around the web, but they don't seem to work for me! I would greatly appreciate some advice on this, as it is proving to be very frustrating. Many thanks, George

    Read the article

  • 555 Footstool Turns Tech into Mad Scientist Decor

    - by Jason Fitzpatrick
    If you just can’t find the appropriate footstool for your laboratory, this laser-cut footstool styled to look like the ubiquitous 555 Timer should fit the bill. At Evil Mad Scientist Laboratories they were in search of the perfect footstool. Never ones to do something halfway, they set out to build a footstool shaped like the famous integrated circuit, the 555 Timer. The project involved computer design, CNC routers, laser engraving, lots of plywood and glue, and paint. Hit up the link below to see pictures of the entire build process. 555 Footstool [Evil Mad Scientist Laboratories]

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 &ndash; CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible.  Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it you fill out a form for your usual contact information.  Sure enough within 2 hours, I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that setup a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gates friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also visual studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different dpi scales. 
The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set including options such as application arguments, operating path, record I/O performance performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .Net project types, and make it simple to do so.  In most cases of my using this application, I would be using the built in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options setup, and start your program, however RedGate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/gui to perform profiling after launching, that instead they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options that the profiler gives us, its time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development, is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, I decided that I would go ahead and to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes in my opinion.  
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – prime numbers fall into two buckets 6k+/-1 , and a number is prime if it is not divisible by any other primes before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divise this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments..  Odd. Profiling Web Applications The last thing that I wanted to cover, that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler when profiling web applications.  In my solution, I have a simple MVC4 application setup with 1 page, a single input form, that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt for a Red Gate helper app to integrate with the web server without notification. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that Red Gate ANTS CLR Profiler has a lot to offer, however I think it also has a long ways to go.  
3 Biggest Pros: Ability to easily drill down from time graph, to method calls, to source code Wide variety of counters to choose from when profiling your application Excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: Issue regarding line details in source view Nit pick – Processor time vs. Core time Nit pick – Lack of full integration with Visual Studio Ratings Ease of Use (7/10) – I marked down here because of the problems with the line level details and the extra work that that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real time performance monitoring, and the drill downs that I’ve shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  This, with the line level details, the web request grouping, reflector integration, and various options to customize your profiling session I think create a great set of features! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend you trying the application and seeing if it would fit into your process, BUT, remember there are still some kinks in it to hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
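    If you want to reproduce the un-profiled timings quoted above (5.18 s vs. 1.43 s for 15,000 primes) without ANTS attached, a minimal Stopwatch harness along these lines would do it. It assumes the IPrimes, DumbPrimes, and SmartPrimes types from the post live in the same project, and your numbers will of course differ by machine.

    using System;
    using System.Diagnostics;
    using System.Linq;

    class TimingHarness
    {
        static void Main()
        {
            const int count = 15000;

            // Both implementations are the ones shown earlier in the post.
            Time("DumbPrimes", new DumbPrimes(), count);
            Time("SmartPrimes", new SmartPrimes(), count);
        }

        static void Time(string label, IPrimes generator, int count)
        {
            var watch = Stopwatch.StartNew();
            var primes = generator.GetPrimes(count);
            watch.Stop();

            Console.WriteLine("{0}: {1} primes in {2:0.00} s",
                label, primes.Count(), watch.Elapsed.TotalSeconds);
        }
    }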

    Read the article

  • Do I suffer from encapsulation overuse?

    - by Florenc
    I have noticed something in my code in various projects that seems like a code smell to me and something bad to do, but I can't deal with it. While trying to write "clean code" I tend to over-use private methods in order to make my code easier to read. The problem is that the code is indeed cleaner, but it's also more difficult to test (yeah, I know I can test private methods...) and in general it seems a bad habit to me. Here's an example of a class that reads some data from a .csv file and returns a group of customers (another object with various fields and attributes).
    public class GroupOfCustomersImporter {
        //... Class fields ....
        public GroupOfCustomersImporter(String filePath) {
            this.filePath = filePath;
            customers = new HashSet<Customer>();
            createCSVReader();
            read();
            constructGroupOfCustomers();
        }
        private void createCSVReader() {
            //....
        }
        private void read() {
            //.... Reads the file and initializes the class attributes
        }
        private void readFirstLine(String[] inputLine) {
            //.... Method used by the read() method
        }
        private void readSecondLine(String[] inputLine) {
            //.... Method used by the read() method
        }
        private void readCustomerLine(String[] inputLine) {
            //.... Method used by the read() method
        }
        private void constructGroupOfCustomers() {
            //this.groupOfCustomers = new GroupOfCustomers(**attributes of the class**);
        }
        public GroupOfCustomers getConstructedGroupOfCustomers() {
            return this.groupOfCustomers;
        }
    }
    As you can see, the class has only a constructor which calls some private methods to get the job done. I know that's not a good practice in general, but I prefer to encapsulate all the functionality in the class instead of making the methods public, in which case a client would have to work this way:
    GroupOfCustomersImporter importer = new GroupOfCustomersImporter(filepath);
    importer.createCSVReader();
    importer.read();
    importer.constructGroupOfCustomers();
    GroupOfCustomers group = importer.getConstructedGroupOfCustomers();
    I prefer this because I don't want to put useless lines of code on the client's side, bothering the client class with implementation details. So, is this actually a bad habit? If yes, how can I avoid it? Please note that the above is just a simple example. Imagine the same situation happening in something a little bit more complex.

    Read the article

  • wpa_supplicant ioctl[SIOCSIWENCODEEXT]: Invalid argument

    - by RobinJ
    I just installed Ubuntu Server on a spare computer with a broken graphics card, and I needed to set up my wifi. I thought No problem, I've done it before from the command line, in Arch Linux., only the steps I had to take in Arch Linux didn't work in Ubuntu Server (that's odd, wasn't Ubuntu supposed to be so much more user-friendly? :/). This is what I'd normally do:
    ip link set wlan0 up
    wpa_passphrase <ESSID> <WPA KEY> > /etc/wpa_supplicant.conf
    wpa_supplicant -B -Dwext -i wlan0 -c /etc/wpa_supplicant.conf
    dhcpcd wlan0
    Only at the wpa_supplicant step I get some strange message:
    ioctl[SIOCSIWENCODEEXT]: Invalid argument
    ioctl[SIOCSIWENCODEEXT]: Invalid argument
    And if I then try to run dhcpcd (which I installed manually, as it didn't come preinstalled with Ubuntu Server), it times out. Help on how to do it would be much appreciated. My wireless driver is iwl3945.

    Read the article

  • What to do when: "The following packages have unmet dependencies"

    - by André Izquierdo
    When I try to install Wine, PlayOnLinux, and some other programs, I get the following message. At the terminal:
    The following packages have unmet dependencies:
    wine : Depends: wine1.5 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    At the Software Center: Package dependencies cannot be resolved. This error could be caused by required additional software packages which are missing or not installable. Furthermore there could be a conflict between software packages which are not allowed to be installed at the same time. I already tried installing them with "aptitude" and failed. I am running Ubuntu 12.10. Please help! I've had this problem for a while now and don't know what to do!

    Read the article

  • Some files just won't install - linux-image-3.5.0-42-generic

    - by jatin
    It gives the following details of the package operation failing after using the sofware updater on 12.10: installArchives() failed: perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... perl: warning: Setting locale failed. perl: warning: Please check that your locale settings: LANGUAGE = (unset), LC_ALL = (unset), LANG = "en_IN.ISO8859-1" are supported and installed on your system. perl: warning: Falling back to the standard locale ("C"). locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Preconfiguring packages ... (Reading database ... (Reading database ... 217566 files and directories currently installed.) Preparing to replace linux-image-3.5.0-36-generic 3.5.0-36.57~precise1 (using .../linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb) ... locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-36-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-36-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-36-generic /boot/vmlinuz-3.5.0-36-generic Preparing to replace linux-image-3.5.0-42-generic 3.5.0-42.65~precise1 (using .../linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb) ... 
locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Done. Unpacking replacement linux-image-3.5.0-42-generic ... dpkg: error processing /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb (--unpack): unable to make backup link of `./boot/System.map-3.5.0-42-generic' before installing new version: Operation not permitted No apport report written because MaxReports is reached already Examining /etc/kernel/postrm.d . run-parts: executing /etc/kernel/postrm.d/initramfs-tools 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) run-parts: executing /etc/kernel/postrm.d/zz-update-grub 3.5.0-42-generic /boot/vmlinuz-3.5.0-42-generic Preparing to replace memtest86+ 4.20-1.1ubuntu1 (using .../memtest86+_4.20-1.1ubuntu2.1_amd64.deb) ... Unpacking replacement memtest86+ ... dpkg: error processing /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb (--unpack): unable to make backup link of `./boot/memtest86+.bin' before installing new version: Operation not permitted No apport report written because MaxReports is reached already locale: Cannot set LC_CTYPE to default locale: No such file or directory locale: Cannot set LC_MESSAGES to default locale: No such file or directory locale: Cannot set LC_ALL to default locale: No such file or directory Errors were encountered while processing: /var/cache/apt/archives/linux-image-3.5.0-36-generic_3.5.0-36.57_amd64.deb /var/cache/apt/archives/linux-image-3.5.0-42-generic_3.5.0-42.65_amd64.deb /var/cache/apt/archives/memtest86+_4.20-1.1ubuntu2.1_amd64.deb Error in function: Output of df -h: Filesystem Size Used Avail Use% Mounted on /dev/sda6 38G 6.5G 30G 19% / udev 3.5G 4.0K 3.5G 1% /dev tmpfs 1.5G 924K 1.5G 1% /run none 5.0M 4.0K 5.0M 1% /run/lock none 3.6G 296K 3.6G 1% /run/shm none 100M 64K 100M 1% /run/user /dev/sda1 197M 87M 111M 44% /boot /dev/sda5 243G 971M 230G 1% /home

    Read the article

  • Software Engineering Practices &ndash; Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

    Read the article

  • Sitefinity SimpleImageSelector to return Url of image instead of Guid

    - by Joey Brenn
    It's been quite a while, but I've found something to blog about! I've been working with Sitefinity for some time now, and one of the things that I've struggled with (and I'm not the only one) is something that should be simple. See, all I want to do is be able to choose a picture from one of the libraries within Sitefinity and be able to display it via the GUID it returns or the path of the URL. I want to do this from my user control or a custom control. Well, it turns out that this is not built in; at least I've not been able to get anything working correctly until I found this post and was able to get it to work. However, I want to store the relative URL of the image, so I made a small change to make it return the URL instead of the GUID. To make the change, in the SimpleImageSelectorDialog.js file, on line 43, change the original line:
    var selectedValue = this.get_imageSelector().get_selectedImageId();
    to the new line:
    var selectedValue = this.get_imageSelector().get_selectedImageUrl();
    Of course, save and recompile the project, and now it will return the URL instead of the GUID of the image from the chosen album.

    Read the article

  • .NET development on a “Retina” MacBook Pro

    - by Jeff
    The rumor that Apple would release a super high resolution version of its 15” laptop has been around for quite awhile, and one I watched closely. After more than three years with a 17” MacBook Pro, and all of the screen real estate it offered, I was ready to replace it with something much lighter. It was a fantastic machine, still doing 6 or 7 hours after 460 charge cycles, but I wanted lighter and faster. With the SSD I put in it, I was able to sell it for $750. The appeal of higher resolution goes way back, when I would plug into a projector and scale up. Consolas, as it turns out, is a nice looking font for code when it’s bigger. While I have mostly indifference for iOS, I have to admit that a higher dot pitch on the iPhone and iPad is pretty to look at. So I ordered the new 15” “Retina” model as soon as the Apple Store went live with it, and got it seven days later. I’ve been primarily using Parallels as my VM of choice from OS X for about five years. They recently put out an update for compatibility with the display, though I’m not entirely sure what that means. I figured there would have to be some messing around to get the VM to look right. The combination that seems to work best is this: Set the display in OS X to “more room,” which is roughly the equivalent of the 1920x1200 that my 17” did. It’s not as stunning as the text at the default 1440x900 equivalent (in OS X), but it’s still quite readable. Parallels still doesn’t entirely know what to do with the high resolution, though what it should do is somehow treat it as native. That flaw aside, I set the Windows 7 scaling to 125%, and it generally looks pretty good. It’s not really taking advantage of the display for sharpness, but hopefully that’s something that Parallels will figure out. Screen tweaking aside, I got the base model with 16 gigs of RAM, so I give the VM 8. I can boot a Windows 7 VM in 9 seconds. Nine seconds! The Windows Experience Index scores are all 7 and above, except for graphics, which are both at 6. Again, that’s in a VM. It’s hard to believe there’s something so fast in a little slim package like that. Hopefully this one gets me at least three years, like the last one.

    Read the article

  • How to make Rect of irregular shape sprite?

    - by Anil gupta
    I used masking to break an image into pieces following the pattern mentioned below, and it now breaks into different pieces. But now I have an issue: I need to make a Rect for each piece, because I need to drag the broken pieces and place them back at the correct positions so that I can reassemble the original image. To drag and place them correctly I need a Rect, but I have no idea how to make a Rect for an irregular shape. I would be very thankful for any idea or code to make the Rect. My previous question is: How do I break an image into 6 or 8 pieces of different shapes? Thanks.
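    Assuming this is Unity (the capital-R Rect suggests it) and each piece is a readable Texture2D, one way to get a usable Rect is to take the bounding box of the piece's non-transparent pixels. This is only a sketch; the alpha threshold and the readable-texture requirement are assumptions.

    using UnityEngine;

    public static class PieceBounds
    {
        // Returns the bounding Rect (in pixel coordinates) of all pixels whose
        // alpha is above the threshold. The texture must be imported as readable.
        public static Rect GetOpaqueBounds(Texture2D piece, float alphaThreshold = 0.01f)
        {
            Color[] pixels = piece.GetPixels();
            int minX = piece.width, minY = piece.height, maxX = -1, maxY = -1;

            for (int y = 0; y < piece.height; y++)
            {
                for (int x = 0; x < piece.width; x++)
                {
                    if (pixels[y * piece.width + x].a > alphaThreshold)
                    {
                        if (x < minX) minX = x;
                        if (y < minY) minY = y;
                        if (x > maxX) maxX = x;
                        if (y > maxY) maxY = y;
                    }
                }
            }

            if (maxX < minX) return new Rect(0f, 0f, 0f, 0f); // fully transparent piece

            return new Rect(minX, minY, maxX - minX + 1, maxY - minY + 1);
        }
    }

    A bounding Rect like this is enough for dragging and for a coarse "is it near its slot?" check; for pixel-accurate hit testing you would still sample the alpha of the pixel under the pointer.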

    Read the article

  • Does Scrum turn active developers into passive developers?

    - by Saeed Neamati
    I'm a web developer working in a team of three developers and one designer. It's now been about five months since we implemented the agile Scrum software development methodology. But I have a weird feeling I just wanted to share on this site. One important factor in human life is the decision-making process. However, there is a big difference in the decisions you make. Some decisions are just the outcome of an internal or external force, while other decisions are completely based on your free will, and some decisions are simply something in between. The more freedom you have in making decisions, the more self-driven your work becomes. This seems to be a rule, because we tend to shape our lives ourselves. There is a big difference between deciding what to do and being told what to do. Before Scrum, I felt like I had more freedom in making the decisions related to development, analysis, prioritizing implementation, etc. I felt more like I was deciding what I was doing. However, due to the Scrum methodology, now many decisions simply come from the product owner. He prioritizes PBIs, he analyzes how the software should work, even sometimes how the UI and functionality should be implemented. I know that this is part of the Scrum methodology, and I also know that this may result in better sales of the product in the future. However, I now feel like I'm always being told to do something, instead of deciding to do something. This syndrome has now made me more passive towards the work:
    - I tend to search less to find a better solution, approach, or technique
    - I don't wake up in the morning expecting to get to an enjoyable work. Rather, I feel like I'm being forced to work in order to live
    - I have more hunger to work on my own hobby projects after work
    - I won't push the team anymore to get to higher technological levels
    - I spend more time now on dinner or tea-times and have less enthusiasm to get back to work
    - I'm now wishing more for the work to finish sooner, so that I can get home
    The big problem is, I see and diagnose this behavior in my colleagues too. Is it the outcome of Scrum? Does Scrum really make the development team feel like they have no part in forming the overall software, thus making them passive towards the project? How can I overcome this feeling?

    Read the article

  • Ted Kummert to make significant announcement related to SQL Server

    - by jamiet
    Microsoft have announced a conference call tomorrow with the head of all things SQL Server, Ted Kummert. Normally I wouldn't take any notice of such things, but the mysterious pre-conference-call announcement (not something that the SQL Server team do regularly, as I recall) has me intrigued. Logic says that it will have something to do with SQL Server R2; we shall see! @Jamiet

    Read the article

  • How can I write only to the stencil buffer in OpenGL ES 2.0?

    - by stephelton
    I'd like to write to the stencil buffer without incurring the cost of my expensive shaders. As I understand it, I write to the stencil buffer as a 'side effect' of rendering something. In this first pass where I write to the stencil buffer, I don't want to write anything to the color or depth buffer, and I definitely don't want to run through my lighting equations in my shaders. Do I need to create no-op shaders for this (and can I just discard fragments), or is there a better way to do this? As the title says, I'm using OpenGL ES 2.0. I haven't used the stencil buffer before, so if I seem to be misunderstanding something, feel free to be verbose.

    Read the article

  • A C# implementation of the CallStream pattern

    - by Bertrand Le Roy
    Dusan published this interesting post a couple of weeks ago about a novel JavaScript chaining pattern: http://dbj.org/dbj/?p=514 It’s similar to many existing patterns, but the syntax is extraordinarily terse and it provides a new form of friction-free, plugin-less extensibility mechanism. Here’s a JavaScript example from Dusan’s post: CallStream("#container") (find, "div") (attr, "A", 1) (css, "color", "#fff") (logger); The interesting thing here is that the functions that are being passed as the first argument are arbitrary, they don’t need to be declared as plug-ins. Compare that with a rough jQuery equivalent that could look something like this: $.fn.logger = function () { /* ... */ } $("selector") .find("div") .attr("A", 1) .css("color", "#fff") .logger(); There is also the “each” method in jQuery that achieves something similar, but its syntax is a little more verbose. Of course, that this pattern can be expressed so easily in JavaScript owes everything to the extraordinary way functions are treated in that language, something Douglas Crockford called “the very best part of JavaScript”. One of the first things I thought while reading Dusan’s post was how I could adapt that to C#. After all, with Lambdas and delegates, C# also has its first-class functions. And sure enough, it works really really well. After about ten minutes, I was able to write this: CallStreamFactory.CallStream (p => Console.WriteLine("Yay!")) (Dump, DateTime.Now) (DumpFooAndBar, new { Foo = 42, Bar = "the answer" }) (p => Console.ReadKey()); Where the Dump function is: public static void Dump(object options) { Console.WriteLine(options.ToString()); } And DumpFooAndBar is: public static void DumpFooAndBar(dynamic options) { Console.WriteLine("Foo is {0} and bar is {1}.", options.Foo, options.Bar); } So how does this work? Well, it really is very simple. And not. Let’s say it’s not a lot of code, but if you’re like me you might need an Advil after that. First, I defined the signature of the CallStream method as follows: public delegate CallStream CallStream (Action<object> action, object options = null); The delegate define a call stream as something that takes an action (a function of the options) and an optional options object and that returns a delegate of its own type. Tricky, but that actually works, a delegate can return its own type. Then I wrote an implementation of that delegate that calls the action and returns itself: public static CallStream CallStream (Action<object> action, object options = null) { action(options); return CallStream; } Pretty nice, eh? Well, yes and no. What we are doing here is to execute a sequence of actions using an interesting novel syntax. But for this to be actually useful, you’d need to build a more specialized call stream factory that comes with some sort of context (like Dusan did in JavaScript). 
For example, you could write the following alternate delegate signature that takes a string and returns itself: public delegate StringCallStream StringCallStream(string message); And then write the following call stream (notice the currying): public static StringCallStream CreateDumpCallStream(string dumpPath) { StringCallStream str = null; var dump = File.AppendText(dumpPath); dump.AutoFlush = true; str = s => { dump.WriteLine(s); return str; }; return str; } (I know, I’m not closing that stream; sure; bad, bad Bertrand) Finally, here’s how you use it: CallStreamFactory.CreateDumpCallStream(@".\dump.txt") ("Wow, this really works.") (DateTime.Now.ToLongTimeString()) ("And that is all."); The next step would be to combine this contextual implementation with the one that takes an action parameter and do some really fun stuff. I’m only scratching the surface here. This pattern could reveal itself to be nothing more than a gratuitous mind-bender, or there could be applications that we hardly suspect at this point. In any case, it’s a fun new construct. Or is this nothing new? You tell me… Comments are open :)
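
    As a rough illustration of that next step (a sketch of the combination, not code from the post), the contextual factory can hand each chained action the shared writer instead of a fixed string; the WriterCallStream name and the file path below are made up for the example:

        using System;
        using System.IO;

        public static class WriterCallStreamFactory
        {
            // Same self-returning delegate trick as above, but every action
            // receives the TextWriter that the factory closed over.
            public delegate WriterCallStream WriterCallStream(Action<TextWriter> action);

            public static WriterCallStream CreateWriterCallStream(string dumpPath)
            {
                WriterCallStream str = null;
                var dump = File.AppendText(dumpPath);
                dump.AutoFlush = true;
                str = action => { action(dump); return str; };
                return str;
            }
        }

        // Usage, e.g. inside Main():
        // WriterCallStreamFactory.CreateWriterCallStream(@".\dump.txt")
        //     (w => w.WriteLine("Wow, this still works."))
        //     (w => w.WriteLine(DateTime.Now.ToLongTimeString()));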

    Read the article

  • How to render axonometric/isometric tiles that are a 2D array in logic, but inclined 45º visually?

    - by TheLima
    I am making a tile-based strategy game which I plan to give 2.5D visuals in an axonometric/isometric fashion. Right now I'm programming its logic and rendering it as a literal 2-dimensional array (perfect squares, as in a plain top-down view). In short, I have a flat, top-down grid and I want to turn it into an isometric-looking scene. Do I keep going with the 2D-array logic? Is it all just a change in rendering behavior, as I'm thinking it is, or is a 2D array the wrong approach for my objective, one I should change before it's too late? What are the ways of doing it, anyway? How should I apply the 2.5D axonometric/isometric view (45º rotation to the side and 45º rotation upwards)?
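
    For what it's worth, the usual answer to this kind of question is to keep the plain 2D array for the game logic and change only the rendering: each cell gets projected to screen space at draw time. A minimal sketch of that mapping in C# (the 2:1 tile ratio and every name below are illustrative assumptions, not taken from the question):

        static class IsoProjection
        {
            // Classic 2:1 "diamond" projection: the logical grid stays a plain
            // 2D array; only the grid-cell-to-pixel mapping changes.
            public static void GridToScreen(int gridX, int gridY,
                                            int tileWidth, int tileHeight,
                                            out int screenX, out int screenY)
            {
                screenX = (gridX - gridY) * (tileWidth / 2);
                screenY = (gridX + gridY) * (tileHeight / 2);
            }
        }

    Drawing tiles in order of increasing gridX + gridY keeps nearer tiles painted over the ones behind them.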

    Read the article

  • Which GPU with my CPU for 1080p flash?

    - by oshirowanen
    Based on the following site: http://www.adobe.com/products/flashplayer/systemreqs/ I need the following minimum spec to play 1080p flash video via a browser: CPU: 1.8GHz Intel Core Duo, AMD Athlon 64 X2 4200+, or faster processor; RAM: 512MB; GPU: 64MB of graphics memory. I only have a 2.8GHz Pentium 4 processor, which is nowhere near as good as the processors listed above. I don't want to upgrade the processor, as I think that would mean changing the motherboard, etc. So, my question is: what is the cheapest PCI-E GPU I can buy which will let me play smooth 1080p flash video via a browser? I think the cheapest I can get is the 8400GS, but I'm not sure if that will be able to handle 1080p with the processor I have. I have also looked at the GT520 and was wondering whether that is the cheapest GPU I need, or if there is something cheaper which will do 1080p with a 2.8GHz Pentium 4.

    Read the article

  • Can't Boot, video drivers or x-server problem?

    - by ZacharyH
    I was uninstalling some programs that I had installed to try to get my iPod touch working with Ubuntu (I gave up on that) when Ubuntu just crashed. Now, after I choose Ubuntu in GRUB, it gives me a screen that says "Ubuntu is running in low-graphics mode: your screen, graphics card, and input device settings could not be detected correctly. You will need to configure these yourself." It was working just fine before I started uninstalling those programs, so I think I might have removed something the system needs. If I click OK on that screen, it gives me options to reconfigure, troubleshoot, exit to the console, or restart X, but no matter what I choose I still can't boot into Ubuntu - I get stuck looking at the splash screen, which stalls forever. I was getting help from one of my mates and he was doing something with the LiveCD; now the message doesn't pop up any more, I just get stuck at a never-ending splash screen. Any help would be appreciated, thanks!

    Read the article

  • Odd Profiler Results with EF4

    - by AjarnMark
    I have been doing some testing of using the Microsoft Entity Framework 4 with stored procedures and ran across some really odd results in SQL Server Profiler. The application in question, which uses Entity Framework 4, is a simple web application written in C#, and the Entity Data Model is actually contained in a referenced class library of its own. I’ll write more about my experiences with this later. For now the question is, why does SQL Profiler think that the stored procedure is running in Master, and not in my application database? While analyzing the effects of using custom helper methods on my EDM classes to call the stored procedure, I decided to run Profiler while I stepped through the code so that I had a clear understanding of exactly when and what calls were made to the SQL Server. I ran Profiler switching back and forth between the TSQL and TSQL_SP templates. However, to reduce the number of result rows I needed to wade through, I set a filter on DatabaseID to be equal to my application’s database. Each time I ran this, the only thing that I saw was an Audit:Login to the database, but no procedure or T-SQL statements executed, yet I was definitely getting results back to my web page. I tried other Profiler templates, still filtering on DatabaseID (tangent: I found, at least back in SQL 2000 Profiler, that filtering on DatabaseID was more reliable than filtering on DatabaseName. Even though I’m now running SQL 2008, that habit sticks with me). Still no results other than the Login. Very weird! Finally, I decided to run Profiler with no filtering and discovered that the lines which represent my stored procedure and its T-SQL commands are all marked with DatabaseID = 1, which is Master. Why in the world would that be? My procedure is definitely in the application database, and not in Master, and there is nothing funny about the call to the procedure evident in Profiler (i.e. it is not called as MyAppDB.dbo.MyProcName, but rather just dbo.MyProcName). There must be something funny with the way the Entity Framework is wrapping this call, and I don’t like it…I don’t like it one bit. My primary PROD server contains 40+ databases, and when I need to profile something, I expect to be able to filter based on DatabaseID (for the record, I displayed DatabaseName in my results, too, and it also shows Master). I find the same pattern of everything except the Login showing up as being in Master when I run my version that uses standard LINQ to Entities instead of stored procedures, so that suggests it is not my code, but rather something funny with SQL Server 2008 Profiler or the Entity Framework. If you have any ideas about why this might be so, please comment below.
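
    For reference, the kind of helper the post mentions (a partial-class method on the generated context that runs a function import) might look roughly like the sketch below; the context, result type, procedure, and parameter names are hypothetical, not taken from the post:

        using System.Collections.Generic;
        using System.Data.Objects;
        using System.Linq;

        public partial class MyAppEntities // hypothetical EF4 ObjectContext
        {
            // Assumes a function import named "GetOrdersByCustomer" is defined in
            // the EDM and mapped to a stored procedure in the application database;
            // OrderSummary stands in for the complex type it returns.
            public List<OrderSummary> GetOrdersByCustomer(int customerId)
            {
                return this.ExecuteFunction<OrderSummary>(
                    "GetOrdersByCustomer",
                    new ObjectParameter("CustomerId", customerId)).ToList();
            }
        }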

    Read the article

  • Issues with ganglia and upstart in 13.04

    - by theist
    I'm having an issue with ganglia-monitor and upstart. Just after installing, it starts and cannot be stopped. I tried to solve it by following http://upstart.ubuntu.com/cookbook/#expect but I can't get upstart to track the actual PID of gmond. Assuming the init configuration was wrong, I followed the directions in http://upstart.ubuntu.com/cookbook/#how-to-establish-fork-count to try to fix it. As it states, a fork count of 1 means the "expect fork" stanza and 2 means the "expect daemon" stanza... but gmond gives a count of 8. And of course I still can't get upstart to track the actual PID. I haven't found a bug report or anything like that. Is there something I missed? Upstart seems to be failing, and the worst thing is that it hangs on start/stop, so I cannot even uninstall or reconfigure the packages.
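
    For comparison, one commonly suggested way to sidestep the fork-count guessing is to run gmond in the foreground, so upstart supervises the real process without needing any expect stanza. The job file below is only a sketch; the binary path, config path, and foreground flag are assumptions that should be checked against your install (see gmond --help):

        # /etc/init/ganglia-monitor.conf  (sketch only, not a drop-in fix)
        description "Ganglia monitoring daemon"

        start on runlevel [2345]
        stop on runlevel [!2345]
        respawn

        # Keeping gmond in the foreground means upstart tracks the right PID
        # directly, so no "expect fork" / "expect daemon" stanza is needed.
        exec /usr/sbin/gmond -c /etc/ganglia/gmond.conf --foreground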

    Read the article
