Search Results

Search found 101 results on 5 pages for 'nail yener'.

Page 3 of 5

  • Aptronyms: fitting the profession to the name

    - by Tony Davis
    Writing a recent piece on the pains of index fragmentation, I found myself wondering why, in SQL Server, you can’t set the equivalent of a fill factor on a heap table. I scratched my head…who might know? Phil Factor, of course! I approached him with a due sense of optimism, only to find that not only did he not know, he didn’t seem to care much either. I skulked off thinking how this may be the final nail in the coffin of nominative determinism.

    I’ve always wondered if there was anything in it, though. If your surname is Plumb or Leeks, is there even a tiny, extra percentage chance that you’ll end up fitting bathrooms? Some examples are quite common. I’m sure we’ve all met teachers called English or French, or lawyers called Judge or Laws. I’ve also known a Doctor called Coffin, a Urologist called Waterfall, and a Dentist called Dentith. Two personal favorites are Wolfgang Wolf, who ended up managing the German soccer team Wolfsburg, and Edmund Akenhead, a Crossword Editor for The Times newspaper.

    Having forgiven Phil his earlier offhandedness, I asked him if he knew of any notable examples. He had met the famous Dr. Batty and Dr. Nutter, both Psychiatrists, knew undertakers called Death and Stiff, had read a book by Frederick Page-Turner, and suppressed a giggle at the idea of a feminist called Gurley-Brown. He even managed to better my Urologist example, citing the article on incontinence in the British Journal of Urology (vol. 49, pp. 173-176, 1977) by A. J. Splatt and D. Weedon.

    What, however, if you were keen to gently nudge your child down the path to a career in IT? What name would you choose? Subtlety probably doesn’t really work, although in a recent interview, Rodney Landrum did congratulate PowerShell MVP Max Trinidad on being named after a SQL function. Grant “The Memory” Fritchey (OK, I made up that nickname) doesn’t do badly either. Some surnames seem to offer a natural head start, although I know of no members of the Page-Reid clan in the profession. There are certainly families with the Table surname, although sadly, Little Bobby Tables is merely a legend created by xkcd. A member of the well-known Key family would need to name their son Primary, or maybe live abroad, to make their mark.

    Nominate your examples of people seemingly destined, by name, for their chosen profession (extra points for IT). The best three will receive a prize. Cheers, Tony.

  • Keeping your options open in a cloud solution

    - by BuckWoody
    In on-premises solutions we have the full range of options open for a given computing solution – but we don’t always take advantage of them, for multiple reasons. Data goes in a Relational Database Management System, files go on a share, and e-mail goes to the Exchange server. Over time, vendors (including ourselves) add functionality to one product that allows non-standard use of the platform. For example, SQL Server (and Oracle, and others) allow large binary storage in or through the system – something an RDBMS was not originally intended to handle. There are certainly times when this makes sense, of course, but often these platform hammers turn every problem into a nail. It can make us “lazy” in our design – we sometimes don’t take the time to learn another architecture because the one we’ve spent so much time with can handle what we want to do. But there’s a distinct danger here. In nature, when a population shares too many of the same traits, it can suffer a complete collapse if a situation exploits a weakness shared by that population. The same is true when you don't use the right tool for the job in a computing environment. Your company or organization depends on your knowledge as a professional to select the best mix of supportable, flexible, cost-effective technologies to solve their problems, whether you’re in an architect role or not.

    So take some time today to learn something new. The way I do this is to select a given problem and try to solve it with a technology I’m not familiar with. For instance – create a Purchase Order system in Excel, then in Hadoop or MongoDB, or even in flat files using PowerShell as an interface. No, I’m not suggesting any of these architectures are the proper way to solve the PO problem, but taking something concrete that you know well and applying that meta-knowledge to another platform will assist you in exercising the “little grey cells” and help you and your organization understand what is open to you. And of course you can do all of this on-premises – but my recommendation is to check out a cloud platform (my suggestion would of course be Windows Azure :) ) and try it there. Most providers (including Microsoft) provide free time to do that.

  • Tiling Problem Solutions for Various Size "Dominoes"

    - by user67081
    I've got an interesting tiling problem. I have a large image (size 128k, i.e. 131072 squares) with dimensions 256x512... I want to fill this image with certain grain types (a 1x1 tile, a 1x2 strip, a 2x1 strip, and a 2x2 square) and have no overlap, no holes, and no extension past the image boundary. Given some probability for each of these grain types, a list of the number required to be placed is generated for each. Obviously an iterative/brute-force method doesn't work well here if we just randomly place the pieces; instead, a certain algorithm is required:

    1) All 2x2 square grains are randomly placed until exhaustion.
    2) 1x2 and 2x1 grains are randomly placed, alternating, until exhaustion.
    3) The remaining 1x1 tiles are placed to fill in all holes.

    It turns out this algorithm works pretty well for some cases and has no problem filling the entire image; however, as you might guess, increasing the probability (and thus number) of 1x2 and 2x1 grains eventually causes the placement to stall (since there are too many holes created by the strips and not all of them can be placed). My approach to a solution has been as follows:

    1) Create a mini-image of size 8x8 or 16x16.
    2) Fill this image randomly, following the algorithm specified above, so that the desired probability of the entire image is realized in the mini-image.
    3) Create N of these mini-images and then randomly, successively place them in the large image.

    Unfortunately there are some downfalls to this simplification:

    1) Given the small size of the mini-images, nailing an exact probability for the entire image is not possible. For example, if I want p(2x1) = p(1x2) = 0.4, the mini-image may only give 0.41 as the closest probability.
    2) The mini-images create a pseudo-boundary where no overlaps occur, which isn't really descriptive of the model this is being used for.
    3) There is only a fixed number of mini-images, so I'm not sure how random this really is.

    I'm really just looking to brainstorm about possible solutions to this. My main concern is really to nail down closer probabilities. Now, one might suggest I just increase the mini-image size. Well, I have, and it turns out that in certain cases (p(1x2) = p(2x1) = 0.5) the 16x16 mini-image isn't even iteratively solvable... So it's pretty obvious how difficult it is to randomly solve this for anything greater than 8x8 sizes. So I'd love to hear some ideas. Thanks!
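
    For reference, here is a bare-bones sketch of the three-step placement procedure described above, in Python. The grain counts are made up, and "until exhaustion" is approximated by a bounded number of random attempts - this is an illustration of the approach, not a tuned implementation:

        import random

        def place_randomly(grid, h, w, count, max_attempts=200000):
            """Randomly drop `count` non-overlapping h x w grains onto a 0/1 grid."""
            rows, cols = len(grid), len(grid[0])
            placed = attempts = 0
            while placed < count and attempts < max_attempts:
                attempts += 1
                r = random.randrange(rows - h + 1)
                c = random.randrange(cols - w + 1)
                cells = [(r + dr, c + dc) for dr in range(h) for dc in range(w)]
                if all(grid[y][x] == 0 for y, x in cells):
                    for y, x in cells:
                        grid[y][x] = 1
                    placed += 1
            return placed  # may fall short of `count`: that is the stall

        grid = [[0] * 256 for _ in range(512)]   # the 256x512 image
        place_randomly(grid, 2, 2, 4000)         # step 1: 2x2 squares
        for _ in range(8000):                    # step 2: alternate 1x2 / 2x1
            place_randomly(grid, 1, 2, 1)
            place_randomly(grid, 2, 1, 1)
        holes = sum(row.count(0) for row in grid)
        print(holes, "cells left for the 1x1 tiles")  # step 3 fills these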

  • Investigating a potential CPU failure

    - by Jernej
    On an Ubuntu server that I am using for computations, I have recently observed that some CPU-intensive programs (GUROBI, CPLEX) often segfault. In correspondence with the tech support of the respective programs, it was suggested that it may be a hardware issue.

    The administrator of the server performed a detailed memtest, and it turned out that the RAM modules appear to be fine. Hence I've used the tool mprime to test the CPU, and the following two lines appear multiple times during the execution of the stress tests:

        [Worker #4 Oct 18 18:47] FATAL ERROR: Rounding was 0.498046875, expected less than 0.4
        [Worker #4 Oct 18 18:47] Hardware failure detected, consult stress.txt file.

    The stress.txt file in itself is not very verbose about what could be the cause of this error, so I would like to ask whether anyone here happens to know what could be the cause of this issue. Is there some other test I could perform to nail the problem down even further?

    The temperature of the system (and all cores) was fine during the entire stress test (+69.0°C (high = +80.0°C, crit = +98.0°C)). The CPU in question is an Intel Core i7-2600K CPU @ 3.40GHz and is not overclocked or modified in any way. Also interesting: if I run mprime to only stress the CPU, all tests pass fine. The error is only triggered when I let mprime stress the CPU+RAM.

  • Why is Apache htdigest authentication failing in IE10 on Windows 8?

    - by Kevin Fodness
    One of our developers reported that for the past week or two, the htdigest authentication that we have set up on our test sites in Apache is not working in IE10 on Windows 8. It's fine in IE10 on Windows 7, and it's fine in Chrome on Windows 8. The specific behavior is: navigate to a site with htdigest authentication enabled, the username and password form pops up, enter the correct username and password, and the username and password box pops up again.

    Potentially useful information:

    - All patches applied on the Windows 8 box
    - No additional software on the Windows 8 box other than Outlook 2013 and a browser test suite (Chrome, Firefox, Opera, Chrome Canary, Opera Next)
    - Win8 running in a virtual machine on Xen
    - The same behavior can be replicated on Win8/IE10 on Browserstack.com
    - Server running Ubuntu 10.10 with Apache 2.2.16

    This feels like a patch was applied to the Windows box that broke digest authentication for IE10 on Win8 (the box is configured for automatic updates). However, without knowing a specific date I can't necessarily nail this down. Has anyone else experienced this problem?

    EDIT: This problem only happens in the "Metro" interface, not when running IE10 in desktop mode. As of a few weeks ago, it worked fine even in the "Metro" interface.
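
    For context, a typical mod_auth_digest setup of the kind being described looks something like the following; the location, realm name, and file path here are illustrative guesses, not taken from the poster's server:

        <Location /protected>
            AuthType Digest
            AuthName "Test Sites"
            AuthUserFile /etc/apache2/.htdigest
            Require valid-user
        </Location>

    with the user file managed by the htdigest utility (e.g. htdigest -c /etc/apache2/.htdigest "Test Sites" username).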

  • Big Data – Basics of Big Data Architecture – Day 4 of 21

    - by Pinal Dave
    In yesterday’s blog post we understood how the evolution of Big Data happened. Today we will understand the basics of Big Data architecture.

    Big Data Cycle

    Just like every other database-related application, a Big Data project has its own development cycle, and the three Vs for sure play an important role in deciding the architecture of Big Data projects. Just like every other project, a Big Data project also goes through similar phases of data capturing, transforming, integrating, analyzing, and building actionable reporting on top of the data. While the process looks almost the same, due to the nature of the data the architecture is often totally different. Here are a few of the questions which everyone should ask before going ahead with a Big Data architecture.

    Questions to Ask

    - How big is your total database?
    - What is your requirement for reporting in terms of time – real time, semi-real time, or at frequent intervals?
    - How important is data availability, and what is the plan for disaster recovery?
    - What are the plans for network and physical security of the data?
    - What platform will be the driving force behind the data, and what are the different service level agreements for the infrastructure?

    These are just basic questions, but based on your application and business needs you should come up with a custom list of questions to ask. As I mentioned earlier, these questions may look quite simple, but the answers will not be simple. When we are talking about a Big Data implementation, there are many other important aspects which we have to consider when we decide on the architecture.

    Building Blocks of Big Data Architecture

    It is absolutely impossible to discuss and nail down the most optimal architecture for any Big Data solution in a single blog post; however, we can discuss the basic building blocks of Big Data architecture. Here is the image which I have built to explain how the building blocks of the Big Data architecture work together. The image gives a good overview of how, in Big Data architecture, the various components are associated with each other. In Big Data architecture, various different data sources are part of the architecture; hence extract, transform and integrate form some of the most essential layers of the architecture. Most of the data is stored in relational as well as non-relational data marts and data warehousing solutions. As per the business need, various data are processed and converted into proper reports and visualizations for end users. Just like software, hardware is almost the most important part of Big Data architecture. In Big Data architecture, hardware infrastructure is extremely important, and failover instances as well as redundant physical infrastructure are usually implemented.

    NoSQL in Data Management

    NoSQL is a very famous buzzword, and it really means Not Relational SQL or Not Only SQL. This is because in Big Data architecture the data can be in any format. It can be unstructured, relational, or in any other format, or from any other data source. To bring all the data together, relational technology is not enough; hence new tools, architectures and other algorithms were invented which take care of all kinds of data. These are collectively called NoSQL.

    Tomorrow

    Over the next four days we will tackle the buzzword – Hadoop.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Simple task framework - building software from reusable pieces

    - by RuslanD
    I'm writing a web service with several APIs, and they will be sharing some of the implementation code. In order not to copy-paste, I would ideally like to implement each API call as a series of tasks, which are executed in a sequence determined by the business logic. One obvious question is whether that's the best strategy for code reuse, or whether I can look at it in a different way. But assuming I want to go with tasks, several issues arise:

    - What's a good task interface to use?
    - How do I pass data computed in one task to another task in the sequence that might need it?

    In the past, I've worked with task interfaces like:

        interface Task<T, U> {
            U execute(T input);
        }

    Then I also had sort of a "task context" object which had getters and setters for any kind of data my tasks needed to produce or consume, and it got passed to all tasks. I'm aware that this suffers from a host of problems, so I wanted to figure out a better way to implement it this time around.

    My current idea is to have a TaskContext object which is a type-safe heterogeneous container (as described in Effective Java). Each task can ask for an item from this container (task input), or add an item to the container (task output). That way, tasks don't need to know about each other directly, and I don't have to write a class with dozens of methods for each data item. There are, however, several drawbacks:

    - Each item in this TaskContext container should be a complex type that wraps around the actual item data. If task A uses a String for some purpose, and task B uses a String for something entirely different, then just storing a mapping between String.class and some object doesn't work for both tasks. The other reason is that I can't use that kind of container for generic collections directly, so they need to be wrapped in another object.
    - This means that, based on how many tasks I define, I would need to also define a number of classes for the task items that may be consumed or produced, which may lead to code bloat and duplication. For instance, if a task takes some Long value as input and produces another Long value as output, I would have to have two classes that simply wrap around a Long, which IMO can spiral out of control pretty quickly as the codebase evolves.

    I briefly looked at workflow engine libraries, but they kind of seem like a heavy hammer for this particular nail. How would you go about writing a simple task framework with the following requirements?

    - Tasks should be as self-contained as possible, so they can be composed in different ways to create different workflows.
    - That being said, some tasks may perform expensive computations that are prerequisites for other tasks. We want to have a way of storing the results of intermediate computations done by tasks so that other tasks can use those results for free.
    - The task framework should be light, i.e. growing the code doesn't involve introducing many new types just to plug into the framework.
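
    One way around the first drawback is to key the container by a (type, role) pair rather than by type alone, so two different Strings can coexist. A rough sketch of the idea, written in Python for brevity (all names are illustrative; a Java version would carry the type safety at compile time via Class<T> keys):

        class TaskContext:
            """Heterogeneous container keyed by (type, role-name) pairs."""
            def __init__(self):
                self._items = {}

            def put(self, cls, role, value):
                if not isinstance(value, cls):
                    raise TypeError(f"{role} must be a {cls.__name__}")
                self._items[(cls, role)] = value

            def get(self, cls, role):
                return self._items[(cls, role)]

        class FetchUser:
            def run(self, ctx):
                # produces a named, typed output instead of a bare String
                ctx.put(str, "user.name", "alice")

        class GreetUser:
            def run(self, ctx):
                # consumes by (type, role); no direct coupling to FetchUser
                print("hello,", ctx.get(str, "user.name"))

        ctx = TaskContext()
        for task in (FetchUser(), GreetUser()):  # order set by business logic
            task.run(ctx)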

  • Love and Hate Outlook autocomplete, Outlook 2010/Exchange 2010

    - by Kay Sellenrode
    I think almost every Exchange admin will concur with me that the Outlook autocomplete cache is one of those things you love but at the same time also hate. Users mostly love this function, except when it fails. Luckily, since Outlook 2010 things got a little better and we got rid of the dreaded .nk2 files. Outlook 2010 now includes a folder named "Suggested Contacts": all users you send an email to, and that don't already have a contact object, are saved in this Suggested Contacts folder. A lot of people thought this folder was also the source for the autocomplete cache, which would make it somewhat easy to manage; I wish the solution were that easy. Sadly, separate from the Suggested Contacts, Outlook still maintains a cache for the autocomplete function.

    Let's say you run into the following situation: John works for company A and is a popular contact for almost everyone in your organization. Now John quits his job at company A and moves to company B. Luckily John keeps your company as a customer, but his email address changes from companyA.com to companyB.com. Since you don't want to do any business with company A anymore, you want to make sure none of your users accidentally mail to his old address. Now this is where the real fun starts, because almost all of your 1000 users have mailed at least once with John, resulting in the fact that every user most probably has John listed in their autocomplete cache.

    I have run into situations like this multiple times with several customers, which is always a pain. And of course this blog post is the result of one of those issues once again. I knew that with the Suggested Contacts we could do more than previously, but I had never spent time on it before. But today I thought, let's nail this, now and forever!

    Note that things are different for every combination of Outlook and Exchange; I explain the procedure for Exchange 2010 SP1+ in combination with Outlook 2010.

    First we want to get rid of all contact objects that contain [email protected]. To do this we need to be assigned the RBAC role "Mailbox Import Export", which can be done through the Exchange Control Panel. In my test environment I assigned this role to the Organization admins, but in real life you might want to add it to a custom role. Open the Exchange Control Panel by logging in to the ECP URL (in my case https://ITFEX.itf.local/ECP), and make sure you have selected your organization as the management scope. Browse to Roles & Auditing, and open the properties for the Organization Management role group. Click on the Add button to add a new role to the Organization Management role group; select the Mailbox Import Export role, then click Add and OK to add it to the role group.

    Once you have assigned that role to your account, you can open the Exchange Management Shell and execute the following command:

        Get-Mailbox -ResultSize unlimited | Search-Mailbox -TargetMailbox "your.account" -TargetFolder searchanddelete -LogLevel full -LogOnly -SearchQuery "kind:contact AND [email protected]"

    This command will create a list of all mailboxes and any contacts that were found with an email address that contains [email protected]; this list is then posted in the mailbox you specified at your.account, in the folder searchanddelete. Now examine the report that was created and posted in the mailbox to see if it matches what you think it should match. When you're confident that the search includes all references and no false positives, you can execute almost the same command, but this time with a delete action instead of the log-only:

        Get-Mailbox -ResultSize unlimited | Search-Mailbox -TargetMailbox "your.account" -TargetFolder searchanddelete -LogLevel full -DeleteContent -SearchQuery "kind:contact AND [email protected]"

    Now most people would think this removes the contact object from the Suggested Contacts, resulting in a removal from the autocomplete list. Sad but not true: to clean up the autocomplete list, start Outlook with the command "outlook /cleanautocompletecache". This will result in an empty cache, which is luckily then rebuilt based on the Suggested Contacts, which now no longer include the [email protected] contact.

  • Heading out to Dallas GiveCamp 2011

    - by dotgeek
    The day has finally arrived for twelve local charities here in the Dallas area, when they'll get some help from various local developers with their website initiative needs at this year's Dallas GiveCamp. I'm really looking forward to helping out at this year's event, and what I hope will be the start of many more GiveCamps to follow.

    Similar to Habitat for Humanity, where people gather to help build and improve homes for people in need, GiveCamp brings together programmers and equips them with the virtual tools they need to build and improve the charities' existing websites. Tonight is when things kick off for this weekend's events and teams start working on their various projects. The building then continues on through the night and all the way through until Sunday afternoon. The end goal for the teams and charities is to have a completed and working website for each charity to begin using, and to turn over all the production code and digital assets to them.

    None of this would be possible without the great sponsors we have returning once again and their donations of various products to help these charities out with their projects - like Telerik's CMS product Sitefinity 4.0, paired with a year of hosting from Verio, to mention just a few of them.

    Just like the skilled builders who might help train volunteers in the use of a nail gun when building a house, training is also available on site for the developers and these local charities, giving them all the skills they need to manage and use these products, from site development through to actual production - a key to the success of this weekend's event. Tonight's training sessions will kick off with a real treat from Giovanni Gallucci, as he speaks about social media for NPOs, and later Gabe Sumner from Telerik will begin a training session on Sitefinity for developers. These training sessions will continue throughout the weekend, with DotNetNuke and mojoPortal sessions also planned.

    If you're a developer and would like to help out in the future, then check in your area and with your local user groups to find out if you already have a GiveCamp near you. If you don't have one available, then consider starting up a local GiveCamp, and then you too can help Code it Forward.

    About GiveCamp

    GiveCamp is a weekend-long event where software developers, designers, and database administrators donate their time to create custom software for non-profit organizations. This custom software could be a new website for the nonprofit organization, a small data-collection application to keep track of members, or an application for the Red Cross that automatically emails a blood donor three months after they've donated blood to remind them that they are now eligible to donate again. The only limitation is that the project should be scoped to be completed in a weekend.

    During GiveCamp, developers are welcome to go home in the evenings or camp out all weekend long. Food and drink are usually provided at the event, and there are sometimes even game systems set up for when you need a little break! Overall, it's a great opportunity for people to work together, develop new friendships, and do something important for their community.

    At GiveCamp, there is an expectation of "What Happens at GiveCamp, Stays at GiveCamp". Therefore, all source code must be turned over to the charities at the end of the weekend (developers cannot ask for payment), and the charities are responsible for maintaining the code moving forward (charities cannot expect the developers to maintain the codebase).

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and a step in the right direction for us .NET web developers, as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short-lived, as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010.

    The Mystery

    After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used:

        <video width="854" height="480" id="myOtherVideo" autoplay="" controls="">
            <source src="/DemoSite1/Media/big_buck_bunny.mp4" />
            <div>
                <p>Your browser does not support HTML5 Video.</p>
            </div>
        </video>

    As you can see from the code, I had the "fail-safe" code inside the video tag, the idea there being that if the video tag, or the video files themselves, are not supported by the browser, my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video.

    The Investigation

    Whoa! DJ, stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching on the web and asking around, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) worked in the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer Tools in IE9 and use the new Network tab.

    The Conclusion

    If you take a closer look at the results displayed in the Network tab, you can see that IE9 RC interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then I added the MIME types that I want to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved.

    After Thoughts

    After solving the mystery, I still had the question of why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer that I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading, rather than trying to read the metadata about the data it is trying to download before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. Makes sense to me. In any case, that is just an educated guess. If you have any comments, feel free to post them below.

    This post also appears at http://david.wes.st
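
    For reference, in IIS the static-content MIME registrations the post describes can also be declared in web.config; a sketch (the extensions chosen here are illustrative):

        <system.webServer>
            <staticContent>
                <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
                <mimeMap fileExtension=".ogv" mimeType="video/ogg" />
                <mimeMap fileExtension=".webm" mimeType="video/webm" />
            </staticContent>
        </system.webServer>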

  • ASP.NET MVC, Url Routing: Maximum Path (URL) Length

    - by Martin Aatmaa
    The Scenario

    I have an application where we took the good old query string URL structure:

        ?x=1&y=2&z=3&a=4&b=5&c=6

    and changed it into a path structure:

        /x/1/y/2/z/3/a/4/b/5/c/6

    We're using ASP.NET MVC and (naturally) ASP.NET routing.

    The Problem

    The problem is that our parameters are dynamic, and there is (theoretically) no limit to the number of parameters that we need to accommodate. This is all fine until we got hit by the following train:

        HTTP Error 400.0 - Bad Request
        ASP.NET detected invalid characters in the URL.

    IIS would throw this error when our URL got past a certain length.

    The Nitty Gritty

    Here's what we found out. This is not an IIS problem: IIS does have a max path length limit, but the above error is not it (see learn.iis.net, "How to Use Request Filtering", section "Filter Based on Request Limits"). If the path were too long for IIS, it would throw a 404.14, not a 400.0. Besides, the IIS max path (and query) lengths are configurable:

        <requestLimits maxAllowedContentLength="30000000" maxUrl="260" maxQueryString="25" />

    This is an ASP.NET problem. After some poking around (IIS forums thread "ASP.NET 2.0 maximum URL length?", http://forums.iis.net/t/1105360.aspx), it turns out that this is an ASP.NET (well, .NET really) problem. The shit of the matter is that, as far as I can tell, ASP.NET cannot handle paths longer than 260 characters. The nail in the coffin is that this is confirmed by Phil the Haack himself (Stack Overflow, "ASP.NET url MAX_PATH limit", question ID 265251).

    The Question

    So what's the question? The question is, how big of a limitation is this? For my app, it's a deal killer. For most apps, it's probably a non-issue. What about disclosure? Nowhere where ASP.NET routing is mentioned have I ever heard a peep about this limitation. The fact that ASP.NET MVC uses ASP.NET routing makes the impact of this even bigger. What do you think?

  • number->string and related procedures in GIMP scheme scripting

    - by dim fish
    I am frustrated with string-to-number and number-to-string conversion in GIMP scripting. I am running GIMP 2.6.8 in Windows Vista. I understand that GIMP's internal Scheme implementation changes over the versions, and I can't seem to nail down the documentation. From what I can gather, GIMP's Scheme is a subset of TinyScheme and/or supports the R5RS standard procedures. In any case, I usually just look in the packaged script directory for examples when I want to try something new, because that should work for sure, right?

    For example, grid-system.scm comes with the latest GIMP release and has the expression

        (string-append (number->string obj) " ")

    which is exactly what I want. However, if I use number->string in my own script, or even type it into GIMP's script console (which is how I usually test out new stuff I want to do), it tells me number->string is an unbound variable:

        > (number->string 3)
        Error: eval: unbound variable: number->string

    Other standard procedures from, say, R5RS work just fine:

        > (string-append "frust" "rated")
        "frustrated"

    So:

    1) Is there some lurking documentation for current GIMP Scheme scripting, other than something drastic like searching GIMP's source code?
    2) Can I use the GIMP console to spit out a list of all defined procedures, to find something I need?
    3) Can anyone else confirm that number->string is not defined for the current Windows build, even though it appears in the packaged scripts?

    My web searches haven't turned up any related problems, and a complete uninstall of all GIMP versions, back to the latest, puts me in the same scrape.

  • Silverlight 4 seems to be starving for memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and the initial loading acceptable I have built some helper classes that load in the background all pages and user controls that might be used later on.

    On Silverlight 3.0, everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process approaches creating the instances of the user controls, the layout freezes unexpectedly for a minute, and sometimes for more. Looking at the Task Manager, the memory usage of IE jumps from 50MB to 400MB and sometimes up to 1.5GB. If the process doesn't take that much, the layout is rendered properly, even though the memory usage is still extremely high; otherwise everything crashes due to an out-of-memory exception. Running the same application compiled in SL3, the memory used is about 200MB when all the user controls are loaded. Time spent loading the application is about 10 seconds in SL3, while it takes up to 3 minutes in SL4.

    There are no transparencies, no opacities set, no effects and no animations in the layout. User controls are instantiated on the fly and added or removed in the visual tree on purpose when the user switches from one screen to another. The resources are all cleaned properly when a user control is removed from the visual tree, to allow the GC to operate in the background.

    I may be doing something wrong, but I could not figure out where exactly to nail down the source of this problem. As far as I know there is no memory profiler in SL4 that can help me find where to look. Then again, I may not be up to date on the new debugging tools available.

  • onfocus="this.blur();" problem

    - by carpenter
    I am trying to apply an onfocus="this.blur();" so as to remove the dotted border lines around pics that are being clicked on. The effect should be applied to all thumb-nail links/a-tags within a div. Pseudo-code (where I am) - affect only a tags within divs of class=box; mousedown vs. onfocus vs. ***?? - javascript/jquery...???

        $(".box a").focus(
            function () {
                var num = $(this).attr('id').replace('link_no', '');
                alert("Link no. " + num + " was clicked on, but I would like an onfocus=\"this.blur();\" effect to work here instead of the alert...");
                // pseudo bits of code that I'm after:
                // $('#link_no' + num).blur();
                // $(this).blur();
                // $(this).onfocus = function () { this.blur(); };
            }
        );

    The below works for me in Firefox and also in IE, but I would like it to affect only a tags within my div with class="box":

        function blurAnchors2() {
            if (document.getElementsByTagName) {
                var a = document.getElementsByTagName("a");
                for (var i = 0; i < a.length; i++) {
                    a[i].onfocus = function () { this.blur(); };
                }
            }
        }

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact).

    The error boils down to a timeout when requesting a certain report in SSRS2005, when a certain parameter is used. The deployment scenario is:

    - Machine #1: running Reporting Services (SQL2005, W2K3, IIS6)
    - Machine #2: running the data warehouse database (SQL2005, W2K3), which is the data source for #1

    Both machines are running on the same VM cluster and LAN.

    The report requests a fairly simple SP - let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4s). I've profiled the data warehouse database on #2, and when param $b is used, the query from the Reporting Service to the database never reaches #2.

    The error message that I get upon timeout, when using param $b and invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user."

    The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas on how to nail this problem down, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow

    I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers.

    I'm not sure what the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a generic function to process most of the tables, relying on database metadata to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map (table name -> function).

    Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of functional programming design.

  • JUnit test failing - complaining of missing data that was just inserted

    - by Collin Peters
    I have an extremely odd problem in my JUnit tests that I just can't seem to nail down. I have a multi-module Java webapp project with a fairly standard structure (DAOs, service classes, etc...). Within this project I have a 'core' project which contains some abstracted setup code that inserts a test user along with the necessary items for a user (in this case an 'enterprise' - a user must belong to an enterprise, and this is enforced at the database level).

    Fairly simple so far... but here is where the strangeness begins: some tests fail to run and throw a database exception complaining that a user cannot be inserted because an enterprise does not exist. But it just created the enterprise in the preceding line of code! And there were no errors in the insertion of the enterprise. Stranger yet, if this test class is run by itself, everything works fine. It is only when the test is run as part of the project that it fails! And the exact same abstracted code was run by 10+ tests before the one that fails!

    I have been banging my head against a wall with this for days and haven't really made any progress. I'm not even sure what information to offer up to help diagnose this.

    - Using JUnit 4.4, Spring 2.5.6, iBatis 2.3.0, PostgreSQL 8.3
    - Switching to org.springframework.jdbc.datasource.DriverManagerDataSource from org.apache.commons.dbcp.BasicDataSource changed the problem: using DriverManagerDataSource the tests work for the first time, but now all of a sudden a lot of data isn't rolled back in the database! It leaves everything behind - all with no errors.
    - Tests fail when run via Eclipse & Maven.

    Please ask for any info which may help me solve my problem!

  • Digitally sign MS Office (Word, Excel, etc..) and PDF files on the server

    - by Sébastien Nussbaumer
    I need to digitally sign MS Office and PDF files that are stored on a server. I really mean a digital signature that is integrated in the document, according to each specific file format. This is the process I had in mind:

    1. Create a hash of the file's content
    2. Send the hash to a custom-written Java applet in the browser
    3. The user encrypts the hash with his/her private key (on a USB token via PKCS#11, for example), thus effectively signing the file
    4. The applet then sends the signature to the server
    5. On the server, I would then incorporate the signature in the file (MS Office and PDF files can do that without changing the file's content, probably by just setting some metadata field)

    What is cool is that you never have to download and upload the complete file to the server again. What is even cooler, the customer doesn't need Office or a PDF writer to sign the files.

    Parts 2, 3 and 4 are OK for me: my company bought all the Java technology I need for that for a previous project I worked on.

    Problem: I can't seem to find any documentation/examples to do parts 1 and 5 for Office files. Are my Google skills failing me this time? Do you have any pointers to documentation or examples for doing that for MS Office files? The underlying technology isn't that important to me: I can use Java, .Net, COM - any working technology is OK!

    Note: I'm 95% sure I can nail points 1 and 5 for PDF files using iText.

    Edit: If I can't do that with hashes and must download the complete file to the client, that's also possible. But then I still need the documentation to be able to sign Office files... in Java this time (from an applet).
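
    As an aside, steps 1-4 (hash, sign, verify) are generic; step 5, embedding the signature into the Office/PDF container, is the format-specific part the question is really about. A minimal sketch of the generic part, using Python's cryptography package purely for illustration - in the real flow the private key would live on the USB token behind PKCS#11 rather than in memory:

        import hashlib
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import rsa, padding

        document = b"...file bytes read on the server..."
        digest = hashlib.sha256(document).hexdigest()  # step 1: content hash

        # stand-in for the user's token-held key
        private_key = rsa.generate_private_key(public_exponent=65537,
                                               key_size=2048)

        # steps 2-3: the client signs (RSA PKCS#1 v1.5 over SHA-256 here)
        signature = private_key.sign(document, padding.PKCS1v15(),
                                     hashes.SHA256())

        # step 4: the server verifies before embedding; raises on mismatch
        private_key.public_key().verify(signature, document,
                                        padding.PKCS1v15(), hashes.SHA256())
        print("signature verified; digest", digest[:16], "...")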

  • Finding cause of memory leaks in large PHP stacks

    - by Mike B
    I have a CLI script that runs through several thousand iterations between runs, and it appears to have a memory leak. I'm using a tweaked version of Zend Framework with Smarty for view templating, and each iteration uses several MB worth of code. The first run immediately uses nearly 8MB of memory (which is fine), but every following run adds about 80kb. My main loop looks like this (very simplified):

        $users = UsersModel::getUsers();
        foreach ($users as $user) {
            $obj = new doSomethingAwesome();
            $obj->run($user);
            $obj = null;
            unset($obj);
        }

    The point is that everything in scope should be unset and the memory freed. My understanding is that PHP runs through its garbage collection process at its own discretion, but it does so at the end of functions/methods/scripts. So something must be leaking memory inside doSomethingAwesome(), but as I said, it is a huge stack of code.

    Ideally, I would love to find some sort of tool that displayed all my variables, no matter the scope, at some point during execution - some sort of symbol-table viewer for PHP. Does anything like that, or any other tool that could help nail down memory leaks in PHP, exist?

  • Factorial function - design and test.

    - by lukas
    I'm trying to nail down some interview questions, so I started with a simple one: design the factorial function. This function is a leaf (no dependencies - easily testable), so I made it static inside the helper class.

        public static class MathHelper
        {
            public static int Factorial(int n)
            {
                Debug.Assert(n >= 0);
                if (n < 0)
                {
                    throw new ArgumentException("n cannot be lower than 0");
                }

                Debug.Assert(n <= 12);
                if (n > 12)
                {
                    throw new OverflowException("Overflow occurs above 12 factorial");
                }

                // by definition
                if (n == 0)
                {
                    return 1;
                }

                int factorialOfN = 1;
                for (int i = 1; i <= n; ++i)
                {
                    //checked
                    //{
                    factorialOfN *= i;
                    //}
                }

                return factorialOfN;
            }
        }

    Testing:

        [TestMethod]
        [ExpectedException(typeof(OverflowException))]
        public void Overflow()
        {
            int temp = FactorialHelper.MathHelper.Factorial(40);
        }

        [TestMethod]
        public void ZeroTest()
        {
            int factorialOfZero = FactorialHelper.MathHelper.Factorial(0);
            Assert.AreEqual(1, factorialOfZero);
        }

        [TestMethod]
        public void FactorialOf5()
        {
            int factOf5 = FactorialHelper.MathHelper.Factorial(5);
            Assert.AreEqual(120, factOf5);
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void NegativeTest()
        {
            int factOfMinus5 = FactorialHelper.MathHelper.Factorial(-5);
        }

    I have a few questions:

    1. Is it correct? (I hope so ;) )
    2. Does it throw the right exceptions?
    3. Should I use a checked context, or is this trick (n > 12) OK?
    4. Is it better to use uint instead of checking for negative values?

    Future improvements: overload for long, decimal, BigInteger, or maybe a generic method? Thank you.

  • [Concept] How does unlink() find the file to delete?

    - by Prasad
    My app has a 'Photo' field to store a URL. It uses sfWidgetFormInputFileEditable for the widget schema. To delete the old image when a new image is uploaded, I use unlink before setting the value in the overridden setter, and it works!!!

        if (file_exists($this->_get('photo')))
            unlink($this->_get('photo'));

    Photos are stored in uploads/photos, and when saving 'Photo' only the file name xxx-yyy.zzz is saved (and not the full path). However, I wish to know how symfony/PHP knows the full path of the file to be deleted.

    Part 2: I am using sfThumbnailPlugin to generate thumbnails. So the actual code looks like this:

        public function setPhoto($value)
        {
            if (!empty($value))
            {
                Contact::generateThumbnail($value); // delete current photo & create thumbnail
                $this->_set('photo', $value);       // setting new value after deleting old one
            }
        }

        public function generateThumbnail($value)
        {
            $uploadDir = sfConfig::get('app_photo_upload'); // path to upload folder
            if (file_exists($this->_get('photo')))
            {
                unlink($this->_get('photo')); // delete full-size image

                // path to thumbnail
                $thumbpath = $uploadDir.'/thumbnails/'.$this->get('photo');

                // read a blog, tried setting dir manually, doesn't work :(
                //chdir('/thumbnails/');

                // tried closing the file too, doesn't work! :(
                //fclose($thumbpath) or die("can't close file");
                //unlink($this->_get('photo')); // doesn't work; no error :(

                unlink($thumbpath); // doesn't work, no error :(
            }
            $thumbnail = new sfThumbnail(150, 150);
            $thumbnail->loadFile($uploadDir.'/'.$value);
            $thumbnail->save($uploadDir.'/thumbnails/'.$value, 'image/png');
        }

    Why can't the thumbnail be deleted using unlink()? Is the sequence of operations incorrect? Is it because the old thumbnail is displayed in the sfWidgetFormInputFileEditable widget? I've spent hours trying to figure this out, but have been unable to nail down the real cause. Thanks in advance.
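
    On the conceptual question itself: unlink() with a relative path is resolved against the process's current working directory, not against the location of the script, so a bare file name only deletes a file if the cwd happens to contain it. The rule is the same POSIX one in any language; a quick Python illustration (file names made up):

        import os, tempfile

        os.chdir(tempfile.mkdtemp())        # pretend this is the server's cwd
        open("photo.jpg", "w").close()      # a file sitting in the cwd

        os.unlink("photo.jpg")              # resolved as <cwd>/photo.jpg
        print(os.path.exists("photo.jpg"))  # -> False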

  • Order words by number of letters, then place words neatly

    - by bmaster
    I have a list of words in JavaScript similar to this:

        var words = ["mine", "minute", "mist", "mixed", "money", "monkey", "month", "moon", "morning", "mother", "motion", "mountain", "mouth", "move", "much", "muscle", "music", "nail", "name", "narrow", "nation", "natural", "near", "necessary", "neck", "need", "needle", "nerve", "net", "new", "news", "night"];

    The words can be 1-25? letters long. I have a div id="words" with a set width of 700px (but I might change that). Using CSS/JavaScript/jQuery, how can I make it:

    1. Order the words by number of letters
    2. Place the words inside the div tag, left to right, but so that there are no gaps at the right edge of the words div, and there is even spacing between words on a line. Each word should have a border around it and a background.

    Like this:

        |reallylongwordssdf shorterwordfdf dfsdfsdfsdf sdfsdfsdf|
        |sdfsdfsdf sdffsdop sdfjpogs sdfsds dfsdsd dfsdsd dfsdsd|

    I really have no idea where to begin with this. Perhaps I could manage to write code to order the words by number of letters, but after that, I'd be stuck.

    Edit: I forgot to add, the words must be links.
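
    The packing logic itself is small, whatever ends up rendering it. A language-neutral sketch (Python here, purely illustrative - the real version would emit styled <a> elements rather than padded text): sort by length, greedily fill each line, then distribute the leftover width evenly between the words on the line:

        def layout(words, line_width):
            words = sorted(words, key=len, reverse=True)  # order by letters
            lines, current = [], []
            for w in words:
                used = sum(map(len, current + [w])) + len(current)
                if current and used > line_width:
                    lines.append(current)
                    current = []
                current.append(w)
            if current:
                lines.append(current)
            out = []
            for line in lines[:-1]:          # justify all but the last line
                gaps = max(len(line) - 1, 1)
                spare = line_width - sum(map(len, line))
                pad, extra = divmod(spare, gaps)
                row = ""
                for i, w in enumerate(line[:-1]):
                    row += w + " " * (pad + (i < extra))
                out.append(row + line[-1])
            out.append(" ".join(lines[-1]))  # last line stays left-aligned
            return out

        for row in layout(["mine", "minute", "mist", "mixed", "money",
                           "monkey", "month", "moon"], 28):
            print("|" + row.ljust(28) + "|")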

  • The perverse hangman problem

    - by Shalmanese
    Perverse Hangman is a game played much like regular Hangman, with one important difference: the winning word is determined dynamically by the house, depending on what letters have been guessed.

    For example, say you have the board _ A I L and 12 remaining guesses. Because there are 13 different words ending in AIL (bail, fail, hail, jail, kail, mail, nail, pail, rail, sail, tail, vail, wail), the house is guaranteed to win: no matter what 12 letters you guess, the house will claim the chosen word was the one you didn't guess. However, if the board was _ I L M, you have cornered the house, as FILM is the only word that ends in ILM.

    The challenge is: given a dictionary, a word length & the number of allowed guesses, come up with an algorithm that either:

    a) proves that the player always wins, by outputting a decision tree for the player that corners the house no matter what, or
    b) proves the house always wins, by outputting a decision tree for the house that allows the house to escape no matter what.

    As a toy example, consider the dictionary:

        bat bar car

    If you are allowed 3 wrong guesses, the player wins with the following tree:

        Guess B
          NO  -> Guess C, Guess A, Guess R, WIN
          YES -> Guess T
                   NO  -> Guess A, Guess R, WIN
                   YES -> Guess A, WIN
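
    For concreteness, here is a sketch of just the win/lose decision (without emitting the tree), under the problem's convention that only wrong guesses are consumed. The house is modeled by letting it answer every guess with the reveal pattern that is worst for the player. Python, illustrative only, and exponential in the worst case:

        def player_wins(words, guessed, wrong_left):
            """Can the player corner the perverse house?

            words      -- candidates, pre-filtered to the board's length
            guessed    -- set of letters guessed so far
            wrong_left -- misses the player can still afford
            """
            if len(words) == 1 and set(words[0]) <= guessed:
                return True          # pinned to one fully revealed word
            if wrong_left == 0:
                return False
            for letter in set("".join(words)) - guessed:
                # the house picks the worst partition, so this guess only
                # works if the player wins in *every* partition
                parts = {}
                for w in words:
                    key = tuple(i for i, ch in enumerate(w) if ch == letter)
                    parts.setdefault(key, []).append(w)
                if all(player_wins(ws, guessed | {letter},
                                   wrong_left - (key == ()))  # miss costs one
                       for key, ws in parts.items()):
                    return True
            return False

        print(player_wins(["bat", "bar", "car"], set(), 3))  # toy example: True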

  • PHP form validation function

    - by Barbs
    I am currently writing some PHP form validation (I have already validated client-side) and have some repetitive code that I think would work well in a nice little PHP function. However, I am having trouble getting it to work. I'm sure it's just a matter of syntax, but I just can't nail it down. Any help appreciated.

        // Validate phone number field to ensure 8 digits, no spaces.
        if (0 === preg_match("/^[0-9]{8}$/", $_POST['Phone'])) {
            $errors['Phone'] = "Incorrect format for 'Phone'";
        }

        if (!$errors) {
            // Do some stuff here....
        }

    I found that I was writing the validation code a lot, and I could save some time and some lines of code by creating a function.

        // Validate function
        function validate($regex, $index, $message) {
            if (0 === preg_match($regex, $_POST[$index], $message)) {
                $errors[$index] = $message;
            }
        }

    And call it like so....

        validate("/^[0-9]{8}$/", "Phone", "Incorrect format for Phone");

    Can anyone see why this wouldn't work? Note that I have disabled the client-side validation while I work on this, to try to trigger the error, so the value I am sending for 'Phone' is invalid.

  • Internet Working, Browsing Not.

    - by jeffreypriebe
    I have a very odd problem that I can't resolve. I am connected to the internet, but my browsing doesn't work. I don't mean a web browser - I mean browsing. Firefox, Chrome and curl all fail to successfully connect to an HTTP address. However, existing connections, e.g. to mail in Outlook (Exchange Server and also IMAP server), continue to work. Also, the internet is on: I can confirm this both from my machine (other ports/connections) as well as from any other computer connected to the same network. Additionally, it appears to be HTTP, not simply a port issue, as HTTP over port 8443 (TortoiseSVN if you must know - running over HTTP, not over SVN) also fails.

    I am using Windows Vista SP2 (build 6002). The problem seems to "creep up": after running the computer for a few hours it will fail. (I have found no way to systematically reproduce the problem.) Additionally, it seems more prone to happen on days when the internet connection is flaky already (not sure why the internet is flaky, it just is - lots of failed browsing requests, and I have to retry/reload often).

    What I have tried (when the problem arises) - none have yielded any resolution:

    - Resetting the network connection (disconnect, reconnect)
    - Disabling/re-enabling the network adapter
    - Double-checking the IP settings
    - Double-checking the HOSTS file. Note: DNS continues to work (both new and cached responses to DNS queries). (Thanks for the suggestion, Daniel and antenore.)
    - Checking the routing tables (IPv4 only, as IPv6 is beyond my understanding)
    - Resetting all involved hardware (routers and modems)
    - Closing and reopening browsers
    - Looking for malware interference: ran HijackThis; looked for suspicious processes using SysInternals procexp; looked for explorer hijacks, LSA provider interference, and winsock provider interference using SysInternals Autoruns; ran a complete anti-virus scan
    - Reviewing the output of netstat -onab to see if there were stuck ports open or unusual processes running somewhere

    The only thing that works is a full reboot. That works 100% of the time to restore browsing. What else can I try to nail down the problem?
