Search Results

Search found 6514 results on 261 pages for 'mini filter'.

Page 205/261

  • What micro web-framework has the lowest overhead but includes templating

    - by Simon Martin
    I want to rewrite a simple, small (10 page) website; besides a contact form it could be written in pure html. It is currently built with classic asp and Dreamweaver templates. The reason I'm not simply writing 10 html pages is that I want to keep the layout all in 1 place, so I would need either includes or a masterpage. I don't want to use Dreamweaver templates, or batch processing (like org-mode), because I want to be able to edit using notepad (or Visual Studio), since occasionally I might need to edit a file on the server (Go Daddy's IIS admin interface will let me edit text). I don't want to use ASP.NET MVC or WebForms (which I use in my day job) because I don't need all the overhead they bring with them when essentially I'm serving up 9 static files, 1 contact form and 1 list of clubs (that I aim to use jQuery to filter). The shared hosting package I have on Go Daddy seems to take a long time to spin up when serving aspx files.

    Currently the clubs page is driven from an MS SQL database that I try to keep up to date by manually checking the dojo locator on the main HQ pages and editing the entries myself; this is again way over the top. I aim to get a text file with the club details (probably in JSON or xml format) and use that as the source for the clubs page. There will need to be a bit of programming for this, as the HQ site is unable to provide an extract / feed, so something will have to scrape the site periodically to update my clubs persistence file. I'd like that to be automated - but I'm happy to have that triggered on a visit to the clubs page so I don't need to worry about scheduling a job. I would probably have a separate process that updates the persistence file and has nothing to do with the rest of the site.

    Ideally I'd like to use Mercurial (or git) to publish. I know Bitbucket (and GitHub) both serve static page sites, so they wouldn't work in this scenario (dynamic pages and a contact form), but that's the model I'd like to use if there is such a thing.

    My requirements are:

    - A simple templating system: 1 place to define headers, footers, menu etc., that can be edited using just notepad.
    - A very minimal / lightweight framework. I don't need a monster for 10 pages.
    - Must run either on IIS7 (shared Go Daddy Windows hosting) or another free host.

    Read the article

  • Red Gate does Byte Night 2012

    - by red(at)work
    On the 5th of October 2012, a team of nine plucky Red Gaters braved the howling wind and the driving rain to sleep outside. No tents or mattresses were allowed – all we took for protection were sleeping bags, groundsheets, plastic sacks and Colin’s enormous fishing umbrella (a godsend in umbrella-y disguise). Why would we do such a thing? For Byte Night, an annual tech sector sleepout in support of Action for Children, who tackle the causes as well as the consequences of youth homelessness. Byte Night encourages technology professionals to do for one night a year what thousands of young people have to do every night – sleep rough.  We signed up for Byte Night in the warm, heady midst of the British summer, thinking it couldn’t possibly be all that bad. Even on the night itself – before the rain began to fall, sat in the comfort and warmth of a company canteen, drinking wine and eating chill and preparing to win the pub quiz – we were excited and optimistic about the night that lay ahead of us. All of that changed as soon as we stepped out into one of the worst rainstorms of the year. Brian, the team’s birthday boy, describes it best: Picture the scene: it’s 3 am on a Friday. I’m lying outside, fully clothed in a sleeping bag, wearing a raincoat, trussed up inside a large plastic pocket, on a ground sheet beneath a giant umbrella, wedged so tightly between two of my colleagues that I can’t move my arms. I’m wide awake, staring up at the grey sky beyond the edge of the umbrella; a limp, flickering white glow hints at a moon somewhere behind the drifting clouds. I haven’t slept since we first moved outside at 11 pm. Outside. Did I mention we were outside? I’m hung over. I need the loo. But there is no way on earth that I’m getting out of this sleeping bag. It’s cold. It’s raining. Not just raining, but chucking it down. It’s been doing this non-stop since 10pm. The rain sounds like a hyperactive drummer on the fishing umbrella, and the noise is loud and relentless. Puddles of water are forming all over the groundsheet, and, despite being ensconced inside the plastic pouch, I am wet. The fishing umbrella is protecting me from the worst of the driving rain, but not all of me is under it, and five hours of rain is no match for it. Everything is wet. My left side has become horribly damp. My trainers, which I placed next to my sleeping bag, are now completely soaked through. Mmm. That’ll be fun in the morning. My head is next to Colin’s head on one side, and a multi-pack of McCoy’s cheddar and onion crisps on the other. Don’t ask about the tub of hummus. That’s somewhere down by my ankles, abandoned to the night. Jess, who is lying next to me, rolls over onto her side. A mini waterfall cascades from her rain-pouch onto my face. Bah. I continue to stare into the heavens, willing the dawn to hurry up. Something lands on my face. It’s a mosquito. Great. Midnight, when this still seemed like fun – when we opened some champagne and my colleagues presented me with a caterpillar birthday cake, when everyone was drunk and jolly and full of stoic resolve – feels like a long time ago. Did I mention that today is my birthday? The remains of the caterpillar cake endure the same fate as the hummus, left out in the rain like a metaphor for sadness. It’s getting colder. I can see my breath. Silence has descended on the group, apart from the rustle of plastic. And the rain, obviously. Someone snores, and I envy whoever it is the sweet escape of sleep. 
I try to wriggle a bit further down inside my sleeping bag, but it doesn’t want to be wriggled into. Only 3 hours till dawn. 180 minutes. I begin to count them off, one at a time.  All nine of us got to go home in the morning, but thousands of children across the UK don’t have that luxury. If you’d like to sponsor the Red Gate Byte Night team, our JustGiving page can be found here.   Chris, before the outside bit actually happened. More photos from Byte Night Cambridge 2012 can be found here.

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 25 (sys.dm_db_missing_index_details)

    - by Tamarick Hill
    The sys.dm_db_missing_index_details Dynamic Management View is used to return information about missing indexes on your SQL Server instances. These indexes are ones that the optimizer has identified as indexes it would like to use but that did not exist. You may also see these same indexes indicated in other tools such as query execution plans or the Database Tuning Advisor. Let’s execute this DMV so we can review the information it provides us. I do not have any missing index information for my AdventureWorks2012 database, but for the purposes of illustrating the result set of this DMV, I will present the results from my msdb database. SELECT * FROM sys.dm_db_missing_index_details The first column presented is the index_handle which uniquely identifies a particular missing index. The next two columns represent the database_id and the object_id for the particular table in question. Next is the ‘equality_columns’ column which gives you a list of columns (comma separated) that would be beneficial to the optimizer for equality operations. By equality operation I mean any query that uses a filter or join condition such as WHERE A = B. The next column, ‘inequality_columns’, gives you a comma separated list of columns that would be beneficial to the optimizer for inequality operations. An inequality operation is anything other than A = B. For example, “WHERE A != B”, “WHERE A > B”, “WHERE A < B”, and “WHERE A <> B” would all qualify as inequality. Next is the ‘included_columns’ column which lists all columns that would be beneficial to the optimizer for purposes of providing a covering index and preventing key/bookmark lookups. Lastly is the ‘statement’ column which lists the name of the table where the index is missing. This DMV can help you identify potential indexes that could be added to improve the performance of your system. However, I will advise you not to just take the output of this DMV and create an index for everything you see. Everything listed here should be analyzed and then tested on a Development or Test system before implementing it in a Production environment. For more information on this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms345434.aspx Follow me on Twitter @PrimeTimeDBA
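    To see how those columns fit together, here is a rough, hypothetical C# sketch (not from the article) that reads the DMV and composes candidate CREATE INDEX statements for review. The connection string, the index naming and the exact composition rules are assumptions made for illustration, and anything it prints should still be analyzed and tested as advised above.

        using System;
        using System.Data.SqlClient;
        using System.Linq;

        class MissingIndexSketch
        {
            static void Main()
            {
                // Placeholder connection string - point it at your own instance.
                const string connectionString = "Server=.;Database=msdb;Integrated Security=true";

                const string query = @"SELECT index_handle, equality_columns, inequality_columns,
                                              included_columns, statement
                                       FROM sys.dm_db_missing_index_details";

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(query, connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            var equality = reader["equality_columns"] as string;     // equality key columns
                            var inequality = reader["inequality_columns"] as string; // inequality key columns
                            var included = reader["included_columns"] as string;     // covering columns
                            var table = (string)reader["statement"];                 // target table

                            // Key columns: equality columns first, then inequality columns.
                            var keyColumns = string.Join(", ",
                                new[] { equality, inequality }.Where(c => !string.IsNullOrEmpty(c)));

                            var suggestion = string.Format(
                                "CREATE INDEX IX_Candidate_{0} ON {1} ({2}){3}",
                                reader["index_handle"], table, keyColumns,
                                string.IsNullOrEmpty(included) ? "" : " INCLUDE (" + included + ")");

                            Console.WriteLine(suggestion); // review and test before creating anything
                        }
                    }
                }
            }
        }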

    Read the article

  • Understanding each other in web development

    - by Pete Hotchkin
    During my career I have been lucky enough to work in several different roles within web development with many extremely talented people, from incredible designers who were passionate about the placement of every pixel right through to server administrators and DBAs who were always measuring the improvements they were making to their queries in the smallest possible unit. The problem I always faced was that more often than not I was stuck in the middle trying to mediate between these different functions and enable each side to understand the other’s point of view. In my experience, the main areas of contention between these functional groups arise at 2 key points: during the build phase and then when there is a problem post-build. During both of these times it is often easier for someone to pass the buck onto someone else than spend the time to understand the other person’s perspective. Below is a quick look at two upcoming tools that will not only speed up the build phase for each function, but also help when it comes to the issues faced once a site has been pushed live. In my experience a web project goes through several phases of development. The first of these is design, generally handled as Photoshop files which are then passed onto a front-end developer. This is the first point at which heated discussions can arise. One problem I’ve seen several times is that the designer doesn’t fully understand the platform constraints that need to be considered, and as a result has designed something that does not translate very well or is simply not possible. Working at Red Gate, I am lucky enough to be able to meet some amazing people and this happened just the other day when I was introduced to Neil Kinnish and Pete Nelson, the creators of what I believe could be a great asset in this designer-developer relationship, Mixture. Mixture allows the front end developer to quickly prototype a web page with built-in frameworks such as Bootstrap. It’s not an IDE, however; it just sits in the background and monitors the project files, so every time you save a file from your favorite IDE, it will compile things like LESS, compact your JavaScript and automatically refresh your test browser so you can see the changes instantly. I think one of the best parts of this however is a single button that pushes the changed files up to the web so the designer can instantly see how far the developer has got and the problems they are facing at that time without the need to spend time setting up a remote server. I can see this being a real asset to remote teams where there needs to be a compromise between the designer and the front-end developer, or just to allow the designer to see how the build is progressing and suggest small alterations. Once the design has been built into the front end the designer’s job is generally done and there are no other points of contention between the designer and the other functions involved in building these web projects. As the project moves into the stage of integrating it into the back end and deploying it to the production server, other functions start to be pulled in and other issues arise, such as the back-end developer understanding the frameworks that they are using, the routes that are in place in an MVC application, or the number of database calls that the ORM layer is actually making. 
    There are many tools out there that can actually help with these problems, such as Mini Profiler, which gives you a quick snapshot of what is going on directly in the browser. For a slightly more in-depth look at what is happening and to gain a deeper understanding of an application you may be working on though, you may want to consider Glimpse. Created by Nik and Anthony, it is an application that sits at the bottom of your browser (installed via NuGet) which can show you information about how your application is pieced together and how the information on screen is being delivered as it happens. With a wealth of community-built plugins, such as ones for NHibernate and LINQ to SQL (full list of plugins on NuGet), it can be customized directly to your own setup to truly delve into the code to see what is happening, and can help to reduce the number of confusing moments about whether it is your code that is going wrong or whether there is something more sinister happening directly on the server. All the tools that I have mentioned in this post help to do one thing above all, and that is to ease the barrier of understanding between the different functions that are involved in building and maintaining a web application. In my experience it is very easy to say “Well, that’s not my problem”, simply because the two functions involved don’t truly understand the other’s point of view. Software should not only be seen as a way to streamline our own working process or as a debugging tool but also as a communication aid to improve the entire lifecycle of a web project. Glimpse is actually the project that I am the designer on, and I would love to get your feedback if you do decide to try it out. If you would like to share your own experiences of working on web projects, please fill in your details at https://www.surveymk.com/s/joinGlimpse or add a comment below and I will get in touch with you.

    Read the article

  • Play Thousands of Online Radio Stations with Shoutcast in VLC

    - by DigitalGeekery
    Are you looking for more variety from your radio stations? Today we’ll take a look at how to easily stream thousands of radio stations to your desktop with VLC media player. Getting Started Select Media from the menu, go to Services Discovery, and click Shoutcast radio listings.   Next, select View from the menu and click Playlist.   Or, click on the Show Playlist button. In the Playlist window, click on Shoutcast radio listings in the left pane. You should then see a very long list of Titles displayed on the right. Scroll through the list to find a music genre or topic that interests you. Double-click to expand the list of station options. Select one of the channel listings from the list and double click to begin playing.   Looking for a specific station? Type a search term into the search filter box to see if it is available.   That’s it. Sit back and enjoy listening to your favorite Internet radio programming. If you are a music or talk radio fan, you aren’t likely to run out of listening options in VLC. Want to find some more uses for VLC? Check out our articles on how to copy a DVD, convert video files to MP3, and how to set a video as your desktop wallpaper. Download VLC

    Read the article

  • Vote of Disconfidence to Entity Framework

    - by Ricardo Peres
    A friend of mine has found the following problem with Entity Framework 4: Two simple classes and one association between them (one to many): One condition to filter out soft-deleted entities (WHERE Deleted = 0): 100 records in the database; A simple query: 1: var l = ctx.Person.Include("Address").Where(x => (x.Address.Name == "317 Oak Blvd." && x.Address.Number == 926) || (x.Address.Name == "891 White Milton Drive" && x.Address.Number == 497)); Will produce the following SQL: 1: SELECT 2: [Extent1].[Id] AS [Id], 3: [Extent1].[FullName] AS [FullName], 4: [Extent1].[AddressId] AS [AddressId], 5: [Extent202].[Id] AS [Id1], 6: [Extent202].[Name] AS [Name], 7: [Extent202].[Number] AS [Number] 8: FROM [dbo].[Person] AS [Extent1] 9: LEFT OUTER JOIN [dbo].[Address] AS [Extent2] ON ([Extent2].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent2].[Id]) 10: LEFT OUTER JOIN [dbo].[Address] AS [Extent3] ON ([Extent3].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent3].[Id]) 11: LEFT OUTER JOIN [dbo].[Address] AS [Extent4] ON ([Extent4].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent4].[Id]) 12: LEFT OUTER JOIN [dbo].[Address] AS [Extent5] ON ([Extent5].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent5].[Id]) 13: LEFT OUTER JOIN [dbo].[Address] AS [Extent6] ON ([Extent6].[Deleted] = 0) AND ([Extent1].[AddressId] = [Extent6].[Id]) 14: ... 15: WHERE ((N'317 Oak Blvd.' = [Extent2].[Name]) AND (926 = [Extent3].[Number])) 16: ... And will result in 680 MB of memory being taken! Now, Entity Framework has been historically known for producing less than optimal SQL, but 680 MB for 100 entities?! According to Microsoft, the problem will be addressed in the following version, there is a Connect issue open. There is even a whitepaper, Performance Considerations for Entity Framework 5, which talks about some of the changes and optimizations coming on version 5, but by reading it, I got even more concerned: “Once the cache contains a set number of entries (800), we start a timer that periodically (once-per-minute) sweeps the cache.” Say what?! The next version of Entity Framework will spawn timer threads?! When Code First came along, I thought it was a step in the right direction. Sure, it didn’t include some things that NHibernate did for quite some time – for example, different strategies for Id generation that do not rely on IDENTITY columns, which makes INSERT batching impossible, or support for enumerated types – but I thought these would come with the time. Now, enumerated types have, but so did… timer threads! I’m afraid Entity Framework is becoming a monster.

    Read the article

  • PeopleSoft at Alliance 2012 Executive Forum

    - by John Webb
    Guest Posting From Rebekah Jackson This week I joined over 4,800 Higher Ed and Public Sector customers and partners in Nashville at our annual Alliance conference.   I got lost easily in the hallways of the sprawling Gaylord Opryland Hotel. I carried the resort map with me, and I would still stand for several minutes at a very confusing junction, studying the map and the signage on the walls. Hallways led off in many directions, some with elevators going down here and stairs going up there. When I took a wrong turn I would instantly feel stuck, lose my bearings, and occasionally even have to send out a call for help.    It strikes me that the theme for the Executive Forum this year outlines a less tangible but equally disorienting set of challenges that our higher education customers’ CIOs are facing: Making Decisions at the Intersection of Business Value, Strategic Investment, and Enterprise Technology. The forces acting upon higher education institutions today are not neat, straightforward decision points, where one can glance to the right, glance to the left, and then quickly choose the best course of action. The operational, technological, and strategic factors that must be considered are complex, interrelated, messy…and the stakes are high. Michael Horn, co-author of “Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns”, set the tone for the day. He introduced the model of disruptive innovation, which grew out of the research he and his colleagues have done on ‘Why Successful Organizations Fail’. Highly simplified, the pattern he shared is that things start out decentralized, take a leap to extreme centralization, and then experience progressive decentralization. Using computers as an example, we started with a slide rule, then developed the computer which centralized in the form of mainframes, and gradually decentralized to mini-computers, desktop computers, laptops, and now mobile devices. According to Michael, you have more computing power in your cell phone than existed on the planet 60 years ago, or was on the first rocket that went to the moon. Applying this pattern to Higher Education means the introduction of expensive and prestigious private universities, followed by the advent of state schools, then by community colleges, and now online education. Michael shared statistics that indicate 50% of students will be taking at least one online course by 2014…and by some measures, that’s already the case today. The implication is that technology moves from being the backbone of the campus, the IT department’s domain, and pushes into the academic core of the institution. Innovative programs are underway at many schools like Bellevue and BYU Idaho, joined by startups and disruptive new players like the Khan Academy.   This presents both threat and opportunity for higher education institutions, and means that IT decisions cannot afford to be disconnected from the institution’s strategic plan. Subsequent sessions explored this theme.    Theo Bosnak, from Attain, discussed the model they use for assessing the complete picture of an institution’s financial health. Compounding the issue are the dramatic trends occurring in technology and the vendors that provide it. Ovum analyst Nicole Engelbert shared her insights next and suggested that incremental changes are no longer an option; instead, fundamental changes are affecting the landscape of enterprise technology in higher ed.    
Nicole closed with her recommendation that institutions focus on the trends in higher education with an eye towards the strategic requirements and business value first. Technology then is the enabler.   The last presentation of the day was from Tom Fisher, Sr. Vice President of Cloud Services at Oracle. Tom runs the delivery arm of the Cloud Services group, and shared his thoughts candidly about his experiences with cloud deployments as well as key issues around managing costs and security in cloud deployments. Okay, we’ve covered a lot of ground at this point, from financials planning, business strategy, and cloud computing, with the possibility that half of the institutions in the US might not be around in their current form 10 years from now. Did I forget to mention that was raised in the morning session? Seems a little hard to believe, and yet Michael Horn made a compelling point. Apparently 100 years ago, 8 of the top 10 education institutions in the world were German. Today, the leading German school is ranked somewhere in the 40’s or 50’s. What will the landscape be 100 years from now? Will there be an institution from China, India, or Brazil in the top 10? As Nicole suggested, maybe US parents will be sending their children to schools overseas much sooner, faced with the ever-increasing costs of a US based education. Will corporations begin to view skill-based certification from an online provider as a viable alternative to a 4 year degree from an accredited institution, fundamentally altering the education industry as we know it?

    Read the article

  • Subterranean IL: Fault exception handlers

    - by Simon Cooper
    A fault handler is one of the two handler types that aren't available in C#. It behaves exactly like a finally block, except it is only run if control flow exits the block due to an exception being thrown. As an example, take the following method: .method public static void FaultExample(bool throwException) { .try { ldstr "Entering try block" call void [mscorlib]System.Console::WriteLine(string) ldarg.0 brfalse.s NormalReturn ThrowException: ldstr "Throwing exception" call void [mscorlib]System.Console::WriteLine(string) newobj instance void [mscorlib]System.Exception::.ctor() throw NormalReturn: ldstr "Leaving try block" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } fault { ldstr "Fault handler" call void [mscorlib]System.Console::WriteLine(string) endfault } Return: ldstr "Returning from method" call void [mscorlib]System.Console::WriteLine(string) ret } If we pass true to this method the following gets printed: Entering try block Throwing exception Fault handler and the exception gets passed up the call stack. So, the exception gets thrown, the fault handler gets run, and the exception propagates up the stack afterwards in the normal way. If we pass false, we get the following: Entering try block Leaving try block Returning from method Because we are leaving the .try using a leave.s instruction, and not throwing an exception, the fault handler does not get called. Fault handlers and C# So why were these not included in C#? It seems a pretty simple feature; one extra keyword that compiles in exactly the same way, and with the same semantics, as a finally handler. If you think about it, the same behaviour can be replicated using a normal catch block: try { throw new Exception(); } catch { // fault code goes here throw; } The catch block only gets run if an exception is thrown, and the exception gets rethrown and propagates up the call stack afterwards; exactly like a fault block. The only complication that occurs is when you want to add a fault handler to a try block with existing catch handlers. Then, you either have to wrap the try in another try: try { try { // ... } catch (DirectoryNotFoundException) { // ... // leave.s as normal... } catch (IOException) { // ... throw; } } catch { // fault logic throw; } or separate out the fault logic into another method and call that from the appropriate handlers: try { // ... } catch (DirectoryNotFoundException) { // ... } catch (IOException ioe) { // ... HandleFaultLogic(); throw; } catch (Exception e) { HandleFaultLogic(); throw; } To be fair, the number of times that I would have found a fault handler useful is minimal. Still, it's quite annoying knowing such functionality exists, but you're not able to access it from C#. Fortunately, there are some easy workarounds one can use instead. Next time: filter handlers.
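    As a small follow-up to the catch-and-rethrow workaround above, the same pattern can be wrapped in a reusable helper so the fault logic only appears once. This is just an illustrative sketch of that workaround (the helper name is made up), not anything provided by the framework.

        using System;

        static class FaultHelper
        {
            // Runs 'body'; runs 'faultHandler' only if 'body' throws, then lets the
            // original exception propagate - mimicking a fault block with catch + rethrow.
            public static void TryFault(Action body, Action faultHandler)
            {
                try
                {
                    body();
                }
                catch
                {
                    faultHandler();
                    throw; // rethrow, preserving the original exception
                }
            }
        }

        class FaultDemo
        {
            static void Main()
            {
                try
                {
                    FaultHelper.TryFault(
                        body: () =>
                        {
                            Console.WriteLine("Entering try block");
                            throw new Exception();
                        },
                        faultHandler: () => Console.WriteLine("Fault handler"));
                }
                catch
                {
                    Console.WriteLine("Exception propagated to the caller, as with a real fault block");
                }
            }
        }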

    Read the article

  • MythTV lost recordings - "No recordings available" and no recording rules either

    - by nimasmi
    I have a c.6 year old mythtv database. I recently upgraded from Ubuntu 10.04 to 12.04. This brought a MythTV upgrade from 0.24 to 0.25, which went well. Today, all my recordings have disappeared. They still exist in the /var/lib/mythtv/recordings folder, and the 'M' key in the Watch Recordings page says that there are 201 recordings available somewhere, but they will not display. See screenshot: (implicit thanks to whomever upvoted this, giving me sufficient reputation to upload images) Changing the filter does not remedy the fact that there is nothing shown in the lists. My Upcoming Recordings screen says that there are no rules set, but my list of previously recorded shows is still there, and has an entry from as recently as 3am today. mythbackend --printsched gives the following: user@box:~$ mythbackend --printsched 2012-09-22 12:59:20.537008 C mythbackend version: fixes/0.25 [v0.25.2-15-g46cab93] www.mythtv.org 2012-09-22 12:59:20.537043 C Qt version: compile: 4.8.1, runtime: 4.8.1 2012-09-22 12:59:20.537048 N Enabled verbose msgs: general 2012-09-22 12:59:20.537076 N Setting Log Level to LOG_INFO 2012-09-22 12:59:20.537142 I Added logging to the console 2012-09-22 12:59:20.537152 I Added database logging to table logging 2012-09-22 12:59:20.537279 N Setting up SIGHUP handler 2012-09-22 12:59:20.537373 N Using runtime prefix = /usr 2012-09-22 12:59:20.537394 N Using configuration directory = /home/user/.mythtv 2012-09-22 12:59:20.537999 I Assumed character encoding: en_GB.UTF-8 2012-09-22 12:59:20.538599 N Empty LocalHostName. 2012-09-22 12:59:20.538610 I Using localhost value of box 2012-09-22 12:59:20.538792 I Testing network connectivity to '192.168.1.2' 2012-09-22 12:59:20.539420 I Starting process manager 2012-09-22 12:59:20.541412 I Starting IO manager (read) 2012-09-22 12:59:20.541715 I Starting IO manager (write) 2012-09-22 12:59:20.541836 I Starting process signal handler 2012-09-22 12:59:20.684497 N Setting QT default locale to EN_GB 2012-09-22 12:59:20.684694 I Current locale EN_GB 2012-09-22 12:59:20.684813 N Reading locale defaults from /usr/share/mythtv//locales/en_gb.xml 2012-09-22 12:59:20.697623 I New static DB connectionDataDirectCon 2012-09-22 12:59:20.704769 I MythCoreContext: Connecting to backend server: 192.168.1.2:6543 (try 1 of 1) Calculating Schedule from database. Inputs, Card IDs, and Conflict info may be invalid if you have multiple tuners. 2012-09-22 12:59:27.710538 E MythSocket(21dfcd0:14): readStringList: Error, timed out after 7000 ms. 2012-09-22 12:59:27.710592 C Protocol version check failure. The response to MYTH_PROTO_VERSION was empty. This happens when the backend is too busy to respond, or has deadlocked in due to bugs or hardware failure. Things I have tried so far: restart the backend restart the frontend run mythtv-setup and check database passwords and IP addresses change the frontend setting for backend IP from localhost to 192.168.1.2 (the backend/frontend's IP) run optimize_mythdb.pl Other suggestions appreciated.

    Read the article

  • What’s new in RadChart for 2010 Q1 (Silverlight / WPF)

    Greetings, RadChart fans! It is with great pleasure that I present this short highlight of our accomplishments for the Q1 release :). We've worked very hard to make the best Silverlight and WPF charting product even better. Here is some of what we did during the past few months.

    1) Zooming & Scrolling and the new sampling engine: Without a doubt one of the most important things we did. This new feature allows you to bind your chart to a very large set of data with blazing performance. Don't take my word for it; give it a try!

    2) New Smart Label Positioning and spider-like labels: This new feature really helps with very busy graphs. You can play with the different settings we offer in this example.

    3) Sorting and Filtering: Much like our RadGridView control, the chart now allows you to sort and filter your data out of the box with a single line of code!

    4) Legend improvements: We've also been paying attention to those of you who wanted a much improved legend. It is now possible to customize the look and feel of legend items and legend position with a single click.

    5) Custom palette brushes: You have told us that you want to easily customize all palette colors using a single clean API from both XAML and code behind. The new custom palette brushes API does exactly that.

    There are numerous other improvements as well, such as much-improved themes, performance optimizations and other features. If you want to dig in further, check the release notes and the changes and backwards compatibility topics.

    Feel free to share the pains and gains of working with RadChart. Our team is always open to receiving constructive feedback and beer :-)

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Keoki Zee
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix this. Method 1 First, I tried following Google's instructions, http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481, for excluding traffic by Cookie Content: Create a new page on your domain, containing the following code: <body onLoad="javascript:pageTracker._setVar('test_value');"> Method 2 Next, when that didn't work, I googled around and found this Google thread, http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en, where the most popular answer says to use a slightly different code: SHS Analytics wrote: <body onLoad="javascript:_gaq.push(['_setVar','test_value']);"> Thank you! This has now set a __utmv cookie containing "test_value", whereas the original: pageTracker._setVar('test_value') (which Google is still recommending) did not manage to do that for me (in Mac Safari 5 and Firefox 3.6.8). So I tried this code, but it didn't work for me. Method 3 Finally, I searched StackOverflow and came across this thread, http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain, which suggests that the following code might work: <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setVar', 'exclude_me']); _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']); _gaq.push(['_trackPageview']); // etc... </script> This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. So I tried this too, but still had no luck with trying to exclude myself from Google Analytics. Re-iterate question So, I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?

    Read the article

  • Make Firefox Show Google Results for Default Address Bar Searches

    - by The Geek
    Have you ever typed something incorrectly into the Firefox address bar, and then had it take you to a page you weren’t expecting? The reason is because Firefox uses Google’s “I’m Feeling Lucky” search, but you can change it. Scratching your head? Let’s take a quick run through what we’re talking about… Normally, if you typed in something like just “howtogeek” in the address bar, and then hit enter… you’ll be taken directly to the How-To Geek site. But how? Very simple! It’s the same place you would have been taken to if you typed “howtogeek” into Google, and then clicked the “I’m Feeling Lucky” button, which takes you to the first result. This is what Firefox does behind the scenes when you put something into the address bar that isn’t a URL. But what if you’d rather head to the search results page instead? Luckily, all you have to do is tweak an about:config parameter in Firefox. Just head into about:config in the address bar, and then filter for keyword.url like so: Double-click on the entry in the list, and then delete the &gfns=1 from the value. That’s the part of the URL that triggers Google to redirect to the first result. And now, the next time you type something into the address bar, either on purpose or because you typo’d it, you’ll be taken to the results page instead: About:Config tweaking is lots of fun.

    Read the article

  • The Oracle Graduate Experience...A Graduates Perspective by Angelie Tierney

    - by david.talamelli
    [Note: Angelie has just recently joined Oracle in Australia in our 2011 Graduate Program. Last week I shared my thoughts on our 2011 Graduate Program, this week Angelie took some time to share her thoughts of our Graduate Program. The notes below are Angelie's overview from her experience with us starting with our first contact last year - David Talamelli] How does the 1 year program work? It consists of 3 weeks of training, followed by 2 rotations in 2 different Lines of Business (LoB's). The first rotation goes for 4 months, while your 2nd rotation goes for 7, when you are placed into your final LoB for the program. The interview process: After sorting through the many advertised graduate jobs, submitting so many resumes and studying at the same time, it can all be pretty stressful. Then there is the interview process. David called me on a Sunday afternoon and I spoke to him for about 30 minutes in a mini sort of phone interview. I was worried that working at Oracle would require extensive technical experience, but David stressed that even the less technical, and more business-minded person could, and did, work at Oracle. I was then asked if I would like to attend a group interview in the next weeks, to which I said of course! The first interview was a day long, consisting of a brief introduction, a group interview where we worked on a business plan with a group of other potential graduates and were marked by 3 Oracle employees, on our ability to work together and presentation. After lunch, we then had a short individual interview each, and that was the end of the first round. I received a call a few weeks later, and was asked to come into a second interview, at which I also jumped at the opportunity. This was an interview based purely on your individual abilities and would help to determine which Line of Business you would go to, should you land a graduate position. So how did I cope throughout the interview stages? I believe the best tool to prepare for the interview, was to research Oracle and its culture and to see if I thought I could fit into that. I personally found out about Oracle, its partners as well as competitors and along the way, even found out about their part (or Larry Ellison's specifically) in the Iron Man 2 movie. Armed with some Oracle information and lots of enthusiasm, I approached the Oracle Graduate Interview process. Why did I apply for an Oracle graduate position? I studied a Bachelor of Business/Bachelor of Science in IT, and wanted to be able to use both my degrees, while have the ability to work internationally in the future. Coming straight from university, I wasn't sure exactly what I wanted to do in terms of my career. With the program, you are rotated across various lines of business, to not only expose you to different parts of the business, but to also help you to figure out what you want to achieve out of your career. As a result, I thought Oracle was the perfect fit. So what can an Oracle ANZ Graduate expect? First things first, you can expect to line up for your visitor pass. Really. Next you enter a room full of unknown faces, graduates just like you, and then you realise you're in this with 18 other people, going through the same thing as you. 3 weeks later you leave with many memories, colleagues you can call your friends, and a video of your presentation. Vanessa, the Graduate Manager, will also take lots of photos and keep you (well) fed. 
Well that's not all you leave with, you are also equipped with a wealth of knowledge and contacts within Oracle, both that will help you throughout your career there. What training is involved? We started our Oracle experience with 3 weeks of training, consisting of employee orientation, extensive product training, presentations on the various lines of business (LoB's), followed by sales and presentation training. While there was potential for an information overload, maybe even death by Powerpoint, we were able to have access to the presentations for future reference, which was very helpful. This period also allowed us to start networking, not only with the graduates, but with the managers who presented to us, as well as through the monthly chinwag, HR celebrations and even with the sharing of tea facilities. We also had a team bonding day when we recorded a "commercial" within groups, and learned how to play an Irish drum. Overall, the training period helped us to learn about Oracle, as well as ourselves, and to prepare us for our transition into our rotations. Where to now? I'm now into my 2nd week of my first graduate rotation. It has been exciting to finally get out into the work environment and utilise that knowledge we gained from training. My manager has been a great mentor, extremely knowledgeable, and it has been good being able to participate in meetings, conference calls and make a contribution towards the business. And while we aren't necessarily working directly with the other graduates, they are still reachable via email, Pidgin and lunch and they are important as a resource and support, after all, they are going through a similar experience to you. While it is only the beginning, there is a lot more to learn and a lot more to experience along the way, especially because, as we learned during training, at Oracle, the only constant is change.

    Read the article

  • OData Mix10 – Part Dos

    - by Jon Dalberg
    The other day I had a snazzy post on fetching all the video (WMV) files from Mix ‘10. A simple, console application that grabbed the urls from the OData feed and downloaded the videos. I wanted to change that app to fire the OData query asynchronously so here’s what resulted: 1: static void Main(string[] args) 2: { 3: var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc")); 4:   5: var temp = mix.Files.Where(f => f.TypeName == "WMV"); 6: var query = temp as DataServiceQuery<Mix.File>; 7:   8: query.BeginExecute(OnFileQueryComplete, query); 9:   10: // waiting... 11: Console.ReadLine(); 12: } 13:   14: static void OnFileQueryComplete(IAsyncResult result) 15: { 16: var query = result.AsyncState as DataServiceQuery<Mix.File>; 17: var response = query.EndExecute(result); 18:   19: var web = new WebClient(); 20:   21: var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10"); 22:   23: Directory.CreateDirectory(myVideos); 24:   25: foreach (Mix.File f in response) 26: { 27: var fileName = new Uri(f.Url).Segments.Last(); 28: Console.WriteLine(f.Url); 29: web.DownloadFile(f.Url, Path.Combine(myVideos, fileName)); 30: } 31: } There are two important things here that are not explained well in the MSDN docs: See lines 5 and 6? That’s where I query for the WMV files and it returns an IQueryable<T>. You *have* to cast that to a DataServiceQuery<T> and then call BeginExecute. The documented example does not filter so it didn’t show that step. Line 16 shows the correct way to get the previously executed DataServiceQuery<T> from the async result. If you looked at the MSDN example docs it shows (incorrectly) just casting the result, like this: // wrong var query = result as DataServiceQuery<Mix.File>; Other than those items it is relatively straight forward and we’re all async-ified. Enjoy!

    Read the article

  • Remote synchronization

    - by Tomas Mysik
    Hi all, today we would like to show you another improvement we have prepared for NetBeans 7.2. Today, let's talk a little bit about remote synchronization. If you already use our simple (S)FTP client, this enhancement could be useful for you. Simply right click on Source Files and select Synchronize. Please notice that the remote synchronization works better only on the whole project (it means that the Source Files must be selected). The Synchronize action is also available on individual files (more files can be selected at once) but the suggested operation (download, upload etc.) does not work so precisely. Also please notice that the suggested operations are not 100% reliable since the timestamps provided by FTP servers are not exact. Once the remote files (their names and paths only, of course) are fetched, the main dialog appears: As you can see, NetBeans tries to suggest you operations (upload, download etc.) which should be done for each individual file of your project. If you are interested only in some particular changes, you can simply filter the list: Since we have a file conflict, we need to resolve it first. Fortunately this is very easy because we just select the desired file and click the Diff button . The remote version of our file is downloaded and compared with the local version. The resut is displayed in the dialog where you can easily apply and/or refuse the remote changes or even simply type manually to the local version of the selected file: Once we are done with our changes, the operation for the selected file changes to Upload and the file is marked with * (since we made some changes). Please notice that if you now click the Cancel button, in fact no changes are done in our local file. As you can see, if we have one or more files selected, we can change their operation to: no operation (file won't be synchronized) download upload delete (both local and remote file) reset (the operation is resetted to the original one suggested by NetBeans and also all changes done via Diff action are discarded) Now we are ready to synchronize our project. NetBeans will show us the synchronization summary (this dialog can be omitted, see the Show Summary checkbox on the previous image). The synchronization itself starts and we can see its progress and of course its result. As always, all the operations can be reviewed in the Output window. That's all for today, as always, please test it and report all the issues or enhancements you find in NetBeans BugZilla (component php, subcomponent FTP support).

    Read the article

  • LINQ: Enhancing Distinct With The SelectorEqualityComparer

    - by Paulo Morgado
    In my last post, I introduced the PredicateEqualityComparer and a Distinct extension method that receives a predicate to internally create a PredicateEqualityComparer to filter elements. Using the predicate greatly improves readability, conciseness and expressiveness of the queries, but it can be even better. Most of the time, we don’t want to provide a comparison method but just to extract the comparison key for the elements. So, I developed a SelectorEqualityComparer that takes a method that extracts the key value for each element. Something like this: public class SelectorEqualityComparer<TSource, Tkey> : EqualityComparer<TSource> where Tkey : IEquatable<Tkey> { private Func<TSource, Tkey> selector; public SelectorEqualityComparer(Func<TSource, Tkey> selector) : base() { this.selector = selector; } public override bool Equals(TSource x, TSource y) { Tkey xKey = this.GetKey(x); Tkey yKey = this.GetKey(y); if (xKey != null) { return ((yKey != null) && xKey.Equals(yKey)); } return (yKey == null); } public override int GetHashCode(TSource obj) { Tkey key = this.GetKey(obj); return (key == null) ? 0 : key.GetHashCode(); } public override bool Equals(object obj) { SelectorEqualityComparer<TSource, Tkey> comparer = obj as SelectorEqualityComparer<TSource, Tkey>; return (comparer != null); } public override int GetHashCode() { return base.GetType().Name.GetHashCode(); } private Tkey GetKey(TSource obj) { return (obj == null) ? (Tkey)(object)null : this.selector(obj); } } Now I can write code like this: .Distinct(new SelectorEqualityComparer<Source, Key>(x => x.Field)) And, for improved readability, conciseness, expressiveness and support for anonymous types, here is the corresponding Distinct extension method: public static IEnumerable<TSource> Distinct<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> selector) where TKey : IEquatable<TKey> { return source.Distinct(new SelectorEqualityComparer<TSource, TKey>(selector)); } And the query is now written like this: .Distinct(x => x.Field) For most usages, it’s simpler than using a predicate.
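    As a quick usage illustration, here is a small self-contained sketch (the Person type and the sample data are invented for the example, and it assumes the Distinct extension and SelectorEqualityComparer above are in scope) showing duplicates being removed by a single key:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Person
        {
            public string Email { get; set; }
            public string Name { get; set; }
        }

        class Program
        {
            static void Main()
            {
                var people = new List<Person>
                {
                    new Person { Email = "ana@example.com", Name = "Ana" },
                    new Person { Email = "ana@example.com", Name = "Ana Maria" }, // duplicate key
                    new Person { Email = "bruno@example.com", Name = "Bruno" }
                };

                // One key selector instead of a hand-written IEqualityComparer<Person>.
                foreach (var person in people.Distinct(x => x.Email))
                {
                    Console.WriteLine("{0} ({1})", person.Email, person.Name);
                }
            }
        }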

    Read the article

  • Selectively Exposing Functionality in .Net

    - by David V. Corbin
    Any developer should be aware of the principles of encapsulation, cross-tier isolation, and cross-functional separation of concerns. However, it seems that few take the time to consider the adage of "minimal yet complete"1 when developing software. Consider the exposure of "business objects" to the user interface. Some common situations occur: Accessing a given element requires a compound set of calls that do not "make sense" to the User Interface. More information than absolutely required is exposed to the user interface. It would be much cleaner if a custom interface was provided that exposed exactly (and only) the information that is required by the consumer. Achieving this using conventional techniques would require the creation (and maintenance!) of custom classes to filter and transpose the information into the ideal format. The ROI on this approach can be very difficult to ascertain, and as a result it is often ignored completely. There is another approach, which is largely made practical by virtue of the Action and Func delegates. From a caller's point of view, the following two samples can be used interchangeably:     interface ISomeInterface     {         void SampleMethod1(string param);         string SampleMethod2(string param);     }       class ISomeInterface     {         public Action<string> SampleMethod1 {get; }         public Func<string,string> SampleMethod2 {get; }     }   The capabilities this simple change enables are significant (and remember it does not change the syntax at the call site): The delegates can be initialized to directly call the proper method of any target class. The delegates can be dynamically updated based on the current state. The "interface" can NOT be cast to the concrete class (which often exposes more functionality). By limiting the interface to the exact functionality required, the reduced surface area will typically result in lower development, testing and maintenance costs. We are currently in the process of posting a project on CodePlex which illustrates this technique (and many others) that have proven helpful in creating robust yet flexible solutions that are highly efficient2 and maintainable. This post will be updated as soon as the project is published. 1) Credit: Scott Meyers, Effective C++, Addison-Wesley 1992 2) For those who read my previous post on performance it should be noted that the use of delegates is on the same order of magnitude (actually a tiny amount faster) as conventional interfaces.
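    As a rough sketch of how the delegate-based shape above might be wired up (the ConcreteService type and its members are invented for illustration; this is one possible arrangement, not the pattern from the upcoming CodePlex project), the consumer keeps the same call-site syntax while the exposing class decides exactly where each delegate points:

        using System;

        // A concrete class that exposes more than the consumer should see (invented for illustration).
        class ConcreteService
        {
            public void DoWork(string param) { Console.WriteLine("Working on " + param); }
            public string Describe(string param) { return "Description of " + param; }
            public void DangerousInternalOperation() { /* not for consumers */ }
        }

        // The "minimal yet complete" surface: only the two members the consumer needs.
        class SomeFacade
        {
            public Action<string> SampleMethod1 { get; private set; }
            public Func<string, string> SampleMethod2 { get; private set; }

            public SomeFacade(ConcreteService service)
            {
                // Each delegate is pointed directly at the proper method of the target class,
                // and could be swapped at runtime based on the current state.
                SampleMethod1 = service.DoWork;
                SampleMethod2 = service.Describe;
            }
        }

        class Demo
        {
            static void Main()
            {
                var facade = new SomeFacade(new ConcreteService());
                facade.SampleMethod1("orders");                    // same call-site syntax as an interface
                Console.WriteLine(facade.SampleMethod2("orders")); // cannot be cast back to ConcreteService
            }
        }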

    Read the article

  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem. My schema is deeply relational, and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this for example: def summary(self, startTime=None, endTime=None): # ... logic to assign a proper start and end time # if none was provided, probably using datetime.now() objects = self.related_model_set.manager_method.filter(...) return sum(object.key_method(startTime, endTime) for object in objects) How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be: given some mocked behavior of key_method on its arguments, does summary correctly filter/aggregate to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test. The toughest nuts to crack seem to be the functions like this, that sit on a few layers of models and lower-level functions and are very dependent on time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing.

    Read the article

  • Build 2012, some thoughts..

    - by Dennis Vroegop
    I think you probably read my rant about the logistics at Build 2012, as posted here, so I am not going into that anymore. Instead, let’s look at the content. (BTW If you did read that post and want some more info then read Nia Angelina’s post about Build. I have nothing to add to that.) As usual, there were good speakers and some speakers who could benefit from some speaker training. I find it hard to understand why Microsoft allows certain people on stage, people who speak English with such strong accents it’s hard for people, especially from abroad, to understand. Some basic training might be useful for some of them. However, it is nice to see that most speakers are project managers, program managers or even devs on the teams that build the stuff they talk about: there was a lot of knowledge on stage! And that means when you ask questions you get very relevant information. I realize I am not the average audience member here; I am a regular speaker myself, so I tend to look for other things when I am in a room than most audience members, and my opinion might differ from others. All in all the knowledge of the speakers was above average but the presentation skills were most of the time below what I would describe as adequate. But let us look at the contents. Since the official name of the conference is Build Windows 2012 it is not surprising most of the talks were focused on building Windows 8 apps. Next to that, there was a lot of focus on Azure and of course Windows Phone 8 that launched the day before Build started. Most sessions dealt with C# and JavaScript although I did see a tendency to use C++ more. Touch. Well, that was the focus of a lot of sessions, that goes without saying. Microsoft is really betting on Touch these days and being a Touch oriented developer I can only applaud this. The term NUI is getting a bit outdated but the principles behind it certainly aren’t. The sessions did cover quite a lot on how to make your applications easy to use and easy to understand. However, not all is touch nowadays; still the majority of people use keyboard and mouse to interact with their machines (or, as I do, use keyboard, mouse AND touch at the same time). Microsoft understands this and has spent some serious thought on this as well. It was all about making your apps run everywhere on all sorts of devices and in all sorts of scenarios. I have seen a couple of sessions focusing on the portable class library and on sharing code between Windows 8 and Windows Phone 8. You get the feeling Microsoft is enabling us devs to write software that will be ubiquitous. They want your stuff to be all over the place and they do anything they can to help. To achieve that goal they provide us with brilliant SDK’s, great tooling, a very, very good backend in the form of Windows Azure (I was particularly impressed by the Mobility part of Azure) and some fantastic hardware. And speaking of hardware: the partners such as Acer, Lenovo and Dell are making hardware that puts Apple to shame nowadays. To illustrate: in Bellevue (very close to Redmond where Microsoft HQ is) they have the Microsoft Store located very close to the Apple Store, so it’s easy to compare devices. And I have to say: the Microsoft offerings are much, much more appealing than what the Cupertino guys have to offer. That was very visible by the number of people visiting the stores: even on the day that Apple launched the iPad Mini there were more people in the Microsoft store than in the Apple store. 
So, the future looks like it’s going to be fun. Great hardware (did I mention the Nokia Lumia 920? No? It’s brilliant), great software (Windows 8 is in a league of its own), the best dev tools (Visual Studio 2012 is still the champion here) and a fantastic backend (Azure.. need I say more?). It’s up to us devs to fill up the stores with applications that matches this. To summarize: it is great to be a Windows developer. PS. Did I mention Surface RT? Man….. People were drooling all over it wherever I went. It is fantastic :-) Technorati Tags: Build,Windows 8,Windows Phone,Lumia,Surface,Microsoft

    Read the article

  • Windows Azure Diagnostics: Next to Useless?

    - by Your DisplayName here!
    To quote my good friend Christian: “Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” <rant> The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route it to a configurable listener. This gives you a lot of flexibility – you can create a single trace file, or multiple ones. There is even nice tooling around it. SvcTraceViewer from the SDK lets you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files… blablabla. Just what you would expect from a decent tracing infrastructure. Now comes Windows Azure. I was already very grateful that, starting with SDK 1.2, we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed. The Azure SDK provides a DiagnosticsMonitorTraceListener – which is the right way to go. The only problem is that the way this works, all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good. But guess what happens to your nice trace files: the trace source names get “lost”. They appear at the end of your message text. So much for filtering and sorting and aggregating (regex #fail or #win??). Every trace line becomes an entry in an Azure Storage table – the svclog format is gone. So much for the existing tooling. To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside local storage and use the DiagMon to copy those. Christian has a blog post about that. OK, done that. Now it turns out that this mechanism does not work anymore in 1.3 with full IIS (see here). Quoting: “Some IIS 7.0 logs not collected due to permissions issues... The root cause to both of these issues is the permissions on the log files.” And the workaround: “To read the files yourself, log on to the instance with a remote desktop connection.” Now then, have fun with your multi-instance deployments…. </rant>
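    For reference, this is roughly what the classic TraceSource wiring looks like when pointed at the Azure listener discussed above. It is a minimal sketch, not the author's code; the listener class name and namespace are my recollection of the SDK 1.x assembly, so treat them as assumptions:

        using System.Diagnostics;
        using Microsoft.WindowsAzure.Diagnostics;

        public static class Tracing
        {
            // One named source per subsystem, classic TraceSource style.
            public static readonly TraceSource Billing = Create("Billing");

            private static TraceSource Create(string name)
            {
                var source = new TraceSource(name, SourceLevels.All);
                // Route the source to the diagnostics listener. Once the DiagMon copies
                // the ETW trace to table storage, the source name only survives as text
                // inside the message, which is exactly the problem described above.
                source.Listeners.Add(new DiagnosticMonitorTraceListener());
                return source;
            }
        }

        // usage: Tracing.Billing.TraceEvent(TraceEventType.Warning, 0, "invoice not found");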

    Read the article

  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens? At the moment, I have a GameManager class which owns a component manager and entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists. A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. They'd have to either be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or be enabled/disabled in a group per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges. What's a good way to handle this? A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window. Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack. If I've coded myself into a corner, feel free to throw tomatoes, as long as they're labeled with a helpful suggestion. Hooray for learning curves.
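    As a rough illustration of that "layer" idea (a sketch only; Entity, RenderSystem and the component names here are hypothetical, not the poster's actual types), each system keeps its usual required-component filter and the draw pass simply skips entities whose layer isn't currently active:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public sealed class Renderable { /* sprite, mesh, etc. */ }

        public sealed class Entity
        {
            public int Id;
            public int Layer;                                   // which screen/layer the entity participates in
            public readonly HashSet<Type> Components = new HashSet<Type>();
        }

        public sealed class RenderSystem
        {
            private readonly List<Entity> _entities = new List<Entity>();

            // Hooked to the existing UpdateEntity event: the usual "required component" filter.
            public void OnEntityChanged(Entity e)
            {
                bool wanted = e.Components.Contains(typeof(Renderable));
                if (wanted && !_entities.Contains(e)) _entities.Add(e);
                else if (!wanted) _entities.Remove(e);
            }

            // Called once per active layer, in the order dictated by the layer stack.
            public void Draw(int activeLayer)
            {
                foreach (var e in _entities.Where(x => x.Layer == activeLayer))
                {
                    // render e here
                }
            }
        }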

    Read the article

  • Configure Calendar Server 7 to Use the davUniqueId Attribute

    - by dabrain
    Starting with Calendar Server 7 Update 3 (Patch 08) we introduce a new attribute, davUniqueId, in the davEntity objectclass, to use as the unique identifier. The reason behind this is quite simple: the LDAP operational attribute nsUniqueId had been chosen as the default value for the unique identifier, and it was discovered that this choice has a potentially serious downside. The problem with using nsUniqueId is that if the LDAP entry for a user, group, or resource is deleted and recreated in LDAP, the new entry receives a different nsUniqueId value from the Directory Server, causing a disconnect from the existing account in the calendar database. As a result, recreated users cannot access their existing calendars.

    How do you configure Calendar Server to use the davUniqueId attribute? First, populate davUniqueId for the LDAP users. You can create an LDIF output file only, or (with the -x option) run ldapmodify directly from the populate-davuniqueid shell script:

    # ./populate-davuniqueid -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -b "o=red" -O -o /tmp/out.ldif

    The ldapmodify might fail as shown below; in that case the LDAP entry already has the 'daventity' objectclass, so run the populate-davuniqueid script without the -O option.

    # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
    modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"
    ldapmodify: Type or value exists (20)

    In this case the user 'mparis' already has the objectclass 'daventity'; ldapmodify does not take care of this DN and just moves on to the next one (if you start ldapmodify with the -c option, otherwise it stops completely).

    dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
    changetype: modify
    add: objectclass
    objectclass: daventity
    -
    add: davuniqueid
    davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d

    If you instead run populate-davuniqueid without the -O option, or change the output file to

    dn: uid=mparis,ou=People,o=vmdomain.tld,o=red
    changetype: modify
    add: davuniqueid
    davuniqueid: 01a2c501-af0411e1-809de373-18ff5c8d

    the ldapmodify works fine. The only issue I see here is that you need to verify which users might need the 'daventity' objectclass as well. On the other hand, you can start without the objectclass and only add it for the users where you get an 'Objectclass violation' report; that indicates the objectclass is missing.

    # ldapmodify -x -h localhost -p 389 -D "cn=Directory Manager" -w <passwd> -c -f /tmp/out.ldif
    modifying entry "uid=mparis,ou=People,o=vmdomain.tld,o=red"

    Now it is time to change the configuration to use the davuniqueid attribute:

    # ./davadmin config modify -o davcore.uriinfo.permanentuniqueid -v davuniqueid

    It is also necessary to modify the search filter to use davuniqueid instead of nsuniqueid:

    # ./davadmin config modify -o davcore.uriinfo.subjectattributes -v "cn davstore icsstatus mail mailalternateaddress davUniqueId owner preferredlanguageuid objectclass ismemberof uniquemember memberurl mgrprfc822mailmember"

    Afterwards IWC Calendar works fine and my test user is able to access all his old events.

    Read the article

  • Setting up Ubuntu Server as a Router with DHCPD and 3 Ethernet devices

    - by cengbrecht
    My configuration: Ubuntu 12.04, DHCP3-server, eth0, eth1, eth2. Edit: removed br0 & br1. eth0 is the external connection; eth1 and eth2 are the internal network. eth1 and eth2 are supposed to be separate networks for students and teachers respectively. What I would like to have is the internet from the external device bridged to devices 1 and 2, with the DHCP server controlling the two internal devices. It's already working with DHCP; the part I am stuck on is bridging for internet. I have set up a script that I found here: Router. With the original script he linked here: Ubuntu Router Guide.

    echo -e "\n\nLoading simple rc.firewall-iptables version $FWVER..\n"
    IPTABLES=/sbin/iptables
    #IPTABLES=/usr/local/sbin/iptables
    DEPMOD=/sbin/depmod
    MODPROBE=/sbin/modprobe
    EXTIF="eth0"
    INTIF="eth1"
    INTIF2="eth2"
    echo " External Interface: $EXTIF"
    echo " Internal Interface: $INTIF"
    echo " Internal Interface: $INTIF2"
    EXTIP=`ifconfig $EXTIF | grep 'inet addr:' | sed 's#.*inet addr\:\([0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\).*#\1#g'`
    echo " External IP: $EXTIP"
    #======================================================================
    #== No editing beyond this line is required for initial MASQ testing ==

    The rest of the script below this is as is. I can get an IP from the eth1 and eth2 devices, and my computer can see them and they can see it; however, internet is not being passed through. If you need more information please just let me know.

    EDIT: So I had a 255.255.254.0 network; I believe that was causing the issue. Not sure if it will matter on the second card, I will test later. After changing the subnet to 255.255.255.0 the pings pass through; however, I cannot get DNS requests to pass. My new config for the firewall rules:

    # /etc/iptables.up.rules
    # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
    *mangle
    :PREROUTING ACCEPT [39:4283]
    :INPUT ACCEPT [39:4283]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [12:4884]
    :POSTROUTING ACCEPT [13:5145]
    COMMIT
    # Completed on Wed Nov 28 19:43:28 2012
    # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
    *filter
    :FORWARD ACCEPT [0:0]
    :INPUT ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    -A FORWARD -j LOG
    -A FORWARD -m state -i eth1 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -m state -i eth2 -o eth0 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -m state -i eth0 -o eth1 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    -A FORWARD -m state -i eth0 -o eth2 --state NEW,ESTABLISHED,RELATED -j ACCEPT
    COMMIT
    # Completed on Wed Nov 28 19:43:28 2012
    # Generated by iptables-save v1.4.12 on Wed Nov 28 19:43:28 2012
    *nat
    :INPUT ACCEPT [0:0]
    :PREROUTING ACCEPT [0:0]
    :OUTPUT ACCEPT [0:0]
    :POSTROUTING ACCEPT [0:0]
    -A POSTROUTING -o eth0 -j MASQUERADE
    -A POSTROUTING -o eth0 -j SNAT --to-source 192.168.1.25
    COMMIT
    # Completed on Wed Nov 28 19:43:28 2012

    Not sure what else you may need, but I am using Webmin to control the server (needed so the operators on site know how to use it). If you could explain it as standard CLI commands, or edits to this file directly, then we should be OK. :) And thanks again Erik, I do believe your edits did help.

    Read the article

  • Redgate ANTS Performance Profiler

    - by Jon Canning
    Seemingly forever I've been working on a business idea: it's a REST API delivering content to mobiles, and I've never really had much idea about its performance. Yes, I have a suite of unit tests and integration tests, but these only tell me that it works, not how well it works. I was also about to embark on a major refactor, swapping the database from MongoDB to RavenDB, and was curious to see if that impacted performance at all, so I needed a profiler that supported IIS Express that I could run my integration tests against, and Google gave me: http://www.red-gate.com/supportcenter/content/ANTS_Performance_Profiler/help/7.4/app_iise   Excellent. Following the above guide, an instance of IIS Express is launched, as is Internet Explorer. The latter eventually becomes annoying (I would like to decide whether I want a browser opened), but thankfully the guide is wrong in that it can be closed and profiling will continue. So I ran my tests, stopped profiling, and was presented with a call tree listing the endpoints called and allowing me to drill down to the source code beneath. Although useful and fascinating, this wasn't what I was expecting to see; I was after the method timings from the entire test suite. Switching Show to Methods Grid presented me with a list of my methods, with the slowest lit up in red at the top. Marvellous. I did find that if you switch to Methods Grid before the call tree has loaded, you do not get the red warnings. StructureMap was very busy, and next on the list was a request filter that I didn't expect to be so overworked. Highlighting it, the source code was presented to me in the bottom window with timings and a nice red indicator to show me where to look. Oh horror, that reflection hack I put in months ago; I'd forgotten all about it. It was calling Validate<T>(), which in turn was resolving a validator from StructureMap. Note to self: use //TODO: when leaving smelly code lying around. Before refactoring, remember to Save Profile Results from the File menu. Annoyingly you are not prompted to save your results when exiting, and using Save Project will only leave you thankful that you have version control and can go back in time to run your tests again. Having implemented StructureMap's ForGenericType, I ran my tests again and: win. Thank you, ANTS. (What does ANTS stand for, BTW?) There's definitely room in my toolbox for a profiler; what started out as idle curiosity actually solved a potential problem. When presented with a new codebase I can see enormous benefit from getting an overview of the pipeline from the call tree before drilling into the code, and as a sanity check before release it gives a little more reassurance that you've done your best, and shows you exactly where to look if you haven't. Next I'm going to profile a load test.
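    For anyone wondering what that kind of fix looks like, here is a minimal sketch of an open-generic StructureMap registration in the spirit of the ForGenericType change mentioned above. The IValidator<T> and DefaultValidator<T> names are hypothetical, and the For/Use syntax shown is the common StructureMap form rather than necessarily the exact call the post used:

        using StructureMap;

        public interface IValidator<T>
        {
            bool Validate(T instance);
        }

        public class DefaultValidator<T> : IValidator<T>
        {
            public bool Validate(T instance) { return instance != null; }
        }

        public static class ContainerConfig
        {
            public static IContainer Build()
            {
                return new Container(x =>
                {
                    // One open-generic registration: resolving IValidator<Foo> closes the
                    // generic type for you, instead of a reflection hack on every request.
                    x.For(typeof(IValidator<>)).Use(typeof(DefaultValidator<>));
                });
            }
        }

        // usage: var v = ContainerConfig.Build().GetInstance<IValidator<string>>();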

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam? I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the spam folder in Yahoo and Gmail mailboxes, but not in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):

    Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
    Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
    Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
    From: SENDER <[email protected]>
    To: "[email protected]" <[email protected]>
    Date: Fri, 15 Jun 2012 11:45:20 +0200
    Subject: This is the subject of the newsletter
    Thread-Topic: This is the subject of the newsletter
    Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
    List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
    X-MS-Has-Attach:
    X-Auto-Response-Suppress: All
    X-MS-TNEF-Correlator:
    x-originating-ip: [12.345.678.90]
    authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
    errors-to: "SENDER" <[email protected]>
    received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
    x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
    x-sender-info: <[email protected]>
    content-length: 13762
    Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
    MIME-Version: 1.0

    I cannot see any direct spam filter message in this, but I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam? If so, does anyone know how I can get rid of it in Drupal?

    Read the article
