Search Results

Search found 676 results on 28 pages for 'the deals dealer'.


  • Microsoft Forcing Dev/Partners Hands on Win 8 Through Certification

    - by D'Arcy Lussier
    I remember 2.5 years ago when Microsoft dropped a bomb on the Microsoft Partner community: all Gold competencies would require .NET 4 based premiere certifications (MCPD). Problem was, this gave a window of about 6 months for partners to update their employees’ certifications. At the place I was working, I put together an aggressive plan and we were able to attain the certs needed. Microsoft is always open that the certification requirements will change as the industry changes. .NET 1.0 certifications are useless here in 2012, and rightfully so they’ve been retired for a long time now. But now we’re seeing a new tactic by Microsoft – shifting gears away from certifications that speak to what industry needs and more to the Windows 8 agenda. Consider that currently the premiere development certification is the Microsoft Certified Professional Developer, which comes in three flavours – Web, Windows, and Azure. All require WCF and Data Access exams, as well as one that deals with the associated base technologies (ASP.NET, WinForms/WPF, Azure), and one that ties all three together in a solution-based exam. For Microsoft-based organizations, these skills aren’t just valid but necessary in building Microsoft applications. But the MCPD is being replaced with our old friend Microsoft Certified Solutions Developer (MCSD). So far, Microsoft has only released two types of MCSD – Web and Windows Store Apps. Windows Store Apps?! In a push to move developers to create WinRT-based applications, desktop development is now considered a second-class citizen in the eyes of Redmond. Also interesting are the language options for the exams: HTML5 and C#. Sorry VB folks, its time to embrace curly braces whether they be JavaScript or C#. Consider too the skills being assessed for the Windows Store Apps: Get your MCSD: Windows Store Apps Using HTML5 Get your MCSD: Windows Store Apps Using C# *Image Source: http://www.microsoft.com/learning/en/us/certification/mcsd-windows-store-apps.aspx Nov 21/2012 If you look at the skills being tested in each exam, you’ll find that skills like WCF and Data Access are downplayed compared to things like integrating Charms, facilitating Search, programming for the microphone and camera – all very Windows 8 focussed items. Where this becomes maddening is that Microsoft is still pushing Windows 7 with enterprise clients. According to a ZDNet article, Microsoft wants to see Windows 7 on 70% of enterprise desktops by mid 2013. Assuming they somehow meet that (its a pretty lofty goal), there’s years of traditional desktop-based development that will still be required at some level. For those thinking they’ll just write and stick with the MCPD certification, note that most exams that go towards that certification will be retired at the end of July 2013! (Read the small print). And while details haven’t been finalized, its a safe bet that MCPD certifications eventually won’t count towards Gold-level competencies in the Microsoft Partner program. What this means for Microsoft Partners and Developers is that certification for desktop development is going to be limited to Windows Store Apps unless Microsoft re-introduces a traditional desktop (WPF) based MCSD cert. Web Application Development – It’s Not All Bad There’s big changes on the web side of certification, but I actually see these changes as being for the good! 
Check out the new exam requirements for MCSD – Web Applications: Get your MCSD: Web Applications certification *Image Source: http://www.microsoft.com/learning/en/us/certification/cert-mcsd-web-applications.aspx Nov 21, 2012 We now *start* with HTML5, JavaScript, and CSS3! Now I’m sure that these will be slanted towards web development in IE, and I can hear designers everywhere bemoaning the CSS/IE combination. Still, I applaud Microsoft for adopting HTML5 as the go-to web technology and requiring certified developers to prove they have skills in the basics of web dev. The fact that the second exam clearly states “MVC Web Applications” shows that Web Forms is truly legacy and deprecated. That’s not to say there aren’t those out there that are still supporting or (for whatever reason) doing new dev with Web Forms, but this move by Microsoft is telling the community they better get on the MVC bandwagon if they want to stay current. Fantastic! And of course Azure needs to be here as well, and this is where the Microsoft agenda fits in. It’s no secret that there’s been a huge push in getting developers on to Azure. I don’t see this as being a bad thing either, as cloud computing (whether Azure, private, or 3rd party) is a necessary skill for developers to have here in 2012. The cynic in me realizes that the HTML5/JavaScript/CSS push wouldn’t be as prominent though if not for the Windows 8 Store App play, where HTML5 is a first class citizen (and an available language for the MCSD Windows Store App cert). In this case, the desktop developers loss is the web developers gain. Get Ready for Changes In addition to the changes in certifications, the Microsoft Partner competencies are going through changes as well. Web and Software Development are being merged into a single competency, meaning that licenses you would have received from having both as Gold are reduced. Other competencies are either being removed or changed, as are the exam requirements. In the same way that we’re seeing faster release cycles from Microsoft, so too will we see the Microsoft Partner Program and MS Certifications evolve faster than ever before. Many of us got caught in the last wave of changes, but this time we can see the wave coming – and it looks pretty big!

    Read the article

  • Using Resources the Right Way

    - by BuckWoody
    It’s an interesting time in computing technology. At one point there was a dearth of information available for solving a given problem, or educating ourselves on broader topics so that we can solve problems in the future. With dozens, perhaps hundreds or thousands of web sites and content available (for free, in many cases) from vendors, peers, even colleges and universities, it seems like there is actually too much information. Who has the time to absorb all this information and training? Even if you had the inclination, where to start? In fact, it seems so overwhelming that I often hear people saying that they can’t find the training they need, or that vendor X or Y “doesn’t help their users”. On questioning these folks, however, I often find that they – and sometimes I - haven’t put in the effort to learn what resources we have. That’s where blogs, like this one, can help. If you follow a blog, either by checking it often or perhaps subscribing to the Really Simple Syndication (RSS) feed, you’ll be able to spread out the search or create a mental filter for the information you need. But it’s not enough just to read a blog or a web page. The creators need real feedback – what doesn’t work, and what does. Yes, you’re allowed to tell a vendor or writer “This helped me because…” so that you reinforce the positives. To be sure, bring up what doesn’t work as well – that’s fine. But be specific, and be constructive. You’d be surprised at how much it matters. I know for a fact at Microsoft we listen – there is a real live person that reads your comments. I’m sure this is true of other vendors, and I also know that most blog authors – yours truly most especially – want to know what you think. In this blog entry I’d like to call your attention to three resources you have at your disposal, and how you can use them to help. I’ll try to bring up things like this from time to time that I find useful, and cover them in more depth like this. Think of this as a synopsis of a longer set of resources that you can use to filter whether you want to research further, bookmark, or forward on to a circle of friends where you think it might help them. Data Driven Design Concepts http://msdn.microsoft.com/en-us/library/windowsazure/jj156154 I’ll start with a great site that walks you through the process of designing a solution from a data-first perspective. As you know, I believe all computing is merely re-arranging data. If you follow that logic as well, you’ll realize that whenever you create a solution, you should start at the data-end of the application. This resource helps you do that. Even if you don’t use the specific technologies the instructions use, the concepts hold for almost any other technology that deals with data. This should be a definite bookmark for a developer, DBA, or Data Architect. When I mentioned my admiration for this resource here at Microsoft, the team that created it contacted me and asked if I’d share an e-mail address with my readers so that you can comment on it. You’re guaranteed to be heard – you can suggest changes, talk about how useful – or not – it is, and so on. Here’s that address: [email protected] End-to-End Example of a complete Hybrid Application – with Live Demo https://azurestocktrader.cloudapp.net/Default.aspx I learn by example. I also like having ready-made, live, functional demos that show the completed solution at work.
If you’ve ever wanted to learn how a complex, complete, hybrid application that bridges on-premises systems with cloud-based databases, code, functions and more works, this is it. It’s a stock-trading simulator, and you can get everything from the design to the code itself, or you can just play with the application. It’s running on Windows Azure, the actual production servers we use for everything else. Using a Cloud-Based Service https://azureconfigweb.cloudapp.net/Default.aspx Along with that stock-trading application, you have a full demonstration and usable code sample of a web-based service available. If you’re a developer, this is a style of code you need to understand for everything from iPhone development to a full Service-Oriented Architecture (SOA) environment. So check out these resources. I’ll post more from time to time as I run across them. Hopefully they’ll be as useful to you as they are to me. Oh, and if you have a comment on any of the resources, let them know. And if you have any comments about these or any of my entries, feel free to post away. To quote a famous TV Show: “Hello Seattle – I’m listening…”

    Read the article

  • What are some concise and comprehensive introductory guides to unit testing for a self-taught programmer [closed]

    - by Superbest
    I don't have much formal training in programming, and I have learned most things by looking up solutions on the internet to practical problems I have. There are some areas which I think would be valuable to learn, but which ended up being both difficult to learn and easy to avoid learning for a self-taught programmer. Unit testing is one of them. Specifically, I am interested in tests in and for C#/.NET applications using Microsoft.VisualStudio.TestTools in Visual Studio 2010 and/or 2012, but I really want a good introduction to the principles, so language and IDE shouldn't matter much. At this time I'm interested in relatively trivial tests for small or medium-sized programs (development time of weeks or months, and mostly just myself developing). I don't necessarily intend to do test-driven development (I am aware that some say unit testing alone is supposed to be for developing features in TDD, and not an assurance that there are no bugs in the software, but unit testing is often the only kind of testing for which I have resources). I have found this tutorial which I feel gave me a decent idea of what unit tests and TDD look like, but in trying to apply these ideas to my own projects, I often get confused by questions I can't answer and don't know how to answer, such as: What parts of my application and what sorts of things aren't necessarily worth testing? How fine-grained should my tests be? Should they test every method and property separately, or work with a larger scope? What is a good naming convention for test methods? (since apparently the name of the method is the only way I will be able to tell from a glance at the test results table what works in my program and what doesn't) Is it bad to have many asserts in one test method? Since apparently VS2012 reports only that "an Assert.IsTrue failed within method MyTestMethod", and if MyTestMethod has 10 Assert.IsTrue statements, it will be irritating to figure out why a test is failing. If a lot of the functionality deals with writing and reading data to/from the disk in a not-exactly-trivial fashion, how do I test that? If I provide a bunch of files as input by placing them in the program's directory, do I have to copy those files to the test project's bin/Debug folder now? If my program works with a large body of data and execution takes minutes or more, should my tests have it use all of the real data, a subset of it, or simulated data? If the latter, how do I decide on the subset or how to simulate it? Closely related to the previous point, if a class is such that its main operation happens in a state that the program arrives at after some involved operations (say, a class makes calculations on data derived from a few thousand lines of code analyzing some raw data), how do I test just that class without inevitably ending up testing all the other code that brings it to that state along with it? In general, what kind of approach should I use for test initialization? (hopefully that is the correct term; I mean preparing classes for testing by filling them in with appropriate data) How do I deal with private members? Do I just suck it up and assume that "not public = shouldn't be tested"? I have seen people suggest using private accessors and reflection, but these feel clumsy and unsuited for regular use. Are these even good ideas? Are there design patterns that concern testing specifically?
    I guess the main themes in what I'd like to learn more about are: (1) what are the overarching principles that should be followed (or at least considered) in every testing effort, and (2) what are popular rules of thumb for writing tests. For example, at one point I recall hearing from someone that if a method is longer than 200 lines, it should be refactored - not a universally correct rule, but it has been quite helpful, since I'd otherwise happily put hundreds of lines in single methods and then wonder why my code is so hard to read. Similarly, I've found ReSharper's suggestions on member naming style and other things to be quite helpful in keeping my codebases sane. I see many resources, both online and in print, that talk about testing in the context of large applications (years of work, 10s of people or more). However, because I've never worked on such large projects, this context is very unfamiliar to me and makes the material difficult to follow and relate to my real-world problems. Speaking of software development in general, advice given with the assumptions of large projects isn't always straightforward to apply to my own, smaller endeavors. Summary So my question is: What are some resources to learn about unit testing, for a hobbyist, self-taught programmer without much formal training? Ideally, I'm looking for a short and simple "bible of unit testing" which I can commit to memory, and then apply systematically by repeatedly asking myself "is this test following the bible of testing closely enough?" and then amending discrepancies if it doesn't.
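    One common rule of thumb that touches the naming and "many asserts" questions above is one behaviour per test, with the method name spelling out method, scenario, and expected result. A minimal MSTest sketch of that style (PriceCalculator and its Total method are hypothetical, made up purely for illustration):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace MyApp.Tests
{
    // Hypothetical class under test, included only so the sketch compiles on its own.
    public class PriceCalculator
    {
        public decimal Total(decimal unitPrice, int quantity)
        {
            return unitPrice * quantity;
        }
    }

    [TestClass]
    public class PriceCalculatorTests
    {
        // The name encodes method, scenario, and expected result, so a red row in the
        // test results table is readable without opening the test itself.
        [TestMethod]
        public void Total_SingleItem_ReturnsUnitPrice()
        {
            var calculator = new PriceCalculator();

            decimal total = calculator.Total(9.99m, 1);

            Assert.AreEqual(9.99m, total);
        }

        [TestMethod]
        public void Total_ZeroQuantity_ReturnsZero()
        {
            var calculator = new PriceCalculator();

            Assert.AreEqual(0m, calculator.Total(9.99m, 0));
        }
    }
}
```

    If several asserts genuinely belong in one test, the message overload (for example Assert.IsTrue(condition, "why this matters")) at least identifies which assert fired when the test fails.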

    Read the article

  • Custom NSView in NSMenuItem not receiving mouse events

    - by Dennis
    I have an NSMenu popping out of an NSStatusItem using popUpStatusItemMenu. These NSMenuItems show a bunch of different links, and each one is connected with setAction: to the openLink: method of a target. This arrangement has been working fine for a long time. The user chooses a link from the menu and the openLink: method then deals with it. Unfortunately, I recently decided to experiment with using NSMenuItem's setView: method to provide a nicer/slicker interface. Basically, I just stopped setting the title, created the NSMenuItem, and then used setView: to display a custom view. This works perfectly, the menu items look great and my custom view is displayed. However, when the user chooses a menu item and releases the mouse, the action no longer works (i.e., openLink: isn't called). If I just simply comment out the setView: call, then the actions work again (of course, the menu items are blank, but the action is executed properly). My first question, then, is why setting a view breaks the NSMenuItem's action. No problem, I thought, I'll fix it by detecting the mouseUp event in my custom view and calling my action method from there. I added this method to my custom view: - (void)mouseUp:(NSEvent *)theEvent { NSLog(@"in mouseUp"); } No dice! This method is never called. I can set tracking rects and receive mouseEntered: events, though. I put a few tests in my mouseEntered routine, as follows: if ([[self window] ignoresMouseEvents]) { NSLog(@"ignoring mouse events"); } else { NSLog(@"not ignoring mouse events"); } if ([[self window] canBecomeKeyWindow]) { dNSLog((@"canBecomeKeyWindow")); } else { NSLog(@"not canBecomeKeyWindow"); } if ([[self window] isKeyWindow]) { dNSLog((@"isKeyWindow")); } else { NSLog(@"not isKeyWindow"); } And got the following responses: not ignoring mouse events canBecomeKeyWindow not isKeyWindow Is this the problem? "not isKeyWindow"? Presumably this isn't good because Apple's docs say "If the user clicks a view that isn’t in the key window, by default the window is brought forward and made key, but the mouse event is not dispatched." But there must be a way do detect these events. HOW? Adding: [[self window] makeKeyWindow]; has no effect, despite the fact that canBecomeKeyWindow is YES.

    Read the article

  • Creating thousands of records in Rails

    - by willCosgrove
    Let me set the stage: My application deals with gift cards. When we create cards they have to have a unique string that the user can use to redeem it with. So when someone orders our gift cards, like a retailer, we need to make a lot of new card objects and store them in the DB. With that in mind, I'm trying to see how quickly I can have my application generate 100,000 Cards. Database expert, I am not, so I need someone to explain this little phenomena: When I create 1000 Cards, it takes 5 seconds. When I create 100,000 cards it should take 500 seconds right? Now I know what you're wanting to see, the card creation method I'm using, because the first assumption would be that it's getting slower because it's checking the uniqueness of a bunch of cards, more as it goes along. But I can show you my rake task desc "Creates cards for a retailer" task :order_cards, [:number_of_cards, :value, :retailer_name] => :environment do |t, args| t = Time.now puts "Searching for retailer" @retailer = Retailer.find_by_name(args[:retailer_name]) puts "Retailer found" puts "Generating codes" value = args[:value].to_i number_of_cards = args[:number_of_cards].to_i codes = [] top_off_codes(codes, number_of_cards) while codes != codes.uniq codes.uniq! top_off_codes(codes, number_of_cards) end stored_codes = Card.all.collect do |c| c.code end while codes != (codes - stored_codes) codes -= stored_codes top_off_codes(codes, number_of_cards) end puts "Codes are unique and generated" puts "Creating bundle" @bundle = @retailer.bundles.create!(:value => value) puts "Bundle created" puts "Creating cards" @bundle.transaction do codes.each do |code| @bundle.cards.create!(:code => code) end end puts "Cards generated in #{Time.now - t}s" end def top_off_codes(codes, intended_number) (intended_number - codes.size).times do codes << ReadableRandom.get(CODE_LENGTH) end end I'm using a gem called readable_random for the unique code. So if you read through all of that code, you'll see that it does all of it's uniqueness testing before it ever starts creating cards. It also writes status updates to the screen while it's running, and it always sits for a while at creating. Meanwhile it flies through the uniqueness tests. So my question to the stackoverflow community is: Why is my database slowing down as I add more cards? Why is this not a linear function in regards to time per card? I'm sure the answer is simple and I'm just a moron who knows nothing about data storage. And if anyone has any suggestions, how would you optimize this method, and how fast do you think you could get it to create 100,000 cards? (When I plotted out my times on a graph and did a quick curve fit to get my line formula, I calculated how long it would take to create 100,000 cards with my current code and it says 5.5 hours. That maybe completely wrong, I'm not sure. But if it stays on the line I curve fitted, it would be right around there.)

    Read the article

  • What to do when you need more verbs in REST

    - by Richard Levasseur
    There is another similar question to mine, but the discussion veered away from the problem I'm encounting. Say I have a system that deals with expense reports (ER). You can create and edit them, add attachments, and approve/reject them. An expense report might look like this: GET /er/1 => {"title": "Trip to NY", "totalcost": "400 USD", "comments": [ "john: Please add the total cost", "mike: done, can you approve it now?" ], "approvals": [ {"john": "Pending"}, {"finance-group": "Pending"}] } That looks fine, right? Thats what an expense report document looks like. If you want to update it, you can do this: POST /er/1 {"title": "Trip to NY 2010"} If you want to approve it, you can do this: POST /er/1/approval {"approved": true} But, what if you want to update the report and approve it at the same time? How do we do that? If you only wanted to approve, then doing a POST to something like /er/1/approval makes sense. We could put a flag in the URL, POST /er/1?approve=1, and send the data changes as the body, but that flag doesn't seem RESTful. We could put special field to be submitted, too, but that seems a bit hacky, too. If we did that, then why not send up data with attributes like set_title or add_to_cost? We could create a new resource for updating and approving, but (1) I can't think of how to name it without verbs, and (2) it doesn't seem right to name a resource based on what actions can be done to it (what happens if we add more actions?) We could have an X-Approve: True|False header, but headers seem like the wrong tool for the job. It'd also be difficult to get set headers without using javascript in a browser. We could use a custom media-type, application/approve+yes, but that seems no better than creating a new resource. We could create a temporary "batch operations" url, /er/1/batch/A. The client then sends multiple requests, perhaps POST /er/1/batch/A to update, then POST /er/1/batch/A/approval to approve, then POST /er/1/batch/A/status to end the batch. On the backend, the server queues up all the batch requests somewhere, then processes them in the same backend-transaction when it receives the "end batch processing" request. The downside with this is, obviously, that it introduces a lot of complexity. So, what is a good, general way to solve the problem of performing multiple actions in a single request? General because its easy to imagine additional actions that might be done in the same request: Suppress or send notifications (to email, chat, another system, whatever) Override some validation (maximum cost, names of dinner attendees) Trigger backend workflow that doesn't have a representation in the document.

    Read the article

  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simuations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot, actual table extends to a few thousand sessions, roughly 100 students). | | Session 1 | Session 2 | Session 3 | |----------|-----------|-----------|-----------| |Stu1 |Proj_AA |Proj_AB |Proj_AB | |----------|-----------|-----------|-----------| |Stu2 |Proj_AB |Proj_AA |Proj_AC | |----------|-----------|-----------|-----------| |Stu3 |Proj_AC |Proj_AC |Proj_AA | |----------|-----------|-----------|-----------| Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is over-written. Thus what I'd really like to do is to store all the allocation results. This is important since I later need to derive from the data, information such as: which project Stu1 got assigned to the most or perhaps how popular Proj_AC was (how many times it was assigned / number of sessions). Question(s): What methods can I possibly use to basically store such session information persistently? Basically, each session output needs to add itself to the repository after ending and before beginning the next allocation cycle. One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea since this does give me an opportunity to delve into databases. Now the database structure I was recommended was: |----------|-----------|-----------| |Session |Student |Project | |----------|-----------|-----------| |1 |Stu1 |Proj_AA | |----------|-----------|-----------| |1 |Stu2 |Proj_AB | |----------|-----------|-----------| |1 |Stu3 |Proj_AC | |----------|-----------|-----------| |2 |Stu1 |Proj_AB | |----------|-----------|-----------| |2 |Stu2 |Proj_AA | |----------|-----------|-----------| |2 |Stu3 |Proj_AC | |----------|-----------|-----------| |3 |Stu1 |Proj_AB | |----------|-----------|-----------| |3 |Stu2 |Proj_AC | |----------|-----------|-----------| |3 |Stu3 |Proj_AA | |----------|-----------|-----------| Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student for a particular session. Or I can merely get the entire allocation run for a particular session. Questions: Is the idea a good one? How does one implement and query a composite key using SQLAlchemy? What happens to the database if a particular student is not assigned a project (happens if all projects that he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object. I apologise for asking multiple questions but since these are closely-related, I thought I'd ask them in the same space.

    Read the article

  • Lazy loading the addthis script? (or lazy loading external js content dependent on already fired events)

    - by Keith Bentrup
    I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for firefox). Since the DOMContentLoaded event has already fired, the widget doesn't render properly. What to do? I could just use a script tag (slower)... or could I fire (in a cross-browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible b/c I believe that (like jQuery) there are multiple tests of the content-ready event, and so multiple simulated events would have to occur. Nonetheless, this is an interesting problem b/c I have seen a couple of widgets now that assume you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a workaround? And/or are any of my assumptions wrong? Edit: Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation. This is about a specific problem. I'm not looking for yet another lazy-load script or a check-whether-some-dependencies-are-loaded script. Specifically, this problem deals with external widgets that you do not have control over and that may or may not be obfuscated; with delaying the load of those widgets until they are needed, or at least until substantially after everything else (including other deferred elements) has been loaded; and with the fact that, because of how the widget was written, existing, typical lazy loading paradigms are precluded. While it's esoteric, I have seen it happen with a couple of widgets - where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms** though, as numerous studies by yahoo, google, and amazon show it to be important to your user's experience. **My testing with hammerhead and personal experience indicates that this will be my savings in this case.

    Read the article

  • PHP MVC Principles

    - by George
    I'm not using an off-the-shelf framework and don't particularly want to (nor d I want to go into the reasons why...). Anyway, onto my question(s), I hope it make sense.... I'm trying to get my head around what should go in the model and what should go in the controller. Originally I had the impression that a model class should represent an actual object (eg - a car from the cars table of a database) and model properties should mirror the database fields. However I'm now getting the feeling that I've got the wrong idea - should an instance of a model class represent an actual item, or should it contain a number of methods for doing stuff - sometimes to one car or sometimes to multiple cars based on my example earlier. For example I want to get all the cars from a the database and show them in the view. Am I right in think it should be along the lines of this? Controller File function list() { $cars = $this->model->get_all(); $this->view->add($cars); $this->view->render('cars-list'); } Model File function get_all() { // Use a database interaction class that I've written $cars = Database::select(); return $cars; } Now, if the car had a "status" field that was stored as an integer in the database and I wanted to change that to a string, where should that be done? By looping the SQL results array in the get_all() method in the model? Also, where should form validation live? I have written a validation class that works a little like this: $validator = new Validator(); $validator->check('field_name', 'required'); If the check fails, it adds an error message to the array in the Validator. This array of error messages would then get passed to the view. Should the use of my validator class go in model or the controller? Thanks in advance for for any help anyone can offer. If you know of any links to a simple MVC example / open source application that deals with basic CRUD, they would be much appreciated.

    Read the article

  • What is the best way to archive data in a relational database?

    - by GenericTypeTea
    I have a bit of an issue with a particular aspect of a program I'm working on. I need the ability to archive (fix) a table so that a change anywhere in the system will not affect the results it returns. This is the basic structure of what I need to fix: Recipe --> Recipe (as sub recipe) Recipe --> Ingredients So, if I fix a Recipe, I need to ensure all the sub recipes (including all the sub recipes sub recipes and so forth) are fixed and all its ingredients are fixed. The problem is that the sub recipe and ingredients still need to be modifiable as they are used by other recipes that are not fixed. I came up with a solution whereby I serialize (with protobuf-net) a master object that deals with the recipe and all the sub recipes and ingredients and save the archive data to a table like follows: Archive{ ReferenceId, (i.e. RecipeId) ReferenceTypeId, (i.e. Recipe) ArchiveData varbinary(max) } Now, this works great and is almost perfect... however I totally forgot (I'd love to blame the agile development mentally, however this was just short sighted) that this information needs to be reported on. As far as I'm aware I can't think how I could inflate the serialized data back into my Recipe Object and use it in a Report. I'm using the standard SQL 2005 report services at the moment. Alternatively, I guess I could do the following: Duplicate every table and tag the word "Archive" on the end of the table name. This would then give me an area of specific archive data... but ignoring my simplified example, there'd actually be about 15 tables duplicated. Add a nullable, non-foreign key property called "CopiedFromId" to every table that contains fixed data and duplicate every record that the recipe (and all it's sub recipes and all their sub recipes) touches. Create some sort of denormalised structure that could be restored from at a later date to the original, unfixed recipe. Although I think this would be like option 1 and involve a lot of extra tables. Anyway, I'm at a total loss and do not like any of the ideas particularly. Can anyone please advise the best course of action? EDIT: Or 4) Create tables specific to what the report requires and populate them with the data when the user clicks the report button? This would cause about 4 extra tables for the report in question.
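    On the reporting problem specifically, the serialized blob can be inflated back into the original object graph the same way it was written; with protobuf-net that would look roughly like the sketch below (Recipe is the post's own type, and the assumption is that the Archive row's ArchiveData column was produced by Serializer.Serialize):

```csharp
using System.IO;
using ProtoBuf;

public static class RecipeArchive
{
    // Sketch only: assumes ArchiveData was written with Serializer.Serialize(stream, recipe),
    // as the post describes, so the same type can read it back.
    public static Recipe Inflate(byte[] archiveData)
    {
        using (var stream = new MemoryStream(archiveData))
        {
            return Serializer.Deserialize<Recipe>(stream);
        }
    }
}
```

    Whether the inflated Recipe can then feed the report depends on the reporting stack; it would still need to be flattened into whatever row shape the report expects, which is really the heart of the question.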

    Read the article

  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. Two sample queries below show different queries to this view. var la = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).ToList(); var deDa = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP }).ToList(); The first one should hand back a List. The second one will be a list of anonymous objects. When I do these queries in entities framework the first one will hand me back a list of results where they're all exactly the same. The second query will hand me back data where the account number is the one that I queried and the other values differ. This seems to do this on a per account number basis, ie if I were to query for one account number or another all the Position objects for one account would have the same value (the first one in the list of Positions for that account) and the second account would have a set of Position objects that all had the same value (again, the first one in it's list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data, one account number with different securities numbers. Why does this happen??? Is there something that I could be doing wrong so that if I had four results for the first query above that the first record's data also appears in the 2-4th's objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view model type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto select what would be unique. Any help would give a haggered dev some hope!
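    For what it's worth, the symptom described (every row materializing as a copy of the first) is exactly what identity resolution does when the entity key defined on the view in the EDMX is not actually unique: the ObjectContext sees the same key value on each row and hands back the same tracked object, which is also why the anonymous-type projection comes back correct, since projections bypass entity materialization. A quick way to test that theory, assuming an EDMX/ObjectContext model as the post suggests, is to turn identity resolution off for the query:

```csharp
using System.Data.Objects; // MergeOption lives here in the EDMX/ObjectContext stack

// Diagnostic sketch (inside whatever method runs the query): with NoTracking the
// context skips identity resolution, so if these rows now come back distinct, the
// entity key on the Positions view is the culprit.
Runtime.OmsEntityContext.Positions.MergeOption = MergeOption.NoTracking;

var la = Runtime.OmsEntityContext.Positions
    .Where(p => p.AccountNumber == 12345678)
    .ToList();
```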

    Read the article

  • Spring.NET & Immediacy CMS (or how to inject to server side controls without using PageHandlerFactory)

    - by Simon Rice
    Is there any way to inject dependencies into an Immediacy CMS control using Spring.NET, ideally without having to use to ContextRegistry when initialising the control? Update, with my own answer The issue here is that Immediacy already has a handler defined in web.config that deals with all aspx pages, & so it's not possible add an entry for Spring.NET's PageHandlerFactory in web.config as per a normal webforms app. That rules out making the control implement ISupportsWebDependencyInjection. Furthermore, most of Immediacy's generated pages are aspx pages that don't physically exist on the drive. I have changed the title of the question to reflect this. What I have done to get Dependency Injection working is: Add the usual entries to web.config for Spring.NET as outlined in the documentation, except for the adding the entry to the <httpHandlers> section. In this case I've got my object definitions in Spring.config. Create the following abstract base class that will deal with all of the Dependency Injection work: DIControl.cs public abstract class DIControl : ImmediacyControl { protected virtual string DIName { get { return this.GetType().Name; } } protected override void OnInit(EventArgs e) { if (ContextRegistry.GetContext().GetObject(DIName, this.GetType()) != null) ContextRegistry.GetContext().ConfigureObject(this, DIName); base.OnInit(e); } } For non-immediacy controls, you can make this server side control inherit from Control or whatever subclass of that you like. For any control with which you wish to use with Spring.NET's Inversion of Control container, define it to inherit from DIControl & add the relelvant entry to Spring.config, for example: SampleControl.cs public class SampleControl : DIControl, INamingContainer { public string Text { get; set; } protected string InjectedText { get; set; } public SampleControl() : base() { Text = "Hello world"; } protected override void RenderContents(HtmlTextWriter output) { output.Write(string.Format("{0} {1}", Text, InjectedText)); } } Spring.config <objects xmlns="http://www.springframework.net"> <object id="SampleControl" type="MyProject.SampleControl, MyAssembly"> <property name="InjectedText" value="from Spring.NET" /> </object> </objects> You can optionally override DIName if you wish to name your entry in Spring.config differently from the name of your class. Provided everything's done correctly, you will have the control writing out "Hello world from Spring.NET!" when used in a page. This solution uses Spring.NET's ContextRegistry from within the control, but I would be surprised if there's no way around that for Immediacy at least since the page objects themselves aren't accessible. However, can this be improved at all from a Spring.NET perspective? Is there maybe an Immediacy plugin that already does this that I'm completely unaware of? Or is there an approach that does this in a more elegant way? I'm open to suggestions.

    Read the article

  • Starting a company and getting a good deal

    - by Dan
    Hi, I have the possibility of starting a small company with someone else. I'm a software programmer and have been working in a field where there isn't much competition, at least on the platform I'm developing for, which is the one with the highest market share. This other person comes from marketing, so he would find / provide clients and be in charge of the business side. So initially it would be me, the tech guy with the knowledge to develop our product and products based on my development, and this person who would become CEO (basically a 50/50 share). The idea is to build a product of our own, but also to perform projects for others in order to get some cash. The problem is that his idea is to do this without initial capital. He tells me that he could get some customers from past business (which is mostly true, as he has been in the business for some time) and then, with the money obtained from such development, we could hire some freelance developers and build our own platform. I've already discussed with him that I don't find this the best way, given that we need to compete against other companies, some of which are VC funded and have many developers working full time. But most of all, I'm wondering whether this is a fair deal, given that this person is not providing anything other than clients, and I'd be the one having to do the work and dedicate the time, thus also taking most of the risk. In all fairness, I'd expect him to put in some initial capital, given that I'm putting in some of my code, which is in some way money, given the time I've dedicated to it. So the question here is: does this seem fair to you? Is this the way it is usually done? My only concern is that I don't have previous experience with such deals, and I don't know whether this is the usual thing to do in other cases. Certainly having a marketing person helps a lot. As you probably know, being a programmer doesn't make me the best-suited person for marketing ;-) but at the same time I'm not sure whether this could be a good deal or I'd just be making a pretty bad deal. Hope I made my point clear, but feel free to let me know if more details are needed. Thanks in advance

    Read the article

  • C#/.NET Project - Am I setting things up correctly?

    - by JustLooking
    1st solution located: \Common\Controls\Controls.sln and its project: \Common\Controls\Common.Controls\Common.Controls.csproj Description: This is a library that contains this class: public abstract class OurUserControl : UserControl { // Variables and other getters/setters common to our UserControls } 2nd solution located: \AControl\AControl.sln and its project: \AControl\AControl\AControl.csproj Description: Of the many forms/classes, it will contain this class: using Common.Controls; namespace AControl { public partial class AControl : OurUserControl { // The implementation } } A note about adding references (not sure if this is relevant): When I add references (for projects I create), using the names above: 1. I add Common.Controls.csproj to AControl.sln 2. In AControl.sln I turn off the build of Common.Controls.csproj 3. I add the reference to Common.Controls (by project) to AControl.csproj. This is the (easiest) way I know how to get Debug versions to match Debug References, and Release versions to match Release References. Now, here is where the issue lies (the 3rd solution/project that actually utilizes the UserControl): 3rd solution located: \MainProj\MainProj.sln and its project: \MainProj\MainProj\MainProj.csproj Description: Here's a sample function in one of the classes: private void TestMethod<T>() where T : Common.Controls.OurUserControl, new() { T TheObject = new T(); TheObject.OneOfTheSetters = something; TheObject.AnotherOfTheSetters = something_else; // Do stuff with the object } We might call this function like so: private void AnotherMethod() { TestMethod<AControl.AControl>(); } This builds, runs, and works. No problem. The odd thing is after I close the project/solution and re-open it, I have red squigglies everywhere. I bring up my error list and I see tons of errors (anything that deals with AControl will be noted as an error). I'll see errors such as: The type 'AControl.AControl' cannot be used as type parameter 'T' in the generic type or method 'MainProj.MainClass.TestMethod()'. There is no implicit reference conversion from 'AControl.AControl' to 'Common.Controls.OurUserControl'. or inside the actual method (the properties located in the abstract class): 'AControl.AControl' does not contain a definition for 'OneOfTheSetters' and no extension method 'OneOfTheSetters' accepting a first argument of type 'AControl.AControl' could be found (are you missing a using directive or an assembly reference?) Meanwhile, I can still build and run the project (then the red squigglies go away until I re-open the project, or close/re-open the file). It seems to me that I might be setting up the projects incorrectly. Thoughts?

    Read the article

  • Is a Multi-DAL Approach the way to go here?

    - by Krisc
    Working on the data access / model layer in this little MVC2 project and trying to think things out to future projects. I have a database with some basic tables and I have classes in the model layer that represent them. I obviously need something to connect the two. The easiest is to provide some sort of 'provider' that can run operations on the database and return objects. But this is for a website that would potentially be used "a lot" (I know, very general) so I want to cache results from the data layer and keep the cache updated as new data is generated. This question deals with how best to approach this problem of dual DALS. One that returns cached data when possible and goes to the data layer when there is a cache miss. But more importantly, how to integrate the core provider (thing that goes into database) with the caching layer so that it too can rely on cached objects rather than creating new ones. Right now I have the following interfaces: IDataProvider is used to reach the database. It doesn't concern itself with the meaning of the objects it produces, but simply the way to produce them. interface IDataProvider{ // Select, Update, Create, et cetera access IEnumerable<Entry> GetEntries(); Entry GetEntryById(int id); } IDataManager is a layer that sits on top of the IDataProvider layer and manages the cache interface IDataManager : IDataProvider{ void ClearCache(); } Note that in practice the IDataManager implementation will have useful helper functions to add objects to their related cache stores. (In the future I may define other functions on the interface) I guess what I am looking for is the best way to approach a loop back from the IDataProvider implementations so that they can access the cache. Or a different approach entirely may be in order? I am not very interested in 3rd party products at the moment as I am interested in the design of these things much more than this specific implementation. Edit: I realize the title may be a bit misleading. I apologize for that... not sure what to call this question.
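    One shape this often takes is a decorator: the caching layer implements the same contract as the provider and wraps the real one, so everything downstream only ever talks to the cached view. A rough sketch against the interfaces above (Entry, IDataProvider, and IDataManager are the question's own types; the assumption that Entry exposes an Id is mine):

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch of a caching decorator over the question's IDataProvider/IDataManager contracts.
public class CachingDataManager : IDataManager
{
    private readonly IDataProvider _inner;            // the real database-backed provider
    private readonly Dictionary<int, Entry> _byId = new Dictionary<int, Entry>();
    private List<Entry> _all;                         // null means "not cached yet"

    public CachingDataManager(IDataProvider inner)
    {
        _inner = inner;
    }

    public IEnumerable<Entry> GetEntries()
    {
        if (_all == null)
        {
            _all = _inner.GetEntries().ToList();       // cache miss: hit the database once
            foreach (var e in _all) _byId[e.Id] = e;   // assumes Entry exposes an Id
        }
        return _all;
    }

    public Entry GetEntryById(int id)
    {
        Entry entry;
        if (!_byId.TryGetValue(id, out entry))
        {
            entry = _inner.GetEntryById(id);           // cache miss for a single entry
            if (entry != null) _byId[id] = entry;
        }
        return entry;
    }

    public void ClearCache()
    {
        _all = null;
        _byId.Clear();
    }
}
```

    The "loop back" then becomes a wiring question: hand the CachingDataManager, rather than the raw provider, to anything that needs data, and cache invalidation stays in one place.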

    Read the article

  • .htaccess mod_rewrite URL query

    - by 1001001
    I was hoping someone could help me out. I'm building a CRM application and need help modifying the .htaccess file to clean up the URLs. I've read every post regarding .htaccess and mod_rewrite and I've even tried using http://www.generateit.net/mod-rewrite/ to obtain the results with no success. Here is what I am attempting to do. Let's call the base URL www.domain.com We are using php with a mysql back-end and some jQuery and javascript In that "root" folder is my .htaccess file. I'm not sure if I need a .htaccess file in each subdirectory or if one in the root is enough. We have several actual directories of files including "crm", "sales", "finance", etc. First off we want to strip off all the ".php" extensions which I am able to do myself thanks to these posts. However, the querying of the company and contact IDs are where I am stuck. Right now if I load www.domain.com/crm/companies.php it displays all the companies in a list. If I click on one of the companies it uses javascript to call a "goto_company(x)" jQuery script that writes a form and submit that form based on the ID (x) of the company. This works fine and keeps the links clean as all the end user sees is www.domain.com/crm/company.php. However you can't navigate directly to a company. So we added a few lines in PHP to see if the POST is null and try a GET instead allowing us to do www.domain.com/crm/company.php?companyID=40 which displays company #40 out of the database. I need to rewrite this link, and all other associated links to www.domain.com/crm/company/40 I've tried everything and nothing seems to work. Keep in mind that I need to do this for "contacts" and also on the sales portion of the app will need to do something for "deals". To summarize here's what I am looking to do: Change www.domain.com/crm/dash.php to www.domain.com/crm/dash Change www.domain.com/crm/company.php?companyID=40 to www.domain.com/crm/company/40 Change www.domain.com/crm/contact.php?contactID=27 to www.domain.com/crm/contact/27 Change www.domain.com/sales/dash.php to www.domain.com/sales/dash Change www.domain.com/sales/deal.php?dealID=6 to www.domain.com/sales/deal/6 (40, 27, and 6 are just arbitrary numbers as examples) Just for reference, when I used the generateit.net/mod-rewrite site using www.domain.com/crm/company.php?companyID=40 as an example, here is what it told me to put in my .htaccess file: Options +FollowSymLinks RewriteEngine On RewriteRule ^crm/company/([^/]*)$ /crm/company.php?companyID=$1 [L] Needless to say that didn't work.

    Read the article

  • I need to modify a program to use arrays and a method call. Should I modify the running file, the data collection file, or both?

    - by g3n3rallyl0st
    I have to have multiple classes for this program. The problem is, I don't fully understand arrays and how they work, so I'm a little lost. I will post my program I have written thus far so you can see what I'm working with, but I don't expect anyone to DO my assignment for me. I just need to know where to start and I'll try to go from there. I think I need to use a double array since I will be working with decimals since it deals with money, and my method call needs to calculate total price for all items entered by the user. Please help: RUNNING FILE package inventory2; import java.util.Scanner; public class RunApp { public static void main(String[] args) { Scanner input = new Scanner( System.in ); DataCollection theProduct = new DataCollection(); String Name = ""; double pNumber = 0.0; double Units = 0.0; double Price = 0.0; while(true) { System.out.print("Enter Product Name: "); Name = input.next(); theProduct.setName(Name); if (Name.equalsIgnoreCase("stop")) { return; } System.out.print("Enter Product Number: "); pNumber = input.nextDouble(); theProduct.setpNumber(pNumber); System.out.print("Enter How Many Units in Stock: "); Units = input.nextDouble(); theProduct.setUnits(Units); System.out.print("Enter Price Per Unit: "); Price = input.nextDouble(); theProduct.setPrice(Price); System.out.print("\n Product Name: " + theProduct.getName()); System.out.print("\n Product Number: " + theProduct.getpNumber()); System.out.print("\n Amount of Units in Stock: " + theProduct.getUnits()); System.out.print("\n Price per Unit: " + theProduct.getPrice() + "\n\n"); System.out.printf("\n Total cost for %s in stock: $%.2f\n\n\n", theProduct.getName(), theProduct.calculatePrice()); } } } DATA COLLECTION FILE package inventory2; public class DataCollection { String productName; double productNumber, unitsInStock, unitPrice, totalPrice; public DataCollection() { productName = ""; productNumber = 0.0; unitsInStock = 0.0; unitPrice = 0.0; } //setter methods public void setName(String name) { productName = name; } public void setpNumber(double pNumber) { productNumber = pNumber; } public void setUnits(double units) { unitsInStock = units; } public void setPrice(double price) { unitPrice = price; } //getter methods public String getName() { return productName; } public double getpNumber() { return productNumber; } public double getUnits() { return unitsInStock; } public double getPrice() { return unitPrice; } public double calculatePrice() { return (unitsInStock * unitPrice); } }

    Read the article

  • Vector Troubles in C++

    - by DistortedLojik
    I am currently working on a project that deals with a vector of objects of a People class. The program compiles and runs just fine, but when I use the debugger it dies when trying to do anything with the PersonWrangler object. I currently have 3 different classes, one for the person, a personwrangler which handles all of the people collectively, and a game class that handles the game input and output. Edit: My basic question is to understand why it is dying when it calls outputPeople. Also I would like to understand why my program works exactly as it should unless I use the debugger. The outputPeople function works the way I intended that way. Edit 2: The callstack has 3 bad calls which are: std::vector ::begin(this=0xbaadf00d) std::vector ::size(this=0xbaadf00d) PersonWrangler::outputPeople(this=0xbaadf00d) Relevant code: class Game { public: Game(); void gameLoop(); void menu(); void setStatus(bool inputStatus); bool getStatus(); PersonWrangler* hal; private: bool status; }; which calls outputPeople where it promptly dies from a baadf00d error. void Game::menu() { hal->outputPeople(); } where hal is an object of PersonWrangler type class PersonWrangler { public: PersonWrangler(int inputStartingNum); void outputPeople(); vector<Person*> peopleVector; vector<Person*>::iterator personIterator; int totalPeople; }; and the outputPeople function is defined as void PersonWrangler::outputPeople() { int totalConnections = 0; cout << " Total People:" << peopleVector.size() << endl; for (unsigned int i = 0;i < peopleVector.size();i++) { sort(peopleVector[i]->connectionsVector.begin(),peopleVector[i]->connectionsVector.end()); peopleVector[i]->connectionsVector.erase( unique (peopleVector[i]->connectionsVector.begin(),peopleVector[i]->connectionsVector.end()),peopleVector[i]->connectionsVector.end()); peopleVector[i]->outputPerson(); totalConnections+=peopleVector[i]->connectionsVector.size(); } cout << "Total connections:" << totalConnections/2 << endl; }

    Read the article

  • httpd.conf configuration - for internal/external access

    - by tom smith
    hey. after a lot of trail/error/research, i've decided to post here in the hopes that i can get clarification on what i've screwed up... i've got a situation where i have multiple servers behind a router/firewall. i want to be able to access the sites i have from an internal and external url/address, and get the same site. i have to use portforwarding on the router, so i need to be able to use proxyreverse to redirect the user to the approriate server, running the apache/web app... my setup the external urls joomla.gotdns.com forge.gotdns.com both of these point to my router's external ip address (67.168.2.2) (not really) the router forwards port 80 to my server lserver6 192.168.1.56 lserver6 - 192.168.1.56 lserver9 - 192.168.1.59 lserver6 - joomla app lserver9 - forge app i want to be able to have the httpd process (httpd.conf) configured on lserver6 to be able to allow external users accessing the system (foo.gotdns.com) be able to access the joomla app on lserver6 and the same for the forge app running on lserver9 at the same time, i would also like to be able to access the apps from the internal servers, so i'd need to be able to somehow configure the vhost setup/proxyreverse setup to handle the internal access... i've tried setting up multiple vhosts with no luck.. i've looked at the different examples online.. so there must be something subtle that i'm missing... the section of my httpd.conf file that deals with the vhost is below... if there's something else that's needed, let me know and i can post it as well.. thanks -tom ##joomla - file /etc/httpd/conf.d/joomla.conf Alias /joomla /var/www/html/joomla <Directory /var/www/html/joomla> </Directory> # Use name-based virtual hosting. #NameVirtualHost *:80 # NOTE: NameVirtualHost cannot be used without a port specifier # (e.g. :80) if mod_ssl is being used, due to the nature of the # SSL protocol. # VirtualHost example: # Almost any Apache directive may go into a VirtualHost container. # The first VirtualHost section is used for requests without a known # server name. #<VirtualHost *:80> # ServerAdmin [email protected] # DocumentRoot /www/docs/dummy-host.example.com # ServerName dummy-host.example.com # ErrorLog logs/dummy-host.example.com-error_log # CustomLog logs/dummy-host.example.com-access_log common #</VirtualHost> NameVirtualHost 192.168.1.56:80 <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] #DocumentRoot /var/www/html #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName fforge.gotdns.com:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off ProxyPass / http://192.168.1.81:80/ ProxyPassReverse / http://192.168.1.81:80/ </VirtualHost> <VirtualHost 192.168.1.56:80> #ServerAdmin [email protected] DocumentRoot /var/www/html/joomla #ServerName lserver6.tmesa.com #ServerName fforge.tmesa.com ServerName 192.168.1.56:80 #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common #ProxyRequests Off </VirtualHost>

    Read the article

  • How to transfer data between two networks efficiently

    - by Tono Nam
    I would like to transfer files between two places over the internet. Right now I have a VPN and I am able to browse, download and transfer files. So my question is not really how to transfer the files; instead, I would like to use the most efficient approach, because the two places constantly share a lot of data. The reason why I want to get rid of the VPN is that it is too slow. Having high upload speed is very expensive/impossible in residential places, so I would like to use a different approach. I was thinking about using programs such as http://www.dropbox.com . The problem with Dropbox is that the free version comes with only 2 GB of storage. I think the deals they offer are OK and I might be willing to pay to get that increase in speed. But I am concerned with the speed of transferring data: Dropbox will upload the file to their server and then send it from the server to the other location. I would like it to be even faster.

    Anyway, I was thinking: why not create a program myself? This is the algorithm that I was thinking of. Let me know if it sounds too crazy. (Remember, my goal is to transfer files as fast as possible.) Things that I will use in this algorithm:

        Server on the internet called S (has fast download and upload speed; I pay to host a website and some services there and want to take advantage of it)
        Client A at location 1
        Client B at location 2

    So let's say at location 1, 20 large files are created and need to be transferred to location 2. Client A compresses the files with the highest compression ratio possible. Client A then starts sending data via UDP to client B; because I am using UDP I will include a sequence number on each packet. Server S helps speed things up: for example, every time a packet is lost, server S can inform client A that it needs to resend that packet. I think this approach will increase the transfer rate. I do not know if it is possible to start sending data while it is still being compressed, or to start decompressing data before the whole file has been received. Maybe it would be faster to start sending the files right away without compressing. If I knew I would always be sending large text files I would obviously use compression, but I need this as a general algorithm.

    So I guess my question is: could I increase performance by using UDP instead of TCP and by using an extra server to keep track of lost packets? And how should I compress files before sending? Compressing a 1 GB file with the highest compression ratio takes about 1 hour! I would like to take advantage of that time by sending the data as it is being compressed.
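
    Not from the question itself, but a minimal sketch of the sequence-numbered UDP idea described above, assuming a POSIX socket API; the port, chunk size, and argument handling are illustrative, and retransmission, pacing, and the helper server S are deliberately left out:

        // Tag each datagram with a 32-bit sequence number so the receiver
        // (or a helper server) can report exactly which packets went missing.
        #include <arpa/inet.h>
        #include <netinet/in.h>
        #include <sys/socket.h>
        #include <unistd.h>
        #include <cstddef>
        #include <cstdint>
        #include <cstring>
        #include <fstream>
        #include <vector>

        int main(int argc, char** argv) {
            if (argc < 3) return 1;                  // usage: sender <file> <dest-ip>
            constexpr std::size_t kChunkSize = 1200; // stay under a typical MTU

            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            sockaddr_in dest{};
            dest.sin_family = AF_INET;
            dest.sin_port = htons(9000);             // illustrative port
            inet_pton(AF_INET, argv[2], &dest.sin_addr);

            std::ifstream in(argv[1], std::ios::binary);
            std::vector<char> packet(4 + kChunkSize);
            uint32_t seq = 0;
            while (in) {
                in.read(packet.data() + 4, kChunkSize);
                std::streamsize n = in.gcount();
                if (n <= 0) break;
                uint32_t seq_be = htonl(seq++);      // sequence number, network byte order
                std::memcpy(packet.data(), &seq_be, 4);
                sendto(sock, packet.data(), 4 + static_cast<std::size_t>(n), 0,
                       reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
            }
            close(sock);
            return 0;
        }

    Worth noting before building this: reliable delivery, ordering and congestion control are exactly what TCP already provides, so unless the links are unusually lossy, a tuned TCP transfer (or an existing tool such as rsync with compression enabled) is likely to be simpler and at least as fast; gzip-style streaming compression and rsync both compress on the fly, which speaks to the "send while compressing" question.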

    Read the article

  • Inconsistent file downloads of (what should be) the same file

    - by Austin A.
    I'm working on a system that archives large collections of timestamped images. Part of the system deals with saving an image to a growing .zip file. This morning I noticed that the log system said that an image was successfully downloaded and placed in the zip file, but when I downloaded the .zip (from an Apache alias running on our server), the images didn't match the log. For example, although the log said that camera 3484 captured images on January 17, 2011, the zip file downloaded from the Apache alias only contains images up to January 14. So I sshed onto the server and unzipped the file in its own directory, and that zip file has images from January 14 to today (January 17). What strikes me as odd is that this should be the exact same file as the one I downloaded from the Apache alias.

    Other experiments: I scp-ed the file from the server to my local machine, and the zip file has the newer images. But when I use an SCP client (in this case, Fugu for OS X), I get the zip file with the older images. In short: unzipping the file on the server, or after downloading it through scp or wget, gives one zip file, but downloading it with Chrome, Firefox, or a graphical SCP client gives a different zip file, when they should be exactly the same.

    Unzipping on the server:

        [user@server ~]$ cd /export1/amos/images/2011/84/3484/00003484/
        [user@server 00003484]$ ls -la
        total 6180
        drwxr-sr-x 2 user groupname      24 Jan 17 11:20 .
        drwxr-sr-x 4 user groupname      36 Jan 11 19:58 ..
        -rw-r--r-- 1 user groupname 6309980 Jan 17 12:05 2011.01.zip
        [user@server 00003484]$ unzip 2011.01.zip
        Archive: 2011.01.zip
         extracting: 20110114_140547.jpg
         extracting: 20110114_143554.jpg
        replace 20110114_143554.jpg? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
         extracting: 20110114_143554.jpg
         extracting: 20110114_153458.jpg
        (...bunch of files...)
         extracting: 20110117_170459.jpg
         extracting: 20110117_173458.jpg
         extracting: 20110117_180501.jpg

    Using wget through the Apache alias:

        local:~ user$ wget http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
        --12:38:13-- http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip
                   => `2011.01.zip'
        Resolving example.com... ip.ip.ip.ip
        Connecting to example.com|ip.ip.ip.ip|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 6,327,747 (6.0M) [application/zip]
        100%[==========================>] 6,327,747  1.03M/s  ETA 00:00
        12:38:56 (143.23 KB/s) - `2011.01.zip' saved [6327747/6327747]
        local:~ user$ unzip 2011.01.zip
        Archive: 2011.01.zip
         extracting: 20110114_140547.jpg
        (... same as before...)
         extracting: 20110117_183459.jpg

    Using scp to grab the zip:

        local:~ user$ scp user@server:/export1/amos/images/2011/84/3484/00003484/2011.01.zip .
        2011.01.zip    100% 6179KB 475.3KB/s   00:13
        local:~ user$ unzip 2011.01.zip
        Archive: 2011.01.zip
         extracting: 20110114_140547.jpg
        (...same as before...)
         extracting: 20110117_183459.jpg

    Using Fugu to download 2011.01.zip from /export1/amos/images/2011/84/3484/00003484/ gives images 20110113_090457.jpg through 201100114_010554.jpg. Using Firefox to download 2011.01.zip from http://example.com/zipfiles/2011/84/3484/00003484/2011.01.zip gives images 20110113_090457.jpg through 201100114_010554.jpg. Using Chrome gives the same results as Firefox.

    Relevant section from the Apache httpd.conf:

        # ScriptAlias: This controls which directories contain server scripts.
        # ScriptAliases are essentially the same as Aliases, except that
        # documents in the realname directory are treated as applications and
        # run by the server when requested rather than as documents sent to the client.
        # The same rules about trailing "/" apply to ScriptAlias directives as to
        # Alias.
        #
        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

        Alias /zipfiles/ /export1/amos/images/
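
    A quick way to narrow this down, assuming shell access on both ends, is to checksum the file on the server and each downloaded copy and see which path changes the bytes; md5sum on the Linux server and md5 on OS X are the stock tools (no particular output is implied here):

        [user@server 00003484]$ md5sum 2011.01.zip
        local:~ user$ md5 2011.01.zip

    If the copy fetched through the Apache alias hashes differently from the file on disk, something on the HTTP path (a proxy or browser cache, or a resumed partial download) is serving stale bytes; if all the hashes match, the difference is more likely in which locally cached copy each client is actually opening rather than in the transfer itself.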

    Read the article

  • ActiveX component can't create Object Error? Check 64 bit Status

    - by Rick Strahl
    If you're running on IIS 7 and a 64 bit operating system you might run into the following error using ASP classic or ASP.NET with COM interop. In classic ASP applications the error will show up as:

        ActiveX component can't create object   (Error 429)

    (actually, without error handling the error just shows up as a 500 error page). In my case the code that's been giving me problems has been a FoxPro COM object I'd been using to serve banner ads on some of my pages. The code basically looks up banners from a database table and displays them at random. The ASP classic code that uses it looks like this:

        <%
        Set banner = Server.CreateObject("wwBanner.aspBanner")
        banner.BannerFile = "wwsitebanners"
        Response.Write(banner.GetBanner(-1))
        %>

    Originally this code had no specific error checking, as above, so the ASP pages just failed with 500 error pages from the Web server. To find out what the problem is, this code is more useful, at least for debugging:

        <%
        ON ERROR RESUME NEXT
        Set banner = Server.CreateObject("wwBanner.aspBanner")
        Response.Write(err.Number & " - " & err.Description)
        banner.BannerFile = "wwsitebanners"
        Response.Write(banner.GetBanner(-1))
        %>

    which results in:

        429 - ActiveX component can't create object

    which at least gives you a slight clue. In ASP.NET, invoking the same COM object with code like this:

        <%
        dynamic banner = wwUtils.CreateComInstance("wwBanner.aspBanner") as dynamic;
        banner.cBANNERFILE = "wwsitebanners";
        Response.Write(banner.getBanner(-1));
        %>

    results in:

        Retrieving the COM class factory for component with CLSID
        {B5DCBB81-D5F5-11D2-B85E-00600889F23B} failed due to the following error:
        80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).

    The class is in fact registered, though, and the COM server loads fine from a command prompt or other COM client. This error can be caused by a COM server that doesn't load. It looks like a COM registration error. There are a number of traditional reasons why this error can crop up:

        The server isn't registered (run regsvr32 to register a DLL server, or /regserver on an EXE server)
        Access permissions aren't set on the COM server (the Web account has to be able to read the DLL, i.e. Network Service)
        The COM server fails to load during initialization, i.e. it fails during startup

    One thing I always do to check for COM errors is fire up the server in a COM client outside of IIS and ensure that it works there first - it's almost always easier to debug a server outside of the Web environment. In my case I tried the server in Visual FoxPro on the server with:

        loBanners = CREATEOBJECT("wwBanner.aspBanner")
        loBanners.cBannerFile = "wwsitebanners"
        ? loBanners.GetBanner(-1)

    and it worked just fine. If you don't have a full dev environment on the server you can also use VBScript to do the same thing and run the .vbs file from the command prompt (note that standalone VBScript uses CreateObject rather than ASP's Server.CreateObject):

        Set banner = CreateObject("wwBanner.aspBanner")
        banner.BannerFile = "wwsitebanners"
        MsgBox(banner.getBanner(-1))

    Since both of these work, it tells me the server is registered and working properly. This leaves startup failures or permissions as the problem. I double checked permissions for the Application Pool and the permissions of the folder where the DLL lives, and both are properly set to allow access by the Application Pool impersonated user. Just to be sure I assigned an Admin user to the Application Pool, but still no go. So now what?

    64 bit Servers Ahoy

    A couple of weeks back I had set up a few of my Application Pools to 64 bit mode.
    My server is Server 2008 64 bit and by default Application Pools run 64 bit. Originally, when I installed the server, I set up most of my Application Pools as 32 bit, mainly for backwards compatibility. But as more of my code migrates to 64 bit OS's I figured it'd be a good idea to see how well the code runs in 64 bit mode. The transition has been mostly painless - until today, when I noticed the problem with the code above while scrolling through my IIS logs and seeing a lot of 500 errors on many of my ASP classic pages. The code in question on most of these pages deals with this single, simple COM object. It took a while to figure out that the problem is caused by the Application Pool running in 64 bit mode: 32 bit COM objects (i.e. my old Visual FoxPro COM component) cannot be loaded in a 64 bit Application Pool. The ASP pages using this COM component broke on the day I switched my main Application Pool into 64 bit mode, but I didn't find the problem until I searched my logs for errors by pure chance. The fix is easy enough once you know what the problem is: switch the Application Pool to Enable 32-bit Applications. Once this is done the COM objects start working correctly again.

    64 bit ASP and ASP.NET with DCOM Servers

    This is kind of off topic, but incidentally it's possible to load 32 bit DCOM (out of process) servers from ASP.NET and ASP classic even if those applications run in 64 bit application pools. In fact, in West Wind Web Connection I use this capability to run a 64 bit ASP.NET handler that talks to a 32 bit FoxPro COM server, which allows West Wind Web Connection to run in native 64 bit mode without custom configuration (which is actually quite useful). It's probably not a common usage scenario, but it's good to know that you can actually access 32 bit COM objects this way from ASP.NET. For West Wind Web Connection this works out well, as the DCOM interface only makes one non-chatty call to the backend server, which handles all the rest of the request processing.

    Application Pool Isolation is your Friend

    For me the recent failure in the classic ASP pages has just been another reminder to be very careful when moving applications to 64 bit operation. There are many little traps when switching to 64 bit that are very difficult to track and test for. I described one issue a couple of months ago where one of the default ASP.NET filters was loading the wrong version (32 bit instead of 64 bit), which was extremely difficult to track down and was caused by a very sneaky configuration switch error (basically three different entries for the same ISAPI filter, all with different bitness settings); it took me almost a full day to track down. Recently I've taken to isolating individual applications into separate Application Pools rather than my past practice of combining many apps into shared AppPools. This is a good practice assuming you have enough memory to make it work. Application Pool isolation provides more modularity and allows me to selectively move applications to 64 bit. The error above came about precisely because I moved one of my most populous app pools to 64 bit and forgot about the minimal COM object use in some of my old pages. It's easy to forget.

    To 64bit or Not

    Is it worth it to move to 64 bit? Currently I'd say: not really. In my - admittedly limited - testing I don't see any significant performance increases.
    In fact, 64 bit apps just seem to consume considerably more memory (30-50% more in my pools on average), and performance is minimally improved (less than 5% at the very best) in the load testing I've performed on a couple of sites in both modes. The only real incentive for 64 bit would be applications that require huge data spaces that exceed the 32 bit 4 gigabyte memory limit. However, I have a hard time imagining an application that needs 4 gigs of memory in a single Application Pool :-). Curious to hear other opinions on the benefits of 64 bit operation.

    © Rick Strahl, West Wind Technologies, 2005-2011. Posted in COM, ASP.NET, FoxPro
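
    As an aside to the Enable 32-bit Applications fix described above: if you'd rather script that switch than click through IIS Manager, appcmd can set the same flag (the pool name here is just an example):

        %windir%\system32\inetsrv\appcmd.exe set apppool "Classic ASP Pool" /enable32BitAppOnWin64:true

    The pool typically needs to recycle for the change to take effect, so recycle or restart the Application Pool afterwards.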

    Read the article

  • Issue 15: The Benefits of Oracle Exastack

    - by rituchhibber
    SOLUTIONS FOCUS: The Benefits of Oracle Exastack
    Paul Thompson, Director, Alliances and Solutions Partner Programs, Oracle EMEA Alliances & Channels

    Exastack is a revolutionary programme supporting Oracle independent software vendor partners across the entire Oracle technology stack. Oracle's core strategy is to engineer software and hardware together, and our ISV strategy is the same. At Oracle we design engineered systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimises performance at every layer of the stack to simplify business operations, drive down costs and accelerate business innovation. Our engineered systems are optimised to achieve enterprise performance levels that are unmatched in the industry. Faster time to production is achieved by implementing pre-engineered and pre-assembled hardware and software bundles. Our strategy of delivering a single-vendor stack simplifies and reduces costs associated with purchasing, deploying, and supporting IT environments for our customers and partners. In parallel to this core engineered systems strategy, the Oracle Exastack Program enables our Oracle ISV partners to leverage a scalable, integrated infrastructure that delivers their applications tuned, tested and optimised for high performance. Specifically, the Oracle Exastack Program helps ISVs run their solutions on the Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4 - integrated systems products in which the software and hardware are engineered to work together. These products provide OPN members with a lower cost and high performance infrastructure for database and application workloads across on-premise and cloud based environments.

    Ready and Optimized

    Oracle Partners can now leverage our new Oracle Exastack Program to become Oracle Exastack Ready and Oracle Exastack Optimized. Partners can achieve Oracle Exastack Ready status through their support for Oracle Solaris, Oracle Linux, Oracle VM, Oracle Database, Oracle WebLogic Server, Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. By doing this, partners can demonstrate to their customers that their applications are available on the latest major releases of these products. The Oracle Exastack Ready programme helps customers readily differentiate Oracle partners from lesser software developers, and identify applications that support Oracle engineered systems. Achieving Oracle Exastack Optimized status demonstrates that an OPN member has proven itself against goals for performance and scalability on Oracle integrated systems. This status enables end customers to readily identify Oracle partners that have tested and tuned their solutions for optimum performance on an Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, and Oracle SPARC SuperCluster T4-4. These ISVs can display the Oracle Exadata Optimized, Oracle Exalogic Optimized or Oracle SPARC SuperCluster Optimized logos on websites and on all their collateral to show that they have tested and tuned their application for optimum performance.
    Deliver higher value to customers

    Oracle's investment in engineered systems enables ISV partners to deliver higher value to customer business processes. New innovations are enabled through extreme performance unachievable through traditional best-of-breed multi-vendor server/software approaches. Core product requirements can be launched faster, enabling ISVs to focus research and development investment on core competencies in order to bring value to market as quickly as possible. Through Exastack, partners no longer have to worry about the underlying product stack, which allows greater focus on the development of intellectual property above the stack. Partners are not burdened by platform issues and can concentrate simply on furthering their applications. The advantage to end customers is that partners can focus all efforts on business functionality, rather than bullet-proofing underlying technologies, and so will inevitably deliver application updates faster. Exastack provides ISVs with a number of flexible deployment options, such as on-premise or Cloud, while maintaining one single code base for applications regardless of customer deployment preference. Customers buying their solutions from Exastack ISVs can therefore be confident in deploying on their own networks, on private clouds or into a public cloud. The underlying platform will support all conceivable deployments, enabling a focus on the ISV's application itself that wouldn't be possible with other vendor partners. It stands to reason that Exastack accelerates time to value as well as lowering implementation costs all round. There is a big competitive advantage in partners being able to offer customers an optimised, pre-configured solution rather than an assortment of components and a suggested fit. Once a customer has decided to buy an Oracle Exastack Ready or Optimized partner solution, it will be up and running without any need for the customer to conduct testing of its own. Operational costs and complexity are also reduced, thanks to streamlined customer support through standardised configurations and pro-active monitoring. 'Engineered to Work Together' is a significant statement of Oracle strategy. It guarantees smoother deployment of a single vendor solution, clear ownership with no finger-pointing and the peace of mind of the Oracle Support Centre underpinning the entire product stack.

    Next steps

    Every OPN member with packaged applications must seriously consider taking steps to become Exastack Ready, or Exastack Optimized, at the first opportunity. The first step down that track is to talk to an expert on the OPN Portal, at the Oracle Partner Business Center, or to discuss the next steps with the closest Oracle account manager. Oracle Exastack lab environments and other technical enablement resources are available for OPN members wishing to further their knowledge of Oracle Exastack and qualify their applications for Oracle Exastack Optimized. New Boot Camps and Guided Learning Paths (GLPs), tailored specifically for ISVs, are available for Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Linux, Oracle Solaris, Oracle Database, and Oracle WebLogic Server. More information about these GLPs and Boot Camps (including delivery dates and locations) is posted on the OPN Competency Center and corresponding OPN Knowledge Zones. Learn more about Oracle Exastack labs and ISV-specific enablement resources.
"Oracle Specialized partners are of course front-and-centre, with potential customers clearly directed to those partners and to Exadata Ready partners as a matter of priority." --More OpenWorld 2011 highlights for Oracle partners and customers Oracle Application Testing Suite 9.3 application testing solution for Web, SOA and Oracle Applications Oracle Application Express Release 4.1 improving the development of database-centric Web 2.0 applications and reports Oracle Unified Directory 11g helping customers manage the critical identity information that drives their business applications Oracle SOA Suite for healthcare integration Oracle Enterprise Pack for Eclipse 11g demonstrating continued commitment to the developer and open source communities Oracle Coherence 3.7.1, the latest release of the industry's leading distributed in-memory data grid Oracle Process Accelerators helping to simplify and accelerate time-to-value for customers' business process management initiatives Oracle's JD Edwards EnterpriseOne on the iPad meeting the increasingly mobile demands of today's workforces Oracle CRM On Demand Release 19 Innovation Pack introducing industry-leading hosted call centre and enterprise-marketing capabilities designed to drive further revenue and productivity while reducing costs and improving the customer experience Oracle's Primavera Portfolio Management 9 for businesses delivering on project portfolio goals with increased versatility, transparency and accuracy Oracle's PeopleSoft Human Capital Management (HCM) 9.1 On Demand Standard Edition helping customers manage their long-term investment in enterprise-wide business applications New versions of Oracle FLEXCUBE Universal Banking and Oracle FLEXCUBE Investor Servicing for Financial Institutions, as well as Oracle Financial Services Enterprise Case Management, Oracle Financial Services Pricing Management, Oracle Financial Management Analytics and Oracle Tax Analytics Oracle Utilities Network Management System 1.11 offering new modelling and analysis features to improve distribution-grid management for electric utilities Oracle Communications Network Charging and Control 4.4 helping communications service providers (CSPs) offer their customers more flexible charging options Plus many, many more technology announcements, enhancements, momentum news and community updates -- Oracle OpenWorld 2012 A date has already been set for Oracle OpenWorld 2012. Held once again in San Francisco, exhibitors, partners, customers and Oracle people will gather from 30 September until 4 November to meet, network and learn together with the rest of the global Oracle community. Register now for Oracle OpenWorld 2012 and save $$$! We'll reward your early planning for Oracle OpenWorld 2012 with reduced rates. Super Saver deals are now available! -- Back to the welcome page

    Read the article

  • [GEEK SCHOOL] Network Security 4: Windows Firewall: Your System’s Best Defense

    - by Ciprian Rusen
    If you have your computer connected to a network, or directly to your Internet connection, then having a firewall is an absolute necessity. In this lesson we will discuss the Windows Firewall – one of the best security features available in Windows! The Windows Firewall made its debut in Windows XP. Prior to that, Windows systems needed to rely on third-party solutions or dedicated hardware to protect them from network-based attacks. Over the years, Microsoft has done a great job with it and it is one of the best firewalls you will ever find for Windows operating systems. Seriously, it is so good that some commercial vendors have decided to piggyback on it! Let’s talk about what you will learn in this lesson. First, you will learn about what the Windows Firewall is, what it does, and how it works. Afterward, you will start to get your hands dirty and edit the list of apps, programs, and features that are allowed to communicate through the Windows Firewall depending on the type of network you are connected to. Moving on from there, you will learn how to add new apps or programs to the list of allowed items and how to remove the apps and programs that you want to block. Last but not least, you will learn how to enable or disable the Windows Firewall, for only one type of network or for all network connections. By the end of this lesson, you should know enough about the Windows Firewall to use and manage it effectively.

    What is the Windows Firewall?

    Windows Firewall is an important security application that’s built into Windows. One of its roles is to block unauthorized access to your computer. The second role is to permit authorized data communications to and from your computer. Windows Firewall does these things with the help of rules and exceptions that are applied both to inbound and outbound traffic. They are applied depending on the type of network you are connected to and the location you have set for it in Windows when connecting to the network. Based on your choice, the Windows Firewall automatically adjusts the rules and exceptions applied to that network. This makes the Windows Firewall a product that’s silent and easy to use. It bothers you only when it doesn’t have any rules and exceptions for what you are trying to do or what the programs running on your computer are trying to do. If you need a refresher on the concept of network locations, we recommend you read our How-To Geek School class on Windows Networking. Another benefit of the Windows Firewall is that it is so tightly and nicely integrated into Windows and all its networking features that some commercial vendors decided to piggyback onto it and use it in their security products. For example, products from companies like Trend Micro or F-Secure no longer provide their proprietary firewall modules but use the Windows Firewall instead. Except for a few wording differences, the Windows Firewall works the same in Windows 7 and Windows 8.x. The only notable difference is that in Windows 8.x you will see the word “app” being used instead of “program”.

    Where to Find the Windows Firewall

    By default, the Windows Firewall is turned on and you don’t need to do anything special in order for it to work. You will see it displaying some prompts once in a while, but they show up so rarely that you might forget that it is even working. If you want to access it and configure the way it works, go to the Control Panel, then go to “System and Security” and select “Windows Firewall”.
    Now you will see the Windows Firewall window, where you can get a quick glimpse of whether it is turned on and the type of network you are connected to: private networks or public networks. For the network type that you are connected to, you will see additional information like:

        - The state of the Windows Firewall
        - How the Windows Firewall deals with incoming connections
        - The active network
        - When the Windows Firewall will notify you

    You can easily expand the other section and view the default settings that apply when connecting to networks of that type. If you have installed a third-party security application that also includes a firewall module, chances are that the Windows Firewall has been disabled, in order to avoid performance issues and conflicts between the two security products. If that is the case for your computer or device, you won’t be able to view any information in the Windows Firewall window and you won’t be able to configure the way it works. Instead, you will see a warning that says: “These settings are being managed by vendor application – Application Name”. In the screenshot below you can see an example of how this looks.

    How to Allow Desktop Applications Through the Windows Firewall

    Windows Firewall has a very comprehensive set of rules, and most Windows programs that you install add their own exceptions to the Windows Firewall so that they receive network and Internet access. This means that you will see prompts from the Windows Firewall on occasion, generally when you install programs that do not add their own exceptions to the Windows Firewall’s list. In a Windows Firewall prompt, you are asked to select the network locations to which you allow access for that program: private networks or public networks. By default, Windows Firewall selects the checkbox that’s appropriate for the network you are currently using. You can decide to allow access for both types of network locations or just to one of them. To apply your setting press “Allow access”. If you want to block network access for that program, press “Cancel” and the program will be set as blocked for both network locations. At this step you should note that only administrators can set exceptions in the Windows Firewall. If you are using a standard account without administrator permissions, the programs that do not comply with the Windows Firewall rules and exceptions are automatically blocked, without any prompts being shown. You should note that in Windows 8.x you will never see any Windows Firewall prompts related to apps from the Windows Store. They are automatically given access to the network and the Internet based on the assumption that you are aware of the permissions they require, based on the information displayed by the Windows Store. Windows Firewall rules and exceptions are automatically created for each app that you install from the Windows Store. However, you can easily block access to the network and the Internet for any app, using the instructions in the next section.

    How to Customize the Rules for Allowed Apps

    Windows Firewall allows any user with an administrator account to change the list of rules and exceptions applied for apps and desktop programs. In order to do this, first start the Windows Firewall. On the column on the left, click or tap “Allow an app or feature through Windows Firewall” (in Windows 8.x) or “Allow a program or feature through Windows Firewall” (in Windows 7). Now you see the list of apps and programs that are allowed to communicate through the Windows Firewall.
At this point, the list is grayed out and you can only view which apps, features, and programs have rules that are enabled in the Windows Firewall.
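
    For reference alongside the graphical steps above, the same state check and the same kind of allow rule can also be handled from an elevated command prompt with netsh; the rule name and program path below are made up for illustration:

        rem Show whether the firewall is on for each profile
        netsh advfirewall show allprofiles

        rem Allow an example desktop program on private networks only
        netsh advfirewall firewall add rule name="My App (private)" dir=in action=allow profile=private program="C:\Program Files\MyApp\MyApp.exe" enable=yes

        rem Remove the example rule again
        netsh advfirewall firewall delete rule name="My App (private)"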

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28  | Next Page >