Search Results

Search found 14199 results on 568 pages for 'dirty bird design'.

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    By Applications User Experience on March 10, 2011. Michele Molnar, Senior Usability Engineer, Applications User Experience.

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: How do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM usability engineers compared the Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. This method breaks a task into a sequence of operators; by applying standard time estimates to each operator in the task, an estimate of the overall task time can be calculated. The Applications User Experience group recently adopted workflow time analysis for predicting end user productivity: a design can be tested and refined as needed to improve productivity even before it is coded.

    For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM usability engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle's existing applications. Together, we performed the following activities:

    - Identified the five most frequently performed tasks
    - Created detailed task scenarios that provided the context for each task
    - Conducted task walkthroughs
    - Analyzed and documented the steps and flow required to complete each task
    - Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would yield productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in roughly half the time they take with existing Oracle applications. Further analysis showed that these gains would come from reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. It does not replace usability testing with end users, but it can serve as an early predictor of design productivity before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results; such results can potentially validate the productivity improvement predictions.

    Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)
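    The arithmetic behind the method is simple enough to sketch. Below is a rough JavaScript illustration; the operator names and time values are invented for the example and are not the team's actual standard estimates.

        // Hypothetical per-operator time estimates, in seconds (illustrative only).
        const operatorTimes = { navigate: 2.5, click: 1.2, read: 1.8, type: 2.3 };

        // A task is modeled as a sequence of operators; the predicted task time
        // is the sum of the standard estimates for each operator.
        function estimateTaskTime(operators) {
          return operators.reduce((total, op) => total + operatorTimes[op], 0);
        }

        const existingDesign = ["navigate", "click", "read", "type", "navigate", "click", "read"];
        const fusionDesign   = ["click", "read", "type", "click"];

        const before = estimateTaskTime(existingDesign);
        const after  = estimateTaskTime(fusionDesign);
        console.log(`Predicted gain: ${Math.round((1 - after / before) * 100)}%`);

    Because the model is just a sum over operators, a design with fewer clicks and screens scores a shorter predicted time, which is exactly the effect the study measured.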

    Read the article

  • Reducing Deadlocks - not a DBA issue?

    - by steveh99999
    As a DBA, I'm involved on an almost daily basis in troubleshooting SQL Server performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to become a wider application/database design/coding issue.

    One common perception with SQL Server is that deadlocking is an application design issue - and is fixed by recoding. I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in which code accesses tables. Whilst this is correct, I do think it has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing it further. A couple of 'war stories' on deadlocks which I think are interesting:

    Case One: I had an issue recently on a third-party application that I support on SQL 2008. This particular application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database; however, we are not allowed to alter application code or modify table structure. This application is also known to encounter occasional deadlocks - indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual! So, as a DBA I have to support an application which in my opinion has too many deadlocks, but I cannot influence the design of its tables or stored procedures. This should be the classic blame-the-third-party-developers scenario, where we hope the issue gets addressed in a future application release - i.e. we could wait years for a fix to reach our production environment. But, as DBAs can change the index layout, was there anything I could still do to reduce the deadlocks in the application?

    I initially used SQL trace flag 1222 to write deadlock detection output to the SQL error log; using this, I was able to identify one table heavily involved in the deadlocks. When I examined the table definition, I was surprised to see it was a heap - i.e. no clustered index existed on the table. Using SQL Profiler to see the locking behaviour and the plan for the query involved in the deadlock, I was able to confirm a table scan was being performed. By creating an appropriate clustered index, it was possible to produce a more efficient plan and better locking behaviour. Fewer locks, held for less time = less possibility of deadlocks. I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor.

    Case Two: a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'. This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update statistics had regenerated the table statistics using its default sampling behaviour. As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - greatly reducing the chance of the statistics being updated via auto_update_stats, and consequently reducing the potential for a bad plan to be generated from an unrepresentative sample of the data. This reduced the possibility of a deadlock occurring. Not a perfect solution by any means, but quick, easy to implement, and needing no application code changes. This fix gave us some breathing space to properly fix the code during the next scheduled application release.

    The moral of this post: don't dismiss deadlocks as issues that can only be fixed by developers...

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation
    John Courtright, Structural Integrity Engineering

    - There is heightened FAA attention to technical issues related to IFE and Wi-Fi systems installations.
    - The Aging Aircraft Safety Rule - EWIS & Damage Tolerance Analysis.
    - The challenge: maximize flight safety while minimizing costs.
    - Issue papers & testing, testing, testing.
    - The role of Airworthiness Directives (ADs) in the design of many IFE systems and all antenna systems.
    - The goal is safety AND cost-effective maintenance intervals and inspection techniques.

    The STC Process Briefly Stated
    - Type Certifications (TC)
    - Supplemental Type Certifications (STC)

    The STC Process
    - Project Specific Certification Plan (PSCP)
    - Managed by an FAA Aircraft Certification Office (ACO)
    - Type of project (electrical/mechanical systems or structural)
    - Specific type of aircraft being modified
    - Schedule
    - Design & installation location

    What does the STC Plan (PSCP) cover?
    - System description - What does the system do?
    - System qualification - Are the components qualified?
    - Certification requirements - What FARs are applicable?
    - Installation detail - What is being modified?
    - Prototype installation - What is new?
    - Functional Hazard Assessment (FHA) - Is it safe?
    - EZAP-EWIS requirements - Any aging aircraft issues?
    - Certification data - How is compliance achieved?
    - Delegation and FAA involvement - Who is doing the work?
    - Proposed certification schedule - When is the installation?
    - Certification documentation - What the FAA expects to see.

    Cabin Systems Certification Concerns
    In addition to meeting the requirements of DO-160, cabin system certification needs to address issues related to:
    - Power management: generally, IFE and Wi-Fi systems are classified as "Non-Essential Equipment" from a certification viewpoint. They are connected to "non-essential" power buses, and it must be possible to shed IFE & Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160).
    - The FAA is more relaxed about testing Wi-Fi: it used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50.
    - Aging aircraft concerns - electrical and structural.
    - Issue papers addressing technical concerns, including "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria".

    DO-160: Environmental Test Procedures
    - DO-160, "Environmental Conditions and Test Procedures for Airborne Equipment", is issued by RTCA and provides guidance to equipment manufacturers as to testing requirements.
    - Temperature: -40C to +55C
    - Vibration and shock
    - Contaminant susceptibility - fluids and dust
    - Electro-magnetic interference
    - Cabin systems are generally classified as "non-essential".
    - Swissair 111 crashed (in part) due to non-standard wiring practices.

    EWIS Design Implications
    Installation design must take EWIS requirements into account. This generally means:
    - Aircraft surveys are needed to identify proper wire routing.
    - Ensure existing wiring diagrams are correct.
    - Identify primary/secondary/tertiary bus locations.
    - Verify that proper separation of wire bundles exists.
    - Maintain the required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition.

    Enhanced Zonal Analysis Procedure (EZAP)
    EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC). It is the method for analyzing airplane zones, with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.

    Certification Considerations for Wi-Fi Systems
    Electrical:
    - All existing DO-160 testing is required.
    - Issue papers are required.
    - Onboard EMI testing - any interference with aircraft systems when multiple Wi-Fi users are logged on?
    - Vibration/buffeting compliance criteria - what is the effect of the antenna on aircraft flight characteristics?
    - Structural certification criteria - what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline?
    - Damage tolerance analysis is required.
    - Goal: minimize maintenance inspection intervals.

    Read the article

  • Web Developer - How to enhance my skillset?

    - by atif089
    First of all, pardon my English; I am not a native English speaker. I have been a web developer for the past 4 years. In these 4 years I have spent my time on the internet learning things. My current skillset comprises HTML, CSS, PHP, MySQL, and jQuery (I would not say JS - I say jQuery because I am good at using jQuery and bad with plain JavaScript). The above things seemed like an easier part of my life, as I learned them quickly. But now I would really like to enhance my skillset, and I am pretty confused about which way to move ahead, considering that I have to learn things on my own using the web and references.

    Design: My first option is design. Shall I get started with design and begin using Adobe Illustrator, Photoshop, Flash, and Flex? Design along with my previous skills looks like a money-maker to me, as the two are closely related where web design is concerned. It's easier to learn the first two, and I hope I can get tutorials for the last two as well.

    Marketing: A lot of my existing clients have asked me if I do SEO, so this looks like a good field as well. I cannot estimate the scope of SEO, but I assume it has a long future. Since I am business-minded and there are a lot of tutorials around, should I start with SEO, SEM, social media, PPC, or whatever else it consists of?

    Software Development: The most complex and (perhaps) hardest path, but the easiest way to find a decent job in my location. If I go for software development, which platform should I ideally go after? Should it be C# for Windows development, ASP.NET (which once again builds on my skill set), J2EE (there are a lot of jobs for J2EE developers here), or plain C and C++? Also, is it difficult to learn software languages right from Hello World using the internet? I have no clue how I learned PHP, but I am sort of a pro now, while these other languages seem like a disaster to me. I can't figure out whether that's because PHP is easier or because there were a lot of tutorials around for PHP. Anyway, is it also possible to learn software development right from Hello World using the web?

    Database / Server (Linux) / Network Administration: Seems like a job with decent pay, but fewer jobs and a bit harder to learn online. (Not sure.)

    What would be the right track for me to move ahead?

    P.S. - Age is not a constraint for me, as I am between 20-21, and I come from an IT background. I know quite a few basics: C (up to structures), C++ (up to objects; I was not able to understand templates), core Java (some basics and OOP concepts), RDBMS, Visual Basic 6 (used to do this long back), and UNIX (a bunch of commands like who, finger, chmod, ls, and a bit of #bash). Or is there anything else that I left out? I need you guys to please give me feedback and the reason why I should select that field.

    Read the article

  • Notes - Part II - Play with JavaFX

    - by Silviu Turuga
    1. Open the project from the last lesson.
    2. Double-click NotesUI.fxml; this will open JavaFX Scene Builder.
    3. On the left side you have an area called Hierarchy; from there, press Del (or Shift+Backspace on Mac) to delete the Button and the Label. You'll receive a warning that some components have been assigned an fx:id; click Delete, as we don't need them anymore.
    4. Resize the AnchorPane to have enough room for our design, e.g. 820x550px.
    5. From the top left, pick the container called Accordion and drag it over the AnchorPane.
    6. Then choose a ListView from Controls and drag it inside the Accordion. You'll notice that by default the Accordion has 2 TitledPanes, and you can switch between them by clicking on their names.
    7. I'll leave you the pleasure of doing the rest in order to get the following result. Here is the list of objects used.
    8. Save it and then return to NetBeans.
    9. Run the application; it should run without any issue. If you click the buttons, they are all functional, but nothing happens, as we haven't linked them to any action. We'll see this in the next episode.

    Now, let's play a little bit with the application and try to resize it... Have you noticed the behavior? If the form is too small, some objects aren't visible; if it is too large, there is too much space. That's for sure something your users won't like, and you as a programmer have to take care of it.

    From NetBeans, double-click NotesUI.fxml to return to JavaFX Scene Builder. Select the TextField at the bottom left of Notes, the one where I put the text Category, and then on the right side of JavaFX Scene Builder you'll notice a panel called Inspector. Choose Layout, and then click on the dotted lines to the left and bottom of the square, as you see in the image below. This will make the TextField always keep the same distance from the left and bottom, no matter the size of the form. Save and run the application. Note that whenever the form changes its height, the Category TextField keeps the same distance from the bottom.

    Select the Accordion and do the same steps, but also check the top dotted line, because we want the Accordion to have the same height as the main form. I'll leave you the pleasure of doing the same for the rest of the components. It's very important to design an application that can be resized by the user while, at the same time, all the buttons stay in place.

    The last step is to make sure our application doesn't get smaller than a certain size, as this would hide parts of our layout. So select the AnchorPane, go to Layout in the Inspector, and note down the Width and Height. Go back to NetBeans, open the file Main.java, and add the following code just after stage.setScene(scene); (around line 26):

        stage.setMinWidth(820);
        stage.setMinHeight(550);

    Use your own width and height. This will prevent the user from reducing the width or height of your application to a value that would hide parts of your layout. So now you should have done most of the design part, and next time we'll see how we can enter some data into our newly created application...

    Note: in case you missed something, here are the source files of the project up to this point.

    Read the article

  • How to create basic Adobe Illustrator files programmatically?

    - by Jonas Follesø
    I need to create a really basic Adobe Illustrator file on the clipboard that I can paste into Adobe Illustrator or Expression Design. I'm looking for code samples on how to programmatically generate Adobe Illustrator files, preferably from C# or some other .NET language (but at the moment any language goes). I have found the Adobe Illustrator 3 File Format documentation online, but it's a lot to digest for this simple scenario. I don't want to depend on the actual Adobe Illustrator program (COM interop, for instance) to generate my documents - it must be pure code.

    The code is for an Expression Studio add-in, and I need to be able to put something on the clipboard that I can paste into Expression Design. After looking at the formats Expression Design puts on the clipboard when copying a basic shape, I've concluded that ADOBE AI3 is the best one to use (the others are either rendered images, or cfXaml that you cannot paste INTO Design). So, based on this, I can't use SVG, which would probably have been easier. Another idea might be to use a PDF component, as the AI and PDF formats are supposed to be compatible? I'm also finding some references to a format called "Adobe Illustrator Clipboard Format" (AICB), but I can't find a lot of documentation about it.

    Read the article

  • Focussing on Style Sheets and Cross Browser Compatibility.

    - by Sam
    Hello everyone. Let me begin this topic by explaining my background experience with web design. I have always been more of a back-end programmer, with PHP and SQL and things. However, I do have a shallow background in HTML and CSS. The problem is, I don't know it all. What I do know is that, when it comes to designing (not back-end dirty work), I understand basic CSS properties, and I also understand HTML, and I can usually throw together a sloppy web page with the two and a couple bazillion DIV tags.

    Anyway, the problem I have always encountered is that when I design a website in a browser such as IE7 (and it looks perfect in IE7) and then look at it in IE8, IE6, or Mozilla (etc.), it gets all spacey and ugly and looks totally different from the way it should look in IE7.

    Question one: Basically, what route should I take to learn how to properly build a website - that is, put it together with CSS and HTML standards that will make my site look the same in every browser? (Not only learning standards, but where can I learn to properly write my code?) Where is a strong free resource I can use to learn these things?

    Question two: How do I properly structure my website? Do I use all external style sheets to keep dynamic page design simple, or do I hard-code some things into the DIV tags on each page? What is proper?

    Oh, and if anyone has any tutorials on how to properly design a complete layout, feel free to throw them in a response somewhere. Thank you for taking the time to read my questions, and hopefully you will understand what I am trying to get across to everyone. I need to get onto the right route of the design side of web programming so that I will know how to create successful websites in the future. Thank you, Sam Pardee

    Read the article

  • Separating Javascript and Html, when dynamically adding html via javascript

    - by optician
    I am currently building a very dynamic table for a list application, which will basically perform basic CRUD functions via AJAX. What I would like to do is separate the visual design and JavaScript to the point where I can change the design side without touching the JS side. This would only work where the design stays roughly the same (I would like to use it for rapid prototyping). Here is an example:

        <table>
          <tr><td>record-123</td><td>I am line 123</td><td>delete row</td></tr>
          <tr><td>record-124</td><td>I am line 124</td><td>delete row</td></tr>
          <tr><td>record-125</td><td>I am line 125</td><td>delete row</td></tr>
          <tr><td>add new record</td></tr>
        </table>

    Now, when I add a new record, I would like to insert a new row of HTML, but I would rather not put this HTML into the JavaScript file. What I am considering is creating a row like this on the page, near the table:

        <tr style='display:none;' id='template-row'><td>record-id</td><td>content-area</td><td>delete row</td></tr>

    When I come to add the new row, I search the page for the tag with id='template-row', grab it, do a string replace on it, and then put it in the right place in the page. As long as the design doesn't shift radically, and I keep the placeholder strings the same, designs can be quickly modified without touching the JS. Can anyone give any advice on a methodology like this?
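    For what it's worth, the cloning step can stay generic. Here is a minimal plain-JavaScript sketch of the approach described above; the template-row id and placeholder strings come from the question, while the addRow helper and its arguments are made up for illustration:

        function addRow(recordId, content) {
          var template = document.getElementById('template-row');
          var row = template.cloneNode(true);        // deep-copy the hidden template row
          row.removeAttribute('id');                 // ids must stay unique in the page
          row.style.display = '';                    // only the template stays hidden
          row.innerHTML = row.innerHTML
            .replace('record-id', 'record-' + recordId)
            .replace('content-area', content);
          template.parentNode.insertBefore(row, template); // place the new row in the table
        }

    Because the markup lives in the page rather than in the script, a designer can restyle the row freely as long as the placeholder strings survive.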

    Read the article

  • Text mining on large database (data mining)

    - by yox
    Hello, I have a large database of resumes (CVs), and a table called skills grouping all users' skills. Inside that table there's a field skill_text that describes the skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table with standardized skills. Here are some example skills extracted from the DB:

    - Sectoral and competitive analysis
    - Business Development (incl. in international settings)
    - Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
    - Creative work (Photoshop, In-Design, Illustrator)
    - checking and reporting back on campaign progress
    - organising and attending events and exhibitions
    - Development: Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
    - Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program), Mix marketing, Viral Marketing, Social network marketing.

    The output should be something like: Sectoral and competitive analysis; Business Development; Specific structure and road design software; Macao; AutoCAD; Photoshop; In-Design; Illustrator; organising events; Development; Aptana Studio; PHP; HTML; CSS; JavaScript; SQL; AJAX; Mix marketing; Viral Marketing; Social network marketing; emailing; SEO; One to one marketing.

    As you see, only skills remain - no other filler text. I know this is possible using text mining techniques, but how do I do it? The database is really large, which is a good thing, because we can calculate term frequency and decide whether something is a real skill or just meaningless text. The big problem is: how do we determine that "blablabla" is a skill? Thanks
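    As a starting point for the frequency idea mentioned in the question, here is a small JavaScript sketch that counts how often candidate phrases occur across rows of skill_text; phrases that recur across many CVs are more likely to be real skills, while one-off noise like "blablabla" stays rare. The function names and the support cutoff are invented for the example:

        // Count 1- to maxWords-word phrases across all skill_text rows.
        function countPhrases(rows, maxWords) {
          const counts = new Map();
          for (const text of rows) {
            const words = text.toLowerCase().match(/[a-z0-9+#.-]+/g) || [];
            for (let n = 1; n <= maxWords; n++) {
              for (let i = 0; i + n <= words.length; i++) {
                const phrase = words.slice(i, i + n).join(' ');
                counts.set(phrase, (counts.get(phrase) || 0) + 1);
              }
            }
          }
          return counts;
        }

        // Keep phrases that occur at least minSupport times overall,
        // most frequent first.
        function likelySkills(rows, maxWords, minSupport) {
          return [...countPhrases(rows, maxWords).entries()]
            .filter(([, count]) => count >= minSupport)
            .sort((a, b) => b[1] - a[1])
            .map(([phrase]) => phrase);
        }

    A real pass would also strip stop words and fold near-duplicates ("organising events" vs. "organising and attending events"), but frequency alone already separates recurring skills from noise.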

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read a lot about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules to those numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and, as a result of this calculation, update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking of this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - A questionable design solution that may encounter unknown problems.
    - It is difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • documenting class attributes

    - by intuited
    I'm writing a lightweight class whose attributes are intended to be publicly accessible, and only sometimes overridden in specific instantiations. There's no provision in the Python language for creating docstrings for class attributes, or any sort of attributes, for that matter. What is the accepted way, should there be one, to document these attributes? Currently I'm doing this sort of thing:

        class Albatross(object):
            """A bird with a flight speed exceeding that of an unladen swallow.

            Attributes:
            """
            flight_speed = 691
            __doc__ += """
                flight_speed (691)
                  The maximum speed that such a bird can attain.
            """

            nesting_grounds = "Raymond Luxury-Yacht"
            __doc__ += """
                nesting_grounds ("Raymond Luxury-Yacht")
                  The locale where these birds congregate to reproduce.
            """

            def __init__(self, **keyargs):
                """Initialize the Albatross from the keyword arguments."""
                self.__dict__.update(keyargs)

    Although this style doesn't seem to be expressly forbidden in the docstring style guidelines, it's also not mentioned as an option. The advantage here is that it provides a way to document attributes alongside their definitions, while still creating a presentable class docstring, and it avoids having to write comments that reiterate the information from the docstring. I'm still kind of annoyed that I have to actually write the attributes twice; I'm considering using the string representations of the values in the docstring to at least avoid duplication of the default values.

    Is this a heinous breach of the ad hoc community conventions? Is it okay? Is there a better way? For example, it's possible to create a dictionary containing values and docstrings for the attributes and then add the contents to the class __dict__ and docstring towards the end of the class declaration; this would alleviate the need to type the attribute names and values twice.

    edit: this last idea is, I think, not actually possible, at least not without dynamically building the class from data, which seems like a really bad idea unless there's some other reason to do that.

    I'm pretty new to Python and still working out the details of coding style, so unrelated critiques are also welcome.

    Read the article

  • NSFetchedResultsController sections localized sorted

    - by Gerd
    How can I use NSFetchedResultsController with a translated sort key and sectionNameKeyPath?

    Problem: the property "type" in the database holds IDs like typeA, typeB, typeC, ... rather than the value directly, because the value should be localized. In English, typeA=Bird, typeB=Cat, typeC=Dog; in German it would be Vogel, Katze, Hund. With an NSFetchedResultsController whose sort key and sectionNameKeyPath are "type", I receive the order and sections typeA, typeB, typeC. Next I translate for display, and everything is fine in English: Bird, Cat, Dog. Now I switch to German and receive a wrong sort order - Vogel, Katze, Hund - because it still sorts by typeA, typeB, typeC.

    So I'm looking for a way to localize the sort for the NSFetchedResultsController. I tried the transient-property approach, but it doesn't work for the sort key, because the sort key needs to be a modeled attribute of the entity. I have no other idea, but I can't believe it's not possible to use NSFetchedResultsController with a derived attribute required for localization. There are related discussions, like http://stackoverflow.com/questions/1384345/using-custom-sections-with-nsfetchedresultscontroller, but the difference is that there the custom section names and the sort key probably have the same order. Not in my case, and this is the main difference. In the end, I would need a sort order for the necessary NSSortDescriptor on a derived attribute, I guess; this sort order also has to serve for the sectionNameKeyPath. Thanks for any hint.

    Read the article

  • Git and cloning

    - by jriff
    Hi all! I have built an app for a client called 'A' (not really). I have found that it is very cool and I want to sell it to other clients as well. The directory 'A' is a Git repository, and I think I have a problem with cloning it.

    As far as I can see, I need to make a copy of the dir 'A' and call it 'Generic_A', then delete the dir 'A' and do a "git clone Generic_A A". Then I could start changing the 'Generic_A' repo into a generic design with all client references removed. But that is kind of the other way around: I should have started with the generic design and then cloned the repo to make the client-specific changes.

    Can I:

    1. make a new branch
    2. do all the changes to make the design generic
    3. create a patch that reflects the changes between the two
    4. remove the client-specific branch
    5. rename the directory to 'Generic_A'
    6. clone the repo to a new dir 'A'
    7. apply the patch to get the client-specific stuff back

    And if yes - how do I make the patch and apply it? Regards, Jacob

    Read the article

  • Xml updating with linq not working when querying

    - by user1734230
    I have a problem: I'm trying to update a specific part of an XML file with a LINQ query, but it doesn't work. I have this XML file:

        <?xml version="1.0" encoding="utf-8"?>
        <DesignConfiguration>
          <Design name="CSF_Packages">
            <SourceFolder>C:\CSF_Packages</SourceFolder>
            <DestinationFolder>C:\Documents and Settings\xxx</DestinationFolder>
            <CopyLookups>True</CopyLookups>
            <CopyImages>False</CopyImages>
            <ImageSourceFolder>None</ImageSourceFolder>
            <ImageDesinationFolder>None</ImageDesinationFolder>
          </Design>
        </DesignConfiguration>

    I want to select the Design element whose name attribute matches, get its descendants, and then update the descendants' values - that means this part:

        <SourceFolder>C:\CSF_Packages</SourceFolder>
        <DestinationFolder>C:\Documents and Settings\xxx</DestinationFolder>
        <CopyLookups>True</CopyLookups>
        <CopyImages>False</CopyImages>
        <ImageSourceFolder>None</ImageSourceFolder>
        <ImageDesinationFolder>None</ImageDesinationFolder>

    I have this code:

        XDocument configXml = XDocument.Load(configXMLFileName);
        var updateData = configXml.Descendants("DesignConfiguration")
                                  .Elements()
                                  .Where(el => el.Name == "Design"
                                            && el.Attribute("name").Value.Equals("AFP_GRAFIKA"))
                                  .FirstOrDefault();
        configXml.Save(configXMLFileName);

    I'm getting null in the updateData variable. When I try the Descendants function through QuickWatch, it also returns a null value. When I check the configXml variable, it has the data - the whole XML. What am I doing wrong?

    Read the article

  • Array of Sentences?

    - by user1869915
    JavaScript noob here... I am trying to build a site that will help my kids read predefined sentences from a select group; when a button is clicked, it will display one of the sentences. Is an array the best option for this? For example, I have this array (below), and on the click of a button I would like one of these sentences to appear on the page.

        <script type="text/javascript">
        var Sentence = new Array();
        Sentence[0] = 'Can we go to the park.';
        Sentence[1] = 'Where is the orange cat? Said the big black dog.';
        Sentence[2] = 'We can make the bird fly away if we jump on something.';
        Sentence[3] = 'We can go down to the store with the dog. It is not too far away.';
        Sentence[4] = 'My big yellow cat ate the little black bird.';
        Sentence[5] = 'I like to read my book at school.';
        Sentence[6] = 'We are going to swim at the park.';
        </script>

    Again, is an array the best for this, and how could I get a sentence to display? Ideally I would want the button to randomly select one of the sentences, but just displaying one of them for now would help. Thanks
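    An array is a fine fit here. One way to wire it up, assuming the page also has a button and a paragraph with the ids used below (those ids are invented for the example):

        <button id="next">Show a sentence</button>
        <p id="output"></p>
        <script type="text/javascript">
        document.getElementById('next').onclick = function () {
          // Pick a random index from 0 to Sentence.length - 1 and display it.
          var pick = Math.floor(Math.random() * Sentence.length);
          document.getElementById('output').textContent = Sentence[pick];
        };
        </script>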

    Read the article

  • Win a place at a SQL Server Masterclass with Kimberly Tripp and Paul Randal

    - by Testas
    The top things YOU need to know about managing SQL Server - in one place, on one day - presented by two of the best SQL Server industry trainers! And you could be there courtesy of the UK SQL Server User Group and SQL Server Magazine! This week the UK SQL Server User Group will provide you with details of how to win a place at this must-see seminar.

    You can also register for the seminar yourself at: www.regonline.co.uk/kimtrippsql

    More information about the seminar

    Where: Radisson Edwardian Heathrow Hotel, London
    When: Thursday 17th June 2010

    This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the effect application design choices have on database performance.

    The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years of combined experience working with SQL Server in the field and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will:

    - Debunk many of the ingrained misconceptions around SQL Server's behaviour
    - Show you disaster recovery techniques critical to preserving your company's life-blood - the data
    - Explain how a common application design pattern can wreak havoc in the database
    - Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier!

    Please note: the agenda may be subject to change.

    Session Abstracts

    KEYNOTE: Bridging the Gap Between Development and Production

    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA who can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day, discussing some of the issues that can arise, explaining how some can be avoided, and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server and troubleshoot when things go wrong.

    SESSION ONE: SQL Server Mythbusters

    It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices, so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.

    SESSION TWO: Database Recovery Techniques Demo-Fest

    Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!

    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward

    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And I know why GUIDs are chosen - it simplifies the handling of parent/child rows in your batches so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes it's even for distributed databases and/or security that GUIDs are chosen. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips, and clustering keys!

    SESSION FOUR: Essential Database Maintenance

    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years of combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include backups, shrinks, fragmentation, statistics, and much more! The focus will be on 2005, but we'll explain some of the key differences for 2000 and 2008 as well.

    Speaker Biographies

    Paul S. Randal and Kimberly L. Tripp are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and was ultimately responsible for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification and train throughout Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
    It's no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residency in your data store? Some might argue that it's the responsibility of the person sending you the data. While that may be true, in practice it will rarely hold up: it doesn't matter how many times you ask, you will get the data however they decide to provide it.

    So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example:

    - Invalid date values
    - Inappropriate characters
    - Wrong data
    - Values that exceed a pre-set threshold

    While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

    Tip 1: Build reusable data objects with embedded cleansing rules

    One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor's concept of Semantic Types, you can define reusable data objects that have embedded logic, such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I've defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be applied automatically.

    Tip 2: Unlock the power of regular expressions in Semantic Types

    Another powerful feature introduced in expressor Studio 3.2 is the option to use a regular expression as a constraint. A regular expression is used to identify patterns within data. The pattern could be something as simple as a date format or something much more complex, such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, separated by periods. So 192.168.23.123 might be a valid IP address, whereas 888.777.0.123 would not be. How can I account for this using regular expressions?

    A very simple regular expression that would look for any four groups of one to three digits separated by periods would be:

        ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$

    Alternatively, the following is an exact check for truly valid IP addresses as defined above:

        ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$

    In expressor, we would enter this regular expression as a constraint. Here we select the corrective action to be 'Escalate', meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

    Tip 3: Email pattern expressions that might come in handy

    In the example schema that I am using, there's a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam. While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating them.

    This one is short and sweet (source: http://www.regular-expressions.info/):

        \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b

    This one is more specific about which characters are allowed (source: http://regexlib.com/REDetails.aspx?regexp_id=26):

        ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$

    Tip 4: Reject "dirty data" for analysis or further processing

    Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture reject records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, configure the Write File operator in a similar way to the Read File operator; the key difference is that its schema needs to be derived from the upstream operator. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that is helpful in identifying why each record was rejected. This makes diagnosing errors much easier!

    Tip 5: Use a Filter or Transform after the initial cleansing to finish the job

    Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data about males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic which isolates the records of importance from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation; it also supports the use of conditional logic, and data can be rejected based on constraint violations.

    However, the best tip I can leave you with is to not constrain your solution design approach - expressor operators can be combined in many different ways to achieve the desired results. For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.

    I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
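    Outside of expressor, the article's patterns are easy to sanity-check in a few lines of JavaScript. The test values below are the ones from the article; the /i flag is added only because the short email pattern is written with uppercase ranges:

        var simpleEmail = /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i;
        var strictIp = /^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$/;

        console.log(simpleEmail.test('someone@example.com')); // true
        console.log(strictIp.test('192.168.23.123'));         // true - every octet is 0-255
        console.log(strictIp.test('888.777.0.123'));          // false - 888 is out of range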

    Read the article

  • What's a good scheme for multi-user database synchronization?

    - by Mason Wheeler
    I'm working on a system to allow multiple users to collaborate on an online project. Everything is fairly straightforward, except for keeping the users in sync. Each user has their own local copy of the project database, which allows them to make changes and test things out, and then send the updates to the central server. But this runs into the classic synchronization question: how do you keep two users from editing the same thing and stomping each other's work? I've got an idea that should work, but I wonder if there's a simpler way to do it. Here's the basic concept:

    - All project data is stored in a relational database.
    - Each row in the database has an owner. If the current user is not the owner, he can read but not write that row. (This is enforced client-side.)
    - The user can send a request to the server to take ownership of a row, which will be granted if the server's copy says that the current owner is NULL, or to release ownership when they're done with it.
    - It is not possible to release ownership without committing changes to the server, and it is not possible to commit changes to the server without having first downloaded all outstanding changes from the server.
    - When any changes are made to rows you own, a trigger marks that row as Dirty.
    - When you commit changes, the database is scanned for all Dirty rows in all tables, the data is serialized into an update file, which is posted to the server, and all rows are marked Clean.
    - The server applies the updates on its end and keeps the file around. When other users download changes, the server sends them the update files that they haven't already received.

    So, essentially this is a reinvention of version control on a relational database. (Sort of.) As long as taking ownership and applying updates to the server are guaranteed atomic changes, and the server verifies that some smart-aleck user didn't edit their local database so they could send an update for a row they don't have ownership of, it should be guaranteed to be correct, with no need to worry about merges and merge conflicts. (I think.)

    Can anyone think of any problems with this scheme, or ways to do it better? (And no, "build [insert VCS here] into your project" is not what I'm looking for. I've thought of that already. VCSs work well with text, and not so well with other file formats, such as relational databases.)
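    To make the protocol concrete, here is a toy in-memory model in JavaScript of the ownership and dirty-flag rules described above. It is only a sketch of the rules as stated; a real implementation would live in the database, with the ownership grant and the commit made atomic on the server:

        function SyncedRow(id, data) {
          this.id = id;
          this.data = data;
          this.owner = null;   // NULL owner means anyone may claim the row
          this.dirty = false;  // set by the "trigger" on every local edit
        }

        // Server side: grant ownership only if the row is unowned.
        function takeOwnership(row, user) {
          if (row.owner !== null && row.owner !== user) {
            throw new Error('row ' + row.id + ' is owned by ' + row.owner);
          }
          row.owner = user;
        }

        // Client side: non-owners get read-only access.
        function edit(row, user, newData) {
          if (row.owner !== user) throw new Error('read-only: not the owner');
          row.data = newData;
          row.dirty = true;
        }

        // Commit: serialize all Dirty rows into an update, then mark them Clean.
        function commit(rows) {
          var update = rows.filter(function (r) { return r.dirty; })
                           .map(function (r) { return { id: r.id, data: r.data }; });
          rows.forEach(function (r) { r.dirty = false; });
          return update; // this is what gets posted to the server as the update file
        }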

    Read the article

  • Awesome Serenity (Firefly) – My Little Pony Movie Trailer Mashup [Video]

    - by Asian Angel
    Recently we featured an awesome Watchmen – My Little Pony mashup and today we are back with another great movie trailer mixer. This latest mashup video from BronyVids once again features the ever popular ponies and the movie trailer from the 2005 movie Serenity. Just for fun here is the original Serenity trailer that the video above is based on. My Little Serenity [via Geeks are Sexy] Serenity (2005) Trailer 1080p HD [YouTube]

    Read the article

  • SQL Server MasterClass winner

    - by Testas
     The winner of the SQL Server MasterClass competition courtesy of the UK SQL Server User Group and SQL Server Magazine!    Steve Hindmarsh     There is still time to register for the seminar yourself at:  www.regonline.co.uk/kimtrippsql     More information about the seminar     Where: Radisson Edwardian Heathrow Hotel, London  When: Thursday 17th June 2010  This one-day MasterClass will focus on many of the top issues companies face when implementing and maintaining a SQL Server-based solution. In the case where a company has no dedicated DBA, IT managers sometimes struggle to keep the data tier performing well and the data available. This can be especially troublesome when the development team is unfamiliar with the affect application design choices have on database performance. The Microsoft SQL Server MasterClass 2010 is presented by Paul S. Randal and Kimberly L. Tripp, two of the most experienced and respected people in the SQL Server world. Together they have over 30 years combined experience working with SQL Server in the field, and on the SQL Server product team itself. This is a unique opportunity to hear them present at a UK event which will: Debunk many of the ingrained misconceptions around SQL Server's behaviour    Show you disaster recovery techniques critical to preserving your company's life-blood - the data    Explain how a common application design pattern can wreak havoc in the database Walk through the top-10 points to follow around operations and maintenance for a well-performing and available data tier! Please Note: Agenda may be subject to change  Sessions Abstracts  KEYNOTE: Bridging the Gap Between Development and Production    Applications are commonly developed with little regard for how design choices will affect performance in production. This is often because developers don't realize the implications of their design on how SQL Server will be able to handle a high workload (e.g. blocking, fragmentation) and/or because there's no full-time trained DBA that can recognize production problems and help educate developers. The keynote sets the stage for the rest of the day. Discussing some of the issues that can arise, explaining how some can be avoided and highlighting some of the features in SQL 2008 that can help developers and DBAs make better use of SQL Server, and troubleshoot when things go wrong.   SESSION ONE: SQL Server Mythbusters  It's amazing how many myths and misconceptions have sprung up and persisted over the years about SQL Server - after many years helping people out on forums, newsgroups, and customer engagements, Paul and Kimberly have heard it all. Are there really non-logged operations? Can interrupting shrinks or rebuilds cause corruption? Can you override the server's MAXDOP setting? Will the server always do a table-scan to get a row count? Many myths lead to poor design choices and inappropriate maintenance practices so these are just a few of many, many myths that Paul and Kimberly will debunk in this fast-paced session on how SQL Server operates and should be managed and maintained.   SESSION TWO: Database Recovery Techniques Demo-Fest  Even if a company has a disaster recovery strategy in place, they need to practice to make sure that the plan will work when a disaster does strike. In this fast-paced demo session Paul and Kimberly will repeatedly do nasty things to databases and then show how they are recovered - demonstrating many techniques that can be used in production for disaster recovery. Not for the faint-hearted!   
    SESSION THREE: GUIDs: Use, Abuse, and How To Move Forward
    Since the addition of the GUID (Microsoft's implementation of the UUID), my life as a consultant and "tuner" has been busy. I've seen databases designed with GUID keys run fairly well with small workloads but completely fall over and fail because they just cannot scale. And I know why GUIDs are chosen: they simplify the handling of parent/child rows in your batches, so you can reduce round-trips or avoid dealing with identity values. And, yes, sometimes GUIDs are chosen for distributed databases and/or security. I'm not entirely against ever using a GUID, but overusing and abusing GUIDs just has to be stopped! Please, please, please let me give you better solutions and explanations on how to deal with your parent/child rows, round-trips, and clustering keys! (A short clustering-key sketch follows at the end of this listing.)

    SESSION FOUR: Essential Database Maintenance
    In this session, Paul and Kimberly will run you through their top-ten database maintenance recommendations, with a lot of tips and tricks along the way. These are distilled from almost 30 years of combined experience working with SQL Server customers and are geared towards making your databases more performant, more available, and more easily managed (to save you time!). Everything in this session will be practical and applicable to a wide variety of databases. Topics covered include backups, shrinks, fragmentation, statistics, and much more. The focus will be on SQL Server 2005, but some of the key differences for 2000 and 2008 will be explained as well.

    Speaker Biographies

    Paul and Kimberly are a husband-and-wife team who own and run SQLskills.com, a world-renowned SQL Server consulting and training company. They are both SQL Server MVPs and Microsoft Regional Directors, with over 30 years of combined experience on SQL Server. Paul worked on the SQL Server team for nine years in development and management roles, writing many of the DBCC commands, and was ultimately responsible for the core Storage Engine for SQL Server 2008. Paul writes extensively on his blog (SQLskills.com/blogs/Paul) and for TechNet Magazine, for which he is also a Contributing Editor. Kimberly worked on the SQL Server team in the early 1990s as a tester and writer before leaving to found SQLskills and embrace her passion for teaching and consulting. Kimberly has been a staple at worldwide conferences since she first presented at TechEd in 1996, and she blogs at SQLskills.com/blogs/Kimberly. They have written Microsoft whitepapers and books for SQL Server 2000, 2005 and 2008, and are regular, top-rated presenters worldwide on database maintenance, high availability, disaster recovery, performance tuning, and SQL Server internals. Together they teach the SQL MCM certification within Microsoft. In their spare time, they like to find frogfish in remote corners of the world.

    Speaker Testimonials

    "To call them good trainers is an epic understatement. They know how to deliver technical material in ways that illustrate it well. I had to stop Paul at one point and ask him how long it took to build a particular slide because the animations were so good at conveying a hard-to-describe process."

    "These are not beginner presenters, and they put an extreme amount of preparation and attention to detail into everything that they do. Completely, utterly professional."

    "When it comes to the instructors themselves, Kimberly and Paul simply have no equal. Not only are they both ultimate authorities, but they have endless enthusiasm about the material and spot-on delivery. If either ever got tired, they never showed it, even after going all day and all week. We witnessed countless demos over the course of the week, some of them extremely involved, multi-step processes, and I can't recall one that didn't go the way it was supposed to."

    "You might think that with this extreme level of skill comes extreme levels of egotism and lack of patience. Nothing could be further from the truth. ... They simply know how to teach, and are approachable, humble, and patient."

    "The experience Paul and Kimberly have had with real live customers yields a lot more information and things to watch out for than you'd ever get from documentation alone."

    "Kimberly, I just wanted to send you an email to let you know how awesome you are! I have applied some of your indexing strategies to our website's homegrown CMS and we are experiencing a significant performance increase. WOW... amazing tips delivered in an exciting way! Thanks again."
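    To make the clustering-key point from Session Three concrete, here is a minimal T-SQL sketch; the table and column names are illustrative assumptions, not material from the session. A random GUID key scatters inserts across the index, causing page splits and fragmentation, while a narrow, ever-increasing key keeps inserts at the end.

        -- Random GUIDs as the clustered key: new rows land on random pages,
        -- causing page splits and fragmentation as the table grows
        CREATE TABLE dbo.OrdersGuid (
            OrderId   UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
            OrderDate DATETIME         NOT NULL,
            CONSTRAINT PK_OrdersGuid PRIMARY KEY CLUSTERED (OrderId)
        );

        -- A narrow, ever-increasing key: inserts always go to the end of the index
        CREATE TABLE dbo.OrdersInt (
            OrderId   INT IDENTITY(1,1) NOT NULL,
            OrderDate DATETIME          NOT NULL,
            CONSTRAINT PK_OrdersInt PRIMARY KEY CLUSTERED (OrderId)
        );

    Where a GUID is genuinely required, NEWSEQUENTIALID() as a column default avoids the random insert pattern, although the key is still sixteen bytes wide.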

    Read the article

  • Add a Flight Full of Color to Your Desktop with the Beautiful Birds Theme for Windows 7

    - by Asian Angel
    Do you enjoy looking at and collecting pictures of beautifully colored birds? Then brighten up your desktop with the grace and gorgeous plumage of swans, flamingoes, peacocks, and other exotic birds with this wonderful theme for Windows 7. Note: The theme comes with seventeen awesome wallpapers full of brightly colored avian goodness. Download the Beautiful Birds Theme [Windows 7 Personalization Gallery]

    Read the article

  • How can I estimate the cost of creating a tile-set similar to HoM&M 2?

    - by Alexey Petrushin
    How do I estimate the cost of creating a tile-set similar to HoM&M 2? I'm interested in the tile-set graphics only; no animation is needed, and the big images of towns and creatures can be done as quick-and-dirty pencil sketches. The quality and number of tiles should be roughly the same as in HoM&M 2. Can you please give a rough estimate of how many man-hours it would take and how much it would cost?
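    One way to frame such an estimate, with every number below a placeholder assumption rather than a real quote: total cost is roughly (number of tiles) x (hours per tile) x (hourly rate). If, say, the set needs about 400 tiles, a pixel artist averages half an hour per simple terrain tile, and the rate is $25/hour, that gives 400 x 0.5 h = 200 man-hours, or about $5,000, before the larger town and creature sketches are added. Actual figures depend entirely on the artist and the final tile count.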

    Read the article

< Previous Page | 273 274 275 276 277 278 279 280 281 282 283 284  | Next Page >