Search Results

Search found 59278 results on 2372 pages for 'time estimation'.

Page 518/2372 | < Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >

  • Useful WatiN Extension Methods

    - by Steve Wilkes
    I've been doing a fair amount of UI testing using WatiN recently – here are some extension methods I've found useful. The first checks whether a WatiN TextField is actually a hidden field. WatiN makes no distinction between text and hidden inputs, so this can come in handy if you render an input sometimes as hidden and sometimes as a visible text field. Note that this doesn't check whether an input is visible (there's another extension method for that below); it checks whether it's a hidden input.

        public static bool IsHiddenField(this TextField textField)
        {
            if ((textField == null) || !textField.Exists)
            {
                return false;
            }

            var textFieldType = textField.GetAttributeValue("type");

            return (textFieldType != null) && (textFieldType.ToLowerInvariant() == "hidden");
        }

    The next method quickly sets the value of a text field to a given string. By default WatiN types the text you give it into a text field one character at a time, which can be necessary if you want to test behaviour triggered by individual key presses, but which most of the time is just painfully slow; this method puts the text in all at once. Note that if the field is not hidden, it is given focus first; this helps trigger validation once the value has been set and focus moves elsewhere.

        public static void SetText(this TextField textField, string value)
        {
            if ((textField == null) || !textField.Exists)
            {
                return;
            }

            if (!textField.IsHiddenField())
            {
                textField.Focus();
            }

            textField.Value = value;
        }

    Finally, here's a method which checks whether an Element is currently visible. It does so by walking up the DOM, checking for a Style.Display of 'none' on the element on which the method is invoked and on each of its ancestors.

        public static bool IsElementVisible(this Element element)
        {
            if ((element == null) || !element.Exists)
            {
                return false;
            }

            while ((element != null) && element.Exists)
            {
                if (element.Style.Display.ToLowerInvariant().Contains("none"))
                {
                    return false;
                }

                element = element.Parent;
            }

            return true;
        }

    Hope they come in handy.
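
    As a quick illustration of how these fit together – a minimal usage sketch, assuming the extension methods above are compiled into a static class that is in scope, and assuming a hypothetical page URL and field id that are not from the original post:

        using WatiN.Core;

        class WatiNUsageSketch
        {
            [System.STAThread] // WatiN's IE automation requires an STA thread
            static void Main()
            {
                // Open a page in IE, locate a text field, and use the extension
                // methods to check visibility before setting the value in one go.
                using (var browser = new IE("http://localhost/contact"))
                {
                    var email = browser.TextField(Find.ById("email"));

                    if (!email.IsHiddenField() && email.IsElementVisible())
                    {
                        email.SetText("test@example.com");
                    }
                }
            }
        }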

    Read the article

  • New whitepaper: Evolution from the Traditional Data Center to Exalogic: An Operational Perspective

    - by Javier Puerta
    IT organizations are struggling to balance the day-to-day concerns of data center management against the business-level requirement to deliver long-term value. This balancing act has proven difficult and inefficient: systems and application management tools are resource-intensive, and traditional infrastructure management architectures have developed over time on a project-by-project basis. These traditional management systems consist of multiple tools that require administrators to waste time performing too many steps for routine administrative tasks. Operational efficiency and agility in your enterprise are directly linked to the capabilities provided by the management layer across the entire stack – application, middleware, operating system, compute, network and storage. Only when this end-to-end capability is provided will we experience the full benefit of a scalable, efficient, responsive and secure data center. Managing Exalogic is substantially less complex and error-prone than managing traditional systems built from individually sourced, multi-vendor components, because Exalogic is designed to be administered and maintained as a single, integrated system (Figure 1). It is at the forefront of the industry-wide shift away from costly and inferior one-off platforms toward private clouds and Engineered Systems. Read the full whitepaper "Evolution from the Traditional Data Center to Exalogic: An Operational Perspective". The full document is available for download at the Exadata Partner Community Collaborative Workspace (for community members only – if you get an error message, please register for the Community first).

    Read the article

  • Broadcom 4313 doesn't work following 12.04 upgrade

    - by Lucas
    I have an HP G62-228CA laptop with a BCM4313, previously running Ubuntu 11.10. I ran the 12.04 upgrade last night without much thought. Following the upgrade and mandatory reboot, the wireless card no longer shows up in the network manager – the first time I've had any kind of issue with wireless under Ubuntu. I've done a lot of Googling on the issue, but so far I haven't found a permanent solution. By mucking around with some packages, though (I've installed five or so different ones), I've managed to devise a workaround that I must run every time I boot the laptop: I have to remove the Broadcom STA driver and reinstall it. Inspiration taken from here: WiFi does not work, Broadcom STA Wireless driver does not work on a BCM4313. After the second modprobe command, the wireless reappears in the network manager, and ten seconds after that I'm back on the wifi. If anyone can provide some advice on how to fix this permanently I will be extremely grateful. I'd rather not roll back to 11.10 or reinstall, but I will if I need to. Just let me know if you need the output from any terminal commands. Thanks in advance!

    Read the article

  • Windows Recovery from Grub messed up my computer?

    - by Hudson Worden
    OK, so I'm a noob when it comes to operating systems, and I think I really messed up this time. I have a laptop that dual boots Windows 7 and Linux Mint 11. I was trying to boot into Windows 7, but it would just show a black screen with a blinking cursor. So I turned off my computer and tried again. Still a black screen with a cursor. So I thought, "Well, it must be broken somehow, and I remember seeing something like 'Windows Recovery' in the boot menu, so I should try it." When I turned on my computer a third time I selected 'Windows Recovery' (something like that – I can't remember exactly what it was called). After I had selected that, I got a white Windows window that said, in big red letters, "ERROR". I turned off my computer again and turned it back on, expecting the Grub menu to reappear. I was wrong. Instead I am greeted with: error: no such partition grub rescue. Then I put in a live CD for Ubuntu 11.04 and tried looking at my partitions using the disk manager. Looking at my partitions, I notice that there isn't a Linux partition anymore, and in its place is unallocated space, yet the Linux swap partition is still there. My Windows partition is still fine and I can access the files in it. If you understand what has happened, is there any way I can get my files back? I don't care about reinstalling the OS again. I just want the files that are in the Linux Mint partition.

    Read the article

  • Working with Git on multiple machines

    - by Tesserex
    This may sound a bit strange, but I'm wondering about a good way to work in Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:

    1. Use git itself for sharing: each machine has its own repo and you fetch between them. You can work on either machine even if the other is offline, which by itself is pretty big, I think.

    2. Use one repo that is shared over the network between machines. No need to do git pulls every time you switch machines, since your code is always up to date. You never have to worry that you forgot to push code from your other, non-hosting machine, which is now out of reach, since you were working off a fileshare on this machine.

    My intuition says that everyone generally goes with the first option. But the downside I see is that you might not always be able to access code from your other machines, and I certainly don't want to push all my WIP branches to github at the end of every day. I also don't want to have to leave my computers on all the time so I can fetch from them directly. Lastly, a minor point is that all the git commands needed to keep multiple branches up to date can get tedious. Is there a third way to handle this situation? Maybe some third-party tools are available that make this process easier? If you deal with this situation regularly, what do you suggest?

    Read the article

  • Will I be able to get programming interviews at good software companies with a non-CS degree?

    - by friend
    I'll be graduating in a year, but with a degree in Economics. I'm pretty much done with my Economics coursework, and by the time next year comes around I will have devoted 1.5 years to learning CS. I will have almost finished the requirements to graduate with a degree in CS, but unfortunately my school requires a science series that would add another 6-9 months of study if I were to try to get the degree (not to mention a maximum unit cap). I have taken or will have taken:

    Object-Oriented Programming, Discrete Math, Data Structures, Calculus through multivariable (doubt this matters at all), Linear Algebra (same), Computer Organization, Operating Systems, Computational Statistics (many data mining projects in R), Parallel Programming, Programming Languages, Databases, Algorithms, Compilers, and Artificial Intelligence.

    I've done well in the ones I've taken, and I hope to do well in the rest, but will that matter if I can't tell the HR people that I have a CS degree? I'd be happy to get an internship at first too, so should I just apply as an intern rather than for full-time positions, and then try to parlay that into something? Side note, if you have time – is a computer networks or theory of computation class important? Would it be worth taking either of those in lieu of a class on my list? Edit – I know this isn't AskReddit or College Confidential; I know there will be some outrage at posting a question like this. I'm merely looking for insight into a situation that I've been struggling with, and I think this is the absolute best place to find an answer to this question. Thanks.

    Read the article

  • Is it okay to use a language that isn't supported by your company for some tasks?

    - by systempuntoout
    I work for a company that supports several languages: COBOL, VB6, C# and Java. I use those languages for my primary work, but I often find myself coding minor programs (e.g. scripts) in Python, because I've found it to be the best tool for that type of task. For example: an analyst gives me a complex CSV file to populate some DB tables, so I use Python to parse it and create a DB script. What's the problem? The main problem I see is that a few parts of these quick & dirty scripts are slowly gaining importance, and:

    My company does not support Python; the scripts are not version controlled (I back them up in another way); my coworkers do not know Python; and the analysts have even started referencing them in email ("launch the script that exports..."), so they are needed more often than I initially thought.

    I should add that these scripts are just utilities that are not part of the main project; they simply help to get trivial tasks done in less time. For my own small tasks they help a lot. In short, if I won the lottery or was in an accident, my coworkers would need to keep the project alive without those scripts; they would spend more time fixing CSV errors by hand, for example. Is this a common scenario? Am I doing something wrong? What should I do?
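
    For what it's worth, one way out of this bind is to port the scripts that have become important to a language the company does support. A minimal sketch of the CSV-to-DB-script example in C# (the file names, table name and column layout here are made up for illustration, not taken from the original post):

        using System.IO;

        class CsvToDbScript
        {
            static void Main()
            {
                // Turn each CSV line into an INSERT statement; a real script
                // would also need header handling, validation and error checks.
                using (var writer = new StreamWriter("populate.sql"))
                {
                    foreach (var line in File.ReadLines("analyst_data.csv"))
                    {
                        var fields = line.Split(',');
                        writer.WriteLine(
                            "INSERT INTO target_table (col1, col2) VALUES ('{0}', '{1}');",
                            fields[0].Replace("'", "''"),
                            fields[1].Replace("'", "''"));
                    }
                }
            }
        }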

    Read the article

  • Partition tool with console UI (as in server installation)?

    - by lepe
    Back in 2006, Ray (3DLover) posted the same question at http://ubuntuforums.org/showthread.php?t=309680, but none of the answers were really useful. Now, with a little help from the AskUbuntu community, I would like to repeat his question to see if this time it can be answered correctly. So this is the question (and what I wish for, too): I'm looking for a UI tool for managing partitions in a console. I have installed Ubuntu Server, so I don't have X Windows at all. fdisk and sfdisk are entirely command line. parted is slightly better, but it's not really a UI. cfdisk has somewhat of a UI, but it only works on one disk at a time, and there are no advanced options like configuring LVM or RAID – just partitioning. I love the partition tool that is available during the OS install procedure. You can partition and configure RAID and LVM sets; it can format the partitions with several different file systems; it can set labels and mount options; and it can insert your volumes into your fstab. Is this tool available as a stand-alone program? I can't find it anywhere. I think it's called parted_server, but I can't find much information about where to get it. In the past, I have run the Ubuntu install procedure just to use the partition manager that comes with it (canceling the install after making my partition edits). Can anyone help me with this? Thanks -Ray. Thanks in advance.

    Read the article

  • Configuring an Engenius 3500

    - by dsiddens
    The title speaks to only half of the issue; the other half is the settings in Ubuntu and the sequence in which to apply them. The computer in this issue does receive internet through the external antenna jack at the back, fed by a simple magnetic-base antenna designed for the roof of an automobile. However, that signal is weak, and the Engenius with an external antenna (Rootenna, ~15 dB gain) and an ethernet wire would supply a stronger, faster signal. I've set the Engenius to the desired source and entered the correct WEP password. The lights on the Engenius indicate that it's connected to the access point. On the Ubuntu side of this, I've worked to no avail changing settings with "Edit Connections", to the point that I'm Ask(ing)Ubuntu for help. I have (and have read) the manual for the Engenius 3500. There is an embarrassing side note to this issue: at one time I had the Engenius working! It seems that I can't recall the settings and sequences I used way back when. And I may as well confess to not knowing the command line – I'm a GUI guy. Thank you for your time, Doug

    Read the article

  • Wifi interface changes name seemingly at random

    - by ray_voelker
    I'm currently having some issues getting a wireless interface to work continuously under an install of Ubuntu 12.04.1 LTS. Some of the issues I'm experiencing:

    The connection drops out some time after it has initially worked; the interface has a different name after a reboot (for example, wlan0 will become wlan4 in the output of the ifconfig -a command); and Ubuntu takes a long time to boot while looking for network adapters.

    The purpose of this build is to function as a web kiosk in a library. The computer is supposed to boot up into a web browser and allow for browsing of the catalog. For some reason this interface does not work as it should. Are there any explanations for some of these problems, and perhaps some solutions? The wireless card appears as this in the lspci output:

        Ralink corp. RT2561/RT61 802.11g PCI

    In the /etc/network/interfaces file I have the following configuration for the interface:

        auto wlan0
        iface wlan0 inet dhcp
            wireless-essid UDwireless
            wireless-mode Managed

    Thanks in advance for help on this.

    Read the article

  • MySQL documentation writer wanted

    - by stefanhinz
    As MySQL is thriving and growing, we're looking for an experienced technical writer located in Europe or North America to join the MySQL documentation team. For this job, we need the best and most dedicated people around. You will be part of a geographically distributed documentation team responsible for the technical documentation of all MySQL products. Team members are expected to work independently, which requires discipline and excellent time-management skills, as well as the technical facilities to communicate across the Internet. Candidates should be prepared to work intensively with our engineers and support personnel. The overall team is highly distributed across different geographies and time zones. Our source format is DocBook XML. We're not just writing documentation, but also handling publication. This means you should be familiar with DocBook and willing to learn our publication infrastructure; candidates should therefore be interested not just in writing but also in the technical aspects of publishing documentation. Your initial areas of authoring would be MySQL Cluster, MySQL Enterprise Monitor and Backup, and various parts of the MySQL Server documentation (also known as the MySQL Reference Manual). This means you should be familiar with MySQL in general, and preferably also with MySQL Cluster and the MySQL Enterprise offerings. Other qualifications:

    Native English speaker; 3 or more years' previous experience writing software documentation; excellent written and oral communication skills; ability to provide (online) samples of your work, e.g. books or articles; curiosity to learn new technologies; familiarity with distributed working environments and versioning systems such as SVN; comfort working on multiple operating systems, particularly Windows, Mac OS X, and Linux; and the ability to administer your own workstation and test environment.

    If you're interested, contact me at [email protected].

    Read the article

  • Fastest approach to 3D animation

    - by HappyFerret
    I'm currently tasked with designing a small HTML5 game. Having done everything by myself so far (3D models, codebase, game design, etc.), I'm now at a point where I'm running out of time. I have less than a day to animate and bind everything together. However, that's exactly my problem. I was under the naive impression that everything would be easier if I went with pre-rendered 3D models, but I didn't consider the most difficult part: animation. After having spent over an hour trying to figure out messiahStudio, I figured it's time to ask for outside help. Is there any easier approach to 3D animation than rigging? What I'm basically looking for is some sort of tool that allows me to simply grab and move/deform selected polygons. It doesn't have to be as life-like and accurate as rigging, just efficient enough. Were the circumstances any different, I might just learn how to rig, but that's sorely out of scope right now. PS: The models were created in Sculptris but are fairly low-poly.

    Read the article

  • Developing a TCK: Spec Lead Call for Spec Leads 20 December

    - by Heather VanCura
    The JCP Program will be hosting a Spec Lead call on 20 December on the topic of developing a Technology Compatibility Kit (TCK). A Technology Compatibility Kit is a required output of a JSR at Final Release, along with the Specification and Reference Implementation (RI). The TCK must test all aspects of a specification that impact how compatible an implementation of that specification would be, such as the public API and all mandatory elements of the specification. The Reference Implementation is required to pass the TCK. A vendor's implementation of a specification is only considered compatible if the implementation passes the TCK fully and completely. The TCK is used to test implementations of the Final Specification to make sure that they are fully compatible. The call will be recorded and posted on the JCP.org multimedia page along with any related materials.

    Invitation details for the online meeting:
    Topic: SL Call: Developing a TCK
    Date: Thursday, December 20, 2012
    Time: 9:30 am, Pacific Standard Time (San Francisco, GMT-08:00)
    Meeting Number: 804 390 892
    Meeting Password: 2222

    To join the audio conference: +1 (866) 682-4770 (US) or +1 (408) 774-4073; conference code: 945-4597; security code: 52775 ("JCPSL" on your phone handset). For global access numbers see http://www.intercall.com/oracle/access_numbers.htm

    Read the article

  • How does 301 redirection work across the network, and should I use it if there is a chance we may need to change the resource back to the original URL?

    - by Faust
    I've built a CMS that makes it fairly easy for my client to relocate pages in their site hierarchy. This site has all human-readable and intuitive URLs, so moving a page necessarily means that its URL changes. I am storing records of each resource's past URLs in the data store so that requests for bygone URLs are re-routed to their appropriate successors. I'm warning my clients not to re-arrange the site willy-nilly (for numerous reasons), but nevertheless I suspect there's a chance page moves could get reversed from time to time. So I'm trying to figure out whether 301, 302 or 307 redirects should be used when serving up pages in response to requests for out-of-date URLs. I understand the value of using 301 for search engine optimization, but my concern is that this system might inadvertently make some pages unavailable to some users. Questions: if the clients move a page at location/URL A to a new location B, users get the redirect from A to B, and the clients then move the page back to A again, how long can I expect any of those users to keep having their requests for A redirected to B – in this case sending them to my friendly 404 page? Is it until an item in their browser history is cleared? Is the redirect somehow cached in routers throughout the internet? How does this work? How long can I expect the 301 redirect to linger out there?
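
    On the mechanics: a 301 is cached by the user's browser (and by intermediate HTTP caches, not routers), potentially for a very long time unless the response restricts this with Cache-Control headers, whereas a 302 is not cached by default – which is why a 302 is the safer choice while a move may still be reversed. Below is a minimal, hypothetical sketch of serving redirects from a URL-history table with the permanence chosen per move; it is an illustration only, not the poster's CMS:

        using System;
        using System.Collections.Generic;
        using System.Net;

        class RedirectSketch
        {
            // Old path -> (current path, is the move considered permanent?)
            static readonly Dictionary<string, Tuple<string, bool>> Moved =
                new Dictionary<string, Tuple<string, bool>>
                {
                    { "/about-us", Tuple.Create("/company/about", false) } // still reversible: 302
                };

            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/");
                listener.Start();

                while (true)
                {
                    var context = listener.GetContext();
                    Tuple<string, bool> target;

                    if (Moved.TryGetValue(context.Request.Url.AbsolutePath, out target))
                    {
                        // 301 may be cached by clients until their cache is cleared;
                        // 302 tells them to keep requesting the old URL each time.
                        context.Response.StatusCode = target.Item2 ? 301 : 302;
                        context.Response.RedirectLocation = target.Item1;
                    }
                    else
                    {
                        context.Response.StatusCode = 404;
                    }

                    context.Response.Close();
                }
            }
        }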

    Read the article

  • How can I make my PHP development environment more efficient?

    - by pixel
    I want to start a home-brew pet project in PHP. I've spent some time in my life developing in PHP, and I've always felt it was hard to organize the development environment efficiently. In my previous PHP work, I used a Windows desktop machine and a Linux server for development. This configuration had its advantages: it's easy to configure Apache (and its modules), PHP and MySQL on a Linux box, and, at the time, this configuration was the same as on the production server. However, I never successfully set up a debug connection between my Eclipse install and Xdebug on the server. Transferring files from my local workspace to the server was also very annoying (either ftp or a Bazaar script moving files from the repository to the web root). For my new setup, I'm considering installing everything on my local machine. I'm afraid that it will slow down workstation performance (LAMP + Eclipse), and that compatibility problems will kick in. What would you recommend? Should I develop using two separate machines, or one? Do you have experience using one of the above configurations in your work?

    Read the article

  • Deepest sense of programming [closed]

    - by xralf
    I have been suffering from depression for a few months. Programming was one of my big passions (as a hobby). I had the motivation to achieve my goals (projects), to read books and articles about it, to take an interest in algorithms and data structures, compilers, etc. Then my mind started to think that it has no sense, that the result is useless. I realized that I loved programming because of an illusion that it has deep sense, and that I loved playing with code every day as nothing else, with a feeling that it leads somewhere. Could I rationalize that it makes sense to work on some programming project? That there is a deep sense in doing it and enjoying this activity? I have no idea what else I should do in my free time; the mornings without motivation are very depressing. It was a nice time when I had the illusion that programming is enjoyable. Could you help me figure out the deepest sense of programming in this world? Why should I love it again? What could be achieved and realized? (Things like a higher salary and ego are not what I'm looking for.)

    Read the article

  • How to avoid the GameManager god object?

    - by lorancou
    I just read an answer to a question about structuring game code. It made me wonder about the ubiquitous GameManager class, and how it often becomes an issue in a production environment. Let me describe this. First, there's prototyping. Nobody cares about writing great code, we just try to get something running to see if the gameplay adds up. Then there's a greenlight, and in an effort to clean things up, somebody writes a GameManager. Probably to hold a bunch of GameStates, maybe to store a few GameObjects, nothing big, really. A cute, little, manager. In the peaceful realm of pre-production, the game is shaping up nicely. Coders have proper nights of sleep and plenty of ideas to architecture the thing with Great Design Patterns. Then production starts and soon, of course, there is crunch time. Balanced diet is long gone, the bug tracker is cracking with issues, people are stressed and the game has to be released yesterday. At that point, usually, the GameManager is a real big mess (to stay polite). The reason for that is simple. After all, when writing a game, well... all the source code is actually here to manage the game. It's easy to just add this little extra feature or bugfix in the GameManager, where everything else is already stored anyway. When time becomes an issue, no way to write a separate class, or to split this giant manager into sub-managers. Of course this is a classical anti-pattern: the god object. It's a bad thing, a pain to merge, a pain to maintain, a pain to understand, a pain to transform. What would you suggest to prevent this from happening?
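
    One concrete way to head this off, sketched below, is to keep the top level as a thin composition root that owns small, single-purpose systems, so that during crunch a new feature or bugfix has an obvious home other than the god object. This is only an illustrative sketch under that assumption – the class names are made up, not from any particular engine:

        using System.Collections.Generic;

        // Each concern lives in its own small system with one job.
        interface IGameSystem
        {
            void Update(float deltaTime);
        }

        class PhysicsSystem : IGameSystem { public void Update(float dt) { /* step physics */ } }
        class AudioSystem   : IGameSystem { public void Update(float dt) { /* mix audio */ } }
        class StateSystem   : IGameSystem { public void Update(float dt) { /* tick the current game state */ } }

        // The "manager" shrinks to a composition root: it wires the systems
        // together and forwards the tick, but holds no gameplay logic itself.
        class Game
        {
            readonly List<IGameSystem> systems = new List<IGameSystem>
            {
                new PhysicsSystem(),
                new AudioSystem(),
                new StateSystem()
            };

            public void Update(float deltaTime)
            {
                foreach (var system in systems)
                {
                    system.Update(deltaTime);
                }
            }
        }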

    Read the article

  • Task-It Source Code

    Download Source Code

    I've received many questions about when the source code for the Task-It application will be released. Well, the time has finally come. I haven't been able to release this sooner due to the flurry of releases that have been coming out lately: Silverlight 4, WCF RIA Services, and even our Q1 RadControls. Each time I got the latest bits, I ran into issues (either bugs or visual issues) in Task-It that needed to be fixed. Having said that, the app is far from perfect. There are still some bugs lurking and things that need to be fixed up visually (especially the RadGridView filtering popup), but the main purpose of this app is to show the RadControls for Silverlight 4 in the context of a real-world application, and I don't want to keep delaying the release of the source code.

    Minimum requirements: to run the app you will need the latest Silverlight bits: Silverlight 4 RTM, VS2010 and the ...

    Read the article

  • Working with volunteers

    - by JavaCecilia
    I've been engaged as a scout leader in the Scout movement since 1993, working at a local and national level, leading both kids and other scout leaders. Last year, the Swedish Scout Association invited 40,000 scouts aged 14-17 from 150 countries around the world to go camping for 10 days. I was on the planning team with a couple of hundred of my closest scout friends, and over a couple of years we spent our spare time planning logistics, food, program, etc. to give these youths the experience of a lifetime. It was a big and complex project; different languages, religion (Ramadan was celebrated during the camp) and the Swedish weather were some of the factors we had to take into account. The camp was a huge success: the daily wow factor was measured, and people truly had fun and got to know each other. I learnt a lot and made friends around the globe – looking back at the pictures, it feels unreal that we managed it. The Java platform, as OpenJDK, and its future is a similar project in my mind. With 9 million developers, and being installed on 3 billion devices, the platform touches a lot of users and businesses. There's a strong community taking Java into the future, making sure it stays relevant. Finding ways to collaborate in a scalable way is the key to success here. We have the bylaws directing how decisions are made, how roles are appointed and how to "level up" within the community. Using these, we can make contributions according to our competence and interest, and innovate, taking our platform into the future. If you find a way to organize volunteers towards a common goal – solving conflicts, making decisions, dividing the work into manageable chunks and having fun while doing it – there's no end to what you can achieve.

    Read the article

  • How to correctly write an installation or setup document

    - by UmNyobe
    I just joined a small start-up as a software engineer after graduation. The start-up is 4 years old, and I am working with the CEO and the COO, even if there are some people abroad. Basically they both used to do almost everything. I am currently in some kind of training phase, and I have at my disposal the internal architecture, setup and installation documentation. The architecture documentation is like a bible and should contain complete information; the rest are used to give directions for different processes. The issue is that these documents are more or less dated, as they just didn't have the time to update them. I will be in charge of training the next hires, and updating these documents is part of my training. Some contain a lot of hard-coded information, like:

        Install this_module_which_still_exists
        cd this_dir_name_changed
        cp this_file_name_changed other_dir_name_changed
        ./config_script.sh
        ./execute_script.sh

    The issues I have faced: either the module installation is completely different (for instance, there is now an rpm, or a different OS), or names have changed and I need to replace old names with new ones; the description of the purpose of the current step is missing; or information about a whole topic is missing. Fortunately these guys are around, and I get all the information I want and all the explanations I need. I want to bring a design to the next documents so that in the future people don't feel like they are completely rewriting a document each time they update it. Do you have suggestions? If there is a lightweight design methodology available online that you can point me to, that's nice too. One thing I will do for sure is set up a versioning repository for the documents alone. There is already one for the source code, so I don't know why internal documents deserve a different treatment.

    Read the article

  • Writing generic code when your target is a C compiler

    - by enobayram
    I need to write some algorithms for a PIC microcontroller. AFAIK, the official tools support either assembler or a subset of C. My goal is to write the algorithms in a generic and reusable way without losing any runtime or memory performance – and, if possible, without increasing development time much or compromising readability and maintainability much either. What I mean by generic and reusable is that I don't want to commit to types, array sizes, the number of bits in a bit field, etc. All these specifications, IMHO, point to C++ templates, but there's no compiler for them for my target. C macro metaprogramming is another option, but, again in my opinion, that greatly reduces readability and increases development time. I believe what I'm looking for is a decent C++-to-C translator, but I'd like to hear anything else that satisfies the above requirements – maybe a translator from another high-level language to C that produces very efficient code, maybe something else. Please note that I have nothing against C; I just wish templates were available in it.

    Read the article

  • Has anyone used Game Salad before, and how does it compare with cocos2d for 2D game development?

    - by jih
    First, a short intro. I am new to the game development space and want to make some 2D games for iOS. I first came across cocos2d and Kobold, but then wanted something more graphical for rapid prototyping. I then found Game Maker, which doesn't support iOS but is fairly easy to learn, and then found Game Salad, which supports iOS as well as other platforms. I know this question has been asked before, but I want to know, in terms of the types of games I want to develop, which learning investment path would be best. The game genres I am interested in are: side scrollers; simple games like Diamond Dash or Fruit Ninja, Shanghai, etc.; old-fashioned Zelda or Dragon Quest-type 2D adventure RPG games, real-time or turn-based (Nintendo fan here :-); and mystery turn-based games like Carmen Sandiego, Wizardry, Myst, etc. So now the question becomes: which game development environment should I invest my time in learning, Game Salad or cocos2d? It would seem Game Salad would be great for quickies, being graphical, but for 2D platform games etc. would there be speed/performance/feature penalties? Are certain of the four genres above better suited to Game Salad, while cocos2d would be better for others? Can anyone with experience of both share some pointers? Thanks. – inexperienced jih

    Read the article

  • How can I get a gnome environment in my VNC session?

    - by adante
    When I start VNC I have an empty desktop, without the ability to manage windows or start apps, etc. I'd like to have a desktop environment so that I can do basic desktop things. (Someone asked me why I wanted this – I can't really say, except that I would like my computer to be useful.) My focus at the moment is basically getting a working environment with as little time/effort expenditure as possible, as opposed to spending a full week learning the most trivial and arcane details of X, VNC, GNOME or whatever passes for the current desktop architecture standard of the hour. What command, or series of hoops, do I have to jump through to achieve this? I have tried running gnome-session, but it looks like it attempts to run compiz and fails spectacularly. I've also tried running metacity, but this simply gives me titlebars on my windows (this is great! But I'd also like the taskbar and other stuff). I considered trying to start gnome-session in a way that makes it use metacity instead of compiz, but I don't know how to do this. Tutorials on the net exist for changing to metacity – once you already have compiz running. Not so useful if compiz does not run.

    Read the article

  • Domain Specific Software Engineering (DSSE)

    Domain Specific Software Engineering (DSSE) holds that creating every application from nothing is not advantageous when existing systems can be leveraged to create the same application in less time and at less cost. This belief is founded in the idea that forcing applications to recreate existing functionality is unnecessary: why would we build a better wheel when we already have four really good and proven wheels? DSSE suggests that we take an existing wheel and just modify it to fit an existing need of a system. This allows developers to leverage existing codebases so that time and expense are focused on creating more usable functionality rather than simply more functionality. As an example, how many functions do we need to create to send an email, when one can be created and used by all other applications within the existing domain?

    The key factors of DSSE are Domain, Technology and Business. A Domain in DSSE is used to control the problem space for a project. This control allows applications to be developed within specific constraints that focus development in a specific direction. Technology in DSSE offers a variety of technological solutions to be applied within a domain – for example: tools; patterns; architectures and styles; and legacy systems. Business is the motivator for any organization to use DSSE in its software development process, the main reasons being to minimize costs and to maximize market and profits.

    When these factors are used in combination, additional factors and benefits emerge: Domain + Business = Corporate Core Competencies (domain expertise improved by market and business expertise). Domain + Technology = Application Family Architectures (all possible technological solutions to problems in a domain, without any business constraints). Business + Technology = Domain-Independent Infrastructure (tools and techniques for building systems independent of all domains). Domain + Business + Technology = Domain-Specific Software Engineering (applies technology to domain-related goals in the context of business and market expertise).
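
    To make the email example concrete – a minimal sketch of what "create it once for the domain" might look like in C#, using the standard System.Net.Mail API (the class name and SMTP host are hypothetical, for illustration only):

        using System.Net.Mail;

        // One shared, domain-wide email utility: written once, then reused by
        // every application in the domain instead of being recreated per project.
        public static class DomainMail
        {
            public static void Send(string from, string to, string subject, string body)
            {
                using (var client = new SmtpClient("smtp.example.internal"))
                using (var message = new MailMessage(from, to, subject, body))
                {
                    client.Send(message);
                }
            }
        }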

    Read the article
