Search Results

Search found 93612 results on 3745 pages for 'inquisitive one'.

Page 303/3745 | < Previous Page | 299 300 301 302 303 304 305 306 307 308 309 310  | Next Page >

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary up front: I need to get Google to de-index URLs that have certain parameters appended. I have a website example.com with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option a parameter is added to the URL. For example, the home page: https://example.com (default) https://example.com/main?l=fr_FR (French) I added a robots.txt to stop Google from crawling any of the language translations: # robots.txt generated at http://www.mcanerin.com User-agent: * Disallow: Disallow: /cgi-bin/ Disallow: /*?l= So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under HTML improvements the previously crawled language translation URLs remain indexed. The internet says to return a 404 for the removed URLs so Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs): https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 and it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's the parameterized URLs that need to be de-indexed.
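
    One way to get these URLs dropped is to have the removed translation URLs answer 410 Gone (or 404) instead of 200, so the crawler sees they are really gone; note that a robots.txt Disallow alone will not do it, because a blocked URL can stay indexed when the crawler never gets to see the removal. The asker's CMS isn't specified, so the sketch below is only a hypothetical WSGI-style illustration of the idea (the class, parameter names and language codes are made up for the example):

      from urllib.parse import parse_qs

      class GoneForRemovedTranslations:
          """Answer 410 Gone for any request that asks for a removed translation."""

          def __init__(self, app, removed_langs=("vi_VN", "hy_AM")):
              self.app = app
              self.removed = set(removed_langs)

          def __call__(self, environ, start_response):
              params = parse_qs(environ.get("QUERY_STRING", ""))
              if self.removed.intersection(params.get("l", [])):
                  start_response("410 Gone", [("Content-Type", "text/plain")])
                  return [b"This translation has been removed."]
              return self.app(environ, start_response)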

    Read the article

  • Most effective way to do a daily standup meeting when a few people are remote

    - by Burhan Ali
    I am a software developer in a small team of seven. We are not an Agile (with a big 'A') team but are experimenting with some aspects of agile. One of these is the daily "standup" meeting. The difficulty here is that for two days of the week we have at least one person working from home, so the full team isn't available in the same room. What is the best way to carry out a daily standup in this situation? Some facts that may be relevant: We all work in a single open plan room. We use Skype in our company. We don't have any video conferencing capability. We all work the same hours so there are no timezone complexities involved. The development manager is one of the people who works from home one day a week. Things we have tried: Conference call using Skype: this is tricky for those in the office because you can hear people speak in the room and then a split second later through the headset. This can be very distracting. Conference phone: awful experience. Hard to get them to work and poor quality audio. Text-based updates using Skype: this is not as engaging and is no different from just firing off a status email in the morning. I have seen other questions about remote collaboration but they are mainly about completely remote teams and/or teams that span multiple time zones. We are not affected by either of these problems. What can we do to make our standup meetings better in these circumstances?

    Read the article

  • Ubuntu 12.10: Installing proprietary Nvidia driver causes freeze at boot

    - by Greg
    Ok, so I just installed Ubuntu on my laptop, and I immediately encountered an issue: the HDMI audio output won't work. Yes, I know about the sound settings thing where you have to select the HDMI option, but even when it's selected I get no sound out of the TV I'm hooking it up to. This is a dealbreaker for me, because my laptop speakers are terrible; it's one of the big reasons I use my TV as a monitor. So I decided to work on solving the problem by upgrading my Nvidia drivers. I switched to one of the proprietary drivers offered in the software updating utility that comes with the OS, the one option that said (tested). Voilà, sound over HDMI is now working. Unfortunately, this brings me to my next problem: when I reboot Ubuntu with this or any other proprietary driver installed, it freezes when it tries to load my desktop. As in, I can see my wallpaper, but no icons or options of any kind. The system is totally frozen, and gives me one of those "we've experienced an error, do you want to report it?" messages. So there's my bind. I need HDMI audio out, that's a total dealbreaker for me, but installing the drivers that give me that capability crashes the system. Does anyone have any idea what's causing this?

    Read the article

  • Recommended storage scheme for home server? (LVM/JBOD/RAID 5...)

    - by j-g-faustus
    Are there any guidelines for which storage scheme(s) make the most sense for a multiple-disk home server? I am assuming a separate boot/OS disk (so bootability is not a concern, this is for data storage only) and 4-6 storage disks of 1-2 TB each, for a total storage capacity in the range of 4-12 TB. The file system is ext4, and I expect there will be only one big partition spanning all disks. As far as I can tell, the alternatives are:
    - Individual disks. Pros: works with any combination of disk sizes; losing a disk loses only the data on that disk; no need for volume management. Cons: data management is clumsy when logical units (like a "movies" folder) are larger than the capacity of any single drive.
    - JBOD span. Pros: can merge disks of any size. Cons: losing a disk loses all data on all disks.
    - LVM. Pros: can merge disks of any size; relatively simple to add and remove disks. Cons: losing a disk loses all data on all disks.
    - RAID 0. Pros: speed. Cons: losing one drive loses all data; disks must be the same size.
    - RAID 5. Pros: data survives losing one disk. Cons: gives up one disk's worth of capacity; disks must be the same size.
    - RAID 6. Pros: data survives losing two disks. Cons: gives up two disks' worth of capacity; disks must be the same size.
    I'm primarily considering either LVM or JBOD span, simply because it will let me reuse older, smaller-capacity disks when I upgrade the system. The runner-up is RAID 0 for speed. I'm planning on having full backups to a separate system, so I expect the extra redundancy from RAID levels 5 or 6 won't be important. Is this a fair representation of the alternatives? Are there other considerations or alternatives I have missed? And what would you recommend?

    Read the article

  • When to use PHP or ASP.NET? [closed]

    - by loyalpenguin
    I have worked extensively in developing web applications using PHP and ASP.NET, but one of the questions I'm constantly asked by customers is whether to move forward with a PHP website or an ASP.NET website. So naturally the first thing that comes to mind is to answer the question like this: PHP is open-source and ASP.NET is from Microsoft. Usually after something like that is said, the customer has a blank look on their face. Apparently the fact that one is open source and the other isn't doesn't really faze them, and for good reason: when I first heard it, it didn't tell me much either. I know from working with both that each has its pluses and minuses when it comes to developing websites. NOTE: THIS QUESTION IS NOT TO QUESTION WHICH IS BETTER TO DEVELOP WITH. THIS QUESTION IS INTENDED TO BE OBJECTIVE. My question is: what are the differences between ASP.NET and PHP as far as features, security, extensibility, frameworks, and average development time, and when is one generally used over the other for certain types of projects? I am trying to compile a list of facts to be able to discuss with the customer which development platform is better for their particular project. I have done a simple search on Google and a ton of articles come up, but the problem is the majority are usually biased towards PHP or ASP.NET. Also, if you can provide examples from experience when one technology was preferable to the other, that would be awesome.

    Read the article

  • Every file on cPanel got deleted (then restored hours later), and I have no idea why

    - by mcranston18
    I apologize in advance if I don't provide proper detail; I am new to server stuff and am looking for general advice about this issue: I was helping out a client doing web design last month. They have about a dozen static sites on one server. The sites are all built on Joomla, except one which I built on Wordpress. Everything was working fine last month when we did the redesign, but all of a sudden this morning every single file on their server got deleted: every web page, every file, and all e-mail addresses. I phoned the hosting company (alliancewww.com) to ask, "Why did every single file suddenly get deleted off the server?" They said, "Because someone must have deleted it." I said, "Well, no one did." (Which I'm pretty damn sure no one did.) They said, "You can pay us to look into the problem." I authorized $150 for them to look into the problem. About an hour later, everything was magically reinstated. The host said they had a back-up of everything and just restored it all. What I'm wondering: Does anyone have recommendations for logs I can go through to investigate how the files got deleted in the first place? I've checked out their cPanel logs but found nothing. Is it likely that this is a mess-up on the host's part?

    Read the article

  • Is it a good idea to require committing only working code?

    - by Astronavigator
    Sometimes I hear people saying something like "All committed code must be working". In some articles people even describe how to create svn or git hooks that compile and test code before a commit. In my company we usually create one branch for a feature, and one programmer usually works in this branch. I often (about 1 in 100, I think, and with good reason) make non-compilable commits. It seems to me that the requirement of "always compilable/stable" commits conflicts with the idea of frequent commits. A programmer would rather make one commit in a week than test the whole project's stability/compilability ten times a day. For only-compilable code I use tags and some selected branches (trunk etc.). I see these reasons to commit not fully working or not compilable code: If I am developing a new feature, it is hard to make it work by writing only a few lines of code. If I am editing a feature, it is again sometimes hard to keep the code working the whole time. If I am changing some function's prototype or interface, I would also make hundreds of changes, not mechanical changes but intellectual ones. Sometimes one of them could require hundreds of commits (but if I want all commits to be stable I should commit once instead of a hundred times). In all these cases, making only stable commits would mean making commits containing very many changes, and it would be very hard to find out "What happened in this commit?". Another aspect of this problem is that code which compiles gives no guarantee of working properly. So is it a good idea to require every commit to be stable/compilable? Does it depend on the branching model or the version control system? In your company, is it forbidden to make non-compilable commits? Is it (and why) a bad idea to use only selected branches (including trunk) and tags for stable versions?
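
    For reference, the compile-and-test hooks mentioned above are typically just small executable scripts; a minimal sketch of a Git pre-commit hook (saved as .git/hooks/pre-commit and made executable - the test command is a placeholder, substitute your own build or test step):

      #!/usr/bin/env python3
      # Pre-commit hook sketch: run the test suite and abort the commit
      # (by exiting non-zero) if it fails.
      import subprocess
      import sys

      result = subprocess.run(["python", "-m", "pytest", "-q"])
      if result.returncode != 0:
          print("Tests failed - commit aborted (use 'git commit --no-verify' to override).")
          sys.exit(1)
      sys.exit(0)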

    Read the article

  • Why don't windows of the same application behave as they should?

    - by Yuttadhammo
    Somewhere along the upgrade path, Unity has developed some strange logic behind window layering. First, before Oneiric, there was a way to see all the windows of an application - I think it was when you clicked on its icon in the launcher. Now, clicking on the icon often does nothing. Suppose I have two terminals open, one behind this Firefox window and one in front of it. Clicking on the launcher does nothing - the only way to find the second terminal, afaics, is to move the Firefox window or use the task switcher. Secondly, once I have both terminals on top and then decide to close one of them, suddenly they both disappear (the second one, for some reason, has gone into hiding behind the Firefox window). Third (though I can't pin it down now), sometimes when a window is on top, focus is still on a window in back; I click on the top x to close the window in front, only to find I've closed an important window in the back. (Update: this question details the problem.) I can't really believe these are bugs, since they seem too obvious not to have been fixed by now. My question is, am I missing something? Is there some compiz option I can set to make it act like it used to? Or is this really how Unity is supposed to act?

    Read the article

  • Team Software Development using Ruby on Rails

    - by Panoy
    I used to work alone on small to medium sized programming projects and have no experience working in a team environment. Currently, there will be three of us in an in-house software development team that is tasked with developing a number of applications for an academic institution. We have decided to use the web for the majority of the projects and are planning to choose Ruby on Rails, and I would like to ask for your input, advice and approaches with regard to software development as a team using the RoR web framework. One thing that has really confounded me is how you divide the programming tasks of a project if there are three of you doing the coding. It's obvious that we as developers approach a problem in a modular way and finish the modules one after another. If the project consists of three modules, should each one of us focus on one of those modules? Would it be faster that way? What if the three of us focus on one module first (that's what I would really prefer)? Is using a distributed version control system such as Git the answer to this type of problem? Please share your tips and experiences with regard to team software development. Cheers!

    Read the article

  • Can a dual monitor setup cause a boot problem?

    - by kriszpontaz
    I have a two-monitor setup, one 22" (1680x1050, 16:10) and another 19" (1280x1024, 5:4). I installed Ubuntu 11.10 beta2 x86, the installation worked fine, and the system booted successfully. I then upgraded Ubuntu from the main server, and after a restart, booting with kernel 3.0.0-13, my system hangs at a purple screen and then nothing happens (the system boots successfully with kernel image 3.0.0-8). The current Nvidia drivers are not installed, but if I install them the situation is the same. I have an Nvidia 9600GT. I tried to boot with one screen attached, and I've tried each port, but no luck at all. With kernel image 3.0.0-8 the system boots successfully with each display attached, but the later kernels (3.0.0-11, 3.0.0-12, etc.) all freeze, whether one display or multiple displays are attached. I have two systems with Ubuntu installed, and the other (with an Ati HD 2400XT and the latest closed drivers) doesn't have any issues like the one I wrote about. Update: The problem was solved by reinstalling the operating system, without automatically installing updates during the install, with only one monitor attached. After completing the installation and a clean reboot, I installed the closed nVidia drivers. After all that, I found it is safe to connect another monitor to the system; it isn't causing any problems. The situation will probably stay like this.

    Read the article

  • How to switch to a generic kernel in a headless Ubuntu Server 12.04?

    - by chmike
    I just got a dedicated server with Ubuntu 12.04 installed with a custom compiled kernel. Since I would like to install VirtualBox and this custom kernel doesn't support dynamic module loading (for security), I need to change the kernel. I've been running some Ubuntu servers for years but have never played with grub or a headless computer. When the command update-grub is run it shows the different kernels it finds. Here is what I see: Generating grub.cfg ... Found linux image: /boot/bzImage-3.2.13-xxxx-grs-ipv6-64 Found linux image: /boot/vmlinuz-3.2.0-34-generic Found initrd image: /boot/initrd.img-3.2.0-34-generic No volume groups found done The first one is the active one, as seen with uname -r. To me it looks like the second kernel is the one I should use, but I don't know how to configure grub2 to use it. The computer is also configured with software RAID using mdadm, I guess. I've never used that before, and I don't know whether playing with grub or changing the kernel could break it. What must I do to set the generic kernel as the default one so that I can get VirtualBox running?
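
    For reference, on a GRUB 2 system the default entry is normally chosen via /etc/default/grub; a minimal sketch follows (the menu entry title below is only an example - copy the exact title, or the "submenu>entry" path, from /boot/grub/grub.cfg, and keep the host's rescue console handy in case the box fails to boot):

      # Option 1: point GRUB_DEFAULT at the generic kernel's menu entry,
      # then regenerate grub.cfg.
      sudo nano /etc/default/grub    # set GRUB_DEFAULT="Ubuntu, with Linux 3.2.0-34-generic"
      sudo update-grub

      # Option 2: set GRUB_DEFAULT=saved in /etc/default/grub, run update-grub,
      # and then select the entry once by name or index:
      sudo grub-set-default "Ubuntu, with Linux 3.2.0-34-generic"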

    Read the article

  • Preventing item duplication?

    - by PuppyKevin
    For my game, there are two types of items - stackable and nonstackable. Nonstackable items get assigned a unique ID that stays with them forever. A character ID is associated with the item, as is a state (CHANGED, UNCHANGED, NEW, REMOVED). The character ID and state are used for item-saving purposes. Stackable items have one unique ID, as in the entire stack has one unique ID. For example: 5 potions (stacked on top of each other) have one unique ID. When dropping a nonstackable item, the state gets set to REMOVED, and the unique ID and state don't change. If it is picked up by another player, the state gets set to NEW, and the character ID gets changed to the new character's ID. When dropping all items in a stack of stackable items (for example, 5 potions out of 5), it behaves just like a nonstackable item. When dropping some of a stack of stackable items (for example, 3 potions out of 5)... I really have no clue what to do. The 3 dropped potions have the state REMOVED, but the same unique ID and character ID. If another player picks them up, they have no choice but to obtain a new unique ID, and the state gets changed to NEW and the character ID to the new one. If the dropping player picks them back up, they'd just be re-added to the stack. There are two issues with that, though: 1. If the player who dropped the 3 potions picks them back up, there's no way to tell if they legitimately dropped the items or if they're duped items. 2. If another player picks up the 3 potions (assuming they're duped), there's no way to know if they're duped or not. My question is: how can I create a system that detects duplicated items for both nonstackable and stackable items?
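
    One possible design (a sketch of an alternative, not the asker's current scheme) is to make the server the sole authority on item identity: splitting a stack is a single server-side operation that decrements the source stack and mints a brand-new ID for the split-off part, so a client can never reintroduce an ID the server does not already track, and anything with an unknown or already-owned ID is by definition a dupe:

      from dataclasses import dataclass
      from itertools import count
      from typing import Optional

      _next_id = count(1)            # server-side ID generator; clients never mint IDs

      @dataclass
      class ItemStack:
          stack_id: int
          item_type: str
          quantity: int
          owner_id: Optional[int]    # None while lying on the ground

      class ItemService:
          def __init__(self):
              self.stacks = {}       # authoritative table: stack_id -> ItemStack

          def create(self, item_type, quantity, owner_id):
              stack = ItemStack(next(_next_id), item_type, quantity, owner_id)
              self.stacks[stack.stack_id] = stack
              return stack

          def split_and_drop(self, stack_id, quantity):
              src = self.stacks[stack_id]
              if quantity >= src.quantity:     # dropping the whole stack: just disown it
                  src.owner_id = None
                  return src
              src.quantity -= quantity         # partial drop: decrement and mint a new ID
              return self.create(src.item_type, quantity, owner_id=None)

          def pick_up(self, stack_id, new_owner_id):
              stack = self.stacks[stack_id]    # unknown ID -> KeyError -> treat as duped
              if stack.owner_id is not None:
                  raise ValueError("stack already owned - possible duplicate")
              stack.owner_id = new_owner_id
              return stack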

    Read the article

  • Apply bone transforms when importing FBX in XNA

    - by hichaeretaqua
    Preconditions: I have some models that contain only some meshes and one texture. There is no animation within the model. An example: a model of a table. I want to draw the model with a custom effect, so I have to swap the effect after loading the model. In order to draw the meshes correctly, I have to apply the bone transformation manually on each draw, for each mesh and effect, as can be seen here. So there are two questions: Is there an option during import that allows me to apply the bone transformation to all vertices, so that during the draw call I do not have to do this? Is there an option during import that merges all vertices into one VertexBuffer and IndexBuffer, allowing me to draw the whole model with just one call? I'm pretty sure that the built-in "Autodesk FBX - XNA Framework" importer does not support these features, but maybe there is another importer available or another possibility I missed. The aim is to speed up rendering a little bit, especially by using instancing. Having one VertexBuffer to draw at one time would be pretty nice.
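
    When the importer cannot do this, a common workaround is a custom processing step that bakes each mesh's absolute bone transform into its vertices at load time and appends everything into one shared vertex/index buffer, so the draw loop no longer needs per-mesh transforms. A sketch of the idea (the mesh/bone attribute names are hypothetical; this is not the XNA content-pipeline API):

      import numpy as np

      def bake_and_merge(meshes, bones):
          """Pre-multiply each mesh's vertices by its bone's absolute transform
          and merge all meshes into one vertex list and one index list."""
          absolute = {}

          def absolute_transform(bone):
              if bone.name not in absolute:
                  parent = np.eye(4) if bone.parent is None else absolute_transform(bone.parent)
                  absolute[bone.name] = parent @ bone.local_transform
              return absolute[bone.name]

          vertices, indices = [], []
          for mesh in meshes:
              world = absolute_transform(bones[mesh.bone_name])
              base = len(vertices)
              for x, y, z in mesh.vertices:
                  p = world @ np.array([x, y, z, 1.0])
                  vertices.append((p[0], p[1], p[2]))
              indices.extend(base + i for i in mesh.indices)
          return vertices, indices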

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue. (1) Separate editions for each library version: I make a separate release of my library for each version of my company's software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests would definitely have executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries. (2) One supported version, but others tested: I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR and override the company software version used. If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds. I welcome comments on which approach is better or a suggestion for a different approach entirely. I'm leaning towards the second option.

    Read the article

  • Need help with ColdFusion and ASP.NET site [closed]

    - by Michael Stone
    To begin, I wasn't too sure how to title this... I've got a few questions. First off, I've got a very big site that's in ColdFusion, and we've been migrating it to ASP.NET C# 4.0 for the last 8 months. I've got a team of 7 programmers and no one can seem to figure out these answers, not even our senior C# programmer. We're using Team Foundation Server and we can't figure out how to push up only one small change at a time. Right now we're stuck publishing the entire site, and it's causing serious issues. We've currently got the site as a Project and not a Website, and we're wondering if that's one issue. I actually think it might be a problem. We're also dealing with an issue where we can't access our regular folders with relative paths. We're first developing our admin side in .NET, and we've got our regular site and then another site within that for our .NET admin tools. By site, I'm referring to them actually being Sites in IIS. This also creates a problem for us when we're creating tools that upload images and want to store them and access them from our parent Site. I'd very much appreciate any advice on how to go about this in the most standardized way. So what I'm hoping for is advice on: - Publishing and managing a site/project in Team Foundation Server. Being able to push up one fix at a time if needed would be GREAT! - Any help figuring out the issue of referencing folders from my .NET child site to my parent ColdFusion site using regular relative paths. "/a/images/b/" would be nice instead of only being able to do "/b/images/". We're using ColdFusion 8, C# ASP.NET 4.0/Entity Framework/POCO Templates, and a Windows 2008 R2 Server. Thank you in advance for any help.

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So, looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset - then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation, with rules complex enough that it can't easily be done in the database, so I have some variables declared outside the main loop that get altered during the loop: variable_1 = 0; variable_2 = 0; for object in queryset: if object.condition_a and variable_2 > 0: variable_1 += 1; ... more conditions that alter the variables ... return queryset, context. So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find that this makes the code less readable, when you need to constantly jump from one method to the next figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there would be a performance hit). So is there a case that long methods are not always bad? Is there always a case for extracting methods when they will only be used in one place?
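
    For what it's worth, the conditional aggregation described above can sometimes be broken up without losing the single pass over the queryset, by moving each rule into a small helper that updates a shared counters dict; a sketch with made-up condition and variable names:

      def _update_counters(obj, counters):
          # One small helper per group of rules keeps each rule readable
          # without hiding the fact that there is a single pass over the data.
          if obj.condition_a and counters["variable_2"] > 0:
              counters["variable_1"] += 1
          if obj.condition_b:
              counters["variable_2"] += obj.amount
          # ... further rules ...

      def aggregate(queryset):
          counters = {"variable_1": 0, "variable_2": 0}
          for obj in queryset:
              _update_counters(obj, counters)
          return queryset, counters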

    Read the article

  • ORM and component-based architecture

    - by EagleBeek
    I have joined an ongoing project, where the team calls their architecture "component-based". The lowest level is one big database. The data access (via ORM) and business layers are combined into various components, which are separated according to business logic. E.g., there's a component for handling bank accounts, one for generating invoices, etc. The higher levels of service contracts and presentation are irrelevant for the question, so I'll omit them here. From my point of view the separation of the data access layer into various components seems counterproductive, because it denies us the relational mapping capabilities of the ORM. E.g., when I want to query all invoices for one customer I have to identify the customer with the "customers" component and then make another call to the "invoices" component to get the invoices for this customer. My impression is that it would be better to leave the data access in one component and separate it from business logic, which may well be cut into various components. Does anybody have some advice? Have I overlooked something?
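
    To make the cost concrete, here is a small sketch in Django-style ORM syntax (the project in question is not Django and the models are hypothetical): with a single data-access layer the invoice lookup is one relational query, while the component split turns it into two calls and gives up the join.

      from django.db import models

      class Customer(models.Model):
          name = models.CharField(max_length=100)

      class Invoice(models.Model):
          customer = models.ForeignKey(Customer, on_delete=models.CASCADE,
                                       related_name="invoices")
          total = models.DecimalField(max_digits=10, decimal_places=2)

      # One data-access layer: the ORM traverses the relation directly.
      def invoices_for(customer_id):
          return Invoice.objects.filter(customer_id=customer_id)      # single query

      # Separate "customers" and "invoices" components: two calls, no join.
      def invoices_for_componentised(customers_api, invoices_api, customer_id):
          customer = customers_api.get_customer(customer_id)           # call 1
          return invoices_api.get_invoices(customer.id)                # call 2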

    Read the article

  • NHibernate Pitfalls: Cascades

    - by Ricardo Peres
    This is part of a series of posts about NHibernate Pitfalls. See the entire collection here. For entities that have associations – one-to-one, one-to-many, many-to-one or many-to-many – NHibernate needs to know what to do with their related entities at three particular moments: when saving, updating or deleting. In particular, there are two possible behaviors: either ignore these related entities or cascade changes to them. NHibernate allows setting the cascade behavior for each association, and the default behavior is not to cascade (ignore). The possible cascade options are:
    - None: ignore; this is the default.
    - Save-Update: if the entity is being saved or updated, also save any related entities that are either not saved or have been modified, and associate these related entities with the root entity. Generally safe.
    - Delete: if the entity is being deleted, also delete the related entities. This is only useful for parent-child relations.
    - Delete-Orphan: identical to Delete, with the addition that if a related entity is removed from the association – orphaned – it is also deleted. Also only for parent-child relations.
    - All: combination of Save-Update and Delete; usually that's what we want (for parent-child relations, of course).
    - All-Delete-Orphan: same as All, plus delete any related entities that lose their relationship.
    In summary, Save-Update is generally what you want in most cases. As for the Delete variations, they should only be used if the related entities depend on the root entity (parent-child), so that deleting the root entity and not its related entities would result in a constraint violation in the database.

    Read the article

  • What would be a good research topic on the "edge of multiple processors / computers programming"?

    - by Kabumbus
    This is a subjective discussion, so we can express our dreams and hopes here. A "topic" must be like a task whose end result is a software product. A "topic" must be mainly about "Software engineering", "Algorithm and data structure concepts" and perhaps "Design patterns". I mean, let us try to look at what is not already there. What can be developed in a few months and give a breakthrough / start a new leap / show something not realized before in the science of multiple-computer programming? What I see is already there: LAN / wire and other infrastructural programs for connecting at the device level; MPI / BitTorrent / Jabber protocols / APIs / servers for messaging on top; Boost and analogs on every OS, in most languages, for multithreading; lots of CUDA-like on-computer frameworks for fast calculation on GPUs. What I personally do not see out there is a cross-platform framework for multi-process interaction - meaning one that would allow easy creation of multiple processes running in parallel inside one host app on one machine, at a level no harder than what is needed for thread creation (so no separate server apps - just one lib doing it all). Is there any such lib, and what can you propose for a research topic?
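
    As a point of comparison for what already exists: Python's standard multiprocessing module makes creating cooperating processes inside one host app roughly as easy as creating threads, which may help narrow down what a new research topic would add on top (for example cross-language or cross-machine transparency). A minimal sketch:

      from multiprocessing import Process, Queue

      def worker(tasks, results):
          # Pull work items until the None sentinel arrives.
          for item in iter(tasks.get, None):
              results.put(item * item)

      if __name__ == "__main__":
          tasks, results = Queue(), Queue()
          procs = [Process(target=worker, args=(tasks, results)) for _ in range(4)]
          for p in procs:
              p.start()
          for n in range(10):
              tasks.put(n)
          for _ in procs:
              tasks.put(None)            # one stop sentinel per worker
          for p in procs:
              p.join()
          print(sorted(results.get() for _ in range(10)))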

    Read the article

  • What should I recommend a small company looking for C# developers

    - by Coder
    Here is the issue. I am a senior developer, and one of the start-ups whose system (management system/database/web) I designed a long time ago has grown and needs software updates. I handed their system over to another developer a long time ago, but apparently he has left the job, and so they are asking me if I can suggest where to find a new one. The problem is that the company has no clue that IT is not cheap. They expect multiple features to be added for $40, so that's an issue. That is actually one of the reasons why I left the project when I did: lots of expectations, little pay. Also, I know those people outside work, so I decided to avoid straining the non-work relationships and left the project gracefully. Today they asked me for advice, and I told them that the feature list they want is probably going to cost some if they get a senior developer for the job. So I guess their best bet is to find someone who loves coding and has just finished school. That would give someone a chance to code for money, which is good for a student, and at the same time allow the student to get some hands-on experience. Then again, the system is not exactly a 20-line console program; there is an MSSQL database, an ASP.NET web page and a content management system with all the AJAX stuff and some other things. So a student straight out of school could have some problems with that. But I thought about the issue some more, and I think a junior developer is a tricky deal: without mentoring, he can either screw up royally or just do what's asked. Also, it seems no one is coming to interviews at all, which is weird, or maybe not. What should I suggest to them?

    Read the article

  • How do you set up Postfix/Dovecot/MySQL to not look for local accounts?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here it goes: I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain; for this example I am going to use mail.example.com. Under that domain we have one email address. I have another server (MS Exchange) set up on another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email gets returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain because I can ping/dig it while SSH'd in. I can also log into Postfix via Telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the parent domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the full hostname (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!

    Read the article

  • Is there any simple game that involves psychological factors?

    - by Roman
    I need to find a simple game in which several people need to interact with each other. The game should be simple to analyse (it should be simple to describe what happens in the game and what the players did). For that reason, video games are not appropriate for my purposes. I am thinking of a simple, schematic, strategic game where people can make a limited set of simple moves. Moreover, the moves of the game should be conditioned not only by pure logic (as in chess or go); the behavior in the game should depend on psychological factors, on relations between people. In more detail, I think it should be a cooperation game where people make their decisions based on mutual trust. It would be nice if players could express punishment and forgiveness in the game. Does anybody know a game that is close to what I have described above? ADDED: I need to add that I need a game where the actions of players are simple and easy to formalize. Because of that I cannot use verbal games (where communication between players is important). By simple actions I mean, for example, moves on the board from one position to another, or passing chips from one player to another, and so on.

    Read the article

  • How do I find add-ons for packages when using the command line?

    - by user74660
    My question is a little bit different from others already asked, I guess. I've already searched for answers, but I didn't find anything related. For example, I've always installed K3B via the Terminal with the command "sudo apt-get install k3b". It always worked, of course. One day, I decided to install it via the Ubuntu Software Center and, to my surprise, there were a few add-ons I didn't know about. I checked some of them to be installed as well because I found them useful. Now, here's my question: when we install software via the Terminal and this software has add-ons, how do we know that? And how do we install the add-ons via the Terminal? I suppose we have to know the names of the add-ons first, and then install them one by one, once the main software has already been installed. But how do we get to know those names via the Terminal? Using the Software Center is cool because it shows the add-ons, a brief description for each one and their names in brackets, right? How about that via the Terminal? I had never paid attention to this until I used the Software Center. By the way, K3B was just an example, of course.
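
    For reference, the "add-ons" the Software Center lists are generally a package's Recommends and Suggests fields, which can be inspected from the Terminal before installing the ones you want by name (k3b-extra-themes below is only an example of such a name):

      # List the dependencies, recommendations and suggestions of a package:
      apt-cache depends k3b

      # Or show just the Recommends/Suggests fields from the package record:
      apt-cache show k3b | grep -E '^(Recommends|Suggests):'

      # Then install the main package plus whichever add-ons look useful:
      sudo apt-get install k3b k3b-extra-themes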

    Read the article

  • Low-res emacs24 icon in application switcher 12.10

    - by MTS
    I recently upgraded to Quantal and also switched up to emacs24 from 23. Everything is great, except for one thing: the icon in the Application Switcher for emacs24 is a horrible, low-resolution eyesore. Compare the two side-by-side: I've seen a couple of questions addressing issues like this, but they're not quite the same. This one says that it is happening with all icons, but that's clearly not the case here. And this one seems more relevant, but it is talking about Gnome, not Unity. In the comments to the one answer for the second question, it says to look at the icons in /usr/share/icons to see if they are low-resolution, and if so to replace them with better ones. There are a ton of emacs icons, in fact. They are in various subfolders of /usr/share/icons/hicolor, in sizes ranging from 16x16 to 128x128, and there are also scalable .svg versions of the icons. I noticed that there are no 192x192 or 256x256 versions. But it seems like that shouldn't matter, since emacs23 also didn't have icons in those sizes. Any help would be much appreciated!

    Read the article

  • Install an i386 printer driver into an amd64 distribution, or how can I find a good printer based on features?

    - by Yanick Rochon
    Hi, I just bought a Lexmark Interpret S408 all-in-one printer. The box said that it supported Ubuntu 8.04, but I told myself it should work with Lucid... well, no. The only driver I have found is for i386 while I have an amd64 image installed; the architecture is incompatible. So, the question is: Is it possible to install that driver anyway, somehow? Or do I need to take the printer back to the store and buy another one? If the latter is the only alternative, I need a printer that has wireless connection capability, can do color printing, and is reasonably priced (less than $200 CAD). Thank you for your answers, help, and tips. ** UPDATE ** The driver was provided as a deb package (for Debian distributions) and I managed to extract the actual deb package driver out of the install program. I ran sudo dpkg -i --force-all lexmark-inkjet-09-driver-1.5-1.i386.deb and the driver installed, and I was able to print something out. But that is pretty much where it ends; I cannot access any more of the printer settings (e.g. scanner, fax, wifi settings, etc.). It should suffice for now as I'm satisfied with the printer's features (and size, and price), but if I could have a fully Linux-supported printer like that one, I would return this one in exchange for the other.

    Read the article
