Search Results

Search found 6357 results on 255 pages for 'generic relations'.

Page 100/255

  • Proposal for a new position at work

    - by Seth P.
    I have an idea at work for a new Product Manager position at our office. I work with several developers, and it would be helpful to have someone working in a kind of "Scrum Master" capacity, dividing out assignments and making sure they get completed. This position does not currently exist; however, I feel I have enough evidence to show that it would be very helpful for our business. What is the best way to present this proposal to my boss? Is there a specific template that you know of for a new position? It should describe the qualifications for the position, its responsibilities, and the metrics we would use to measure them. Thanks. UPDATE: Following Anna's suggestion, I gave more details about this specific position. However, I would ideally like the most generic way to present a new position to my boss.

    Read the article

  • Rails show latest entry

    - by Danny McClelland
    Hi Everyone, I have a working Rails application on version 2.3.5 - I am using many-to-many model relations and have almost everything working. What I would like to do is show the most recent kase job ref at the top of my new kase page. So for example, if I created a new kase with a job ref of "001", then when I went to create another new kase it would show at the top "Your previous kase's reference was 001". I have the jobref field in the new kase form, so I am trying to work out what I need to do to output only the last jobref. If that makes sense! Thanks, Danny

    Read the article

  • Flattening a Jagged Array with LINQ

    - by PSteele
    Today I had to flatten a jagged array. In my case, it was a string[][] and I needed to make sure every single string contained in that jagged array was set to something (non-null and non-empty). LINQ made the flattening very easy. In fact, I ended up making a generic version that I could use to flatten any type of jagged array (assuming it's a T[][]): private static IEnumerable<T> Flatten<T>(IEnumerable<T[]> data) { return from r in data from c in r select c; } Then checking to make sure the data was valid was easy: var flattened = Flatten(data); bool isValid = !flattened.Any(s => String.IsNullOrEmpty(s)); You could even use a method group and reduce the validation to: bool isValid = !flattened.Any(String.IsNullOrEmpty);

    Read the article

  • What is a non-committal approach to software analysis

    - by dsjbirch
    When I think about software analysis, the first things that come to mind are SSADM and UML. But what I want is a high-level view of the system before I commit to a programming paradigm. Where am I going wrong? How do I approach a problem in a high-level, generic way before I commit to a paradigm? What diagrams/tools are available to support me? Edit: Some examples of tools that appear to be what I'm after are... A block diagram - http://en.wikipedia.org/wiki/Block_diagram A data flow diagram - http://en.wikipedia.org/wiki/Data_flow_diagram

    Read the article

  • IKVM 0.42 Update 1 Released

    I've promoted 0.42 Update 1 RC 2 to an official release. Changes (Update 1 RC 0 + RC 1 + RC 2): Added fix to mangle all artificial type names if they clash with Java type names in the same assembly. Fix for http://gcc.gnu.org/bugzilla/show_bug.cgi?id=41696. Fixed exception sorter to be correct when invoked with two references to the same object. Fix for bug #2946842. Fixed ikvmstub to not emit stubs for generic...

    Read the article

  • Reusable Platform For Custom Board Game

    - by George Bailey
    Is there a generic platform that lets me customize the rules of a board game? The board game uses a square grid, similar to Checkers or Chess. I was hoping to take some of the work out of creating the computer opponent by reusing what is already written. I would think that there would be a pre-written routine for deciding which moves lead to the best outcome, and all that I would need to program is the pieces, the legal moves, what layout constitutes a win, loss or draw, and perhaps some kind of scoring for the value of pieces. I have seen chess programs that appear to use a recursive routine, so that they think anywhere from 2 to 20 moves ahead to create varying degrees of difficulty; I have noticed this on chess.com. The game I am programming will not be as complex. Is there a platform designed to be reused for different grid/piece-based games? JavaScript would be preferable, but Java or Perl would be acceptable.
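    For reference, the recursive look-ahead routine described above is usually a minimax/negamax search. Here is a minimal, game-agnostic sketch in Python; the game object and its legal_moves/apply_move/score callbacks are hypothetical placeholders that a concrete board-game definition would supply, not part of any existing platform:

    ```python
    # Hedged sketch of a generic negamax look-ahead. `depth` controls difficulty;
    # players are represented as +1 / -1, and score() is assumed to evaluate the
    # board from the given player's point of view.
    def negamax(game, state, player, depth):
        moves = game.legal_moves(state, player)
        if depth == 0 or not moves:
            return game.score(state, player), None
        best_value, best_move = float("-inf"), None
        for move in moves:
            next_state = game.apply_move(state, move, player)
            value, _ = negamax(game, next_state, -player, depth - 1)
            value = -value  # a good position for the opponent is bad for us
            if value > best_value:
                best_value, best_move = value, move
        return best_value, best_move
    ```

    The game-specific parts the question lists (pieces, legal moves, win/lose detection, piece values) plug in through those callbacks, which is essentially the separation a reusable platform would have to provide.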

    Read the article

  • Can't adjust brightness on Samsung RV420 with fn keys

    - by nicholascamp
    Typing ls /sys/class/backlight/*/brightness outputs /sys/class/backlight/intel_backlight/brightness /sys/class/backlight/samsung/brightness The max_brightness for the second is 8, but changing it with echo 2 | sudo tee /sys/class/backlight/samsung/brightness doesn't change the brightness. I can do it by using intel_backlight: echo 2000 | sudo tee /sys/class/backlight/intel_backlight/brightness (max_brightness: cat /sys/class/backlight/intel_backlight/max_brightness outputs 4648). But I want this to work with the Fn brightness keys, as it always did. I don't know what made it stop working; maybe plugging in an extra monitor and removing it at the wrong time, or a system update. I'm using Ubuntu 12.04 64-bit on a Samsung RV420 notebook. The kernel version is 3.2.0-27-generic. Could you help me? Please tell me what more info I should provide. Thanks!

    Read the article

  • How to specify an association relation using declarative base

    - by sam
    I have been trying to create an association relation between two tables, intake and module. Each intake has a one-to-many relationship with the modules. However, there is a coursework assigned to each module, and each coursework has a due date which is unique to each intake. I tried this but it didn't work: intake_modules_table = Table('tg_intakemodules',metadata, Column('intake_id',Integer,ForeignKey('tg_intake.intake_id', onupdate="CASCADE",ondelete="CASCADE")), Column('module_id',Integer,ForeignKey('tg_module.module_id', onupdate ="CASCADE",ondelete="CASCADE")), Column('dueddate', Unicode(16)) ) class Intake(DeclarativeBase): __tablename__ = 'tg_intake' #{ Columns intake_id = Column(Integer, autoincrement=True, primary_key=True) code = Column(Unicode(16)) commencement = Column(DateTime) completion = Column(DateTime) #{ Special methods def __repr__(self): return '"%s"' %self.code def __unicode__(self): return self.code #} class Module(DeclarativeBase): __tablename__ ='tg_module' #{ Columns module_id = Column(Integer, autoincrement=True, primary_key=True) code = Column(Unicode(16)) title = Column(Unicode(30)) #{ relations intakes = relation('Intake', secondary=intake_modules_table, backref='modules') #{ Special methods def __repr__(self): return '"%s"'%self.title def __unicode__(self): return '"%s"'%self.title #} When I do this, the duedate column specified in intake_modules_table is not created. Some help will be appreciated here. Thanks in advance.
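    For reference, the usual way to keep an extra column such as a due date on the link table is the association-object pattern, where the link table gets its own mapped class instead of being passed as secondary=. A minimal sketch against the same declarative setup as above; the class name IntakeModule and the backref names are invented here for illustration, and DeclarativeBase is the same base already used by Intake and Module:

    ```python
    # Hedged sketch of the association-object pattern: the link table becomes its
    # own mapped class so the extra column is part of the model. Table and column
    # names follow the question; everything else is illustrative.
    from sqlalchemy import Column, Integer, Unicode, ForeignKey
    from sqlalchemy.orm import relation

    class IntakeModule(DeclarativeBase):
        __tablename__ = 'tg_intakemodules'

        intake_id = Column(Integer,
                           ForeignKey('tg_intake.intake_id',
                                      onupdate="CASCADE", ondelete="CASCADE"),
                           primary_key=True)
        module_id = Column(Integer,
                           ForeignKey('tg_module.module_id',
                                      onupdate="CASCADE", ondelete="CASCADE"),
                           primary_key=True)
        duedate = Column(Unicode(16))  # the per-intake due date lives on the link row

        intake = relation('Intake', backref='module_links')
        module = relation('Module', backref='intake_links')
    ```

    With this, an intake's modules and their due dates are reached through the link objects (for link in some_intake.module_links: use link.module and link.duedate), rather than through a secondary= relation, which does not map extra columns on the association table.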

    Read the article

  • Ubuntu 12.10 Nvidia GT555M Bumblebee

    - by herczigem
    I have a laptop with an Nvidia GT 555M video card. System: Ubuntu 12.10, kernel Linux 3.5.0-17-generic. Steps I took: sudo add-apt-repository ppa:bumblebee/stable sudo add-apt-repository ppa:ubuntu-x-swat/x-updates sudo apt-get update sudo apt-get install bumblebee bumblebee-nvidia restart system optirun glxgears This gives me the message: Cannot access secondary GPU - error: Could not load GPU driver Aborting because fallback start is disabled. I opened sudo gedit /etc/bumblebee/bumblebee.conf and changed Driver= to Driver=nvidia and KernelDriver=nvidia-current to KernelDriver=nvidia. I restarted the system and ran optirun glxgears. This gives me the message: The Bumblebee daemon has not been started yet or the socket path /var/run/bumblebee.socket was incorrect. Could not connect to bumblebee daemon - is it running? Does anybody have an idea?!

    Read the article

  • Databind Parent/Child relation with MySQL

    - by e2k
    What is the best way to databind parent/child relations? Let's say I have two simple objects: Parent, with the properties ParentId, Name and Childs; and Child, with the properties ChildId, Name and Parent. I want to write a repository using MySQL for this, but it fails when I bind both the Childs and Parent properties; I have only been able to bind either the Childs property of Parent or the Parent property of Child, or else I get an infinite loop. Thinking about it, the preferable solution would be to bind these properties only when requested. If I used LINQ to SQL I would be able to write Parent.Childs[0].Parent.Name, but how should I accomplish this with my own repository and with MySQL? Could anyone point me in the right direction? Looking at the LINQ to SQL generated classes, they use EntitySet<Child> and EntityRef<Parent>; could this be used with MySQL? I thought of using my IChildRepository in the Parent object and letting the public IEnumerable Childs databind the children, but that doesn't seem right. Best regards, E2k

    Read the article

  • Is this how dynamic languages cope with dynamic requirements?

    - by Amumu
    The question is in the title. I want to have my thinking verified by experienced people. You can add more or disregard my opinion, but give me a reason. Here is an example requirement: Suppose you are required to implement a fighting game. Initially, the game only includes fighters, who can attack each other. Each fighter can punch, kick or block incoming attacks. Fighters can have various fighting styles: Karate, Judo, Kung Fu... That's it for the simple universe of the game. In an OO language like Java, it can be implemented similarly to this: abstract class Fighter { int hp, attack; void punch(Fighter otherFighter); void kick(Fighter otherFighter); void block(Fighter otherFighter); }; class KarateFighter extends Fighter { //...implementation...}; class JudoFighter extends Fighter { //...implementation... }; class KungFuFighter extends Fighter { //...implementation ... };

    This is fine if the game stays like this forever. But somehow the game designers decide to change the theme of the game: instead of a simple fighting game, the game evolves to become an RPG, in which characters can not only fight but perform other activities, i.e. a character can be a priest, an accountant, a scientist etc... At this point, to make it more generic, we have to change the structure of our original design: Fighter no longer refers to a person; it refers to a profession, and the specialized classes of Fighter (KarateFighter, JudoFighter, KungFuFighter) remain as its subclasses. Now we have to create a generic class named Person. However, to adapt to this change, I have to change the method signatures of the original operations: class Person { int hp, attack; List<Profession> skillSet; }; abstract class Profession {}; class Fighter extends Profession { void punch(Person otherFighter); void kick(Person otherFighter); void block(Person otherFighter); }; class KarateFighter extends Fighter { //...implementation...}; class JudoFighter extends Fighter { //...implementation... }; class KungFuFighter extends Fighter { //...implementation ... }; class Accountant extends Profession { void calculateTax(Person p) { //...implementation...}; void calculateTax(Company c) { //...implementation...}; }; //... more professions...

    Here are the problems: To adapt to the method changes, I have to fix the places where the changed methods are called (refactoring). Every time a new requirement is introduced, the current structural design has to be broken to adapt to the changes, which leads back to the first problem. A rigid structure makes code reuse hard: a function can only accept the predefined types, but it cannot accept future, unknown types. A written function is bound to its current universe and has no way to accommodate new types without modification or a rewrite from scratch. I see Java has a lot of deprecated methods. OO is an extreme case because inheritance adds to the complexity, but in general, for a statically typed language, types are very strict.

    In contrast, a dynamic language can handle the above case as follows: ;;fighter1 punch fighter2 (defun perform-punch (fighter1 fighter2) ...implementation... ) ;;fighter1 kick fighter2 (defun perform-kick (fighter1 fighter2) ...implementation... ) ;;fighter1 blocks attacks from fighter2 (defun perform-block (fighter1 fighter2) ...implementation... ) fighter1 and fighter2 can be anything, as long as they have the required data for the calculation, or the required methods (duck typing). You don't have to change from the type Fighter to Person.
    In the case of Lisp, because Lisp centers on a single data structure, the list, it's even easier to adapt to changes; however, other dynamic languages can exhibit similar behavior as well. I work primarily with static languages (mainly C and Java, though my Java work was a long time ago). I started learning Lisp and some other dynamic languages this year, and I can see how they help improve my productivity.
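    The same duck-typed idea as the Lisp example reads naturally in other dynamic languages too; a tiny illustrative Python sketch (all names are invented here, not taken from the post):

    ```python
    # Illustrative only: perform_punch accepts any two objects that carry hp and
    # attack attributes, so switching from "fighters" to generic "persons with a
    # skill set" needs no change to the function's signature.
    def perform_punch(attacker, target):
        target.hp -= attacker.attack

    class Person(object):
        def __init__(self, hp, attack, skill_set=None):
            self.hp = hp
            self.attack = attack
            self.skill_set = skill_set or []

    karateka = Person(hp=100, attack=12, skill_set=["karate"])
    accountant = Person(hp=80, attack=3, skill_set=["accounting"])
    perform_punch(karateka, accountant)  # works on anything shaped like a Person
    ```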

    Read the article

  • Checking for cross-site scripting vulnerabilities in Perl web applications

    - by David Scholefield
    I'm putting together some notes for a dev team on how to write secure Perl code - especially taking into account the current OWASP top 10 web application vulnerabilities. For cross-site scripting I've included information on ensuring that all output to the browser is checked and escaped where necessary, but I'm looking for more automated mechanisms that would mean a developer doesn't have to think about every output statement and potentially miss one. Perl's taint mode sounds like it should help because it distrusts all user input, but it doesn't complain when tainted data is output to the browser. Apart from checking all output statements individually (probably by calling a generic sanitizing function), does anyone have any ideas on how Perl can help with this through existing libraries or techniques?

    Read the article

  • File manager respawns with Ubuntu One

    - by pygator
    Starting Feb 11, my Ubuntu 10.10 desktop respawns the File Manager many times (hundreds). You can observe the "Starting File Manager" processes at the bottom of the GNOME desktop. I can make this behaviour stop by: System - Preferences - Ubuntu One - Services - uncheck "Files". Can someone walk me through the debug process? Linux 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:48 UTC 2011 i686 GNU/Linux I'm trying to reset the Ubuntu One configuration. I found good information here: https://wiki.ubuntu.com/UbuntuOne/Bugs - look for "ROOT_MISMATCH in syncdaemon.log". After running through the steps to reset and restart Ubuntu One, there are no more "Starting File Manager" respawns.

    Read the article

  • Lucid hangs at booting after kernel upgrade

    - by Thomas Deutsch
    This weekend, one of our servers running Lucid installed some upgrades: libgcrypt11 1.4.4-5ubuntu2.1 linux-firmware 1.34.14 linux-image-2.6.32-41-generic 2.6.32-41.91 linux-libc-dev 2.6.32-41.91 Afterwards, it rebooted since this was a kernel upgrade. Now it hangs while booting, after /scripts/init-bottom. init-bottom itself should not be the problem; the last line I can see is "done", so the problem has to be shortly after that. http://manpages.ubuntu.com/manpages/hardy/man8/initramfs-tools.8.html tells me that the next step is that procfs and sysfs are moved to the real rootfs and execution is turned over to the init binary, which should now be found in the mounted rootfs. But I don't know how and where. The problem exists with older kernels too, and this fix doesn't solve it: http://www.tummy.com/journals/entries/jafo_20111003_160440 Anyone have an idea?

    Read the article

  • Windows Phone 7 UserExtendedProperties opinion...

    - by webdad3
    I was thinking of a way to somehow connect my phone user base to my site user base. Right now, if an item gets added to the site via the phone, the userId is generic and the site displays it as SmartPhoneUser. I was thinking it might be cool to display the unique phone ID by using UserExtendedProperties; however, after reading Nick Harris's blog about it, I'm thinking it may not be a good idea, as I don't want users to think I'm up to anything nefarious. So I'm wondering if there are any suggestions out there on how to accomplish this task. Right now my site uses the JanRain module that allows multiple logins from other sites (Facebook, Yahoo, Google etc.). Any thoughts on how I can accomplish what I want to do without using the extended properties?

    Read the article

  • How to Achieve Real-Time Data Protection and Availability... For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further.

    Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability.

    Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume, making both copies unusable. This happens because remote-mirroring is a generic process. It has no intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy.

    Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth, while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective. Bottom line, storage remote-mirroring lacks both the smarts and the isolation level necessary to provide true data protection.
    Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block-for-block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database, using a different Oracle code path than the primary, to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical checksum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern.

    An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible using storage remote-mirroring.

    So don’t be fooled by appearances. Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: the Active Data Guard Technical White Paper, Active Data Guard vs Storage Remote-Mirroring, and the Active Data Guard Home Page on the Oracle Technology Network.

    Read the article

  • Will a MySQL query run slower if one of the tables involved has no index defined?

    - by lock
    There's this already-populated database which came from another dev. I'm not sure what went on in that dev's mind when he created the tables, but one of our scripts has this query involving 4 tables, and it runs super slow: SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7 FROM a, b, c, d WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id AND a.col_8 = '$col_8' AND d.g_id = '$g_id' AND c.private = '1' NOTE: $col_8 and $g_id are variables from a form. It's only my theory that it's due to tables b and c not having an index, though I'm guessing the dev didn't think it was necessary since those tables only describe relations between a and d: b says that the data in a belongs to a certain user, and c says that the user belongs to a group in d. As you can see, there isn't even an explicit JOIN or other heavy query features used, but this query, which returns only around 100 rows, takes 2 minutes to execute. Anyway, my question is simply this post's title: will a MySQL query run slower if one of the tables involved has no index defined?
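    A quick way to test the theory is to look at the query plan and then try indexes on the join columns. Below is a hedged sketch using Python with the mysql-connector-python package; the connection details, parameter values and index names are placeholders, not from the original post:

    ```python
    # Hedged illustration: EXPLAIN shows whether b, c or d are being scanned in
    # full (type=ALL); if so, indexes on the join/filter columns usually help.
    import mysql.connector

    conn = mysql.connector.connect(host="localhost", user="user",
                                   password="secret", database="mydb")
    cur = conn.cursor()

    cur.execute(
        "EXPLAIN SELECT a.col_1, a.col_2, a.col_3, a.col_4, a.col_5, a.col_6, a.col_7 "
        "FROM a, b, c, d "
        "WHERE a.id = b.id AND b.c_id = c.id AND c.id = d.c_id "
        "AND a.col_8 = %s AND d.g_id = %s AND c.private = '1'",
        ("form_value_1", "form_value_2"))
    for row in cur.fetchall():
        print(row)  # look for type=ALL (full table scan) on b, c or d

    # Candidate indexes on the join/filter columns (index names are made up):
    for ddl in ("CREATE INDEX idx_b_id_cid ON b (id, c_id)",
                "CREATE INDEX idx_d_cid_gid ON d (c_id, g_id)",
                "CREATE INDEX idx_a_col8 ON a (col_8)"):
        cur.execute(ddl)

    conn.close()
    ```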

    Read the article

  • LLBL Gen Predicate Filter

    - by Neil
    I am new to LLBLGen Pro and am checking for duplicates. I have the following SQL: select a.TopicId,atc.TopicCategoryId,a.Headline from article a inner join ArticleTopicCategory atc on atc.ArticleId = a.Id where a.TopicId = 'C0064FAE-093B-466E-8745-230534867D2F' and a.Headline = 'Test' and atc.TopicCategoryId in ('004D64F7-474C-48F9-9887-17B1E7532A84') Whenever I step through my function, it always returns 0: LLBLGen Code: public bool CheckDuplicateArticle(Guid topicId, List<Guid> categories, string headline) { ArticleCollection articles = new ArticleCollection(); PredicateExpression filter = new PredicateExpression(); RelationCollection relation = new RelationCollection(); relation.Add(ArticleEntity.Relations.ArticleTopicCategoryEntityUsingArticleId); filter.AddWithAnd(ArticleFields.TopicId == topicId); filter.AddWithAnd(ArticleTopicCategoryFields.Id == categories); filter.AddWithAnd(ArticleFields.Headline == headline); articles.GetMulti(filter, 0, null, relation); return articles.Count > 0; } Any help would be appreciated!

    Read the article

  • Consumer Electronics Show (CES) Summit: Best Practices in Transforming Channels and Partnerships

    - by charles.knapp
    Expanding consumer demand is driving the entire high technology industry, accompanied by product lifecycles as short as a few months, continued pricing and promotion pressures, and increased globalization. Unifying global channel management, operations, and execution flow will increase efficiency and growth. IT can help, but one must think beyond generic ERP and CRM. Please join Oracle and IBM at the Bellagio Hotel in Las Vegas, Wednesday January 5, 1-7 pm. Learn from IBM, VTech, Plantronics, Cisco, Symantec and Oracle High Tech Product Strategy how to improve: channel sales, marketing, and operations management (enhance NPI, sales, forecasts, training, promotion planning, execution and settlement); winning the deal (determining the right price for the right deal for the "perfect quote", capturing the order, and order management); and collaborative and rapid supply chain planning (improve agility, inventory turns, and profits). Register now for this FREE event. We hope you'll join us for our Oracle High Technology CES Summit and networking reception with your peers.

    Read the article

  • How can I create and animate 2D skeletons for HTML5 JavaScript games? [on hold]

    - by user414209
    I'm trying to make a 2D fighting game in HTML5 (somewhat like Street Fighter). So basically there are two players, one AI and one human. The players need to have animations for the body movements, and there also needs to be some collision detection system. I'm using CreateJS for coding, but to design models/objects/animations I need some other software. So I'm looking for software that can: easily make custom animations of 2D objects (the object's structure, skeleton etc., stays the same and only needs to be defined once); export the animations and models in a JS-readable format (preferably JSON); and make collision detection easy once the exported format is loaded into a game engine. For the first point, I'm looking for some generic skeleton-based animation; sprite-sheet-based animations would make collision detection difficult.

    Read the article

  • How to ensure apache2 reads htaccess for custom expiry?

    - by tzot
    I have a site running Apache 2.2.22. I have enabled the mod_expires and mod_headers modules, seemingly correctly: $ apachectl -t -D DUMP_MODULES … expires_module (shared) headers_module (shared) … Settings include: ExpiresActive On ExpiresDefault "access plus 10 minutes" ExpiresByType application/xml "access plus 1 minute" Checking the headers of responses, I see that max-age is set correctly both for the generic case and for xml files (which are auto-generated, but mostly static). I would like to have a different expiry for xml files in a directory (e.g. /data), so that http://site/data/sample.xml expires 24 hours later. I put the following in data/.htaccess: ExpiresByType application/xml "access plus 24 hours" Header set Cache-control "max-age=86400, public" but it seems that apache2 ignores it. How can I ensure apache2 uses the .htaccess directives? I can provide further information if requested.

    Read the article

  • Failed to retrieve share list from server

    - by Eric Sean Tite Webber
    UBUNTU 11.10, NAUTILUS 3.2.1. We ARE able to see Windows PCs on our network from Ubuntu's NAUTILUS, yet we are NOT able to access their shares from NAUTILUS, even though they work fine with each other, i.e. each Windows PC IS able to access the other Windows PC's shares just fine. Please infer from this information the answers to any questions about our situation you may have. Note this is a default/pristine configuration, i.e. no changes have been made whatsoever. Our version of Ubuntu is 11.10, NAUTILUS is 3.2.1. Linux tite-HP-630-Notebook-PC 3.0.0-15-generic #26-Ubuntu SMP Fri Jan 20 17:23:00 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux A screenshot is available upon request. Thanks in advance for your assistance.

    Read the article

  • Can "go" replace C++? [closed]

    - by iammilind
    I was reading the wiki article about the Go programming language, where Bruce Eckel states: The complexity of C++ (even more complexity has been added in the new C++), and the resulting impact on productivity, is no longer justified. All the hoops that the C++ programmer had to jump through in order to use a C-compatible language make no sense anymore --they're just a waste of time and effort. Now, Go makes much more sense for the class of problems that C++ was originally intended to solve. Can Go really replace C++(11) for new development in the future? How about generic programming? I don't know Go, but the amount of time invested (wasted?) in learning C++ seems to have been in vain.

    Read the article

  • How to update CoffeeScript?

    - by Tetsu
    I got the following error when I tried to watch my CoffeeScript files with coffee -o js -cw coffee. /usr/local/lib/node_modules/coffee-script/lib/coffee-script/command.js:321 throw e; ^ Error: watch Unknown system errno 28 at errnoException (fs.js:636:11) at FSWatcher.start (fs.js:663:11) at Object.watch (fs.js:691:11) at /usr/local/lib/node_modules/coffee-script/lib/coffee-script/command.js:287:27 at Object.oncomplete (/usr/local/lib/node_modules/coffee-script/lib/coffee-script/command.js:100:11) I have no idea what is going on with this error. Then I checked the versions: coffee -v is 1.6.1 and node -v is v0.6.12. According to the official site ( http://coffeescript.org/ ) the latest version is 1.6.3, so I wanted to update coffee with npm update -g coffee-script, but this also fails. npm WARN [email protected] package.json: bugs['name'] should probably be bugs['url'] npm http GET https://registry.npmjs.org/coffee-script npm http 304 https://registry.npmjs.org/coffee-script How can I update CoffeeScript? Edit 2013/10/11 In my coffee scripts directory there is only one file, box_wrapper.coffee. $ -> $("body").children().wrap -> "<div id='#{$(@).attr "id"}_box' class='wrapper'/>" Edit 2013/10/16 I tried to re-install coffee, so I did this: $ sudo npm -g rm coffee npm WARN Not installed in /usr/local/lib/node_modules coffee $ coffee -v CoffeeScript version 1.6.1 I can't remove coffee. And I also tried this: $ sudo apt-get remove npm $ npm -v -bash: /usr/bin/npm: No such file or directory $ sudo apt-get install npm $ npm -v 1.1.4 $ sudo npm -g install coffee # I omit a lot of `GET` parts. npm http 304 https://registry.npmjs.org/mkdirp/0.3.4 npm ERR! error installing [email protected] npm http 304 https://registry.npmjs.org/assertion-error/1.0.0 npm http 304 https://registry.npmjs.org/growl npm http 304 https://registry.npmjs.org/jade/0.26.3 npm http 304 https://registry.npmjs.org/diff/1.0.2 npm http 304 https://registry.npmjs.org/mkdirp/0.3.5 npm http 304 https://registry.npmjs.org/glob/3.2.1 npm http 304 https://registry.npmjs.org/ms/0.3.0 npm ERR! error rolling back [email protected] Error: UNKNOWN, unknown error '/usr/local/lib/node_modules/coffee/node_modules/express' npm ERR! error installing [email protected] npm ERR! EEXIST, file already exists '/usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules' npm ERR! File exists: /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules npm ERR! Move it away, and try again. npm ERR! npm ERR! System Linux 3.2.0-54-generic-pae npm ERR! command "node" "/usr/bin/npm" "-g" "install" "coffee" npm ERR! cwd /home/ironsand npm ERR! node -v v0.6.12 npm ERR! npm -v 1.1.4 npm ERR! path /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules npm ERR! fstream_path /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules/___debug.npm npm ERR! fstream_type Directory npm ERR! fstream_class DirWriter npm ERR! code EEXIST npm ERR! message EEXIST, file already exists '/usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules' npm ERR! errno {} npm ERR! fstream_stack /usr/lib/nodejs/fstream/lib/writer.js:161:23 npm ERR! fstream_stack Object.oncomplete (/usr/lib/nodejs/mkdirp.js:34:53) npm ERR! EEXIST, file already exists '/usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules' npm ERR! File exists: /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules npm ERR! Move it away, and try again. npm ERR! npm ERR! System Linux 3.2.0-54-generic-pae npm ERR! command "node" "/usr/bin/npm" "-g" "install" "coffee" npm ERR! cwd /home/ironsand npm ERR! node -v v0.6.12 npm ERR! npm -v 1.1.4 npm ERR! path /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules npm ERR! fstream_path /usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules/___debug.npm npm ERR! fstream_type Directory npm ERR! fstream_class DirWriter npm ERR! code EEXIST npm ERR! message EEXIST, file already exists '/usr/local/lib/node_modules/coffee/node_modules/mocha/node_modules' npm ERR! errno {} npm ERR! fstream_stack /usr/lib/nodejs/fstream/lib/writer.js:161:23 npm ERR! fstream_stack Object.oncomplete (/usr/lib/nodejs/mkdirp.js:34:53) npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /home/ironsand/npm-debug.log npm not ok And npm-debug.log is a blank file.

    Read the article
